[ { "msg_contents": "Hello again.\n\nI have to track user subscriptions to certain mailinglists, and I also\nneed to track credits users have on those mailinglists. On one side I\nhave procedures that add credits, on other side I have procedures that\nsubtract available credits. Add/subtract is pretty intensive, around\n30-50 adds per minute (usualy 10 or 100 credits), and around 200-500\nsubtracts per minute (usualy by one or two credits).\n\nI have created table user_subscriptions to track user subscriptions to\ncertain mailing list. I have derived subscription_id as primary key. I\nhave two other tables, user_subscription_credits_given, and\n_credits_taken, wich track credits for subscription added or subtracted\nto or from certain subscription. I created those two tables so I could\neliminate a lot of UPDATES on user_subscriptions table (if I were to\nhave a column 'credits' in that table). user_subscriptions table is\nprojected to have around 100.000 rows, and _credits_given/_credits_taken\ntable is projected to have around 10.000.000 rows. \n\nNow, I have filled the tables with test data, and the query results is\nkind of poor. It takes almost 50 seconds to get the data for the\nparticular subscription. Now, is there a way to speed this up, or I need\ndifferent approach?\n\nHere is the DDL/DML:\n\nCREATE TABLE user_subscriptions\n(\n subscription_id int4 NOT NULL DEFAULT\nnextval('user_subscriptions_id_seq'::regclass),\n user_id int4 NOT NULL,\n mailinglist_id int4 NOT NULL,\n valid_from timestamptz NOT NULL,\n valid_to timestamptz,\n CONSTRAINT user_subscriptions_pkey PRIMARY KEY (subscription_id)\n);\n\nCREATE TABLE user_subscription_credits_given\n(\n subscription_id int4 NOT NULL,\n credits int4 NOT NULL,\n CONSTRAINT user_subscription_credits_given_fk__subscription_id FOREIGN\nKEY (subscription_id)\n REFERENCES user_subscriptions (subscription_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n);\n\n\nCREATE INDEX fki_user_subscriptions_fk__mailinglist_id\n ON user_subscriptions\n USING btree\n (mailinglist_id);\n\nCREATE INDEX fki_user_subscriptions_fk__users_id\n ON user_subscriptions\n USING btree\n (user_id);\n\nCREATE INDEX fki_user_subscription_credits_given_fk__subscription_id\n ON user_subscription_credits_given\n USING btree\n (subscription_id);\n\nCREATE INDEX fki_user_subscription_credits_taken_fk__subscription_id\n ON user_subscription_credits_taken\n USING btree\n (subscription_id);\n\n\nHere is the query which gets information on particular user, shows\nsubscriptions to mailinglists and available credits on those\nmailinglists:\n\nSELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,\nu.valid_to, sum(credits.credits_given - credits.credits_taken)::integer\nAS credits\nFROM user_subscriptions u\nLEFT JOIN \n\t(SELECT user_subscription_credits_given.subscription_id,\nuser_subscription_credits_given.credits AS credits_given, 0 AS\ncredits_taken\n FROM user_subscription_credits_given\n\tUNION ALL \n SELECT user_subscription_credits_taken.subscription_id, 0 AS\ncredits_given, user_subscription_credits_taken.credits AS credits_taken\n FROM user_subscription_credits_taken) credits \n\tON u.subscription_id = credits.subscription_id\nwhere\n\tu.user_id = 1\nGROUP BY u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,\nu.valid_to\n\nAnd here is the 'explain analyze' of the above query:\n\n\nQUERY 
PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=200079055.24..200079055.28 rows=2 width=36)\n(actual time=56527.153..56527.163 rows=2 loops=1)\n -> Nested Loop Left Join (cost=200033690.72..200078931.34 rows=8260\nwidth=36) (actual time=0.432..54705.844 rows=275366 loops=1)\n Join Filter: (\"outer\".subscription_id =\n\"inner\".subscription_id)\n -> Index Scan using fki_user_subscriptions_fk__users_id on\nuser_subscriptions u (cost=0.00..3.03 rows=2 width=28) (actual\ntime=0.030..0.055 rows=2 loops=1)\n Index Cond: (user_id = 1)\n -> Materialize (cost=200033690.72..200045984.63 rows=825991\nwidth=12) (actual time=0.043..22404.107 rows=826032 loops=2)\n -> Subquery Scan credits\n(cost=100000000.00..200028830.73 rows=825991 width=12) (actual\ntime=0.050..31500.589 rows=826032 loops=1)\n -> Append (cost=100000000.00..200020570.82\nrows=825991 width=8) (actual time=0.041..22571.540 rows=826032 loops=1)\n -> Subquery Scan \"*SELECT* 1\"\n(cost=100000000.00..100001946.96 rows=78148 width=8) (actual\ntime=0.031..1226.640 rows=78148 loops=1)\n -> Seq Scan on\nuser_subscription_credits_given (cost=100000000.00..100001165.48\nrows=78148 width=8) (actual time=0.022..404.253 rows=78148 loops=1)\n -> Subquery Scan \"*SELECT* 2\"\n(cost=100000000.00..100018623.86 rows=747843 width=8) (actual\ntime=0.032..12641.705 rows=747884 loops=1)\n -> Seq Scan on\nuser_subscription_credits_taken (cost=100000000.00..100011145.43\nrows=747843 width=8) (actual time=0.023..4386.769 rows=747884 loops=1)\n Total runtime: 56536.774 ms\n(13 rows)\n\n\nThank you all in advance,\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n", "msg_date": "Tue, 30 May 2006 16:18:01 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Speedup hint needed, if available? :)" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> Here is the query which gets information on particular user, shows\n> subscriptions to mailinglists and available credits on those\n> mailinglists:\n\n> SELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,\n> u.valid_to, sum(credits.credits_given - credits.credits_taken)::integer\n> AS credits\n> FROM user_subscriptions u\n> LEFT JOIN \n> \t(SELECT user_subscription_credits_given.subscription_id,\n> user_subscription_credits_given.credits AS credits_given, 0 AS\n> credits_taken\n> FROM user_subscription_credits_given\n> \tUNION ALL \n> SELECT user_subscription_credits_taken.subscription_id, 0 AS\n> credits_given, user_subscription_credits_taken.credits AS credits_taken\n> FROM user_subscription_credits_taken) credits \n> \tON u.subscription_id = credits.subscription_id\n> where\n> \tu.user_id = 1\n> GROUP BY u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,\n> u.valid_to\n\nDo you have realistic test data? The EXPLAIN shows that this is pulling\n275366 of the 826032 rows in the two tables, which seems like rather a\nlot for a single user. 
If it's reasonable that the query needs to fetch\none-third of the data, then you should resign yourself to it taking\nawhile :-(\n\nIf the expected number of matching rows were much smaller, it would\nmake sense to use indexscans over the two big tables, but unfortunately\nexisting PG releases don't know how to generate an indexscan join\nwith a UNION ALL in between :-(. FWIW, 8.2 will be able to do it.\nIn current releases the only thing I can suggest is to merge\nuser_subscription_credits_given and user_subscription_credits_taken\ninto one table so you don't need the UNION ALL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2006 11:05:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedup hint needed, if available? :) " }, { "msg_contents": "On Tue, 2006-05-30 at 11:05 -0400, Tom Lane wrote:\n\n> Do you have realistic test data? The EXPLAIN shows that this is pulling\n> 275366 of the 826032 rows in the two tables, which seems like rather a\n> lot for a single user. If it's reasonable that the query needs to fetch\n> one-third of the data, then you should resign yourself to it taking\n> awhile :-(\n\nI'd say so, yes. The user_subscription table now has only six rows, but\nthe number of actions (giving/taking credits) for a user could easily be\nas high as 50.000. \n\n> If the expected number of matching rows were much smaller, it would\n> make sense to use indexscans over the two big tables, but unfortunately\n> existing PG releases don't know how to generate an indexscan join\n> with a UNION ALL in between :-(. FWIW, 8.2 will be able to do it.\n> In current releases the only thing I can suggest is to merge\n> user_subscription_credits_given and user_subscription_credits_taken\n> into one table so you don't need the UNION ALL.\n\nSee, that's an idea! :) Thnx, I'll try that.\n\nIs it inapropriate to ask about rough estimate on availableness of\n8.2? :)\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n", "msg_date": "Tue, 30 May 2006 17:16:40 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speedup hint needed, if available? :)" }, { "msg_contents": "Mario Splivalo wrote:\n> Hello again.\n> \n> I have to track user subscriptions to certain mailinglists, and I also\n> need to track credits users have on those mailinglists. On one side I\n> have procedures that add credits, on other side I have procedures that\n> subtract available credits. Add/subtract is pretty intensive, around\n> 30-50 adds per minute (usualy 10 or 100 credits), and around 200-500\n> subtracts per minute (usualy by one or two credits).\n> \n> I have created table user_subscriptions to track user subscriptions to\n> certain mailing list. I have derived subscription_id as primary key. I\n> have two other tables, user_subscription_credits_given, and\n> _credits_taken, wich track credits for subscription added or subtracted\n> to or from certain subscription. I created those two tables so I could\n> eliminate a lot of UPDATES on user_subscriptions table (if I were to\n> have a column 'credits' in that table). \n\nIt sounds to me like you have decided beforehand that the obvious\nsolution (update a credit field in the user_subscriptions table) is not \ngoing to perform well. Have you tried it? 
How does it perform?\n\nIf it does indeed give you performance problems, you could instead run \nsome kind of batch job to update the credits field (and delete the \n/given/taken records).\n\nFinally: You could refactor the query to get rid of the union:\n\nSELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,\nu.valid_to, (\n SELECT sum(credits)\n FROM credits_given\n WHERE subscription_id = u.subscription_id\n) - (\n SELECT sum(credits)\n FROM credits_taken\n WHERE subscription_id = u.subscription_id)\n) AS credits\nFROM user_subscriptions u\nWHERE u.user_id = 1\n\n(Not tested).\n\nYou will probably need a COALESCE around each of the subqueries to avoid\nproblems with nulls. <rant>The sum of an empty set of numbers is 0. The\nconjunction of an empty set of booleans is true. The SQL standard \nsomehow manages to get this wrong</rant>\n\n/Nis\n\n\n", "msg_date": "Wed, 31 May 2006 11:28:26 +0200", "msg_from": "Nis Jorgensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedup hint needed, if available? :)" } ]
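Tom's suggestion of folding the two credit tables into one can be sketched roughly like this, storing taken credits as negative amounts so the UNION ALL disappears (table, index and column names here are illustrative, not the poster's actual schema):

CREATE TABLE user_subscription_credits
(
  subscription_id int4 NOT NULL
    REFERENCES user_subscriptions (subscription_id),
  credits int4 NOT NULL   -- positive = credits given, negative = credits taken
);

CREATE INDEX idx_user_subscription_credits__subscription_id
  ON user_subscription_credits (subscription_id);

SELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from, u.valid_to,
       COALESCE(sum(c.credits), 0)::integer AS credits
FROM user_subscriptions u
LEFT JOIN user_subscription_credits c ON c.subscription_id = u.subscription_id
WHERE u.user_id = 1
GROUP BY u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from, u.valid_to;

With a single table on the inner side of the LEFT JOIN, the planner can drive the join through the index on subscription_id instead of materializing both credit tables; the COALESCE covers subscriptions that have no credit rows yet, the same point Nis makes about the correlated-subquery variant.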
[ { "msg_contents": "Good morning,\n\nI have identical postgres installations running on identical machines. Dual\nCore AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and\n120GB worth of disk space on two drives.\n\nRecently, I have noticed that my nightly backups take longer on one machine\nthan on the other. I back up five (5) databases totaling 8.6GB in size. On\nProd001 the backups take app. 7 minutes, on Prod002 the backups take app. 26\nminutes! Quite a discrepancy. I checked myself than checked with our\nEngineering staff and have been assured that the machines are identical\nhardware wise, CPU, disk, etc. \n\nQuestion; has anyone run into a similar issue? Here is the command I use\nfor the nightly backup on both machines:\n\npg_dump -F c -f $DB.backup.$DATE $DB\n\nKind of scratching my head on this one....\n\nThank you,\nTim McElroy\n\n\n\n\n\n\npg_dump issue\n\n\nGood morning,\n\nI have identical postgres installations running on identical machines.  Dual Core AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and 120GB worth of disk space on two drives.\nRecently, I have noticed that my nightly backups take longer on one machine than on the other.  I back up five (5) databases totaling 8.6GB in size.  On Prod001 the backups take app. 7 minutes, on Prod002 the backups take app. 26 minutes!  Quite a discrepancy.  I checked myself than checked with our Engineering staff and have been assured that the machines are identical hardware wise, CPU, disk, etc.  \nQuestion; has anyone run into a similar issue?  Here is the command I use for the nightly backup on both machines:\n\npg_dump -F c -f $DB.backup.$DATE $DB\n\nKind of scratching my head on this one....\n\nThank you,\nTim McElroy", "msg_date": "Tue, 30 May 2006 10:31:08 -0400", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump issue" }, { "msg_contents": "\"mcelroy, tim\" <[email protected]> writes:\n> I have identical postgres installations running on identical machines. Dual\n> Core AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and\n> 120GB worth of disk space on two drives.\n\n> Recently, I have noticed that my nightly backups take longer on one machine\n> than on the other. I back up five (5) databases totaling 8.6GB in size. On\n> Prod001 the backups take app. 7 minutes, on Prod002 the backups take app. 26\n> minutes! Quite a discrepancy.\n\nAre the resulting backup files identical? Chasing down the reasons for\nany diffs might yield some enlightenment.\n\nOne idea that comes to mind is that Prod002 is having performance\nproblems due to table bloat (maybe a missing vacuum cron job or\nsome such). A quick \"du\" on the two $PGDATA directories to check\nfor significant size differences would reveal this if so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2006 11:15:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump issue " }, { "msg_contents": "\ntry to dump-restore your 'slow' database,\nthis might help if your db or filesystem gets too fragmented.\n\nOn Tue, 30 May 2006 10:31:08 -0400\n\"mcelroy, tim\" <[email protected]> wrote:\n\n> Good morning,\n> \n> I have identical postgres installations running on identical machines. Dual\n> Core AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and\n> 120GB worth of disk space on two drives.\n> \n> Recently, I have noticed that my nightly backups take longer on one machine\n> than on the other. 
I back up five (5) databases totaling 8.6GB in size. On\n> Prod001 the backups take app. 7 minutes, on Prod002 the backups take app. 26\n> minutes! Quite a discrepancy. I checked myself than checked with our\n> Engineering staff and have been assured that the machines are identical\n> hardware wise, CPU, disk, etc. \n> \n> Question; has anyone run into a similar issue? Here is the command I use\n> for the nightly backup on both machines:\n> \n> pg_dump -F c -f $DB.backup.$DATE $DB\n> \n> Kind of scratching my head on this one....\n> \n> Thank you,\n> Tim McElroy\n> \n> \n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n", "msg_date": "Fri, 2 Jun 2006 16:17:35 +0400", "msg_from": "Evgeny Gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump issue" } ]
[ { "msg_contents": "Thanks Tom. I have scheduled vacuums as follows and all have run without\nerror.\n\nMon - Thu after-hours: vacuumdb -z -e -a -v On Fridays I add the -f\noption vacuumdb -z -e -a -v -f\n\nThe du . -h in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so\nthey're pretty much the same, one would think the smaller one should be\nfaster. Yes, the backup files are identical in size. BTW - this is\npostgres 8.0.1. Stuck at this due to \"that is the latest postgresql version\ncertified by our vendor's application\".\n\nI'm hoping the Engineering staff can find something system related as I\ndoubted and still doubt that it's a postgres issue.\n\nTim\n\n\n -----Original Message-----\nFrom: \tTom Lane [mailto:[email protected]] \nSent:\tTuesday, May 30, 2006 11:16 AM\nTo:\tmcelroy, tim\nCc:\[email protected]\nSubject:\tRe: [PERFORM] pg_dump issue \n\n\"mcelroy, tim\" <[email protected]> writes:\n> I have identical postgres installations running on identical machines.\nDual\n> Core AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and\n> 120GB worth of disk space on two drives.\n\n> Recently, I have noticed that my nightly backups take longer on one\nmachine\n> than on the other. I back up five (5) databases totaling 8.6GB in size.\nOn\n> Prod001 the backups take app. 7 minutes, on Prod002 the backups take app.\n26\n> minutes! Quite a discrepancy.\n\nAre the resulting backup files identical? Chasing down the reasons for\nany diffs might yield some enlightenment.\n\nOne idea that comes to mind is that Prod002 is having performance\nproblems due to table bloat (maybe a missing vacuum cron job or\nsome such). A quick \"du\" on the two $PGDATA directories to check\nfor significant size differences would reveal this if so.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] pg_dump issue \n\n\nThanks Tom.  I have scheduled vacuums as follows and all have run without error.\n\nMon - Thu after-hours:  vacuumdb -z -e -a -v   On Fridays I add the -f option  vacuumdb -z -e -a -v -f\n\nThe du . -h  in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so they're pretty much the same, one would think the smaller one should be faster.  Yes, the backup files are identical in size.  BTW - this is postgres 8.0.1.  Stuck at this due to \"that is the latest postgresql version certified by our vendor's application\".\nI'm hoping the Engineering staff can find something system related as I doubted and still doubt that it's a postgres issue.\nTim\n\n\n -----Original Message-----\nFrom:   Tom Lane [mailto:[email protected]] \nSent:   Tuesday, May 30, 2006 11:16 AM\nTo:     mcelroy, tim\nCc:     [email protected]\nSubject:        Re: [PERFORM] pg_dump issue \n\n\"mcelroy, tim\" <[email protected]> writes:\n> I have identical postgres installations running on identical machines.  Dual\n> Core AMD Opteron(tm) Processor 870 , 16GB RAM, Red Hat Linux 3.2.3-20 and\n> 120GB worth of disk space on two drives.\n\n> Recently, I have noticed that my nightly backups take longer on one machine\n> than on the other.  I back up five (5) databases totaling 8.6GB in size.  On\n> Prod001 the backups take app. 7 minutes, on Prod002 the backups take app. 26\n> minutes!  Quite a discrepancy.\n\nAre the resulting backup files identical?  Chasing down the reasons for\nany diffs might yield some enlightenment.\n\nOne idea that comes to mind is that Prod002 is having performance\nproblems due to table bloat (maybe a missing vacuum cron job or\nsome such).  
A quick \"du\" on the two $PGDATA directories to check\nfor significant size differences would reveal this if so.\n\n                        regards, tom lane", "msg_date": "Tue, 30 May 2006 11:33:54 -0400", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump issue " }, { "msg_contents": "\"mcelroy, tim\" <[email protected]> writes:\n> The du . -h in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so\n> they're pretty much the same, one would think the smaller one should be\n> faster. Yes, the backup files are identical in size.\n\nHmph. You should carry the \"du\" analysis down to the subdirectory\nlevel, eg make sure that it's not a case of lots of pg_xlog bloat\nbalancing out bloat in a different area on the other system. But I\nsuspect you won't find anything.\n\n> I'm hoping the Engineering staff can find something system related as I\n> doubted and still doubt that it's a postgres issue.\n\nI tend to agree. You might try watching \"vmstat 1\" output while taking\nthe dumps, so you could at least get a clue whether the problem is CPU\nor I/O related ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2006 12:20:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump issue " } ]
[ { "msg_contents": "I did carry it down to the subdirectory level but only included the total\nfor brevity. I'll paste the complete readout at the end of the email. I'll\ntry the \"vmstat 1\" as you suggest next time the backups run. If the Eng\nstaff finds anything I'll post the results and maybe save someone else some\ngrief if they have the same issue. Thanks again for your input Tom.\n\nTim\n\nPROD001\tPROD002\n220K ./global[PARA]4.0K ./pg_xlog/archive_status[PARA]529M\n./pg_xlog[PARA]36K ./pg_clog[PARA]256K ./pg_subtrans[PARA]4.0K\n./base/1/pgsql_tmp[PARA]4.8M ./base/1[PARA]4.8M ./base/17229[PARA]4.0K\n./base/62878500/pgsql_tmp[PARA]4.8M ./base/62878500[PARA]5.5M\n./base/1152695[PARA]4.0K ./base/67708567/pgsql_tmp[PARA]1.6G\n./base/67708567[PARA]12K ./base/1157024/pgsql_tmp[PARA]6.3G\n./base/1157024[PARA]4.0K ./base/1159370/pgsql_tmp[PARA]543M\n./base/1159370[PARA]4.0K ./base/1157190/pgsql_tmp[PARA]164M\n./base/1157190[PARA]4.0K ./base/1621391/pgsql_tmp[PARA]81M\n./base/1621391[PARA]8.6G ./base[PARA]4.0K ./pg_tblspc[PARA]604K\n./pg_log[PARA]9.1G .\t220K ./global[PARA]4.0K\n./pg_xlog/archive_status[PARA]529M ./pg_xlog[PARA]136K\n./pg_clog[PARA]208K ./pg_subtrans[PARA]4.0K\n./base/1/pgsql_tmp[PARA]4.9M ./base/1[PARA]4.8M ./base/17229[PARA]5.3M\n./base/1274937[PARA]4.0K ./base/64257611/pgsql_tmp[PARA]1.6G\n./base/64257611[PARA]4.0K ./base/71683200/pgsql_tmp[PARA]6.1G\n./base/71683200[PARA]4.0K ./base/1281929/pgsql_tmp[PARA]478M\n./base/1281929[PARA]4.0K ./base/58579022/pgsql_tmp[PARA]154M\n./base/58579022[PARA]81M ./base/1773916[PARA]4.0K\n./base/55667447/pgsql_tmp[PARA]4.8M ./base/55667447[PARA]8.3G\n./base[PARA]4.0K ./pg_tblspc[PARA]588K ./pg_log[PARA]8.8G .\n\n\n -----Original Message-----\nFrom: \tTom Lane [mailto:[email protected]] \nSent:\tTuesday, May 30, 2006 12:20 PM\nTo:\tmcelroy, tim\nCc:\[email protected]\nSubject:\tRe: [PERFORM] pg_dump issue \n\n\"mcelroy, tim\" <[email protected]> writes:\n> The du . -h in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so\n> they're pretty much the same, one would think the smaller one should be\n> faster. Yes, the backup files are identical in size.\n\nHmph. You should carry the \"du\" analysis down to the subdirectory\nlevel, eg make sure that it's not a case of lots of pg_xlog bloat\nbalancing out bloat in a different area on the other system. But I\nsuspect you won't find anything.\n\n> I'm hoping the Engineering staff can find something system related as I\n> doubted and still doubt that it's a postgres issue.\n\nI tend to agree. You might try watching \"vmstat 1\" output while taking\nthe dumps, so you could at least get a clue whether the problem is CPU\nor I/O related ...\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] pg_dump issue \n\n\nI did carry it down to the subdirectory level but only included the total for brevity.  I'll paste the complete readout at the end of the email.  I'll try the \"vmstat 1\" as you suggest next time the backups run.  If the Eng staff finds anything I'll post the results and maybe save someone else some grief if they have the same issue.  
Thanks again for your input Tom.\nTim\n\nPROD001 PROD002\n220K    ./global[PARA]4.0K    ./pg_xlog/archive_status[PARA]529M    ./pg_xlog[PARA]36K     ./pg_clog[PARA]256K    ./pg_subtrans[PARA]4.0K    ./base/1/pgsql_tmp[PARA]4.8M    ./base/1[PARA]4.8M    ./base/17229[PARA]4.0K    ./base/62878500/pgsql_tmp[PARA]4.8M    ./base/62878500[PARA]5.5M    ./base/1152695[PARA]4.0K    ./base/67708567/pgsql_tmp[PARA]1.6G    ./base/67708567[PARA]12K     ./base/1157024/pgsql_tmp[PARA]6.3G    ./base/1157024[PARA]4.0K    ./base/1159370/pgsql_tmp[PARA]543M    ./base/1159370[PARA]4.0K    ./base/1157190/pgsql_tmp[PARA]164M    ./base/1157190[PARA]4.0K    ./base/1621391/pgsql_tmp[PARA]81M     ./base/1621391[PARA]8.6G    ./base[PARA]4.0K    ./pg_tblspc[PARA]604K    ./pg_log[PARA]9.1G    .   220K    ./global[PARA]4.0K    ./pg_xlog/archive_status[PARA]529M    ./pg_xlog[PARA]136K    ./pg_clog[PARA]208K    ./pg_subtrans[PARA]4.0K    ./base/1/pgsql_tmp[PARA]4.9M    ./base/1[PARA]4.8M    ./base/17229[PARA]5.3M    ./base/1274937[PARA]4.0K    ./base/64257611/pgsql_tmp[PARA]1.6G    ./base/64257611[PARA]4.0K    ./base/71683200/pgsql_tmp[PARA]6.1G    ./base/71683200[PARA]4.0K    ./base/1281929/pgsql_tmp[PARA]478M    ./base/1281929[PARA]4.0K    ./base/58579022/pgsql_tmp[PARA]154M    ./base/58579022[PARA]81M     ./base/1773916[PARA]4.0K    ./base/55667447/pgsql_tmp[PARA]4.8M    ./base/55667447[PARA]8.3G    ./base[PARA]4.0K    ./pg_tblspc[PARA]588K    ./pg_log[PARA]8.8G    .\n\n -----Original Message-----\nFrom:   Tom Lane [mailto:[email protected]] \nSent:   Tuesday, May 30, 2006 12:20 PM\nTo:     mcelroy, tim\nCc:     [email protected]\nSubject:        Re: [PERFORM] pg_dump issue \n\n\"mcelroy, tim\" <[email protected]> writes:\n> The du . -h  in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so\n> they're pretty much the same, one would think the smaller one should be\n> faster.  Yes, the backup files are identical in size.\n\nHmph.  You should carry the \"du\" analysis down to the subdirectory\nlevel, eg make sure that it's not a case of lots of pg_xlog bloat\nbalancing out bloat in a different area on the other system.  But I\nsuspect you won't find anything.\n\n> I'm hoping the Engineering staff can find something system related as I\n> doubted and still doubt that it's a postgres issue.\n\nI tend to agree.  You might try watching \"vmstat 1\" output while taking\nthe dumps, so you could at least get a clue whether the problem is CPU\nor I/O related ...\n\n                        regards, tom lane", "msg_date": "Tue, 30 May 2006 13:12:33 -0400", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump issue " } ]
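A rough way to follow up on Tom's two suggestions, assuming shell access on both boxes (paths and variables as in the original backup script):

# in one terminal, while the dump runs, watch whether the box is CPU- or I/O-bound
vmstat 1

# in another terminal, time the dump itself
time pg_dump -F c -f $DB.backup.$DATE $DB

# afterwards, compare per-directory space usage between the two machines
du -sk $PGDATA/base/* $PGDATA/pg_xlog | sort -n

If the slow machine spends most of the dump waiting on I/O while user/system CPU stays low, the discrepancy is likely in the disk subsystem or filesystem layout rather than in postgres itself.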
[ { "msg_contents": "Hi,\n\nIs there a command to Insert a record If It does not exists and a update \nif It exists?\n\nI do not want to do a select before a insert or update.\n\nI mean the postgres should test if a record exist before insert and if \nIt exist then the postgres must do an update instead an insert.\n\nThanks,\n\nWMiro.\n\n\n\n\n\n\n\n\nHi,\n\nIs there a command to Insert a record If It does not exists and a\nupdate if It exists?\n\nI do not want to do a select before a insert or update. \n\nI mean the postgres should test if a record exist before insert and if\nIt exist then the postgres must do an update instead an insert.\n\nThanks,\n\nWMiro.", "msg_date": "Tue, 30 May 2006 15:29:49 -0300", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "On 5/30/06, Waldomiro <[email protected]> wrote:\n> Is there a command to Insert a record If It does not exists and a update if\n> It exists?\n\nSure, it's called MERGE. See http://en.wikipedia.org/wiki/Merge_%28SQL%29\n\n> I mean the postgres should test if a record exist before insert and if It\n> exist then the postgres must do an update instead an insert.\n\nPostgreSQL does not support MERGE at the moment, sorry.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Tue, 30 May 2006 17:05:17 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "> PostgreSQL does not support MERGE at the moment, sorry.\n\n\tIssue an UPDATE, and watch the rowcount ; if the rowcount is 0, issue an \nINSERT.\n\tBe prepared to retry if another transaction has inserted the row \nmeanwhile, though.\n\n\tMERGE would be really useful.\n\n", "msg_date": "Wed, 31 May 2006 00:35:15 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "PFC wrote:\n> >PostgreSQL does not support MERGE at the moment, sorry.\n> \n> \tIssue an UPDATE, and watch the rowcount ; if the rowcount is 0, \n> \tissue an INSERT.\n> \tBe prepared to retry if another transaction has inserted the row \n> meanwhile, though.\n\nOh, you mean, like the example that's in the documentation?\n\nhttp://www.postgresql.org/docs/8.1/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n\nExample 36-1\n\n> \tMERGE would be really useful.\n\nIt has been discussed before -- MERGE is something different.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 30 May 2006 18:38:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "What I do when I'm feeling lazy is execute a delete statement and then\nan insert. I only do it when I'm inserting/updating a very small number\nof rows, so I've never worried if its optimal for performance. Besides\nI've heard that an update in postgres is similar in performance to a\ndelete/insert.\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of PFC\n> Sent: Tuesday, May 30, 2006 5:35 PM\n> To: Jonah H. 
Harris; Waldomiro\n> Cc: [email protected]\n> Subject: Re: [PERFORM] INSERT OU UPDATE WITHOUT SELECT?\n> \n> \n> > PostgreSQL does not support MERGE at the moment, sorry.\n> \n> \tIssue an UPDATE, and watch the rowcount ; if the \n> rowcount is 0, issue an \n> INSERT.\n> \tBe prepared to retry if another transaction has \n> inserted the row \n> meanwhile, though.\n> \n> \tMERGE would be really useful.\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n", "msg_date": "Tue, 30 May 2006 17:54:00 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "On Tue, 30 May 2006 17:54:00 -0500\n\"Dave Dutcher\" <[email protected]> wrote:\n> What I do when I'm feeling lazy is execute a delete statement and then\n> an insert. I only do it when I'm inserting/updating a very small number\n> of rows, so I've never worried if its optimal for performance. Besides\n> I've heard that an update in postgres is similar in performance to a\n> delete/insert.\n\nWell, they are basically the same operation in PostgreSQL. An update\nadds a row to the end and marks the old one dead. A delete/insert\nmarks the row dead and adds one at the end. There may be some\noptimization if the engine does both in one operation.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 30 May 2006 19:05:08 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" }, { "msg_contents": "On Tue, May 30, 2006 at 07:05:08PM -0400, D'Arcy J.M. Cain wrote:\n> On Tue, 30 May 2006 17:54:00 -0500\n> \"Dave Dutcher\" <[email protected]> wrote:\n> > What I do when I'm feeling lazy is execute a delete statement and then\n> > an insert. I only do it when I'm inserting/updating a very small number\n> > of rows, so I've never worried if its optimal for performance. Besides\n> > I've heard that an update in postgres is similar in performance to a\n> > delete/insert.\n> \n> Well, they are basically the same operation in PostgreSQL. An update\n> adds a row to the end and marks the old one dead. A delete/insert\n> marks the row dead and adds one at the end. There may be some\n> optimization if the engine does both in one operation.\n\nThe new tuple will actually go on the same page during an update, if\npossible. If not, the FSM is consulted. Appending to the end of the\ntable is a last resort.\n\nUpdate is more effecient than delete/insert. First, it's one less\nstatement to parse and plan. Second, AFAIK insert always goes to the\nFSM; it has no way to know you're replacing the row(s) you just deleted.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 31 May 2006 01:29:08 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OU UPDATE WITHOUT SELECT?" } ]
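The update-then-insert loop PFC describes, and the documentation example Alvaro points to, come out looking roughly like this in PL/pgSQL (table and column names are made up for illustration; it needs a unique or primary key on the key column, and 8.0 or later for exception trapping and dollar quoting):

CREATE FUNCTION merge_credits(p_id integer, p_credits integer) RETURNS void AS
$$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE subscription_credits SET credits = credits + p_credits
         WHERE subscription_id = p_id;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row yet, so try to insert one; if another transaction
        -- inserts it first we get a unique violation and loop back
        BEGIN
            INSERT INTO subscription_credits (subscription_id, credits)
            VALUES (p_id, p_credits);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, loop around and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT merge_credits(1, 10);

Note that this is still not MERGE: it simply retries until one of the two statements sticks, which is the same rowcount-and-retry approach described above.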
[ { "msg_contents": "Hi, \n\nIs there a command to Insert a record If It does not exists and a update if \nIt exists? \n\nI do not want to do a select before a insert or update. \n\nI mean the postgres should test if a record exist before insert and if It \nexist then the postgres must do an update instead an insert. \n\nThanks, \n\nWMiro. \n\n\n", "msg_date": "Tue, 30 May 2006 17:47:47 -0300", "msg_from": "[email protected] <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT OR UPDATE WITHOUT SELECT" }, { "msg_contents": "am 30.05.2006, um 17:47:47 -0300 mailte [email protected] folgendes:\n> Hi, \n> \n> Is there a command to Insert a record If It does not exists and a update if \n> It exists? \n\nNot a single command, but a solution:\nhttp://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Sun, 4 Jun 2006 09:40:14 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT OR UPDATE WITHOUT SELECT" } ]
[ { "msg_contents": "Hi,\nWe just don't seem to be getting much benefit from autovacuum. Running\na manual vacuum seems to still be doing a LOT, which suggests to me\nthat I should either run a cron job and disable autovacuum, or just\nrun a cron job on top of autovacuum.\nThe problem is that if I run the same query (an update query) on the\ndb it takes 4 - 6 times longer than on a fresh copy (dumped then\nrestored to a different name on the same machine/postgres). There is\nclearly an issue here...\nI have been thinking about strategies and am still a bit lost. Our\napps are up 24/7 and we didn't code for the eventuality of having the\ndb going offline for maintenance... we live and learn!\nWould it be wise to, every week or so, dump then restore the db\n(closing all our apps and then restarting them)? The dump is only\nabout 270MB, and restore is about 10mins (quite a few large indexes).\nIt seems that we have no real need for vacuum full (I am clutching at\nstraws here...), so in theory I could just vacuum/analyse/reindex and\nthings would be OK. Will a fresh restore be much more performant than\na fully vacuumed/analysed/reindexed db? Probably? Possibly?\nI believe I understand the autovacuum docs but...\nHelp!\n8-]\nCheers\nAntoine\n\n-- \nThis is where I should put some witty comment.\n", "msg_date": "Thu, 1 Jun 2006 13:54:08 +0200", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "vacuuming problems continued" }, { "msg_contents": "Antoine <[email protected]> writes:\n> We just don't seem to be getting much benefit from autovacuum. Running\n> a manual vacuum seems to still be doing a LOT, which suggests to me\n> that I should either run a cron job and disable autovacuum, or just\n> run a cron job on top of autovacuum.\n\nThe default autovac parameters are very unaggressive --- have you\nexperimented with changing them? Do you have the FSM set large enough?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2006 11:05:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued " }, { "msg_contents": "On Thu, Jun 01, 2006 at 11:05:55AM -0400, Tom Lane wrote:\n> Antoine <[email protected]> writes:\n> > We just don't seem to be getting much benefit from autovacuum. Running\n> > a manual vacuum seems to still be doing a LOT, which suggests to me\n> > that I should either run a cron job and disable autovacuum, or just\n> > run a cron job on top of autovacuum.\n> \n> The default autovac parameters are very unaggressive --- have you\n> experimented with changing them? Do you have the FSM set large enough?\n\nDo any of the tables have a high 'churn rate' (a lot of updates) but are\nsupposed to be small? In cases like that autovacuum may not be enough.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 09:43:49 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:\n> Hi,\n> We just don't seem to be getting much benefit from autovacuum. 
Running\n> a manual vacuum seems to still be doing a LOT, which suggests to me\n> that I should either run a cron job and disable autovacuum, or just\n> run a cron job on top of autovacuum.\n\nWhat the others said; but also, which version of autovacuum (== which\nversion of the database) is this? Because the early versions had a\nnumber of missing bits to them that tended to mean the whole thing\ndidn't hang together very well. \n\n> I have been thinking about strategies and am still a bit lost. Our\n> apps are up 24/7 and we didn't code for the eventuality of having the\n> db going offline for maintenance... we live and learn!\n\nYou shouldn't need to, with anything after 7.4, if your vacuum\nregimen is right. There's something of a black art to it, though.\n\nA\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Mon, 5 Jun 2006 16:41:51 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:\n>> Hi,\n>> We just don't seem to be getting much benefit from autovacuum. Running\n>> a manual vacuum seems to still be doing a LOT, which suggests to me\n>> that I should either run a cron job and disable autovacuum, or just\n>> run a cron job on top of autovacuum.\n\nDon't know if this was covered in an earlier thread. Bear with me if so.\n\nI'm working with 7.4.8 and 8.0.3 systems, and pg_autovacuum does have some \nglitches ... in part solved by the integrated autovac in 8.1:\n\n- in our env, clients occasionally hit max_connections. This is a known and \n(sort of) desired pushback on load. However, that sometimes knocks pg_autovacuum \nout.\n\n- db server goes down for any reason: same problem.\n\n\nJust restarting pg_autovacuum is not good enough; when pg_autovacuum terminates, \nit loses its state, so big tables that change less than 50% between such \nterminations may never get vacuumed (!)\n\nFor that reason, it's taken a switch to a Perl script run from cron every 5 \nminutes, that persists state in a table. The script is not a plug-compatible \nmatch for pg_autovacuum (hardcoded rates; hardcoded distinction between user and \nsystem tables), but you may find it useful.\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.", "msg_date": "Tue, 06 Jun 2006 11:51:56 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "Mischa Sandberg wrote:\n> Andrew Sullivan wrote:\n>> On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:\n>>> Hi,\n>>> We just don't seem to be getting much benefit from autovacuum. Running\n>>> a manual vacuum seems to still be doing a LOT, which suggests to me\n>>> that I should either run a cron job and disable autovacuum, or just\n>>> run a cron job on top of autovacuum.\n> \n> Don't know if this was covered in an earlier thread. Bear with me if so.\n> \n> I'm working with 7.4.8 and 8.0.3 systems, and pg_autovacuum does have \n> some glitches ... in part solved by the integrated autovac in 8.1:\n> \n> - in our env, clients occasionally hit max_connections. This is a known \n> and (sort of) desired pushback on load. 
However, that sometimes knocks \n> pg_autovacuum out.\n\nThat is when you use:\n\nsuperuser_reserved_connections\n\nIn the postgresql.conf\n\n> - db server goes down for any reason: same problem.\n\nI believe you can use\n\nstats_reset_on_server_start = on\n\nFor that little problem.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 06 Jun 2006 12:21:39 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "Joshua D. Drake wrote:\n>> - in our env, clients occasionally hit max_connections. This is a \n>> known and (sort of) desired pushback on load. However, that sometimes \n>> knocks pg_autovacuum out.\n> \n> That is when you use:\n> \n> superuser_reserved_connections\n\nBlush. Good point. Though, when we hit max_connections on 7.4.8 systems,\nit's been a lemonade-from-lemons plus that vacuuming didn't fire up on top of \neverything else :-)\n\n>> - db server goes down for any reason: same problem.\n> \n> I believe you can use\n> stats_reset_on_server_start = on\n\nWe do. The problem is not the loss of pg_stat_user_tables.(n_tup_ins,...)\nIt's the loss of pg_autovacuum's CountAtLastVacuum (and ...Analyze)\nnumbers, which are kept in process memory. Never considered patching\npg_autovacuum to just sleep and try again, rather than exit, on a failed\ndb connection.\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Tue, 06 Jun 2006 12:38:09 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "Hi all and thanks for your responses. I haven't yet had a chance to\ntweak the autovac settings but I really don't think that things can be\nmaxing out even the default settings.\nWe have about 4 machines that are connected 24/7 - they were doing\nconstant read/inserts (24/7) but that was because the code was\nrubbish. I managed to whinge enough to get the programme to read, do\nthe work, then insert, and that means they are accessing (connected\nbut idle) for less than 5% of the day. We have about another 10\nmachines that access (reads and updates) from 8-5. It is running on a\nP4 with 256 or 512meg of ram and I simply refuse to believe this load\nis anything significant... :-(.\nThere are only two tables that see any action, and the smaller one is\nalmost exclusively inserts.\nMuch as I believe it shouldn't be possible the ratio of 5:1 for the db\nvs fresh copy has given me a taste for a copy/drop scenario...\nI will try and increase settings and keep you posted.\nCheers\nAntoine\n\n\n-- \nThis is where I should put some witty comment.\n", "msg_date": "Wed, 7 Jun 2006 22:52:26 +0200", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuuming problems continued" }, { "msg_contents": "Bloat doesn't depend on your update/delete rate; it depends on how many\nupdate/deletes occur between vacuums. 
Long running transactions also\ncome into play.\n\nAs for performance, a P4 with 512M of ram is pretty much a toy in the\ndatabase world; it wouldn't be very hard to swamp it.\n\nBut without actual details there's no way to know.\n\nOn Wed, Jun 07, 2006 at 10:52:26PM +0200, Antoine wrote:\n> Hi all and thanks for your responses. I haven't yet had a chance to\n> tweak the autovac settings but I really don't think that things can be\n> maxing out even the default settings.\n> We have about 4 machines that are connected 24/7 - they were doing\n> constant read/inserts (24/7) but that was because the code was\n> rubbish. I managed to whinge enough to get the programme to read, do\n> the work, then insert, and that means they are accessing (connected\n> but idle) for less than 5% of the day. We have about another 10\n> machines that access (reads and updates) from 8-5. It is running on a\n> P4 with 256 or 512meg of ram and I simply refuse to believe this load\n> is anything significant... :-(.\n> There are only two tables that see any action, and the smaller one is\n> almost exclusively inserts.\n> Much as I believe it shouldn't be possible the ratio of 5:1 for the db\n> vs fresh copy has given me a taste for a copy/drop scenario...\n> I will try and increase settings and keep you posted.\n> Cheers\n> Antoine\n> \n> \n> -- \n> This is where I should put some witty comment.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 7 Jun 2006 17:17:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming problems continued" } ]
[ { "msg_contents": "when i try do \\d in psql on a table i get this message!\nthis happens too when i try to run pg_dump...\n\nERROR: could not access status of transaction 4294967295\nDETAIL: could not open file \"pg_clog/0FFF\": File or directory not found\n\n\nsomeone could help me?? PleasE!\n\n\n\n\n\n\n\n\n\n\n\nwhen i try do \\d in psql on a table i get this \nmessage!\nthis happens too when i try to run pg_dump...\n \nERROR:  could not access status of transaction \n4294967295DETAIL:  could not open file \"pg_clog/0FFF\": File or \ndirectory not found\n \n \nsomeone could help me?? PleasE!", "msg_date": "Thu, 1 Jun 2006 14:17:50 -0300", "msg_from": "\"Joao\" <[email protected]>", "msg_from_op": true, "msg_subject": "help me problems with pg_clog file" }, { "msg_contents": "Joao,\n\nIf you had send the Email to pgsql-admin mailing list you would have got a\nfaster answer to ur query..\n\nhere is what i managed to do:-\n1. I deleted the\n$ls -lart the pg_clog folder\ntotal 756\n-rw------- 1 postgres users 262144 2006-04-10 17:16 0001\n-rw------- 1 postgres users 262144 2006-04-10 17:16 0000\ndrwx------ 2 postgres users 4096 2006-04-10 17:16 .\n-rw------- 1 postgres users 229376 2006-05-31 18:17 0002\ndrwx------ 10 postgres users 4096 2006-06-02 12:22 ..\n$ mv 0002 ../\n$ ls\n0000 0001\n$psql regression\nregression=# select count(1) from accounts;\nERROR: could not access status of transaction 2225656\nDETAIL: could not open file \"pg_clog/0002\": No such file or directory\nregression=# \\q\n\nThis Error came since the 0002 file from the pg_clog folder was missing.\nSince the logs are missing from pg_clog folder can perfom pg_resetxlogs to\nreset the logs and bring up the database.\n\n$ /usr/local/pgsql/bin/pg_ctl -D /newdisk/postgres/data -l\n/newdisk/postgres/data_log stop\nwaiting for postmaster to shut down... done\npostmaster stopped\n\n$ /usr/local/pgsql/bin/pg_resetxlog -x 2999800 /newdisk/postgres/data\nTransaction log reset\n\nThe Value 2999800 u can get if u see the postgresql output file during\nstartup or using\ngrep \"next transaction ID\" /newdisk/postgres/data_log\n\nthan did the following:-\n/usr/local/pgsql/bin/psql regression\n\nregression=# select count(1) from accounts;\n count\n---------\n 1000001\n(1 row)\n\nregression=# \\q\nYou can get more info from\nhttp://unix.business.utah.edu/doc/applications/postgres/postgres-html/app-pgresetxlog.html\n\nHope this gives u some usefull information in solving ur recovery condition.\n\n~gourish\n\nOn 6/1/06, Joao <[email protected]> wrote:\n>\n> when i try do \\d in psql on a table i get this message!\n> this happens too when i try to run pg_dump...\n>\n> ERROR: could not access status of transaction 4294967295\n> DETAIL: could not open file \"pg_clog/0FFF\": File or directory not found\n>\n>\n> someone could help me?? PleasE!\n>\n>\n>\n>\n>\n\n\n\n-- \nBest,\nGourish Singbal\n\n \nJoao,\n \nIf you had send the Email to pgsql-admin mailing list you would have got a faster answer to ur query..\n \nhere is what i managed to do:-\n1. 
I deleted the \n$ls -lart the pg_clog folder\ntotal 756-rw-------   1 postgres users 262144 2006-04-10 17:16 0001-rw-------   1 postgres users 262144 2006-04-10 17:16 0000drwx------   2 postgres users   4096 2006-04-10 17:16 .-rw-------   1 postgres users 229376 2006-05-31 18:17 0002\ndrwx------  10 postgres users   4096 2006-06-02 12:22 ..$ mv 0002 ../$ ls 0000  0001$psql regression\nregression=# select count(1) from accounts;ERROR:  could not access status of transaction 2225656DETAIL:  could not open file \"pg_clog/0002\": No such file or directoryregression=# \\qThis Error came since the 0002 file from the pg_clog folder was missing.\nSince the logs are missing from pg_clog folder can perfom pg_resetxlogs to reset the logs and bring up the database.\n \n$ /usr/local/pgsql/bin/pg_ctl -D /newdisk/postgres/data -l /newdisk/postgres/data_log stopwaiting for postmaster to shut down... donepostmaster stopped \n$ /usr/local/pgsql/bin/pg_resetxlog -x 2999800 /newdisk/postgres/dataTransaction log reset \nThe Value 2999800 u can get if u see the postgresql output file during startup or using  \ngrep \"next transaction ID\" /newdisk/postgres/data_log\n \nthan did the following:-\n/usr/local/pgsql/bin/psql regression\n\nregression=# select count(1) from accounts;  count--------- 1000001(1 row)\nregression=# \\q\nYou can get more info from\nhttp://unix.business.utah.edu/doc/applications/postgres/postgres-html/app-pgresetxlog.html\n \nHope this gives u some usefull information in solving ur recovery condition.\n \n~gourish\n \nOn 6/1/06, Joao <[email protected]> wrote:\n\n\n\n\nwhen i try do \\d in psql on a table i get this message!\nthis happens too when i try to run pg_dump...\n \nERROR:  could not access status of transaction 4294967295DETAIL:  could not open file \"pg_clog/0FFF\": File or directory not found\n \n \nsomeone could help me?? PleasE!\n \n \n -- Best,Gourish Singbal", "msg_date": "Fri, 2 Jun 2006 13:11:27 +0530", "msg_from": "\"Gourish Singbal\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] help me problems with pg_clog file" }, { "msg_contents": "On Fri, Jun 02, 2006 at 01:11:27PM +0530, Gourish Singbal wrote:\n> This Error came since the 0002 file from the pg_clog folder was missing.\n> Since the logs are missing from pg_clog folder can perfom pg_resetxlogs to\n> reset the logs and bring up the database.\n\nUnderstand that this will almost certainly lose some of your data.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 09:48:41 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] help me problems with pg_clog file" } ]
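For the record, the other stopgap that often comes up for a missing pg_clog segment, when pg_resetxlog feels too drastic, is to stub the file in with zeroes so the scan can get past the damaged tuples, e.g.:

# take a file-level copy of $PGDATA first; this only papers over corruption
dd if=/dev/zero of=$PGDATA/pg_clog/0FFF bs=256k count=1

A zero-filled segment makes those transactions look uncommitted, so pg_dump can usually complete, but the affected rows may silently disappear from the dump. A transaction id of 4294967295 (0xFFFFFFFF, all bits set) is the classic signature of a trashed tuple header rather than a real transaction, so whichever rescue route is taken, expect to lose some data, as Jim says, and check the hardware before trusting the restored copy.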
[ { "msg_contents": "Hi,\n\nI would like to know if my supposition is right.\n\nConsidering an environment with only one hard disk attached to a server, an\ninitial loading of the database probably is much faster using an IDE/ATA\ninterface with write-back on than using an SCSI interface. That�s because of\nthe SCSI command interface overhead.\n\nThen main advantage of SCSI interfaces, the multiuser environment is lost in\nthis scenery.\n\nAm I right? Am I missing something here?\n\nEven if I�m right, is something that could be done too improove SCSI loading\nperformance in this scenery?\n\nThanks in advance!\n\nReimer\n\n", "msg_date": "Fri, 2 Jun 2006 15:25:57 -0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Initial database loading and IDE x SCSI" }, { "msg_contents": "On Fri, 2006-06-02 at 13:25, [email protected] wrote:\n> Hi,\n> \n> I would like to know if my supposition is right.\n> \n> Considering an environment with only one hard disk attached to a server, an\n> initial loading of the database probably is much faster using an IDE/ATA\n> interface with write-back on than using an SCSI interface. That´s because of\n> the SCSI command interface overhead.\n> \n> Then main advantage of SCSI interfaces, the multiuser environment is lost in\n> this scenery.\n> \n> Am I right? Am I missing something here?\n> \n> Even if I´m right, is something that could be done too improove SCSI loading\n> performance in this scenery?\n\nThe answer is yes. And no.\n\nIDE drives notoriously lie about their cache, so that if you have the\ncache enabled, the IDE drive will nominally ack to an fsync before it's\nactually written the data. So, the IDE drive will write faster, but\nyour data probably won't survive a system crash or power loss during a\nwrite. If you turn off the cache, then the IDE drive will be much\nslower.\n\nSCSI overhead isn't really a big issue during loads because you're\nusually writing data at a good clip, and the overhead of SCSI is pretty\nsmall by comparison to how much data you'll be slinging.\n\nHowever, SCSI drives don't lie about Fsync, so the maximum speed of your\noutput will be limited by the speed at which your machine can fsync the\npg_xlog output. \n\nFor a single disk system, just doing development or a reporting\ndatabase, an IDE drive is often just fine. But under no circumstances\nshould you put an accounting system on a single drive, especially IDE\nwith cache turned on.\n", "msg_date": "Fri, 02 Jun 2006 13:40:11 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initial database loading and IDE x SCSI" }, { "msg_contents": "On Fri, 2006-06-02 at 15:25 -0300, [email protected] wrote:\n> Hi,\n> \n> I would like to know if my supposition is right.\n> \n> Considering an environment with only one hard disk attached to a server, an\n> initial loading of the database probably is much faster using an IDE/ATA\n> interface with write-back on than using an SCSI interface. That´s because of\n> the SCSI command interface overhead.\n\nNo, it's because the SCSI drive is honoring the database's request to\nmake sure the data is safe.\n\n> Then main advantage of SCSI interfaces, the multiuser environment is lost in\n> this scenery.\n> \n> Am I right? Am I missing something here?\n> \n> Even if I´m right, is something that could be done too improove SCSI loading\n> performance in this scenery?\n\nYou can perform the initial load in large transactions. 
The extra\noverhead for ensuring that data is safely written to disk will only be\nincurred once per transaction, so try to minimize the number of\ntransactions.\n\nYou could optionally set fsync=off in postgresql.conf, which means that\nthe SCSI drive will operate with no more safety than an IDE drive. But\nyou should only do that if you're willing to deal with catastrophic data\ncorruption. But if this is for a desktop application where you need to\nsupport IDE drives, you'll need to deal with that anyway.\n\n> Thanks in advance!\n> \n> Reimer\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n", "msg_date": "Fri, 02 Jun 2006 11:41:27 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initial database loading and IDE x SCSI" }, { "msg_contents": "<[email protected]> writes:\n> I would like to know if my supposition is right.\n\n> Considering an environment with only one hard disk attached to a server, an\n> initial loading of the database probably is much faster using an IDE/ATA\n> interface with write-back on than using an SCSI interface. That�s because of\n> the SCSI command interface overhead.\n\nI *seriously* doubt that.\n\nIf you see a difference in practice it's likely got more to do with the\nSCSI drive not lying about write-complete ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jun 2006 14:43:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initial database loading and IDE x SCSI " }, { "msg_contents": "> <[email protected]> writes:\n> > I would like to know if my supposition is right.\n>\n> > Considering an environment with only one hard disk attached to\n> a server, an\n> > initial loading of the database probably is much faster using an IDE/ATA\n> > interface with write-back on than using an SCSI interface.\n> That�s because of\n> > the SCSI command interface overhead.\n>\n> I *seriously* doubt that.\n>\n> If you see a difference in practice it's likely got more to do with the\n> SCSI drive not lying about write-complete ...\n>\n\nMany thanks for the answers! There are some more thinks I could not\nunderstand about this issue?\n\nI was considering it but if you have a lot of writes operations, will not\nthe disk cache full quickly?\n\nIf it�s full will not the system wait until something could be write to the\ndisk surface?\n\nIf you have almost all the time the cache full will it not useless?\n\nShould not, in this scenary, with almost all the time the cache full, IDE\nand SCSI write operations have almost the same performance?\n\nThanks in advance,\n\nReimer\n\n\n", "msg_date": "Fri, 2 Jun 2006 16:54:50 -0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "On Fri, 2006-06-02 at 16:54 -0300, [email protected] wrote:\n> > <[email protected]> writes:\n> > > I would like to know if my supposition is right.\n> >\n> > > Considering an environment with only one hard disk attached to\n> > a server, an\n> > > initial loading of the database probably is much faster using an IDE/ATA\n> > > interface with write-back on than using an SCSI interface.\n> > That´s because of\n> > > the SCSI command interface overhead.\n> >\n> > I *seriously* doubt that.\n> >\n> > If you see a difference in practice it's likely got more to do with the\n> > SCSI drive not lying about write-complete ...\n> >\n> \n> Many thanks for the answers! 
There are some more thinks I could not\n> understand about this issue?\n> \n> I was considering it but if you have a lot of writes operations, will not\n> the disk cache full quickly?\n> \n> If it´s full will not the system wait until something could be write to the\n> disk surface?\n> \n> If you have almost all the time the cache full will it not useless?\n> \n> Should not, in this scenary, with almost all the time the cache full, IDE\n> and SCSI write operations have almost the same performance?\n> \n\nThis is the ideal case. However, you only get to that case if you use\nlarge transactions or run with fsync=off or run with a write cache (like\nIDE drives, or nice RAID controllers which have a battery-backed cache).\n\nRemember that one of the important qualities of a transaction is that\nit's durable, so once you commit it the data is definitely stored on the\ndisk and one nanosecond later you could power the machine off and it\nwould still be there.\n\nTo achieve that durability guarantee, the system needs to make sure that\nif you commit a transaction, the data is actually written to the\nphysical platters on the hard drive.\n\nThis means that if you take the naive approach to importing data (one\nrow at a time, each in its own transaction), then instead of blasting\ndata onto the hard drive at maximum speed, the application will wait for\nthe platter to rotate to the right position, write one row's worth of\ndata, then wait for the platter to rotate to the right position again\nand insert another row, etc. This approach is very slow.\n\nThe naive approach works on IDE drives because they don't (usually)\nhonor the request to write the data immediately, so it can fill its\nwrite cache up with several megabytes of data and write it out to the\ndisk at its leisure.\n\n> Thanks in advance,\n> \n> Reimer\n> \n\n-- Mark\n", "msg_date": "Fri, 02 Jun 2006 13:13:29 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "Many thanks Mark,\n\nI will consider fsync=off only to do an initial load, not for a database normal operation.\n\nI was just thinking about this hipotetical scenario: \na) a restore database operation\nb) fsync off\nc) write-back on (IDE)\n\nAs I could understand, in this sceneraio, it´s normal the IDE drive be faster than the SCSI, ok?\n\nOf course, the database is exposed because of the fsync=off, but if you consider only the system performance, then it is true. Isn´t it?\n\nThanks,\n\nReimer\n\n\n", "msg_date": "Fri, 2 Jun 2006 17:37:01 -0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "On Fri, 2006-06-02 at 17:37 -0300, [email protected] wrote:\n> Many thanks Mark,\n> \n> I will consider fsync=off only to do an initial load, not for a database normal operation.\n> \n\nThis approach works well. 
You just need to remember to shut down the\ndatabase and start it back up again with fsync enabled for it to be safe\nafter the initial load.\n\n> I was just thinking about this hipotetical scenario: \n> a) a restore database operation\n> b) fsync off\n> c) write-back on (IDE)\n> \n> As I could understand, in this sceneraio, it´s normal the IDE drive be faster than the SCSI, ok?\n> \n\nIf fsync is off, then the IDE drive loses its big advantage, so IDE and\nSCSI should be about the same speed.\n\n> Of course, the database is exposed because of the fsync=off, but if you consider only the system performance, then it is true. Isn´t it?\n\n\n\n> Thanks,\n> \n> Reimer\n> \n> \n", "msg_date": "Fri, 02 Jun 2006 13:44:04 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "> > Many thanks Mark,\n> > \n> > I will consider fsync=off only to do an initial load, not for a \n> database normal operation.\n> > \n> \n> This approach works well. You just need to remember to shut down the\n> database and start it back up again with fsync enabled for it to be safe\n> after the initial load.\n> \n> > I was just thinking about this hipotetical scenario: \n> > a) a restore database operation\n> > b) fsync off\n> > c) write-back on (IDE)\n> > \n> > As I could understand, in this sceneraio, it´s normal the IDE \n> drive be faster than the SCSI, ok?\n> > \n> \n> If fsync is off, then the IDE drive loses its big advantage, so IDE and\n> SCSI should be about the same speed.\n> \nSorry, I would like to say fsync on instead of fsync off. But I think I understood.\n\nWith fsync off the performance should be almost the same (SCSI and IDE), and with fsync on \nthe IDE will be faster, but data are exposed.\n\nThanks!\n\n", "msg_date": "Fri, 2 Jun 2006 17:55:38 -0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "Mark Lewis wrote:\n\n> \n> The naive approach works on IDE drives because they don't (usually)\n> honor the request to write the data immediately, so it can fill its\n> write cache up with several megabytes of data and write it out to the\n> disk at its leisure.\n> \n\nFWIW - If you are using MacOS X or Windows, then later SATA (in \nparticular, not sure about older IDE) will honor the request to write \nimmediately, even if the disk write cache is enabled.\n\nI believe that Linux 2.6+ and SATA II will also behave this way (I'm \nthinking that write barrier support *is* in 2.6 now - however you would \nbe wise to follow up on the Linux kernel list if you want to be sure!)\n\nIn these cases data integrity becomes similar to SCSI - however, unless \nyou buy SATA specifically designed for a server type workload (e.g WD \nRaptor), then ATA/SATA tend to fail more quickly if used in this way \n(e.g. 
24/7, hot/dusty environment etc).\n\nCheers\n\nMark\n", "msg_date": "Sat, 03 Jun 2006 12:13:27 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "Mark Kirkwood wrote:\n> Mark Lewis wrote:\n> \n> > \n> > The naive approach works on IDE drives because they don't (usually)\n> > honor the request to write the data immediately, so it can fill its\n> > write cache up with several megabytes of data and write it out to the\n> > disk at its leisure.\n> > \n> \n> FWIW - If you are using MacOS X or Windows, then later SATA (in \n> particular, not sure about older IDE) will honor the request to write \n> immediately, even if the disk write cache is enabled.\n> \n> I believe that Linux 2.6+ and SATA II will also behave this way (I'm \n> thinking that write barrier support *is* in 2.6 now - however you would \n> be wise to follow up on the Linux kernel list if you want to be sure!)\n> \n> In these cases data integrity becomes similar to SCSI - however, unless \n> you buy SATA specifically designed for a server type workload (e.g WD \n> Raptor), then ATA/SATA tend to fail more quickly if used in this way \n> (e.g. 24/7, hot/dusty environment etc).\n\nThe definitive guide to servers vs. desktop drives is:\n\n http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 3 Jun 2006 22:03:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Initial database loading and IDE x SCSI" }, { "msg_contents": "Bruce Momjian wrote:\n\n> \n> The definitive guide to servers vs. desktop drives is:\n> \n> http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> \n\nYeah - very nice paper, well worth a read (in spite of the fact that it \nis also Seagate propaganda, supporting their marketing position and \nterminology!)\n\nCheers\n\nMark\n", "msg_date": "Sun, 04 Jun 2006 14:21:08 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Initial database loading and IDE x SCSI" } ]
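To make the point about per-transaction commit overhead concrete, here is a minimal sketch of an initial load done inside a single transaction, so the commit-time fsync cost is paid once rather than once per row. The table, columns and file path are made-up stand-ins, not anything from the thread; adapt them to the real schema. Note that COPY FROM a server-side file needs superuser rights, while \copy in psql reads from the client instead.

---------------------------- bulk load sketch - begin
-- Hypothetical one-transaction load: COPY is far cheaper than row-by-row
-- INSERTs, and the index is built only after the data is already in place.
BEGIN;
CREATE TABLE staging_orders (order_id integer, customer_id integer, amount numeric);
COPY staging_orders FROM '/tmp/orders.dat';
CREATE INDEX staging_orders_customer_idx ON staging_orders (customer_id);
ANALYZE staging_orders;
COMMIT;
---------------------------- bulk load sketch - end

If the load really has to go through individual INSERTs from an application, wrapping a few thousand of them between BEGIN and COMMIT buys most of the same benefit, with or without a write-caching drive.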
[ { "msg_contents": "Hello,\n\nI am setting up a postgres server that will hold a critical event within\nthe next few weeks.\nIt's the national exam results (140000 students).\nThe problem is that in the first few hours there will be huge traffic\n(last year 250K requests in the first hour alone).\nI do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC).\nOne beast will be apache, and the other will be postgres.\nI'm using httperf/autobench for measurements and the best result I can get \nis that my system can handle a traffic of almost 1600 new con/sec.\n\nI cannot scale beyond that value and the funny thing is that none of the \nservers is swapping or heavily loaded, and neither postgres nor apache is\nrefusing connections.\nMy database is only 58M, it's a read-only DB and will last only for a month.\nHere is my postgres.conf :\n---------------------------- postgres.conf - begin\nmax_connections = 6000\nshared_buffers = 12288 \nwork_mem = 512 \nmaintenance_work_mem = 16384 \neffective_cache_size = 360448 \nrandom_page_cost \t2\nlog_destination \t'stderr'\nredirect_stderr\ton\nlog_min_messages\tnotice\nlog_error_verbosity\tdefault\nlog_disconnections \ton\nautovacuum\toff\nstats_start_collector\toff\nstats_row_level\toff\n---------------------------- postgres.conf - end\n---------------------------- sysctl.conf (postgres) - begin\nfs.file-max= 5049800\nnet.ipv4.ip_local_port_range= 1024 65000\nnet.ipv4.tcp_keepalive_time= 120\nkernel.shmmax= 2147483648\nkernel.sem= 250 96000 100 384\n---------------------------- sysctl.conf (postgres) - end\nkernel semaphores are grown as postgres needs.\n\nvmstat is not showing any \"annoyance\"; I cannot find where it is blocked!\n\nPlease, I need help as it's a very critical deployment :(\n\nNB: when stressed with a static page, apache can handle more than 16k new con/sec\n\n\n", "msg_date": "Sat, 03 Jun 2006 10:31:03 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "scaling up postgres" }, { "msg_contents": "On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n> I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n> One beast will be apache, and the other will be postgres.\n> I'm using httperf/autobench for measurements and the best result I can get \n> is that my system can handle a traffic of almost 1600 new con/sec.\n\nWhat version of PostgreSQL? (8.1 is better than 8.0 is much better than 7.4.)\nHave you remembered to turn HT off? Have you considered Opterons instead of\nXeons? (The Xeons generally scale bad with PostgreSQL.) What kind of queries\nare you running? Are you using connection pooling? Prepared queries?\n\n> vmstat is not showing any \"annoyance\"; I cannot find where it is blocked!\n\nHow many context switches (\"cs\" in vmstat) do you get per second?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 3 Jun 2006 11:43:55 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "\n> One beast will be apache, and the other will be postgres.\n\n> I'm using httperf/autobench for measurments and the best result I can get\n> is that my system can handle a trafiic of almost 1600 New con/sec.\n\n> NB : apache when stressed for a static page, i can handle more 16k new \n> con/sec\n\n\tThat's not the point.\n\tHere are a few points of advice.\n\n\tUSE LIGHTTPD DAMMIT !\n\n\tIf you want performance, that is.\n\tOn my server (Crap Celeron) it handles about 100 hits/s on static files \nand 10 hits/s on PHP pages ; CPU utilization is 1-2%, not counting PHP.\n\tlighttpd handles 14K static pages/s on my laptop. That's about as much as \nyour bi-xeon does with apache...\n\n\tYou want a web server that uses as little CPU as possible so that more \nCPU is left for generating dynamic content.\n\n\tAlso, you want to have a number of concurrent DB connections which is \nneither too high, nor too low.\n\tApache + mod_php needs to spawn a lot of processes, thus you need a lot \nof database connections.\n\tThis tends not to be optimal.\n\n\tToo few concurrent DB connections -> network latency between DB and \nwebserver will be a bottleneck.\n\tToo many connections -> excess context switching, suboptimal use of CPU \ncache, memory use bloat.\n\n\tSo, I advise you to use lighttpd fronting PHP as fastcgi (if you use PHP) \n; if you use Java or whatever which has a HTTP interface, use lighttpd as \na proxy for your dynamic page generation.\n\tSpawn a reasonable number of PHP processes. The number depends on your \napplication, but from 10 to 30 is a good starting point.\n\n\tUSE PERSISTENT DATABASE CONNECTIONS !\n\n\tPostgres will breathe a little better ; now, check if it is still slow. \nIf it is, you need to find the bottleneck...\n\tI can help you a bit by private email if you want.\n", "msg_date": "Sat, 03 Jun 2006 11:57:50 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "\n>I cannot scale beyond that value and the funny thing, is that none of the \n>servers is swapping, or heavy loaded, neither postgres nor apache are\n>refusing connexions.\n> \n>\nHearing a story like this (throughput hits a hard limit, but\nhardware doesn't appear to be 100% utilized), I'd suspect\ninsufficient concurrency to account for the network latency\nbetween the two servers.\n\nAlso check that your disks aren't saturating somewhere (with\niostat or something similar).\n\nYou could run pstack against both processes and see what they're\ndoing while the system is under load. That might give a clue\n(e.g. you might see the apache processs waiting on\na response from PG, and the PG processes waiting\non a new query to process).\n\nSince you've proved that your test client and apache can\nhandle a much higher throughput, the problem must lie\nsomewhere else (in posgresql or the interface between\nthe web server and postgresql).\n\n\n\n\n\n\n\n\n", "msg_date": "Sat, 03 Jun 2006 06:43:24 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "[email protected] writes:\n> I'm using httperf/autobench for measurments and the best result I can get \n> is that my system can handle a trafiic of almost 1600 New con/sec.\n\nAs per PFC's comment, if connections/sec is a bottleneck for you then\nthe answer is to use persistent connections. 
Launching a new backend\nis a fairly heavyweight operation in Postgres. It sounds like there\nmay be some system-level constraints affecting the process creation\nrate as well, but it's silly to spend a lot of effort on this when\nyou can so easily go around the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jun 2006 11:12:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres " }, { "msg_contents": "Tom Lane wrote:\n\n>[email protected] writes:\n> \n>\n>>I'm using httperf/autobench for measurments and the best result I can get \n>>is that my system can handle a trafiic of almost 1600 New con/sec.\n>> \n>>\n>\n>As per PFC's comment, if connections/sec is a bottleneck for you then\n>the answer is to use persistent connections. Launching a new backend\n>is a fairly heavyweight operation in Postgres. \n>\nI thought the OP was talking about HTTP connections/s. He didn't say if he\nwas using persistent database connections or not (obviously better if so).\nIf it were the case that his setup is new backend launch rate-limited, then\nwouldn't the machine show CPU saturation ? (he said it didn't).\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\[email protected] writes:\n \n\nI'm using httperf/autobench for measurments and the best result I can get \nis that my system can handle a trafiic of almost 1600 New con/sec.\n \n\n\nAs per PFC's comment, if connections/sec is a bottleneck for you then\nthe answer is to use persistent connections. Launching a new backend\nis a fairly heavyweight operation in Postgres. \n\nI thought the OP was talking about HTTP connections/s. He didn't say if\nhe\nwas using persistent database connections or not (obviously better if\nso).\nIf it were the case that his setup is new backend launch rate-limited,\nthen \nwouldn't the machine show CPU saturation ?  (he said it didn't).", "msg_date": "Sat, 03 Jun 2006 09:18:46 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": ">\n> Tom Lane wrote:\n> As per PFC's comment, if connections/sec is a bottleneck for you then\nthe\n> answer is to use persistent connections. Launching a new backend\nis a fairly\n> heavyweight operation in Postgres.\n>\n\nIn which case maybe pgpool could help in this respect?\nhttp://pgpool.projects.postgresql.org/\n\nCheers,\n\nNeil.\n", "msg_date": "Sat, 3 Jun 2006 16:35:54 +0100", "msg_from": "\"Neil Saunders\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:\n> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n> > I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n> > One beast will be apache, and the other will be postgres.\n> > I'm using httperf/autobench for measurments and the best result I can get \n> > is that my system can handle a trafiic of almost 1600 New con/sec.\n> \n> What version of PostgreSQL? (8.1 is better than 8.0 is much better than 7.4.)\n> Have you remembered to turn HT off? Have you considered Opterons instead of\n> Xeons? (The Xeons generally scale bad with PostgreSQL.) 
What kind of queries\n\nCould you point out to some more detailed reading on why Xeons are\npoorer choice than Opterons when used with PostgreSQL?\n\n\tMario\n\n", "msg_date": "Sun, 11 Jun 2006 23:42:20 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Sun, Jun 11, 2006 at 11:42:20PM +0200, Mario Splivalo wrote:\n> Could you point out to some more detailed reading on why Xeons are\n> poorer choice than Opterons when used with PostgreSQL?\n\nThere are lots of theories, none conclusive, but the benchmarks certainly\npoint that way consistently. Read the list archives for the details.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 11 Jun 2006 23:43:41 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "Mario Splivalo wrote:\n> On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:\n>> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n>>> I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n>>> One beast will be apache, and the other will be postgres.\n>>> I'm using httperf/autobench for measurments and the best result I can get \n>>> is that my system can handle a trafiic of almost 1600 New con/sec.\n>> What version of PostgreSQL? (8.1 is better than 8.0 is much better than 7.4.)\n>> Have you remembered to turn HT off? Have you considered Opterons instead of\n>> Xeons? (The Xeons generally scale bad with PostgreSQL.) What kind of queries\n> \n> Could you point out to some more detailed reading on why Xeons are\n> poorer choice than Opterons when used with PostgreSQL?\n\nIt isn't just PostgreSQL. It is any database. Opterons can move memory \nand whole lot faster then Xeons.\n\nJoshua D. Drake\n\n> \n> \tMario\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Sun, 11 Jun 2006 16:21:54 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On 12 Jun 2006, at 00:21, Joshua D. Drake wrote:\n\n> Mario Splivalo wrote:\n>> On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:\n>>> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n>>>> I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n>>>> One beast will be apache, and the other will be postgres.\n>>>> I'm using httperf/autobench for measurments and the best result \n>>>> I can get is that my system can handle a trafiic of almost 1600 \n>>>> New con/sec.\n>>> What version of PostgreSQL? (8.1 is better than 8.0 is much \n>>> better than 7.4.)\n>>> Have you remembered to turn HT off? Have you considered Opterons \n>>> instead of\n>>> Xeons? (The Xeons generally scale bad with PostgreSQL.) What kind \n>>> of queries\n>> Could you point out to some more detailed reading on why Xeons are\n>> poorer choice than Opterons when used with PostgreSQL?\n>\n> It isn't just PostgreSQL. It is any database. 
Opterons can move \n> memory and whole lot faster then Xeons.\n\nA whole lot faster indeed.\n\nhttp://www.amd.com/us-en/Processors/ProductInformation/ \n0,,30_118_8796_8799,00.html\nhttp://www.theinquirer.net/?article=10797\n\nAlthough apparently the dual core ones are a little better than the \nold ones\n\nhttp://www.anandtech.com/IT/showdoc.aspx?i=2644\n\n(Just to provide some evidence ;)\n", "msg_date": "Mon, 12 Jun 2006 10:56:07 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "Hi Mario,\n\nI did run pgbench on several production servers:\nHP DL585 - 4-way AMD Opteron 875\nHP DL585 - 4-way AMD Opteron 880\nHP DL580 G3 - 4-way Intel XEON MP 3.0 GHz\nFSC RX600 S2 - 4-way Intel XEON MP DC 2.66 GHz\nFSC RX600 - 4-way Intel XEON MP 2.5 GHz\n\nThis test has been done with 8.1.4. I increased the number of clients.\nI attached the result as diagram. I included not all test system but the \ngap between XEON and Opteron is always the same.\n\nThe experiences with production systems were the same. We replaced the \nXEON box with Opteron box with a dramatic change of performance.\n\nBest regards\nSven.\n\n\nMario Splivalo schrieb:\n> On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:\n>> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n>>> I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n>>> One beast will be apache, and the other will be postgres.\n>>> I'm using httperf/autobench for measurments and the best result I can get \n>>> is that my system can handle a trafiic of almost 1600 New con/sec.\n>> What version of PostgreSQL? (8.1 is better than 8.0 is much better than 7.4.)\n>> Have you remembered to turn HT off? Have you considered Opterons instead of\n>> Xeons? (The Xeons generally scale bad with PostgreSQL.) What kind of queries\n> \n> Could you point out to some more detailed reading on why Xeons are\n> poorer choice than Opterons when used with PostgreSQL?\n> \n> \tMario\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany", "msg_date": "Mon, 12 Jun 2006 12:01:09 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "Hi all,\n\nJoshua D. Drake schrieb:\n> Mario Splivalo wrote:\n>> Could you point out to some more detailed reading on why Xeons are\n>> poorer choice than Opterons when used with PostgreSQL?\n> \n> It isn't just PostgreSQL. It is any database. Opterons can move memory \n> and whole lot faster then Xeons.\n\nYes. 
You can run good old memtest86 and you see the difference.\nHere my numbers with memtest86 (blocksize 128 MB).\nHP DL580 G3 (4-way XEON MP - DDR RAM) => 670 MByte/sec\nFSC RX600 S2 (4-way XEOM MP DC - DDR2-400 PC2-3200) => 1300 MByte/sec\nHP DL585 (4-way Opteron DDR2-400 PC2-3200) => 1500 MByte/sec\nI used memxfer5b.cpp.\n\nCheers Sven.\n", "msg_date": "Mon, 12 Jun 2006 13:21:05 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nSven Geisler wrote:\n> Hi Mario,\n> \n> I did run pgbench on several production servers:\n> HP DL585 - 4-way AMD Opteron 875\n> HP DL585 - 4-way AMD Opteron 880\n> HP DL580 G3 - 4-way Intel XEON MP 3.0 GHz\n> FSC RX600 S2 - 4-way Intel XEON MP DC 2.66 GHz\n> FSC RX600 - 4-way Intel XEON MP 2.5 GHz\n> \n> This test has been done with 8.1.4. I increased the number of clients.\n> I attached the result as diagram. I included not all test system but the\n> gap between XEON and Opteron is always the same.\n> \n> The experiences with production systems were the same. We replaced the\n> XEON box with Opteron box with a dramatic change of performance.\n> \n> Best regards\n> Sven.\n> \n> \n> Mario Splivalo schrieb:\n>> On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:\n>>> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [email protected] wrote:\n>>>> I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)\n>>>> One beast will be apache, and the other will be postgres.\n>>>> I'm using httperf/autobench for measurments and the best result I\n>>>> can get is that my system can handle a trafiic of almost 1600 New\n>>>> con/sec.\n>>> What version of PostgreSQL? (8.1 is better than 8.0 is much better\n>>> than 7.4.)\n>>> Have you remembered to turn HT off? Have you considered Opterons\n>>> instead of\n>>> Xeons? (The Xeons generally scale bad with PostgreSQL.) What kind of\n>>> queries\n>>\n>> Could you point out to some more detailed reading on why Xeons are\n>> poorer choice than Opterons when used with PostgreSQL?\n>>\n>> Mario\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\nThank you for sharing this.\nComing back to my problem :) A very faithful partner accepted to\ngracefully borrow us 3 Pseries (bi-ppc + 2G RAM not more). with linux on\nthem.\nNow I'm trying to make my tests, and I'm not that sure I will make the\nswitch to the PSeries, since my dual xeon with 4 G RAM can handle 3500\nconcurrent postmasters consuming 3.7 G of the RAM. 
I cannot reach this\nnumber on the PSeries with 2 G.\n\ncan someone give me advice ?\nBTW, I promise, at the end of my tests, I'll publish my report.\n\n- --\nZied Fakhfakh\nGPG Key : gpg --keyserver subkeys.pgp.net --recv-keys F06B55B5\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.3 (GNU/Linux)\nComment: Using GnuPG with Fedora - http://enigmail.mozdev.org\n\niD8DBQFEjeDbS1DO7ovpKz8RAnLGAJ96/1ndGoc+HhBvOfrmlQnJcfxa6QCfQK9w\ni6/GGUCBGk5pdNUDAmVN5RQ=\n=5Mns\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 12 Jun 2006 23:47:07 +0200", "msg_from": "Zydoon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Mon, Jun 12, 2006 at 11:47:07PM +0200, Zydoon wrote:\n> Thank you for sharing this.\n> Coming back to my problem :) A very faithful partner accepted to\n> gracefully borrow us 3 Pseries (bi-ppc + 2G RAM not more). with linux on\n> them.\n> Now I'm trying to make my tests, and I'm not that sure I will make the\n> switch to the PSeries, since my dual xeon with 4 G RAM can handle 3500\n> concurrent postmasters consuming 3.7 G of the RAM. I cannot reach this\n> number on the PSeries with 2 G.\n> \n> can someone give me advice ?\n\nUhm... stick with commodity CPUs?\n\nSeriously, unless you're going to run on some seriously big hardware\nyou'll be hard-pressed to find better performance/dollar than going with\na server running Opterons.\n\nIf you're trying to decide how much hardware you need to meet a specific\nperformance target there's a company here in Austin I can put you in\ntouch with; if I'm not mistaken on the cost of pSeries hardware their\nfee would be well worth it to make sure you don't end up with too much\n(or worse, too little) hardware for your load.\n\n> BTW, I promise, at the end of my tests, I'll publish my report.\n\nGreat. More performance data is always good to have.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 14:26:09 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Mon, 2006-06-12 at 16:47, Zydoon wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Thank you for sharing this.\n> Coming back to my problem :) A very faithful partner accepted to\n> gracefully borrow us 3 Pseries (bi-ppc + 2G RAM not more). with linux on\n> them.\n> Now I'm trying to make my tests, and I'm not that sure I will make the\n> switch to the PSeries, since my dual xeon with 4 G RAM can handle 3500\n> concurrent postmasters consuming 3.7 G of the RAM. I cannot reach this\n> number on the PSeries with 2 G.\n> \n> can someone give me advice ?\n> BTW, I promise, at the end of my tests, I'll publish my report.\n\nSearch the performance archives for the last 4 or 5 months for PPC /\npseries machines.\n\nYou'll find a very long thread about the disappointing performance the\ntester got with a rather expensive P Series machine. 
And his happy\nending of testing on an Opteron machine.\n", "msg_date": "Tue, 13 Jun 2006 14:28:49 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Tue, 13 Jun 2006 14:28:49 -0500\nScott Marlowe <[email protected]> wrote:\n\n> Search the performance archives for the last 4 or 5 months for PPC /\n> pseries machines.\n> \n> You'll find a very long thread about the disappointing performance the\n> tester got with a rather expensive P Series machine. And his happy\n> ending of testing on an Opteron machine.\n\nAmen, brother :)\n\nWe hoped throwing a silly pSeries 650 would solve all our problems. Boy\nwere we wrong... a world of pain...\n\nDon't go there - just buy an Opteron system - if you're talking about\nIBM big iron, a decent Opteron will cost you about as much as a couple\nof compilers and an on-site visit from IBM...\n\nCheers,\nGavin.\n", "msg_date": "Tue, 13 Jun 2006 20:38:24 +0100", "msg_from": "Gavin Hamill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "\n> Uhm... stick with commodity CPUs?\n\n\tHehe, does this include Opterons ?\n\n\tStill, I looked on the \"customize your server\" link someone posted and \nit's amazing ; these things have become cheaper while I wasn't looking...\n\tYou can buy 10 of these boxes with raptors and 4 opteron cores and 8 gigs \nof RAM for the price of your average marketing boss's car... definitely \nmakes you think doesn't it.\n\n\tJuts wait until someone equates the price in man-hours to fix/run a \nborken Dell box...\n\n\n\n\t\n", "msg_date": "Tue, 13 Jun 2006 22:14:44 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Tue, Jun 13, 2006 at 10:14:44PM +0200, PFC wrote:\n> \n> >Uhm... stick with commodity CPUs?\n> \n> \tHehe, does this include Opterons ?\n \nAbsolutely. Heck, it wouldn't surprise me if a single model # of Opteron\nsold more than all Power CPUs put together...\n\n> \tStill, I looked on the \"customize your server\" link someone posted \n> \tand it's amazing ; these things have become cheaper while I wasn't \n> looking...\n> \tYou can buy 10 of these boxes with raptors and 4 opteron cores and 8 \n> \tgigs of RAM for the price of your average marketing boss's car... \n> definitely makes you think doesn't it.\n \nAnd if you spend that much on CPU for a database, you're likely to be\npretty sadly disappointed, depending on what you're doing.\n\n> \tJuts wait until someone equates the price in man-hours to fix/run a \n> borken Dell box...\n\nWould probably sound like a Mastercard commercial...\n\nNot having to babysit your servers every day: Priceless\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 16:21:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGavin Hamill wrote:\n> On Tue, 13 Jun 2006 14:28:49 -0500\n> Scott Marlowe <[email protected]> wrote:\n> \n>> Search the performance archives for the last 4 or 5 months for PPC /\n>> pseries machines.\n>>\n>> You'll find a very long thread about the disappointing performance the\n>> tester got with a rather expensive P Series machine. 
And his happy\n>> ending of testing on an Opteron machine.\n> \n> Amen, brother :)\n> \n> We hoped throwing a silly pSeries 650 would solve all our problems. Boy\n> were we wrong... a world of pain...\n> \n> Don't go there - just buy an Opteron system - if you're talking about\n> IBM big iron, a decent Opteron will cost you about as much as a couple\n> of compilers and an on-site visit from IBM...\n> \n> Cheers,\n> Gavin.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \nThings cannot be clearer :)\nI really know that opterons are the best I can have.\nBut for now I have to publish the results Sunday 25th on either the\nxeons or the PPCs.\nTomorrow I'll conduct a deeper test, and come back.\nCheers.\n\n- --\nZied Fakhfakh\nGPG Key : gpg --keyserver subkeys.pgp.net --recv-keys F06B55B5\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.3 (GNU/Linux)\nComment: Using GnuPG with Fedora - http://enigmail.mozdev.org\n\niD8DBQFEjy0cS1DO7ovpKz8RAklqAKDC75a8SQUoGwNHGxu4ysZhNt5eJwCgt0mP\nYHfZbYVS44kxFyxxEzs9KE0=\n=aLbn\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 13 Jun 2006 23:24:44 +0200", "msg_from": "Zydoon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "Maybe from a postgresql perspective the cpus may be useless but the memory\non the pSeries can't be beat. We've been looking at running our warehouse\n(PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1.\n\nWe've been thrilled with the performance of our DB2 systems that run on\nAIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've got\ntwo 86GB p570s sitting there being under utilized.\n\n\nFYI,\n\nI've not seen my posts showing up on the list or the archives so I'm hoping\nthis gets through.\n\nOn 6/13/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Tue, Jun 13, 2006 at 10:14:44PM +0200, PFC wrote:\n> >\n> > >Uhm... stick with commodity CPUs?\n> >\n> > Hehe, does this include Opterons ?\n>\n> Absolutely. Heck, it wouldn't surprise me if a single model # of Opteron\n> sold more than all Power CPUs put together...\n>\n> > Still, I looked on the \"customize your server\" link someone posted\n> > and it's amazing ; these things have become cheaper while I\n> wasn't\n> > looking...\n> > You can buy 10 of these boxes with raptors and 4 opteron cores and\n> 8\n> > gigs of RAM for the price of your average marketing boss's car...\n> > definitely makes you think doesn't it.\n>\n> And if you spend that much on CPU for a database, you're likely to be\n> pretty sadly disappointed, depending on what you're doing.\n>\n> > Juts wait until someone equates the price in man-hours to fix/run\n> a\n> > borken Dell box...\n>\n> Would probably sound like a Mastercard commercial...\n>\n> Not having to babysit your servers every day: Priceless\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\n-- \nJohn E. Vincent\[email protected]\n\nMaybe from a postgresql perspective the cpus may be useless but the memory on the pSeries can't be beat. 
We've been looking at running our warehouse (PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1. \nWe've been thrilled with the performance of our DB2 systems that run on AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've got two 86GB p570s sitting there being under utilized.FYI,\nI've not seen my posts showing up on the list or the archives so I'm hoping this gets through.On 6/13/06, Jim C. Nasby <\[email protected]> wrote:On Tue, Jun 13, 2006 at 10:14:44PM +0200, PFC wrote:\n>> >Uhm... stick with commodity CPUs?>>       Hehe, does this include Opterons ?Absolutely. Heck, it wouldn't surprise me if a single model # of Opteronsold more than all Power CPUs put together...\n>       Still, I looked on the \"customize your server\" link someone posted>       and  it's amazing ; these things have become cheaper while I wasn't> looking...>       You can buy 10 of these boxes with raptors and 4 opteron cores and 8\n>       gigs  of RAM for the price of your average marketing boss's car...> definitely  makes you think doesn't it.And if you spend that much on CPU for a database, you're likely to bepretty sadly disappointed, depending on what you're doing.\n>       Juts wait until someone equates the price in man-hours to fix/run a> borken Dell box...Would probably sound like a Mastercard commercial...Not having to babysit your servers every day: Priceless\n--Jim C. Nasby, Sr. Engineering Consultant      [email protected] Software      http://pervasive.com    work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461---------------------------(end of broadcast)---------------------------TIP 9: In versions below \n8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not       match-- John E. Vincent\[email protected]", "msg_date": "Tue, 13 Jun 2006 17:40:58 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Tue, Jun 13, 2006 at 05:40:58PM -0400, John Vincent wrote:\n> Maybe from a postgresql perspective the cpus may be useless but the memory\n> on the pSeries can't be beat. We've been looking at running our warehouse\n> (PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1.\n \nProbably just because not many people have access to that kind of\nhardware. Have you tried building on Linux on Power?\n\nAlso, I believe Opterons can do up to 4 DIMMs per memory controller, so\nwith 2G sticks an 8 way Opteron could hit 64GB, which isn't exactly\nshabby, and I suspect it'd cost quite a bit less than a comperable\np570...\n\n> We've been thrilled with the performance of our DB2 systems that run on\n> AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've got\n> two 86GB p570s sitting there being under utilized.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 17:05:09 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On 6/13/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Tue, Jun 13, 2006 at 05:40:58PM -0400, John Vincent wrote:\n> > Maybe from a postgresql perspective the cpus may be useless but the\n> memory\n> > on the pSeries can't be beat. 
We've been looking at running our\n> warehouse\n> > (PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1.\n>\n> Probably just because not many people have access to that kind of\n> hardware. Have you tried building on Linux on Power?\n\n\nActually it's on my radar. I was looking for a precompiled build first (we\nactually checked the Pervasive and Bizgres sites first since we're\nconsidering a support contract) before going the self-compiled route. When I\ndidn't see a pre-compiled build available, I started looking at the\ndeveloper archives and got a little worried that I wouldn't want to base my\njob on a self-built Postgres on a fairly new (I'd consider Power 5 fairly\nnew) platform.\n\nAs it stands we're currently migrating to an IBM x445 (8 XPU Xeon, 16GB of\nmemory) that was our old DB2 production server.\n\nAlso, I believe Opterons can do up to 4 DIMMs per memory controller, so\n> with 2G sticks an 8 way Opteron could hit 64GB, which isn't exactly\n> shabby, and I suspect it'd cost quite a bit less than a comperable\n> p570...\n\n\nThis is true. In our case I couldn't get the approval for the new hardware\nsince we had two x445 boxes sitting there doing nothing (I wanted them for\nour VMware environment personally). Another sticking point is finding a\nvendor that will provide a hardware support contract similar to what we have\nwith our existing IBM hardware (24x7x4). Since IBM has f-all for Opteron\nbased systems and we've sworn off Dell, I was pretty limited. HP was able to\nget in on a pilot program and we're considering them now for future hardware\npurchases but beyond Dell/IBM/HP, there's not much else that can provide the\nkind of hardware support turn-around we need.\n\n> We've been thrilled with the performance of our DB2 systems that run on\n> > AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've\n> got\n> > two 86GB p570s sitting there being under utilized.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\n\n\n-- \nJohn E. Vincent\n\nOn 6/13/06, Jim C. Nasby <[email protected]> wrote:\nOn Tue, Jun 13, 2006 at 05:40:58PM -0400, John Vincent wrote:> Maybe from a postgresql perspective the cpus may be useless but the memory> on the pSeries can't be beat. We've been looking at running our warehouse\n> (PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1.Probably just because not many people have access to that kind ofhardware. Have you tried building on Linux on Power?\nActually it's on my radar. I was looking for a precompiled build first (we actually checked the Pervasive and Bizgres sites first since we're considering a support contract) before going the self-compiled route. When I didn't see a pre-compiled build available, I started looking at the developer archives and got a little worried that I wouldn't want to base my job on a self-built Postgres on a fairly new (I'd consider Power 5 fairly new) platform. \nAs it stands we're currently migrating to an IBM x445 (8 XPU Xeon, 16GB of memory) that was our old DB2 production server.\nAlso, I believe Opterons can do up to 4 DIMMs per memory controller, sowith 2G sticks an 8 way Opteron could hit 64GB, which isn't exactlyshabby, and I suspect it'd cost quite a bit less than a comperablep570...\nThis is true. 
In our case I couldn't get the approval for the new hardware since we had two x445 boxes sitting there doing nothing (I wanted them for our VMware environment personally). Another sticking point is finding a vendor that will provide a hardware support contract similar to what we have with our existing IBM hardware (24x7x4). Since IBM has f-all for Opteron based systems and we've sworn off Dell, I was pretty limited. HP was able to get in on a pilot program and we're considering them now for future hardware purchases but beyond Dell/IBM/HP, there's not much else that can provide the kind of hardware support turn-around we need.\n> We've been thrilled with the performance of our DB2 systems that run on\n> AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've got> two 86GB p570s sitting there being under utilized.--Jim C. Nasby, Sr. Engineering Consultant      \[email protected] Software      http://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf\n       cell: 512-569-9461-- John E. Vincent", "msg_date": "Tue, 13 Jun 2006 18:21:21 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "On Tue, Jun 13, 2006 at 06:21:21PM -0400, John Vincent wrote:\n> On 6/13/06, Jim C. Nasby <[email protected]> wrote:\n> >\n> >On Tue, Jun 13, 2006 at 05:40:58PM -0400, John Vincent wrote:\n> >> Maybe from a postgresql perspective the cpus may be useless but the\n> >memory\n> >> on the pSeries can't be beat. We've been looking at running our\n> >warehouse\n> >> (PGSQL) in a LoP lpar but I wasn't able to find a LoP build of 8.1.\n> >\n> >Probably just because not many people have access to that kind of\n> >hardware. Have you tried building on Linux on Power?\n> \n> \n> Actually it's on my radar. I was looking for a precompiled build first (we\n> actually checked the Pervasive and Bizgres sites first since we're\n> considering a support contract) before going the self-compiled route. When I\n> didn't see a pre-compiled build available, I started looking at the\n> developer archives and got a little worried that I wouldn't want to base my\n> job on a self-built Postgres on a fairly new (I'd consider Power 5 fairly\n> new) platform.\n \nWell, pre-compiled isn't going to make much of a difference\nstability-wise. What you will run into is that very few people are\nrunning PostgreSQL on your hardware, so it's possible you'd run into\nsome odd corner cases. I think it's pretty unlikely you'd lose data, but\nyou could end up with performance-related issues.\n\nIf you can, it'd be great to do some testing on that hardware to see if\nyou can break PostgreSQL.\n\n> This is true. In our case I couldn't get the approval for the new hardware\n> since we had two x445 boxes sitting there doing nothing (I wanted them for\n> our VMware environment personally). Another sticking point is finding a\n> vendor that will provide a hardware support contract similar to what we have\n> with our existing IBM hardware (24x7x4). Since IBM has f-all for Opteron\n> based systems and we've sworn off Dell, I was pretty limited. 
HP was able to\n> get in on a pilot program and we're considering them now for future hardware\n> purchases but beyond Dell/IBM/HP, there's not much else that can provide the\n> kind of hardware support turn-around we need.\n \nWhat about Sun?\n\n> >We've been thrilled with the performance of our DB2 systems that run on\n> >> AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've\n> >got\n> >> two 86GB p570s sitting there being under utilized.\n\nBTW, in a past life we moved a DB2 database off of Xeons and onto\nRS/6000s with Power4. The difference was astounding.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 17:45:23 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "> Well, pre-compiled isn't going to make much of a difference\n> stability-wise. What you will run into is that very few people are\n> running PostgreSQL on your hardware, so it's possible you'd run into\n> some odd corner cases. I think it's pretty unlikely you'd lose data, but\n> you could end up with performance-related issues.\n>\n> If you can, it'd be great to do some testing on that hardware to see if\n> you can break PostgreSQL.\n\n\nIt shouldn't be too hard to snag resources for an LPAR. In fact since it was\none of the things I was looking at testing (postgres/LoP or Postgres/AIX).\n\nI'll see what I can work out. If I can't get a CPU on the 570, we have a 520\nthat I should be able to use.\n\n> This is true. In our case I couldn't get the approval for the new hardware\n> > since we had two x445 boxes sitting there doing nothing (I wanted them\n> for\n> > our VMware environment personally). Another sticking point is finding a\n> > vendor that will provide a hardware support contract similar to what we\n> have\n> > with our existing IBM hardware (24x7x4). Since IBM has f-all for Opteron\n> > based systems and we've sworn off Dell, I was pretty limited. HP was\n> able to\n> > get in on a pilot program and we're considering them now for future\n> hardware\n> > purchases but beyond Dell/IBM/HP, there's not much else that can provide\n> the\n> > kind of hardware support turn-around we need.\n>\n> What about Sun?\n\n\nGood question. At the time, Sun was off again/on again with Linux. Quite\nhonestly I'm not sure where Sun is headed. I actually suggested the Sun\nhardware for our last project (a Windows-platformed package we needed) but\ncost-wise, they were just too much compared to the HP solution. HP has a\ncluster-in-a-box solution that runs about 10K depending on your VAR (2 DL380\nwith shared SCSI to an MSA500 - sounds like a perfect VMware solution).\n\n\n> >We've been thrilled with the performance of our DB2 systems that run on\n> > >> AIX/Power 5 but since the DB2 instance memory is limited to 18GB,\n> we've\n> > >got\n> > >> two 86GB p570s sitting there being under utilized.\n>\n> BTW, in a past life we moved a DB2 database off of Xeons and onto\n> RS/6000s with Power4. The difference was astounding.\n\n\n I'm amazed myself. My last experience with AIX before this was pre Power4.\nAIX 5.3 on Power 5 is a sight to behold. I'm still cursing our DBAs for not\nrealizing the 18GB instance memory thing though ;)\n\n--\n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\n\n\n-- \nJohn E. Vincent\[email protected]\n\nWell, pre-compiled isn't going to make much of a difference\nstability-wise. What you will run into is that very few people arerunning PostgreSQL on your hardware, so it's possible you'd run intosome odd corner cases. I think it's pretty unlikely you'd lose data, but\nyou could end up with performance-related issues.If you can, it'd be great to do some testing on that hardware to see ifyou can break PostgreSQL.It shouldn't be too hard to snag resources for an LPAR. In fact since it was one of the things I was looking at testing (postgres/LoP or Postgres/AIX).\nI'll see what I can work out. If I can't get a CPU on the 570, we have a 520 that I should be able to use.\n> This is true. In our case I couldn't get the approval for the new hardware> since we had two x445 boxes sitting there doing nothing (I wanted them for> our VMware environment personally). Another sticking point is finding a\n> vendor that will provide a hardware support contract similar to what we have> with our existing IBM hardware (24x7x4). Since IBM has f-all for Opteron> based systems and we've sworn off Dell, I was pretty limited. HP was able to\n> get in on a pilot program and we're considering them now for future hardware> purchases but beyond Dell/IBM/HP, there's not much else that can provide the> kind of hardware support turn-around we need.\nWhat about Sun?Good question. At the time, Sun was off again/on again with Linux. Quite honestly I'm not sure where Sun is headed. I actually suggested the Sun hardware for our last project (a Windows-platformed package we needed) but cost-wise, they were just too much compared to the HP solution. HP has a cluster-in-a-box solution that runs about 10K depending on your VAR (2 DL380 with shared SCSI to an MSA500 - sounds like a perfect VMware solution).\n> >We've been thrilled with the performance of our DB2 systems that run on\n> >> AIX/Power 5 but since the DB2 instance memory is limited to 18GB, we've> >got> >> two 86GB p570s sitting there being under utilized.BTW, in a past life we moved a DB2 database off of Xeons and onto\nRS/6000s with Power4. The difference was astounding. I'm amazed myself. My last experience with AIX before this was pre Power4. AIX 5.3 on Power 5 is a sight to behold. I'm still cursing our DBAs for not realizing the 18GB instance memory thing though ;)\n--Jim C. Nasby, Sr. Engineering Consultant      \[email protected] Software      http://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf\n       cell: 512-569-9461-- John E. [email protected]", "msg_date": "Tue, 13 Jun 2006 21:19:26 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 13, 2006 at 06:21:21PM -0400, John Vincent wrote:\n>> Actually it's on my radar. I was looking for a precompiled build first (we\n>> actually checked the Pervasive and Bizgres sites first since we're\n>> considering a support contract) before going the self-compiled route. 
When I\n>> didn't see a pre-compiled build available, I started looking at the\n>> developer archives and got a little worried that I wouldn't want to base my\n>> job on a self-built Postgres on a fairly new (I'd consider Power 5 fairly\n>> new) platform.\n \n> Well, pre-compiled isn't going to make much of a difference\n> stability-wise. What you will run into is that very few people are\n> running PostgreSQL on your hardware, so it's possible you'd run into\n> some odd corner cases.\n\nPower 5 is just a PPC64 platform isn't it? Red Hat's been building PG\nfor PPC64 for years, and I've not heard any problems reported. Now,\nif you're using a non-gcc compiler then maybe that track record doesn't\ncarry over to whatever you are using ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 22:24:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres " }, { "msg_contents": "Hi, Fzied,\n\[email protected] wrote:\n\n> I'm using httperf/autobench for measurments and the best result I can\n> get is that my system can handle a trafiic of almost 1600 New\n> con/sec.\n\nAre you using connection pooling or persistent connections between\nPostgreSQL and the Apaches?\n\nMaybe it simply is the network latency between the two machines - as the\ndatabase is read-only, did you think about having both PostgreSQL and\nApache on both machines, and then load-balancing ingoing http requests\nbetween them?\n\n> I cannot scale beyond that value and the funny thing, is that none of\n> the servers is swapping, or heavy loaded, neither postgres nor apache\n> are refusing connexions.\n\nAnd for measuring, are you really throwing parallel http connections to\nthe server? This sounds like you measure request latency, but the\nmaximum throughput might be much higher.\n\n> my database is only 58M it's a read only DB and will lasts only for a\n> month.\n\nI guess it is a simple table with a single PK (some subscription numer)\n- no joins or other things.\n\nFor this cases, a special non-RDBMS like MySQL, SQLite, or even some\nhancrafted thingy may give you better results.\n\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 21 Jun 2006 09:26:05 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" }, { "msg_contents": "Hi, Zydoon,\n\nZydoon wrote:\n\n> Now I'm trying to make my tests, and I'm not that sure I will make the\n> switch to the PSeries, since my dual xeon with 4 G RAM can handle 3500\n> concurrent postmasters consuming 3.7 G of the RAM. I cannot reach this\n> number on the PSeries with 2 G.\n\nThis sounds like you want to have one postgresql backend per apache\nfrontend.\n\nDid you try running pgpool on the Apache machine, and have only a few\n(hundred) connections to the backend?\n\nMaybe http://en.wikipedia.org/wiki/Memcached could be helpful, too.\n\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 21 Jun 2006 09:31:22 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling up postgres" } ]
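Following up on the persistent-connection and prepared-query suggestions in the thread above, here is a minimal sketch of what the per-request database work could look like once each pooled, long-lived connection prepares its lookup statement a single time. The exam_results table, its columns and the student id are hypothetical stand-ins, not the poster's real schema.

---------------------------- prepared lookup sketch - begin
-- Hypothetical read-only lookup: PREPARE once per (persistent) connection,
-- then EXECUTE for every incoming request, so parse/plan work is not repeated.
PREPARE get_result (integer) AS
    SELECT student_id, subject, score
    FROM exam_results
    WHERE student_id = $1;

EXECUTE get_result(140321);  -- one cheap indexed lookup per web request
---------------------------- prepared lookup sketch - end

With a pooler such as pgpool in front of the database, a few dozen such connections are typically enough to serve far more concurrent HTTP clients than a one-backend-per-Apache-child setup, which matches the 10-30 process suggestion made earlier in the thread.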
[ { "msg_contents": "On Sat, Jun 03, 2006 at 11:38:10AM +0100, [email protected] wrote:\n>> What version of PostgreSQL? (8.1 is better than 8.0 is much better than 7.4.)\n> 8.1.3 on RHEL 4\n\nOK, that sounds good.\n\n>> Have you remembered to turn HT off?\n> no !! what is it ?\n\nHT = Hyperthreading. It usually does more harm than good on databases\n(although others have reported differing results).\n\n>> Have you considered Opterons instead of Xeons? (The Xeons generally\n>> scale bad with PostgreSQL.) \n> No can't have it\n\nOK, that's bad -- it's probably the single thing that could have helped\nperformance the most.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 3 Jun 2006 12:52:23 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: =?utf-8?B?UsOpcG9uZHJl?=" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that psql's unformatted output uses too much\nmemory. Is it normal? It seems that psql draws all records\nof a query off the server before it displays or writes the output.\nI would expect this only with formatted output.\n\nProblem is, I have an export that produces 500'000+ records\nwhich changes frequently. Several (20+) sites run this query\nnightly with different parameters and download it. The SELECTs\nthat run in psql -A -t -c '...' may overlap and the query that runs\nin less than 1.5 minutes if it's the only one at the time may take\n3+ hours if ten such queries overlap. The time is mostly spent\nin swapping, all psql processes take up 300+ MB, so the 1GB\nserver is brought to its knees quickly, peek swap usage is 1.8 GB.\nI watched the progress in top and the postmaster processes finished\ntheir work in about half an hour (that would still be acceptable)\nthen the psql processes started eating up memory as they read\nthe records.\n\nPostgreSQL 8.1.4 was used on RHEL3.\n\nIs there a way to convince psql to use less memory in unformatted\nmode? I know COPY will be able to use arbitrary SELECTs\nbut until then I am still stuck with redirecting psql's output.\n\nBest regards,\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\n", "msg_date": "Mon, 05 Jun 2006 00:01:24 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "psql -A (unaligned format) eats too much memory" }, { "msg_contents": "Hi,\n\nanswering to myself. :-)\n\nZoltan Boszormenyi ďż˝rta:\n> Hi,\n>\n> I just noticed that psql's unformatted output uses too much\n> memory. Is it normal? It seems that psql draws all records\n> of a query off the server before it displays or writes the output.\n> I would expect this only with formatted output.\n>\n> Problem is, I have an export that produces 500'000+ records\n> which changes frequently. Several (20+) sites run this query\n> nightly with different parameters and download it. The SELECTs\n> that run in psql -A -t -c '...' may overlap and the query that runs\n> in less than 1.5 minutes if it's the only one at the time may take\n> 3+ hours if ten such queries overlap. The time is mostly spent\n> in swapping, all psql processes take up 300+ MB, so the 1GB\n> server is brought to its knees quickly, peek swap usage is 1.8 GB.\n> I watched the progress in top and the postmaster processes finished\n> their work in about half an hour (that would still be acceptable)\n> then the psql processes started eating up memory as they read\n> the records.\n>\n> PostgreSQL 8.1.4 was used on RHEL3.\n>\n> Is there a way to convince psql to use less memory in unformatted\n> mode? I know COPY will be able to use arbitrary SELECTs\n> but until then I am still stuck with redirecting psql's output.\n\nThe answer it to use SELECT INTO TEMP and then COPY.\nPsql will use much less memory that way. But still...\n\nBest regards,\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\n", "msg_date": "Mon, 05 Jun 2006 00:32:38 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql -A (unaligned format) eats too much memory" }, { "msg_contents": "Moving to -hackers\n\nOn Mon, Jun 05, 2006 at 12:32:38AM +0200, Zoltan Boszormenyi wrote:\n> >I just noticed that psql's unformatted output uses too much\n> >memory. Is it normal? 
It seems that psql draws all records\n> >of a query off the server before it displays or writes the output.\n> >I would expect this only with formatted output.\n> >\n> >Problem is, I have an export that produces 500'000+ records\n> >which changes frequently. Several (20+) sites run this query\n> >nightly with different parameters and download it. The SELECTs\n> >that run in psql -A -t -c '...' may overlap and the query that runs\n> >in less than 1.5 minutes if it's the only one at the time may take\n> >3+ hours if ten such queries overlap. The time is mostly spent\n> >in swapping, all psql processes take up 300+ MB, so the 1GB\n> >server is brought to its knees quickly, peek swap usage is 1.8 GB.\n> >I watched the progress in top and the postmaster processes finished\n> >their work in about half an hour (that would still be acceptable)\n> >then the psql processes started eating up memory as they read\n> >the records.\n> >\n> >PostgreSQL 8.1.4 was used on RHEL3.\n> >\n> >Is there a way to convince psql to use less memory in unformatted\n> >mode? I know COPY will be able to use arbitrary SELECTs\n> >but until then I am still stuck with redirecting psql's output.\n> \n> The answer it to use SELECT INTO TEMP and then COPY.\n> Psql will use much less memory that way. But still...\n\nI've been able to verify this on 8.1.4; psql -A -t -c 'SELECT * FROM\nlargetable' > /dev/null results in psql consuming vast quantities of\nmemory. Why is this? ISTM this is a bug...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 10:22:59 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much memory" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I've been able to verify this on 8.1.4; psql -A -t -c 'SELECT * FROM\n> largetable' > /dev/null results in psql consuming vast quantities of\n> memory. Why is this?\n\nIs it different without the -A?\n\nI'm reading this as just another uninformed complaint about libpq's\nhabit of buffering the whole query result. It's possible that there's\na memory leak in the -A path specifically, but nothing said so far\nprovided any evidence for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2006 11:27:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much memory " }, { "msg_contents": "On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > I've been able to verify this on 8.1.4; psql -A -t -c 'SELECT * FROM\n> > largetable' > /dev/null results in psql consuming vast quantities of\n> > memory. Why is this?\n> \n> Is it different without the -A?\n\nNope.\n\n> I'm reading this as just another uninformed complaint about libpq's\n> habit of buffering the whole query result. It's possible that there's\n> a memory leak in the -A path specifically, but nothing said so far\n> provided any evidence for that.\n\nCertainly seems like it. It seems like it would be good to allow for\nlibpq not to buffer, since there's cases where it's not needed...\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 10:53:49 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much memory" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n>> I'm reading this as just another uninformed complaint about libpq's\n>> habit of buffering the whole query result. It's possible that there's\n>> a memory leak in the -A path specifically, but nothing said so far\n>> provided any evidence for that.\n\n> Certainly seems like it. It seems like it would be good to allow for\n> libpq not to buffer, since there's cases where it's not needed...\n\nSee past discussions. The problem is that libpq's API says that when it\nhands you back the completed query result, the command is complete and\nguaranteed not to fail later. A streaming interface could not make that\nguarantee, so it's not a transparent substitution.\n\nI wouldn't have any strong objection to providing a separate API that\noperates in a streaming fashion, but defining it is something no one's\nbothered to do yet. In practice, if you have to code to a variant API,\nit's not that much more trouble to use a cursor...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2006 12:00:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much memory " }, { "msg_contents": "Mark Woodward wrote:\n>> \"Jim C. Nasby\" <[email protected]> writes:\n>> \n>>> On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n>>> \n>>>> I'm reading this as just another uninformed complaint about libpq's\n>>>> habit of buffering the whole query result. It's possible that there's\n>>>> a memory leak in the -A path specifically, but nothing said so far\n>>>> provided any evidence for that.\n>>>> \n>>> Certainly seems like it. It seems like it would be good to allow for\n>>> libpq not to buffer, since there's cases where it's not needed...\n>>> \n>> See past discussions. The problem is that libpq's API says that when it\n>> hands you back the completed query result, the command is complete and\n>> guaranteed not to fail later. A streaming interface could not make that\n>> guarantee, so it's not a transparent substitution.\n>>\n>> I wouldn't have any strong objection to providing a separate API that\n>> operates in a streaming fashion, but defining it is something no one's\n>> bothered to do yet. In practice, if you have to code to a variant API,\n>> it's not that much more trouble to use a cursor...\n>>\n>> \n>\n> Wouldn't the \"COPY (select ...) TO STDOUT\" format being discussed solve\n> this for free?\n>\n>\n> \n\nIt won't solve it in the general case for clients that expect a result \nset. ISTM that \"use a cursor\" is a perfectly reasonable answer, though.\n\ncheers\n\nandrew\n\n", "msg_date": "Mon, 05 Jun 2006 12:40:38 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "Hi!\n\nTom Lane ďż˝rta:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> \n>> I've been able to verify this on 8.1.4; psql -A -t -c 'SELECT * FROM\n>> largetable' > /dev/null results in psql consuming vast quantities of\n>> memory. 
Why is this?\n>> \n>\n> Is it different without the -A?\n>\n> I'm reading this as just another uninformed complaint about libpq's\n> habit of buffering the whole query result. It's possible that there's\n> a memory leak in the -A path specifically, but nothing said so far\n> provided any evidence for that.\n>\n> \t\t\tregards, tom lane\n> \n\nSo, is libpq always buffering the result? Thanks.\nI thought psql buffers only because in its formatted output mode\nit has to know the widest value for all the columns.\n\nThen the SELECT INTO TEMP ; COPY TO STDOUT solution\nI found is _the_ solution.\n\nI guess then the libpq-based ODBC driver suffers\nfrom the same problem? It certainly explains the\nperformance problems I observed: the server\nfinishes the query, the ODBC driver (or libpq underneath)\nfetches all the records and the application receives\nthe first record after all these. Nice.\n\nBest regards,\nZoltán Böszörményi\n\n", "msg_date": "Mon, 05 Jun 2006 18:45:13 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "> \"Jim C. Nasby\" <[email protected]> writes:\n>> On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n>>> I'm reading this as just another uninformed complaint about libpq's\n>>> habit of buffering the whole query result. It's possible that there's\n>>> a memory leak in the -A path specifically, but nothing said so far\n>>> provided any evidence for that.\n>\n>> Certainly seems like it. It seems like it would be good to allow for\n>> libpq not to buffer, since there's cases where it's not needed...\n>\n> See past discussions. The problem is that libpq's API says that when it\n> hands you back the completed query result, the command is complete and\n> guaranteed not to fail later. A streaming interface could not make that\n> guarantee, so it's not a transparent substitution.\n>\n> I wouldn't have any strong objection to providing a separate API that\n> operates in a streaming fashion, but defining it is something no one's\n> bothered to do yet. In practice, if you have to code to a variant API,\n> it's not that much more trouble to use a cursor...\n>\n\nWouldn't the \"COPY (select ...) TO STDOUT\" format being discussed solve\nthis for free?\n", "msg_date": "Mon, 5 Jun 2006 12:48:12 -0400 (EDT)", "msg_from": "\"Mark Woodward\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much " }, { "msg_contents": "> Mark Woodward wrote:\n>>> \"Jim C. Nasby\" <[email protected]> writes:\n>>>\n>>>> On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n>>>>\n>>>>> I'm reading this as just another uninformed complaint about libpq's\n>>>>> habit of buffering the whole query result. It's possible that\n>>>>> there's\n>>>>> a memory leak in the -A path specifically, but nothing said so far\n>>>>> provided any evidence for that.\n>>>>>\n>>>> Certainly seems like it. It seems like it would be good to allow for\n>>>> libpq not to buffer, since there's cases where it's not needed...\n>>>>\n>>> See past discussions. The problem is that libpq's API says that when\n>>> it\n>>> hands you back the completed query result, the command is complete and\n>>> guaranteed not to fail later. 
A streaming interface could not make\n>>> that\n>>> guarantee, so it's not a transparent substitution.\n>>>\n>>> I wouldn't have any strong objection to providing a separate API that\n>>> operates in a streaming fashion, but defining it is something no one's\n>>> bothered to do yet. In practice, if you have to code to a variant API,\n>>> it's not that much more trouble to use a cursor...\n>>>\n>>>\n>>\n>> Wouldn't the \"COPY (select ...) TO STDOUT\" format being discussed solve\n>> this for free?\n>>\n>>\n>>\n>\n> It won't solve it in the general case for clients that expect a result\n> set. ISTM that \"use a cursor\" is a perfectly reasonable answer, though.\n\nI'm not sure I agree -- surprise!\n\npsql is often used as a command line tool and using a cursor is not\nacceptable.\n\nGranted, with an unaligned output, perhaps psql should not buffer the\nWHOLE result at once, but without rewriting that behavior, a COPY from\nquery may be close enough.\n", "msg_date": "Mon, 5 Jun 2006 13:01:45 -0400 (EDT)", "msg_from": "\"Mark Woodward\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "Mark Woodward wrote:\n>>>>\n>>>>\n>>>> \n>>> Wouldn't the \"COPY (select ...) TO STDOUT\" format being discussed solve\n>>> this for free?\n>>>\n>>>\n>>>\n>>> \n>> It won't solve it in the general case for clients that expect a result\n>> set. ISTM that \"use a cursor\" is a perfectly reasonable answer, though.\n>> \n>\n> I'm not sure I agree -- surprise!\n>\n> psql is often used as a command line tool and using a cursor is not\n> acceptable.\n>\n> Granted, with an unaligned output, perhaps psql should not buffer the\n> WHOLE result at once, but without rewriting that behavior, a COPY from\n> query may be close enough.\n>\n> \n\nYou have missed my point. Surprise!\n\nI didn't say it wasn't OK in the psql case, I said it wasn't helpful in \nthe case of *other* libpq clients.\n\nExpecting clients generally to split and interpret COPY output is not \nreasonable, but if they want large result sets they should use a cursor.\n\ncheers\n\nandrew\n\n", "msg_date": "Mon, 05 Jun 2006 13:03:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "Andrew Dunstan �rta:\n> Mark Woodward wrote:\n>>> \"Jim C. Nasby\" <[email protected]> writes:\n>>> \n>>>> On Mon, Jun 05, 2006 at 11:27:30AM -0400, Tom Lane wrote:\n>>>> \n>>>>> I'm reading this as just another uninformed complaint about libpq's\n>>>>> habit of buffering the whole query result. It's possible that \n>>>>> there's\n>>>>> a memory leak in the -A path specifically, but nothing said so far\n>>>>> provided any evidence for that.\n>>>>> \n>>>> Certainly seems like it. It seems like it would be good to allow for\n>>>> libpq not to buffer, since there's cases where it's not needed...\n>>>> \n>>> See past discussions. The problem is that libpq's API says that \n>>> when it\n>>> hands you back the completed query result, the command is complete and\n>>> guaranteed not to fail later. A streaming interface could not make \n>>> that\n>>> guarantee, so it's not a transparent substitution.\n>>>\n>>> I wouldn't have any strong objection to providing a separate API that\n>>> operates in a streaming fashion, but defining it is something no one's\n>>> bothered to do yet. 
In practice, if you have to code to a variant API,\n>>> it's not that much more trouble to use a cursor...\n>>>\n>>> \n>>\n>> Wouldn't the \"COPY (select ...) TO STDOUT\" format being discussed solve\n>> this for free? \n\nYes, it would for me.\n \n> It won't solve it in the general case for clients that expect a result \n> set. ISTM that \"use a cursor\" is a perfectly reasonable answer, though.\n\nThe general case cannot be applied for all particular cases.\nE.g. you cannot use cursors from shell scripts and just for\nproducing an \"export file\" it's not too reasonable either.\nRedirecting psql's output or COPY is enough.\n\nBest regards,\nZoltán Böszörményi\n\n", "msg_date": "Mon, 05 Jun 2006 19:17:31 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "On Mon, 2006-06-05 at 19:17 +0200, Zoltan Boszormenyi wrote:\n> The general case cannot be applied for all particular cases.\n> E.g. you cannot use cursors from shell scripts\n\nThis could be fixed by adding an option to psql to transparently produce\nSELECT result sets via a cursor.\n\n-Neil\n\n\n", "msg_date": "Mon, 05 Jun 2006 10:52:25 -0700", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> On Mon, 2006-06-05 at 19:17 +0200, Zoltan Boszormenyi wrote:\n>> The general case cannot be applied for all particular cases.\n>> E.g. you cannot use cursors from shell scripts\n\n> This could be fixed by adding an option to psql to transparently produce\n> SELECT result sets via a cursor.\n\nNote of course that such a thing would push the incomplete-result\nproblem further upstream. 
For instance in (hypothetical --cursor\n> switch)\n> \tpsql --cursor -c \"select ...\" | myprogram\n> there would be no very good way for myprogram to find out that it'd\n> been sent an incomplete result due to error partway through the SELECT.\n\nwould it not learn about it at the point of error ?\n\neven without --cursor there is still no very good way to find out when\nsomething else goes wrong, like the result inside libpq taking up all\nmemory and so psql runs out of memory on formatting some longer lines.\n\n\n-- \n----------------\nHannu Krosing\nDatabase Architect\nSkype Technologies OÜ\nAkadeemia tee 21 F, Tallinn, 12618, Estonia\n\nSkype me: callto:hkrosing\nGet Skype for free: http://www.skype.com\n\n\n", "msg_date": "Tue, 06 Jun 2006 15:30:12 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Ühel kenal päeval, E, 2006-06-05 kell 14:10, kirjutas Tom Lane:\n>> Note of course that such a thing would push the incomplete-result\n>> problem further upstream. For instance in (hypothetical --cursor\n>> switch)\n>>\tpsql --cursor -c \"select ...\" | myprogram\n>> there would be no very good way for myprogram to find out that it'd\n>> been sent an incomplete result due to error partway through the SELECT.\n\n> would it not learn about it at the point of error ?\n\nNo, it would merely see EOF after some number of result rows. (I'm\nassuming you're also using -A -t so that the output is unadorned.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 09:48:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much " }, { "msg_contents": "On Tue, Jun 06, 2006 at 09:48:43AM -0400, Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > Ühel kenal päeval, E, 2006-06-05 kell 14:10, kirjutas Tom Lane:\n> >> Note of course that such a thing would push the incomplete-result\n> >> problem further upstream. For instance in (hypothetical --cursor\n> >> switch)\n> >>\tpsql --cursor -c \"select ...\" | myprogram\n> >> there would be no very good way for myprogram to find out that it'd\n> >> been sent an incomplete result due to error partway through the SELECT.\n> \n> > would it not learn about it at the point of error ?\n> \n> No, it would merely see EOF after some number of result rows. (I'm\n> assuming you're also using -A -t so that the output is unadorned.)\n\nSo if an error occurs partway through reading a cursor, no error message\nis generated? That certainly sounds like a bug to me...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 6 Jun 2006 09:39:19 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 06, 2006 at 09:48:43AM -0400, Tom Lane wrote:\n>>> psql --cursor -c \"select ...\" | myprogram\n>>> there would be no very good way for myprogram to find out that it'd\n>>> been sent an incomplete result due to error partway through the SELECT.\n\n> So if an error occurs partway through reading a cursor, no error message\n> is generated? 
That certainly sounds like a bug to me...\n\nSure an error is generated. But it goes to stderr. The guy at the\ndownstream end of the stdout pipe cannot see either the error message,\nor the nonzero status that psql will (hopefully) exit with.\n\nYou can theoretically deal with this by having the shell script calling\nthis combination check psql exit status and discard the results of\nmyprogram on failure, but it's not easy or simple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 10:47:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] psql -A (unaligned format) eats too much " } ]
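To make the two workarounds discussed in this thread concrete, here is roughly what they look like as a psql script; the table name and query are placeholders, the fetch size is arbitrary, and this is only a sketch of the pattern rather than a drop-in export job.

-- Variant 1: stream the result through a cursor so neither libpq nor the
-- backend client has to hold the whole result set at once.
BEGIN;
DECLARE big_export NO SCROLL CURSOR FOR
    SELECT * FROM largetable;            -- placeholder query
FETCH FORWARD 10000 FROM big_export;     -- repeat until no rows come back
CLOSE big_export;
COMMIT;

-- Variant 2: materialize once, then let COPY stream the rows out.
BEGIN;
CREATE TEMP TABLE export_tmp AS
    SELECT * FROM largetable;            -- placeholder query
COPY export_tmp TO STDOUT;
COMMIT;

Variant 1 needs a driving script (or the hypothetical --cursor switch discussed above) to keep issuing FETCH, and the caller still has to check psql's exit status to catch an error partway through. Variant 2 can run as a plain psql -f script, since psql handles COPY output row by row instead of collecting it into a single PGresult.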
[ { "msg_contents": "Hello,\n\nI've noticed some posts on hanging queries but haven't seen any\nsolutions yet so far.\n\nOur problem is that about a week and a half ago we started to get some\nqueries that would (seemingly) never return (e.g., normally run in a\ncouple minutes, but after 2.5 hours, they were still running, the\nprocess pushing the processor up to 99.9% active).\n\nWe are running Postgres 8.1.1 on Redhat 7.3 using Dell poweredge quad\nprocessor boxes with 4 GB of memory. We have a main database that is\nreplicated via Sloney to a identical system.\n\nThings we've tried so far:\n\nWe've stopped and restarted postgres and that didn't seem to help, we've\nrebuilt all the indexes and that didn't seem to help either. We've\nstopped replication between the boxes and that didn't do anything. \nWe've tried the queries on both the production and the replicated box,\nand there is no difference in the queries (or query plans)\n\nWe do have another identical system that is a backup box (same type of\nbox, Postgres 8.1.1, Redhat 7.3, etc), and there, the query does\ncomplete executing in a short time. We loaded up a current copy of the\nproduction database and it still responded quickly.\n\nGenerally these queries, although not complicated, are on the more\ncomplex side of our application. Second, they have been running up\nuntil a few weeks ago.\n\nAttached are an example query plan: Query.sql\nThe query plan from our production sever: QueryPlanBroke.txt\nThe working query plan from our backup server: QueryPlanWork.txt\n\nWhat we found that has worked so far is to remove all the outer joins,\nput the results into a temp table and then left join from the temp table\nto get our results. Certainly this isn't a solution, but rather\nsomething we have resorted to in a place or to as we limp along.\n\n\nAny help would be greatly appreciated.\n\nThanks,\nChris Beecroft", "msg_date": "Mon, 05 Jun 2006 12:05:08 -0700", "msg_from": "Chris Beecroft <[email protected]>", "msg_from_op": true, "msg_subject": "Some queries starting to hang" }, { "msg_contents": "On Mon, Jun 05, 2006 at 12:05:08PM -0700, Chris Beecroft wrote:\n> Our problem is that about a week and a half ago we started to get some\n> queries that would (seemingly) never return (e.g., normally run in a\n> couple minutes, but after 2.5 hours, they were still running, the\n> process pushing the processor up to 99.9% active).\n\nAre there any locks preventing the query from completing? I can't\nrecall how you check in 7.3, but if nothing else, you can check with\nps for something WAITING.\n\nA\n\n\n\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n", "msg_date": "Mon, 5 Jun 2006 15:42:52 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Chris Beecroft <[email protected]> writes:\n> Our problem is that about a week and a half ago we started to get some\n> queries that would (seemingly) never return (e.g., normally run in a\n> couple minutes, but after 2.5 hours, they were still running, the\n> process pushing the processor up to 99.9% active).\n\n> Attached are an example query plan: Query.sql\n> The query plan from our production sever: QueryPlanBroke.txt\n> The working query plan from our backup server: QueryPlanWork.txt\n\nNote the major difference in estimated row counts. 
That's the key to\nyour problem... you need to find out why the \"broke\" case thinks only\none row is getting selected.\n\nbroke:\n> -> Nested Loop (cost=30150.77..129334.04 rows=1 width=305)\n\nwork:\n> -> Hash Join (cost=30904.77..125395.89 rows=1810 width=306)\n\nI'm wondering about out-of-date or nonexistent ANALYZE stats, missing\ncustom adjustments of statistics target settings, etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2006 16:07:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:\n> \n> broke:\n> > -> Nested Loop (cost=30150.77..129334.04 rows=1 width=305)\n> \n> work:\n> > -> Hash Join (cost=30904.77..125395.89 rows=1810 width=306)\n> \n> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing\n> custom adjustments of statistics target settings, etc.\n\nBut even the nested loop shouldn't be a \"never returns\" case, should\nit? For 1800 rows?\n\n(I've _had_ bad plans that picked nestloop, for sure, but they're\nusually for tens of thousands of rows when they take forever).\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nUsers never remark, \"Wow, this software may be buggy and hard \nto use, but at least there is a lot of code underneath.\"\n\t\t--Damien Katz\n", "msg_date": "Mon, 5 Jun 2006 16:38:51 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Thanks Tom,\n\nI knew you would come through again!\n\nQuery is now returning with results on our replicated database. Will\nvacuum analyze production now. So it seems to have done the trick. Now\nthe question is has our auto vacuum failed or was not set up properly...\nA question for my IT people.\n\nThanks once again,\nChris Beecroft\n\nOn Mon, 2006-06-05 at 13:07, Tom Lane wrote:\n> Chris Beecroft <[email protected]> writes:\n> > Our problem is that about a week and a half ago we started to get some\n> > queries that would (seemingly) never return (e.g., normally run in a\n> > couple minutes, but after 2.5 hours, they were still running, the\n> > process pushing the processor up to 99.9% active).\n> \n> > Attached are an example query plan: Query.sql\n> > The query plan from our production sever: QueryPlanBroke.txt\n> > The working query plan from our backup server: QueryPlanWork.txt\n> \n> Note the major difference in estimated row counts. That's the key to\n> your problem... you need to find out why the \"broke\" case thinks only\n> one row is getting selected.\n> \n> broke:\n> > -> Nested Loop (cost=30150.77..129334.04 rows=1 width=305)\n> \n> work:\n> > -> Hash Join (cost=30904.77..125395.89 rows=1810 width=306)\n> \n> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing\n> custom adjustments of statistics target settings, etc.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Mon, 05 Jun 2006 13:39:38 -0700", "msg_from": "Chris Beecroft <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:\n>> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing\n>> custom adjustments of statistics target settings, etc.\n\n> But even the nested loop shouldn't be a \"never returns\" case, should\n> it? For 1800 rows?\n\nWell, it's a big query. 
If it ought to take a second or two, and\ninstead is taking an hour or two (1800 times the expected runtime), that\nmight be close enough to \"never\" to exhaust Chris' patience. Besides,\nwe don't know whether the 1800 might itself be an underestimate (too bad\nChris didn't provide EXPLAIN ANALYZE results). The hash plan will scale\nto larger numbers of rows much more gracefully than the nestloop ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2006 17:06:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Mon, 2006-06-05 at 14:06, Tom Lane wrote:\n> Andrew Sullivan <[email protected]> writes:\n> > On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:\n> >> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing\n> >> custom adjustments of statistics target settings, etc.\n> \n> > But even the nested loop shouldn't be a \"never returns\" case, should\n> > it? For 1800 rows?\n> \n> Well, it's a big query. If it ought to take a second or two, and\n> instead is taking an hour or two (1800 times the expected runtime), that\n> might be close enough to \"never\" to exhaust Chris' patience. Besides,\n> we don't know whether the 1800 might itself be an underestimate (too bad\n> Chris didn't provide EXPLAIN ANALYZE results). The hash plan will scale\n> to larger numbers of rows much more gracefully than the nestloop ...\n> \n> \t\t\tregards, tom lane\n\nHello,\n\nIf anyone is curious, I've attached an explain analyze from the now\nworking replicated database. Explain analyze did not seem return on the\n'broken' database (or at least, when we originally tried to test these,\ndid not return after an hour and a half, which enough time to head right\npast patient into crabby...)\n\nChris", "msg_date": "Mon, 05 Jun 2006 15:22:00 -0700", "msg_from": "Chris Beecroft <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:\n> Andrew Sullivan <[email protected]> writes:\n> > On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:\n> >> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing\n> >> custom adjustments of statistics target settings, etc.\n> \n> > But even the nested loop shouldn't be a \"never returns\" case, should\n> > it? For 1800 rows?\n> \n> Well, it's a big query. If it ought to take a second or two, and\n> instead is taking an hour or two (1800 times the expected runtime), that\n> might be close enough to \"never\" to exhaust Chris' patience. Besides,\n> we don't know whether the 1800 might itself be an underestimate (too bad\n> Chris didn't provide EXPLAIN ANALYZE results). \n\nThis is a good example of a case where the inefficiency of EXPLAIN\nANALYZE would be a contributory factor to it not actually being\navailable for diagnosing a problem.\n\nMaybe we need something even more drastic than recent proposed changes\nto EXPLAIN ANALYZE?\n\nPerhaps we could annotate the query tree with individual limits. That\nway a node that was expecting to deal with 1 row would simply stop\nexecuting the EXPLAIN ANALYZE when it hit N times as many rows\n(default=no limit). That way, we would still be able to see a bad plan\neven without waiting for the whole query to execute - just stop at a\npoint where the plan is far enough off track. 
That would give us what we\nneed: pinpoint exactly which part of the plan is off-track and see how\nfar off track it is. If the limits were configurable, we'd be able to\nopt for faster-but-less-accurate or slower-yet-100% accuracy behaviour.\nWe wouldn't need to worry about timing overhead either then.\n\ne.g. EXPLAIN ANALYZE ERRLIMIT 10 SELECT ...\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 06 Jun 2006 15:39:01 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:\n>> Well, it's a big query. If it ought to take a second or two, and\n>> instead is taking an hour or two (1800 times the expected runtime), that\n>> might be close enough to \"never\" to exhaust Chris' patience. Besides,\n>> we don't know whether the 1800 might itself be an underestimate (too bad\n>> Chris didn't provide EXPLAIN ANALYZE results). \n\n> This is a good example of a case where the inefficiency of EXPLAIN\n> ANALYZE would be a contributory factor to it not actually being\n> available for diagnosing a problem.\n\nHuh? The problem is the inefficiency of the underlying query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 10:43:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Tue, 2006-06-06 at 10:43 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:\n> >> Well, it's a big query. If it ought to take a second or two, and\n> >> instead is taking an hour or two (1800 times the expected runtime), that\n> >> might be close enough to \"never\" to exhaust Chris' patience. Besides,\n> >> we don't know whether the 1800 might itself be an underestimate (too bad\n> >> Chris didn't provide EXPLAIN ANALYZE results). \n> \n> > This is a good example of a case where the inefficiency of EXPLAIN\n> > ANALYZE would be a contributory factor to it not actually being\n> > available for diagnosing a problem.\n> \n> Huh? The problem is the inefficiency of the underlying query.\n\nOf course that was the main problem from the OP.\n\nYou mentioned it would be good if the OP had delivered an EXPLAIN\nANALYZE; I agree(d). The lack of EXPLAIN ANALYZE is frequently because\nyou can't get them to run to completion - more so when the query you\nwish to analyze doesn't appear to complete either. \n\nThe idea I just had was: why do we need EXPLAIN ANALYZE to run to\ncompletion? In severe cases like this thread, we might be able to\ndiscover the root cause by a *partial* execution of the plan, as long as\nit was properly instrumented. That way, the OP might have been able to\ndiscover the root cause himself...\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 06 Jun 2006 15:52:56 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Mon, Jun 05, 2006 at 01:39:38PM -0700, Chris Beecroft wrote:\n> Thanks Tom,\n> \n> I knew you would come through again!\n> \n> Query is now returning with results on our replicated database. Will\n> vacuum analyze production now. So it seems to have done the trick. 
Now\n> the question is has our auto vacuum failed or was not set up properly...\n> A question for my IT people.\n\nYou should almost certainly be running the autovacuum that's built in\nnow. If you enable vacuum_cost_delay you should be able to make it so\nthat vacuum's impact on production is minimal. The other thing you'll\nwant to do is cut all the vacuum threshold and scale settings in half\n(the defaults are very conservative).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 6 Jun 2006 10:03:26 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> You mentioned it would be good if the OP had delivered an EXPLAIN\n> ANALYZE; I agree(d). The lack of EXPLAIN ANALYZE is frequently because\n> you can't get them to run to completion - more so when the query you\n> wish to analyze doesn't appear to complete either. \n\nWell, he could have shown EXPLAIN ANALYZE for the server that was\nmanaging to run the query in a reasonable amount of time.\n\n> The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> completion? In severe cases like this thread, we might be able to\n> discover the root cause by a *partial* execution of the plan, as long as\n> it was properly instrumented. That way, the OP might have been able to\n> discover the root cause himself...\n\nI don't think that helps, as it just replaces one uncertainty by\nanother: how far did the EXPLAIN really get towards completion of the\nplan? You still don't have any hard data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 11:06:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:\n> > The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> > completion? In severe cases like this thread, we might be able to\n> > discover the root cause by a *partial* execution of the plan, as long as\n> > it was properly instrumented. That way, the OP might have been able to\n> > discover the root cause himself...\n> \n> I don't think that helps, as it just replaces one uncertainty by\n> another: how far did the EXPLAIN really get towards completion of the\n> plan? You still don't have any hard data.\n\nDoes that really matter, though? The point is to find the node where the\nestimate proved to be fantasy. It might even make sense to highlight\nthat node in the output, so that users don't have to wade through a sea\nof numbers to find it.\n\nIf it is important to report how far along the query got, it seems that\ncould always be added to the explain output.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 6 Jun 2006 10:14:17 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Explain analyze could at least put an asterisk around actual time that\ndeviated by some factor from the estimated time.\n\nOn Tue, June 6, 2006 10:39 am, Simon Riggs wrote:\n\n>\n> This is a good example of a case where the inefficiency of EXPLAIN\n> ANALYZE would be a contributory factor to it not actually being\n> available for diagnosing a problem.\n>\n> Maybe we need something even more drastic than recent proposed changes\n> to EXPLAIN ANALYZE?\n>\n> Perhaps we could annotate the query tree with individual limits. That\n> way a node that was expecting to deal with 1 row would simply stop\n> executing the EXPLAIN ANALYZE when it hit N times as many rows (default=no\n> limit). That way, we would still be able to see a bad plan even without\n> waiting for the whole query to execute - just stop at a point where the\n> plan is far enough off track. That would give us what we need: pinpoint\n> exactly which part of the plan is off-track and see how far off track it\n> is. If the limits were configurable, we'd be able to opt for\n> faster-but-less-accurate or slower-yet-100% accuracy behaviour. We\n> wouldn't need to worry about timing overhead either then.\n>\n> e.g. EXPLAIN ANALYZE ERRLIMIT 10 SELECT ...\n>\n> --\n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n\n\n", "msg_date": "Tue, 6 Jun 2006 11:21:15 -0400 (EDT)", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:\n> > it was properly instrumented. That way, the OP might have been able to\n> > discover the root cause himself...\n> \n> I don't think that helps, as it just replaces one uncertainty by\n> another: how far did the EXPLAIN really get towards completion of the\n> plan? You still don't have any hard data.\n\nWell, you _might_ get something useful, if you're trying to work on a\nmaladjusted production system, because you get to the part that trips\nthe limit, and then you know, \"Well, I gotta fix it that far,\nanyway.\"\n\nOften, when you're in real trouble, you can't or don't wait for the\nfull plan to come back from EXPLAIN ANALYSE, because a manager is\nhelpfully standing over your shoulder asking whether you're there\nyet. Being able to say, \"Aha, we have the first symptom,\" might be\nhelpful to users. Because the impatient simply won't wait for the\nfull report to come back, and therefore they'll end up flying blind\ninstead. (Note that \"the impatient\" is not always the person logged\nin and executing the commands.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Tue, 6 Jun 2006 11:29:49 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n\n> The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> completion? 
In severe cases like this thread, we might be able to\n> discover the root cause by a *partial* execution of the plan, as long as\n> it was properly instrumented. That way, the OP might have been able to\n> discover the root cause himself...\n\nAn alternate approach would be to implement a SIGINFO handler that prints out\nthe explain analyze output for the data built up so far. You would be able to\nkeep hitting C-t and keep getting updates until the query completes or you\ndecided to hit C-c.\n\nI'm not sure how easy this would be to implement but it sure would be nice\nfrom a user's point of view. Much nicer than having to specify some arbitrary\nlimit before running the query.\n\n-- \ngreg\n\n", "msg_date": "06 Jun 2006 11:37:46 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Hmmm...It could generate NOTICEs whenever there is a drastic difference in\nrowcount or actual time...\n\nOn Tue, June 6, 2006 11:29 am, Andrew Sullivan wrote:\n> On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:\n>\n>>> it was properly instrumented. That way, the OP might have been able\n>>> to discover the root cause himself...\n>>\n>> I don't think that helps, as it just replaces one uncertainty by\n>> another: how far did the EXPLAIN really get towards completion of the\n>> plan? You still don't have any hard data.\n>\n> Well, you _might_ get something useful, if you're trying to work on a\n> maladjusted production system, because you get to the part that trips the\n> limit, and then you know, \"Well, I gotta fix it that far, anyway.\"\n>\n> Often, when you're in real trouble, you can't or don't wait for the\n> full plan to come back from EXPLAIN ANALYSE, because a manager is helpfully\n> standing over your shoulder asking whether you're there yet. Being able\n> to say, \"Aha, we have the first symptom,\" might be helpful to users.\n> Because the impatient simply won't wait for the\n> full report to come back, and therefore they'll end up flying blind\n> instead. (Note that \"the impatient\" is not always the person logged in\n> and executing the commands.)\n>\n> A\n>\n>\n> --\n> Andrew Sullivan | [email protected]\n> I remember when computers were frustrating because they *did* exactly what\n> you told them to. That actually seems sort of quaint now. --J.D. Baldwin\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n>\n> http://archives.postgresql.org\n>\n>\n\n\n", "msg_date": "Tue, 6 Jun 2006 11:37:58 -0400 (EDT)", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:\n>> I don't think that helps, as it just replaces one uncertainty by\n>> another: how far did the EXPLAIN really get towards completion of the\n>> plan? You still don't have any hard data.\n\n> Does that really matter, though? The point is to find the node where the\n> estimate proved to be fantasy.\n\nNo, the point is to find out what reality is. 
Just knowing that the\nestimates are wrong doesn't really get you anywhere (we pretty much knew\nthat before we even started looking at the EXPLAIN, eh?).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 11:41:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Tue, Jun 06, 2006 at 11:37:46AM -0400, Greg Stark wrote:\n\n> An alternate approach would be to implement a SIGINFO handler that\n> prints out the explain analyze output for the data built up so far.\n> You would be able to keep hitting C-t and keep getting updates\n> until the query completes or you decided to hit C-c.\n\nThis is even better, and pretty much along the lines I was thinking\nin my other mail. If you can see the _first_ spot you break, you can\nstart working. We all know (or I hope so, anyway) that it would be\nbetter to get the full result, and know everything that needs\nattention before starting. As nearly as I can tell, however, they\ndon't teach Mill's methods to MBAs of a certain stripe, so changes\nstart getting made without all the data being available. It'd be\nnice to be able to bump the set of available data to something\nhigher than \"none\".\n\n(That said, I appreciate that there's precious little reason to spend\na lot of work optimising a feature that is mostly there to counteract\nbad management practices.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Tue, 6 Jun 2006 12:20:19 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Simon Riggs wrote:\n>>Well, it's a big query. If it ought to take a second or two, and\n>>instead is taking an hour or two (1800 times the expected runtime), that\n>>might be close enough to \"never\" to exhaust Chris' patience. Besides,\n>>we don't know whether the 1800 might itself be an underestimate (too bad\n>>Chris didn't provide EXPLAIN ANALYZE results). \n> \n> This is a good example of a case where the inefficiency of EXPLAIN\n> ANALYZE would be a contributory factor to it not actually being\n> available for diagnosing a problem.\n\nThis is a frustration I have, but Simon expressed it much more concisely. The first question one gets in this forum is, \"did you run EXPLAIN ANALYZE?\" But if EXPLAIN ANALYZE never finishes, you can't get the information you need to diagnose the problem. Simon's proposal,\n\n> e.g. EXPLAIN ANALYZE ERRLIMIT 10 SELECT ...\n\nor something similar, would be a big help. I.e. \"If you can't finish in a reasonable time, at least tell me as much as you can.\"\n\nCraig\n", "msg_date": "Tue, 06 Jun 2006 10:45:09 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Tom Lane wrote:\n>>The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n>>completion? In severe cases like this thread, we might be able to\n>>discover the root cause by a *partial* execution of the plan, as long as\n>>it was properly instrumented. That way, the OP might have been able to\n>>discover the root cause himself...\n> \n> \n> I don't think that helps, as it just replaces one uncertainty by\n> another: how far did the EXPLAIN really get towards completion of the\n> plan? 
You still don't have any hard data.\n\nBut at least you have some data, which is better than no data. Even knowing that the plan got stuck on a particular node of the query plan could be vital information. For a query that never finishes, you can't even find out where it's getting stuck.\n\nThat's why Simon's proposal might help in some particularly difficult situations.\n\nRegards,\nCraig\n", "msg_date": "Tue, 06 Jun 2006 10:50:25 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Tue, 2006-06-06 at 12:50, Craig A. James wrote:\n> Tom Lane wrote:\n> >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> >>completion? In severe cases like this thread, we might be able to\n> >>discover the root cause by a *partial* execution of the plan, as long as\n> >>it was properly instrumented. That way, the OP might have been able to\n> >>discover the root cause himself...\n> > \n> > \n> > I don't think that helps, as it just replaces one uncertainty by\n> > another: how far did the EXPLAIN really get towards completion of the\n> > plan? You still don't have any hard data.\n> \n> But at least you have some data, which is better than no data. Even knowing that the plan got stuck on a particular node of the query plan could be vital information. For a query that never finishes, you can't even find out where it's getting stuck.\n> \n> That's why Simon's proposal might help in some particularly difficult situations.\n\nHmmmmm. I wonder if it be hard to have explain analyze have a timeout\nper node qualifier? Something that said if it takes more than x\nmilliseconds for a node to kill the explain analyze and list the up to\nthe nasty node that's using all the time up?\n\nThat would be extremely useful.\n", "msg_date": "Tue, 06 Jun 2006 12:54:27 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Tue, Jun 06, 2006 at 12:54:27PM -0500, Scott Marlowe wrote:\n> On Tue, 2006-06-06 at 12:50, Craig A. James wrote:\n> > Tom Lane wrote:\n> > >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> > >>completion? In severe cases like this thread, we might be able to\n> > >>discover the root cause by a *partial* execution of the plan, as long as\n> > >>it was properly instrumented. That way, the OP might have been able to\n> > >>discover the root cause himself...\n> > > \n> > > \n> > > I don't think that helps, as it just replaces one uncertainty by\n> > > another: how far did the EXPLAIN really get towards completion of the\n> > > plan? You still don't have any hard data.\n> > \n> > But at least you have some data, which is better than no data. Even knowing that the plan got stuck on a particular node of the query plan could be vital information. For a query that never finishes, you can't even find out where it's getting stuck.\n> > \n> > That's why Simon's proposal might help in some particularly difficult situations.\n> \n> Hmmmmm. I wonder if it be hard to have explain analyze have a timeout\n> per node qualifier? Something that said if it takes more than x\n> milliseconds for a node to kill the explain analyze and list the up to\n> the nasty node that's using all the time up?\n> \n> That would be extremely useful.\n\nMaybe, maybe not. It would be very easy for this to croak on the first\nsort it hits. 
I suspect the original proposal of aborting once a\nrowcount estimate proves to be way off is a better idea.\n\nFor the record, I also think being able to get a current snapshot is\ngreat, too.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 6 Jun 2006 15:51:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Tue, 2006-06-06 at 15:51, Jim C. Nasby wrote:\n> On Tue, Jun 06, 2006 at 12:54:27PM -0500, Scott Marlowe wrote:\n> > On Tue, 2006-06-06 at 12:50, Craig A. James wrote:\n> > > Tom Lane wrote:\n> > > >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to\n> > > >>completion? In severe cases like this thread, we might be able to\n> > > >>discover the root cause by a *partial* execution of the plan, as long as\n> > > >>it was properly instrumented. That way, the OP might have been able to\n> > > >>discover the root cause himself...\n> > > > \n> > > > \n> > > > I don't think that helps, as it just replaces one uncertainty by\n> > > > another: how far did the EXPLAIN really get towards completion of the\n> > > > plan? You still don't have any hard data.\n> > > \n> > > But at least you have some data, which is better than no data. Even knowing that the plan got stuck on a particular node of the query plan could be vital information. For a query that never finishes, you can't even find out where it's getting stuck.\n> > > \n> > > That's why Simon's proposal might help in some particularly difficult situations.\n> > \n> > Hmmmmm. I wonder if it be hard to have explain analyze have a timeout\n> > per node qualifier? Something that said if it takes more than x\n> > milliseconds for a node to kill the explain analyze and list the up to\n> > the nasty node that's using all the time up?\n> > \n> > That would be extremely useful.\n> \n> Maybe, maybe not. It would be very easy for this to croak on the first\n> sort it hits. I suspect the original proposal of aborting once a\n> rowcount estimate proves to be way off is a better idea.\n> \n> For the record, I also think being able to get a current snapshot is\n> great, too.\n\nI can see value in both.\n\nJust because the row count is right doesn't mean it won't take a\nfortnight of processing. :)\n\nThe problem with the row count estimate being off from the real thing is\nyou only get it AFTER the set is retrieved for that node.\n\nThe cost of aborting on the first sort is minimal. You just turn up the\nnumber for the timeout and run it again. 1 minute or so wasted.\n\nThe cost of not aborting on the first sort is that you may never see\nwhat the part of the plan is that's killing your query, since you never\nget the actual plan.\n", "msg_date": "Tue, 06 Jun 2006 16:02:21 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> The cost of not aborting on the first sort is that you may never see\n> what the part of the plan is that's killing your query, since you never\n> get the actual plan.\n\nWell, you can get the plan without waiting a long time; that's what\nplain EXPLAIN is for. But I still disagree with the premise that you\ncan extrapolate anything very useful from an unfinished EXPLAIN ANALYZE\nrun. 
As an example, if the plan involves setup steps such as sorting or\nloading a hashtable, cancelling after a minute might make it look like\nthe setup step is the big problem, distracting you from the possibility\nthat the *rest* of the plan would take weeks to run if you ever got to\nit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 17:11:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang " }, { "msg_contents": "On Tue, 2006-06-06 at 16:11, Tom Lane wrote:\n> Scott Marlowe <[email protected]> writes:\n> > The cost of not aborting on the first sort is that you may never see\n> > what the part of the plan is that's killing your query, since you never\n> > get the actual plan.\n> \n> Well, you can get the plan without waiting a long time; that's what\n> plain EXPLAIN is for. But I still disagree with the premise that you\n> can extrapolate anything very useful from an unfinished EXPLAIN ANALYZE\n> run. As an example, if the plan involves setup steps such as sorting or\n> loading a hashtable, cancelling after a minute might make it look like\n> the setup step is the big problem, distracting you from the possibility\n> that the *rest* of the plan would take weeks to run if you ever got to\n> it.\n\nSure, but it would be nice to see it report the partial work.\n\ni.e. I got to using a nested loop, thought there would be 20 rows,\nprocessed 250,000 or so, timed out at 10 minutes, and gave up.\n\nI would find that useful.\n", "msg_date": "Tue, 06 Jun 2006 16:14:48 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "On Tue, 2006-06-06 at 11:41 -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:\n> >> I don't think that helps, as it just replaces one uncertainty by\n> >> another: how far did the EXPLAIN really get towards completion of the\n> >> plan? You still don't have any hard data.\n> \n> > Does that really matter, though? The point is to find the node where the\n> > estimate proved to be fantasy.\n> \n> No, the point is to find out what reality is. \n\nMy point is knowing reality with less than 100% certainty is still very\nfrequently useful.\n\n> Just knowing that the\n> estimates are wrong doesn't really get you anywhere (we pretty much knew\n> that before we even started looking at the EXPLAIN, eh?).\n\nWe were lucky enough to have two EXPLAINS that could be examined for\ndifferences. Often, you have just one EXPLAIN and no idea which estimate\nis incorrect, or whether they are all exactly correct. That is when an\nEXPLAIN ANALYZE becomes essential - yet a *full* execution isn't\nrequired in order to tell you what you need to know.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 07 Jun 2006 12:42:28 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" }, { "msg_contents": "Hi, Chris,\n\nChris Beecroft wrote:\n\n> Query is now returning with results on our replicated database. Will\n> vacuum analyze production now. So it seems to have done the trick. 
Now\n> the question is has our auto vacuum failed or was not set up properly...\n> A question for my IT people.\n\nMost of the cases when we had database bloat despite running autovacuum,\nit was due to a low free_space_map setting.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 21 Jun 2006 09:49:15 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some queries starting to hang" } ]
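A note on the resolution above: on recent 8.x releases an easy way to check whether the free space map really is undersized is to run a database-wide VACUUM VERBOSE and read the INFO lines printed at the very end, which report how many page slots are needed versus the current limit. A minimal sketch (the 50000 figure is only an illustration, not a recommendation):

    VACUUM VERBOSE;        -- the last few INFO lines show FSM pages needed vs. max_fsm_pages
    SHOW max_fsm_pages;
    -- if the reported need exceeds the limit, raise it in postgresql.conf, e.g.
    --   max_fsm_pages = 50000
    -- and restart the server

If the FSM is too small, dead space never gets reused and tables bloat even though autovacuum is running, which matches the symptoms reported in this thread.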
[ { "msg_contents": "I have UTF-8 Postgres 8.1 database on W2K3\n\nQuery\n\nSELECT toode, nimetus\nFROM toode\nWHERE toode ILIKE 'x10%' ESCAPE '!'\nORDER BY UPPER(toode ),nimetus LIMIT 100\n\nruns 1 minute in first time for small table size.\n\nToode field type is CHAR(20)\n\nHow to create index on toode field so that query can use it ?\n\n\n\n", "msg_date": "Mon, 5 Jun 2006 22:29:11 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to force Postgres to use index on ILIKE" }, { "msg_contents": "Andrus,\n\n> SELECT toode, nimetus\n> FROM toode\n> WHERE toode ILIKE 'x10%' ESCAPE '!'\n> ORDER BY UPPER(toode ),nimetus LIMIT 100\n>\n> runs 1 minute in first time for small table size.\n>\n> Toode field type is CHAR(20)\n\n1) why are you using CHAR and not VARCHAR or TEXT? CHAR will give you \nproblems using an index, period.\n\n2) You can't use an index on ILIKE. You can, however, use an index on \nlower(field) if your query is properly phrased and if you've created an \nexpression index on lower(field).\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 5 Jun 2006 14:26:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Postgres to use index on ILIKE" }, { "msg_contents": ">> SELECT toode, nimetus\n>> FROM toode\n>> WHERE toode ILIKE 'x10%' ESCAPE '!'\n>> ORDER BY UPPER(toode ),nimetus LIMIT 100\n>>\n>> runs 1 minute in first time for small table size.\n>>\n>> Toode field type is CHAR(20)\n>\n> 1) why are you using CHAR and not VARCHAR or TEXT? CHAR will give you\n> problems using an index, period.\n\n1. I haven't seen any example where VARCHAR is better that CHAR for indexing\n2. I have a lot of existing code. Changing CHAR to VARCHAR requires probably \nre-writing a lot of code, a huge work.\n\n> 2) You can't use an index on ILIKE.\n\nI'ts very sad. I expected that lower(toode) index can be used.\n\n\n> You can, however, use an index on\n> lower(field) if your query is properly phrased and if you've created an\n> expression index on lower(field).\n\nI tried by Postgres does not use index. Why ?\n\ncreate index nimib2 on firma1.klient(lower(nimi) bpchar_pattern_ops);\n\nexplain analyze select nimi from firma1.klient where lower(nimi) like\n'mokter%'\n\n\"Seq Scan on klient (cost=0.00..9.79 rows=1 width=74) (actual\ntime=0.740..0.761 rows=1 loops=1)\"\n\" Filter: (lower((nimi)::text) ~~ 'mokter%'::text)\"\n\"Total runtime: 0.877 ms\"\n\n\n\n", "msg_date": "Tue, 6 Jun 2006 12:57:31 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to force Postgres to use index on ILIKE" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n>> 1) why are you using CHAR and not VARCHAR or TEXT? CHAR will give you\n>> problems using an index, period.\n\n> 1. I haven't seen any example where VARCHAR is better that CHAR for indexing\n\nThe advice you were given is good, even if the explanation is bad.\nCHAR(n) is a poor choice for just about every purpose, because of all\nthe padding blanks it insists on storing and transmitting. That adds\nup to a lot of wasted space, I/O effort, and CPU cycles.\n\n> I tried by Postgres does not use index. Why ?\n> create index nimib2 on firma1.klient(lower(nimi) bpchar_pattern_ops);\n\nTry to get over this fixation on CHAR. 
That would work with\ntext_pattern_ops --- lower() returns TEXT, and TEXT is what the LIKE\noperator accepts, so that's the opclass you need to use to optimize\nlower() LIKE 'pattern'.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jun 2006 10:23:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Postgres to use index on ILIKE " } ]
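Putting Tom's suggestion together against the table from the start of this thread, the index and the rewritten query might look like this (sketch only; the pattern has to stay left-anchored for the index to be usable, and the table should be re-ANALYZEd so the expression index gets statistics):

    CREATE INDEX toode_lower_toode_idx
        ON toode (lower(toode) text_pattern_ops);
    ANALYZE toode;

    SELECT toode, nimetus
      FROM toode
     WHERE lower(toode) LIKE lower('x10%')
     ORDER BY upper(toode), nimetus
     LIMIT 100;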
[ { "msg_contents": "Hi,\n\nUsing postgres 8.0.1, I'm having a problem where a user-defined function \nthat executes quite quickly on its own slows down the calling query, \nwhich ignores at least one index. I don't think this should be happening.\n\nPlease forgive the long explanation below; I'm trying to be clear.\n\nSo -- I have a function (marked STABLE) that takes 1-2 ms to execute when called \nvia a simple select, eg:\n\n\\timing\nselect * from ascend_tree_breadcrumb( category_by_topic( 'World' ) );\n category_id | parent_category_id | topic | num_sub_items | num_sub_cats\n-------------+--------------------+-------+---------------+--------------\n 1 | | World | 0 | 0\n(1 row)\n\nTime: 1.311 ms\n\n\nAs you can see, there are actually 2 functions being called, and the top-level \nfunction returns a set, containing one row. In practice, it will never return \nmore than 5 rows.\n\nFor this very simple example, I can return the same data by calling a table directly:\n\nlyrff=# SELECT * from category c where c.topic = 'World';\n category_id | parent_category_id | topic | num_sub_items | num_sub_cats\n-------------+--------------------+-------+---------------+--------------\n 1 | | World | 0 | 0\n(1 row)\n\nTime: 2.660 ms\n\n\nSo far, so good.\n\nBut now, when I join the set that is returned by the function with another \ntable category_lang, which contains about 40k records, using the primary key \nfor category_lang, then things become slow.\n\nlyrff=# SELECT cl.category_id, atb.topic, cl.title\nlyrff-# FROM ascend_tree_breadcrumb( category_by_topic( 'World' ) ) atb\nlyrff-# inner join category_lang cl on (atb.category_id = cl.category_id and cl.lang_code = 'en');\n category_id | topic | title\n-------------+-------+-------\n 1 | World | World\n(1 row)\n\nTime: 308.822 ms\n\n\n(Okay, so 308 ms is not a super-long time, but this query is supposed to run\non all pages of a website, so it quickly becomes painful. And anyway, it's\nabout 300x where it could/should be.)\n\nSo now if I remove the function call and substitute the SQL that \nlooks directly in the category table, then things are fast again:\n\nlyrff=# SELECT cl.category_id, c.topic, cl.title\nlyrff-# FROM category c\nlyrff-# inner join category_lang cl on (c.category_id = cl.category_id and cl.lang_code = 'en')\nlyrff-# where\nlyrff-# c.topic = 'World';\n category_id | topic | title\n-------------+-------+-------\n 1 | World | World\n(1 row)\n\nTime: 1.914 ms\n\n\nSo clearly the user-defined function is contributing to the slow-down, even though \nthe function itself executes quite quickly. 
Here's what explain has to say:\n\nlyrff=# explain analyze\nlyrff-# SELECT cl.category_id, atb.topic, cl.title\nlyrff-# FROM ascend_tree_breadcrumb( category_by_topic( 'World' ) ) atb\nlyrff-# inner join category_lang cl on (atb.category_id = cl.category_id and cl.lang_code = 'en');\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1791.79..2854.89 rows=1001 width=532) (actual time=350.935..352.317 rows=1 loops=1)\n Hash Cond: (\"outer\".category_id = \"inner\".category_id)\n -> Function Scan on ascend_tree_breadcrumb atb (cost=0.00..12.50 rows=1000 width=520) (actual time=0.834..0.835 rows=1 loops=1)\n -> Hash (cost=1327.33..1327.33 rows=58986 width=16) (actual time=329.393..329.393 rows=0 loops=1)\n -> Seq Scan on category_lang cl (cost=0.00..1327.33 rows=58986 width=16) (actual time=0.036..191.442 rows=40603 loops=1)\n Filter: (lang_code = 'en'::bpchar)\n Total runtime: 352.689 ms\n\nAs you can see, it is doing a Sequential Scan on the category_lang table, \nwhich has 40603 rows now, and will grow. So that's not good.\n\nNow, let's see the non-function version:\n\nlyrff=# explain analyze\nlyrff-# SELECT cl.category_id, c.topic, cl.title\nlyrff-# FROM category c\nlyrff-# inner join category_lang cl on (c.category_id = cl.category_id and cl.lang_code = 'en')\nlyrff-# where\nlyrff-# c.topic = 'World';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..29.23 rows=2 width=72) (actual time=0.104..0.112 rows=1 loops=1)\n -> Index Scan using category_topic_key on category c (cost=0.00..9.70 rows=2 width=60) (actual time=0.058..0.060 rows=1 loops=1)\n Index Cond: ((topic)::text = 'World'::text)\n -> Index Scan using table_lang_pk on category_lang cl (cost=0.00..9.75 rows=1 width=16) (actual time=0.028..0.032 rows=1 loops=1)\n Index Cond: ((\"outer\".category_id = cl.category_id) AND (cl.lang_code = 'en'::bpchar))\n Total runtime: 0.312 ms\n\nThis time, it used the index on category_lang, as it should.\n\nI'm not an expert at reading explain output, so I've probably \nmissed something important.\n\nI've tried modifying the query in several ways, eg putting \nthe function call in a sub-select, and so on. I also tried\ndisabling the various query plans, but in the end I've only \nmanaged to slow it down even further.\n\nSo, I'm hoping someone can tell me what the magical cure is. \nOr failing that, I'd at least like to understand why the planner \nis deciding not to use the category_lang index when the result \nset is coming from a function instead of a \"regular\" table.\n\n\nMany thanks in advance.\n\n\n-- \nDan Libby\n", "msg_date": "Mon, 5 Jun 2006 18:03:59 -0600", "msg_from": "Dan Libby <[email protected]>", "msg_from_op": true, "msg_subject": "Problem: query becomes slow when calling a fast user defined\n function." }, { "msg_contents": "Dan Libby <[email protected]> writes:\n> Or failing that, I'd at least like to understand why the planner \n> is deciding not to use the category_lang index when the result \n> set is coming from a function instead of a \"regular\" table.\n\nThe planner defaults to assuming that set-returning functions return\n1000 rows (as you can see in the EXPLAIN output). A plan that would\nwin for a single returned row would lose badly at 1000 rows ...\nand vice versa. 
See the archives for various debates about how to\nget a better estimate; it's not an easy problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jun 2006 22:43:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem: query becomes slow when calling a fast user defined\n\tfunction." } ]
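For readers on later releases: 8.3 and up let you override that default 1000-row assumption by attaching a ROWS estimate to the function, which usually brings back the indexed nested-loop plan. It is not available on the 8.0/8.1 versions discussed here, and the argument type below is assumed from the thread rather than known:

    -- PostgreSQL 8.3+ only: tell the planner to expect a handful of rows, not 1000
    ALTER FUNCTION ascend_tree_breadcrumb(integer) ROWS 5;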
[ { "msg_contents": "Hello,\n\n\n\nWe have database on which continueous operations of INSERT, DELETE, UPDATE\nare going on, In the mean time irrespective of INSERT and UPDATE we want to\nALTER some filelds from the table can we do that?\n\nWould the ALTER command on heavily loaded database create any perfomance\nproblem?\n\nIs it feasible to do ALTER when lots of INSERT operations are going on?\n\n\n\nPostgresql version we are using is -- PostgreSQL 7.2.4\n\n\n\nPlease provide me some help regarding this.\n\n\n\nThanks,\n\nSoni\n\n \n\nHello,\n \nWe have database on which continueous operations of INSERT, DELETE, UPDATE are going on, In the mean time irrespective of INSERT and UPDATE we want to ALTER some filelds from the table can we do that? \n\nWould the ALTER command on heavily loaded database create any perfomance problem? \nIs it feasible to do ALTER when lots of INSERT operations are going on?\n \nPostgresql version we are using is -- PostgreSQL 7.2.4 \n \nPlease provide me some help regarding this.\n \nThanks,\nSoni", "msg_date": "Wed, 7 Jun 2006 18:13:11 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding ALTER Command" }, { "msg_contents": "On Wed, Jun 07, 2006 at 06:13:11PM +0530, soni de wrote:\n> Hello,\n> \n> \n> \n> We have database on which continueous operations of INSERT, DELETE, UPDATE\n> are going on, In the mean time irrespective of INSERT and UPDATE we want to\n> ALTER some filelds from the table can we do that?\n> \n> Would the ALTER command on heavily loaded database create any perfomance\n> problem?\n> \n> Is it feasible to do ALTER when lots of INSERT operations are going on?\n \nThe problem you'll run into is that ALTER will grab an exclusive table\nlock. If *all* the transactions hitting the table are very short, this\nshouldn't be too big of an issue; the ALTER will block all new accesses\nto the table while it waits for all the pending ones to complete, but if\nall the pending ones complete quickly it shouldn't be a big issue.\n\nIf one of the pending statements takes a long time though...\n \n> Postgresql version we are using is -- PostgreSQL 7.2.4\n\nYou very badly need to upgrade. 7.2 is no longer supported, and there\nhave been over a half-dozen data loss bugs fixed since then.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 7 Jun 2006 09:50:20 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding ALTER Command" }, { "msg_contents": "Hello,\n\n\nWe are planning to use latest postgres version. I have one query as below:\n\n\n\nOne more thing I have to mention is that we are using 2 postmasters running\non different machines and both are accessing same data directory. (i.e both\nthe machines uses same tables or the databases)\n\nIn that case if from first machine, continuous INSERT operation on any table\nare going on and from the second we have to update the same table using\nALTER command.\n\nWould this create any problem because INSERT and ALTER operations are\nexecuted from the two different postmasters but for a same data directory?\n\nWould there be any data loss or in this case also ALTER will block all the\nnew accesses to the table?\n\n\nThanks,\nSoni\n\n\nOn 6/7/06, Jim C. 
Nasby <[email protected]> wrote:\n>\n> On Wed, Jun 07, 2006 at 06:13:11PM +0530, soni de wrote:\n> > Hello,\n> >\n> >\n> >\n> > We have database on which continueous operations of INSERT, DELETE,\n> UPDATE\n> > are going on, In the mean time irrespective of INSERT and UPDATE we want\n> to\n> > ALTER some filelds from the table can we do that?\n> >\n> > Would the ALTER command on heavily loaded database create any perfomance\n> > problem?\n> >\n> > Is it feasible to do ALTER when lots of INSERT operations are going on?\n>\n> The problem you'll run into is that ALTER will grab an exclusive table\n> lock. If *all* the transactions hitting the table are very short, this\n> shouldn't be too big of an issue; the ALTER will block all new accesses\n> to the table while it waits for all the pending ones to complete, but if\n> all the pending ones complete quickly it shouldn't be a big issue.\n>\n> If one of the pending statements takes a long time though...\n>\n> > Postgresql version we are using is -- PostgreSQL 7.2.4\n>\n> You very badly need to upgrade. 7.2 is no longer supported, and there\n> have been over a half-dozen data loss bugs fixed since then.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\nHello,\n \nWe are planning to use latest postgres version. I have one query as below:\n \nOne more thing I have to mention is that we are using 2 postmasters running on different machines and both are accessing same data directory. (\ni.e both the machines uses same tables or the databases)\nIn that case if from first machine, continuous INSERT operation on any table are going on and from the second we have to update the same table using ALTER command.\n\nWould this create any problem because INSERT and ALTER operations are executed from the two different postmasters but for a same data directory?\n\nWould there be any data loss or in this case also ALTER will block all the new accesses to the table?\n \nThanks,\nSoni \nOn 6/7/06, Jim C. Nasby <[email protected]> wrote:\nOn Wed, Jun 07, 2006 at 06:13:11PM +0530, soni de wrote:> Hello,>>>> We have database on which continueous operations of INSERT, DELETE, UPDATE\n> are going on, In the mean time irrespective of INSERT and UPDATE we want to> ALTER some filelds from the table can we do that?>> Would the ALTER command on heavily loaded database create any perfomance\n> problem?>> Is it feasible to do ALTER when lots of INSERT operations are going on?The problem you'll run into is that ALTER will grab an exclusive tablelock. If *all* the transactions hitting the table are very short, this\nshouldn't be too big of an issue; the ALTER will block all new accessesto the table while it waits for all the pending ones to complete, but ifall the pending ones complete quickly it shouldn't be a big issue.\nIf one of the pending statements takes a long time though...> Postgresql version we are using is -- PostgreSQL 7.2.4You very badly need to upgrade. 7.2 is no longer supported, and therehave been over a half-dozen data loss bugs fixed since then.\n--Jim C. Nasby, Sr. 
Engineering Consultant      [email protected] Software      http://pervasive.com    work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461", "msg_date": "Thu, 8 Jun 2006 11:37:44 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding ALTER Command" }, { "msg_contents": "\"soni de\" <[email protected]> writes:\n> One more thing I have to mention is that we are using 2 postmasters running\n> on different machines and both are accessing same data directory. (i.e both\n> the machines uses same tables or the databases)\n\nThe above is guaranteed NOT to work. I'm surprised you haven't already\nobserved wholesale data corruption. But don't worry, you will soon.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2006 11:04:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding ALTER Command " } ]
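If the ALTER really has to run while the table is busy, one common precaution is to bound how long it may sit waiting for its exclusive lock, so it can be retried later instead of queueing behind a long transaction and blocking everything that arrives after it. A sketch for the 8.x releases recommended above (table and column names are invented for illustration):

    BEGIN;
    SET LOCAL statement_timeout = 5000;   -- milliseconds; give up if the lock isn't granted in 5s
    ALTER TABLE some_table ADD COLUMN new_flag integer;
    COMMIT;

If the statement times out, the transaction is rolled back and nothing is changed.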
[ { "msg_contents": "The situation is this: we're using a varchar column to store \nalphanumeric codes which are by themselves 7-bit clean. But we are \noperating under a locale which has its own special collation rules, and \nis also utf-8 encoded. Recently we've discovered a serious \"d'oh!\"-type \nbug which we tracked down to the fact that when we sort by this column \nthe collation respects locale sorting rules, which is messing up other \nparts of the application.\n\nThe question is: what is the most efficient way to solve this problem \n(the required operation is to sort data using binary \"collation\" - i.e. \ncompare byte by byte)? Since this field gets queried a lot it must have \nan index. Some of the possible solutions we thought of are: replacing \nthe varchar type with numeric and do magical transcoding (bad, needs \nchanges thoughout the application) and inserting spaces after every \ncharacter (not as bad, but still requires modifying both the application \nand the data). An ideal solution would be to have a \n\"not-locale-affected-varchar\" field type :)\n\n", "msg_date": "Wed, 07 Jun 2006 21:26:18 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Curious sorting puzzle" }, { "msg_contents": "Ivan Voras <[email protected]> writes:\n> The situation is this: we're using a varchar column to store \n> alphanumeric codes which are by themselves 7-bit clean. But we are \n> operating under a locale which has its own special collation rules, and \n> is also utf-8 encoded. Recently we've discovered a serious \"d'oh!\"-type \n> bug which we tracked down to the fact that when we sort by this column \n> the collation respects locale sorting rules, which is messing up other \n> parts of the application.\n\n> The question is: what is the most efficient way to solve this problem \n> (the required operation is to sort data using binary \"collation\" - i.e. \n> compare byte by byte)? Since this field gets queried a lot it must have \n> an index. Some of the possible solutions we thought of are: replacing \n> the varchar type with numeric and do magical transcoding (bad, needs \n> changes thoughout the application) and inserting spaces after every \n> character (not as bad, but still requires modifying both the application \n> and the data). An ideal solution would be to have a \n> \"not-locale-affected-varchar\" field type :)\n\nIf you're just storing ASCII then I think bytea might work for this.\nDo you need any actual text operations (like concatenation), or this\njust a store-and-retrieve field?\n\nIf you need text ops too then probably the best answer is to make your\nown datatype. 
It's not that hard --- look at the citext datatype (on\npgfoundry IIRC, or else gborg) for a closely related example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jun 2006 17:16:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious sorting puzzle " }, { "msg_contents": "Tom Lane wrote:\n\n>>An ideal solution would be to have a \n>>\"not-locale-affected-varchar\" field type :)\n> \n> \n> If you're just storing ASCII then I think bytea might work for this.\n> Do you need any actual text operations (like concatenation), or this\n> just a store-and-retrieve field?\n\nI've just tested bytea and it looks like a perfect solution - it supports:\n\n- character-like syntax\n- indexes\n- uses indexes with LIKE 'x%' queries\n- SUBSTRING()\n\nThat's good enough for us - it seems it's just what we need - a \nstring-like type with byte collation.\n\nThanks!\n", "msg_date": "Thu, 08 Jun 2006 01:04:53 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious sorting puzzle" } ]
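For reference, a small sketch of the bytea arrangement Ivan describes (table and column names invented; this relies on the stored values being 7-bit clean, as stated above):

    CREATE TABLE code_tab (
        code bytea NOT NULL
    );
    CREATE INDEX code_tab_code_idx ON code_tab (code);

    -- byte-by-byte ordering plus an index-friendly left-anchored prefix match
    SELECT code
      FROM code_tab
     WHERE code LIKE 'x10%'::bytea
     ORDER BY code;

Because bytea comparison is a plain memory compare, the ordering is the same no matter what lc_collate the cluster was initialized with.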
[ { "msg_contents": "Hello,\n\n\n\nWe have to take a backup of database and we know the pg_dump utility of\npostgresql.\n\nBut may I know, is there any API for this pg_dump utility so that we can\ncall it from the C program? Or only script support is possible for this.\n\n\n\nI think script support is bit risky because if anything goes wrong while\ntaking backup using pg_dump then user will not understand the problem of\nfalling\n\nIf only script support is possible then what should we prefer perl or shell?\n\n\n\nPlease provide me some help regarding this\n\n\n\nThanks,\n\nSoni\n\n \n\nHello,\n \nWe have to take a backup of database and we know the pg_dump utility of postgresql.\nBut may I know, is there any API for this pg_dump utility so that we can call it from the C program? Or only script support is possible for this.\n\n \nI think script support is bit risky because if anything goes wrong while taking backup using pg_dump then user will not understand the problem of falling\n\nIf only script support is possible then what should we prefer perl or shell?\n \nPlease provide me some help regarding this\n \nThanks,\nSoni", "msg_date": "Thu, 8 Jun 2006 11:39:48 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding pg_dump utility" }, { "msg_contents": "\"soni de\" <[email protected]> writes:\n> We have to take a backup of database and we know the pg_dump utility of\n> postgresql.\n\n> But may I know, is there any API for this pg_dump utility so that we can\n> call it from the C program? Or only script support is possible for this.\n\nThere's always system(3) ....\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2006 11:06:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility " }, { "msg_contents": "On Thu, Jun 08, 2006 at 11:39:48AM +0530, soni de wrote:\n> We have to take a backup of database and we know the pg_dump utility of\n> postgresql.\n> \n> But may I know, is there any API for this pg_dump utility so that we can\n> call it from the C program? Or only script support is possible for this.\n \nIt probably wouldn't be terribly difficult to put the guts of pg_dump\ninto a library that you could interface with via C. I'm not sure if the\ncommunity would accept such a patch; though, I seem to recall other\npeople asking for this on occasion.\n \n> I think script support is bit risky because if anything goes wrong while\n> taking backup using pg_dump then user will not understand the problem of\n> falling\n> \n> If only script support is possible then what should we prefer perl or shell?\n\nDepends on what you're trying to accomplish. Perl is a much more capable\nlanguage than shell, obviously.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 8 Jun 2006 10:16:57 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "Tom Lane wrote:\n> \"soni de\" <[email protected]> writes:\n>> We have to take a backup of database and we know the pg_dump utility of\n>> postgresql.\n> \n>> But may I know, is there any API for this pg_dump utility so that we can\n>> call it from the C program? 
Or only script support is possible for this.\n> \n> There's always system(3) ....\n\nfork(), exec()...\n\n-- \nUntil later, Geoffrey\n\nAny society that would give up a little liberty to gain a little\nsecurity will deserve neither and lose both. - Benjamin Franklin\n", "msg_date": "Thu, 08 Jun 2006 11:42:40 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Thu, Jun 08, 2006 at 11:39:48AM +0530, soni de wrote:\n> > We have to take a backup of database and we know the pg_dump utility of\n> > postgresql.\n> > \n> > But may I know, is there any API for this pg_dump utility so that we can\n> > call it from the C program? Or only script support is possible for this.\n> \n> It probably wouldn't be terribly difficult to put the guts of pg_dump\n> into a library that you could interface with via C. I'm not sure if the\n> community would accept such a patch; though, I seem to recall other\n> people asking for this on occasion.\n\nPersonally I think it would be neat. For example the admin-tool guys\nwould be able to get a dump without invoking an external program.\nSecond it would really be independent of core releases (other than being\ntied to the output format.) pg_dump would be just a simple caller of\nsuch a library, and anyone else would be able to get dumps easily, in\nwhatever format.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 8 Jun 2006 12:23:27 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "Alvaro Herrera wrote:\n\n> \n> \n> Personally I think it would be neat. For example the admin-tool guys\n> would be able to get a dump without invoking an external program.\n> Second it would really be independent of core releases (other than being\n> tied to the output format.) pg_dump would be just a simple caller of\n> such a library, and anyone else would be able to get dumps easily, in\n> whatever format.\n\npgAdmin currently invokes pg_dump/restore externally with pipes attached \nto stdin/out/err, but a library implementation would solve some \nheadaches (esp. concerning portability) managing background \nexecution/GUI updates/process control. I'd like a libpgdumprestore \nlibrary, with pg_dump/pg_restore being lean wrapper programs.\n\nRegards,\nAndreas\n", "msg_date": "Thu, 08 Jun 2006 18:33:28 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "On Thu, Jun 08, 2006 at 06:33:28PM +0200, Andreas Pflug wrote:\n> Alvaro Herrera wrote:\n> \n> >\n> >\n> >Personally I think it would be neat. For example the admin-tool guys\n> >would be able to get a dump without invoking an external program.\n> >Second it would really be independent of core releases (other than being\n> >tied to the output format.) pg_dump would be just a simple caller of\n> >such a library, and anyone else would be able to get dumps easily, in\n> >whatever format.\n> \n> pgAdmin currently invokes pg_dump/restore externally with pipes attached \n> to stdin/out/err, but a library implementation would solve some \n> headaches (esp. concerning portability) managing background \n> execution/GUI updates/process control. 
I'd like a libpgdumprestore \n> library, with pg_dump/pg_restore being lean wrapper programs.\n\nWould a pg_dumpall library also make sense?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 8 Jun 2006 11:35:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Thu, Jun 08, 2006 at 06:33:28PM +0200, Andreas Pflug wrote:\n> > Alvaro Herrera wrote:\n> > \n> > >\n> > >\n> > >Personally I think it would be neat. For example the admin-tool guys\n> > >would be able to get a dump without invoking an external program.\n> > >Second it would really be independent of core releases (other than being\n> > >tied to the output format.) pg_dump would be just a simple caller of\n> > >such a library, and anyone else would be able to get dumps easily, in\n> > >whatever format.\n> > \n> > pgAdmin currently invokes pg_dump/restore externally with pipes attached \n> > to stdin/out/err, but a library implementation would solve some \n> > headaches (esp. concerning portability) managing background \n> > execution/GUI updates/process control. I'd like a libpgdumprestore \n> > library, with pg_dump/pg_restore being lean wrapper programs.\n> \n> Would a pg_dumpall library also make sense?\n\nOne would think that libpgdump should take care of this as well ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 8 Jun 2006 12:38:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "> It probably wouldn't be terribly difficult to put the guts of pg_dump\n> into a library that you could interface with via C. I'm not sure if the\n> community would accept such a patch; though, I seem to recall other\n> people asking for this on occasion.\n> \n>> I think script support is bit risky because if anything goes wrong while\n>> taking backup using pg_dump then user will not understand the problem of\n>> falling\n>>\n>> If only script support is possible then what should we prefer perl or shell?\n> \n> Depends on what you're trying to accomplish. Perl is a much more capable\n> language than shell, obviously.\n\n\nIn phpPgAdmin we just execute pg_dump as a child process and capture its \noutput....\n\n", "msg_date": "Fri, 09 Jun 2006 09:16:35 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "> Personally I think it would be neat. For example the admin-tool guys\n> would be able to get a dump without invoking an external program.\n> Second it would really be independent of core releases (other than being\n> tied to the output format.) pg_dump would be just a simple caller of\n> such a library, and anyone else would be able to get dumps easily, in\n> whatever format.\n\nWhat about fully completing our SQL API for dumping?\n\nie. 
We finish adding pg_get_blahdef() for all objects, add a function \nthat returns the proper ordering of all objects in the database, and \nthen somehow drop out a dump with a single JOIN :D\n\nChris\n\n", "msg_date": "Fri, 09 Jun 2006 09:19:15 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" }, { "msg_contents": "I think that having an API for backup functionality would definitely be\nuseful.\n\nJust my 2 cents...\n\nPaul\n\n\n\n\nOn 6/8/06, Christopher Kings-Lynne <[email protected]>\nwrote:\n>\n> > Personally I think it would be neat. For example the admin-tool guys\n> > would be able to get a dump without invoking an external program.\n> > Second it would really be independent of core releases (other than being\n> > tied to the output format.) pg_dump would be just a simple caller of\n> > such a library, and anyone else would be able to get dumps easily, in\n> > whatever format.\n>\n> What about fully completing our SQL API for dumping?\n>\n> ie. We finish adding pg_get_blahdef() for all objects, add a function\n> that returns the proper ordering of all objects in the database, and\n> then somehow drop out a dump with a single JOIN :D\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nI think that having an API for backup functionality would definitely be useful.  \n \nJust my 2 cents...\n \nPaul\n \n \nOn 6/8/06, Christopher Kings-Lynne <[email protected]> wrote:\n> Personally I think it would be neat.  For example the admin-tool guys> would be able to get a dump without invoking an external program.\n> Second it would really be independent of core releases (other than being> tied to the output format.)  pg_dump would be just a simple caller of> such a library, and anyone else would be able to get dumps easily, in\n> whatever format.What about fully completing our SQL API for dumping?ie. We finish adding pg_get_blahdef() for all objects, add a functionthat returns the proper ordering of all objects in the database, and\nthen somehow drop out a dump with a single JOIN :DChris---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend", "msg_date": "Fri, 9 Jun 2006 10:22:50 -0400", "msg_from": "\"Paul S\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding pg_dump utility" } ]
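Some of that SQL API already exists as the pg_get_...def() catalog functions, so a client can pull quite a bit of DDL over an ordinary connection without shelling out to pg_dump. A hedged example for the 8.x releases under discussion ('some_table' is a placeholder):

    -- definitions of all user views
    SELECT n.nspname, c.relname, pg_get_viewdef(c.oid) AS definition
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'v'
       AND n.nspname NOT IN ('pg_catalog', 'information_schema');

    -- CREATE INDEX statements for every index on one table
    SELECT pg_get_indexdef(indexrelid)
      FROM pg_index
     WHERE indrelid = 'public.some_table'::regclass;

What is still missing, and what pg_dump provides on top, is the dependency ordering and coverage of object types that have no pg_get_...def() helper.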
[ { "msg_contents": "I have this table setup on a 8.1.4 server:\n\npj_info_attach(attachment_nr, some more cols) -- index, 50k rows\npj_info_attach_compressable() INHERITS (pj_info_attach) -- index, 1M rows\npj_info_attach_not_compressable() INHERITS (pj_info_attach) -- index, 0 \nrows\n\nEXPLAIN ANALYZE SELECT aes FROM pj_info_attach\n WHERE attachment_nr in (.. 20 numeric key values.. )\nyields a big bitmap index scan plan, 1.8ms total runtime, that's fine.\n\nUsing a subselect on zz_attachment_graustufentest, which has 20 rows of \nexactly the key values entered manually in the query above:\n\nEXPLAIN ANALYZE SELECT aes FROM pj_info_attach\n WHERE attachment_nr in\n(SELECT attachment_nr FROM zz_attachment_graustufentest)\ngives 49s runtime, and full table scans.\n\nMerge Join (cost=158472.98..164927.22 rows=107569 width=8)\n (actual time=49714.702..49715.142 rows=20 loops=1)\n Merge Cond: (\"outer\".\"?column2?\" = \"inner\".\"?column3?\")\n -> Sort (cost=2.16..2.21 rows=20 width=13)(actual time=0.752..0.830 \nrows=20 loops=1)\n Sort Key: (zz_attachment_graustufentest.attachment_nr)::numeric\n -> Result (cost=1.63..1.73 rows=20 width=13) (actual \ntime=0.220..0.637 rows=20 loops=1)\n -> Unique (cost=1.63..1.73 rows=20 width=13) (actual \ntime=0.210..0.459 rows=20 loops=1)\n -> Sort (cost=1.63..1.68 rows=20 width=13) (actual \ntime=0.202..0.281 rows=20 loops=1)\n Sort Key: \nzz_attachment_graustufentest.attachment_nr\n -> Seq Scan on zz_attachment_graustufentest \n(cost=0.00..1.20 rows=20 width=13)\n \n(actual time=0.007..0.092 rows=20 loops=1)\n -> Sort (cost=158470.81..161160.04 rows=1075690 width=40)\n (actual time=44705.196..47222.685 rows=589842 loops=1)\n Sort Key: (public.pj_info_attach.attachment_nr)::numeric\n -> Result (cost=0.00..32736.90 rows=1075690 width=40)\n (actual time=0.023..21958.761 rows=1074930 loops=1)\n -> Append (cost=0.00..32736.90 rows=1075690 width=40)\n (actual time=0.015..13485.153 \nrows=1074930 loops=1)\n -> Seq Scan on pj_info_attach (cost=0.00..1433.57 \nrows=49957 width=21)\n \n(actual time=0.008..214.308 rows=49957 loops=1)\n -> Seq Scan on pj_info_attach_compressable \npj_info_attach (cost=0.00..31285.73 rows=1024973 width=21)\n \n(actual time=0.032..4812.090 rows=1024973 loops=1)\n -> Seq Scan on pj_info_attach_not_compressable \npj_info_attach (cost=0.00..17.60 rows=760 width=40)\n \n(actual time=0.005..0.005 rows=0 loops=1)\nTotal runtime: 49747.630 ms\n\nAny explanation for this horror?\n\nRegards,\nAndreas\n\n", "msg_date": "Thu, 08 Jun 2006 13:40:33 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": true, "msg_subject": "JOIN with inherited table ignores indexes" }, { "msg_contents": "On Thu, Jun 08, 2006 at 01:40:33PM +0200, Andreas Pflug wrote:\n> I have this table setup on a 8.1.4 server:\n> \n> pj_info_attach(attachment_nr, some more cols) -- index, 50k rows\n> pj_info_attach_compressable() INHERITS (pj_info_attach) -- index, 1M rows\n> pj_info_attach_not_compressable() INHERITS (pj_info_attach) -- index, 0 \n> rows\n> \n> EXPLAIN ANALYZE SELECT aes FROM pj_info_attach\n> WHERE attachment_nr in (.. 20 numeric key values.. 
)\n> yields a big bitmap index scan plan, 1.8ms total runtime, that's fine.\n> \n> Using a subselect on zz_attachment_graustufentest, which has 20 rows of \n> exactly the key values entered manually in the query above:\n\nI'm pretty sure the issue is that the planner doesn't know what values\nwill be coming back from the subselect at plan time, so if the\ndistribution of values in attachment_nr isn't fairly constant you can g\net some pretty bad plans. Unfortunately, no one's figured out a good way\nto fix this yet.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 8 Jun 2006 10:22:15 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JOIN with inherited table ignores indexes" }, { "msg_contents": "Andreas Pflug <[email protected]> writes:\n> Any explanation for this horror?\n\nExisting releases aren't smart about planning joins to inheritance\ntrees. CVS HEAD is better...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2006 12:06:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JOIN with inherited table ignores indexes " }, { "msg_contents": "Tom Lane wrote:\n> Andreas Pflug <[email protected]> writes:\n> \n>>Any explanation for this horror?\n> \n> \n> Existing releases aren't smart about planning joins to inheritance\n> trees.\n\nUsing a view that UNIONs SELECT .. ONLY as replacement for the parent \ntable isn't any better. Is that improved too?\n\n> CVS HEAD is better...\n\nCustomers like HEAD versions for production purposes :-)\n\nRegards,\nAndreas\n", "msg_date": "Thu, 08 Jun 2006 18:42:25 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JOIN with inherited table ignores indexes" } ]
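Until the planner work Tom mentions lands, one workaround on 8.1 is to turn the subselect into a list of literal constants before the statement is planned -- which is exactly the form the poster measured at under 2 ms. A sketch of doing that with plpgsql and EXECUTE (it assumes the tables from this thread and a non-empty key table; treat it as a workaround sketch, not part of the posted schema):

    CREATE OR REPLACE FUNCTION pick_attachments()
    RETURNS SETOF pj_info_attach AS $$
    DECLARE
        keys text;
        r    pj_info_attach%ROWTYPE;
    BEGIN
        SELECT array_to_string(ARRAY(SELECT attachment_nr
                                       FROM zz_attachment_graustufentest), ',')
          INTO keys;
        IF keys IS NULL OR keys = '' THEN
            RETURN;                                  -- nothing to look up
        END IF;
        FOR r IN EXECUTE 'SELECT * FROM pj_info_attach WHERE attachment_nr IN ('
                         || keys || ')' LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    SELECT aes FROM pick_attachments();

Because EXECUTE plans the query at call time with the keys as constants, the bitmap index scans from the fast variant come back.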
[ { "msg_contents": "Why Postgres 8.1 does not use makse_kuupaev_idx index in the following query \n?\n\nHow to speed this query up ?\n\nexplain analyze select * from makse order by kuupaev desc, kellaaeg desc \nlimit 100\n\n\"Limit (cost=62907.94..62908.19 rows=100 width=876) (actual \ntime=33699.551..33701.001 rows=100 loops=1)\"\n\" -> Sort (cost=62907.94..63040.49 rows=53022 width=876) (actual \ntime=33699.534..33700.129 rows=100 loops=1)\"\n\" Sort Key: kuupaev, kellaaeg\"\n\" -> Seq Scan on makse (cost=0.00..2717.22 rows=53022 width=876) \n(actual time=0.020..308.502 rows=53028 loops=1)\"\n\"Total runtime: 37857.177 ms\"\n\n\nCREATE TABLE makse(\n kuupaev date,\n kellaaeg char(6) NOT NULL DEFAULT ''::bpchar,\n guid char(36) NOT NULL,\n CONSTRAINT makse_pkey PRIMARY KEY (guid) )\n\n\nCREATE INDEX makse_kuupaev_idx ON makse USING btree (kuupaev);\n\n\nAndrus. \n\n\n", "msg_date": "Thu, 8 Jun 2006 21:53:17 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why date index is not used" }, { "msg_contents": "If you want to benefit from the usage of an index, the query has to\ncontain some WHERE conditions (on the indexed columns). This is a\n'select all' query - there is no way to speed it up using index.\n\nTomas\n\n> > Why Postgres 8.1 does not use makse_kuupaev_idx index in the\nfollowing query\n> > ?\n> >\n> > How to speed this query up ?\n> >\n> > explain analyze select * from makse order by kuupaev desc, kellaaeg\ndesc\n> > limit 100\n> >\n> > \"Limit (cost=62907.94..62908.19 rows=100 width=876) (actual\n> > time=33699.551..33701.001 rows=100 loops=1)\"\n> > \" -> Sort (cost=62907.94..63040.49 rows=53022 width=876) (actual\n> > time=33699.534..33700.129 rows=100 loops=1)\"\n> > \" Sort Key: kuupaev, kellaaeg\"\n> > \" -> Seq Scan on makse (cost=0.00..2717.22 rows=53022\nwidth=876)\n> > (actual time=0.020..308.502 rows=53028 loops=1)\"\n> > \"Total runtime: 37857.177 ms\"\n> >\n> >\n> > CREATE TABLE makse(\n> > kuupaev date,\n> > kellaaeg char(6) NOT NULL DEFAULT ''::bpchar,\n> > guid char(36) NOT NULL,\n> > CONSTRAINT makse_pkey PRIMARY KEY (guid) )\n> >\n> >\n> > CREATE INDEX makse_kuupaev_idx ON makse USING btree (kuupaev);\n", "msg_date": "Thu, 08 Jun 2006 21:10:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "More precisely - the Postgres could use the index to speed up the\nsorting, but in this case the sorting is very fast (less than one\nsecond according to the output), so Postgres probably decided not\nto use the index because it would be slower.\n\nBtw. have you run ANALYZE on the table recently? 
What is the number\nof distinct values in the 'kuupaev' column?\n\nTomas\n\n> Why Postgres 8.1 does not use makse_kuupaev_idx index in the following query \n> ?\n> \n> How to speed this query up ?\n> \n> explain analyze select * from makse order by kuupaev desc, kellaaeg desc \n> limit 100\n> \n> \"Limit (cost=62907.94..62908.19 rows=100 width=876) (actual \n> time=33699.551..33701.001 rows=100 loops=1)\"\n> \" -> Sort (cost=62907.94..63040.49 rows=53022 width=876) (actual \n> time=33699.534..33700.129 rows=100 loops=1)\"\n> \" Sort Key: kuupaev, kellaaeg\"\n> \" -> Seq Scan on makse (cost=0.00..2717.22 rows=53022 width=876) \n> (actual time=0.020..308.502 rows=53028 loops=1)\"\n> \"Total runtime: 37857.177 ms\"\n> \n> \n> CREATE TABLE makse(\n> kuupaev date,\n> kellaaeg char(6) NOT NULL DEFAULT ''::bpchar,\n> guid char(36) NOT NULL,\n> CONSTRAINT makse_pkey PRIMARY KEY (guid) )\n> \n> \n> CREATE INDEX makse_kuupaev_idx ON makse USING btree (kuupaev);\n> \n> \n> Andrus. \n\n", "msg_date": "Thu, 08 Jun 2006 21:20:21 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> Why Postgres 8.1 does not use makse_kuupaev_idx index in the following query \n> ?\n\nBecause it doesn't help --- the system still has to do the sort.\nYou'd need a two-column index on both of the ORDER BY columns to avoid\nsorting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2006 15:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used " }, { "msg_contents": "Actually It looks to me like the sorting is the slow part of this query.\nMaybe if you did create an index on both kuupaev and kellaaeg it might\nmake the sorting faster. Or maybe you could try increasing the server's\nwork mem. The sort will be much slower if the server can't do the whole\nthing in ram.\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Tomas Vondra\n> Sent: Thursday, June 08, 2006 2:20 PM\n> To: Andrus\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Why date index is not used\n> \n> \n> More precisely - the Postgres could use the index to speed up the\n> sorting, but in this case the sorting is very fast (less than one\n> second according to the output), so Postgres probably decided not\n> to use the index because it would be slower.\n> \n> Btw. have you run ANALYZE on the table recently? 
What is the number\n> of distinct values in the 'kuupaev' column?\n> \n> Tomas\n> \n> > Why Postgres 8.1 does not use makse_kuupaev_idx index in \n> the following query \n> > ?\n> > \n> > How to speed this query up ?\n> > \n> > explain analyze select * from makse order by kuupaev desc, \n> kellaaeg desc \n> > limit 100\n> > \n> > \"Limit (cost=62907.94..62908.19 rows=100 width=876) (actual \n> > time=33699.551..33701.001 rows=100 loops=1)\"\n> > \" -> Sort (cost=62907.94..63040.49 rows=53022 width=876) (actual \n> > time=33699.534..33700.129 rows=100 loops=1)\"\n> > \" Sort Key: kuupaev, kellaaeg\"\n> > \" -> Seq Scan on makse (cost=0.00..2717.22 \n> rows=53022 width=876) \n> > (actual time=0.020..308.502 rows=53028 loops=1)\"\n> > \"Total runtime: 37857.177 ms\"\n> > \n> > \n> > CREATE TABLE makse(\n> > kuupaev date,\n> > kellaaeg char(6) NOT NULL DEFAULT ''::bpchar,\n> > guid char(36) NOT NULL,\n> > CONSTRAINT makse_pkey PRIMARY KEY (guid) )\n> > \n> > \n> > CREATE INDEX makse_kuupaev_idx ON makse USING btree (kuupaev);\n> > \n> > \n> > Andrus. \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n http://archives.postgresql.org\n\n", "msg_date": "Thu, 8 Jun 2006 15:07:42 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "On Thu, Jun 08, 2006 at 03:20:55PM -0400, Tom Lane wrote:\n> \"Andrus\" <[email protected]> writes:\n> > Why Postgres 8.1 does not use makse_kuupaev_idx index in the following query \n> > ?\n> \n> Because it doesn't help --- the system still has to do the sort.\n> You'd need a two-column index on both of the ORDER BY columns to avoid\n> sorting.\n\nAnd even then you better have a pretty high correlation on the first\ncolumn, otherwise you'll still get a seqscan.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 8 Jun 2006 16:57:45 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> And even then you better have a pretty high correlation on the first\n> column, otherwise you'll still get a seqscan.\n\nNot with the LIMIT. (If he were fetching the whole table, very possibly\nthe sort would be the right plan anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jun 2006 18:19:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used " }, { "msg_contents": "> Btw. 
have you run ANALYZE on the table recently?\n\nI have autovacuum with default statitics settings running so I expect that \nit is analyzed.\n\n> What is the number\n> of distinct values in the 'kuupaev' column?\n\nselect count(distinct kuupaev) from makse\n\nreturns 61\n\nkuupaev is sales date.\n\nSo this can contain 365 distinct values per year and max 10 year database, \ntotal can be 3650 distinct values after 10 years.\n\nAndrus \n\n\n", "msg_date": "Fri, 9 Jun 2006 12:06:47 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "> Actually It looks to me like the sorting is the slow part of this query.\n> Maybe if you did create an index on both kuupaev and kellaaeg it might\n> make the sorting faster.\n\nThank you. It makes query fast.\n\n> Or maybe you could try increasing the server's\n> work mem. The sort will be much slower if the server can't do the whole\n> thing in ram.\n\nI have W2K server with 0.5 GB RAM\nthere are only 6 connections open ( 6 point of sales) to this server.\nshared_buffes is 10000\nI see approx 10 postgres processes in task manager each taking about 30 MB\nram\n\nServer prefomance is very slow: Windows swap file size is 1 GB\n\nFor each sale a new row will be inserted to this table. So the file size\ngrows rapidly every day.\nChanging work_mem by 1 MB increares memory requirment by 10 MB since I may\nhave 10 processes running. Sorting in memory this table requires very large\namout of work_mem for each process address space.\n\nI think that if I increase work_mem then swap file will became bigger and\nperfomance will decrease even more.\n\nHow to increase perfomance ?\n\nAndrus.\n\n\n\n", "msg_date": "Fri, 9 Jun 2006 12:40:26 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "Tom,\n\n> Because it doesn't help --- the system still has to do the sort.\n\nIt can help a lot in this case.\n\nkuupaev is sales date\nkellaaeg is sales time\n\nPostgres can use kuupaev index to fetch first 100 rows plus a number of more \nrows whose kellaaeg value is equal to kellaaeg in 100 th row. I have 500 \nsales per day.\nSo it can fetch 600 rows using index on kuupaev column.\n\nAfter that it can sort those 600 rows fast.\nCurrently it sorts blindly all 54000 rows in table.\n\n> You'd need a two-column index on both of the ORDER BY columns to avoid\n> sorting.\n\nThank you. It works.\n\nAndrus. \n\n\n", "msg_date": "Fri, 9 Jun 2006 12:52:21 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why date index is not used" }, { "msg_contents": "On Fri, Jun 09, 2006 at 12:40:26PM +0300, Andrus wrote:\n> > Actually It looks to me like the sorting is the slow part of this query.\n> > Maybe if you did create an index on both kuupaev and kellaaeg it might\n> > make the sorting faster.\n> \n> Thank you. It makes query fast.\n> \n> > Or maybe you could try increasing the server's\n> > work mem. The sort will be much slower if the server can't do the whole\n> > thing in ram.\n> \n> I have W2K server with 0.5 GB RAM\n> there are only 6 connections open ( 6 point of sales) to this server.\n> shared_buffes is 10000\n> I see approx 10 postgres processes in task manager each taking about 30 MB\n> ram\n> \n> Server prefomance is very slow: Windows swap file size is 1 GB\n> \n> For each sale a new row will be inserted to this table. 
So the file size\n> grows rapidly every day.\n> Changing work_mem by 1 MB increares memory requirment by 10 MB since I may\n> have 10 processes running. Sorting in memory this table requires very large\n> amout of work_mem for each process address space.\n> \n> I think that if I increase work_mem then swap file will became bigger and\n> perfomance will decrease even more.\n> \n> How to increase perfomance ?\n\nDo you have effective_cache_size set correctly? You might try dropping\nrandom_page_cost down to 2 or so.\n\nOf course you could just put more memory in the machine, too.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 9 Jun 2006 10:54:02 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why date index is not used" } ]
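For completeness, the change that made this query fast, spelled out against the table definition posted at the top of the thread (a backward scan of the two-column index satisfies the DESC, DESC ordering directly, so there is no sort step and essentially no work_mem is needed):

    CREATE INDEX makse_kuupaev_kellaaeg_idx ON makse (kuupaev, kellaaeg);
    ANALYZE makse;

    SELECT *
      FROM makse
     ORDER BY kuupaev DESC, kellaaeg DESC
     LIMIT 100;

With the LIMIT in place, only roughly the first hundred index entries have to be visited.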
[ { "msg_contents": "Hello,\n\n \n\nDuring insert or update, potgresql write in pgsql_tmp directory and so\nperformance are very poor.\n\n \n\nMy configuration is:\n\nWork mem 10240\n\nEffective_cache_size 30000\n\nShared buffers 9000\n\nMax_fsm_pages 35000\n\nWal Buffers 24\n\nAutovacuum on\n\n \n\nManual vacuum analyze and vacuum full analyze every day\n\n \n\n \n\nServer:\n\n1 Xeon processor\n\n2500 MB ram\n\nRed Hat Enterprise ES 3\n\nPostgresql (RPM from official website) 8.1.0\n\n \n\n \n\nTables are vacuumed frequently and now fsm is very low (only 3000 pages).\n\n \n\nUpdates and inserts on this database are infrequent, and files to import\naren't so big (7-50 Mb for 2000-20000 record in a txt file).\n\n \n\nOn this server are installed and active also Apache - Tomcat - Java 1.4.2\nwhich provide data to import.\n\n \n\nTables interested have only max 4 index.\n\n \n\nAre parameters adapted? \n\n \n\n \n\n \n\n \n\nThanks \n\n \n\nDomenico Mozzanica\n\n\n", "msg_date": "Fri, 9 Jun 2006 14:23:04 +0200", "msg_from": "\"Domenico - Sal. F.lli Riva\" <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql_tmp and postgres settings" }, { "msg_contents": "On Fri, Jun 09, 2006 at 02:23:04PM +0200, Domenico - Sal. F.lli Riva wrote:\n> Hello,\n> \n> During insert or update, potgresql write in pgsql_tmp directory and so\n> performance are very poor.\n\npgsql_tmp is used if a query runs out of work_mem, so you can try\nincreasing that.\n\n> My configuration is:\n> \n> Work mem 10240\n> \n> Effective_cache_size 30000\nYou're off by a factor of 10. \n\n> Shared buffers 9000\nI'd suggest bumping that up to at least 30000.\n\n> Postgresql (RPM from official website) 8.1.0\n\nYou should upgrade to 8.1.4. There's a number of data loss bugs waiting\nto bite you.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 9 Jun 2006 10:59:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql_tmp and postgres settings" }, { "msg_contents": "Where is the pgsql_tmp folder present ?. i am unable to see it in the data\ndirectory of postgresql.\n\n~gourish\n\nOn 6/9/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Fri, Jun 09, 2006 at 02:23:04PM +0200, Domenico - Sal. F.lli Riva\n> wrote:\n> > Hello,\n> >\n> > During insert or update, potgresql write in pgsql_tmp directory and so\n> > performance are very poor.\n>\n> pgsql_tmp is used if a query runs out of work_mem, so you can try\n> increasing that.\n>\n> > My configuration is:\n> >\n> > Work mem 10240\n> >\n> > Effective_cache_size 30000\n> You're off by a factor of 10.\n>\n> > Shared buffers 9000\n> I'd suggest bumping that up to at least 30000.\n>\n> > Postgresql (RPM from official website) 8.1.0\n>\n> You should upgrade to 8.1.4. There's a number of data loss bugs waiting\n> to bite you.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n-- \nBest,\nGourish Singbal\n\n \nWhere is the pgsql_tmp folder present ?.  i am unable to see it in the data directory of postgresql. \n \n~gourish \nOn 6/9/06, Jim C. 
Nasby <[email protected]> wrote:\nOn Fri, Jun 09, 2006 at 02:23:04PM +0200, Domenico - Sal. F.lli Riva wrote:> Hello,>> During insert or update, potgresql write in pgsql_tmp directory and so\n> performance are very poor.pgsql_tmp is used if a query runs out of work_mem, so you can tryincreasing that.> My configuration is:>> Work mem                    10240>\n> Effective_cache_size      30000You're off by a factor of 10.> Shared buffers              9000I'd suggest bumping that up to at least 30000.> Postgresql (RPM from official website) 8.1.0\nYou should upgrade to 8.1.4. There's a number of data loss bugs waitingto bite you.--Jim C. Nasby, Sr. Engineering Consultant      [email protected] Software      \nhttp://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?              http://archives.postgresql.org-- Best,Gourish Singbal", "msg_date": "Mon, 12 Jun 2006 11:26:23 +0530", "msg_from": "\"Gourish Singbal\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql_tmp and postgres settings" }, { "msg_contents": "On Mon, Jun 12, 2006 at 11:26:23AM +0530, Gourish Singbal wrote:\n> Where is the pgsql_tmp folder present ?. i am unable to see it in the data\n> directory of postgresql.\n\nIt will be under the *database* directory, under $PGDATA/base. SELECT\noid,* FROM pg_database; will tell you what directory to look in for your\ndatabase.\n\n> On 6/9/06, Jim C. Nasby <[email protected]> wrote:\n> >\n> >On Fri, Jun 09, 2006 at 02:23:04PM +0200, Domenico - Sal. F.lli Riva\n> >wrote:\n> >> Hello,\n> >>\n> >> During insert or update, potgresql write in pgsql_tmp directory and so\n> >> performance are very poor.\n> >\n> >pgsql_tmp is used if a query runs out of work_mem, so you can try\n> >increasing that.\n> >\n> >> My configuration is:\n> >>\n> >> Work mem 10240\n> >>\n> >> Effective_cache_size 30000\n> >You're off by a factor of 10.\n> >\n> >> Shared buffers 9000\n> >I'd suggest bumping that up to at least 30000.\n> >\n> >> Postgresql (RPM from official website) 8.1.0\n> >\n> >You should upgrade to 8.1.4. There's a number of data loss bugs waiting\n> >to bite you.\n> >--\n> >Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> >Pervasive Software http://pervasive.com work: 512-231-6117\n> >vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> \n> \n> \n> -- \n> Best,\n> Gourish Singbal\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 12 Jun 2006 10:15:30 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql_tmp and postgres settings" } ]
[ { "msg_contents": "AFAIK, the reason why seperating pg_xlog from the base files provides so\nmuch performance is because the latency on pg_xlog is critical: a\ntransaction can't commit until all of it's log data is written to disk\nvia fsync, and if you're trying to fsync frequently on the same drive as\nthe data tables are on, you'll have a big problem with the activity on\nthe data drives competing with trying to fsync pg_xlog rapidly.\n\nBut if you have a raid array with a battery-backed controller, this\nshouldn't be anywhere near as big an issue. The fsync on the log will\nreturn very quickly thanks to the cache, and the controller is then free\nto batch up writes to pg_xlog. Or at least that's the theory.\n\nHas anyone actually done any testing on this? Specifically, I'm\nwondering if the benefit of adding 2 more drives to a RAID10 outweighs\nwhatever penalties there are to having pg_xlog on that RAID10 with all\nthe rest of the data.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 9 Jun 2006 14:41:16 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_xlog on data partition with BBU RAID" }, { "msg_contents": "On Fri, 2006-06-09 at 14:41, Jim C. Nasby wrote:\n> AFAIK, the reason why seperating pg_xlog from the base files provides so\n> much performance is because the latency on pg_xlog is critical: a\n> transaction can't commit until all of it's log data is written to disk\n> via fsync, and if you're trying to fsync frequently on the same drive as\n> the data tables are on, you'll have a big problem with the activity on\n> the data drives competing with trying to fsync pg_xlog rapidly.\n> \n> But if you have a raid array with a battery-backed controller, this\n> shouldn't be anywhere near as big an issue. The fsync on the log will\n> return very quickly thanks to the cache, and the controller is then free\n> to batch up writes to pg_xlog. Or at least that's the theory.\n> \n> Has anyone actually done any testing on this? Specifically, I'm\n> wondering if the benefit of adding 2 more drives to a RAID10 outweighs\n> whatever penalties there are to having pg_xlog on that RAID10 with all\n> the rest of the data.\n\nI tested it WAY back when 7.4 first came out on a machine with BBU, and\nit didn't seem to make any difference HOW I set up the hard drives,\nRAID-5, 1+0, 1 it was all about the same. With BBU the transactions per\nsecond varied very little. If I recall correctly, it was something like\n600 or so tps with pgbench (scaling and num clients was around 20 I\nbelieve) It's been a while.\n\nIn the end, that server ran with a pair of 18 Gig drives in a RAID-1 and\nwas plenty fast for what we used it for. Due to corporate shenanigans\nit was still running pgsql 7.2.x at the time. ugh.\n\nI've not got access to a couple of Dell servers I might be able to test\nthis on... After our security audit maybe.\n", "msg_date": "Fri, 09 Jun 2006 15:21:22 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog on data partition with BBU RAID" }, { "msg_contents": "Forwarding to -performance\n\nFrom: Alan Hodgson [mailto:[email protected]]\n\nOn Friday 09 June 2006 12:41, \"Jim C. Nasby\" <[email protected]> wrote:\n> Has anyone actually done any testing on this? 
Specifically, I'm\n> wondering if the benefit of adding 2 more drives to a RAID10 outweighs\n> whatever penalties there are to having pg_xlog on that RAID10 with all\n> the rest of the data.\n\nI have an external array with 1GB of write-back cache, and testing on it \nbefore deployment showed no difference under any workload I could generate \nbetween having pg_xlog on a separate RAID-1 or having it share a RAID-10 \nwith the default tablespace. I left it on the RAID-10, and it has been \nfine there. We have a very write-heavy workload.\n\n-- \n\"If a nation expects to be ignorant and free, in a state of civilization,\nit expects what never was and never will be.\" -- Thomas Jefferson\n\n\n", "msg_date": "Fri, 9 Jun 2006 21:24:47 -0500", "msg_from": "\"Jim Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "FW: pg_xlog on data partition with BBU RAID" } ]
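For anyone wanting to repeat this comparison on their own hardware, a rough recipe is to run the same pgbench load twice and move pg_xlog between runs with a symlink. Paths and the scale factor below are only examples, and the server must be stopped while pg_xlog is moved:

    pgbench -i -s 50 testdb              # build a test database
    pgbench -c 20 -t 1000 testdb         # baseline: pg_xlog on the shared RAID10

    pg_ctl stop
    mv $PGDATA/pg_xlog /raid1/pg_xlog
    ln -s /raid1/pg_xlog $PGDATA/pg_xlog
    pg_ctl start
    pgbench -c 20 -t 1000 testdb         # pg_xlog on its own mirror

With a battery-backed write cache the two tps figures tend to come out very close, which is the point both posters above are making.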
[ { "msg_contents": "Hi,\n\nI have the following quering plans:\n\n\"Seq Scan on ind_uni_100 (cost=0.00..27242.00 rows=1000000 width=104)\n(actual time=0.272..2444.667 rows=1000000 loops=1)\"\n\"Total runtime: 4229.449 ms\"\n\nand\n\n\"Bitmap Heap Scan on ind_uni_100 (cost=314.00..18181.00 rows=50000\nwidth=104) (actual time=74.106..585.368 rows=49758 loops=1)\"\n\" Recheck Cond: (b = 1)\"\n\" -> Bitmap Index Scan on index_b_ind_uni_100\n(cost=0.00..314.00rows=50000 width=0) (actual time=\n61.814..61.814 rows=49758 loops=1)\"\n\" Index Cond: (b = 1)\"\n\"Total runtime: 638.787 ms\"\n\nfrom pg_stast_get_blocks_fetched i can see that both queries need almost the\nsame number of disk fetches which is quite reasonable ( the index is\nunclustered).\n\nBut as you can see there is a great variation between query\nruntimes.Cansomeone explain this differnce?\n\nThanks!\n\nHi,I have the following quering plans:\"Seq Scan on ind_uni_100 (cost=0.00..27242.00 rows=1000000 width=104) (actual time=0.272..2444.667 rows=1000000 loops=1)\"\"Total runtime: 4229.449 ms\"\nand\"Bitmap Heap Scan on ind_uni_100  (cost=314.00..18181.00 rows=50000 width=104) (actual time=74.106..585.368 rows=49758 loops=1)\"\"  Recheck Cond: (b = 1)\"\"  ->  Bitmap Index Scan on index_b_ind_uni_100  (cost=\n0.00..314.00 rows=50000 width=0) (actual time=61.814..61.814 rows=49758 loops=1)\"\"        Index Cond: (b = 1)\"\"Total runtime: 638.787 ms\"from pg_stast_get_blocks_fetched i can see that both queries need almost the same number of disk fetches which is quite reasonable ( the index is unclustered).\nBut as you can see there is a great variation between query runtimes.Can someone explain this differnce?Thanks!", "msg_date": "Sun, 11 Jun 2006 09:35:20 +0300", "msg_from": "\"John Top-k apad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Variation between query runtimes" }, { "msg_contents": "\n\"\"John Top-k apad\"\" <[email protected]> wrote\n>\n> from pg_stast_get_blocks_fetched i can see that both queries need almost\nthe\n> same number of disk fetches which is quite reasonable ( the index is\n> unclustered).\n>\n> But as you can see there is a great variation between query\n> runtimes.Cansomeone explain this differnce?\n>\n\nCan you give a self-contained example (including what you did to clear the\nfile system cache (maybe unmount?) to *not* let the 2nd query to use the\nfile content from the 1st query)?\n\nRegards,\nQingqing\n\n\n", "msg_date": "Mon, 12 Jun 2006 10:50:28 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variation between query runtimes" } ]
[ { "msg_contents": "My application has a function, call it \"foo()\", that requires initialization from a table of about 800 values. Rather than build these values into the C code, it seemed like a good idea to put them on a PG table and create a second function, call it \"foo_init()\", which is called for each value, like this:\n\n select foo_init(value) from foo_init_table order by value_id;\n\nThis works well, but it requires me to actually retrieve the function's value 800 times. So I thought I'd be clever:\n\n select count(1) from (select foo_init(value) from foo_init_table order by value_id) as foo;\n\nAnd indeed, it count() returns 800, as expected. But my function foo_init() never gets called! Apparently the optimizer figures out that foo_init() must return one value for each row, so it doesn't bother to actually call the function.\n\ndb=> explain select count(1) from (select foo_init(value) from foo_init_table order by db_no) as foo;\n query plan \n----------------------------------------------------------------------------------------------------\n aggregate (cost=69.95..69.95 rows=1 width=0)\n -> Subquery Scan foo (cost=0.00..67.93 rows=806 width=0)\n -> Index Scan using foo_init_table_pkey on foo_init_table (cost=0.00..59.87 rows=806 width=30)\n\nThis doesn't seem right to me -- how can the optimizer possibly know that a function doesn't have a side effect, as in my case? Functions could do all sorts of things, such as logging activity, filling in other tables, etc, etc.\n\nAm I missing something here?\n\nThanks,\nCraig\n", "msg_date": "Sun, 11 Jun 2006 10:18:20 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "function not called if part of aggregate" }, { "msg_contents": "On Sun, Jun 11, 2006 at 10:18:20AM -0700, Craig A. James wrote:\n> This works well, but it requires me to actually retrieve the function's \n> value 800 times.\n\nIs this actually a problem?\n\n> So I thought I'd be clever:\n> \n> select count(1) from (select foo_init(value) from foo_init_table order by \n> value_id) as foo;\n\nWhy not just count(foo_init(value))?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 11 Jun 2006 19:31:55 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function not called if part of aggregate" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> select count(1) from (select foo_init(value) from foo_init_table order by value_id) as foo;\n\n> And indeed, it count() returns 800, as expected. But my function foo_init() never gets called!\n\nReally? With the ORDER BY in there, it does get called, in my\nexperiments. What PG version is this exactly?\n\nHowever, the short answer to your question is that PG does not guarantee\nto evaluate parts of the query not needed to determine the result. You\ncould do something like\n\nselect count(x) from (select foo_init(value) as x from foo_init_table order by value_id) as foo;\n\nto ensure that foo_init() must be evaluated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jun 2006 13:39:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function not called if part of aggregate " }, { "msg_contents": "On Sun, Jun 11, 2006 at 10:18:20AM -0700, Craig A. James wrote:\n> This doesn't seem right to me -- how can the optimizer possibly know that a \n> function doesn't have a side effect, as in my case? 
Functions could do all \n> sorts of things, such as logging activity, filling in other tables, etc, \n> etc.\n> \n> Am I missing something here?\n\nRead about function stability in the docs.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Sun, 11 Jun 2006 12:48:37 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function not called if part of aggregate" }, { "msg_contents": "\n\"Craig A. James\" <[email protected]> writes:\n\n> This doesn't seem right to me -- how can the optimizer possibly know that a\n> function doesn't have a side effect, as in my case? Functions could do all\n> sorts of things, such as logging activity, filling in other tables, etc, etc.\n\nThe optimizer can know this if the user tells it so by marking the function\nIMMUTABLE. If the function is marked VOLATILE then the optimizer can know it\nmight have side effects.\n\nHowever that's not enough to explain what you've shown. How about you show the\nactual query and actual plan you're working with? The plan you've shown can't\nresult from the query you sent.\n\n-- \ngreg\n\n", "msg_date": "11 Jun 2006 14:00:38 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function not called if part of aggregate" }, { "msg_contents": "Greg Stark wrote:\n> However that's not enough to explain what you've shown. How about you show the\n> actual query and actual plan you're working with? The plan you've shown can't\n> result from the query you sent.\n\nMea culpa, sort of. But ... in fact, the plan I sent *was* from query I sent, with the table/column names changed for clarity. This time I'll send the plan \"raw\". (This is PG 8.0.1.)\n\nchm=> explain select count(1) from (select normalize_add_salt(smiles) from\nchm(> salt_smiles order by db_no) as foo;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------\n Aggregate (cost=69.95..69.95 rows=1 width=0)\n -> Subquery Scan foo (cost=0.00..67.93 rows=806 width=0)\n -> Index Scan using salt_smiles_pkey on salt_smiles (cost=0.00..59.87 rows=806 width=30)\n(3 rows)\n\nAs pointed out by Tom and others, this query DOES in fact call the normalize_add_salt() function.\n\nNow here's the weird part. (And where my original posting went wrong -- sorry for the error! I got the two queries mixed up.)\n\nI originally had a more complex query, the purpose being to guarantee that the function was called on the strings in the order specified. (More on this below.) Here is the original query I used:\n\nchm=> explain select count(1) from (select normalize_add_salt(smiles)\nchm(> from (select smiles from salt_smiles order by db_no) as foo) as bar;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------\n Aggregate (cost=67.94..67.94 rows=1 width=0)\n -> Subquery Scan foo (cost=0.00..65.92 rows=806 width=0)\n -> Index Scan using salt_smiles_pkey on salt_smiles (cost=0.00..57.86 rows=806 width=30)\n(3 rows)\n\nNotice that the plans are essentially identical, yet in this one the function does NOT get called. I proved this by brute force, inserting \"char **p = NULL; *p = \"foo\";\" into the C code to guarantee a segmentation violation if the function gets called. 
In the first case it does SIGSEGV, and in the second case it does not.\n\nNow the reason for this more-complex query with an additional subselect is that the SMILES (which, by the way, are a lexical way of representing chemical structures - see www.daylight.com), must be passed to the function in a particular order (hence the ORDER BY). In retrospect I realize the optimizer apparently flattens this query anyway (hence the identical plans, above).\n\nBut the weird thing is that, in spite of flattening, which would appear to make the queries equivalent, the function gets called in one case, and not in the other.\n\nSteinar H. Gunderson asked:\n>> select count(1) from (select foo_init(value) from foo_init_table order by \n>> value_id) as foo;\n> Why not just count(foo_init(value))?\n\nBecause the SMILES must be processed in a specific order, hence the more complex queries.\n\nThe simple answer to this whole problem is what Steinar wrote:\n>>This works well, but it requires me to actually retrieve the function's \n>>value 800 times.\n> \n> Is this actually a problem?\n\nNo, it's just a nuisance. It occurs to me that in spite of the ORDER BY expression, Postgres is free to evaluate the function first, THEN sort the results, which means the SMILES would be processed in random order anyway. I.e. my ORDER BY clause is useless for the intended purpose.\n\nSo the only way I can see to get this right is to pull the SMILES into my application with the ORDER BY to ensure I have them in the correct order, then send them back one at a time via a \"select normalize_add_salt(smiles)\", meaning I'll retrieve 800 strings and then send them back. \n\nI just thought there ought to be a way to do this all on the PG server instead of sending all these strings back and forth. I'd like to say to Postgres, \"Just do it this way, OK?\" But the optimizer can't be turned off, so I guess I have to do it the slow way. The good news is that this is just an initialization step, after which I typically process thousands of molecules, so the extra overhead won't kill me.\n\nThanks to all for your help.\n\nCraig\n", "msg_date": "Mon, 12 Jun 2006 23:26:26 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function not called if part of aggregate" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> But the weird thing is that, in spite of flattening, which would appear to make the queries equivalent, the function gets called in one case, and not in the other.\n\nNo, nothing particularly weird about it. ORDER BY in a subselect\nacts as an \"optimization fence\" that prevents flattening. An\nun-flattened subquery will evaluate all its output columns whether the\nparent query reads them or not. (This is not set in stone mind you,\nbut in the current planner implementation it's hard to avoid, because\nsuch a sub-query gets planned before we've figured out which columns\nthe parent wants to reference.) The cases in which you had the function\nin a subquery without ORDER BY were flattenable, and in that case the\nplanner threw the function expression away as being unreferenced.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 10:36:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function not called if part of aggregate " }, { "msg_contents": "I have a query that needs to run faster, with the obvious solution being to add an index. But to confirm this, I ran explain analyze. 
When I run the actual query, it consistently takes 6-7 seconds by the wall clock. My application with a \"verbose\" mode enabled reports 6.6 seconds consistently. However, when I run EXPLAIN ANALYZE, it takes 120 seconds! This is 20x longer, and it leads me to distrust the plan that it claims to be executing. How can the actual run time be so much faster than that claimed by EXPLAIN ANALYZE? How can I find out the actual plan it's using?\n\nThanks,\nCraig\n\n\nDetails:\n Postgres 8.0.3\n shared_buffers = 20000\n work_mem = 500000\n effective_cache_size = 430000\n Dell w/ Xeon\n Linux kernel 2.6.9-1.667smp\n 4 GB memory\n\n=> explain analyze select SAMPLE.SAMPLE_ID, SAMPLE.VERSION_ID,SAMPLE.SUPPLIER_ID,SAMPLE.CATALOGUE_ID,SAMPLE.PREP_ID from HITLIST_ROWS_281430 join SAMPLE on (HITLIST_ROWS_281430.OBJECTID = SAMPLE.SAMPLE_ID) where SAMPLE.VERSION_ID in (7513672,7513650,7513634,7513620,7513592,7513590,7513582,7513576,7513562,7513560) order by HITLIST_ROWS_281430.SortOrder;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=234964.38..234964.52 rows=58 width=24) (actual time=120510.842..120510.889 rows=10 loops=1)\n Sort Key: hitlist_rows_281430.sortorder\n -> Hash Join (cost=353.68..234962.68 rows=58 width=24) (actual time=81433.194..120510.753 rows=10 loops=1)\n Hash Cond: (\"outer\".objectid = \"inner\".sample_id)\n -> Seq Scan on hitlist_rows_281430 (cost=0.00..177121.61 rows=11497361 width=8) (actual time=0.008..64434.110 rows=11497361 loops=1)\n -> Hash (cost=353.48..353.48 rows=82 width=20) (actual time=0.293..0.293 rows=0 loops=1)\n -> Index Scan using i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id, i_sample_version_id on sample (cost=0.00..353.48 rows=82 width=20) (actual time=0.042..0.201 rows=12 loops=1)\n Index Cond: ((version_id = 7513672) OR (version_id = 7513650) OR (version_id = 7513634) OR (version_id = 7513620) OR (version_id = 7513592) OR (version_id = 7513590) OR (version_id = 7513582) OR (version_id = 7513576) OR (version_id = 7513562) OR (version_id = 7513560))\n Total runtime: 120511.485 ms\n(9 rows)\n", "msg_date": "Thu, 29 Jun 2006 21:52:42 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "explain analyze reports 20x more time than actual" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I have a query that needs to run faster, with the obvious solution\n> being to add an index. But to confirm this, I ran explain analyze.\n> When I run the actual query, it consistently takes 6-7 seconds by the\n> wall clock. My application with a \"verbose\" mode enabled reports 6.6\n> seconds consistently. However, when I run EXPLAIN ANALYZE, it takes\n> 120 seconds!\n\nSee recent discussions --- if you've got duff PC hardware, it seems that\nreading the clock takes forever :-(. 
In this case I'd assume that the\ncost of the seqscan (11497361 rows returned) is being overstated because\nof the 2*11497361 gettimeofday calls involved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jun 2006 02:04:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports 20x more time than actual " } ]
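To restate the fix suggested earlier in this thread with the poster's own table names: the reliable way to force the function to run for every row is to make the outer aggregate reference the subquery's output column instead of counting a constant, for example:

    SELECT count(s)
    FROM (SELECT normalize_add_salt(smiles) AS s
          FROM salt_smiles
          ORDER BY db_no) AS foo;

Because count() now depends on the function's result, the planner cannot discard the expression whether or not it flattens the subquery. Whether the rows actually reach the function in db_no order is a separate question, as the poster notes, so ordering-sensitive initialization is still safer done row by row from the client.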
[ { "msg_contents": "\nHi,\n\nIm having a problem with postgres 8.1.3 on a Fedora Core 3 (kernel \n2.6.9-1.667smp)\n\nI have two similar servers, one in production and another for testing \npurposes.\nDatabases are equal (with a difference of some hours)\n\nIn the testing server, an sql sentence takes arround 1 sec.\nIn production server (low server load) takes arround 50 secs, and uses \ntoo much resources.\n\nExplain analyze takes too much load, i had to cancel it!\n\nCould it be a it a bug?\nAny ideas?\n\nThanks in advance\n\n\n", "msg_date": "Mon, 12 Jun 2006 16:38:57 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Posrgres speed problem" }, { "msg_contents": "Do you run analyze on the production server regularly?\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Ruben Rubio Rey\n> Sent: Monday, June 12, 2006 9:39 AM\n> To: [email protected]\n> Subject: [PERFORM] Posrgres speed problem\n> \n> \n> \n> Hi,\n> \n> Im having a problem with postgres 8.1.3 on a Fedora Core 3 (kernel \n> 2.6.9-1.667smp)\n> \n> I have two similar servers, one in production and another for testing \n> purposes.\n> Databases are equal (with a difference of some hours)\n> \n> In the testing server, an sql sentence takes arround 1 sec.\n> In production server (low server load) takes arround 50 secs, \n> and uses \n> too much resources.\n> \n> Explain analyze takes too much load, i had to cancel it!\n> \n> Could it be a it a bug?\n> Any ideas?\n> \n> Thanks in advance\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Mon, 12 Jun 2006 09:45:23 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "G�briel �kos wrote:\n\n> Ruben Rubio Rey wrote:\n>\n>>\n>> Hi,\n>>\n>> Im having a problem with postgres 8.1.3 on a Fedora Core 3 (kernel \n>> 2.6.9-1.667smp)\n>>\n>> I have two similar servers, one in production and another for testing \n>> purposes.\n>> Databases are equal (with a difference of some hours)\n>>\n>> In the testing server, an sql sentence takes arround 1 sec.\n>> In production server (low server load) takes arround 50 secs, and \n>> uses too much resources.\n>>\n>> Explain analyze takes too much load, i had to cancel it!\n>>\n>> Could it be a it a bug?\n>> Any ideas?\n>\n>\n> vacuum full analyse the database.\n>\n>\nI use to do it all nights\nIts an script with content:\n\nDIREC=/usr/local/pgsql/bin/\nDIRLOGS=/var/log/rentalia\nLOGBIN=/usr/sbin/cronolog\necho \"vacuum vacadb...\" | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\ndate | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\n$DIREC/vacuumdb -f -v --analyze vacadb 2>&1 | $LOGBIN \n$DIRLOGS/%Y-%m-%d_limpieza.log\necho \"reindex database vacadb;\" | $DIREC/psql vacadb 2>&1 | $LOGBIN \n$DIRLOGS/%Y-%m-%d_limpieza.log\ndate | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\n\nNo errors or warnings are reported. instead repeating it now, I preffer \nto wait at tomorrow to check again the logs\n", "msg_date": "Mon, 12 Jun 2006 16:58:49 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "Jonah H. 
Harris wrote:\n\n> On 6/12/06, Ruben Rubio Rey <[email protected]> wrote:\n>\n>> I have two similar servers, one in production and another\n>> for testing purposes. In testing server ~1sec ... in\n>> production ~50 secs\n>\n>\n> What ver of PostgreSQL?\n\nVersion 8.1.3\n\n> Same ver on both systems?\n\nYes\n\n> Are there any\n> locks currently held on the resources needed in your Production\n> environment?\n\nHow to check it?\n\n> Have you analyzed both databases?\n\nI have restores testing server today. Full Analyce included.\nProduction server all nights is done. (i have posted the script in other \nmessage to the mailing list)\n\n> Any sequential scans\n> running?\n\nIn the table, there is several scans.\n\nvacadb=# \\d grupoforo\n Table \"public.grupoforo\"\n Column | Type \n| Modifiers\n------------------+-----------------------------+---------------------------------------------------------------\n idmensaje | integer | not null default \nnextval('grupoforo_idmensaje_seq'::regclass)\n idusuario | integer | not null\n idgrupo | integer | not null\n idmensajetema | integer | not null default -1\n mensaje | character varying(4000) |\n asunto | character varying(255) | not null\n fechalocal | timestamp without time zone | default now()\n webenabled | integer | not null default 1\n por | character varying(255) |\n estadocomentario | character(1) | default 'D'::bpchar\n idlenguaje | character(2) | default 'ES'::bpchar\n fechacreacion | timestamp without time zone | default now()\n hijos | integer |\n hijoreciente | timestamp without time zone |\n valoracion | integer | default 0\n codigo | character varying(100) |\nIndexes:\n \"pk_grupoforo\" PRIMARY KEY, btree (idmensaje)\n \"grupoforo_asunto_idx\" btree (asunto)\n \"grupoforo_codigo_idx\" btree (codigo)\n \"grupoforo_estadocomentario_idx\" btree (estadocomentario)\n \"grupoforo_idgrupo_idx\" btree (idgrupo)\n \"grupoforo_idlenguaje_idx\" btree (idlenguaje)\n \"grupoforo_idmensajetema_idx\" btree (idmensajetema)\n \"grupoforo_idusuario_idx\" btree (idusuario)\n \"idx_grupoforo_webenabled\" btree (webenabled)\n\n\n> If so, have you vacuumed?\n\nYes.\n\n>\n> Send the explain analyze from your test database.\n\nTomorrow morning i ll send it ... now it could be a disaster ...\n\n>\n>\n\n", "msg_date": "Mon, 12 Jun 2006 17:02:28 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "On Mon, Jun 12, 2006 at 04:38:57PM +0200, Ruben Rubio Rey wrote:\n> I have two similar servers, one in production and another for testing \n> purposes.\n> Databases are equal (with a difference of some hours)\n> \n> In the testing server, an sql sentence takes arround 1 sec.\n> In production server (low server load) takes arround 50 secs, and uses \n> too much resources.\n> \n> Explain analyze takes too much load, i had to cancel it!\n\nThe EXPLAIN ANALYZE output would be helpful, but if you don't want\nto run it to completion then please post the output of EXPLAIN\nANALYZE for the fast system and EXPLAIN (without ANALYZE) for the\nslow one.\n\nAs someone else asked, are you running ANALYZE regularly? 
What\nabout VACUUM?\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 12 Jun 2006 09:05:06 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "On Mon, Jun 12, 2006 at 04:58:49PM +0200, Ruben Rubio Rey wrote:\n> $DIREC/vacuumdb -f -v --analyze vacadb 2>&1 | $LOGBIN \n> $DIRLOGS/%Y-%m-%d_limpieza.log\n> echo \"reindex database vacadb;\" | $DIREC/psql vacadb 2>&1 | $LOGBIN \n> $DIRLOGS/%Y-%m-%d_limpieza.log\n> date | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\n\nUgh. Is there some reason you're not using the built-in autovacuum? If\nyou enable it and cut the thresholds in half you'll most likely never\nneed to vacuum manually, let alone reindex.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 12 Jun 2006 10:17:58 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "Jim C. Nasby wrote:\n\n>On Mon, Jun 12, 2006 at 04:58:49PM +0200, Ruben Rubio Rey wrote:\n> \n>\n>>$DIREC/vacuumdb -f -v --analyze vacadb 2>&1 | $LOGBIN \n>>$DIRLOGS/%Y-%m-%d_limpieza.log\n>>echo \"reindex database vacadb;\" | $DIREC/psql vacadb 2>&1 | $LOGBIN \n>>$DIRLOGS/%Y-%m-%d_limpieza.log\n>>date | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\n>> \n>>\n>\n>Ugh. Is there some reason you're not using the built-in autovacuum?\n>\nHow do I execute built-in autovacuum?\n\n\n", "msg_date": "Mon, 12 Jun 2006 17:22:05 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "On Mon, Jun 12, 2006 at 09:05:06AM -0600, Michael Fuhr wrote:\n> On Mon, Jun 12, 2006 at 04:38:57PM +0200, Ruben Rubio Rey wrote:\n> > I have two similar servers, one in production and another for testing \n> > purposes.\n> > Databases are equal (with a difference of some hours)\n> > \n> > In the testing server, an sql sentence takes arround 1 sec.\n> > In production server (low server load) takes arround 50 secs, and uses \n> > too much resources.\n> > \n> > Explain analyze takes too much load, i had to cancel it!\n> \n> The EXPLAIN ANALYZE output would be helpful, but if you don't want\n> to run it to completion then please post the output of EXPLAIN\n> ANALYZE for the fast system and EXPLAIN (without ANALYZE) for the\n> slow one.\n> \n> As someone else asked, are you running ANALYZE regularly? What\n> about VACUUM?\n\nFor the next vacuum, can you add the -v (verbose) switch and email the\nlast few lines of output?\n\nINFO: free space map contains 39 pages in 56 relations\nDETAIL: A total of 896 page slots are in use (including overhead).\n896 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 223 KB.\nVACUUM\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 12 Jun 2006 10:23:54 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "On Mon, Jun 12, 2006 at 05:22:05PM +0200, Ruben Rubio Rey wrote:\n> Jim C. 
Nasby wrote:\n> \n> >On Mon, Jun 12, 2006 at 04:58:49PM +0200, Ruben Rubio Rey wrote:\n> > \n> >\n> >>$DIREC/vacuumdb -f -v --analyze vacadb 2>&1 | $LOGBIN \n> >>$DIRLOGS/%Y-%m-%d_limpieza.log\n> >>echo \"reindex database vacadb;\" | $DIREC/psql vacadb 2>&1 | $LOGBIN \n> >>$DIRLOGS/%Y-%m-%d_limpieza.log\n> >>date | $LOGBIN $DIRLOGS/%Y-%m-%d_limpieza.log\n> >> \n> >>\n> >\n> >Ugh. Is there some reason you're not using the built-in autovacuum?\n> >\n> How do I execute built-in autovacuum?\n\nMake the following changes to postgresql.conf:\n\nautovacuum = on # enable autovacuum subprocess?\nautovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\nautovacuum_analyze_threshold = 200 # min # of tuple updates before \nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before \n # vacuum\nautovacuum_analyze_scale_factor = 0.1 # fraction of rel size before \n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 12 Jun 2006 10:25:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "Hi Ruben,\n\nRuben Rubio Rey schrieb:\n> \n> Hi,\n> \n> Im having a problem with postgres 8.1.3 on a Fedora Core 3 (kernel \n> 2.6.9-1.667smp)\n> \n> I have two similar servers, one in production and another for testing \n> purposes.\n> Databases are equal (with a difference of some hours)\n> \n> In the testing server, an sql sentence takes arround 1 sec.\n> In production server (low server load) takes arround 50 secs, and uses \n> too much resources.\n> \n> Explain analyze takes too much load, i had to cancel it!\n> \n> Could it be a it a bug?\n> Any ideas?\n\nHow do you load the data to the testing server? (Dump, Copy, etc)\nAs you wrote the difference are some hours. I think you copy something.\n\nIt is possible that you production database as too much deleted tuples.\nVacuum full does only rebuild the table an not the index. You may also \nrun reindex on certain tables. I guess, this may the issue if you use \ndump/restore to get your production copy.\n\nIs three a huge difference in the result of this queries:\nselect relname,relpages,reltuples from pg_class order by relpages desc;\nand\nselect relname,relpages,reltuples from pg_class where relname like \n'%index' order by relpages desc;\n\nCheers Sven.\n", "msg_date": "Mon, 12 Jun 2006 17:33:20 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "Jim C. Nasby wrote:\n\n>On Mon, Jun 12, 2006 at 09:05:06AM -0600, Michael Fuhr wrote:\n> \n>\n>>On Mon, Jun 12, 2006 at 04:38:57PM +0200, Ruben Rubio Rey wrote:\n>> \n>>\n>>>I have two similar servers, one in production and another for testing \n>>>purposes.\n>>>Databases are equal (with a difference of some hours)\n>>>\n>>>In the testing server, an sql sentence takes arround 1 sec.\n>>>In production server (low server load) takes arround 50 secs, and uses \n>>>too much resources.\n>>>\n>>>Explain analyze takes too much load, i had to cancel it!\n>>> \n>>>\n>>The EXPLAIN ANALYZE output would be helpful, but if you don't want\n>>to run it to completion then please post the output of EXPLAIN\n>>ANALYZE for the fast system and EXPLAIN (without ANALYZE) for the\n>>slow one.\n>>\n>>As someone else asked, are you running ANALYZE regularly? 
What\n>>about VACUUM?\n>> \n>>\n>\n>For the next vacuum, can you add the -v (verbose) switch and email the\n>last few lines of output?\n>\n>INFO: free space map contains 39 pages in 56 relations\n>DETAIL: A total of 896 page slots are in use (including overhead).\n>896 page slots are required to track all free space.\n>Current limits are: 20000 page slots, 1000 relations, using 223 KB.\n>VACUUM\n> \n>\nINFO: free space map contains 1624 pages in 137 relations\nDETAIL: A total of 3200 page slots are in use (including overhead).\n3200 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 182 KB.\n\n", "msg_date": "Tue, 13 Jun 2006 08:37:36 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem" }, { "msg_contents": "Tonight database has been vacumm full and reindex (all nights database \ndo it)\n\nNow its working fine. Speed is as spected. I ll be watching that sql ...\nMaybe the problem exists when database is busy, or maybe its solved ...\n", "msg_date": "Tue, 13 Jun 2006 08:44:31 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem - solved?" }, { "msg_contents": "On 13.06.2006, at 8:44 Uhr, Ruben Rubio Rey wrote:\n\n> Tonight database has been vacumm full and reindex (all nights \n> database do it)\n>\n> Now its working fine. Speed is as spected. I ll be watching that \n> sql ...\n> Maybe the problem exists when database is busy, or maybe its \n> solved ...\n\nDepending on the usage pattern the nightly re-index / vacuum analyse \nis suboptimal. If you have high insert/update traffic your \nperformance will decrease over the day and will only be good in the \nmorning hours and I hope this is not what you intend to have.\n\nAutovacuum is the way to go, if you have \"changing content\". Perhaps \ncombined with vacuum analyse in a nightly or weekly schedule. We do \nthis weekly.\n\ncug\n", "msg_date": "Tue, 13 Jun 2006 10:11:23 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem - solved?" }, { "msg_contents": "Guido Neitzer wrote:\n\n> On 13.06.2006, at 8:44 Uhr, Ruben Rubio Rey wrote:\n>\n>> Tonight database has been vacumm full and reindex (all nights \n>> database do it)\n>>\n>> Now its working fine. Speed is as spected. I ll be watching that sql \n>> ...\n>> Maybe the problem exists when database is busy, or maybe its solved ...\n>\n>\n> Depending on the usage pattern the nightly re-index / vacuum analyse \n> is suboptimal. If you have high insert/update traffic your \n> performance will decrease over the day and will only be good in the \n> morning hours and I hope this is not what you intend to have.\n>\n> Autovacuum is the way to go, if you have \"changing content\". Perhaps \n> combined with vacuum analyse in a nightly or weekly schedule. We do \n> this weekly.\n>\n> cug\n>\n>\nI ll configure autovacum. I ll write if problem is solved.\n\n", "msg_date": "Tue, 13 Jun 2006 10:17:08 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem - solved?" }, { "msg_contents": "\nSeems autovacumm is working fine. Logs are reporting that is being useful.\n\nBut server load is high. 
Is out there any way to stop \"autovacumm\" if \nserver load is very high?\n\nThanks everyone!!!\n", "msg_date": "Tue, 13 Jun 2006 12:33:56 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posrgres speed problem - solved!" }, { "msg_contents": "On 13.06.2006, at 12:33 Uhr, Ruben Rubio Rey wrote:\n\n> Seems autovacumm is working fine. Logs are reporting that is being \n> useful.\n>\n> But server load is high. Is out there any way to stop \"autovacumm\" \n> if server load is very high?\n\nLook at the cost settings for vacuum and autovacuum. From the manual:\n\n\"During the execution of VACUUM and ANALYZE commands, the system \nmaintains an internal\ncounter that keeps track of the estimated cost of the various I/O \noperations that are performed. When\nthe accumulated cost reaches a limit (specified by \nvacuum_cost_limit), the process performing\nthe operation will sleep for a while (specified by \nvacuum_cost_delay). Then it will reset the\ncounter and continue execution.\n\nThe intent of this feature is to allow administrators to reduce the I/ \nO impact of these commands on\nconcurrent database activity. There are many situations in which it \nis not very important that mainte-\nnance commands like VACUUM and ANALYZE finish quickly; however, it is \nusually very important that\nthese commands do not significantly interfere with the ability of the \nsystem to perform other database\noperations. Cost-based vacuum delay provides a way for administrators \nto achieve this.\"\n\ncug\n\n\n", "msg_date": "Tue, 13 Jun 2006 13:48:26 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posrgres speed problem - solved!" } ]
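For 8.1 the knob described in the manual excerpt above has autovacuum-specific variants, so autovacuum can be throttled without slowing down manually run VACUUMs. A sketch for postgresql.conf; the values are only examples to start from:

    autovacuum_vacuum_cost_delay = 20    # ms to sleep each time the cost limit is reached (-1 = use vacuum_cost_delay)
    autovacuum_vacuum_cost_limit = 200   # work done between sleeps (-1 = use vacuum_cost_limit)

This does not stop autovacuum when load is high, but it spreads its I/O out so it competes far less with foreground queries.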
[ { "msg_contents": "Hi all!\n\nI had an interesting discussion today w/ an Enterprise DB developer and\nsales person, and was told, twice, that the 64-bit linux version of\nEnterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\nis SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of EDB\nare PostgreSQL ..... has anyone seen that the 64-bit is slower than the\n32-bit version?\n\nI was told that the added 32-bits puts a \"strain\" and extra \"overhead\"\non the processor / etc.... which actually slows down the pointers and\nnecessary back-end \"stuff\" on the database.\n\nI'm curious if anyone can back this up .... or debunk it. It's about\nthe polar opposite of everything I've heard from every other database\nvendor for the past several years, and would be quite an eye-opener for\nme.\n\nAnyone?\n\nThanks.\n\n--\nAnthony\n\n", "msg_date": "Mon, 12 Jun 2006 17:28:02 -0500", "msg_from": "Anthony Presley <[email protected]>", "msg_from_op": true, "msg_subject": "64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Anthony,\n\n> I'm curious if anyone can back this up .... or debunk it. It's about\n> the polar opposite of everything I've heard from every other database\n> vendor for the past several years, and would be quite an eye-opener for\n> me.\n\nI generally see a 20% \"free\" gain in performance on 64-bit (Opteron, \nactually). Possibly EDB is still using ICC to compile, and ICC is bad at \n64-bit?\n\nI have seen some applications which failed to gain any performance from \n64-bit, but have never personally dealt with one which was slower.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 12 Jun 2006 16:16:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Anthony Presley wrote:\n\n>I had an interesting discussion today w/ an Enterprise DB developer and\n>sales person, and was told, twice, that the 64-bit linux version of\n>Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\n>is SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of EDB\n>are PostgreSQL ..... has anyone seen that the 64-bit is slower than the\n>32-bit version?\n>\n>I was told that the added 32-bits puts a \"strain\" and extra \"overhead\"\n>on the processor / etc.... which actually slows down the pointers and\n>necessary back-end \"stuff\" on the database.\n>\n>I'm curious if anyone can back this up .... or debunk it. It's about\n>the polar opposite of everything I've heard from every other database\n>vendor for the past several years, and would be quite an eye-opener for\n>me.\n> \n>\nWhat they are saying is strictly true : 64-bit pointers tend to increase \nthe working set size\nof an application vs. 32-bit pointers. This means that any caches will \nhave somewhat lower\nhit ratio. Also the bytes/s between the CPU and memory will be higher \ndue to moving those larger pointers.\nIn the case of a 32-bit OS this also applies to the kernel so the effect \nwill be system-wide.\n\nHowever, an application that needs to work on > around 2G of data will \nin the end be\nmuch faster 64-bit due to reduced I/O (it can keep more of the data in \nmemory).\n\nI worked on porting a large database application from 32-bit to 64-bit. 
One\nof our customers required us to retain the 32-bit version because of \nthis phenomenon.\n\nIn measurements I conducted on that application, the performance \ndifference wasn't\ngreat (10% or so), but it was measurable. This was with Sun Sparc hardware.\nIt is possible that more modern CPU designs have more efficient 64-bit\nimplementation than 32-bit, so the opposite might be seen too.\n\nWhether or not PG would show the same thing I can't say for sure. \nProbably it would though.\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 12 Jun 2006 17:19:46 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\nOn Jun 12, 2006, at 3:28 PM, Anthony Presley wrote:\n\n> Hi all!\n>\n> I had an interesting discussion today w/ an Enterprise DB developer \n> and\n> sales person, and was told, twice, that the 64-bit linux version of\n> Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\n> is SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of \n> EDB\n> are PostgreSQL ..... has anyone seen that the 64-bit is slower than \n> the\n> 32-bit version?\n>\n> I was told that the added 32-bits puts a \"strain\" and extra \"overhead\"\n> on the processor / etc.... which actually slows down the pointers and\n> necessary back-end \"stuff\" on the database.\n>\n> I'm curious if anyone can back this up .... or debunk it. It's about\n> the polar opposite of everything I've heard from every other database\n> vendor for the past several years, and would be quite an eye-opener \n> for\n> me.\n>\n> Anyone?\n\nIt's unsurprising for code written with 64 bit pointers (\"64 bit \ncode\") to be a little\nslower than 32 bit code. The code and data structures are bigger, \nmore has to\nbe copied from main memory, fewer cache hits, all those bad things.\n\nOn CPUs with a uniform instructions set in both 32 and 64 bit modes \nyou're\nonly likely to see improved performance in 64 bit mode if your code can\ntake advantage of the larger address space (postgresql doesn't).\n\nSome x86-esque architectures provide a somewhat different instruction\nset in their 64 bit mode, with more programmer visible registers. The\nincrease in performance they can offer (with the right compiler) can \noffset\nthe reduction due to pointer bloat, in some cases.\n\nEmpirically... postgresql built for 64 bits is marginally slower than \nthat built\nfor a 32 bit api on sparc. None of my customers have found 64 bit x86\nsystems to be suitable for production use, yet, so I've not tested on \nany\nof those architectures.\n\nCheers,\n Steve\n", "msg_date": "Mon, 12 Jun 2006 16:26:47 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Anthony Presley <[email protected]> writes:\n> I had an interesting discussion today w/ an Enterprise DB developer and\n> sales person, and was told, twice, that the 64-bit linux version of\n> Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\n> is SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of EDB\n> are PostgreSQL ..... 
has anyone seen that the 64-bit is slower than the\n> 32-bit version?\n\nThat is a content-free statement, since they didn't mention what\narchitectures they are comparing, what compilers (and compiler options)\nthey are using, or what test cases they are measuring on.\n\nTheoretically speaking, 64-bit *should* be slower than 32-bit (because\nmore data to transfer between memory and CPU to accomplish the same\nwork), except when considering workloads that can profit from having\ndirect access to more than 4Gb of memory. However the theoretical\nadvantage is probably completely swamped by implementation details,\nie, how tensely did the designers of your 64-bit chip optimize its\n32-bit behavior.\n\nI believe that Red Hat generally recommends using 32-bit mode for\nsmall-memory applications on PPC machines, because PPC32 is indeed\nmeasurably faster than PPC64, but finds no such advantage on x86_64,\nia64 or s390x. I don't know what applications they tested to come\nto that conclusion, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2006 19:35:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards? " }, { "msg_contents": "* Anthony Presley ([email protected]) wrote:\n> I had an interesting discussion today w/ an Enterprise DB developer and\n> sales person, and was told, twice, that the 64-bit linux version of\n> Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\n> is SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of EDB\n> are PostgreSQL ..... has anyone seen that the 64-bit is slower than the\n> 32-bit version?\n\nAlot of it depends on what you're doing in the database, exactly, and\njust which 32/64-bit platform is under discussion.. They're not all the\nsame (not even just amoung the ones Linux runs on :).\n\n> I was told that the added 32-bits puts a \"strain\" and extra \"overhead\"\n> on the processor / etc.... which actually slows down the pointers and\n> necessary back-end \"stuff\" on the database.\n\nThat's so hand-wavy that I'd be disinclined to believe the speaker, so\nI'll assume you're (poorly) paraphrasing... It's true that running\n64bit means that you've got 64bit pointers, which are physically\nlarger than 32bit pointers. Larger pointers means more effort to keep\ntrack of them, copy them around, etc. This is mitigated on some\nplatforms (ie: amd64) where there are extra registers available in\n'64bit' mode (which is really more than just a 64bit mode of a 32bit\nplatform, unlike a platform like PPC or sparc).\n\n> I'm curious if anyone can back this up .... or debunk it. It's about\n> the polar opposite of everything I've heard from every other database\n> vendor for the past several years, and would be quite an eye-opener for\n> me.\n\nPostgreSQL doesn't generally operate on >2G resident memory. I'm not\nsure if it's possible for it to (I havn't really tried to find out,\nthough I have systems where it'd be possible to want to sort a >2G table\nor similar, I don't have the work_mem set high enough for it to try, I\ndon't think). This is because Postgres lets the OS handle most of the\ncacheing, so as long as your OS can see all the memory you have in the\nbox, that benefit of running 64bit isn't going to be seen on Postgres.\nOn many other database systems (notably the 800-pound gorillas...) 
the\ndatabase handle the cacheing and so wants to basically have control over\nall the memory in the box, which means running 64bit if you have more\nthan 2G in your system.\n\nJust my 2c.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Mon, 12 Jun 2006 20:04:41 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Anthony Presley <[email protected]> wrote:\n\n> Hi all!\n> \n> I had an interesting discussion today w/ an Enterprise DB developer and\n> sales person, and was told, twice, that the 64-bit linux version of\n> Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)\n> is SIGNIFICANTLY SLOWER than the 32-bit version. Since the guys of EDB\n> are PostgreSQL ..... has anyone seen that the 64-bit is slower than the\n> 32-bit version?\n> \n> I was told that the added 32-bits puts a \"strain\" and extra \"overhead\"\n> on the processor / etc.... which actually slows down the pointers and\n> necessary back-end \"stuff\" on the database.\n> \n> I'm curious if anyone can back this up .... or debunk it. It's about\n> the polar opposite of everything I've heard from every other database\n> vendor for the past several years, and would be quite an eye-opener for\n> me.\n\nWe did some tests on with identical hardware in both EMT64 and ia32 mode.\n(Dell 2850, if you're curious) This was PostgreSQL 8.1 running on\nFreeBSD 6.\n\nWe found 64 bit to be ~5% slower than 32 bit mode in the (very) limited\ntests that we did. We pulled the plug before doing any extensive\ntesting, because it just didn't seem as if it was going to be worth it.\n\n-- \nBill Moran\n\nI already know the ending it's the part that makes your face implode.\nI don't know what makes your face implode, but that's the way the movie ends.\n\n\tTMBG\n\n", "msg_date": "Mon, 12 Jun 2006 20:29:33 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\n> Empirically... postgresql built for 64 bits is marginally slower than \n> that built\n> for a 32 bit api on sparc. None of my customers have found 64 bit x86\n> systems to be suitable for production use, yet, so I've not tested on any\n> of those architectures.\n\nReally? All of our customers are migrating to Opteron and I have many \nthat have been using Opteron for over 12 months happily.\n\nJoshua D. Drake\n\n> \n> Cheers,\n> Steve\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Mon, 12 Jun 2006 18:15:29 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\nOn Jun 12, 2006, at 6:15 PM, Joshua D. Drake wrote:\n\n>\n>> Empirically... postgresql built for 64 bits is marginally slower \n>> than that built\n>> for a 32 bit api on sparc. None of my customers have found 64 bit x86\n>> systems to be suitable for production use, yet, so I've not tested \n>> on any\n>> of those architectures.\n>\n> Really? 
All of our customers are migrating to Opteron and I have \n> many that have been using Opteron for over 12 months happily.\n\nAn Opteron is 64 bit capable; that doesn't mean you have to run 64 bit\ncode on it.\n\nMine're mostly reasonably conservative users, with hundreds of machines\nto support. Using 64 bit capable hardware, such as Opterons, is one \nthing,\nbut using an entirely different linux installation and userspace \ncode, say, is\na much bigger change in support terms. In the extreme case it makes no\nsense to double your OS support overheads to get a single digit \npercentage\nperformance improvement on one database system.\n\nThat's not to say that linux/x86-64 isn't production ready for some \nusers, just\nthat it's not necessarily a good operational decision for my \ncustomers. Given\nmy internal workloads aren't really stressing the hardware they're on \nI don't\nhave much incentive to benchmark x86-64 yet - by the time the numbers\nmight be useful to me we'll be on a different postgresql, likely a \ndifferent\ngcc/icc and so on.\n\nCheers,\n Steve\n\n", "msg_date": "Mon, 12 Jun 2006 18:29:05 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Opteron is ~20% faster at executing code in 64-bit mode than 32-bit because\nof the extra registers made available with their 64-bit mode:\n http://www.tomshardware.com/2003/04/22/duel_of_the_titans/page7.html\n\nDoubling the GPRs from 8 to 16 has generally made a 20%-30% difference in\nCPU-bound work:\n http://www.tomshardware.com/2003/04/22/duel_of_the_titans/page18.html\n\nIf the task is memory bandwidth bound, there should be an advantage to using\nless memory for the same task, but if the database is using types that are\nthe same width for either execution mode, you wouldn't expect a significant\ndifference just from wider pointer arithmetic.\n\n- Luke \n\n\n\nRe: [PERFORM] 64-bit vs 32-bit performance ... backwards?\n\n\n\nOpteron is ~20% faster at executing code in 64-bit mode than 32-bit because of the extra registers made available with their 64-bit mode:\n  http://www.tomshardware.com/2003/04/22/duel_of_the_titans/page7.html\n\nDoubling the GPRs from 8 to 16 has generally made a 20%-30% difference in CPU-bound work:\n  http://www.tomshardware.com/2003/04/22/duel_of_the_titans/page18.html\n\nIf the task is memory bandwidth bound, there should be an advantage to using less memory for the same task, but if the database is using types that are the same width for either execution mode, you wouldn't expect a significant difference just from wider pointer arithmetic.\n\n- Luke", "msg_date": "Mon, 12 Jun 2006 18:50:27 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "I've been trying to track this stuff - in fact, I'll likely be\nswitching from AMD32 to AMD64 in the next few weeks.\n\nI believe I have a handle on the + vs - of 64-bit. It makes sense that\nfull 64-bit would be slower. At an extreme it halfs the amount of\navailable memory or doubles the required memory bandwidth, depending\non the work load.\n\nHas anybody taken a look at PostgreSQL to ensure that it uses 32-bit\nintegers instead of 64-bit integers where only 32-bit is necessary?\n32-bit offsets instead of 64-bit pointers? This sort of thing?\n\nI haven't. I'm meaning to take a look. 
Within registers, 64-bit should\nbe equal speed to 32-bit. Outside the registers, it would make sense\nto only deal with the lower 32-bits where 32-bits is all that is\nrequired.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Mon, 12 Jun 2006 22:16:27 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Anyone who has tried x86-64 linux knows what a royal pain in the ass it\nis. They didn't do anything sensible, like just make the whole OS 64 bit,\nno, they had to split it up, and put 64-bit libs in a new directory /lib64.\nThis means that a great many applications don't know to check in there for\nlibs, and don't compile pleasantly, php is one among them. I forget what\nothers, it's been awhile now. Of course if you actualy want to use more\nthan 4gig RAM in a pleasant way, it's pretty much essential.\n\nAlex.\n\nOn 6/12/06, Steve Atkins <[email protected]> wrote:\n>\n>\n> On Jun 12, 2006, at 6:15 PM, Joshua D. Drake wrote:\n>\n> >\n> >> Empirically... postgresql built for 64 bits is marginally slower\n> >> than that built\n> >> for a 32 bit api on sparc. None of my customers have found 64 bit x86\n> >> systems to be suitable for production use, yet, so I've not tested\n> >> on any\n> >> of those architectures.\n> >\n> > Really? All of our customers are migrating to Opteron and I have\n> > many that have been using Opteron for over 12 months happily.\n>\n> An Opteron is 64 bit capable; that doesn't mean you have to run 64 bit\n> code on it.\n>\n> Mine're mostly reasonably conservative users, with hundreds of machines\n> to support. Using 64 bit capable hardware, such as Opterons, is one\n> thing,\n> but using an entirely different linux installation and userspace\n> code, say, is\n> a much bigger change in support terms. In the extreme case it makes no\n> sense to double your OS support overheads to get a single digit\n> percentage\n> performance improvement on one database system.\n>\n> That's not to say that linux/x86-64 isn't production ready for some\n> users, just\n> that it's not necessarily a good operational decision for my\n> customers. Given\n> my internal workloads aren't really stressing the hardware they're on\n> I don't\n> have much incentive to benchmark x86-64 yet - by the time the numbers\n> might be useful to me we'll be on a different postgresql, likely a\n> different\n> gcc/icc and so on.\n>\n> Cheers,\n> Steve\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nAnyone who has tried x86-64 linux knows what a royal pain in the ass it is.   They didn't do anything sensible, like just make the whole OS 64 bit, no, they had to split it up, and put 64-bit libs in a new directory /lib64.  This means that a great many applications don't know to check in there for libs, and don't compile pleasantly, php is one among them.  I forget what others, it's been awhile now.  
Of course if you actually want to use more\nthan 4gig RAM in a pleasant way, it's pretty much essential.\n\nAlex.\n\nOn 6/12/06, Steve Atkins <[email protected]> wrote:\n>\n>\n> On Jun 12, 2006, at 6:15 PM, Joshua D. Drake wrote:\n>\n> >\n> >> Empirically... postgresql built for 64 bits is marginally slower\n> >> than that built\n> >> for a 32 bit api on sparc. None of my customers have found 64 bit x86\n> >> systems to be suitable for production use, yet, so I've not tested\n> >> on any\n> >> of those architectures.\n> >\n> > Really? All of our customers are migrating to Opteron and I have\n> > many that have been using Opteron for over 12 months happily.\n>\n> An Opteron is 64 bit capable; that doesn't mean you have to run 64 bit\n> code on it.\n>\n> Mine're mostly reasonably conservative users, with hundreds of machines\n> to support. Using 64 bit capable hardware, such as Opterons, is one\n> thing,\n> but using an entirely different linux installation and userspace\n> code, say, is\n> a much bigger change in support terms. In the extreme case it makes no\n> sense to double your OS support overheads to get a single digit\n> percentage\n> performance improvement on one database system.\n>\n> That's not to say that linux/x86-64 isn't production ready for some\n> users, just\n> that it's not necessarily a good operational decision for my\n> customers. Given\n> my internal workloads aren't really stressing the hardware they're on\n> I don't\n> have much incentive to benchmark x86-64 yet - by the time the numbers\n> might be useful to me we'll be on a different postgresql, likely a\n> different\n> gcc/icc and so on.\n>\n> Cheers,\n> Steve\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n
If you have >>2GB of memory, a 64 bit system is needful to harness\nthat, and that will make a *big* difference to performance.\n\nThe overall claim is somewhat content-free, in the absence of\ninformation about the architecture of the database server.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxfinances.info/info/\n\"A program invented (sic) by a Finnish computer hacker and handed out free\nin 1991 cost investors in Microsoft $11 billion (#6.75 billion) this week.\"\n-- Andrew Butcher in the UK's Sunday Times, Feb 20th, 1999\n", "msg_date": "Mon, 12 Jun 2006 22:31:57 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Alex Turner wrote:\n> Anyone who has tried x86-64 linux knows what a royal pain in the ass it \n> is. They didn't do anything sensible, like just make the whole OS 64 \n> bit, no, they had to split it up, and put 64-bit libs in a new directory \n> /lib64. This means that a great many applications don't know to check \n> in there for libs, and don't compile pleasantly, php is one among them. \n> I forget what others, it's been awhile now. Of course if you actualy \n> want to use more than 4gig RAM in a pleasant way, it's pretty much \n> essential.\n> \nThat depends entirely on what AMD64 distribution you use -- on a Debian \nor Ubuntu 64-bit system, the main system is pre 64-bit, with some \n(optional) add-on libraries in separate directories to provide some \n32-bit compatibility.\n\nOn the performance stuff, my own testing of AMD64 on AMD's chips (not \nwith PostgreSQL, but with various other things) has shown it to be about \n10% faster on average. As Luke mentioned, this isn't because of any \ninherent advantage in 64-bit -- it's because AMD did some tweaking while \nthey had the hood open, adding extra registers among other things.\n\nI remember reading an article some time back comparing AMD's \nimplementation to Intel's that showed that EM64T Xeons ran 64-bit code \nabout 5-10% more slowly than they ran 32-bit code. I can't find the link \nnow, but it may explain why some people are getting better performance \nwith 64-bit (on Opterons), while others are finding it slower (on Xeons).\n\nThanks\nLeigh\n\n> Alex.\n> \n> On 6/12/06, *Steve Atkins* <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> \n> On Jun 12, 2006, at 6:15 PM, Joshua D. Drake wrote:\n> \n> >\n> >> Empirically... postgresql built for 64 bits is marginally slower\n> >> than that built\n> >> for a 32 bit api on sparc. None of my customers have found 64\n> bit x86\n> >> systems to be suitable for production use, yet, so I've not tested\n> >> on any\n> >> of those architectures.\n> >\n> > Really? All of our customers are migrating to Opteron and I have\n> > many that have been using Opteron for over 12 months happily.\n> \n> An Opteron is 64 bit capable; that doesn't mean you have to run 64 bit\n> code on it.\n> \n> Mine're mostly reasonably conservative users, with hundreds of machines\n> to support. Using 64 bit capable hardware, such as Opterons, is one\n> thing,\n> but using an entirely different linux installation and userspace\n> code, say, is\n> a much bigger change in support terms. 
In the extreme case it makes no\n> sense to double your OS support overheads to get a single digit\n> percentage\n> performance improvement on one database system.\n> \n> That's not to say that linux/x86-64 isn't production ready for some\n> users, just\n> that it's not necessarily a good operational decision for my\n> customers. Given\n> my internal workloads aren't really stressing the hardware they're on\n> I don't\n> have much incentive to benchmark x86-64 yet - by the time the numbers\n> might be useful to me we'll be on a different postgresql, likely a\n> different\n> gcc/icc and so on.\n> \n> Cheers,\n> Steve\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n> \n\n", "msg_date": "Tue, 13 Jun 2006 12:42:03 +1000", "msg_from": "Leigh Dyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\"Alex Turner\" <[email protected]> writes:\n> Anyone who has tried x86-64 linux knows what a royal pain in the ass it\n> is. They didn't do anything sensible, like just make the whole OS 64 bit,\n> no, they had to split it up, and put 64-bit libs in a new directory /lib64.\n\nActually, there's nothing wrong with that. As this thread already made\nclear, there are good reasons why you might want to run 32-bit apps as\nwell as 64-bit apps on your 64-bit hardware. So the 32-bit libraries\nlive in /usr/lib and the 64-bit ones in /usr/lib64. If you ask me, the\nreally serious mistake in this design is they didn't decree separate bin\ndirectories /usr/bin and /usr/bin64 too. This makes it impossible to\ninstall 32-bit and 64-bit versions of the same package at the same time,\nsomething that curiously enough people are now demanding support for.\n\n(Personally, if I'd designed it, the libraries would actually live in\n/usr/lib32 and /usr/lib64, and /usr/lib would be a symlink to whichever\nyou needed it to be at the moment. Likewise for /usr/bin.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2006 22:44:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards? " }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (\"Alex Turner\") wrote:\n> Anyone who has tried x86-64 linux knows what a royal pain in the ass\n> it is.�� They didn't do anything sensible, like just make the whole\n> OS 64 bit, no, they had to split it up, and put 64-bit libs in a new\n> directory /lib64.� This means that a great many applications don't\n> know to check in there for libs, and don't compile pleasantly, php\n> is one among them.� I forget what others, it's been awhile now.� Of\n> course if you actualy want to use more than 4gig RAM in a pleasant\n> way, it's pretty much essential. Alex.\n\nThat's absolute nonsense.\n\nI have been running the Debian AMD64 port since I can't recall when.\nI have experienced NO such issues.\n\nPackages simply install, in most cases.\n\nWhen I do need to compile things, they *do* compile pleasantly.\n\nI seem to recall hearing there being \"significant issues\" as to how\nRed Hat's distributions of Linux coped with AMD64. 
That's not a\nproblem with Linux, of course...\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nhttp://linuxdatabases.info/info/spreadsheets.html\n\"Imagine a law so stupid that civil obedience becomes an efficient way\nto fighting it\" --Per Abrahamsen on the DMCA\n", "msg_date": "Mon, 12 Jun 2006 22:51:42 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Mark,\n\nOn 6/12/06 7:16 PM, \"[email protected]\" <[email protected]> wrote:\n\n> I haven't. I'm meaning to take a look. Within registers, 64-bit should\n> be equal speed to 32-bit. Outside the registers, it would make sense\n> to only deal with the lower 32-bits where 32-bits is all that is\n> required.\n\nThe short answer to all of this as shown in many lab tests by us and\nelsewhere (see prior post):\n\n- 64-bit pgsql on Opteron is generally faster than 32-bit, often by a large\namount (20%-30%) on queries that perform sorting, aggregation, etc. It's\ngenerally not slower.\n\n- 64-bit pgsql on Xeon is generally slower than 32-bit by about 5%\n\nSo if you have Opterons and you want the best performance, run in 64-bit.\nIf you have Xeons, you would only run in 64-bit if you use more than 4GB of\nmemory.\n\n- Luke\n\n\n", "msg_date": "Mon, 12 Jun 2006 19:57:32 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "On Jun 12, 2006, at 19:44, Tom Lane wrote:\n\n> (Personally, if I'd designed it, the libraries would actually live in\n> /usr/lib32 and /usr/lib64, and /usr/lib would be a symlink to \n> whichever\n> you needed it to be at the moment. Likewise for /usr/bin.)\n\n/me nominates Tom to create a Linux distribution.\n\n:-)\n\nDavid\n", "msg_date": "Mon, 12 Jun 2006 20:00:37 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards? " }, { "msg_contents": "Folks,\n\nFWIW, the applications where I did direct 32 / 64 comparison were \na) several data warehouse tests, with databases > 100GB\nb) computation-heavy applications (such as a complex calendaring app)\n\nAnd, as others have pointed out, I wasn't comparing generics; I was comparing \nAthalon/Xeon to Opteron. So it's quite possible that the improvements had \nnothing to do with going 64-bit and were because of other architecture \nimprovements.\n\nIn which case, why was 64-bit such a big deal?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 12 Jun 2006 21:34:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\nOn Jun 12, 2006, at 6:15 PM, Joshua D. Drake wrote:\n>> Empirically... postgresql built for 64 bits is marginally slower \n>> than that built\n>> for a 32 bit api on sparc. None of my customers have found 64 bit x86\n>> systems to be suitable for production use, yet, so I've not tested \n>> on any\n>> of those architectures.\n>\n> Really? All of our customers are migrating to Opteron and I have \n> many that have been using Opteron for over 12 months happily.\n\n\n\nWe have been using PostgreSQL on Opteron servers almost since the \nOpteron was first released, running both 32-bit and 64-bit versions \nof Linux. 
Both 32-bit and 64-bit versions have been bulletproof for \nus, with the usual stability I've become accustomed to with both \nPostgreSQL and Linux. We have been running nothing but 64-bit \nversions on mission-critical systems for the last year with zero \nproblems.\n\nThe short story is that for us 64-bit PostgreSQL on Opterons is \ntypically something like 20% faster than 32-bit on the same, and \n*much* faster than P4 Xeon systems they nominally compete with. \nAMD64 is a more efficient architecture than x86 in a number of ways, \nand the Opteron has enviable memory latency and bandwidth that make \nit an extremely efficient database workhorse. x86->AMD64 is not a \nword-width migration, it is a different architecture cleverly \ndesigned to be efficiently compatible with x86. In addition to \nthings like a more RISC-like register set, AMD64 uses a different \nfloating point architecture that is more efficient than the old x87.\n\nIn terms of bang for the buck in a bulletproof database server, it is \nreally hard to argue with 64-bit Opterons. They are damn fast, and \nin my experience problem free. We run databases on other \narchitectures, but they are all getting replaced with 64-bit Linux on \nOpterons because the AMD64 systems tend to be both faster and \ncheaper. Architectures like Sparc have never given us problems, but \nthey have not exactly thrilled us with their performance either.\n\n\nCheers,\n\nJ. Andrew Rogers\n\n", "msg_date": "Mon, 12 Jun 2006 23:00:38 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "On Mon, Jun 12, 2006 at 10:44:01PM -0400, Tom Lane wrote:\n> (Personally, if I'd designed it, the libraries would actually live in\n> /usr/lib32 and /usr/lib64, and /usr/lib would be a symlink to whichever\n> you needed it to be at the moment. Likewise for /usr/bin.)\n\nActually, there have been plans for doing something like this in Debian for a\nwhile: Let stuff live in /lib/i686-linux-gnu and /lib/x86_64-linux-gnu\n(lib32 and lib64 doesn't really scale, once you start considering stuff like\n\"ia64 can emulate hppa\"), and adjust paths and symlinks as fit. It's still a\nlong way to go, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 13 Jun 2006 08:30:59 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Placement of 64-bit libraries (offtopic)" }, { "msg_contents": "J. Andrew Rogers wrote:\n\n> We have been using PostgreSQL on Opteron servers almost since the\n> Opteron was first released, running both 32-bit and 64-bit versions of\n> Linux. Both 32-bit and 64-bit versions have been bulletproof for us,\n> with the usual stability I've become accustomed to with both PostgreSQL\n> and Linux. We have been running nothing but 64-bit versions on\n> mission-critical systems for the last year with zero problems.\n> \n> The short story is that for us 64-bit PostgreSQL on Opterons is\n> typically something like 20% faster than 32-bit on the same, and *much*\n> faster than P4 Xeon systems they nominally compete with. 
\n\nSince you sound like you have done extensive testing:\n\nDo you have any data regarding whether to enable hyperthreading or not?\nI realize that this may be highly dependant on the OS, application and\nnumber of CPUs, but I would be interested in hearing your\nrecommendations (or others').\n\n/Nis\n\n\n\n\n", "msg_date": "Tue, 13 Jun 2006 10:40:20 +0200", "msg_from": "Nis Jorgensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Installation of a 32-bit PostgreSQL on a 64-bit Linux like RHEL 4 is \nvery easy. Make sure that you have installed all needed 32-bit libs and \ndevel packages.\n\nHere is an example to call configure to get a 32-bit PostgreSQL:\n\nCXX=\"/usr/bin/g++ -m32\" \\\nCPP=\"/usr/bin/gcc -m32 -E\" \\\nLD=\"/usr/bin/ld -m elf_i386\" \\\nAS=\"/usr/bin/gcc -m32 -Wa,--32 -D__ASSEMBLY__ -traditional\" \\\nCC=\"/usr/bin/gcc -m32\" \\\nCFLAGS=\"-O3 -funroll-loops -fno-strict-aliasing -pipe -mcpu=opteron \n-march=opteron -mfpmath=sse -msse2\" \\\n./configure --prefix=<pgsql-path>\n\n\nJ. Andrew Rogers schrieb:\n> The short story is that for us 64-bit PostgreSQL on Opterons is \n> typically something like 20% faster than 32-bit on the same, and *much* \n> faster than P4 Xeon systems they nominally compete with. AMD64 is a \n> more efficient architecture than x86 in a number of ways, and the \n> Opteron has enviable memory latency and bandwidth that make it an \n> extremely efficient database workhorse. x86->AMD64 is not a word-width \n> migration, it is a different architecture cleverly designed to be \n> efficiently compatible with x86. In addition to things like a more \n> RISC-like register set, AMD64 uses a different floating point \n> architecture that is more efficient than the old x87.\n>\n\nI did a few test in the past with 64-bit PostgreSQL and 32-bit \nPostgreSQL and the 32-bit version was always faster.\nPlease find attached a small patch with does apply a change to the \nx86_64 part also to the i386 part of src/include/storage/s_lock.h.\nWithout this change the performance of PostgreSQL 8.0 was horrible on a \nOpteron. The effect is smaller with PostgreSQL 8.1.\n\nCheers\nSven.", "msg_date": "Tue, 13 Jun 2006 11:04:32 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "On Tue, 13 Jun 2006 04:26:05 +0200, Alex Turner <[email protected]> wrote:\n\n> Anyone who has tried x86-64 linux knows what a royal pain in the ass it\n> is. They didn't do anything sensible, like just make the whole OS 64 \n> bit,\n> no, they had to split it up, and put 64-bit libs in a new directory \n> /lib64.\n> This means that a great many applications don't know to check in there \n> for\n> libs, and don't compile pleasantly, php is one among them. I forget what\n> others, it's been awhile now. Of course if you actualy want to use more\n> than 4gig RAM in a pleasant way, it's pretty much essential.\n>\n> Alex.\n\n\tDecent distros do this for you :\n\n$ ll /usr | grep lib\nlrwxrwxrwx 1 root root 5 jan 20 09:55 lib -> lib64\ndrwxr-xr-x 10 root root 1,8K avr 19 16:16 lib32\ndrwxr-xr-x 92 root root 77K jun 10 15:48 lib64\n\n\tAlso, on gentoo, everything \"just works\" in 64-bit mode and the packages \ncompile normally... 
I don't see a problem...\n", "msg_date": "Tue, 13 Jun 2006 12:47:54 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Sven,\n\nOn 6/13/06 2:04 AM, \"Sven Geisler\" <[email protected]> wrote:\n\n> Please find attached a small patch with does apply a change to the\n> x86_64 part also to the i386 part of src/include/storage/s_lock.h.\n> Without this change the performance of PostgreSQL 8.0 was horrible on a\n> Opteron. The effect is smaller with PostgreSQL 8.1.\n\nCan you describe what kinds of tests you ran to check your speed?\n\nSince it's the TAS lock that you are patching, the potential impact is\ndiffuse and large: xlog.c, shmem.c, lwlock.c, proc.c, all do significant\nwork.\n\n- Luke\n\n\n", "msg_date": "Tue, 13 Jun 2006 04:33:19 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Luke\n\nLuke Lonergan schrieb:\n> On 6/13/06 2:04 AM, \"Sven Geisler\" <[email protected]> wrote:\n>> Please find attached a small patch with does apply a change to the\n>> x86_64 part also to the i386 part of src/include/storage/s_lock.h.\n>> Without this change the performance of PostgreSQL 8.0 was horrible on a\n>> Opteron. The effect is smaller with PostgreSQL 8.1.\n> \n> Can you describe what kinds of tests you ran to check your speed?\n\nI has create a test scenario with parallel client which running mostly \nSELECTs on the same tables. I used a sequence of 25 queries using 10 \ntables. We use the total throughput (queries per second) as result.\n\n> \n> Since it's the TAS lock that you are patching, the potential impact is\n> diffuse and large: xlog.c, shmem.c, lwlock.c, proc.c, all do significant\n> work.\n\nYes, I know. We had a problem last year with the performance of the \nOpteron. We have started the futex patch to resolve the issue. The futex \npatch itself did have no effect, but there was a side effect because the \nfutex patch did use also another assembler sequence. This make a hole \ndifference on a Opteron. It turned out that removing the lines\n\ncmpb\njne\nlock\n\nwas the reason why the Opteron runs faster.\nI have created a sequence of larger query with following result on \nOpteron 875 and PostgreSQL 8.0.3\norignal 8.0.3 => 289 query/time and 10% cpu usage\npatched 8.0.3 => 1022 query/time and 45% cpu usage\n\nI has a smaller effect on a XEON MP with EM64T. But this effect wasn't \nthat huge. There was no effect on classic XEON's.\n\nCheers\nSven.\n", "msg_date": "Tue, 13 Jun 2006 14:03:21 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Nis Jorgensen) wrote:\n> J. Andrew Rogers wrote:\n>\n>> We have been using PostgreSQL on Opteron servers almost since the\n>> Opteron was first released, running both 32-bit and 64-bit versions of\n>> Linux. Both 32-bit and 64-bit versions have been bulletproof for us,\n>> with the usual stability I've become accustomed to with both PostgreSQL\n>> and Linux. 
We have been running nothing but 64-bit versions on\n>> mission-critical systems for the last year with zero problems.\n>> \n>> The short story is that for us 64-bit PostgreSQL on Opterons is\n>> typically something like 20% faster than 32-bit on the same, and *much*\n>> faster than P4 Xeon systems they nominally compete with. \n>\n> Since you sound like you have done extensive testing:\n>\n> Do you have any data regarding whether to enable hyperthreading or not?\n> I realize that this may be highly dependant on the OS, application and\n> number of CPUs, but I would be interested in hearing your\n> recommendations (or others').\n\nUm, Hyper-Threading? On AMD?\n\nHyper-Threading is a feature only offered by Intel, on some Pentium 4\nchips.\n\nIt is not offered by AMD. For our purposes, this is no loss; database\nbenchmarks have widely shown it to be a performance loser across\nvarious database systems.\n-- \noutput = reverse(\"moc.enworbbc\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/languages.html\nYes, for sparkling white chip prints, use low SUDSing DRAW....\n", "msg_date": "Tue, 13 Jun 2006 08:23:42 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Sven,\n\nOn 6/13/06 5:03 AM, \"Sven Geisler\" <[email protected]> wrote:\n\n> Yes, I know. We had a problem last year with the performance of the\n> Opteron. We have started the futex patch to resolve the issue. The futex\n> patch itself did have no effect, but there was a side effect because the\n> futex patch did use also another assembler sequence. This make a hole\n> difference on a Opteron. It turned out that removing the lines\n> \n> cmpb\n> jne\n> lock\n> \n> was the reason why the Opteron runs faster.\n> I have created a sequence of larger query with following result on\n> Opteron 875 and PostgreSQL 8.0.3\n> orignal 8.0.3 => 289 query/time and 10% cpu usage\n> patched 8.0.3 => 1022 query/time and 45% cpu usage\n\nThis was in 64-bit mode on the Opteron?\n\n- Luke\n\n\n", "msg_date": "Tue, 13 Jun 2006 05:42:08 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Luke\n\nLuke Lonergan schrieb:\n> Sven,\n> On 6/13/06 5:03 AM, \"Sven Geisler\" <[email protected]> wrote:\n>> Yes, I know. We had a problem last year with the performance of the\n>> Opteron. We have started the futex patch to resolve the issue. The futex\n>> patch itself did have no effect, but there was a side effect because the\n>> futex patch did use also another assembler sequence. This make a hole\n>> difference on a Opteron. It turned out that removing the lines\n>>\n>> cmpb\n>> jne\n>> lock\n>>\n>> was the reason why the Opteron runs faster.\n>> I have created a sequence of larger query with following result on\n>> Opteron 875 and PostgreSQL 8.0.3\n>> orignal 8.0.3 => 289 query/time and 10% cpu usage\n>> patched 8.0.3 => 1022 query/time and 45% cpu usage\n> \n> This was in 64-bit mode on the Opteron?\nThis was in 32-bit mode on the Opteron. But the effect was the same in \n64-bit mode with PostgreSQL 8.0.3.\n\nYou already get this change if you compile PostgreSQL 8.1.x in x86_64 \n(64-bit mode).\n\nCheers\nSven.\n", "msg_date": "Tue, 13 Jun 2006 14:46:51 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" 
}, { "msg_contents": "Sven,\n\nOn 6/13/06 5:46 AM, \"Sven Geisler\" <[email protected]> wrote:\n\n> You already get this change if you compile PostgreSQL 8.1.x in x86_64\n> (64-bit mode).\n\nI see, so I think your point with the patch is to make the 32-bit compiled\nversion benefit as well.\n\n- Luke\n\n\n", "msg_date": "Tue, 13 Jun 2006 05:49:44 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Luke,\n\nLuke Lonergan schrieb:\n> On 6/13/06 5:46 AM, \"Sven Geisler\" <[email protected]> wrote:\n>> You already get this change if you compile PostgreSQL 8.1.x in x86_64\n>> (64-bit mode).\n> \n> I see, so I think your point with the patch is to make the 32-bit compiled\n> version benefit as well.\n> \n\nYup. I think you have to change this in the 32-bit compiled version too \nif you want to compare 32-bit and 64-bit on a Opteron.\n\nSven.\n", "msg_date": "Tue, 13 Jun 2006 14:52:02 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n>\n>\n> In which case, why was 64-bit such a big deal?\n> \n\nWe had this discussion with 16/32 bit too, back in those 286/386 times...\nNot too many 16bit apps left now :-)\n\nRegards,\nAndreas\n\n\n\n\n", "msg_date": "Tue, 13 Jun 2006 15:15:06 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "\nOn Jun 13, 2006, at 1:40 AM, Nis Jorgensen wrote:\n> Since you sound like you have done extensive testing:\n>\n> Do you have any data regarding whether to enable hyperthreading or \n> not?\n> I realize that this may be highly dependant on the OS, application and\n> number of CPUs, but I would be interested in hearing your\n> recommendations (or others').\n\n\nHyperthreading never made much of a difference for our database \nloads. Since we've retired all non-dev P4 database servers, I am not \ntoo worried about it. We will probably re-test the new \"Core 2\" CPUs \nthat are coming out, since those differ significantly from the P4 in \ncapability.\n\n\nJ. Andrew Rogers\n\n", "msg_date": "Tue, 13 Jun 2006 06:46:10 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "* Steinar H. Gunderson ([email protected]) wrote:\n> On Mon, Jun 12, 2006 at 10:44:01PM -0400, Tom Lane wrote:\n> > (Personally, if I'd designed it, the libraries would actually live in\n> > /usr/lib32 and /usr/lib64, and /usr/lib would be a symlink to whichever\n> > you needed it to be at the moment. Likewise for /usr/bin.)\n> \n> Actually, there have been plans for doing something like this in Debian for a\n> while: Let stuff live in /lib/i686-linux-gnu and /lib/x86_64-linux-gnu\n> (lib32 and lib64 doesn't really scale, once you start considering stuff like\n> \"ia64 can emulate hppa\"), and adjust paths and symlinks as fit. It's still a\n> long way to go, though.\n\nThe general feeling is that there won't be support for multiple versions\nof a given binary being installed at once though. 
The proposal Steinar\nmentioned is called 'multiarch' and is being discussed with LSB and\nother distros too, though I think it did mostly originated with Debian\nfolks.\n\nJust my 2c.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 13 Jun 2006 12:11:50 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Placement of 64-bit libraries (offtopic)" }, { "msg_contents": "On Mon, Jun 12, 2006 at 05:19:46PM -0600, David Boreham wrote:\n> What they are saying is strictly true : 64-bit pointers tend to increase \n> the working set size\n> of an application vs. 32-bit pointers. This means that any caches will \n> have somewhat lower\n> hit ratio. Also the bytes/s between the CPU and memory will be higher \n> due to moving those larger pointers.\n\nWhile bytes/s will go up what really matters is words/s, where a word is\nthe size of a memory transfer to the CPU. Taking a simplistic view, 8\nbit CPUs move data into the CPU one byte at a time; 16 bit CPUs, 2\nbytes; 32 bit, 4 bytes, and 64 bit, 8 bytes. The reality is a bit more\ncomplicated, but I'm 99.9% certain that you won't see a modern 64 bit CPU\ntranfering data in less than 64 bit increments.\n\n> However, an application that needs to work on > around 2G of data will \n> in the end be\n> much faster 64-bit due to reduced I/O (it can keep more of the data in \n> memory).\n \nThere's not an automatic correlation between word size and address\nspace, just look at the 8088, so this depends entirely on the CPU.\n\n> I worked on porting a large database application from 32-bit to 64-bit. One\n> of our customers required us to retain the 32-bit version because of \n> this phenomenon.\n> \n> In measurements I conducted on that application, the performance \n> difference wasn't\n> great (10% or so), but it was measurable. This was with Sun Sparc hardware.\n> It is possible that more modern CPU designs have more efficient 64-bit\n> implementation than 32-bit, so the opposite might be seen too.\n> \n> Whether or not PG would show the same thing I can't say for sure. \n> Probably it would though.\n\nIt's going to depend entirely on the CPU and the compiler. I can say\nthat in the 32 vs 64 bit benchmarking I've done using dbt2, I wasn't\nable to find a difference at all on Sunfire Opteron machines.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 14:43:38 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "On Mon, Jun 12, 2006 at 08:04:41PM -0400, Stephen Frost wrote:\n> don't think). This is because Postgres lets the OS handle most of the\n> cacheing, so as long as your OS can see all the memory you have in the\n\nActually, in 8.1.x I've seen some big wins from greatly increasing the\namount of shared_buffers, even as high as 50% of memory, thanks to the\nchanges made to the buffer management code. I'd strongly advice users to\nbenchmark their applications with higher shared_buffers and see what\nimpact it has, especially if your application can't make use of really\nbig work_mem settings. If there's additional changes to the shared\nbuffer code in 8.2 (I know Tom's been looking at doing multiple buffer\npools to reduce contention on the BufMgr lock), it'd be worth\nre-benchmarking when it comes out.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 15:08:10 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64-bit vs 32-bit performance ... backwards?" }, { "msg_contents": "Jim C. Nasby wrote:\n...\n> Actually, in 8.1.x I've seen some big wins from greatly increasing the\n> amount of shared_buffers, even as high as 50% of memory, thanks to the\n> changes made to the buffer management code. ...\n\nAnyone else run into a gotcha that one of our customers ran into?\nPG 7.4.8 running on Solaris 2.6, USparc w 4GB RAM.\nUsually about 50 active backends.\n(No reason to believe this wouldn't apply to 8.x).\n\nInitially shared_buffers were set to 1000 (8MB).\nThen, we moved all apps but the database server off the box.\n\nRaised shared_buffers to 2000 (16MB).\nModest improvement in some frequent repeated queries.\n\nRaised shared_buffers to 16000 (128MB).\nDB server dropped to a CRAWL.\n\nvmstat showed that it was swapping like crazy.\nDropped shared_buffers back down again. \nSwapping stopped.\n\nStared at \"ps u\" a lot, and realized that the shm seg appeared to\nbe counted as part of the resident set (RSS).\nTheory was that the kernel was reading the numbers the same way,\nand swapping out resident sets, since they obviously wouldn't\nall fit in RAM :-)\n\nAnyone from Sun reading this list, willing to offer an opinion?\n", "msg_date": "Tue, 13 Jun 2006 15:21:34 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Solaris shared_buffers anomaly?" }, { "msg_contents": "Mischa Sandberg <[email protected]> writes:\n> vmstat showed that it was swapping like crazy.\n> Dropped shared_buffers back down again. \n> Swapping stopped.\n\nDoes Solaris have any call that allows locking a shmem segment in RAM?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 18:22:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly? " }, { "msg_contents": "On Tue, Jun 13, 2006 at 06:22:07PM -0400, Tom Lane wrote:\n> Mischa Sandberg <[email protected]> writes:\n> > vmstat showed that it was swapping like crazy.\n> > Dropped shared_buffers back down again. \n> > Swapping stopped.\n> \n> Does Solaris have any call that allows locking a shmem segment in RAM?\n\nThe Solaris 9 shmctl manpage mentions this token:\n\n SHM_LOCK\n Lock the shared memory segment specified by shmid in\n memory. This command can be executed only by a process\n that has an effective user ID equal to super-user.\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 13 Jun 2006 16:34:01 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "On Tue, Jun 13, 2006 at 03:21:34PM -0700, Mischa Sandberg wrote:\n> Jim C. Nasby wrote:\n> ...\n> >Actually, in 8.1.x I've seen some big wins from greatly increasing the\n> >amount of shared_buffers, even as high as 50% of memory, thanks to the\n> >changes made to the buffer management code. 
...\n> \n> Anyone else run into a gotcha that one of our customers ran into?\n> PG 7.4.8 running on Solaris 2.6, USparc w 4GB RAM.\n> Usually about 50 active backends.\n> (No reason to believe this wouldn't apply to 8.x).\n> \n> Initially shared_buffers were set to 1000 (8MB).\n> Then, we moved all apps but the database server off the box.\n> \n> Raised shared_buffers to 2000 (16MB).\n> Modest improvement in some frequent repeated queries.\n> \n> Raised shared_buffers to 16000 (128MB).\n> DB server dropped to a CRAWL.\n> \n> vmstat showed that it was swapping like crazy.\n> Dropped shared_buffers back down again. \n> Swapping stopped.\n\nWhat's sort_mem set to? I suspect you simply ran the machine out of\nmemory.\n\nAlso, Solaris by default will only use a portion of memory for\nfilesystem caching, which will kill PostgreSQL performance.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 17:47:59 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "> Initially shared_buffers were set to 1000 (8MB).\n> Then, we moved all apps but the database server off the box.\n> \n> Raised shared_buffers to 2000 (16MB).\n> Modest improvement in some frequent repeated queries.\n> \n> Raised shared_buffers to 16000 (128MB).\n> DB server dropped to a CRAWL.\n\nVersions below 8.1 normally don't do well with high shared_buffers. 8.1 \nwould do much better. If you dropped that to more like 6k you would \nprobably continue to see increase over 2k.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 13 Jun 2006 15:55:32 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Tom Lane wrote:\n> Mischa Sandberg <[email protected]> writes:\n>> vmstat showed that it was swapping like crazy.\n>> Dropped shared_buffers back down again. \n>> Swapping stopped.\n> \n> Does Solaris have any call that allows locking a shmem segment in RAM?\n\nYes, mlock(). But want to understand what's going on before patching.\nNo reason to believe that the multiply-attached shm seg was being swapped out\n(which is frankly insane). Swapping out (and in) just the true resident set of\nevery backend would be enough to explain the vmstat io we saw.\n\nhttp://www.carumba.com/talk/random/swol-09-insidesolaris.html\n\nFor a dedicated DB server machine, Solaris has a feature:\ncreate \"intimate\" shared memory with shmat(..., SHM_SHARE_MMU).\nAll backends share the same TLB entries (!). \n\nContext switch rates on our in-house solaris boxes running PG\nhave been insane (4000/sec). Reloading the TLB map on every process\ncontext switch might be one reason Solaris runs our db apps at less\nthan half the speed of our perftesters' Xeon beige-boxes.\n\nThat's guesswork. Sun is making PG part of their distro ... 
\nperhaps they've some knowledgeable input.\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n\n", "msg_date": "Tue, 13 Jun 2006 16:04:58 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "On Tue, Jun 13, 2006 at 04:20:34PM -0700, Mischa Sandberg wrote:\n> Jim C. Nasby wrote:\n> >On Tue, Jun 13, 2006 at 03:21:34PM -0700, Mischa Sandberg wrote:\n> >>Raised shared_buffers to 16000 (128MB).\n> >>DB server dropped to a CRAWL.\n> >>\n> >>vmstat showed that it was swapping like crazy.\n> >>Dropped shared_buffers back down again. \n> >>Swapping stopped.\n> >\n> >What's sort_mem set to? I suspect you simply ran the machine out of\n> >memory.\n> \n> 8192 (8MB). No issue when shared_buffers was 2000; same apps always.\n \nSo if all 50 backends were running a sort, you'd use 400MB. The box has\n4G, right?\n\n> >Also, Solaris by default will only use a portion of memory for\n> >filesystem caching, which will kill PostgreSQL performance.\n> \n> Yep, tested /etc/system segmap_percent at 20,40,60. \n> No significant difference between 20 and 60.\n\nThat's pretty disturbing... how large is your database?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 18:17:30 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Tue, Jun 13, 2006 at 03:21:34PM -0700, Mischa Sandberg wrote:\n>> Raised shared_buffers to 16000 (128MB).\n>> DB server dropped to a CRAWL.\n>>\n>> vmstat showed that it was swapping like crazy.\n>> Dropped shared_buffers back down again. \n>> Swapping stopped.\n> \n> What's sort_mem set to? I suspect you simply ran the machine out of\n> memory.\n\n8192 (8MB). No issue when shared_buffers was 2000; same apps always.\n\n> Also, Solaris by default will only use a portion of memory for\n> filesystem caching, which will kill PostgreSQL performance.\n\nYep, tested /etc/system segmap_percent at 20,40,60. \nNo significant difference between 20 and 60.\nDefault is 10%? 12%? Can't recall.\n\nWas not changed from 20 during the shared_buffer test.\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Tue, 13 Jun 2006 16:20:34 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Tue, Jun 13, 2006 at 04:20:34PM -0700, Mischa Sandberg wrote:\n>> Jim C. Nasby wrote:\n>>> What's sort_mem set to? I suspect you simply ran the machine out of\n>>> memory.\n>> 8192 (8MB). No issue when shared_buffers was 2000; same apps always.\n> \n> So if all 50 backends were running a sort, you'd use 400MB. The box has\n> 4G, right?\n\nUmm ... yes. \"if\". 35-40 of them are doing pure INSERTS. \nNot following your train.\n\n>> Yep, tested /etc/system segmap_percent at 20,40,60. \n>> No significant difference between 20 and 60.\n> That's pretty disturbing... how large is your database?\n\n~10GB. Good locality. 
Where heading?\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Tue, 13 Jun 2006 17:01:34 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Mischa Sandberg wrote:\n> Jim C. Nasby wrote:\n> ...\n>> Actually, in 8.1.x I've seen some big wins from greatly increasing the\n>> amount of shared_buffers, even as high as 50% of memory, thanks to the\n>> changes made to the buffer management code. ...\n> \n> Anyone else run into a gotcha that one of our customers ran into?\n> PG 7.4.8 running on Solaris 2.6, USparc w 4GB RAM.\n> Usually about 50 active backends.\n> (No reason to believe this wouldn't apply to 8.x).\n> \n> Initially shared_buffers were set to 1000 (8MB).\n> Then, we moved all apps but the database server off the box.\n> \n> Raised shared_buffers to 2000 (16MB).\n> Modest improvement in some frequent repeated queries.\n> \n> Raised shared_buffers to 16000 (128MB).\n> DB server dropped to a CRAWL.\n> \n> vmstat showed that it was swapping like crazy.\n> Dropped shared_buffers back down again. Swapping stopped.\n> \n> Stared at \"ps u\" a lot, and realized that the shm seg appeared to\n> be counted as part of the resident set (RSS).\n> Theory was that the kernel was reading the numbers the same way,\n> and swapping out resident sets, since they obviously wouldn't\n> all fit in RAM :-)\n> \n> Anyone from Sun reading this list, willing to offer an opinion?\n>\n\nA while ago I ran 7.4.? on a Solaris 2.8 box (E280 or E220 can't recall) \nwith 2G of ram - 40 users or so with shared_buffers = approx 12000 - \nwith no swapping I recall (in fact I pretty sure there was free memory!).\n\nI suspect something else is your culprit - what is work_mem (or \nsort_mem) set to? I'm thinking that you have this high and didn't have \nmuch memory headroom to begin with, so that upping shared_buffers from \n16MB -> 128MB tipped things over the edge!\n\nCheers\n\nMark\n", "msg_date": "Wed, 14 Jun 2006 12:10:43 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Mischa Sandberg <[email protected]> writes:\n> Tom Lane wrote:\n>> Does Solaris have any call that allows locking a shmem segment in RAM?\n\n> Yes, mlock(). But want to understand what's going on before patching.\n\nSure, but testing it with mlock() might help you understand what's going\non, by eliminating one variable: we don't really know if the shmem is\ngetting swapped, or something else.\n\n> For a dedicated DB server machine, Solaris has a feature:\n> create \"intimate\" shared memory with shmat(..., SHM_SHARE_MMU).\n> All backends share the same TLB entries (!). \n\nWe use that already. (Hmm, might be interesting for you to turn it\n*off* and see if anything changes. See src/backend/port/sysv_shmem.c.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 22:04:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly? " }, { "msg_contents": "On Tue, Jun 13, 2006 at 05:01:34PM -0700, Mischa Sandberg wrote:\n> Jim C. Nasby wrote:\n> >On Tue, Jun 13, 2006 at 04:20:34PM -0700, Mischa Sandberg wrote:\n> >>Jim C. Nasby wrote:\n> >>>What's sort_mem set to? I suspect you simply ran the machine out of\n> >>>memory.\n> >>8192 (8MB). 
No issue when shared_buffers was 2000; same apps always.\n> > \n> >So if all 50 backends were running a sort, you'd use 400MB. The box has\n> >4G, right?\n> \n> Umm ... yes. \"if\". 35-40 of them are doing pure INSERTS. \n> Not following your train.\n\nIf sort_mem is set too high and a bunch of sorts fire off at once,\nyou'll run the box out of memory and it'll start swapping. Won't really\nmatter much whether it's swapping shared buffers or not; performance\nwill just completely tank.\n\nActually, I think that Solaris can be pretty aggressive about swapping\nstuff out to try and cache more data. Perhaps that's what's happening?\n\n> >>Yep, tested /etc/system segmap_percent at 20,40,60. \n> >>No significant difference between 20 and 60.\n> >That's pretty disturbing... how large is your database?\n> \n> ~10GB. Good locality. Where heading?\n\nI guess I should have asked what your working set size was... unless\nthat's very small, it doesn't make sense that changing the cache size\nthat much wouldn't help things.\n\nBTW, on some versions of Solaris, segmap_percent doesn't actually work;\nyou have to change something else that's measured in bytes.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 22:35:39 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Tom Lane wrote:\n> Mischa Sandberg <[email protected]> writes:\n\n>> Tom Lane wrote:\n>>> Does Solaris have any call that allows locking a shmem segment in RAM?\n>> Yes, mlock(). But want to understand what's going on before patching.\n> \n> Sure, but testing it with mlock() might help you understand what's going\n> on, by eliminating one variable: we don't really know if the shmem is\n> getting swapped, or something else.\n\n>> For a dedicated DB server machine, Solaris has a feature:\n>> create \"intimate\" shared memory with shmat(..., SHM_SHARE_MMU).\n>> All backends share the same TLB entries (!). \n> \n> We use that already. (Hmm, might be interesting for you to turn it\n> *off* and see if anything changes. See src/backend/port/sysv_shmem.c.)\n\nGah. Always must remember to RTFSource.\nAnd reproduce the problem on a machine I control :-)\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Wed, 14 Jun 2006 10:06:20 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" }, { "msg_contents": "Folks,\n\nFirst off, you'll be glad to know that I've persuaded two of the Sun \nperformance engineers to join this list soon. So you should be able to \nget more difinitive answers to these questions.\n\nSecond, 7.4 still did linear scanning of shared_buffers as part of LRU and \nfor other activities. I don't know how that would cause swapping, but it \ncertainly could cause dramatic slowdowns (like 2-5x) if you overallocated \nshared_buffers. \n\nPossibly this is also triggering a bug in Solaris 2.6. 2.6 is pretty \ndarned old (1997); maybe you should upgrade? 
We're testing with s_b set \nto 300,000 on Solaris 10 (Niagara) so this is obviously not a current \nSolaris issue.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 14 Jun 2006 14:25:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris shared_buffers anomaly?" } ]
[ { "msg_contents": "Hi,\nI don't have a copy of the standard on hand and a collegue is claiming\nthat there must be a from clause in a select query (he is an oracle\nguy). This doesn't seem to be the case for postgres... does anyone\nknow?\nCheers\nAntoine\nps. any one of them will do...\n-- \nThis is where I should put some witty comment.\n", "msg_date": "Tue, 13 Jun 2006 14:43:45 +0200", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "OT - select + must have from - sql standard syntax?" }, { "msg_contents": "On Tue, Jun 13, 2006 at 02:43:45PM +0200, Antoine wrote:\n> Hi,\n> I don't have a copy of the standard on hand and a collegue is claiming\n> that there must be a from clause in a select query (he is an oracle\n> guy). This doesn't seem to be the case for postgres... does anyone\n> know?\n\nDunno, but I know that other databases (at least DB2) don't require FROM\neither. In Oracle, if you want to do something like\n\nSELECT now();\n\nyou actually have to do\n\nSELECT now() FROM dual;\n\nwhere dual is a special, hard-coded table in Oracle that has only one\nrow. Personally, I find their approach to be pretty stupid.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 16:02:47 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT - select + must have from - sql standard syntax?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 13, 2006 at 02:43:45PM +0200, Antoine wrote:\n>> I don't have a copy of the standard on hand and a collegue is claiming\n>> that there must be a from clause in a select query (he is an oracle\n>> guy). This doesn't seem to be the case for postgres... does anyone\n>> know?\n\n> Dunno, but I know that other databases (at least DB2) don't require FROM\n> either.\n\nThe spec does require a FROM clause in SELECT (at least as of SQL99, did\nnot check SQL2003). However, it's clearly mighty useful to allow FROM\nto be omitted for simple compute-this-scalar-result problems. You\nshould respond to the Oracle guy that \"SELECT whatever FROM dual\" is not\nin the standard either (certainly the spec does not mention any such\ntable). And in any case an Oracle fanboy has got *no* leg to stand on\nwhen griping about proprietary extensions to the spec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 22:44:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT - select + must have from - sql standard syntax? " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> The spec does require a FROM clause in SELECT (at least as of SQL99, did\n> not check SQL2003). However, it's clearly mighty useful to allow FROM\n> to be omitted for simple compute-this-scalar-result problems. You\n> should respond to the Oracle guy that \"SELECT whatever FROM dual\" is not\n> in the standard either (certainly the spec does not mention any such\n> table). \n\nWell you could always create a \"dual\", it was always just a regular table. We\nused to joke about what would happen to Oracle if you inserted an extra row in\nit...\n\nOracle used to always require FROM, if it has stopped requiring it then that's\nnew. 
I had heard it had special-cased dual in later versions to avoid the\ntable access overhead, I suspect these two changes are related.\n\n-- \ngreg\n\n", "msg_date": "14 Jun 2006 00:15:30 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT - select + must have from - sql standard syntax?" }, { "msg_contents": "> > The spec does require a FROM clause in SELECT (at least as of SQL99, did\n> > not check SQL2003). However, it's clearly mighty useful to allow FROM\n> > to be omitted for simple compute-this-scalar-result problems. You\n> > should respond to the Oracle guy that \"SELECT whatever FROM dual\" is not\n> > in the standard either (certainly the spec does not mention any such\n> > table).\n\nThanks for your answers guys. I was pretty sure DUAL wasn't in the\nstandard (never seen it outside an oracle context) but wasn't at all\nsure about the FROM.\nCheers\nAntoine\nps. shame the standard isn't \"freely\" consultable to save you guys\nsilly OT questions!\n\n\n-- \nThis is where I should put some witty comment.\n", "msg_date": "Wed, 14 Jun 2006 08:36:22 +0200", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OT - select + must have from - sql standard syntax?" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Greg Stark\n> Sent: Tuesday, June 13, 2006 11:16 PM\n> Subject: Re: [PERFORM] OT - select + must have from - sql \n> standard syntax?\n[SNIP]\n> \n> Well you could always create a \"dual\", it was always just a \n> regular table. We\n> used to joke about what would happen to Oracle if you \n> inserted an extra row in\n> it...\n\n\nI've never used Oracle, so I don't understand why its called dual when\nit only has one row? Shouldn't it be called single? :\\\n\n\nDave\n\n", "msg_date": "Wed, 14 Jun 2006 08:58:48 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT - select + must have from - sql standard syntax?" }, { "msg_contents": "Antoine <[email protected]> writes:\n> ps. shame the standard isn't \"freely\" consultable to save you guys\n> silly OT questions!\n\nYou can get free \"draft\" versions that are close-enough-to-final to be\nperfectly usable. See our developers' FAQ for some links. I like the\ndrafts partly because they're plain ASCII, and so far easier to search\nthan PDFs ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2006 10:20:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT - select + must have from - sql standard syntax? " }, { "msg_contents": "> You can get free \"draft\" versions that are close-enough-to-final to be\n> perfectly usable. See our developers' FAQ for some links. I like the\n> drafts partly because they're plain ASCII, and so far easier to search\n> than PDFs ...\n\nGreat to know - thanks!\nCheers\nAntoine\n\n-- \nThis is where I should put some witty comment.\n", "msg_date": "Wed, 14 Jun 2006 21:04:10 +0200", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OT - select + must have from - sql standard syntax?" } ]
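For reference, the behavior discussed in this thread is easy to check directly: PostgreSQL accepts a FROM-less SELECT for scalar results, and a hand-made table can stand in for Oracle's dual when porting code that insists on one. A minimal sketch (the one-row DUMMY/'X' table below is only illustrative):

    -- PostgreSQL: no FROM clause needed for scalar results
    SELECT now();
    SELECT 1 + 1 AS two;

    -- optional Oracle-compatibility table for ported queries
    CREATE TABLE dual (dummy char(1));
    INSERT INTO dual VALUES ('X');
    SELECT now() FROM dual;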
[ { "msg_contents": "Hi,\n \nI have a database where there are three columns (name,date,data). The\nqueries are almost always something like SELECT date,data FROM table WHERE\nname=blah AND date > 1/1/2005 AND date < 1/1/2006;. I currently have three\nB-tree indexes, one for each of the columns. Is clustering on date index\ngoing to be what I want, or do I need a index that contains both name and\ndate?\n \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com <http://www.benjaminarai.com/>", "msg_date": "Tue, 13 Jun 2006 09:04:15 -0700", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about clustering multiple columns" }, { "msg_contents": "On Tue, Jun 13, 2006 at 09:04:15 -0700,\n Benjamin Arai <[email protected]> wrote:\n> Hi,\n> \n> I have a database where there are three columns (name,date,data). The\n> queries are almost always something like SELECT date,data FROM table WHERE\n> name=blah AND date > 1/1/2005 AND date < 1/1/2006;. I currently have three\n> B-tree indexes, one for each of the columns. Is clustering on date index\n> going to be what I want, or do I need a index that contains both name and\n> date?\n\nI would expect that clustering on the name would be better for the above\nquery.\nYou probably want an index on name and date combined.\n", "msg_date": "Fri, 16 Jun 2006 10:31:32 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about clustering multiple columns" }, { "msg_contents": "Thanks! This exactly what I was looking for. \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com\n\n-----Original Message-----\nFrom: Bruno Wolff III [mailto:[email protected]] \nSent: Friday, June 16, 2006 11:56 AM\nTo: Benjamin Arai\nCc: [email protected]\nSubject: Re: Question about clustering multiple columns\n\nOn Fri, Jun 16, 2006 at 11:11:59 -0700,\n Benjamin Arai <[email protected]> wrote:\n> Hi,\n> \n> Thanks for the reply. I have one more question. Does it matter in \n> which order that I make the index?\n\nPlease keep replies copied to the lists so that other people can learn from\nand crontibute to the discussion.\n\nIn this case I am just going to copy back to the performance list, since it\nis generally better for perfomance questions than the general list.\n\n> For example, should I create an index cusip,date or date,cusip, does \n> it matter which order. My goal is to cluster the entries by cusip, \n> then for each cusip order the data by date (maybe the order by data \n> occurs automatically). Hm, in that case maybe I only need to cluster \n> by cusip, but then how do I ensure that each cusip had its data ordered by\ndate?\n\nI think that you want to order by cusip (assuming that corresponds to \"name\"\nin you sample query below) first. You won't end up having to go through\nvalues in the index that will be filtered out if you do it that way.\n\nThe documentation for the cluster command says that it clusters on indexes,\nnot columns. So if the index is on (cusip, date), then the records will be\nordered by cusip, date immediately after the cluster. 
(New records added\nafter the cluster are not guarenteed to be ordered by the index.)\n\n> \n> Benjamin\n> \n> -----Original Message-----\n> From: Bruno Wolff III [mailto:[email protected]]\n> Sent: Friday, June 16, 2006 8:32 AM\n> To: Benjamin Arai\n> Cc: [email protected]; [email protected]\n> Subject: Re: Question about clustering multiple columns\n> \n> On Tue, Jun 13, 2006 at 09:04:15 -0700,\n> Benjamin Arai <[email protected]> wrote:\n> > Hi,\n> > \n> > I have a database where there are three columns (name,date,data). \n> > The queries are almost always something like SELECT date,data FROM \n> > table WHERE name=blah AND date > 1/1/2005 AND date < 1/1/2006;. I \n> > currently have three B-tree indexes, one for each of the columns. \n> > Is clustering on date index going to be what I want, or do I need a \n> > index that contains both name and date?\n> \n> I would expect that clustering on the name would be better for the \n> above query.\n> You probably want an index on name and date combined.\n> \n> \n> \n\n!DSPAM:4492fdfd193631139819016!\n\n", "msg_date": "Fri, 16 Jun 2006 11:55:38 -0700", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about clustering multiple columns" }, { "msg_contents": "On Fri, Jun 16, 2006 at 11:11:59 -0700,\n Benjamin Arai <[email protected]> wrote:\n> Hi,\n> \n> Thanks for the reply. I have one more question. Does it matter in which\n> order that I make the index?\n\nPlease keep replies copied to the lists so that other people can learn from\nand crontibute to the discussion.\n\nIn this case I am just going to copy back to the performance list, since it\nis generally better for perfomance questions than the general list.\n\n> For example, should I create an index cusip,date or date,cusip, does it\n> matter which order. My goal is to cluster the entries by cusip, then for\n> each cusip order the data by date (maybe the order by data occurs\n> automatically). Hm, in that case maybe I only need to cluster by cusip, but\n> then how do I ensure that each cusip had its data ordered by date?\n\nI think that you want to order by cusip (assuming that corresponds to \"name\"\nin you sample query below) first. You won't end up having to go through values\nin the index that will be filtered out if you do it that way.\n\nThe documentation for the cluster command says that it clusters on indexes,\nnot columns. So if the index is on (cusip, date), then the records will be\nordered by cusip, date immediately after the cluster. (New records added \nafter the cluster are not guarenteed to be ordered by the index.)\n\n> \n> Benjamin\n> \n> -----Original Message-----\n> From: Bruno Wolff III [mailto:[email protected]] \n> Sent: Friday, June 16, 2006 8:32 AM\n> To: Benjamin Arai\n> Cc: [email protected]; [email protected]\n> Subject: Re: Question about clustering multiple columns\n> \n> On Tue, Jun 13, 2006 at 09:04:15 -0700,\n> Benjamin Arai <[email protected]> wrote:\n> > Hi,\n> > \n> > I have a database where there are three columns (name,date,data). The \n> > queries are almost always something like SELECT date,data FROM table \n> > WHERE name=blah AND date > 1/1/2005 AND date < 1/1/2006;. I currently \n> > have three B-tree indexes, one for each of the columns. 
Is clustering \n> > on date index going to be what I want, or do I need a index that \n> > contains both name and date?\n> \n> I would expect that clustering on the name would be better for the above\n> query.\n> You probably want an index on name and date combined.\n> \n> !DSPAM:4492ce0d180368658827628!\n> \n", "msg_date": "Fri, 16 Jun 2006 13:55:56 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about clustering multiple columns" } ]
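A minimal sketch of the combined index and CLUSTER suggested above, using the (name, date) columns from the original question; the table and index names are illustrative, and the CLUSTER syntax shown is the pre-8.3 form current at the time:

    CREATE INDEX subs_name_date_idx ON subs (name, date);
    CLUSTER subs_name_date_idx ON subs;   -- pre-8.3 syntax: CLUSTER indexname ON tablename
    ANALYZE subs;

    -- the query pattern this index serves
    SELECT date, data
      FROM subs
     WHERE name = 'blah'
       AND date > '2005-01-01' AND date < '2006-01-01';

As noted above, rows inserted after the CLUSTER are not kept in index order, so the command has to be rerun periodically if physical ordering matters.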
[ { "msg_contents": "Just so I don't think I'm insane:\n\nwarehouse=# explain analyze select e.event_date::date\nwarehouse-# from l_event_log e\nwarehouse-# JOIN c_event_type t ON (t.id = e.event_type_id)\nwarehouse-# WHERE e.event_date > now() - interval '2 days'\nwarehouse-# AND t.event_name = 'activation';\n \nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=9.22..2723869.56 rows=268505 width=8) (actual\ntime=107.324..408.466 rows=815 loops=1)\n Hash Cond: (\"outer\".event_type_id = \"inner\".id)\n -> Index Scan using idx_evt_dt on l_event_log e \n(cost=0.00..2641742.75 rows=15752255 width=12) (actual\ntime=0.034..229.641 rows=38923 loops=1)\n Index Cond: (event_date > (now() - '2 days'::interval))\n -> Hash (cost=9.21..9.21 rows=3 width=4) (actual time=0.392..0.392\nrows=0 loops=1)\n -> Index Scan using pk_c_event_type on c_event_type t \n(cost=0.00..9.21 rows=3 width=4) (actual time=0.071..0.353 rows=6\nloops=1)\n Filter: ((event_name)::text = 'activation'::text)\n Total runtime: 412.015 ms\n(8 rows)\n\n\nAm I correct in assuming this terrible plan is due to our ancient\nversion of Postgres?\nThis plan is so bad, the system prefers a sequence scan on our 27M row\ntable with dates\nspanning 4 years. 2 days should come back instantly. Both tables are\nfreshly vacuumed\nand analyzed, so I'll just chalk this up to 7.4 sucking unless someone\nsays otherwise.\n\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\nConfidentiality Note:\n\nThe document(s) accompanying this e-mail transmission, if any, and the\ne-mail transmittal message contain information from Leapfrog Online\nCustomer Acquisition, LLC is confidential or privileged. The information\nis intended to be for the use of the individual(s) or entity(ies) named\non this e-mail transmission message. If you are not the intended\nrecipient, be aware that any disclosure, copying, distribution or use of\nthe contents of this e-mail is prohibited. 
If you have received this\ne-mail in error, please immediately delete this e-mail and notify us by\ntelephone of the error\n", "msg_date": "Tue, 13 Jun 2006 12:32:19 -0500", "msg_from": "\"Shaun Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Confirmation of bad query plan generated by 7.4 tree" }, { "msg_contents": "\"Shaun Thomas\" <[email protected]> writes:\n> Am I correct in assuming this terrible plan is due to our ancient\n> version of Postgres?\n\nI missed the part where you explain why you think this plan is terrible?\n412ms for what seems a rather expensive query doesn't sound so awful.\nDo you know an alternative that is better?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 14:09:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 tree " }, { "msg_contents": "> warehouse-# WHERE e.event_date > now() - interval '2 days'\n\nTry explicitly querying:\nWHERE e.event_date > '2006-06-11 20:15:00'\n\nIn my understanding 7.4 does not precalculate this timestamp value for the\npurpose of choosing a plan.\n\nGreetings\nMarcin\n\n", "msg_date": "Tue, 13 Jun 2006 20:17:46 +0200", "msg_from": "\"Marcin Mank\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 tree" }, { "msg_contents": ">>> On 6/13/2006 at 1:09 PM, Tom Lane <[email protected]> wrote:\n\n> I missed the part where you explain why you think this plan is\nterrible?\n> 412ms for what seems a rather expensive query doesn't sound so\nawful.\n\nSorry, I based that statement on the estimated/actual disparity. That\nparticular query plan is not terrible in its results, but look at the\nestimates and how viciously the explain analyze corrects the values.\n\nHere's an example:\n\n -> Index Scan using idx_evt_dt on l_event_log e \n (cost=0.00..2641742.75 rows=15752255 width=12)\n (actual time=0.034..229.641 rows=38923 loops=1)\n\nrows=15752255 ? That's over half the 27M row table. As expected, the\n*actual* match is much, much lower at 38923. As it turns out, Marcin\nwas right. Simply changing:\n\nnow() - interval '2 days'\n\nto\n\n'2006-06-11 15:30:00'\n\ngenerated a much more accurate set of estimates. I have to assume\nthat\n7.4 is incapable of that optimization step. Now that I know this, I\nplan on modifying my stored proc to calculate the value before\ninserting\nit into the query.\n\nThanks!\n\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\nConfidentiality Note:\n\nThe document(s) accompanying this e-mail transmission, if any, and the\ne-mail transmittal message contain information from Leapfrog Online\nCustomer Acquisition, LLC is confidential or privileged. The information\nis intended to be for the use of the individual(s) or entity(ies) named\non this e-mail transmission message. If you are not the intended\nrecipient, be aware that any disclosure, copying, distribution or use of\nthe contents of this e-mail is prohibited. 
If you have received this\ne-mail in error, please immediately delete this e-mail and notify us by\ntelephone of the error\n", "msg_date": "Tue, 13 Jun 2006 15:54:44 -0500", "msg_from": "\"Shaun Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "\"Shaun Thomas\" <[email protected]> writes:\n> Simply changing:\n> now() - interval '2 days'\n> to\n> '2006-06-11 15:30:00'\n> generated a much more accurate set of estimates.\n\nYeah, 7.4 won't risk basing estimates on the results of non-immutable\nfunctions. We relaxed that in 8.0 I believe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 17:07:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 tree " }, { "msg_contents": "On Tue, Jun 13, 2006 at 03:54:44PM -0500, Shaun Thomas wrote:\n> >>> On 6/13/2006 at 1:09 PM, Tom Lane <[email protected]> wrote:\n> \n> > I missed the part where you explain why you think this plan is\n> terrible?\n> > 412ms for what seems a rather expensive query doesn't sound so\n> awful.\n> \n> Sorry, I based that statement on the estimated/actual disparity. That\n> particular query plan is not terrible in its results, but look at the\n> estimates and how viciously the explain analyze corrects the values.\n> \n> Here's an example:\n> \n> -> Index Scan using idx_evt_dt on l_event_log e \n> (cost=0.00..2641742.75 rows=15752255 width=12)\n> (actual time=0.034..229.641 rows=38923 loops=1)\n> \n> rows=15752255 ? That's over half the 27M row table. As expected, the\n> *actual* match is much, much lower at 38923. As it turns out, Marcin\n> was right. Simply changing:\n> \n> now() - interval '2 days'\n> \n> to\n> \n> '2006-06-11 15:30:00'\n> \n> generated a much more accurate set of estimates. I have to assume\n> that\n> 7.4 is incapable of that optimization step. Now that I know this, I\n> plan on modifying my stored proc to calculate the value before\n> inserting\n> it into the query.\n\nIs there some compelling reason to stick with 7.4? In my experience\nyou'll see around double (+100%) the performance going to 8.1...\n\nAlso, I'm not sure that the behavior is entirely changed, either. On a\n8.1.4 database I'm still seeing a difference between now() - interval\nand a hard-coded date.\n\nWhat's your stats target set to for that table?\n\n> -- \n> Shaun Thomas\n> Database Administrator\n> \n> Leapfrog Online \n> 807 Greenwood Street \n> Evanston, IL 60201 \n\nHeh, I grew up 3 miles from there. In fact, IIRC my old dentist is/was\nat 807 Davis.\n\n> Tel. 847-440-8253\n> Fax. 847-570-5750\n> www.leapfrogonline.com\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 16:13:47 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": ">>> On 6/13/2006 at 4:13 PM, \"Jim C. Nasby\" <[email protected]>\nwrote:\n\n\n> Is there some compelling reason to stick with 7.4? In my experience\n> you'll see around double (+100%) the performance going to 8.1...\n\nNot really. We *really* want to upgrade, but we're in the middle of\nbuying the new machine right now. 
There's also the issue of migrating\n37GB of data which I don't look forward to, considering we'll need to\nset up a slony replication for the entire thing to avoid the hours \nof downtime necessary for a full dump/restore.\n\n> What's your stats target set to for that table?\n\nNot sure what you mean by that. It's just that this table has 27M\nrows\nextending over 4 years, and I'm not quite sure how to hint to that.\nAn index scan for a few days would be a tiny fraction of the entire\ntable, so PG being insistent on the sequence scans was confusing the\nhell\nout of me.\n\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\nConfidentiality Note:\n\nThe document(s) accompanying this e-mail transmission, if any, and the\ne-mail transmittal message contain information from Leapfrog Online\nCustomer Acquisition, LLC is confidential or privileged. The information\nis intended to be for the use of the individual(s) or entity(ies) named\non this e-mail transmission message. If you are not the intended\nrecipient, be aware that any disclosure, copying, distribution or use of\nthe contents of this e-mail is prohibited. If you have received this\ne-mail in error, please immediately delete this e-mail and notify us by\ntelephone of the error\n", "msg_date": "Tue, 13 Jun 2006 16:35:41 -0500", "msg_from": "\"Shaun Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "On Tue, Jun 13, 2006 at 04:35:41PM -0500, Shaun Thomas wrote:\n> >>> On 6/13/2006 at 4:13 PM, \"Jim C. Nasby\" <[email protected]>\n> wrote:\n> \n> \n> > Is there some compelling reason to stick with 7.4? In my experience\n> > you'll see around double (+100%) the performance going to 8.1...\n> \n> Not really. We *really* want to upgrade, but we're in the middle of\n> buying the new machine right now. There's also the issue of migrating\n> 37GB of data which I don't look forward to, considering we'll need to\n> set up a slony replication for the entire thing to avoid the hours \n> of downtime necessary for a full dump/restore.\n \nAs long as the master isn't very heavily loaded it shouldn't be that big\na deal to do so...\n\n> > What's your stats target set to for that table?\n> \n> Not sure what you mean by that. It's just that this table has 27M\n> rows\n> extending over 4 years, and I'm not quite sure how to hint to that.\n> An index scan for a few days would be a tiny fraction of the entire\n> table, so PG being insistent on the sequence scans was confusing the\n> hell\n> out of me.\n\nWhat's the output of\nSELECT attname, attstattarget\n FROM pg_attribute\n WHERE attrelid='table_name'::regclass AND attnum >= 0;\nand\nSHOW default_statistics_target;\n\n?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 16:54:19 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Also, I'm not sure that the behavior is entirely changed, either. 
On a\n> 8.1.4 database I'm still seeing a difference between now() - interval\n> and a hard-coded date.\n\nIt'd depend on the context, possibly, but it's easy to show that the\ncurrent planner does fold \"now() - interval_constant\" when making\nestimates. Simple example:\n\n-- create and populate 1000-row table:\n\nregression=# create table t1 (f1 timestamptz);\nCREATE TABLE\nregression=# insert into t1 select now() - x * interval '1 day' from generate_series(1,1000) x;\nINSERT 0 1000\n\n-- default estimate is pretty awful:\n\nregression=# explain select * from t1 where f1 > now();\n QUERY PLAN \n-----------------------------------------------------\n Seq Scan on t1 (cost=0.00..39.10 rows=647 width=8)\n Filter: (f1 > now())\n(2 rows)\n\nregression=# vacuum t1;\nVACUUM\n\n-- now the planner at least knows how many rows in the table with some\n-- accuracy, but with no stats it's still falling back on a default\n-- selectivity estimate:\n\nregression=# explain select * from t1 where f1 > now();\n QUERY PLAN \n-----------------------------------------------------\n Seq Scan on t1 (cost=0.00..21.00 rows=333 width=8)\n Filter: (f1 > now())\n(2 rows)\n\n-- and the default doesn't really care what the comparison value is:\n\nregression=# explain select * from t1 where f1 > now() - interval '10 days';\n QUERY PLAN \n-----------------------------------------------------\n Seq Scan on t1 (cost=0.00..23.50 rows=333 width=8)\n Filter: (f1 > (now() - '10 days'::interval))\n(2 rows)\n\n-- but let's give it some stats:\n\nregression=# vacuum analyze t1;\nVACUUM\n\n-- and things get better:\n\nregression=# explain select * from t1 where f1 > now() - interval '10 days';\n QUERY PLAN \n---------------------------------------------------\n Seq Scan on t1 (cost=0.00..23.50 rows=9 width=8)\n Filter: (f1 > (now() - '10 days'::interval))\n(2 rows)\n\n7.4 would still be saying \"rows=333\" in the last case, because it's\nfalling back on DEFAULT_INEQ_SEL whenever the comparison value isn't\nstrictly constant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 18:04:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 " }, { "msg_contents": "On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Also, I'm not sure that the behavior is entirely changed, either. On a\n> > 8.1.4 database I'm still seeing a difference between now() - interval\n> > and a hard-coded date.\n> \n> It'd depend on the context, possibly, but it's easy to show that the\n> current planner does fold \"now() - interval_constant\" when making\n> estimates. Simple example:\n\nTurns out the difference is between feeding a date vs a timestamp into the\nquery... 
I would have thought that since date is a date that the WHERE clause\nwould be casted to a date if it was a timestamptz, but I guess not...\n\nstats=# explain select * from email_contrib where project_id=8 and date >= now()-'15 days'::interval;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.01..45405.83 rows=14225 width=24)\n Index Cond: ((project_id = 8) AND (date >= (now() - '15 days'::interval)))\n(2 rows)\n\nstats=# explain select * from email_contrib where project_id=8 AND date >= '2006-05-29 22:09:56.814897+00'::date;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.00..48951.74 rows=15336 width=24)\n Index Cond: ((project_id = 8) AND (date >= '2006-05-29'::date))\n(2 rows)\n\nstats=# explain select * from email_contrib where project_id=8 AND date >= '2006-05-29 22:09:56.814897+00'::timestamp;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.00..45472.76 rows=14246 width=24)\n Index Cond: ((project_id = 8) AND (date >= '2006-05-29 22:09:56.814897'::timestamp without time zone))\n(2 rows)\n\nActual row count is 109071; reason for the vast difference is querying on two columns.\n\nI know comming up with general-purpose multicolumn stats is extremely\ndifficult, but can't we at least add histograms for multi-column indexes?? In\nthis case that would most likely make the estimate dead-on, because there's an\nindex on project_id, date.\n\nDetails below for the morbidly curious/bored...\n\nstats=# \\d email_contrib\n Table \"public.email_contrib\"\n Column | Type | Modifiers \n------------+---------+-----------\n project_id | integer | not null\n id | integer | not null\n date | date | not null\n team_id | integer | \n work_units | bigint | not null\nIndexes:\n \"email_contrib_pkey\" PRIMARY KEY, btree (project_id, id, date), tablespace \"raid10\"\n \"email_contrib__pk24\" btree (id, date) WHERE project_id = 24, tablespace \"raid10\"\n \"email_contrib__pk25\" btree (id, date) WHERE project_id = 25, tablespace \"raid10\"\n \"email_contrib__pk8\" btree (id, date) WHERE project_id = 8, tablespace \"raid10\"\n \"email_contrib__project_date\" btree (project_id, date), tablespace \"raid10\"\n \"email_contrib__project_id\" btree (project_id), tablespace \"raid10\"\n \"email_contrib__team_id\" btree (team_id), tablespace \"raid10\"\nForeign-key constraints:\n \"fk_email_contrib__id\" FOREIGN KEY (id) REFERENCES stats_participant(id) ON UPDATE CASCADE\n \"fk_email_contrib__team_id\" FOREIGN KEY (team_id) REFERENCES stats_team(team) ON UPDATE CASCADE\nTablespace: \"raid10\"\n\nstats=# explain analyze select * from email_contrib where project_id=8 and date >= now()-'15 days'::interval;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.01..45475.95 rows=14247 width=24) (actual time=0.294..264.345 rows=109071 loops=1)\n Index Cond: ((project_id = 8) AND (date >= (now() - '15 days'::interval)))\n Total runtime: 412.167 ms\n(3 rows)\n\nstats=# select now()-'15 days'::interval;\n 
?column? \n-------------------------------\n 2006-05-29 22:09:56.814897+00\n(1 row)\n\nstats=# explain analyze select * from email_contrib where project_id=8 and date >= '2006-05-29 22:09:56.814897+00';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.00..48951.74 rows=15336 width=24) (actual time=0.124..229.800 rows=116828 loops=1)\n Index Cond: ((project_id = 8) AND (date >= '2006-05-29'::date))\n Total runtime: 391.240 ms\n(3 rows)\n\nstats=# explain select * from email_contrib where project_id=8 and date >= '2006-05-29 22:09:56.814897+00'::date;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib__project_date on email_contrib (cost=0.00..48951.74 rows=15336 width=24)\n Index Cond: ((project_id = 8) AND (date >= '2006-05-29'::date))\n(2 rows)\n\nSo casting to date doesn't change anything, but dropping project_id from the\nwhere clause certainly does...\n\nstats=# explain analyze select * from email_contrib where date >= now()-'15 days'::interval;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_contrib (cost=847355.98..1256538.96 rows=152552 width=24) (actual time=74886.028..75267.633 rows=148894 loops=1)\n Recheck Cond: (date >= (now() - '15 days'::interval))\n -> Bitmap Index Scan on email_contrib__project_date (cost=0.00..847355.98 rows=152552 width=0) (actual time=74885.690..74885.690 rows=148894 loops=1)\n Index Cond: (date >= (now() - '15 days'::interval))\n Total runtime: 75472.490 ms\n(5 rows)\n\nThat estimate is dead-on. So it appears it's yet another case of cross-column\nstats. :( But there's still a difference between now()-interval and something hard-coded:\n\nstats=# explain analyze select * from email_contrib where date >= '2006-05-29 22:09:56.814897+00'::date;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_contrib (cost=847355.98..1278756.22 rows=164256 width=24) (actual time=19356.752..19623.450 rows=159348 loops=1)\n Recheck Cond: (date >= '2006-05-29'::date)\n -> Bitmap Index Scan on email_contrib__project_date (cost=0.00..847355.98 rows=164256 width=0) (actual time=19356.391..19356.391 rows=159348 loops=1)\n Index Cond: (date >= '2006-05-29'::date)\n Total runtime: 19841.614 ms\n(5 rows)\n\nstats=# explain analyze select * from email_contrib where date >= (now()-'15 days'::interval)::date;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_contrib (cost=847355.98..1279988.15 rows=164256 width=24) (actual time=19099.417..19372.167 rows=159348 loops=1)\n Recheck Cond: (date >= ((now() - '15 days'::interval))::date)\n -> Bitmap Index Scan on email_contrib__project_date (cost=0.00..847355.98 rows=164256 width=0) (actual time=19099.057..19099.057 rows=159348 loops=1)\n Index Cond: (date >= ((now() - '15 days'::interval))::date)\n Total runtime: 19589.785 ms\n\nAha! 
It's the casting to date that changes things.\n\nThe stats target is 100...\n\nstats=# select attname, n_distinct from pg_stats where tablename='email_contrib';\n attname | n_distinct \n------------+------------\n project_id | 6\n team_id | 4104\n work_units | 6795\n date | 3034\n id | 35301\n(5 rows)\n\nThe n_distinct for project_id and date both look about right.\n\nstats=# select * from pg_stats where tablename='email_contrib' and attname='project_id';\n-[ RECORD 1 ]-----+------------------------------------------------------------\nschemaname | public\ntablename | email_contrib\nattname | project_id\nnull_frac | 0\navg_width | 4\nn_distinct | 6\nmost_common_vals | {205,5,8,25,24,3}\nmost_common_freqs | {0.4273,0.419833,0.0933667,0.0514667,0.00506667,0.00296667}\nhistogram_bounds | \ncorrelation | 0.605662\n\nstats=# select relpages,reltuples from pg_class where relname='email_contrib';\n relpages | reltuples \n----------+-------------\n 996524 | 1.35509e+08\n\nIf we look at how many rows would match project_id 8 and any 15 dates...\n\nstats=# SELECT 1.35509e+08 * 0.0933667 / 3034 * 15;\n ?column? \n------------------------\n 62551.2268472313777195\n\nWe come up with something much closer to reality (116828 rows). I guess the\nproblem is in the histogram for date; where the last 3 values are:\n\n2005-11-02,2006-03-05,2006-06-11\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 17:39:51 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": ">>> On 6/13/2006 at 4:54 PM, \"Jim C. Nasby\" <[email protected]>\nwrote:\n\n> SELECT attname, attstattarget\n> FROM pg_attribute\n> WHERE attrelid='table_name'::regclass AND attnum >= 0;\n\n-1 for all values.\n\n> SHOW default_statistics_target;\n\n10.\n\nHere's something slightly annoying: I tried precalculating the value\nin my stored proc, and it's still ignoring it.\n\nlastTime := now() - interval ''7 days'';\n\nUPDATE fact_credit_app\n SET activated_date_id = ad.date_id\n FROM l_event_log e\n JOIN c_event_type t ON (t.id = e.event_type_id)\n JOIN wf_date ad ON (e.event_date::date=ad.datestamp)\n WHERE e.ext_id=fact_credit_app.unique_id\n AND t.event_name = ''activation''\n AND e.event_date > lastTime\n AND fact_credit_app.activated_date_id IS NULL;\n\nInstead of taking a handful of seconds (like when I replace\nlastTime with the text equivalent), it takes 10 minutes...\nI can see the planner not liking the results of a function,\nbut a variable? That's a static value! ::cry::\n\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\nConfidentiality Note:\n\nThe document(s) accompanying this e-mail transmission, if any, and the\ne-mail transmittal message contain information from Leapfrog Online\nCustomer Acquisition, LLC is confidential or privileged. The information\nis intended to be for the use of the individual(s) or entity(ies) named\non this e-mail transmission message. If you are not the intended\nrecipient, be aware that any disclosure, copying, distribution or use of\nthe contents of this e-mail is prohibited. 
If you have received this\ne-mail in error, please immediately delete this e-mail and notify us by\ntelephone of the error\n", "msg_date": "Tue, 13 Jun 2006 17:41:06 -0500", "msg_from": "\"Shaun Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "On Tue, Jun 13, 2006 at 05:41:06PM -0500, Shaun Thomas wrote:\n> >>> On 6/13/2006 at 4:54 PM, \"Jim C. Nasby\" <[email protected]>\n> wrote:\n> \n> > SELECT attname, attstattarget\n> > FROM pg_attribute\n> > WHERE attrelid='table_name'::regclass AND attnum >= 0;\n> \n> -1 for all values.\n> \n> > SHOW default_statistics_target;\n> \n> 10.\n \nIncreasing the statistics target for that table (or\ndefault_statistics_target) might help. I'd try between 50 and 100.\n\n> Here's something slightly annoying: I tried precalculating the value\n> in my stored proc, and it's still ignoring it.\n> \n> lastTime := now() - interval ''7 days'';\n> \n> UPDATE fact_credit_app\n> SET activated_date_id = ad.date_id\n> FROM l_event_log e\n> JOIN c_event_type t ON (t.id = e.event_type_id)\n> JOIN wf_date ad ON (e.event_date::date=ad.datestamp)\n> WHERE e.ext_id=fact_credit_app.unique_id\n> AND t.event_name = ''activation''\n> AND e.event_date > lastTime\n> AND fact_credit_app.activated_date_id IS NULL;\n> \n> Instead of taking a handful of seconds (like when I replace\n> lastTime with the text equivalent), it takes 10 minutes...\n> I can see the planner not liking the results of a function,\n> but a variable? That's a static value! ::cry::\n\nIf you're using plpgsql, it should be turning that update into a\nprepared statement and then binding the variable to it. That means that\nif you pass in different values in the same session, you could end up\nwith bad plans depending on the valuse, since it will cache the query\nplan.\n\nActually, come to think of it... I'm not sure if bound parameters are\nused in query planning...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 17:54:23 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:\n>> It'd depend on the context, possibly, but it's easy to show that the\n>> current planner does fold \"now() - interval_constant\" when making\n>> estimates. Simple example:\n\n> Turns out the difference is between feeding a date vs a timestamp into the\n> query... I would have thought that since date is a date that the WHERE clause\n> would be casted to a date if it was a timestamptz, but I guess not...\n\nHmm ... worksforme. Could you provide a complete test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 21:50:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 " }, { "msg_contents": "\"Shaun Thomas\" <[email protected]> writes:\n> I can see the planner not liking the results of a function,\n> but a variable? That's a static value!\n\nRead what you wrote, and rethink...\n\nIf you're desperate you can construct a query string with the variable\nvalue embedded as a literal, and then EXECUTE that. 
This isn't a great\nsolution since it forces a re-plan on every execution. We've\noccasionally debated ways to do it better, but no such improvement will\never appear in 7.4 ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jun 2006 22:13:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 " }, { "msg_contents": "On Tue, Jun 13, 2006 at 09:50:49PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:\n> >> It'd depend on the context, possibly, but it's easy to show that the\n> >> current planner does fold \"now() - interval_constant\" when making\n> >> estimates. Simple example:\n> \n> > Turns out the difference is between feeding a date vs a timestamp into the\n> > query... I would have thought that since date is a date that the WHERE clause\n> > would be casted to a date if it was a timestamptz, but I guess not...\n> \n> Hmm ... worksforme. Could you provide a complete test case?\n\nI can't provide the data I used for that, but I'll try and come up with\nsomething else.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 14 Jun 2006 08:57:40 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": ">>> On 6/13/2006 at 9:13 PM, Tom Lane <[email protected]> wrote:\n\n> Read what you wrote, and rethink...\n\nHah. Yes, I understand the irony of that statement, but the point is\nthat the value of the variable won't change during query execution.\n\n> If you're desperate you can construct a query string with the\nvariable\n> value embedded as a literal, and then EXECUTE that. This isn't a\ngreat\n> solution since it forces a re-plan on every execution.\n\nThat's so gross... but it might work. I'm not really desperate, just\nfrustrated. I really can't wait until we can upgrade; 7.4 is driving\nme nuts. I'm not really worried about a re-plan, since this SP just\nupdates a fact table, so it only gets called twice a day. Cutting the\nexecution time of the SP down to < 20 seconds from 15 minutes would be\nnice, but not absolutely required. I was just surprised at the large\ndifference in manual execution as opposed to the SP with the same\nquery.\n\n> We've occasionally debated ways to do it better, but no such\n> improvement will ever appear in 7.4 ;-)\n\nAgreed! When we finally upgrade, I fully plan on putting a symbolic\nbullet into our old installation. ;)\n\nThanks!\n\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\nConfidentiality Note:\n\nThe document(s) accompanying this e-mail transmission, if any, and the\ne-mail transmittal message contain information from Leapfrog Online\nCustomer Acquisition, LLC is confidential or privileged. The information\nis intended to be for the use of the individual(s) or entity(ies) named\non this e-mail transmission message. If you are not the intended\nrecipient, be aware that any disclosure, copying, distribution or use of\nthe contents of this e-mail is prohibited. 
If you have received this\ne-mail in error, please immediately delete this e-mail and notify us by\ntelephone of the error\n", "msg_date": "Wed, 14 Jun 2006 09:32:04 -0500", "msg_from": "\"Shaun Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" }, { "msg_contents": "On Jun 13, 2006, at 8:50 PM, Tom Lane wrote:\n\n> \"Jim C. Nasby\" <[email protected]> writes:\n>> On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:\n>>> It'd depend on the context, possibly, but it's easy to show that the\n>>> current planner does fold \"now() - interval_constant\" when making\n>>> estimates. Simple example:\n>\n>> Turns out the difference is between feeding a date vs a timestamp \n>> into the\n>> query... I would have thought that since date is a date that the \n>> WHERE clause\n>> would be casted to a date if it was a timestamptz, but I guess not...\n>\n> Hmm ... worksforme. Could you provide a complete test case?\n\ndecibel=# create table date_test(d date not null, i int not null);\nCREATE TABLE\ndecibel=# insert into date_test select now()-x*'1 day'::interval, i \nfrom generate_series(0,3000) x, generate_series(1,100000) i;\nINSERT 0 300100000\ndecibel=# analyze verbose date_test;\nINFO: analyzing \"decibel.date_test\"\nINFO: \"date_test\": scanned 30000 of 1622163 pages, containing \n5550000 live rows and 0 dead rows; 30000 rows in sample, 300100155 \nestimated total rows\nANALYZE\ndecibel=# explain select * from date_test where d >= now()-'15 \ndays'::interval;\n QUERY PLAN\n---------------------------------------------------------------------\nSeq Scan on date_test (cost=0.00..6873915.80 rows=1228164 width=8)\n Filter: (d >= (now() - '15 days'::interval))\n(2 rows)\n\ndecibel=# explain select * from date_test where d >= (now()-'15 \ndays'::interval)::date;\n QUERY PLAN\n---------------------------------------------------------------------\nSeq Scan on date_test (cost=0.00..7624166.20 rows=1306467 width=8)\n Filter: (d >= ((now() - '15 days'::interval))::date)\n(2 rows)\n\ndecibel=# select version();\n version\n------------------------------------------------------------------------ \n-------------------------\nPostgreSQL 8.1.4 on amd64-portbld-freebsd6.0, compiled by GCC cc \n(GCC) 3.4.4 [FreeBSD] 20050518\n(1 row)\n\ndecibel=#\n\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Wed, 14 Jun 2006 16:36:00 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 " }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On Jun 13, 2006, at 8:50 PM, Tom Lane wrote:\n>> Hmm ... worksforme. Could you provide a complete test case?\n\n> decibel=# create table date_test(d date not null, i int not null);\n> [etc]\n\nNot sure what you are driving at. The estimates are clearly not\ndefaults (the default estimate would be 1/3rd of the table, or\nabout 100mil rows). Are you expecting them to be the same? 
If so why?\nThe comparison values are slightly different after all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2006 22:36:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4 " }, { "msg_contents": "On Wed, Jun 14, 2006 at 10:36:55PM -0400, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n> > On Jun 13, 2006, at 8:50 PM, Tom Lane wrote:\n> >> Hmm ... worksforme. Could you provide a complete test case?\n> \n> > decibel=# create table date_test(d date not null, i int not null);\n> > [etc]\n> \n> Not sure what you are driving at. The estimates are clearly not\n> defaults (the default estimate would be 1/3rd of the table, or\n> about 100mil rows). Are you expecting them to be the same? If so why?\n> The comparison values are slightly different after all.\n\nYes... I was expecting that since we're looking at a date field that the\ntimestamp would get cast to a date. Sorry I wasn't clear on that...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 15 Jun 2006 10:50:00 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirmation of bad query plan generated by 7.4" } ]
[ { "msg_contents": "I have a client who is running Postgresql 7.4.x series database\n(required to use 7.4.x). They are planning an upgrade to a new server.\nThey are insistent on Dell.\n\nI have personal experience with AMD dual Opteron, but I have not seen\nany benchmarks on Intel's dual core Xeon. I've seen in the past Dell and\nnot performed well as well as Xeon's HT issues.\n\nCan anyone share what their experience has been with Intel's dual core\nCPUs and/or Dell's new servers?\n\nI am hoping the client is willing to wait for Dell to ship a AMD\nOpeteron-based server.\n\nThanks.\n\nSteve Poe\n\n", "msg_date": "Tue, 13 Jun 2006 11:02:40 -0700", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Which processor runs better for Postgresql?" }, { "msg_contents": "[email protected] (Steve Poe) writes:\n> I have a client who is running Postgresql 7.4.x series database\n> (required to use 7.4.x). They are planning an upgrade to a new server.\n> They are insistent on Dell.\n\nThen they're being insistent on poor performance.\n\nIf you search for \"dell postgresql performance\" you'll find plenty of\nexamples of people who have been disappointed when they insisted on\nDell for PostgreSQL.\n\nHere is a *long* thread on the matter...\n<http://archives.postgresql.org/pgsql-performance/2004-12/msg00022.php>\n\n> I am hoping the client is willing to wait for Dell to ship a AMD\n> Opeteron-based server.\n\nBased on Dell's history, I would neither:\n\n a) Hold my breath, nor\n\n b) Expect an Opteron-based Dell server to perform as well as\n seemingly-equivalent servers provisioned by other vendors.\n\nWe got burned by some Celestica-built Opterons that didn't turn out\nquite as hoped.\n\nWe have had somewhat better results with some HP Opterons; they appear\nto be surviving less-than-ideal 3rd world data centre situations with\nreasonable aplomb. (Based on the amount of dust in their diet, I'm\nsomewhat surprised the disk drives are still running...)\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/nonrdbms.html\nWe are Pentium of Borg. Division is futile. You will be approximated.\n(seen in someone's .signature)\n", "msg_date": "Tue, 13 Jun 2006 14:22:06 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "On Tue, 2006-06-13 at 13:02, Steve Poe wrote:\n> I have a client who is running Postgresql 7.4.x series database\n> (required to use 7.4.x). They are planning an upgrade to a new server.\n> They are insistent on Dell.\n\nDo they have a logical reason for this, or is it mostly hand-waving? My\nexperience has been hand waving. Last company I was at, the CIO bragged\nabout having saved a million a year on server by going with Dell. His\nnumbers were made up, and, in fact, we spent a large portion of each\nweek babysitting those god awful 2600 series machines with adaptec cards\nand the serverworks chipset. And they were slow compared to anything\nelse with similar specs.\n\n> I have personal experience with AMD dual Opteron, but I have not seen\n> any benchmarks on Intel's dual core Xeon. I've seen in the past Dell and\n> not performed well as well as Xeon's HT issues.\n\nDells tend to perform poorly, period. They choose low end parts (the\n2600's Serverworks chipset is widely regarded as one of the slowest\nchipset for the P-IV there is.) 
and then mucking around with the BIOS of\nthe add in cards to make them somewhat stable with their dodgy hardware.\n\n> Can anyone share what their experience has been with Intel's dual core\n> CPUs and/or Dell's new servers?\n\nHaven't used the dual core Dells. Latest ones I've used are the dual\nXeon 2850 machines, which are at least stable, if still pretty pokey.\n\n> I am hoping the client is willing to wait for Dell to ship a AMD\n> Opeteron-based server.\n\nLet's just hope Dell hasn't spent all this time hamstringing a good chip\nwith low end, underperforming hardware, eh?\n\nMy suggestion is to look at something like this:\n\nhttp://www.abmx.com/1u-supermicro-amd-opteron-rackmount-server-p-210.html\n\n1U rackmount opteron from Supermicro that can have two dual core\nopterons and 4 drives and up to 16 gigs of ram. Supermicro server\nmotherboards have always treated me well and performed well too.\n", "msg_date": "Tue, 13 Jun 2006 14:00:02 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "\n>My suggestion is to look at something like this:\n>\n>http://www.abmx.com/1u-supermicro-amd-opteron-rackmount-server-p-210.html\n>\n>1U rackmount opteron from Supermicro that can have two dual core\n>opterons and 4 drives and up to 16 gigs of ram. Supermicro server\n>motherboards have always treated me well and performed well too.\n> \n>\nI've had good experience with similar machines from Tyan :\nhttp://www.tyan.com/products/html/gt24b2891.html\n\n\n\n\n", "msg_date": "Tue, 13 Jun 2006 13:11:28 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "David Boreham wrote:\n> \n> >My suggestion is to look at something like this:\n> >\n> >http://www.abmx.com/1u-supermicro-amd-opteron-rackmount-server-p-210.html\n> >\n> >1U rackmount opteron from Supermicro that can have two dual core\n> >opterons and 4 drives and up to 16 gigs of ram. Supermicro server\n> >motherboards have always treated me well and performed well too.\n> > \n> >\n> I've had good experience with similar machines from Tyan :\n> http://www.tyan.com/products/html/gt24b2891.html\n\nIn fact I think Tyan makes the Supermicro motherboards.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 13 Jun 2006 15:15:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, 2006-06-13 at 13:02, Steve Poe wrote:\n>> I have a client who is running Postgresql 7.4.x series database\n>> (required to use 7.4.x). They are planning an upgrade to a new server.\n>> They are insistent on Dell.\n> \n> Do they have a logical reason for this, or is it mostly hand-waving? \n\nThey probably do. They have probably standardized on Dell hardware. It \nis technically a dumb reason, but from a business standpoint it makes sense.\n\n My\n> experience has been hand waving. Last company I was at, the CIO bragged\n> about having saved a million a year on server by going with Dell. His\n> numbers were made up, and, in fact, we spent a large portion of each\n> week babysitting those god awful 2600 series machines with adaptec cards\n> and the serverworks chipset. 
And they were slow compared to anything\n> else with similar specs.\n\nYou can get extremely competitive quotes from IBM or HP as long as you \nsay, \"You are competing against Dell\".\n\n> Dells tend to perform poorly, period. They choose low end parts (the\n> 2600's Serverworks chipset is widely regarded as one of the slowest\n> chipset for the P-IV there is.) and then mucking around with the BIOS of\n> the add in cards to make them somewhat stable with their dodgy hardware.\n\nI can confirm this.\n\n>> I am hoping the client is willing to wait for Dell to ship a AMD\n>> Opeteron-based server.\n\nTell them to go with an HP DL 385. They will be much happier.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 13 Jun 2006 12:44:17 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "On Tue, Jun 13, 2006 at 12:44:17PM -0700, Joshua D. Drake wrote:\n> You can get extremely competitive quotes from IBM or HP as long as you \n> say, \"You are competing against Dell\".\n\nPossibly even more competitive from Sun...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 13 Jun 2006 16:17:51 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "On Jun 13, 2006, at 2:02 PM, Steve Poe wrote:\n\n>\n> Can anyone share what their experience has been with Intel's dual core\n> CPUs and/or Dell's new servers?\n\nI'm one of the few Dell fans around here... but I must say that I \ndon't buy them for my big DB servers specifically since they don't \ncurrently ship Opteron based systems. (I did call and thank my sales \nrep for pushing my case for them to do Opterons, though, since I'm \nsure they are doing it as a personal favor to me :-) )\n\nI just put up a pentium-D dual-core based system and it is pretty \nwickedly fast. it only has a pair of SATA drives on it and is used \nfor pre-production testing.\n\n>\n> I am hoping the client is willing to wait for Dell to ship a AMD\n> Opeteron-based server.\n\nDon't wait. It will be *months* before that happens. Go get a Sun \nX4100 and an external RAID array and be happy. These boxes are an \namazing work of engineering.", "msg_date": "Thu, 15 Jun 2006 12:22:31 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "Vivek,\n\nThanks for your feedback. Which Dell server did you purchase? \n\nThe client has a PowerEdge 2600 and they STILL want Dell. Again, if it\nwere my pocketbook, Dell would not be there. \n\nThe client has a 30GB DB. This is large for me, but probably not with\nyou. Also, I am advising the client to go to a 10+ disc array (from 3)\nand enough RAM to load half the DB into memory. 
\n\nSteve\n\n\n\n\nOn Thu, 2006-06-15 at 12:22 -0400, Vivek Khera wrote:\n> On Jun 13, 2006, at 2:02 PM, Steve Poe wrote:\n> \n> >\n> > Can anyone share what their experience has been with Intel's dual core\n> > CPUs and/or Dell's new servers?\n> \n> I'm one of the few Dell fans around here... but I must say that I \n> don't buy them for my big DB servers specifically since they don't \n> currently ship Opteron based systems. (I did call and thank my sales \n> rep for pushing my case for them to do Opterons, though, since I'm \n> sure they are doing it as a personal favor to me :-) )\n> \n> I just put up a pentium-D dual-core based system and it is pretty \n> wickedly fast. it only has a pair of SATA drives on it and is used \n> for pre-production testing.\n> \n> >\n> > I am hoping the client is willing to wait for Dell to ship a AMD\n> > Opeteron-based server.\n> \n> Don't wait. It will be *months* before that happens. Go get a Sun \n> X4100 and an external RAID array and be happy. These boxes are an \n> amazing work of engineering.\n> \n\n", "msg_date": "Thu, 15 Jun 2006 10:10:46 -0700", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "On Jun 15, 2006, at 1:10 PM, Steve Poe wrote:\n\n> Vivek,\n>\n> Thanks for your feedback. Which Dell server did you purchase?\n\nI have many many dell rackmounts: 1550, 1650, 1750, 1850, and SC1425 \nand throw in a couple of 2450.\n\nI *really* like the 1850 with built-in SCSI RAID. It is fast enough \nto be a replica of my primary bread and butter database running on a \nbeefy opteron system (using Slony-1 replication).\n\nThe SC1425 boxes make for good, cheap web front end servers. We buy \n'em in pairs and load balance them at the network layer using CARP.\n\nAt the office we have mostly SC400 series (400, 420, and 430) for our \nservers. The latest box is an SC430 with dual core pentium D and \ndual SATA drives running software mirror. It pushes over 20MB/s on \nthe disks, which is pretty impressive for the hardware.\n\n\n>\n> The client has a PowerEdge 2600 and they STILL want Dell. Again, if it\n> were my pocketbook, Dell would not be there.\n\nI lucked out and skipped the 2650 line, apparently :-)\n\nI used the 2450's as my DB servers and they were barely adequate once \nwe got beyond our startup phase, and moving them over to Opteron was \na godsend. I tried some small opteron systems vendor but had QC \nissues (1 of 5 systems stable), so went with Sun and have not looked \nback. I still buy Dell's for all other server purposes mainly \nbecause it is convenient in terms of purchasing and getting support \n(ie, business reasons).\n\nAnd I don't spend all my time babysitting these boxes, like others \nimply.\n\n\n>\n> The client has a 30GB DB. This is large for me, but probably not with\n> you. Also, I am advising the client to go to a 10+ disc array (from 3)\n> and enough RAM to load half the DB into memory.\n\n30GB DB on a 10 disk array seems overkill, considering that the \nsmallest disks you're going to get will be 36GB (or perhaps 72Gb by \nnow).", "msg_date": "Thu, 15 Jun 2006 13:47:35 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" } ]
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Joshua D. Drake\n> Sent: 13 June 2006 20:44\n> To: Scott Marlowe\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Which processor runs better for Postgresql?\n> \n> They probably do. They have probably standardized on Dell \n> hardware. It \n> is technically a dumb reason, but from a business standpoint \n> it makes sense.\n\nWe use Dell here for those reasons these days, but thankfully are able\nto suitably overspec everything to allow for significant growth and any\nminor performance issues that they may have (we've never seen any\nthough). In Dell's defence we've never had a single problem with the\n2850's or 1850's we're running which have all been rock solid. They also\nhave excellent OOB management in their DRAC cards - far better than that\nin the slightly older Intel boxes we also run. That is a big selling\npoint for us.\n\n> You can get extremely competitive quotes from IBM or HP as \n> long as you \n> say, \"You are competing against Dell\".\n\nDell beat them hands down in our experience - and yes, we have had\nnumerous quotes for HP and IBM kit, each of them knowing they are\ncompeting against Dell.\n\n> > Dells tend to perform poorly, period. They choose low end \n> parts (the\n> > 2600's Serverworks chipset is widely regarded as one of the slowest\n> > chipset for the P-IV there is.) and then mucking around \n> with the BIOS of\n> > the add in cards to make them somewhat stable with their \n> dodgy hardware.\n> \n> I can confirm this.\n\nAnd how old are the 2600's now?\n\nAnyhoo, I'm not saying the current machines are excellent performers or\nanything, but there are good business reasons to run them if you don't\nneed to squeeze out every last pony.\n\nRegards, Dave.\n", "msg_date": "Tue, 13 Jun 2006 21:11:24 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "Dave, Joshua, Scott (and all),\n\nThanks for your feedback, while I do appreciate it, I did not intent on\nmaking this discussion \"buy this instead\"...I whole-heartly agree with\nyou. Joshua, you made the best comment, it is a business decision for\nthe client. I don't agree with it, but I understand it. I've recommended\nSun or Penguin Computing which I've had no technical issues with. They\ndid not dispute my recommendation but they ignored it. I have not like\nDell, on the server side, since 1998 - 2000 time period.\n\nExcluding Dell's issues, has anyone seen performance differences between\nAMD's Opteron and Intel's new Xeon's (dual or quad CPU or dual-core). If\nanyone has done benchmark comparisons between them, any summary\ninformation would be appreciated.\n\nFor now, I am asking the client to hold-off and wait for the AMD Opteron\navailability on the Dell servers.\n\nThanks again.\n\nSteve\n\n\n\n", "msg_date": "Wed, 14 Jun 2006 08:03:25 -0700", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" 
}, { "msg_contents": "On Tue, 2006-06-13 at 15:11, Dave Page wrote:\n> \n> > -----Original Message-----\n\n> And how old are the 2600's now?\n> \n> Anyhoo, I'm not saying the current machines are excellent performers or\n> anything, but there are good business reasons to run them if you don't\n> need to squeeze out every last pony.\n\nJust thought I'd point you to Dell's forums.\n\nhttp://forums.us.dell.com/supportforums/board?board.id=pes_linux&page=1\n\nwherein you'll find plenty of folks who have problems with freezing RAID\ncontrollers with 28xx and 18xx machines. \n", "msg_date": "Wed, 14 Jun 2006 12:04:18 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" } ]
[ { "msg_contents": "hi\ni'm waiting for new server to arrive.\nfor the server i will get hp msa1000, with 14 discs (72g, 15000 rpm).\nwhat kind of partitioning you suggest, to get maximum performance?\nfor system things i will have separate discs, so whole array is only for\npostgresql.\n\ndata processing is oltp, but with large amounts of write operations.\n\nany hints?\n\nshould i go with 3 partitions, or just 1? or 2?\n\ndepesz\n\n-- \nhttp://www.depesz.com/ - nowy, lepszy depesz\n\nhii'm waiting for new server to arrive.for the server i will get hp msa1000, with 14 discs (72g, 15000 rpm).what kind of partitioning you suggest, to get maximum performance?for system things i will have separate discs, so whole array is only for postgresql.\ndata processing is oltp, but with large amounts of write operations.any hints?should i go with 3 partitions, or just 1? or 2?depesz-- \nhttp://www.depesz.com/ - nowy, lepszy depesz", "msg_date": "Wed, 14 Jun 2006 09:52:26 +0200", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": true, "msg_subject": "how to partition disks" }, { "msg_contents": "Hi Hubert,\n\nhubert depesz lubaczewski schrieb:\n> hi\n> i'm waiting for new server to arrive.\n> for the server i will get hp msa1000, with 14 discs (72g, 15000 rpm).\n> what kind of partitioning you suggest, to get maximum performance?\n> for system things i will have separate discs, so whole array is only for \n> postgresql.\n> \n> data processing is oltp, but with large amounts of write operations.\n> \n> any hints?\n> \n> should i go with 3 partitions, or just 1? or 2?\n> \n\nYou should configure your discs to RAID 10 volumes.\nYou should set up a separate volume for WAL.\nA volume for an additional table space may also useful.\n\nIn your case I would do 2 partitions:\n\n1. RAID 10 with 8 discs for general data\n2. RAID 10 with 4 discs for WAL\n(two disk as spare)\n\nYou may split the first volume in two volumes for a second table space \nif you Server doesn't have enough RAM and you have a high disk read I/O.\n\n\nCheers\nSven.\n", "msg_date": "Wed, 14 Jun 2006 10:26:43 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "On 6/14/06, Sven Geisler <[email protected]> wrote:\n>\n> You should configure your discs to RAID 10 volumes.\n> You should set up a separate volume for WAL.\n> A volume for an additional table space may also useful.\n> In your case I would do 2 partitions:\n> 1. RAID 10 with 8 discs for general data\n>\n\nraid 10 is of course not questionable. but are you sure that it will work\nfaster than for example:\n2 discs (raid 1) for xlog\n6 discs (raid 10) for tables\n6 discs (raid 10) for indices?\n\ndepesz\n\n-- \nhttp://www.depesz.com/ - nowy, lepszy depesz\n\nOn 6/14/06, Sven Geisler <[email protected]> wrote:\nYou should configure your discs to RAID 10 volumes.You should set up a separate volume for WAL.A volume for an additional table space may also useful.In your case I would do 2 partitions:1. RAID 10 with 8 discs for general data\nraid 10 is of course not questionable. 
but are you sure that it will work faster than for example:2 discs (raid 1) for xlog6 discs (raid 10) for tables6 discs (raid 10) for indices?\ndepesz-- http://www.depesz.com/ - nowy, lepszy depesz", "msg_date": "Wed, 14 Jun 2006 13:21:25 +0200", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "Hi Hupert,\n\nhubert depesz lubaczewski schrieb:\n> On 6/14/06, *Sven Geisler* <[email protected] \n> <mailto:[email protected]>> wrote:\n> You should configure your discs to RAID 10 volumes.\n> You should set up a separate volume for WAL.\n> A volume for an additional table space may also useful.\n> In your case I would do 2 partitions:\n> 1. RAID 10 with 8 discs for general data\n> \n> \n> raid 10 is of course not questionable. but are you sure that it will \n> work faster than for example:\n> 2 discs (raid 1) for xlog\n> 6 discs (raid 10) for tables\n> 6 discs (raid 10) for indices?\n> \n\nThis depends on your application. Do you have a lot of disc reads?\nAnyhow, I would put the xlog always to a RAID 10 volume because most of \nthe I/O for update and inserts is going to the xlog.\n\n4 discs xlog\n6 discs tables\n4 discs tables2\n\nThis should be better. You should distribute indices on separate spindle \nstacks to share the I/O. But again this depends on your application and \nyour server. How are the indices used? How large is your file system \ncache. What does PostgreSQL effectively read from disc.\n\nDon't forget to tune your postgresql.conf:\n<http://www.powerpostgresql.com/PerfList>\n<http://www.powerpostgresql.com/Downloads/terabytes_osc2005.pdf>\n\nCheers\nSven.\n", "msg_date": "Wed, 14 Jun 2006 13:41:31 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "> > raid 10 is of course not questionable. but are you sure that it will \n> > work faster than for example:\n> > 2 discs (raid 1) for xlog\n> > 6 discs (raid 10) for tables\n> > 6 discs (raid 10) for indices?\n> > \n> \n> This depends on your application. Do you have a lot of disc reads?\n> Anyhow, I would put the xlog always to a RAID 10 volume because most of \n> the I/O for update and inserts is going to the xlog.\n> \n> 4 discs xlog\n> 6 discs tables\n> 4 discs tables2\n\nI have a question in regards to I/O bandwidths of various raid configuration. Primary, does the\nabove suggested raid partitions imply that multiple (smaller) disk arrays have a potential for\nmore I/O bandwidth than a larger raid 10 array?\n\nRegards,\n\nRichard\n\n\n", "msg_date": "Wed, 14 Jun 2006 07:23:44 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "Hi Richard,\n\nRichard Broersma Jr schrieb:\n>> This depends on your application. Do you have a lot of disc reads?\n>> Anyhow, I would put the xlog always to a RAID 10 volume because most of \n>> the I/O for update and inserts is going to the xlog.\n>>\n>> 4 discs xlog\n>> 6 discs tables\n>> 4 discs tables2\n> \n> I have a question in regards to I/O bandwidths of various raid configuration. 
Primarily, do the\n> above suggested raid partitions imply that multiple (smaller) disk arrays have a potential for\n> more I/O bandwidth than a larger raid 10 array?\n\nYes.\nBecause the disc arms don't need to reposition as much as they would \nwith one large volume.\n\nFor example, you run two queries with two clients and each query needs \nto read some indices from disk. In this case it is more efficient to read \nfrom different volumes than from one large volume where the disc \narms have to jump.\n\nSven.\n", "msg_date": "Wed, 14 Jun 2006 16:32:23 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "On Wed, Jun 14, 2006 at 04:32:23PM +0200, Sven Geisler wrote:\n> Hi Richard,\n> \n> Richard Broersma Jr schrieb:\n> >>This depends on your application. Do you have a lot of disc reads?\n> >>Anyhow, I would put the xlog always to a RAID 10 volume because most of \n> >>the I/O for update and inserts is going to the xlog.\n> >>\n> >>4 discs xlog\n> >>6 discs tables\n> >>4 discs tables2\n> >\n> >I have a question in regards to I/O bandwidth of various raid \n> >configurations. Primarily, do the\n> >above suggested raid partitions imply that multiple (smaller) disk arrays \n> >have a potential for\n> >more I/O bandwidth than a larger raid 10 array?\n> \n> Yes.\n> Because the disc arms don't need to reposition as much as they would \n> with one large volume.\n> \n> For example, you run two queries with two clients and each query needs \n> to read some indices from disk. In this case it is more efficient to read \n> from different volumes than from one large volume where the disc \n> arms have to jump.\n\nBut keep in mind that all of that is only true if you have very good\nknowledge of how your data will be accessed. If you don't know that,\nyou'll almost certainly be better off just piling everything into one\nRAID array and letting the controller deal with it.\n\nAlso, if you have a good RAID controller that's battery-backed,\nseparating pg_xlog onto its own array is much less likely to be a win.\nThe reason you normally put pg_xlog on its own partition is because the\ndatabase has to fsync pg_xlog *at every single commit*. This means you\nabsolutely want that fsync to be as fast as possible. But with a good,\nbattery-backed controller, this no longer matters. The fsync is only\ngoing to push the data into the controller, and the controller will take\nthings from there. That means it's far less important to put pg_xlog on\nits own array. I actually asked about this recently and one person did\nreply that they'd done testing and found it was better to just put all\ntheir drives into one array so they weren't wasting bandwidth on the\npg_xlog drives.\n\nEven if you do decide to keep pg_xlog separate, a 4 drive RAID10 for\nthat is overkill. It will be next to impossible for you to generate\nenough WAL traffic to warrant it.\n\nYour best bet is to perform testing with your application. That's the\nonly way you'll truly find out what's going to work best. Short of\nthat, your best bet is to just pile all the drives together. If you do\ntesting, I'd start first with the effect of a separate pg_xlog. Only\nafter you have those results would I consider trying to do things like\nsplit indexes from tables, etc. \n\nBTW, you should consider reserving some of the drives in the array as\nhot spares.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 14 Jun 2006 10:16:39 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "On Wed, Jun 14, 2006 at 04:32:23PM +0200, Sven Geisler wrote:\n>For example, you run two queries with two clients and each query needs \n>to read some indices from disk. In this case it is more efficient to read \n>from different volumes than from one large volume where the disc \n>arms have to jump.\n\nHmm. Bad example, IMO. In the case of reading indices you're doing \nrandom IO and the heads will be jumping all over the place anyway. The \nlimiting factor there will be seeks/s, and you'll possibly get better \nresults with the larger array. (That case is fairly complicated to \nanalyze and depends very much on the data.) Where multiple arrays will be \nfaster is if you have a lot of sequential IO--in fact, a couple of cheap \ndisks can blow away a fairly expensive array for purely sequential \noperations since each disk can handle >60MB/s if it doesn't have to \nseek, whereas multiple sequential streams on the big array will cause \neach disk in the array to seek. (The array controller will try to hide \nthis with its cache; its cache size & software will determine how \nsuccessful it is at doing so.)\n\nMike Stone\n", "msg_date": "Fri, 16 Jun 2006 07:23:04 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" }, { "msg_contents": "hubert depesz lubaczewski writes:\n\n> On 6/14/06, Sven Geisler \n> raid 10 is of course not questionable. but are you sure that it will work \n> faster than for example:\n> 2 discs (raid 1) for xlog\n> 6 discs (raid 10) for tables\n> 6 discs (raid 10) for indices?\n\n\nCatching up on the performance list.\nAlthough this may not help the original poster, I wanted to share a recent \nexperience related to allocation of disks on a raid.\n\nWe just got a server with 16 disks.\nWe configured 12 on one raid controller and a second raid with 4, both using \nraid 10.\n\nArray 1\n10 x 7,200 rpm disks\n2 hot spares\n\nArray 2\n4 x 10,000 rpm disks\n\nOne of the things I always do with new machines is to run bonnie++ and get \nsome numbers.\n\nI expected the second raid to have better numbers than the first because the \ndisks were 10K drives (all SATA). To my surprise the larger raid had better \nnumbers.\n\nSo I figure the number of spindles on a single RAID does make a big \ndifference. In that regard, splitting 16 disks into 3 sets may help by putting data \nthat needs to be read/written on separate raids, but may degrade \nperformance by reducing the number of spindles in each raid. \n", "msg_date": "Fri, 01 Sep 2006 08:20:57 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to partition disks" } ]
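For readers following the thread above: the table/index split that depesz asks about is expressed with tablespaces in PostgreSQL 8.0 and later, while the WAL split is done at the filesystem level (for example by putting pg_xlog on its own volume before the cluster is started). Below is only a minimal sketch, assuming hypothetical mount points /vol/pg_data2 and /vol/pg_index that already exist, are empty, and are owned by the postgres OS user; the table and index names are illustrative, not from the thread.

-- create one tablespace per spindle set
CREATE TABLESPACE ts_tables2 LOCATION '/vol/pg_data2';
CREATE TABLESPACE ts_index LOCATION '/vol/pg_index';

-- place a heavily written table and its secondary index on different volumes
CREATE TABLE orders (
    order_id serial PRIMARY KEY,
    customer_id integer NOT NULL,
    placed_at timestamptz NOT NULL DEFAULT now()
) TABLESPACE ts_tables2;

CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE ts_index;

Whether such a split actually beats one large array depends on the access pattern, as the replies above point out, so it is worth benchmarking both layouts before committing to one.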
[ { "msg_contents": "Hi,\n\nhere's my problem:\n\n# explain analyze select * from mxstrpartsbg where szam =\nround(800000*random())::integer;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on mxstrpartsbg (cost=0.00..56875.04 rows=1 width=322) (actual\ntime=190.748..1271.664 rows=1 loops=1)\n Filter: (szam = (round((800000::double precision * random())))::integer)\n Total runtime: 1271.785 ms\n(3 rows)\n\n# explain analyze select * from mxstrpartsbg where szam = 671478; \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using mxstrpartsbg_pkey on mxstrpartsbg (cost=0.00..5.87\nrows=1 width=322) (actual time=71.642..71.644 rows=1 loops=1)\n Index Cond: (szam = 671478)\n Total runtime: 71.706 ms\n(3 rows)\n\nIs there a way to have PostgreSQL to pre-compute all the constants in the\nWHERE clause? It would be a huge performance gain. Thanks in advance.\n\nBest regards,\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\n", "msg_date": "Wed, 14 Jun 2006 12:53:54 +0200 (CEST)", "msg_from": "=?iso-8859-2?Q?B=F6sz=F6rm=E9nyi_Zolt=E1n?= <[email protected]>", "msg_from_op": true, "msg_subject": "Precomputed constants?" }, { "msg_contents": "On Jun 14 12:53, B�sz�rm�nyi Zolt�n wrote:\n> # explain analyze select * from mxstrpartsbg where szam =\n> round(800000*random())::integer;\n\nAFAIK, you can use sth like that:\n\nSELECT * FROM mxstrpartsbg\n WHERE szam = (SELECT round(800000*random())::integer OFFSET 0);\n\nThis will prevent calculation of round() for every row.\n\n\nRegards.\n", "msg_date": "Wed, 14 Jun 2006 14:07:13 +0300", "msg_from": "Volkan YAZICI <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "> On Jun 14 12:53, Bďż˝szďż˝rmďż˝nyi Zoltďż˝n wrote:\n>> # explain analyze select * from mxstrpartsbg where szam =\n>> round(800000*random())::integer;\n>\n> AFAIK, you can use sth like that:\n>\n> SELECT * FROM mxstrpartsbg\n> WHERE szam = (SELECT round(800000*random())::integer OFFSET 0);\n>\n> This will prevent calculation of round() for every row.\n>\n> Regards.\n\nThanks, It worked.\n\nOh, I see now. I makes sense, random() isn't a constant and\nit was computed for every row. Actually running the query produces\ndifferent results sets with 0, 1 or 2 rows.\n\nReplacing random() with a true constant gives me index scan\neven if it's hidden inside other function calls. E.g.:\n\n# explain analyze select * from mxstrpartsbg where szam =\nround('800000.71'::decimal(10,2))::integer;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using mxstrpartsbg_pkey on mxstrpartsbg (cost=0.00..5.87\nrows=1 width=322) (actual time=0.020..0.022 rows=1 loops=1)\n Index Cond: (szam = 800001)\n Total runtime: 0.082 ms\n(3 rows)\n\nBest regards,\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\n", "msg_date": "Wed, 14 Jun 2006 13:30:10 +0200 (CEST)", "msg_from": "=?iso-8859-2?Q?B=F6sz=F6rm=E9nyi_Zolt=E1n?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "On Wed, Jun 14, 2006 at 01:30:10PM +0200, B?sz?rm?nyi Zolt?n wrote:\n> Replacing random() with a true constant gives me index scan\n> even if it's hidden inside other function calls. 
E.g.:\n\nThe database has no choice but to compute random() for every row; it's\nmarked VOLATILE.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 14 Jun 2006 10:18:04 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "Jim C. Nasby �rta:\n> On Wed, Jun 14, 2006 at 01:30:10PM +0200, B?sz?rm?nyi Zolt?n wrote:\n> \n>> Replacing random() with a true constant gives me index scan\n>> even if it's hidden inside other function calls. E.g.:\n>> \n>\n> The database has no choice but to compute random() for every row; it's\n> marked VOLATILE.\n> \n\nI see now, docs about CREATE FUNCTION mentions random(),\ncurrval() and timeofday() as examples for VOLATILE.\nBut where in the documentation can I find this info about all\nbuilt-in functions? Thanks.\n\nBest regards,\nZolt�n B�sz�rm�nyi\n\n", "msg_date": "Thu, 15 Jun 2006 06:31:02 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "On Thu, Jun 15, 2006 at 06:31:02AM +0200, Zoltan Boszormenyi wrote:\n> Jim C. Nasby ?rta:\n> >On Wed, Jun 14, 2006 at 01:30:10PM +0200, B?sz?rm?nyi Zolt?n wrote:\n> > \n> >>Replacing random() with a true constant gives me index scan\n> >>even if it's hidden inside other function calls. E.g.:\n> >> \n> >\n> >The database has no choice but to compute random() for every row; it's\n> >marked VOLATILE.\n> > \n> \n> I see now, docs about CREATE FUNCTION mentions random(),\n> currval() and timeofday() as examples for VOLATILE.\n> But where in the documentation can I find this info about all\n> built-in functions? Thanks.\n\nNo, but you can query pg_proc for that info. The docs should have info\nabout that table.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 15 Jun 2006 10:59:29 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "Jim C. Nasby �rta:\n> On Thu, Jun 15, 2006 at 06:31:02AM +0200, Zoltan Boszormenyi wrote:\n> \n>> Jim C. Nasby ?rta:\n>> \n>>> On Wed, Jun 14, 2006 at 01:30:10PM +0200, B?sz?rm?nyi Zolt?n wrote:\n>>> \n>>> \n>>>> Replacing random() with a true constant gives me index scan\n>>>> even if it's hidden inside other function calls. E.g.:\n>>>> \n>>>> \n>>> The database has no choice but to compute random() for every row; it's\n>>> marked VOLATILE.\n>>> \n>>> \n>> I see now, docs about CREATE FUNCTION mentions random(),\n>> currval() and timeofday() as examples for VOLATILE.\n>> But where in the documentation can I find this info about all\n>> built-in functions? Thanks.\n>> \n>\n> No, but you can query pg_proc for that info. 
The docs should have info\n> about that table.\n> \n\nThanks!\n\n# select proname,provolatile from pg_proc where proname='random';\n proname | provolatile\n---------+-------------\n random | v\n(1 row)\n\n# select distinct provolatile from pg_proc;\n provolatile\n-------------\n i\n s\n v\n(3 rows)\n\nIf I get this right, IMMUTABLE/STABLE/VOLATILE\nare indicated with their initials.\n\nBest regards,\nZoltán Böszörményi\n\n", "msg_date": "Thu, 15 Jun 2006 20:19:10 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" }, { "msg_contents": "On Jun 15, 2006, at 1:19 PM, Zoltan Boszormenyi wrote:\n> # select distinct provolatile from pg_proc;\n> provolatile\n> -------------\n> i\n> s\n> v\n> (3 rows)\n>\n> If I get this right, IMMUTABLE/STABLE/VOLATILE\n> are indicated with their initials.\n\nThat's probably correct. If the docs don't specify this then the code \nwould. Or you could just create 3 test functions and see what you end \nup with, but I can't see it being any different from your guess.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Sat, 17 Jun 2006 14:25:14 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precomputed constants?" } ]
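A small follow-up to the pg_proc lookup above: the single-letter provolatile values can be decoded into readable labels directly in SQL. This is only a sketch against the 8.x system catalogs and assumes nothing beyond the built-in functions named in the IN list:

SELECT proname,
       CASE provolatile
            WHEN 'i' THEN 'immutable'
            WHEN 's' THEN 'stable'
            WHEN 'v' THEN 'volatile'
       END AS volatility
FROM pg_proc
WHERE proname IN ('random', 'now', 'timeofday', 'round')
ORDER BY proname;

random() and timeofday() come back volatile, now() is stable, and round() is immutable, which matches the planner behaviour discussed in the thread: only the volatile ones force per-row evaluation in a WHERE clause.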
[ { "msg_contents": "-- this is the third time I've tried sending this and I never saw it get \nthrough to the list. Sorry if multiple copies show up.\n\nHi all,\n\nI've been lurking using the web archives for a while and haven't found \nan answer that seems to answer my questions about pg_dump.\n\nWe have a 206GB data warehouse running on version 8.0.3. The server is \nsomewhat underpowered in terms of CPU: (1) 2.8 GHz Xeon 4GB Ram and a \nsingle HBA to our SAN (IBM DS4300). We in the process of migrating to a \nnew server that we've repurposed from our production OLTP database (8) \n2.0 GHz Xeon, 16GB Ram and dual HBAs to the same SAN running version 8.1.\n\nIndependant of that move, we still need to get by on the old system and \nI'm concerned that even on the new system, pg_dump will still perform \npoorly. I can't do a full test because we're also taking advantage of \nthe table partitioning in 8.1 so we're not doing a dump and restore.\n\nWe backup the database using:\n\npg_dump -Fc -cv ${CURDB} > ${BACKDIR}/${CURDB}-${DATE}.bak\n\nThere a three different LUNs allocated to the old warehouse on the SAN - \ndata, wal and a dump area for the backups. The SAN has two controllers \n(only 128MB of cache per) and the data is on one controller while the \nWAL and dump area are on the other. Still a single HBA though.\n\nCreating the compressed backup of this database takes 12 hours. We start \nat 6PM and it's done a little after 1AM, just in time for the next day's \nload. The load itself takes about 5 hours.\n\nI've watched the backup process and I/O is not a problem. Memory isn't a \nproblem either. It seems that we're CPU bound but NOT in I/O wait. The \nserver is a dedicated PGSQL box.\n\nHere are our settings from the conf file:\n\nmaintenance_work_mem = 524288\nwork_mem = 1048576 ( I know this is high but you should see some of our \nsorts and aggregates)\nshared_buffers = 50000\neffective_cache_size = 450000\nwal_buffers = 64\ncheckpoint_segments = 256\ncheckpoint_timeout = 3600\n\nWe're inserting around 3mil rows a night if you count staging, info, dim \nand fact tables. The vacuum issue is a whole other problem but right now \nI'm concerned about just the backup on the current hardware.\n\nI've got some space to burn so I could go to an uncompressed backup and \ncompress it later during the day.\n\nIf there are any tips anyone can provide I would greatly appreciate it. \nI know that the COPY performance was bumped up in 8.1 but I'm stuck on \nthis 8.0 box for a while longer.\n\nThanks,\nJohn E. Vincent\n", "msg_date": "Wed, 14 Jun 2006 10:47:01 -0400", "msg_from": "\"John E. Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "\"John E. Vincent\" <[email protected]> writes:\n> I've watched the backup process and I/O is not a problem. Memory isn't a \n> problem either. It seems that we're CPU bound but NOT in I/O wait.\n\nIs it the pg_dump process, or the connected backend, that's chewing the\nbulk of the CPU time? 
(This should be pretty obvious in \"top\".)\n\nIf it's the pg_dump process, the bulk of the CPU time is likely going\ninto compression --- you might consider backing off the compression\nlevel, perhaps --compress=1 or even 0 if size of the dump file isn't\na big concern.\n\nAnother possibility if your LAN is reasonably fast is to run pg_dump on\na different machine, so that you can put two CPUs to work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2006 11:51:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0 " }, { "msg_contents": "On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:\n> -- this is the third time I've tried sending this and I never saw it get \n> through to the list. Sorry if multiple copies show up.\n> \n> Hi all,\n\nBUNCHES SNIPPED\n\n> work_mem = 1048576 ( I know this is high but you should see some of our \n> sorts and aggregates)\n\nUmmm. That's REALLY high. You might want to consider lowering the\nglobal value here, and then crank it up on a case by case basis, like\nduring nighttime report generation. Just one or two queries could\ntheoretically run your machine out of memory right now. Just put a \"set\nwork_mem=1000000\" in your script before the big query runs.\n\n> We're inserting around 3mil rows a night if you count staging, info, dim \n> and fact tables. The vacuum issue is a whole other problem but right now \n> I'm concerned about just the backup on the current hardware.\n> \n> I've got some space to burn so I could go to an uncompressed backup and \n> compress it later during the day.\n\nThat's exactly what we do. We just do a normal backup, and have a\nscript that gzips anything in the backup directory that doesn't end in\n.gz... If you've got space to burn, as you say, then use it at least a\nfew days to see how it affects backup speeds.\n\nSeeing as how you're CPU bound, most likely the problem is just the\ncompressed backup.\n", "msg_date": "Wed, 14 Jun 2006 11:44:10 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On 6/14/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:\n> > -- this is the third time I've tried sending this and I never saw it get\n> > through to the list. Sorry if multiple copies show up.\n> >\n> > Hi all,\n>\n> BUNCHES SNIPPED\n>\n> > work_mem = 1048576 ( I know this is high but you should see some of our\n> > sorts and aggregates)\n>\n> Ummm. That's REALLY high. You might want to consider lowering the\n> global value here, and then crank it up on a case by case basis, like\n> during nighttime report generation. Just one or two queries could\n> theoretically run your machine out of memory right now. Just put a \"set\n> work_mem=1000000\" in your script before the big query runs.\n\n\n\nI know it is but that's what we need for some of our queries. Our ETL tool\n(informatica) and BI tool (actuate) won't let us set those things as part of\nour jobs. We need it for those purposes. We have some really nasty queries\nthat will be fixed in our new server.\n\nE.G. we have a table called loan_account_agg_fact that has 200+ million rows\nand it contains every possible combination of late status for a customer\naccount (i.e. 1 day late, 2 day late, 3 day late) so it gets inserted for\nnew customers but updated for existing records as part of our warehouse\nload. 
Part of the new layout is combining late ranges so instead of number\nof days we have a range of days (i.e. 1-15,16-30....). Even with work_mem\nthat large, the load of that loan_account_agg_fact table creates over 3GB of\ntemp tables!\n\n\n> That's exactly what we do. We just do a normal backup, and have a\n> script that gzips anything in the backup directory that doesn't end in\n> .gz... If you've got space to burn, as you say, then use it at least a\n> few days to see how it affects backup speeds.\n>\n> Seeing as how you're CPU bound, most likely the problem is just the\n> compressed backup.\n>\n\nI'm starting to think the same thing. I'll see how this COPY I'm doing of\nthe single largest table does right now and make some judgement based on\nthat.\n\n-- \nJohn E. Vincent\n\nOn 6/14/06, Scott Marlowe <[email protected]> wrote:\nOn Wed, 2006-06-14 at 09:47, John E. Vincent wrote:> -- this is the third time I've tried sending this and I never saw it get> through to the list. Sorry if multiple copies show up.>> Hi all,\nBUNCHES SNIPPED> work_mem = 1048576 ( I know this is high but you should see some of our> sorts and aggregates)Ummm.  That's REALLY high.  You might want to consider lowering theglobal value here, and then crank it up on a case by case basis, like\nduring nighttime report generation.  Just one or two queries couldtheoretically run your machine out of memory right now.  Just put a \"setwork_mem=1000000\" in your script before the big query runs.\nI know it is but that's what we need for some of our queries. Our ETL tool (informatica) and BI tool (actuate) won't let us set those things as part of our jobs. We need it for those purposes. We have some really nasty queries that will be fixed in our new server.\nE.G. we have a table called loan_account_agg_fact that has 200+ million rows and it contains every possible combination of late status for a customer account (i.e. 1 day late, 2 day late, 3 day late) so it gets inserted for new customers but updated for existing records as part of our warehouse load. Part of the new layout is combining late ranges so instead of number of days we have a range of days (\ni.e. 1-15,16-30....). Even with work_mem that large, the load of that loan_account_agg_fact table creates over 3GB of temp tables!\nThat's exactly what we do.  We just do a normal backup, and have ascript that gzips anything in the backup directory that doesn't end in.gz...  If you've got space to burn, as you say, then use it at least a\nfew days to see how it affects backup speeds.Seeing as how you're CPU bound, most likely the problem is just thecompressed backup.I'm starting to think the same thing. I'll see how this COPY I'm doing of the single largest table does right now and make some judgement based on that.\n-- John E. Vincent", "msg_date": "Wed, 14 Jun 2006 13:04:31 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On Wed, June 14, 2006 1:04 pm, John Vincent wrote:\n\n> I know it is but that's what we need for some of our queries. Our ETL\n> tool (informatica) and BI tool (actuate) won't let us set those things as\n> part of our jobs. We need it for those purposes. 
We have some really nasty\n> queries that will be fixed in our new server.\n\nYou could modify pgpool to insert the necessary set commands and point the\ntools at pgpool.\n\n-M\n\n", "msg_date": "Wed, 14 Jun 2006 13:29:29 -0400 (EDT)", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "Out of curiosity, does anyone have any idea what the ratio of actual\ndatasize to backup size is if I use the custom format with -Z 0 compression\nor the tar format?\n\nThanks.\n\nOn 6/14/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:\n> > -- this is the third time I've tried sending this and I never saw it get\n> > through to the list. Sorry if multiple copies show up.\n> >\n> > Hi all,\n>\n> BUNCHES SNIPPED\n>\n> > work_mem = 1048576 ( I know this is high but you should see some of our\n> > sorts and aggregates)\n>\n> Ummm. That's REALLY high. You might want to consider lowering the\n> global value here, and then crank it up on a case by case basis, like\n> during nighttime report generation. Just one or two queries could\n> theoretically run your machine out of memory right now. Just put a \"set\n> work_mem=1000000\" in your script before the big query runs.\n>\n> > We're inserting around 3mil rows a night if you count staging, info, dim\n> > and fact tables. The vacuum issue is a whole other problem but right now\n> > I'm concerned about just the backup on the current hardware.\n> >\n> > I've got some space to burn so I could go to an uncompressed backup and\n> > compress it later during the day.\n>\n> That's exactly what we do. We just do a normal backup, and have a\n> script that gzips anything in the backup directory that doesn't end in\n> .gz... If you've got space to burn, as you say, then use it at least a\n> few days to see how it affects backup speeds.\n>\n> Seeing as how you're CPU bound, most likely the problem is just the\n> compressed backup.\n>\n\n\n\n-- \nJohn E. Vincent\[email protected]\n\nOut of curiosity, does anyone have any idea what the ratio of actual datasize to backup size is if I use the custom format with -Z 0 compression or the tar format? Thanks.On 6/14/06, \nScott Marlowe <[email protected]> wrote:\nOn Wed, 2006-06-14 at 09:47, John E. Vincent wrote:> -- this is the third time I've tried sending this and I never saw it get> through to the list. Sorry if multiple copies show up.>> Hi all,\nBUNCHES SNIPPED> work_mem = 1048576 ( I know this is high but you should see some of our> sorts and aggregates)Ummm.  That's REALLY high.  You might want to consider lowering theglobal value here, and then crank it up on a case by case basis, like\nduring nighttime report generation.  Just one or two queries couldtheoretically run your machine out of memory right now.  Just put a \"setwork_mem=1000000\" in your script before the big query runs.\n> We're inserting around 3mil rows a night if you count staging, info, dim> and fact tables. The vacuum issue is a whole other problem but right now> I'm concerned about just the backup on the current hardware.\n>> I've got some space to burn so I could go to an uncompressed backup and> compress it later during the day.That's exactly what we do.  We just do a normal backup, and have ascript that gzips anything in the backup directory that doesn't end in\n.gz...  
If you've got space to burn, as you say, then use it at least afew days to see how it affects backup speeds.Seeing as how you're CPU bound, most likely the problem is just thecompressed backup.\n-- John E. [email protected]", "msg_date": "Wed, 14 Jun 2006 14:11:19 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On Wed, 2006-06-14 at 12:04, John Vincent wrote:\n> \n> On 6/14/06, Scott Marlowe <[email protected]> wrote:\n> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:\n> > -- this is the third time I've tried sending this and I\n> never saw it get\n> > through to the list. Sorry if multiple copies show up.\n> >\n> > Hi all,\n> \n> BUNCHES SNIPPED\n> \n> > work_mem = 1048576 ( I know this is high but you should see\n> some of our\n> > sorts and aggregates)\n> \n> Ummm. That's REALLY high. You might want to consider\n> lowering the\n> global value here, and then crank it up on a case by case\n> basis, like \n> during nighttime report generation. Just one or two queries\n> could\n> theoretically run your machine out of memory right now. Just\n> put a \"set\n> work_mem=1000000\" in your script before the big query runs.\n> \n> \n> I know it is but that's what we need for some of our queries. Our ETL\n> tool (informatica) and BI tool (actuate) won't let us set those things\n> as part of our jobs. We need it for those purposes. We have some\n> really nasty queries that will be fixed in our new server. \n\nDescription of \"Queries gone wild\" redacted. hehe.\n\nYeah, I've seen those kinds of queries before too. you might be able to\nlimit your exposure by using alter user:\n\nalter user userwhoneedslotsofworkmem set work_mem=1000000;\n\nand then only that user will have that big of a default. You could even\nmake it so that only queries that need that much log in as that user,\nand all other queries log in as other folks. Just a thought. I just\nget REAL nervous seeing a production machine with a work_mem set that\nhigh.\n\n", "msg_date": "Wed, 14 Jun 2006 15:11:37 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On 6/14/06, Scott Marlowe <[email protected]> wrote:\n>\n>\n> Description of \"Queries gone wild\" redacted. hehe.\n>\n> Yeah, I've seen those kinds of queries before too. you might be able to\n> limit your exposure by using alter user:\n>\n> alter user userwhoneedslotsofworkmem set work_mem=1000000;\n\n\nIs this applicable on 8.0? We were actually LOOKING for a governor of some\nsort for these queries. And something that is not explicitly stated, is\nthat allocated up front or is that just a ceiling?\n\nand then only that user will have that big of a default. You could even\n> make it so that only queries that need that much log in as that user,\n> and all other queries log in as other folks. Just a thought. I just\n> get REAL nervous seeing a production machine with a work_mem set that\n> high.\n\n\nWhich is actually how it's configured. We have a dedicated user connecting\nfrom Actuate. The reports developers use thier own logins when developing\nnew reports. Only when they get published do they convert to the Actuate\nuser.\n\n\n\n\n-- \nJohn E. Vincent\[email protected]\n\nOn 6/14/06, Scott Marlowe <[email protected]> wrote:\nDescription of \"Queries gone wild\" redacted.  hehe.Yeah, I've seen those kinds of queries before too.  
you might be able tolimit your exposure by using alter user:alter user userwhoneedslotsofworkmem set work_mem=1000000;\nIs this applicable on  8.0? We were actually LOOKING for a governor of some sort for these queries.  And something that is not explicitly stated, is that allocated up front or is that just a ceiling?\nand then only that user will have that big of a default.  You could evenmake it so that only queries that need that much log in as that user,\nand all other queries log in as other folks.  Just a thought.  I justget REAL nervous seeing a production machine with a work_mem set thathigh.Which is actually how it's configured. We have a dedicated user connecting from  Actuate. The reports developers use thier own logins when developing new reports. Only when they get published do they convert to the Actuate user.\n-- John E. [email protected]", "msg_date": "Wed, 14 Jun 2006 16:55:00 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "How long does gzip take to compress this backup?\n\nOn Wed, 2006-06-14 at 15:59, John Vincent wrote:\n> Okay I did another test dumping using the uncompressed backup on the\n> system unloaded and the time dropped down to 8m for the backup.\n> There's still the size issue to contend with but as I said, I've got a\n> fair bit of space left on the SAN to work with. \n> \n> On 6/14/06, John Vincent <[email protected]> wrote:\n> Well I did a test to answer my own question:\n> \n> -rw-r--r-- 1 postgres postgres 167M Jun 14 01:43\n> claDW_PGSQL-20060613170001.bak\n> -rw-r--r-- 1 root root 2.4G Jun 14 14:45\n> claDW_PGSQL.test.bak \n> \n> the claDW_PGSQL database is a subset of the data in the main\n> schema that I'm dealing with. \n> \n> I did several tests using -Fc -Z0 and a straight pg_dump with\n> no format option.\n> \n> The file size is about 1300% larger and takes just as long to\n> dump even for that small database. \n> \n> Interestingly enough gzip compresses about 1M smaller with no\n> gzip options.\n> \n> I don't know that the uncompressed is really helping much. I'm\n> going to run another query when there's no other users on the\n> system and see how it goes. \n> \n> \n> \n> -- \n> John E. Vincent\n", "msg_date": "Wed, 14 Jun 2006 16:03:35 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:\n> Out of curiosity, does anyone have any idea what the ratio of actual\n> datasize to backup size is if I use the custom format with -Z 0 compression\n> or the tar format?\n\n-Z 0 should mean no compression.\n\nSomething you can try is piping the output of pg_dump to gzip/bzip2. On\nsome OSes, that will let you utilize 1 CPU for just the compression. If\nyou wanted to get even fancier, there is a parallelized version of bzip2\nout there, which should let you use all your CPUs.\n\nOr if you don't care about disk IO bandwidth, just compress after the\nfact (though, that could just put you in a situation where pg_dump\nbecomes bandwidth constrained).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 14 Jun 2006 16:13:44 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "time gzip -6 claDW_PGSQL.test.bak\n\nreal 3m4.360s\nuser 1m22.090s\nsys 0m6.050s\n\nWhich is still less time than it would take to do a compressed pg_dump.\n\nOn 6/14/06, Scott Marlowe <[email protected]> wrote:\n>\n> How long does gzip take to compress this backup?\n>\n> On Wed, 2006-06-14 at 15:59, John Vincent wrote:\n> > Okay I did another test dumping using the uncompressed backup on the\n> > system unloaded and the time dropped down to 8m for the backup.\n> > There's still the size issue to contend with but as I said, I've got a\n> > fair bit of space left on the SAN to work with.\n>\n\ntime gzip -6 claDW_PGSQL.test.bakreal    3m4.360suser    1m22.090ssys     0m6.050sWhich is still less time than it would take to do a compressed pg_dump. On 6/14/06, \nScott Marlowe <[email protected]> wrote:\nHow long does gzip take to compress this backup?On Wed, 2006-06-14 at 15:59, John Vincent wrote:> Okay I did another test dumping using the uncompressed backup on the> system unloaded and the time dropped down to 8m for the backup.\n> There's still the size issue to contend with but as I said, I've got a> fair bit of space left on the SAN to work with.", "msg_date": "Wed, 14 Jun 2006 17:16:19 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On 6/14/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:\n> > Out of curiosity, does anyone have any idea what the ratio of actual\n> > datasize to backup size is if I use the custom format with -Z 0\n> compression\n> > or the tar format?\n>\n> -Z 0 should mean no compression.\n\n\nBut the custom format is still a binary backup, no?\n\nSomething you can try is piping the output of pg_dump to gzip/bzip2. On\n> some OSes, that will let you utilize 1 CPU for just the compression. If\n> you wanted to get even fancier, there is a parallelized version of bzip2\n> out there, which should let you use all your CPUs.\n>\n> Or if you don't care about disk IO bandwidth, just compress after the\n> fact (though, that could just put you in a situation where pg_dump\n> becomes bandwidth constrained).\n\n\nUnfortunately if we working with our current source box, the 1 CPU is\nalready the bottleneck in regards to compression. If I run the pg_dump from\nthe remote server though, I might be okay.\n\n--\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\nOn 6/14/06, Jim C. Nasby <[email protected]> wrote:\nOn Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:> Out of curiosity, does anyone have any idea what the ratio of actual> datasize to backup size is if I use the custom format with -Z 0 compression\n> or the tar format?-Z 0 should mean no compression.But the custom format is still a binary backup, no?\nSomething you can try is piping the output of pg_dump to gzip/bzip2. Onsome OSes, that will let you utilize 1 CPU for just the compression. 
Ifyou wanted to get even fancier, there is a parallelized version of bzip2\nout there, which should let you use all your CPUs.Or if you don't care about disk IO bandwidth, just compress after thefact (though, that could just put you in a situation where pg_dumpbecomes bandwidth constrained).\nUnfortunately if we working with our current source box, the 1 CPU is already the bottleneck in regards to compression. If I run the pg_dump from the remote server though, I might be okay.\n--Jim C. Nasby, Sr. Engineering Consultant      [email protected]\nPervasive Software      http://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461", "msg_date": "Wed, 14 Jun 2006 17:18:14 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "On Wed, Jun 14, 2006 at 05:18:14PM -0400, John Vincent wrote:\n> On 6/14/06, Jim C. Nasby <[email protected]> wrote:\n> >\n> >On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:\n> >> Out of curiosity, does anyone have any idea what the ratio of actual\n> >> datasize to backup size is if I use the custom format with -Z 0\n> >compression\n> >> or the tar format?\n> >\n> >-Z 0 should mean no compression.\n> \n> \n> But the custom format is still a binary backup, no?\n \nI fail to see what that has to do with anything...\n\n> Something you can try is piping the output of pg_dump to gzip/bzip2. On\n> >some OSes, that will let you utilize 1 CPU for just the compression. If\n> >you wanted to get even fancier, there is a parallelized version of bzip2\n> >out there, which should let you use all your CPUs.\n> >\n> >Or if you don't care about disk IO bandwidth, just compress after the\n> >fact (though, that could just put you in a situation where pg_dump\n> >becomes bandwidth constrained).\n> \n> \n> Unfortunately if we working with our current source box, the 1 CPU is\n> already the bottleneck in regards to compression. If I run the pg_dump from\n> the remote server though, I might be okay.\n\nOh, right, forgot about that. Yeah, your best bet could be to use an\nexternal machine for the dump.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 14 Jun 2006 16:25:08 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" }, { "msg_contents": "Just couple of suggestions:\n\nI think on the current server you're pretty much hosed since you are\nlook like you are cpu bottlenecked. You probably should take a good\nlook at PITR and see if that meets your requirements. Also you\ndefinately want to go to 8.1...it's faster, and every bit helps.\n\nGood luck with the new IBM server ;)\n\nmerlin\n", "msg_date": "Thu, 15 Jun 2006 21:03:56 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of pg_dump on PGSQL 8.0" } ]
[ { "msg_contents": "________________________________\n\n\tFrom: Scott Marlowe [mailto:[email protected]] \n\tSent: 14 June 2006 18:04\n\tTo: Dave Page\n\tCc: Joshua D. Drake; [email protected];\[email protected]\n\tSubject: RE: [PERFORM] Which processor runs better for\nPostgresql?\n\t\n\t\n\n\tOn Tue, 2006-06-13 at 15:11, Dave Page wrote:\n\t> \n\t> > -----Original Message-----\n\t\n\t> And how old are the 2600's now?\n\t>\n\t> Anyhoo, I'm not saying the current machines are excellent\nperformers or\n\t> anything, but there are good business reasons to run them if\nyou don't\n\t> need to squeeze out every last pony.\n\t\n\tJust thought I'd point you to Dell's forums.\n\t\n\t\nhttp://forums.us.dell.com/supportforums/board?board.id=pes_linux&page=1\n<http://forums.us.dell.com/supportforums/board?board.id=pes_linux&page=1\n> \n\t\n\twherein you'll find plenty of folks who have problems with\nfreezing RAID\n\tcontrollers with 28xx and 18xx machines. \n\t \n\n<shrug>Never had any such problems in the dozen or so machines we run\n(about a 50-50 split of Linux to Windows). \n\nRegards, Dave\n\n\nRE: [PERFORM] Which processor runs better for Postgresql?\n\n\n\n \n\n\n\nFrom: Scott Marlowe \n [mailto:[email protected]] Sent: 14 June 2006 \n 18:04To: Dave PageCc: Joshua D. Drake; \n [email protected]; [email protected]: RE: \n [PERFORM] Which processor runs better for Postgresql?\n\nOn Tue, 2006-06-13 at 15:11, Dave Page \n wrote:> > > -----Original Message-----> And \n how old are the 2600's now?>> Anyhoo, I'm not saying the current \n machines are excellent performers or> anything, but there are good \n business reasons to run them if you don't> need to squeeze out every \n last pony.Just thought I'd point you to Dell's \n forums.http://forums.us.dell.com/supportforums/board?board.id=pes_linux&page=1wherein you'll find plenty of folks who have problems with freezing \n RAIDcontrollers with 28xx and 18xx machines.  \n<shrug>Never had any such problems in the dozen or so \nmachines we run (about a 50-50 split of Linux to \nWindows). \nRegards, Dave", "msg_date": "Wed, 14 Jun 2006 19:43:41 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which processor runs better for Postgresql?" }, { "msg_contents": "On Wed, 2006-06-14 at 13:43, Dave Page wrote:\n> \n\n> \n> Just thought I'd point you to Dell's forums.\n> \n> http://forums.us.dell.com/supportforums/board?board.id=pes_linux&page=1\n> \n> wherein you'll find plenty of folks who have problems with\n> freezing RAID\n> controllers with 28xx and 18xx machines. \n> \n> \n> <shrug>Never had any such problems in the dozen or so machines we run\n> (about a 50-50 split of Linux to Windows). \n> \n> Regards, Dave\n> \n\nYeah, We've got a mix of 2650 and 2850s, and our 2850s have been rock\nsolid stable, unlike the 2650s. I was actually kinda surprised to see\nhow many people have problems with the 2850s.\n\nApparently, the 2850 mobos have a built in RAID that's pretty stable\n(it's got a PERC number I can't remembeR), but ordering them with an add\non Perc RAID controller appears to make them somewhat unstable as well.\n\nOn recommendation I've seen repeatedly is to use the --noapic option at\nboot time.\n\nJust FYI\n", "msg_date": "Wed, 14 Jun 2006 14:52:06 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which processor runs better for Postgresql?" } ]
[ { "msg_contents": "I have a box with an app and postgresql on it. Hardware includes with 2 2.8 Ghz xeons 512KB cache, 4 GB of memory, 6 scsi disk in a software \nraid 5 on a trustix 2.2 with a 2.6.15.3 kernel. The data and indexes are on the raid array while the tx log is on disk \nwith the OS. All is well.\n\nThe one application executes one transaction every 60 seconds or so. The transaction can range from tiny \nto relatively large. Maybe 30-70k inserts, 60-100k updates... nothing too heavy, take about 8-12 seconds \nto finish the the entire update in the worst case. The application is using the latest jdbc.... I am using \npreparedStatements with addBatch/executebatch/clearBatch to send statements in batches of 10 thousand... \n(is that high?)\n\nThe box itself is a little over subscribed for memory which is causing us to swap a bit... As the \napplication runs, I notice the postgres process which handles this particular app connection grows in memory seemingly \nuncrontrollably until kaboom. Once the kernel kills off enough processes and the system settles, I see the postgres process is at 1.9GB \nof res memory and 77MB of shared memory. This challenges a number of assumptions I have made in the last while and raises a \nfew questions... BTW, I am assuming this is not a memory leak b/c the same install of our software on a box \nwith 8GB of memory and no swap being used has no unexplained growth in the memory... it is perfectly healthy \nand quite performant.\n\nAnyway, due to errors in the transaction, it is rolledback afterwhich the postgres process remains at 901MB of \nresident memory and 91MB of of shared memory.\n\n27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar qradar ::ffff:x.x.x.x(51149) idle\n\nThere are a few things I would like to understand. \n\n- What in the postgres will grow at an uncontrolled rate when the system is under heavy load or the transaction \n is larger... there must be something not governed by the shared memory or other configuration in postgresql.conf. \n It seems like, once we start hitting swap, postgres grows in memory resulting in more swapping... until applications \n start getting killed.\n- when the transaction was rolled back why did the process hold onto the 901MB of memory? \n- when is a transaction too big? is this determined by the configuration and performance of wal_buffers and wal log or is there \n house cleaning which MUST be done at commit/rollback to avoid siutations like this thus indicating there is an upper bound.\n\nI have been configuring postgres from tidbits I collected reading this list in the last few months.... \nnot sure if what I have is totally right for the work load, but when I have adequate memory and avoid swap, we are more than \nhappy with performance. 
Configuration which is not below is just the default.\n\nshared_buffers = 32767\nwork_mem = 20480\nmaintenance_work_mem = 32768\nmax_fsm_pages = 4024000\nmax_fsm_relations = 2000\nfsync = false\nwal_sync_method = fsync\nwal_buffers = 4096\ncheckpoint_segments = 32\ncheckpoint_timeout = 1200\ncheckpoint_warning = 60\ncommit_delay = 5000\ncommit_siblings = 5\neffective_cache_size = 175000\nrandom_page_cost = 2\nautovacuum = true\nautovacuum_naptime = 60\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_vacuum_cost_delay=100\nautovacuum_vacuum_cost_limit=100\ndefault_statistics_target = 40\n\nIs there anything here which looks weird or mis configured? I am just starting to play with the bg writer configuration so I did not include.\ntypically, there is little or no iowait... and no reason to think there is something miconfigured... from what I have seen.\n\nIn one transaction i have seen as many as 5 checkpoint_segments be created/used so I was considering increasing wal_buffers to 8192 from 4096 \ngiven as many as 4 segments in memory/cache at once... need to test this though ....\n\nAnyone have any thoughts on what could have caused the bloat? \n\nthanks\n", "msg_date": "Wed, 14 Jun 2006 16:18:37 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres consuming way too much memory???" }, { "msg_contents": "\"jody brownell\" <[email protected]> writes:\n> 27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar qradar ::ffff:x.x.x.x(51149) idle\n\nThis looks like a memory leak, but you haven't provided enough info to\nlet someone else reproduce it. Can you log what your application is\ndoing and extract a test case? What PG version is this, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jun 2006 16:03:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory??? " }, { "msg_contents": "Sorry about that, I was in a slight panic :) \n\nI am using postgresql 8.1.4. I will install 8.1.3 and see if the same behavior exists.. we \nmay have started seeing this in 8.1.3, but I dont think before. I will check some stability \nmachines for similar bloating.\n\nThe query (calling a store proc) which is always running when the spiral begins is below. It simply performs \nbulk linking of two objects. Depending on what the application is detecting, it could be called to insert\n40 - 50k records, 500 at a time. When the box is healthy, this is a 200 - 500 ms op, but this starts to become \na 20000+ ms op. 
I guess this makes sense considering the paging.....\n\nJun 14 12:50:18 xxx postgres[5649]: [3-1] LOG: duration: 20117.984 ms statement: EXECUTE <unnamed> [PREPARE: select * from link_attacker_targets($1, $2, $3) as\n\nCREATE OR REPLACE FUNCTION link_attacker_targets (p_attacker bigint, p_targets varchar, p_targets_size integer) \n\treturns bigint[] as\n$body$\nDECLARE\n v_targets bigint[];\n v_target bigint;\n v_returns bigint[];\n v_returns_size integer := 0;\nBEGIN\n v_targets := convert_string2bigint_array (p_targets, p_targets_size);\n\n FOR i IN 1..p_targets_size LOOP\n \tv_target := v_targets[i];\n\n\tBEGIN\n INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n v_returns_size := v_returns_size + 1;\n v_returns[v_returns_size] := v_target;\n \n\tEXCEPTION WHEN unique_violation THEN\n\t\t-- do nothing... app cache may be out of date.\n\tEND;\n END LOOP;\n RETURN v_returns;\nEND;\n$body$\nLANGUAGE plpgsql VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;\n\nOn Wednesday 14 June 2006 17:03, you wrote:\n> \"jody brownell\" <[email protected]> writes:\n> > 27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar qradar ::ffff:x.x.x.x(51149) idle\n> \n> This looks like a memory leak, but you haven't provided enough info to\n> let someone else reproduce it. Can you log what your application is\n> doing and extract a test case? What PG version is this, anyway?\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n", "msg_date": "Thu, 15 Jun 2006 09:01:18 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres consuming way too much memory???" }, { "msg_contents": "The last version of postgres we had in production was 8.1.1 actually, not 8.1.3. \n\nSo far, on my stability box and older production stability boxes I dont see the same behavior. \n\nI will install 8.1.1 on these boxes and see what I see.\n\nOn Thursday 15 June 2006 09:01, jody brownell wrote:\n> Sorry about that, I was in a slight panic :) \n> \n> I am using postgresql 8.1.4. I will install 8.1.3 and see if the same behavior exists.. we \n> may have started seeing this in 8.1.3, but I dont think before. I will check some stability \n> machines for similar bloating.\n> \n> The query (calling a store proc) which is always running when the spiral begins is below. It simply performs \n> bulk linking of two objects. Depending on what the application is detecting, it could be called to insert\n> 40 - 50k records, 500 at a time. When the box is healthy, this is a 200 - 500 ms op, but this starts to become \n> a 20000+ ms op. I guess this makes sense considering the paging.....\n> \n> Jun 14 12:50:18 xxx postgres[5649]: [3-1] LOG: duration: 20117.984 ms statement: EXECUTE <unnamed> [PREPARE: select * from link_attacker_targets($1, $2, $3) as\n> \n> CREATE OR REPLACE FUNCTION link_attacker_targets (p_attacker bigint, p_targets varchar, p_targets_size integer) \n> \treturns bigint[] as\n> $body$\n> DECLARE\n> v_targets bigint[];\n> v_target bigint;\n> v_returns bigint[];\n> v_returns_size integer := 0;\n> BEGIN\n> v_targets := convert_string2bigint_array (p_targets, p_targets_size);\n> \n> FOR i IN 1..p_targets_size LOOP\n> \tv_target := v_targets[i];\n> \n> \tBEGIN\n> INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n> v_returns_size := v_returns_size + 1;\n> v_returns[v_returns_size] := v_target;\n> \n> \tEXCEPTION WHEN unique_violation THEN\n> \t\t-- do nothing... 
app cache may be out of date.\n> \tEND;\n> END LOOP;\n> RETURN v_returns;\n> END;\n> $body$\n> LANGUAGE plpgsql VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;\n> \n> On Wednesday 14 June 2006 17:03, you wrote:\n> > \"jody brownell\" <[email protected]> writes:\n> > > 27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar qradar ::ffff:x.x.x.x(51149) idle\n> > \n> > This looks like a memory leak, but you haven't provided enough info to\n> > let someone else reproduce it. Can you log what your application is\n> > doing and extract a test case? What PG version is this, anyway?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Thu, 15 Jun 2006 09:15:10 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres consuming way too much memory???" }, { "msg_contents": "Some more information...\n\nWhen postgresql starts to go into this bloating state, I can only make it happen from my java app.\nIf I simultaneously perform insert of 10million rows into another table, it behaves as expected, but \nthe postgresql process handling the java connection slows down and bloats.\n\nThis leads me to think it has something to do with either the long lived connection. I am using \ndbcp jdbc pool from jakarta OR I am trigger this behavior with something I am doing in the link \nroutine I sent earlier.\n\nI am going to try closing the connection after each TX to see if this resolves it for now. If not, I will write\na java app, stored procedure (table etc) reproduce it without our application. \n\nOh yeah, it is when I use about have of my swap, dstat starts reporting heavy paging, memory climbs very quickly.\n\n\n\nOn Thursday 15 June 2006 09:15, jody brownell wrote:\n> The last version of postgres we had in production was 8.1.1 actually, not 8.1.3. \n> \n> So far, on my stability box and older production stability boxes I dont see the same behavior. \n> \n> I will install 8.1.1 on these boxes and see what I see.\n> \n> On Thursday 15 June 2006 09:01, jody brownell wrote:\n> > Sorry about that, I was in a slight panic :) \n> > \n> > I am using postgresql 8.1.4. I will install 8.1.3 and see if the same behavior exists.. we \n> > may have started seeing this in 8.1.3, but I dont think before. I will check some stability \n> > machines for similar bloating.\n> > \n> > The query (calling a store proc) which is always running when the spiral begins is below. It simply performs \n> > bulk linking of two objects. Depending on what the application is detecting, it could be called to insert\n> > 40 - 50k records, 500 at a time. When the box is healthy, this is a 200 - 500 ms op, but this starts to become \n> > a 20000+ ms op. 
I guess this makes sense considering the paging.....\n> > \n> > Jun 14 12:50:18 xxx postgres[5649]: [3-1] LOG: duration: 20117.984 ms statement: EXECUTE <unnamed> [PREPARE: select * from link_attacker_targets($1, $2, $3) as\n> > \n> > CREATE OR REPLACE FUNCTION link_attacker_targets (p_attacker bigint, p_targets varchar, p_targets_size integer) \n> > \treturns bigint[] as\n> > $body$\n> > DECLARE\n> > v_targets bigint[];\n> > v_target bigint;\n> > v_returns bigint[];\n> > v_returns_size integer := 0;\n> > BEGIN\n> > v_targets := convert_string2bigint_array (p_targets, p_targets_size);\n> > \n> > FOR i IN 1..p_targets_size LOOP\n> > \tv_target := v_targets[i];\n> > \n> > \tBEGIN\n> > INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n> > v_returns_size := v_returns_size + 1;\n> > v_returns[v_returns_size] := v_target;\n> > \n> > \tEXCEPTION WHEN unique_violation THEN\n> > \t\t-- do nothing... app cache may be out of date.\n> > \tEND;\n> > END LOOP;\n> > RETURN v_returns;\n> > END;\n> > $body$\n> > LANGUAGE plpgsql VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;\n> > \n> > On Wednesday 14 June 2006 17:03, you wrote:\n> > > \"jody brownell\" <[email protected]> writes:\n> > > > 27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar qradar ::ffff:x.x.x.x(51149) idle\n> > > \n> > > This looks like a memory leak, but you haven't provided enough info to\n> > > let someone else reproduce it. Can you log what your application is\n> > > doing and extract a test case? What PG version is this, anyway?\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > > \n> > > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Thu, 15 Jun 2006 12:02:50 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres consuming way too much memory???" }, { "msg_contents": "\"jody brownell\" <[email protected]> writes:\n> When postgresql starts to go into this bloating state, I can only make it happen from my java app.\n\nThat's interesting. The JDBC driver uses protocol features that aren't\nused by psql, so it's possible that the leak is triggered by one of\nthose features. I wouldn't worry too much about duplicating the problem\nfrom psql anyway --- a Java test case will do fine.\n\n> I am going to try closing the connection after each TX to see if this\n> resolves it for now. If not, I will write a java app, stored procedure\n> (table etc) reproduce it without our application.\n\nEven if that works around it for you, please pursue getting a test case\ntogether so we can find and fix the underlying problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 11:34:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory??? " }, { "msg_contents": "\"jody brownell\" <[email protected]> writes:\n> \tBEGIN\n> INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n> v_returns_size := v_returns_size + 1;\n> v_returns[v_returns_size] := v_target;\n \n> \tEXCEPTION WHEN unique_violation THEN\n> \t\t-- do nothing... app cache may be out of date.\n> \tEND;\n\nHmm. 
There is a known problem that plpgsql leaks some memory when\ncatching an exception:\nhttp://archives.postgresql.org/pgsql-hackers/2006-02/msg00885.php\n\nSo if your problem case involves a whole lot of duplicates then that\ncould explain the initial bloat. However, AFAIK that leakage is in\na transaction-local memory context, so the space ought to be freed at\ntransaction end. And Linux's malloc does know about giving space back\nto the kernel (unlike some platforms). So I'm not sure why you're\nseeing persistent bloat.\n\nCan you rewrite the function to not use an EXCEPTION block (perhaps\na separate SELECT probe for each row --- note this won't be reliable\nif there are concurrent processes making insertions)? If so, does\nthat fix the problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 11:44:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory??? " }, { "msg_contents": "Tom - that make sense... and fits the timeline of when the instability may have been introduced.\n\nI use soft references in java to track these relationships. When the GC needs memory it will collect\nobjects referenced by soft references so I need to have this exception caught where my caches may get cleaned.\n\nWhen the system is under load as it would be in this case, there references would be cleaned causing a large \nnumber of exceptions in the pgplsql, subsequently causing the leak... hence the swift downward spiral.\n\nThe previous version of these routines used selects but due to volume of selects, performance suffered quite \na bit. I dont think I could revert now for production use... closing the connection maybe the workaround for \nus for this release IF this is in fact what the problem is. Unfortunatly, I use the catch in about 20 similar \nroutines to reset sequences etc.... this may be painful :(\n\nI will modify the routine to help isolate the problem. stay tuned.\n\nBTW - the fix you mentioned .... is that targeted for 8.2? Is there a timeline for 8.2?\n\nOn Thursday 15 June 2006 12:44, Tom Lane wrote:\n> \"jody brownell\" <[email protected]> writes:\n> > \tBEGIN\n> > INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n> > v_returns_size := v_returns_size + 1;\n> > v_returns[v_returns_size] := v_target;\n> \n> > \tEXCEPTION WHEN unique_violation THEN\n> > \t\t-- do nothing... app cache may be out of date.\n> > \tEND;\n> \n> Hmm. There is a known problem that plpgsql leaks some memory when\n> catching an exception:\n> http://archives.postgresql.org/pgsql-hackers/2006-02/msg00885.php\n> \n> So if your problem case involves a whole lot of duplicates then that\n> could explain the initial bloat. However, AFAIK that leakage is in\n> a transaction-local memory context, so the space ought to be freed at\n> transaction end. And Linux's malloc does know about giving space back\n> to the kernel (unlike some platforms). So I'm not sure why you're\n> seeing persistent bloat.\n> \n> Can you rewrite the function to not use an EXCEPTION block (perhaps\n> a separate SELECT probe for each row --- note this won't be reliable\n> if there are concurrent processes making insertions)? If so, does\n> that fix the problem?\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n", "msg_date": "Thu, 15 Jun 2006 13:12:37 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres consuming way too much memory???" 
}, { "msg_contents": "\"jody brownell\" <[email protected]> writes:\n> BTW - the fix you mentioned .... is that targeted for 8.2? Is there a timeline for 8.2?\n\nThere is no fix as yet, but it's on the radar screen to fix for 8.2.\n\nWe expect 8.2 will go beta towards the end of summer (I forget whether\nAug 1 or Sep 1 is the target).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 12:18:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory??? " }, { "msg_contents": "On Thu, 2006-06-15 at 11:34 -0400, Tom Lane wrote:\n> \"jody brownell\" <[email protected]> writes:\n> > When postgresql starts to go into this bloating state, I can only make it happen from my java app.\n> \n> That's interesting. The JDBC driver uses protocol features that aren't\n> used by psql, so it's possible that the leak is triggered by one of\n> those features. I wouldn't worry too much about duplicating the problem\n> from psql anyway --- a Java test case will do fine.\n> \n> > I am going to try closing the connection after each TX to see if this\n> > resolves it for now. If not, I will write a java app, stored procedure\n> > (table etc) reproduce it without our application.\n\n\nJust to mention another possible culprit; this one doesn't seem all that\nlikely to me, but at least it's easy to investigate.\n\nWith DBCP and non-ancient versions of the JDBC driver that use v3\nprotocol and real prepared statements, it is possible to (mis)configure\nthe system to create an unbounded number of cached prepared statements\non any particular connection. Older versions of DBCP were also known to\nhave bugs which aggravated this issue when prepared statement caching\nwas enabled, IIRC.\n\n-- Mark Lewis\n", "msg_date": "Thu, 15 Jun 2006 09:57:25 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory???" }, { "msg_contents": "\nAdded to TODO:\n\n> o Fix memory leak from exceptions\n>\n> http://archives.postgresql.org/pgsql-performance/2006-06/msg0$\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"jody brownell\" <[email protected]> writes:\n> > \tBEGIN\n> > INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);\n> > v_returns_size := v_returns_size + 1;\n> > v_returns[v_returns_size] := v_target;\n> \n> > \tEXCEPTION WHEN unique_violation THEN\n> > \t\t-- do nothing... app cache may be out of date.\n> > \tEND;\n> \n> Hmm. There is a known problem that plpgsql leaks some memory when\n> catching an exception:\n> http://archives.postgresql.org/pgsql-hackers/2006-02/msg00885.php\n> \n> So if your problem case involves a whole lot of duplicates then that\n> could explain the initial bloat. However, AFAIK that leakage is in\n> a transaction-local memory context, so the space ought to be freed at\n> transaction end. And Linux's malloc does know about giving space back\n> to the kernel (unlike some platforms). So I'm not sure why you're\n> seeing persistent bloat.\n> \n> Can you rewrite the function to not use an EXCEPTION block (perhaps\n> a separate SELECT probe for each row --- note this won't be reliable\n> if there are concurrent processes making insertions)? 
If so, does\n> that fix the problem?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 13:17:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres consuming way too much memory???" } ]
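For reference, one possible shape of the EXCEPTION-free rewrite Tom suggests in this thread is sketched below. It assumes the unique constraint that was firing is on (attacker_id, target_id), and, as noted above, the probe-then-insert pattern is not safe if other sessions can insert the same rows concurrently.

CREATE OR REPLACE FUNCTION link_attacker_targets (p_attacker bigint, p_targets varchar, p_targets_size integer)
	returns bigint[] as
$body$
DECLARE
    v_targets bigint[];
    v_target bigint;
    v_returns bigint[];
    v_returns_size integer := 0;
BEGIN
    v_targets := convert_string2bigint_array (p_targets, p_targets_size);

    FOR i IN 1..p_targets_size LOOP
        v_target := v_targets[i];

        -- Probe for an existing row instead of trapping unique_violation,
        -- so no EXCEPTION block (and no per-row subtransaction) is needed.
        PERFORM 1 FROM attacker_target_link
         WHERE attacker_id = p_attacker AND target_id = v_target;

        IF NOT FOUND THEN
            INSERT INTO attacker_target_link (attacker_id, target_id)
            VALUES (p_attacker, v_target);
            v_returns_size := v_returns_size + 1;
            v_returns[v_returns_size] := v_target;
        END IF;
    END LOOP;
    RETURN v_returns;
END;
$body$
LANGUAGE plpgsql VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

The trade-off is an extra SELECT per row and the concurrency caveat, in exchange for sidestepping the exception-handler memory leak discussed above.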
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]] \n> Sent: 14 June 2006 20:52\n> To: Dave Page\n> Cc: Joshua D. Drake; [email protected]; \n> [email protected]\n> Subject: RE: [PERFORM] Which processor runs better for Postgresql?\n> \n> \n> Yeah, We've got a mix of 2650 and 2850s, and our 2850s have been rock\n> solid stable, unlike the 2650s. I was actually kinda surprised to see\n> how many people have problems with the 2850s.\n> \n> Apparently, the 2850 mobos have a built in RAID that's pretty stable\n> (it's got a PERC number I can't remembeR), but ordering them \n> with an add\n> on Perc RAID controller appears to make them somewhat \n> unstable as well.\n\nThat might be it - we always chose the onboard PERC because it has twice\nthe cache of the other options. \n\nRegards, Dave.\n", "msg_date": "Wed, 14 Jun 2006 21:00:57 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which processor runs better for Postgresql?" } ]
[ { "msg_contents": "All,\n So I thought I'd pose this question:\n\nIf I have a pg database attached to a powervault (PV) with just an \noff-the-shelf SCSI card I generally want fsync on to prevent data \ncorruption in case the PV should loose power.\nHowever, if I have it attached to a NetApp that ensures data writes \nto via the NVRAM can I safely turn fsync off to gain additional \nperformance?\n\nBest Regards,\nDan Gorman\n\n\n", "msg_date": "Wed, 14 Jun 2006 14:48:04 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "On Wed, 14 Jun 2006 14:48:04 -0700\nDan Gorman <[email protected]> wrote:\n> If I have a pg database attached to a powervault (PV) with just an \n> off-the-shelf SCSI card I generally want fsync on to prevent data \n> corruption in case the PV should loose power.\n> However, if I have it attached to a NetApp that ensures data writes \n> to via the NVRAM can I safely turn fsync off to gain additional \n> performance?\n\nI wouldn't. Remember, you still have to get the data to the NetApp.\nYou don't want things sitting in the computer's buffers when it's power\ngoes down.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 14 Jun 2006 17:53:24 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "No. You need fsync on in order to force the data to get TO the NetApp\nat the right time. With fsync off, the data gets cached in the\noperating system.\n\n-- Mark Lewis\n\nOn Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:\n> All,\n> So I thought I'd pose this question:\n> \n> If I have a pg database attached to a powervault (PV) with just an \n> off-the-shelf SCSI card I generally want fsync on to prevent data \n> corruption in case the PV should loose power.\n> However, if I have it attached to a NetApp that ensures data writes \n> to via the NVRAM can I safely turn fsync off to gain additional \n> performance?\n> \n> Best Regards,\n> Dan Gorman\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Wed, 14 Jun 2006 14:54:45 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "\nMark Lewis <[email protected]> writes:\n\n> On Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:\n> >\n> > However, if I have it attached to a NetApp that ensures data writes \n> > to via the NVRAM can I safely turn fsync off to gain additional \n> > performance?\n>\n> No. You need fsync on in order to force the data to get TO the NetApp\n> at the right time. With fsync off, the data gets cached in the\n> operating system.\n\nIn fact the benefit of the NVRAM is precisely that it makes sure you *don't*\nhave any reason to turn fsync off. 
It should make the fsync essentially free.\n\n-- \ngreg\n\n", "msg_date": "14 Jun 2006 23:33:53 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "On 14 Jun 2006 23:33:53 -0400, Greg Stark <[email protected]> wrote:\n> In fact the benefit of the NVRAM is precisely that it makes sure you *don't*\n> have any reason to turn fsync off. It should make the fsync essentially free.\n\nHaving run PostgreSQL on a NetApp with input from NetApp, this is\ncorrect. fsync should be turned on, but you will not incur the *real*\ndirect-to-disk cost of the sync, it will be direct-to-NVRAM.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 15 Jun 2006 01:14:26 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "That makes sense. Speaking of NetApp, we're using the 3050C with 4 FC \nshelfs. Any generic advice other than the NetApp (their NFS oracle \ntuning options)\nthat might be useful? (e.g. turning off snapshots)\n\nRegards,\nDan Gorman\n\nOn Jun 14, 2006, at 10:14 PM, Jonah H. Harris wrote:\n\n> On 14 Jun 2006 23:33:53 -0400, Greg Stark <[email protected]> wrote:\n>> In fact the benefit of the NVRAM is precisely that it makes sure \n>> you *don't*\n>> have any reason to turn fsync off. It should make the fsync \n>> essentially free.\n>\n> Having run PostgreSQL on a NetApp with input from NetApp, this is\n> correct. fsync should be turned on, but you will not incur the *real*\n> direct-to-disk cost of the sync, it will be direct-to-NVRAM.\n>\n> -- \n> Jonah H. Harris, Software Architect | phone: 732.331.1300\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 33 Wood Ave S, 2nd Floor | [email protected]\n> Iselin, New Jersey 08830 | http://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 14 Jun 2006 22:20:25 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "On 6/15/06, Dan Gorman <[email protected]> wrote:\n> shelfs. Any generic advice other than the NetApp (their NFS oracle\n> tuning options) that might be useful? (e.g. turning off snapshots)\n\nI was using PostgreSQL on a 980c, but feature-wise they're probably\npretty close.\n\nWhat type of application are you running? OLTP? If so, what type of\ntransaction volume? Are you planning to use any Flex* or Snap*\nfeatures? What type of volume layouts are you using?\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 15 Jun 2006 01:35:43 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "On 6/15/06, Jonah H. Harris <[email protected]> wrote:\n> On 6/15/06, Dan Gorman <[email protected]> wrote:\n> > shelfs. Any generic advice other than the NetApp (their NFS oracle\n> > tuning options) that might be useful? (e.g. 
turning off snapshots)\n>\n> I was using PostgreSQL on a 980c, but feature-wise they're probably\n> pretty close.\n>\n> What type of application are you running? OLTP? If so, what type of\n> transaction volume? Are you planning to use any Flex* or Snap*\n> features? What type of volume layouts are you using?\n\nAlso, you mentioned NFS... is that what you were planning? If you\nlicensed iSCSI, it's a bit better for the database from a performance\nangle.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 15 Jun 2006 01:38:18 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "Dan Gorman wrote:\n> That makes sense. Speaking of NetApp, we're using the 3050C with 4 FC \n> shelfs. Any generic advice other than the NetApp (their NFS oracle \n> tuning options)\n> that might be useful? (e.g. turning off snapshots)\n\nI'm not sure if this is in the tuning advice you already have, but we \nuse a dedicated gigabit interface to the NetApp, with jumbo (9K) frames, \nand an 8K NFS blocksize. We use this for both Oracle and Postgres when \nthe database resides on NetApp.\n\nJoe\n", "msg_date": "Wed, 14 Jun 2006 22:51:34 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "Currently I have jumbo frames enabled on the NA and the switches and \nalso are using a the 32K R/W NFS options. Everything is gigE.\n\nRegards,\nDan Gorman\n\n\nOn Jun 14, 2006, at 10:51 PM, Joe Conway wrote:\n\n> Dan Gorman wrote:\n>> That makes sense. Speaking of NetApp, we're using the 3050C with 4 \n>> FC shelfs. Any generic advice other than the NetApp (their NFS \n>> oracle tuning options)\n>> that might be useful? (e.g. turning off snapshots)\n>\n> I'm not sure if this is in the tuning advice you already have, but \n> we use a dedicated gigabit interface to the NetApp, with jumbo (9K) \n> frames, and an 8K NFS blocksize. We use this for both Oracle and \n> Postgres when the database resides on NetApp.\n>\n> Joe\n\n\n", "msg_date": "Wed, 14 Jun 2006 23:01:28 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" }, { "msg_contents": "On Thu, Jun 15, 2006 at 01:14:26AM -0400, Jonah H. Harris wrote:\n> On 14 Jun 2006 23:33:53 -0400, Greg Stark <[email protected]> wrote:\n> >In fact the benefit of the NVRAM is precisely that it makes sure you \n> >*don't*\n> >have any reason to turn fsync off. It should make the fsync essentially \n> >free.\n> \n> Having run PostgreSQL on a NetApp with input from NetApp, this is\n> correct. fsync should be turned on, but you will not incur the *real*\n> direct-to-disk cost of the sync, it will be direct-to-NVRAM.\n\nJust so there's no confusion... this applies to any caching RAID\ncontroller as well. You just need to ensure that the cache in the\ncontroller absolutely will not be lost in the event of a power failure\nor what-have-you. On most controllers this is accomplished with a simple\nbattery backup; I don't know if the higher-end stuff takes further steps\n(such as flashing the cache contents to flash memory on a power\nfailure).\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 15 Jun 2006 10:54:20 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres fsync off (not needed) with NetApp" } ]
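The configuration takeaway from this thread is simply that the durability settings stay at their safe values and the battery-backed/NVRAM cache is what makes them cheap. Illustratively, in postgresql.conf terms (not a tuning recommendation; wal_sync_method behaviour is platform-dependent and worth benchmarking):

fsync = true                # keep on; the controller/filer cache absorbs the physical sync cost
wal_sync_method = fsync     # fdatasync or open_sync are possible alternatives to benchmark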
[ { "msg_contents": "\n\n\n\nHi,\n\nIs it possible to start two instances of postgresql with different port and\ndirectory which run simultaneously?\nIf can then will this cause any problem or performance drop down?\n\nThanks.\n\n", "msg_date": "Thu, 15 Jun 2006 13:58:20 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Is it possible to start two instances of postgresql?" }, { "msg_contents": "am 15.06.2006, um 13:58:20 +0800 mailte [email protected] folgendes:\n> \n> \n> \n> \n> Hi,\n> \n> Is it possible to start two instances of postgresql with different port and\n> directory which run simultaneously?\n\nYes, this is possible, and this is the Debian way for updates.\n\n\n> If can then will this cause any problem or performance drop down?\n\nOf course, if you have high load in one database ... you have only one\nmachine.\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Thu, 15 Jun 2006 08:07:36 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" }, { "msg_contents": "[email protected] writes:\n> Is it possible to start two instances of postgresql with different port and\n> directory which run simultaneously?\n\nCertainly. We have one HACMP cluster which hosts 14 PostgreSQL\ninstances across two physical boxes. (If one went down, they'd all\nmigrate to the survivor...)\n\n> If can then will this cause any problem or performance drop down?\n\nThere certainly can be; the databases will be sharing disks, memory,\nand CPUs, so if they are avidly competing for resources, the\ncompetition is sure to have some impact on performance.\n\nFlip side: That 14 database cluster has several databases that are\nknown to be very lightly used; they *aren't* competing, and aren't a\nproblem.\n\nConsider it obvious that if you haven't enough memory or I/O bandwidth\nto cover your two PG instances, you'll find performance sucks... If\nyou have enough, then it can work fine...\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://cbbrowne.com/info/linuxxian.html\n\"At Microsoft, it doesn't matter which file you're compiling, only\nwhich flags you #define.\" -- Colin Plumb\n", "msg_date": "Thu, 15 Jun 2006 11:01:51 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" } ]
[ { "msg_contents": "\n\n\n\nso what is the best way to implement two databases in one machine?\nimplement with two postgresql instances with separate directory or\nimplement under one instance?\n\nif I implement two database in one instance, if one of the database crash\nwill it affect the other?\n\n\n\n \n \"A. Kretschmer\" \n <andreas.kretschmer@schollg To: [email protected] \n las.com> cc: \n Sent by: Subject: Re: [PERFORM]Is it possible to start two instances of postgresql? \n pgsql-performance-owner@pos \n tgresql.org \n \n \n 06/15/2006 02:07 PM \n \n \n\n\n\n\nam 15.06.2006, um 13:58:20 +0800 mailte [email protected]\nfolgendes:\n>\n>\n>\n>\n> Hi,\n>\n> Is it possible to start two instances of postgresql with different port\nand\n> directory which run simultaneously?\n\nYes, this is possible, and this is the Debian way for updates.\n\n\n> If can then will this cause any problem or performance drop down?\n\nOf course, if you have high load in one database ... you have only one\nmachine.\n\n\nHTH, Andreas\n--\nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe ===\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "Thu, 15 Jun 2006 14:34:51 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Is it possible to start two instances of postgresql?" }, { "msg_contents": "am 15.06.2006, um 14:34:51 +0800 mailte [email protected] folgendes:\n> \n> \n> \n> \n> so what is the best way to implement two databases in one machine?\n> implement with two postgresql instances with separate directory or\n> implement under one instance?\n\nWhat do you want to do?\nDo you need 2 separate pg-versions? Or do you need, for instance, a\nlive-db and a test-db? \n\n\n> if I implement two database in one instance, if one of the database crash\n> will it affect the other?\n\nYes, but on the other side, if you have 2 instances on the same machine\nand this machine chrash, then you lost all.\nWhat do you want to do? Perhaps, you need slony? (replication solution)\n\n\n\nBtw.: please, no silly fullquote.\n\n\nAndreas\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Thu, 15 Jun 2006 09:06:52 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" } ]
[ { "msg_contents": "\n\n\n\n\nboth of the two database are live but use for two different web app.\nmy company don't want to spend more to buy a new server, so then I think of\nto implement both under the same server and one instance..\nbut then my superior don't want to do that way.\n they want to implement two databases in one server but if one of the\ndatabase down it will not affect the other, so that's why I need to have\ntwo instances.\n\n\n\n\n \n \"A. Kretschmer\" \n <andreas.kretschmer@schollg To: [email protected] \n las.com> cc: \n Sent by: Subject: Re: [PERFORM]Is it possible to start two instances of postgresql? \n pgsql-performance-owner@pos \n tgresql.org \n \n \n 06/15/2006 03:06 PM \n \n \n\n\n\n\nam 15.06.2006, um 14:34:51 +0800 mailte [email protected]\nfolgendes:\n>\n>\n>\n>\n> so what is the best way to implement two databases in one machine?\n> implement with two postgresql instances with separate directory or\n> implement under one instance?\n\nWhat do you want to do?\nDo you need 2 separate pg-versions? Or do you need, for instance, a\nlive-db and a test-db?\n\n\n> if I implement two database in one instance, if one of the database crash\n> will it affect the other?\n\nYes, but on the other side, if you have 2 instances on the same machine\nand this machine chrash, then you lost all.\nWhat do you want to do? Perhaps, you need slony? (replication solution)\n\n\n\nBtw.: please, no silly fullquote.\n\n\nAndreas\n--\nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe ===\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n\n", "msg_date": "Thu, 15 Jun 2006 15:24:35 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Is it possible to start two instances of postgresql?" }, { "msg_contents": "[email protected] wrote:\n\n> both of the two database are live but use for two different web app.\n> my company don't want to spend more to buy a new server, so then I think of\n> to implement both under the same server and one instance..\n> but then my superior don't want to do that way.\n> they want to implement two databases in one server but if one of the\n> database down it will not affect the other, so that's why I need to have\n> two instances.\n\nWe are currently running your suggestion (two instances of PG) in a\nproduction server, with no obvious problems attributable to the setup\n(we have seen some performance problems with one system, but those are\nlikely caused by bad db/application design).\n\nIn our case the two systems are running different minor versions\n(although we are planning to migrate them both to the latest 7.4.x).\n\n/Nis\n\n", "msg_date": "Thu, 15 Jun 2006 10:15:02 +0200", "msg_from": "Nis Jorgensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" }, { "msg_contents": "\n> [email protected] wrote:\n>\n> \n>> both of the two database are live but use for two different web app.\n>> my company don't want to spend more to buy a new server, so then I think of\n>> to implement both under the same server and one instance..\n\nJust as an anecdote, I am running 30 databases on a single instance and \nit's working quite well. There may be reasons to run multiple \ninstances but it seems like tuning them to cooperate for memory would \npose some problems - e.g. 
effective_cache_size.\n\n-Dan\n\n", "msg_date": "Thu, 15 Jun 2006 11:17:25 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" }, { "msg_contents": "In response to Dan Harris <[email protected]>:\n> \n> > [email protected] wrote:\n> >\n> >> both of the two database are live but use for two different web app.\n> >> my company don't want to spend more to buy a new server, so then I think of\n> >> to implement both under the same server and one instance..\n> \n> Just as an anecdote, I am running 30 databases on a single instance and \n> it's working quite well. There may be reasons to run multiple \n> instances but it seems like tuning them to cooperate for memory would \n> pose some problems - e.g. effective_cache_size.\n\nThe only reason I can see for doing this is when you need to run two\ndifferent versions of PostgreSQL. Which is what I've been forced to\ndo on one of our servers.\n\nIt works, but it's a pain to admin. If you can just put all the databases\nin one db cluster (is that terminology still correct?) it'll be much\neasier.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n", "msg_date": "Thu, 15 Jun 2006 13:53:33 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to start two instances of postgresql?" } ]
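As a concrete sketch of the single-instance approach recommended in this thread, the two web applications can be given separate databases and owners inside one cluster; all names and the password are illustrative placeholders:

-- run once, as a superuser, against the one shared instance
CREATE USER webapp1_owner PASSWORD 'change_me';
CREATE USER webapp2_owner PASSWORD 'change_me';
CREATE DATABASE webapp1 OWNER webapp1_owner;
CREATE DATABASE webapp2 OWNER webapp2_owner;

Each application then connects to its own database on the same port, and machine-wide memory settings such as shared_buffers and effective_cache_size only have to be sized once.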
[ { "msg_contents": "\n Hello,\n\n Is it possible to somehow analyze function performance? E.g.\nwe are using function cleanup() which takes obviously too much time\nto execute but I have problems trying to figure what is slowing things\ndown.\n\n When I explain analyze function lines step by step it show quite\nacceptable performance.\n\n PostgreSQL 8.0 is running on two dual core Opterons.\n\n Thanks,\n\n Mindaugas\n\n", "msg_date": "Thu, 15 Jun 2006 15:16:32 +0300", "msg_from": "\"Mindaugas\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to analyze function performance" }, { "msg_contents": "It depends what is the purpose of the function. If it's mainly a\ncontainer for a heap of SQL queries along with some simple IF, ELSE\netc. then I use two simple ways to analyze the performance (or lack\nof performance):\n\n1) I use a lot of debug messages\n\n2) I print out all SQL and the execute EXPLAIN / EXPLAIN ANALYZE on them\n\nIf the function is mainly a computation of something, it's usually nice\nto try to use for example C language, as it's much faster than PL/pgSQL\nfor this type of functions.\n\nBut it depends on what you are trying to do in that function ...\n\nTomas\n\n> Hello,\n> \n> Is it possible to somehow analyze function performance? E.g.\n> we are using function cleanup() which takes obviously too much time\n> to execute but I have problems trying to figure what is slowing things\n> down.\n> \n> When I explain analyze function lines step by step it show quite\n> acceptable performance.\n> \n> PostgreSQL 8.0 is running on two dual core Opterons.\n> \n> Thanks,\n> \n> Mindaugas\n", "msg_date": "Thu, 15 Jun 2006 15:03:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to analyze function performance" }, { "msg_contents": "\"Mindaugas\" <[email protected]> writes:\n> Is it possible to somehow analyze function performance? E.g.\n> we are using function cleanup() which takes obviously too much time\n> to execute but I have problems trying to figure what is slowing things\n> down.\n\n> When I explain analyze function lines step by step it show quite\n> acceptable performance.\n\nAre you sure you are \"explain analyze\"ing the same queries the function\nis really doing? You have to account for the fact that what plpgsql is\nissuing is parameterized queries, and sometimes that limits the\nplanner's ability to pick a good plan. For instance, if you have\n\n\tdeclare x int;\n\tbegin\n\t\t...\n\t\tfor r in select * from foo where key = x loop ...\n\nthen what is really getting planned and executed is \"select * from foo\nwhere key = $1\" --- every plpgsql variable gets replaced by a parameter\nsymbol \"$n\". 
You can model this for EXPLAIN purposes with a prepared\nstatement:\n\n\tprepare p1(int) as select * from foo where key = $1;\n\texplain analyze execute p1(42);\n\nIf you find out that a particular query really sucks when parameterized,\nyou can work around this by using EXECUTE to force the query to be\nplanned afresh on each use with literal constants instead of parameters:\n\n\tfor r in execute 'select * from foo where key = ' || x loop ...\n\nThe replanning takes extra time, though, so don't do this except where\nyou've specifically proved there's a need.\n\nBTW, be careful to use quote_literal() when needed in queries built as\nstrings, else you'll have bugs and maybe even security problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 10:24:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to analyze function performance " } ]
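Pulling Tom's two suggestions from this thread into one place, a diagnostic session might look roughly like the following; the table foo, its columns and the wrapper function are made-up stand-ins for whatever the real function touches, so treat this as a sketch rather than a recipe.

-- 1. Reproduce what plpgsql really executes: a parameterized statement.
PREPARE p1(int) AS SELECT * FROM foo WHERE key = $1;
EXPLAIN ANALYZE EXECUTE p1(42);
DEALLOCATE p1;

-- 2. If the parameterized plan turns out to be the problem, build the query
--    as a string inside the function so it is planned afresh with the literal
--    value, using quote_literal() for any interpolated text.
CREATE OR REPLACE FUNCTION count_foo_by_name(p_name text) returns bigint as
$body$
DECLARE
    r record;
    v_count bigint := 0;
BEGIN
    FOR r IN EXECUTE 'SELECT count(*) AS c FROM foo WHERE name = '
                     || quote_literal(p_name) LOOP
        v_count := r.c;
    END LOOP;
    RETURN v_count;
END;
$body$
LANGUAGE plpgsql;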
[ { "msg_contents": "I'm not a programmer so understanding the optimizer code is WAY beyond my\nlimits.\n\nMy question, that I haven't seen answered elsewhere, is WHAT things can\naffect the choice of an index scan over a sequence scan. I understand that\nsometimes a sequence scan is faster and that you still have to get the data\nfrom the disk but my question relates to an issue we had pop up today.\n\nWe have 2 tables, which we'll refer to as laaf and laaf_new. The first table\nhas 220M rows and the second table has 4M rows. What were basically doing is\naggregating the records from the first table into the second one at which\npoint we're going to drop the first one. This is the same table I mentioned\npreviously in my post about pg_dump.\n\nlaaf_new has one less column than laaf and both were freshly vacuum analyzed\nafter having an index added on a single column (other than the primary key).\n\n\nThe query we were doing was as follows:\n\nselect main_account_status_dim_id, count(*)\nfrom cla_dw.loan_account_agg_fact_new\ngroup by main_account_status_dim_id\norder by main_account_status_dim_id;\n\nOne of our problems is that we don't have any PGSQL dbas here. All of our\nguys are DB2 (we're still looking though).\n\nNow I've been told by our DBA that we should have been able to wholy satisfy\nthat query via the indexes.\n\nWe did regular EXPLAINS on the query with seqscan enabled and disabled and\neven in our own tests actually running the queries, the results WERE faster\nwith a seq scan than an index scan but the question we were discussing is\nWHY did it choose the index scan and why is the index scan slower than the\nsequence scan? He's telling me that DB2 would have been able to do the whole\nthing with indexes.\n\nEXPLAINS:\n\n(the reason for the random_page_cost was that we had the default of 4 in the\n.conf file and were planning on changing it to 2 anyway to match our other\nserver)\n\nset random_page_cost=2;\nset enable_seqscan=on;\nexplain select main_account_status_dim_id, count(*)\nfrom cla_dw.loan_account_agg_fact\ngroup by main_account_status_dim_id\norder by main_account_status_dim_id;\n\n\"Sort (cost=8774054.54..8774054.66 rows=48 width=4)\"\n\" Sort Key: main_account_status_dim_id\"\n\" -> HashAggregate (cost=8774052.60..8774053.20 rows=48 width=4)\"\n\" -> Seq Scan on loan_account_agg_fact\n(cost=0.00..7609745.40rows=232861440 width=4)\"\n\n\nset random_page_cost=2;\nset enable_seqscan=off;\nexplain select main_account_status_dim_id, count(*)\nfrom cla_dw.loan_account_agg_fact\ngroup by main_account_status_dim_id\norder by main_account_status_dim_id;\n\n\"Sort (cost=108774054.54..108774054.66 rows=48 width=4)\"\n\" Sort Key: main_account_status_dim_id\"\n\" -> HashAggregate (cost=108774052.60..108774053.20 rows=48 width=4)\"\n\" -> Seq Scan on loan_account_agg_fact (cost=\n100000000.00..107609745.40 rows=232861440 width=4)\"\nHere's the DDL for the table laaf:\n\n\nWhen the system is not busy again, I'll run a verbose version. 
The query was\nrun against each of the tables to compare the results of aggregation change\nwith the new table.\n\nCREATE TABLE cla_dw.loan_account_agg_fact\n(\n loan_account_agg_fact_id int8 NOT NULL DEFAULT\nnextval('loan_account_agg_fact_loan_account_agg_fact_id_seq'::regclass),\n dw_load_date_id int4 NOT NULL DEFAULT 0,\n servicer_branch_dim_id int4 NOT NULL DEFAULT 0,\n main_account_status_dim_id int4 NOT NULL DEFAULT 0,\n product_dim_id int4 NOT NULL DEFAULT 0,\n next_due_date_id int4 NOT NULL DEFAULT 0,\n account_balance numeric(15,6) NOT NULL DEFAULT 0,\n loan_count int4 NOT NULL DEFAULT 0,\n principal numeric(15,6) NOT NULL DEFAULT 0,\n interest numeric(15,6) NOT NULL DEFAULT 0,\n fees numeric(15,6) NOT NULL DEFAULT 0,\n gl_principal numeric(15,6) NOT NULL DEFAULT 0,\n gl_interest numeric(15,6) NOT NULL DEFAULT 0,\n accruable_principal numeric(15,6) NOT NULL DEFAULT 0,\n unaccruable_principal numeric(15,6) NOT NULL DEFAULT 0,\n calculated_principal numeric(15,6) DEFAULT 0,\n current_interest numeric(15,6) NOT NULL DEFAULT 0,\n past_due_interest numeric(16,5) NOT NULL DEFAULT 0,\n cash_available numeric(15,6) DEFAULT 0,\n cash_collected numeric(15,6) DEFAULT 0,\n cash_collected_date_id int4 DEFAULT 0,\n dw_agg_load_dt timestamp(0) DEFAULT ('now'::text)::timestamp(6) with time\nzone,\n cash_available_principal numeric(15,6) DEFAULT 0,\n cash_available_current numeric(15,6) DEFAULT 0,\n cash_available_last numeric(15,6) DEFAULT 0,\n cash_available_interest numeric(15,6) DEFAULT 0,\n cash_available_fees numeric(15,6) DEFAULT 0,\n cash_not_collected numeric(15,6) DEFAULT 0,\n number_contacts_total int4 DEFAULT 0,\n number_broken_commitments int4 DEFAULT 0,\n loc_current_due_total numeric(15,6) DEFAULT 0,\n loc_current_due_principal numeric(15,6) DEFAULT 0,\n loc_current_due_interest numeric(15,6) DEFAULT 0,\n loc_current_due_fees numeric(15,6) DEFAULT 0,\n loc_past_due_last numeric(15,6) DEFAULT 0,\n loc_past_due_total numeric(15,6) DEFAULT 0,\n number_made_commitments int4 DEFAULT 0,\n CONSTRAINT loan_account_agg_fact_pkey PRIMARY KEY\n(loan_account_agg_fact_id)\n)\nWITH OIDS;\n\nCREATE INDEX loan_account_agg_fact_main_account_status_dim_id\n ON cla_dw.loan_account_agg_fact\n USING btree\n (main_account_status_dim_id)\n TABLESPACE fact_idx_part1_ts;\n\n\nHere's the DDL for the table laaf_new:\n\nCREATE TABLE cla_dw.loan_account_agg_fact_new\n(\n loan_account_agg_fact_id bigserial NOT NULL,\n dw_load_date_id int4 NOT NULL,\n servicer_branch_dim_id int4 NOT NULL,\n main_account_status_dim_id int4 NOT NULL,\n product_dim_id int4 NOT NULL,\n dw_agg_load_dt timestamp,\n account_balance numeric(15,6) NOT NULL DEFAULT 0,\n loan_count int4 NOT NULL DEFAULT 0,\n principal numeric(15,6) NOT NULL DEFAULT 0,\n interest numeric(15,6) NOT NULL DEFAULT 0,\n fees numeric(15,6) NOT NULL DEFAULT 0,\n gl_principal numeric(15,6) NOT NULL DEFAULT 0,\n gl_interest numeric(15,6) NOT NULL DEFAULT 0,\n accruable_principal numeric(15,6) DEFAULT 0,\n unaccruable_principal numeric(15,6) DEFAULT 0,\n calculated_principal numeric(15,6) DEFAULT 0,\n current_interest numeric(15,6) DEFAULT 0,\n past_due_interest numeric(15,6) DEFAULT 0,\n cash_available numeric(15,6) DEFAULT 0,\n cash_collected numeric(15,6) DEFAULT 0,\n cash_available_principal numeric(15,6) DEFAULT 0,\n cash_available_current numeric(15,6) DEFAULT 0,\n cash_available_last numeric(15,6) DEFAULT 0,\n cash_available_interest numeric(15,6) DEFAULT 0,\n cash_available_fees numeric(15,6) DEFAULT 0,\n cash_not_collected numeric(15,6) DEFAULT 0,\n 
number_contacts_total int4 DEFAULT 0,\n number_broken_commitments int4 DEFAULT 0,\n loc_current_due_total numeric(15,6) DEFAULT 0,\n loc_current_due_principal numeric(15,6) DEFAULT 0,\n loc_current_due_interest numeric(15,6) DEFAULT 0,\n loc_current_due_fees numeric(15,6) DEFAULT 0,\n loc_past_due_last numeric(15,6) DEFAULT 0,\n loc_past_due_total numeric(15,6) DEFAULT 0,\n number_made_commitments int4 DEFAULT 0,\n CONSTRAINT loan_account_agg_fact_pkey_new PRIMARY KEY\n(loan_account_agg_fact_id) USING INDEX TABLESPACE default_ts\n)\nWITH OIDS TABLESPACE fact_data_part1_ts;\n\nCREATE INDEX laafn_main_account_status_dim\n ON cla_dw.loan_account_agg_fact_new\n USING btree\n (main_account_status_dim_id)\n TABLESPACE fact_idx_part2_ts;\n", "msg_date": "Thu, 15 Jun 2006 14:05:46 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer internals" }, { "msg_contents": "On Thu, 2006-06-15 at 14:05 -0400, John Vincent wrote:\n> Now I've been told by our DBA that we should have been able to wholy\n> satisfy that query via the indexes.\n\nDB2 can satisfy the query using only indexes because DB2 doesn't do\nMVCC.\n\nAlthough MVCC is generally a win in terms of making the database easier\nto use and applications less brittle, it also means that the database\nmust inspect the visibility information for each row before it can\nanswer a query.
For most types of queries this isn't a big deal, but\nfor count(*) type queries, it slows things down.\n\nSince adding the visibility information to indexes would make them\nsignificantly more expensive to use and maintain, it isn't done.\nTherefore, each row has to be fetched from the main table anyway.\n\nSince in this particular query you are counting all rows of the\ndatabase, PG must fetch each row from the main table regardless, so the\nsequential scan is much faster because it avoids traversing the index\nand performing random read operations.\n\n-- Mark Lewis\n", "msg_date": "Thu, 15 Jun 2006 11:33:45 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On 6/15/06, Mark Lewis <[email protected]> wrote:\n\n> DB2 can satisfy the query using only indexes because DB2 doesn't do\n> MVCC.\n>\n> Although MVCC is generally a win in terms of making the database easier\n> to use and applications less brittle, it also means that the database\n> must inspect the visibility information for each row before it can\n> answer a query. For most types of queries this isn't a big deal, but\n> for count(*) type queries, it slows things down.\n\n\n\nMark,\n\nThanks for the answer. My DBAs just got this look on thier face when I\nshowed. It's not like the couldn't have investigated this information\nthemselves but I think the light finally came on.\n\nOne question that we came up with is how does this affect other aggregate\nfunctions like MAX,MIN,SUM and whatnot? Being that this is our data\nwarehouse, we use these all the time. As I've said previously, I didn't know\na human could generate some of the queries we've passed through this system.\n\n\nSince adding the visibility information to indexes would make them\n> significantly more expensive to use and maintain, it isn't done.\n> Therefore, each row has to be fetched from the main table anyway.\n>\n> Since in this particular query you are counting all rows of the\n> database, PG must fetch each row from the main table regardless, so the\n> sequential scan is much faster because it avoids traversing the index\n> and performing random read operations.\n>\n> -- Mark Lewis\n>\n\nOn 6/15/06, Mark Lewis <[email protected]> wrote:\nDB2 can satisfy the query using only indexes because DB2 doesn't doMVCC.Although MVCC is generally a win in terms of making the database easierto use and applications less brittle, it also means that the database\nmust inspect the visibility information for each row before it cananswer a query.  For most types of queries this isn't a big deal, butfor count(*) type queries, it slows things down.\nMark,Thanks for the answer. My DBAs just got this look on thier face when I showed. It's not like the couldn't have investigated this information themselves but I think the light finally came on.One question that we came up with is how does this affect other aggregate functions like MAX,MIN,SUM and whatnot? Being that this is our data warehouse, we use these all the time. 
As I've said previously, I didn't know a human could generate some of the queries we've passed through this system.\nSince adding the visibility information to indexes would make themsignificantly more expensive to use and maintain, it isn't done.\nTherefore, each row has to be fetched from the main table anyway.Since in this particular query you are counting all rows of thedatabase, PG must fetch each row from the main table regardless, so thesequential scan is much faster because it avoids traversing the index\nand performing random read operations.-- Mark Lewis", "msg_date": "Thu, 15 Jun 2006 14:46:11 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On Thu, 2006-06-15 at 14:46 -0400, John Vincent wrote:\n\n> One question that we came up with is how does this affect other\n> aggregate functions like MAX,MIN,SUM and whatnot? Being that this is\n> our data warehouse, we use these all the time. As I've said\n> previously, I didn't know a human could generate some of the queries\n> we've passed through this system. \n\nPreviously, MIN and MAX would also run slowly, for the same reason as\nCOUNT(*). But there really isn't a need for that, since you can still\nget a big speedup by scanning the index in order, looking up each row\nand stopping as soon as you find a visible one.\n\nThis has been fixed so newer versions of PG will run quickly and use the\nindex for MIN and MAX. I don't remember which version had that change;\nit might not be until 8.2. You can dig the archives to find out for\nsure. \n\nFor older versions of PG before the fix, you can make MIN and MAX run\nquickly by rewriting them in the following form:\n\nSELECT column FROM table ORDER BY column LIMIT 1;\n\nUnfortunately SUM is in the same boat as COUNT; in order for it to\nreturn a meaningful result it must inspect visibility information for\nall of the rows.\n\n-- Mark\n", "msg_date": "Thu, 15 Jun 2006 12:01:03 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On 6/15/06, Mark Lewis <[email protected]> wrote:\n>\n>\n> Unfortunately SUM is in the same boat as COUNT; in order for it to\n> return a meaningful result it must inspect visibility information for\n> all of the rows.\n>\n> -- Mark\n>\n\nWe'll this is interesting news to say the least. We went with PostgreSQL for\nour warehouse because we needed the advanced features that MySQL didn't have\nat the time (views/sprocs).\n\nIt sounds like we almost need another fact table for the places that we do\nSUM (which is not a problem just an additional map. If I'm interpreting this\nall correctly, we can't force PG to bypass a sequence scan even if we know\nour data is stable because of the MVCC aspect. In our case, as with most\nwarehouses (except those that do rolling loads during the day), we only\nwrite data to it for about 5 hours at night in batch.\n\nAny suggestions? FYI the original question wasn't meant as a poke at\ncomparing PG to MySQL to DB2. I'm not making an yvalue judgements either\nway. 
I'm just trying to understand how we can use it the best way possible.\n\nIf anyone from the bizgres team is watching, have they done any work in this\narea?\n\nThanks.\nJohn\n", "msg_date": "Thu, 15 Jun 2006 15:21:50 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On Thu, 2006-06-15 at 14:21, John Vincent wrote:\n> On 6/15/06, Mark Lewis <[email protected]> wrote:\n> Unfortunately SUM is in the same boat as COUNT; in order for\n> it to\n> return a meaningful result it must inspect visibility\n> information for\n> all of the rows.\n> \n> -- Mark\n> \n> We'll this is interesting news to say the least. We went with\n> PostgreSQL for our warehouse because we needed the advanced features\n> that MySQL didn't have at the time (views/sprocs). \n> \n> It sounds like we almost need another fact table for the places that\n> we do SUM (which is not a problem just an additional map. If I'm\n> interpreting this all correctly, we can't force PG to bypass a\n> sequence scan even if we know our data is stable because of the MVCC\n> aspect. In our case, as with most warehouses (except those that do\n> rolling loads during the day), we only write data to it for about 5\n> hours at night in batch. \n> \n> Any suggestions? FYI the original question wasn't meant as a poke at\n> comparing PG to MySQL to DB2. I'm not making an yvalue judgements\n> either way. I'm just trying to understand how we can use it the best\n> way possible. \n> \n> If anyone from the bizgres team is watching, have they done any work\n> in this area? \n\nThis might help:\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nSince you're doing a data warehouse, I would think materialized views\nwould be a natural addition anyway.\n", "msg_date": "Thu, 15 Jun 2006 14:26:39 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "Any suggestions? FYI the original question wasn't meant as a poke at\n> comparing PG to MySQL to DB2. I'm not making an yvalue judgements either\n> way. I'm just trying to understand how we can use it the best way possible.\n>\n\nActually we just thought about something. With PG, we can create an index\nthat is a SUM of the column where indexing, no? We're going to test this in\na few hours.
Would that be able to be satisfied by an index scan?\n", "msg_date": "Thu, 15 Jun 2006 15:38:32 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "> Any suggestions? FYI the original question wasn't meant as a poke at\n> comparing PG to MySQL to DB2. I'm not making an yvalue judgements either\n> way. I'm just trying to understand how we can use it the best way possible.\n>\n> If anyone from the bizgres team is watching, have they done any work in\n> this area?\n>\n> Thanks.\n> John\n>\n\nActually we just thought about something. With PG, we can create an index\nthat is a SUM of the column where indexing, no? We're going to test this in\na few hours. Would that be able to be satisfied by an index scan?\n\nAlso, we're looking at the link provided for the materialized views in PG.\n\nThanks.\n", "msg_date": "Thu, 15 Jun 2006 15:43:09 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On Thu, Jun 15, 2006 at 03:43:09PM -0400, John Vincent wrote:\n> >Any suggestions? FYI the original question wasn't meant as a poke at\n> >comparing PG to MySQL to DB2. I'm not making an yvalue judgements either\n> >way. I'm just trying to understand how we can use it the best way possible.\n> >\n> \n> Actually we just thought about something. With PG, we can create an index\n> that is a SUM of the column where indexing, no? We're going to test this in\n> a few hours. Would that be able to be satisfied by an index scan?\n> \n> Also, we're looking at the link provided for the materialized views in PG.\n> \n> Thanks.\n\ndecibel=# create index test on i ( sum(i) );\nERROR: cannot use aggregate function in index expression\ndecibel=# \n\nBTW, there have been a number of proposals to negate the effect of not\nhaving visibility info in indexes. Unfortunately, none of them have come\nto fruition yet, mostly because it's a very difficult problem to solve.\nBut it is something that the community would like to see happen.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 15 Jun 2006 17:23:45 -0500", "msg_from": "\"Jim C.
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "> decibel=# create index test on i ( sum(i) );\n> ERROR: cannot use aggregate function in index expression\n> decibel=#\n>\n> BTW, there have been a number of proposals to negate the effect of not\n> having visibility info in indexes. Unfortunately, none of them have come\n> to fruition yet, mostly because it's a very difficult problem to solve.\n> But it is something that the community would like to see happen.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\n\nYeah we got the same thing when we tried it.\n\nI thought about the whole thing on the way home and the downside is that we\nmight have to ditch pgsql.\n\nAs far as implementing it, it might make sense to translate READ UNCOMMITTED\nto that new functionality. If the default isolation level stays the current\nlevel, the people who need it can use it via WITH UR or somesuch.\n\nI know it's not that easy but it's an idea. I'm also thinking that the table\ninheritance we're going to be taking advantage of in 8.1 on the new server\nmight make the sequence scan less of an issue. The only reason the sequence\nscan really blows is that we have a single table with 220M rows and growing.\n\ndecibel=# create index test on i ( sum(i) );ERROR:  cannot use aggregate function in index expression\ndecibel=#BTW, there have been a number of proposals to negate the effect of nothaving visibility info in indexes. Unfortunately, none of them have cometo fruition yet, mostly because it's a very difficult problem to solve.\nBut it is something that the community would like to see happen.--Jim C. Nasby, Sr. Engineering Consultant      \[email protected] Software      \nhttp://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461\nYeah we got the same thing when we tried it.\nI thought about the whole thing on the way home and the downside is that we might have to ditch pgsql.As far as implementing it, it might make sense to translate READ UNCOMMITTED to that new functionality. If the default isolation level stays the current level, the people who need it can use it via WITH UR or somesuch.\nI know it's not that easy but it's an idea. I'm also thinking that the table inheritance we're going to be taking advantage of in 8.1 on the new server might make the sequence scan less of an issue. 
The only reason the sequence scan really blows is that we have a single table with 220M rows and growing.", "msg_date": "Thu, 15 Jun 2006 19:23:39 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "Mark Lewis wrote:\n> On Thu, 2006-06-15 at 14:05 -0400, John Vincent wrote:\n>> Now I've been told by our DBA that we should have been able to wholy\n>> satisfy that query via the indexes.\n \n> DB2 can satisfy the query using only indexes because DB2 doesn't do\n> MVCC.\n\nYou can get pretty much the same effect with materialized views.\nCreate a table that LOOKS like the index (just those columns),\nwith a foreign key relationship to the original table (cascade delete),\nand have the after-insert trigger on the main table write a row to the derived table.\nNow (index and) query the skinny table.\n\nAdvantage of these tables: you can cluster them regularily,\nbecause it doesn't hard-lock the main table.\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Thu, 15 Jun 2006 17:36:22 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n\n> On Thu, 2006-06-15 at 14:05 -0400, John Vincent wrote:\n> > Now I've been told by our DBA that we should have been able to wholy\n> > satisfy that query via the indexes.\n> \n> DB2 can satisfy the query using only indexes because DB2 doesn't do\n> MVCC.\n\nWell it's more subtle than that. DB2 most certainly does provide MVCC\nsemantics as does Oracle and MSSQL and any other serious SQL implementation.\n\nBut there are different ways to implement MVCC and every database makes\ndecisions that have pros and cons. Postgres's implementation has some big\nbenefits over others (no rollback segments, no expensive recovery operations,\nfast inserts and updates) but it also has disadvantages (periodic vacuums and\nindexes don't cover the data).\n\nThe distinction you're looking for here is sometimes called \"optimistic\"\nversus \"pessimistic\" space management. (Not locking, that's something else.)\nPostgres is \"pessimistic\" -- treats every transaction as if it might be rolled\nback. Oracle and most others are \"optimistic\" assumes every transaction will\nbe committed and stores information elsewhere to implement MVCC And recover in\ncase it's rolled back. The flip side is that Oracle and others like it have to\ndo a lot of extra footwork to do if you query data that hasn't been committed\nyet. That footwork has performance implications.\n\n-- \ngreg\n\n", "msg_date": "16 Jun 2006 07:23:26 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On 16 Jun 2006 07:23:26 -0400, Greg Stark <[email protected]> wrote:\n> The flip side is that Oracle and others like it have to\n> do a lot of extra footwork to do if you query data\n> that hasn't been committed yet. 
That footwork\n> has performance implications.\n\nNot disagreeing here at all, but considering that Oracle, DB2, and SQL\nServer, et al have proven themselves to perform extremely well under\nheavy load (in multiple benchmarks), the overhead of an UNDO\nimplementation has a calculable break even point.\n\nFeel free to debate it, but the optimistic approach adopted by nearly\nevery commercial database vendor is *generally* a better approach for\nOLTP.\n\nConsider Weikum & Vossen (p. 442):\n\nWe also need to consider the extra work that the recovery algorithm\nincurs during normal operation. This is exactly the catch with the\nclass of no-undo/no-redo algorithms. By and large, they come at the\nexpense of a substantial overhead during normal operations that may\nincrease the execution cost per transaction by a factor of two or even\nhigher. In other words, it reduces the achievable transaction\nthroughput of a given server configuration by a factor of two or more.\n\nNow, if we're considering UPDATES (the worst case for PostgreSQL's\ncurrent MVCC architecture), then this is (IMHO) a true statement.\nThere aren't many *successful* commercial databases that incur the\nadditional overhead of creating another version of the record, marking\nthe old one as having been updated, inserting N-number of new index\nentries to point to said record, and having to WAL-log all\naforementioned changes. I have yet to see any successful commercial\nRDBMS using some sort of no-undo algorithm that doesn't follow the,\n\"factor of two or more\" performance reduction. However, if you\nconsider an INSERT or DELETE in PostgreSQL, those are implemented much\nbetter than in most commercial database systems due to PostgreSQL's\nMVCC design. I've done a good amount of research on enhancing\nPostgreSQL's MVCC in UPDATE conditions and believe there is a nice\nhappy medium for us.\n\n/me waits for the obligatory and predictable, \"the benchmarks are\nflawed\" response.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 16 Jun 2006 08:51:33 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n\n> Now, if we're considering UPDATES (the worst case for PostgreSQL's\n> current MVCC architecture), then this is (IMHO) a true statement.\n> There aren't many *successful* commercial databases that incur the\n> additional overhead of creating another version of the record, marking\n> the old one as having been updated, inserting N-number of new index\n> entries to point to said record, and having to WAL-log all\n> aforementioned changes. \n\nWell Oracle has to do almost all that same work, it's just doing it in a\nseparate place called a rollback segment. There are pros and cons especially\nwhere it comes to indexes, but also where it comes to what happens when the\nnew record is larger than the old one.\n\n> I've done a good amount of research on enhancing PostgreSQL's MVCC in UPDATE\n> conditions and believe there is a nice happy medium for us.\n\nIMHO the biggest problem Postgres has is when you're updating a lot of records\nin a table with little free space. 
Postgres has to keep jumping back and forth\nbetween the old records it's reading in and the new records it's writing out.\nThat can in theory turn a simple linear update scan into a O(n^2) operation.\nIn practice read-ahead and caching should help but I'm not clear to what\nextent.\n\nThat and of course the visibility bitmap that has been much-discussed that\nmight make vacuum not have to visit every page and allow index scans to skip\nchecking visibility info for some pages would be major wins.\n\n> /me waits for the obligatory and predictable, \"the benchmarks are\n> flawed\" response.\n\nI wouldnt' say the benchmarks are flawed but I also don't think you can point\nto any specific design feature and say it's essential just on the basis of\nbottom-line results. You have to look at the actual benefit the specific wins.\n\nOracle and the others all implement tons of features intended to optimize\napplications like the benchmarks (and the benchmarks specifically of course:)\nthat have huge effects on the results. Partitioned tables, materialized views,\netc allow algorithmic improvements that do much more than any low level\noptimizations can do.\n\n-- \ngreg\n\n", "msg_date": "16 Jun 2006 09:21:01 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On 16 Jun 2006 09:21:01 -0400, Greg Stark <[email protected]> wrote:\n> Well Oracle has to do almost all that same work, it's just doing it in a\n> separate place called a rollback segment.\n\nWell, it's not really the same work. The process by which Oracle\nmanages UNDO is actually pretty simple and efficient, but complex in\nits implementation. There has also been some significant performance\nimprovements in this area in both 9i and 10g.\n\n> There are pros and cons especially where it comes\n> to indexes, but also where it comes to what happens\n> when the new record is larger than the old one.\n\nCertainly, you want to avoid row chaining at all costs; which is why\nPCTFREE is there. I have researched update-in-place for PostgreSQL\nand can avoid row-chaining... so I think we can get the same benefit\nwithout the management and administration cost.\n\n> IMHO the biggest problem Postgres has is when you're\n> updating a lot of records in a table with little free space.\n\nYes, this is certainly the most noticible case. This is one reason\nI'm behind the freespace patch. Unfortunately, a lot of inexperienced\npeople use VACUUM FULL and don't understand why VACUUM is *generally*\nbetter.(to free up block-level freespace and update FSM) assuming they\nhave enough hard disk space for the database.\n\n> That and of course the visibility bitmap that has been\n> much-discussed\n\nI'd certainly like to see it.\n\n> I wouldnt' say the benchmarks are flawed but I also\n> don't think you can point to any specific design\n> feature and say it's essential just on the basis of\n> bottom-line results. You have to look at the actual\n> benefit the specific wins.\n\nTrue.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 16 Jun 2006 09:43:55 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On Jun 16, 2006, at 8:43 AM, Jonah H. Harris wrote:\n> Yes, this is certainly the most noticible case. 
This is one reason\n> I'm behind the freespace patch. Unfortunately, a lot of inexperienced\n> people use VACUUM FULL and don't understand why VACUUM is *generally*\n> better.(to free up block-level freespace and update FSM) assuming they\n> have enough hard disk space for the database.\n\nAnother reason to turn autovac on by default in 8.2...\n\n>> That and of course the visibility bitmap that has been\n>> much-discussed\n> I'd certainly like to see it.\n\nWhat's the hold-up on this? I thought there were some technical \nissues that had yet to be resolved?\n\nBTW, I'll point out that DB2 and MSSQL didn't switch to MVCC until \ntheir most recent versions.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Sat, 17 Jun 2006 14:33:17 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" }, { "msg_contents": "On Thu, Jun 15, 2006 at 15:38:32 -0400,\n John Vincent <[email protected]> wrote:\n> Any suggestions? FYI the original question wasn't meant as a poke at\n> >comparing PG to MySQL to DB2. I'm not making an yvalue judgements either\n> >way. I'm just trying to understand how we can use it the best way possible.\n> >\n> \n> Actually we just thought about something. With PG, we can create an index\n> that is a SUM of the column where indexing, no? We're going to test this in\n> a few hours. Would that be able to be satisfied by an index scan?\n\nNo, that won't work. While you can make indexes on functions of a row, you\ncan't make indexes on aggregate functions.\n\nYou might find making a materialized view of the information you want can\nhelp with performance. The issues with \"sum\" are pretty much the same ones\nas with \"count\". You can find a couple different ways of doing materialized\nviews for \"count\" in the archives. There is a simple way of doing it that\ndoesn't work well with lots of concurrent updates and a more complicated\nmethod that does work well with lots of concurrent updates.\n", "msg_date": "Fri, 23 Jun 2006 23:32:27 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer internals" } ]
[ { "msg_contents": "We have a customer who are having performance problems. They have a \nlarge (36G+) postgres 8.1.3 database installed on an 8-way opteron with \n8G RAM, attached to an EMC SAN via fibre-channel (I don't have details \nof the EMC SAN model, or the type of fibre-channel card at the moment). \nThey're running RedHat ES3 (which means a 2.4.something Linux kernel).\n\nThey are unhappy about their query performance. We've been doing various \nthings to try to work out what we can do. One thing that has been \napparent is that autovacuum has not been able to keep the database \nsufficiently tamed. A pg_dump/pg_restore cycle reduced the total \ndatabase size from 81G to 36G. Performing the restore took about 23 hours.\n\nWe tried restoring the pg_dump output to one of our machines, a \ndual-core pentium D with a single SATA disk, no raid, I forget how much \nRAM but definitely much less than 8G. The restore took five hours. So it \nwould seem that our machine, which on paper should be far less \nimpressive than the customer's box, does more than four times the I/O \nperformance.\n\nTo simplify greatly - single local SATA disk beats EMC SAN by factor of \nfour.\n\nIs that expected performance, anyone? It doesn't sound right to me. Does \nanyone have any clues about what might be going on? Buggy kernel \ndrivers? Buggy kernel, come to think of it? Does a SAN just not provide \nadequate performance for a large database?\n\nI'd be grateful for any clues anyone can offer,\n\nTim", "msg_date": "Fri, 16 Jun 2006 07:50:19 +1000", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "SAN performance mystery" }, { "msg_contents": "On Thu, 2006-06-15 at 16:50, Tim Allen wrote:\n> We have a customer who are having performance problems. They have a \n> large (36G+) postgres 8.1.3 database installed on an 8-way opteron with \n> 8G RAM, attached to an EMC SAN via fibre-channel (I don't have details \n> of the EMC SAN model, or the type of fibre-channel card at the moment). \n> They're running RedHat ES3 (which means a 2.4.something Linux kernel).\n> \n> They are unhappy about their query performance. We've been doing various \n> things to try to work out what we can do. One thing that has been \n> apparent is that autovacuum has not been able to keep the database \n> sufficiently tamed. A pg_dump/pg_restore cycle reduced the total \n> database size from 81G to 36G. Performing the restore took about 23 hours.\n\nDo you have the ability to do any simple IO performance testing, like\nwith bonnie++ (the old bonnie is not really capable of properly testing\nmodern equipment, but bonnie++ will give you some idea of the throughput\nof the SAN) Or even just timing a dd write to the SAN?\n\n> We tried restoring the pg_dump output to one of our machines, a \n> dual-core pentium D with a single SATA disk, no raid, I forget how much \n> RAM but definitely much less than 8G. The restore took five hours. So it \n> would seem that our machine, which on paper should be far less \n> impressive than the customer's box, does more than four times the I/O \n> performance.\n> \n> To simplify greatly - single local SATA disk beats EMC SAN by factor of \n> four.\n> \n> Is that expected performance, anyone? It doesn't sound right to me. Does \n> anyone have any clues about what might be going on? Buggy kernel \n> drivers? Buggy kernel, come to think of it? Does a SAN just not provide \n> adequate performance for a large database?\n\nYes, this is not uncommon. 
It is very likely that your SATA disk is\nlying about fsync.\n\nWhat kind of backup are you using? insert statements or copy\nstatements? If insert statements, then the difference is quite\nbelievable. If copy statements, less so.\n\nNext time, on their big server, see if you can try a restore with fsync\nturned off and see if that makes the restore faster. Note you should\nturn fsync back on after the restore, as running without it is quite\ndangerous should you suffer a power outage.\n\nHow are you mounting to the EMC SAN? NFS, iSCSI? Other?\n", "msg_date": "Thu, 15 Jun 2006 16:56:54 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Tim Allen wrote:\n\n> We have a customer who are having performance problems. They have a \n> large (36G+) postgres 8.1.3 database installed on an 8-way opteron \n> with 8G RAM, attached to an EMC SAN via fibre-channel (I don't have \n> details of the EMC SAN model, or the type of fibre-channel card at the \n> moment). They're running RedHat ES3 (which means a 2.4.something Linux \n> kernel).\n>\n> They are unhappy about their query performance. We've been doing \n> various things to try to work out what we can do. One thing that has \n> been apparent is that autovacuum has not been able to keep the \n> database sufficiently tamed. A pg_dump/pg_restore cycle reduced the \n> total database size from 81G to 36G. Performing the restore took about \n> 23 hours.\n>\n> We tried restoring the pg_dump output to one of our machines, a \n> dual-core pentium D with a single SATA disk, no raid, I forget how \n> much RAM but definitely much less than 8G. The restore took five \n> hours. So it would seem that our machine, which on paper should be far \n> less impressive than the customer's box, does more than four times the \n> I/O performance.\n>\n> To simplify greatly - single local SATA disk beats EMC SAN by factor \n> of four.\n>\n> Is that expected performance, anyone? It doesn't sound right to me. \n> Does anyone have any clues about what might be going on? Buggy kernel \n> drivers? Buggy kernel, come to think of it? Does a SAN just not \n> provide adequate performance for a large database?\n>\n> I'd be grateful for any clues anyone can offer,\n\n\nI'm actually in a not dissimiliar position here- I was seeing the \nperformance of Postgres going to an EMC Raid over iSCSI running at about \n1/2 the speed of a lesser machine hitting a local SATA drive. That was, \nuntil I noticed that the SATA drive Postgres installation had fsync \nturned off, and the EMC version had fsync turned on. Turning fsync on \non the SATA drive dropped it's performance to being about 1/4th that of EMC.\n\nMoral of the story: make sure you're comparing apples to apples.\n\nBrian\n\n", "msg_date": "Thu, 15 Jun 2006 18:02:04 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "On 6/15/06, Tim Allen <[email protected]> wrote:\n>\n> <snipped>\n> Is that expected performance, anyone? It doesn't sound right to me. Does\n> anyone have any clues about what might be going on? Buggy kernel\n> drivers? Buggy kernel, come to think of it? 
Does a SAN just not provide\n> adequate performance for a large database?\n>\n> I'd be grateful for any clues anyone can offer,\n>\n> Tim\n\n\nTim,\n\nHere are the areas I would look at first if we're considering hardware to be\nthe problem:\n\nHBA and driver:\n Since this is a Intel/Linux system, the HBA is PROBABLY a qlogic. I would\nneed to know the SAN model to see what the backend of the SAN is itself. EMC\nhas some FC-attach models that actually have SATA disks underneath. You also\nmight want to look at the cache size of the controllers on the SAN.\n - Something also to note is that EMC provides a add-on called PowerPath\nfor load balancing multiple HBAs. If they don't have this, it might be worth\ninvestigating.\n - As with anything, disk layout is important. With the lower end IBM SAN\n(DS4000) you actually have to operate on physical spindle level. On our\n4300, when I create a LUN, I select the exact disks I want and which of the\ntwo controllers are the preferred path. On our DS6800, I just ask for\nstorage. I THINK all the EMC models are the \"ask for storage\" type of\nscenario. However with the 6800, you select your storage across extent\npools.\n\n\nHave they done any benchmarking of the SAN outside of postgres? Before we\nsettle on a new LUN configuration, we always do the dd,umount,mount,dd\nroutine. It's not a perfect test for databases but it will help you catch\nGROSS performance issues.\n\nSAN itself:\n - Could the SAN be oversubscribed? How many hosts and LUNs total do they\nhave and what are the queue_depths for those hosts? With the qlogic card,\nyou can set the queue depth in the BIOS of the adapter when the system is\nbooting up. CTRL-Q I think. If the system has enough local DASD to relocate\nthe database internally, it might be a valid test to do so and see if you\ncan isolate the problem to the SAN itself.\n\nPG itself:\n\n If you think it's a pgsql configuration, I'm guessing you already\nconfigured postgresql.conf to match thiers (or at least a fraction of thiers\nsince the memory isn't the same?). What about loading a \"from-scratch\"\nconfig file and restarting the tuning process?\n\n\nJust a dump of my thought process from someone who's been spending too much\ntime tuning his SAN and postgres lately.\n\nOn 6/15/06, Tim Allen <[email protected]> wrote:\n<snipped>Is that expected performance, anyone? It doesn't sound right to me. Doesanyone have any clues about what might be going on? Buggy kerneldrivers? Buggy kernel, come to think of it? Does a SAN just not provide\nadequate performance for a large database?I'd be grateful for any clues anyone can offer,TimTim,Here are the areas I would look at first if we're considering hardware to be the problem:\nHBA and driver:   Since this is a Intel/Linux system, the HBA is PROBABLY a qlogic. I would need to know the SAN model to see what the backend of the SAN is itself. EMC has some FC-attach models that actually have SATA disks underneath. You also might want to look at the cache size of the controllers on the SAN.\n   - Something also to note is that EMC provides a add-on called PowerPath for load balancing multiple HBAs. If they don't have this, it might be worth investigating.  - As with anything, disk layout is important. With the lower end IBM SAN (DS4000) you actually have to operate on physical spindle level. On our 4300, when I create a LUN, I select the exact disks I want and which of the two controllers are the preferred path. On our DS6800, I just ask for storage. 
I THINK all the EMC models are the \"ask for storage\" type of scenario. However with the 6800, you select your storage across extent pools. \nHave they done any benchmarking of the SAN outside of postgres? Before we settle on a new LUN configuration, we always do the dd,umount,mount,dd routine. It's not a perfect test for databases but it will help you catch GROSS performance issues.\nSAN itself:  - Could the SAN be oversubscribed? How many hosts and LUNs total do they have and what are the queue_depths for those hosts? With the qlogic card, you can set the queue depth in the BIOS of the adapter when the system is booting up. CTRL-Q I think.  If the system has enough local DASD to relocate the database internally, it might be a valid test to do so and see if you can isolate the problem to the SAN itself.\nPG itself:   If you think it's a pgsql configuration, I'm guessing you already\nconfigured postgresql.conf to match thiers (or at least a fraction of\nthiers since the memory isn't the same?). What about loading a \"from-scratch\" config file and restarting the tuning process?\nJust a dump of my thought process from someone who's been spending too much time tuning his SAN and postgres lately.", "msg_date": "Thu, 15 Jun 2006 18:15:38 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Brian Hurt <[email protected]> writes:\n> Tim Allen wrote:\n>> To simplify greatly - single local SATA disk beats EMC SAN by factor \n>> of four.\n\n> I'm actually in a not dissimiliar position here- I was seeing the \n> performance of Postgres going to an EMC Raid over iSCSI running at about \n> 1/2 the speed of a lesser machine hitting a local SATA drive. That was, \n> until I noticed that the SATA drive Postgres installation had fsync \n> turned off, and the EMC version had fsync turned on. Turning fsync on \n> on the SATA drive dropped it's performance to being about 1/4th that of EMC.\n\nAnd that's assuming that the SATA drive isn't configured to lie about\nwrite completion ...\n\nI agree with Brian's suspicion that the SATA drive isn't properly\nfsync'ing to disk, resulting in bogusly high throughput. However,\nISTM a well-configured SAN ought to be able to match even the bogus\nthroughput, because it should be able to rely on battery-backed\ncache to hold written blocks across a power failure, and hence should\nbe able to report write-complete as soon as it's got the page in cache\nrather than having to wait till it's really down on magnetic platter.\nWhich is what the SATA drive is doing ... only it can't keep the promise\nit's making for lack of any battery backup on its on-board cache.\n\nSo I'm thinking *both* setups may be misconfigured. Or else you forgot\nto buy the battery-backed-cache option on the SAN hardware.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 18:24:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery " }, { "msg_contents": "On Thu, 2006-06-15 at 18:24 -0400, Tom Lane wrote:\n> I agree with Brian's suspicion that the SATA drive isn't properly\n> fsync'ing to disk, resulting in bogusly high throughput. 
However,\n> ISTM a well-configured SAN ought to be able to match even the bogus\n> throughput, because it should be able to rely on battery-backed\n> cache to hold written blocks across a power failure, and hence should\n> be able to report write-complete as soon as it's got the page in cache\n> rather than having to wait till it's really down on magnetic platter.\n> Which is what the SATA drive is doing ... only it can't keep the promise\n> it's making for lack of any battery backup on its on-board cache.\n\nIt really depends on your SAN RAID controller. We have an HP SAN; I\ndon't remember the model number exactly, but we ran some tests and with\nthe battery-backed write cache enabled, we got some improvement in write\nperformance but it wasn't NEARLY as fast as an SATA drive which lied\nabout write completion.\n\nThe write-and-fsync latency was only about 2-3 times better than with no\nwrite cache at all. So I wouldn't assume that just because you've got a\nwrite cache on your SAN, that you're getting the same speed as\nfsync=off, at least for some cheap controllers.\n\n-- Mark Lewis\n", "msg_date": "Thu, 15 Jun 2006 16:25:17 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Given the fact that most SATA drives have only an 8MB cache, and your RAID\ncontroller should have at least 64MB, I would argue that the system with the\nRAID controller should always be faster. If it's not, you're getting\nshort-changed somewhere, which is typical on linux, because the drivers just\naren't there for a great many controllers that are out there.\n\nAlex.\n\nOn 6/15/06, Mark Lewis <[email protected]> wrote:\n>\n> On Thu, 2006-06-15 at 18:24 -0400, Tom Lane wrote:\n> > I agree with Brian's suspicion that the SATA drive isn't properly\n> > fsync'ing to disk, resulting in bogusly high throughput. However,\n> > ISTM a well-configured SAN ought to be able to match even the bogus\n> > throughput, because it should be able to rely on battery-backed\n> > cache to hold written blocks across a power failure, and hence should\n> > be able to report write-complete as soon as it's got the page in cache\n> > rather than having to wait till it's really down on magnetic platter.\n> > Which is what the SATA drive is doing ... only it can't keep the promise\n> > it's making for lack of any battery backup on its on-board cache.\n>\n> It really depends on your SAN RAID controller. We have an HP SAN; I\n> don't remember the model number exactly, but we ran some tests and with\n> the battery-backed write cache enabled, we got some improvement in write\n> performance but it wasn't NEARLY as fast as an SATA drive which lied\n> about write completion.\n>\n> The write-and-fsync latency was only about 2-3 times better than with no\n> write cache at all. So I wouldn't assume that just because you've got a\n> write cache on your SAN, that you're getting the same speed as\n> fsync=off, at least for some cheap controllers.\n>\n> -- Mark Lewis\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nGiven the fact that most SATA drives have only an 8MB cache, and your RAID controller should have at least 64MB, I would argue that the system with the RAID controller should always be faster.  
If it's not, you're getting short-changed somewhere, which is typical on linux, because the drivers just aren't there for a great many controllers that are out there.\nAlex.On 6/15/06, Mark Lewis <[email protected]> wrote:\nOn Thu, 2006-06-15 at 18:24 -0400, Tom Lane wrote:> I agree with Brian's suspicion that the SATA drive isn't properly> fsync'ing to disk, resulting in bogusly high throughput.  However,> ISTM a well-configured SAN ought to be able to match even the bogus\n> throughput, because it should be able to rely on battery-backed> cache to hold written blocks across a power failure, and hence should> be able to report write-complete as soon as it's got the page in cache\n> rather than having to wait till it's really down on magnetic platter.> Which is what the SATA drive is doing ... only it can't keep the promise> it's making for lack of any battery backup on its on-board cache.\nIt really depends on your SAN RAID controller.  We have an HP SAN; Idon't remember the model number exactly, but we ran some tests and withthe battery-backed write cache enabled, we got some improvement in write\nperformance but it wasn't NEARLY as fast as an SATA drive which liedabout write completion.The write-and-fsync latency was only about 2-3 times better than with nowrite cache at all.  So I wouldn't assume that just because you've got a\nwrite cache on your SAN, that you're getting the same speed asfsync=off, at least for some cheap controllers.-- Mark Lewis---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not       match", "msg_date": "Thu, 15 Jun 2006 19:58:00 -0400", "msg_from": "\"Alex Turner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Tim Allen wrote:\n> We have a customer who are having performance problems. They have a\n> large (36G+) postgres 8.1.3 database installed on an 8-way opteron with\n> 8G RAM, attached to an EMC SAN via fibre-channel (I don't have details\n> of the EMC SAN model, or the type of fibre-channel card at the moment).\n> They're running RedHat ES3 (which means a 2.4.something Linux kernel).\n> \n> They are unhappy about their query performance. We've been doing various\n> things to try to work out what we can do. One thing that has been\n> apparent is that autovacuum has not been able to keep the database\n> sufficiently tamed. A pg_dump/pg_restore cycle reduced the total\n> database size from 81G to 36G. Performing the restore took about 23 hours.\n\nHi Tim!\n\nto give you some comparision - we have a similiar sized database here\n(~38GB after a fresh restore and ~76GB after some months into\nproduction). 
the server is a 4 core Opteron @2,4Ghz with 16GB RAM,\nconnected via 2 QLogic 2Gbit HBA's to the SAN (IBM DS4300 Turbo).\n\nIt took us quite a while to get this combination up to speed but a full\ndump&restore cycle (via a pg_dump | psql pipe over the net) now takes\nonly about an hour.\n23 hours or even 5 hours sounds really excessive - I'm wondering about\nsome basic issues with the SAN.\nIf you are using any kind of multipathing (most likely the one in the\nQLA-drivers) I would at first assume that you are playing ping-pong\nbetween the controllers (ie the FC-cards do send IO to more than one\nSAN-head causing those to failover constantly completely destroying\nperformance).\nES3 is rather old too and I don't think that even their hacked up kernel\nis very good at driving a large Opteron SMP box (2.6 should be MUCH\nbetter in that regard).\n\nOther than that - how well is your postgresql instance tuned to your\nhardware ?\n\n\nStefan\n", "msg_date": "Fri, 16 Jun 2006 09:48:00 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Tim Allen wrote:\n> We have a customer who are having performance problems. They have a \n> large (36G+) postgres 8.1.3 database installed on an 8-way opteron with \n> 8G RAM, attached to an EMC SAN via fibre-channel (I don't have details \n> of the EMC SAN model, or the type of fibre-channel card at the moment). \n> They're running RedHat ES3 (which means a 2.4.something Linux kernel).\n\n> To simplify greatly - single local SATA disk beats EMC SAN by factor of \n> four.\n> \n> Is that expected performance, anyone? It doesn't sound right to me. Does \n> anyone have any clues about what might be going on? Buggy kernel \n> drivers? Buggy kernel, come to think of it? Does a SAN just not provide \n> adequate performance for a large database?\n> \n> I'd be grateful for any clues anyone can offer,\n> \n> Tim\n\nThanks to all who have replied so far. I've learned a few new things in \nthe meantime.\n\nFirstly, the fibrechannel card is an Emulex LP1050. The customer seems \nto have rather old drivers for it, so I have recommended that they \nupgrade asap. I've also suggested they might like to upgrade their \nkernel to something recent too (eg upgrade to RHEL4), but no telling \nwhether they'll accept that recommendation.\n\nThe fact that SATA drives are wont to lie about write completion, which \nseveral posters have pointed out, presumably has an effect on write \nperformance (ie apparent write performance is increased at the cost of \nan increased risk of data-loss), but, again presumably, not much of an \neffect on read performance. After loading the customer's database on our \nfairly modest box with the single SATA disk, we also tested select query \nperformance, and while we didn't see a factor of four gain, we certainly \nsaw that read performance is also substantially better. So the fsync \nissue possibly accounts for part of our factor-of-four, but not all of \nit. Ie, the SAN is still not doing well by comparison, even allowing for \nthe presumption that it is more honest.\n\nOne curious thing is that some postgres backends seem to spend an \ninordinate amount of time in uninterruptible iowait state. I found a \nposting to this list from December 2004 from someone who reported that \nvery same thing. 
For example, bringing down postgres on the customer box \nrequires kill -9, because there are invariably one or two processes so \ndeeply uninterruptible as to not respond to a politer signal. That \nindicates something not quite right, doesn't it?\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n", "msg_date": "Fri, 16 Jun 2006 19:11:01 +1000", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "\n\"Alex Turner\" <[email protected]> writes:\n\n> Given the fact that most SATA drives have only an 8MB cache, and your RAID\n> controller should have at least 64MB, I would argue that the system with the\n> RAID controller should always be faster. If it's not, you're getting\n> short-changed somewhere, which is typical on linux, because the drivers just\n> aren't there for a great many controllers that are out there.\n\nAlternatively Linux is using the 1-4 gigabytes of cache available to it\neffectively enough that the 64 megabytes of mostly duplicated cache just isn't\nespecially helpful...\n\nI never understood why disk caches on the order of megabytes are exciting. Why\nshould disk manufacturers be any better about cache management than OS\nauthors?\n\nIn the case of RAID 5 this could actually work against you since the RAID\ncontroller can _only_ use its cache to find parity blocks when writing.\nSoftware raid can use all of the OS's disk cache to that end.\n\n-- \ngreg\n\n", "msg_date": "16 Jun 2006 07:28:35 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "\nOn Jun 16, 2006, at 5:11 AM, Tim Allen wrote:\n>\n> One curious thing is that some postgres backends seem to spend an \n> inordinate amount of time in uninterruptible iowait state. I found \n> a posting to this list from December 2004 from someone who reported \n> that very same thing. For example, bringing down postgres on the \n> customer box requires kill -9, because there are invariably one or \n> two processes so deeply uninterruptible as to not respond to a \n> politer signal. That indicates something not quite right, doesn't it?\n>\n\nSounds like there could be a driver/array/kernel bug there that is \nkicking the performance down the tube.\nIf it was PG's fault it wouldn't be stuck uninterruptable.\n\n--\nJeff Trout <[email protected]>\nhttp://www.dellsmartexitin.com/\nhttp://www.stuarthamm.net/\n\n\n\n", "msg_date": "Fri, 16 Jun 2006 13:58:31 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "On Jun 16, 2006, at 6:28 AM, Greg Stark wrote:\n> I never understood why disk caches on the order of megabytes are \n> exciting. Why\n> should disk manufacturers be any better about cache management than OS\n> authors?\n>\n> In the case of RAID 5 this could actually work against you since \n> the RAID\n> controller can _only_ use its cache to find parity blocks when \n> writing.\n> Software raid can use all of the OS's disk cache to that end.\n\nIIRC some of the Bizgres folks have found better performance with \nsoftware raid for just that reason. 
The big advantage HW raid has is \nthat you can do a battery-backed cache, something you'll never be \nable to duplicate in a general-purpose computer (sure, you could \nbattery-back the DRAM if you really wanted to, but if the kernel \ncrashed you'd be completely screwed, which isn't the case with a \nbattery-backed RAID controller).\n\nThe quality of the RAID controller also makes a huge difference.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Sat, 17 Jun 2006 14:53:03 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Jeff Trout wrote:\n> On Jun 16, 2006, at 5:11 AM, Tim Allen wrote:\n>> One curious thing is that some postgres backends seem to spend an \n>> inordinate amount of time in uninterruptible iowait state. I found a \n>> posting to this list from December 2004 from someone who reported \n>> that very same thing. For example, bringing down postgres on the \n>> customer box requires kill -9, because there are invariably one or \n>> two processes so deeply uninterruptible as to not respond to a \n>> politer signal. That indicates something not quite right, doesn't it?\n> \n> Sounds like there could be a driver/array/kernel bug there that is \n> kicking the performance down the tube.\n> If it was PG's fault it wouldn't be stuck uninterruptable.\n\nThat's what I thought. I've advised the customer to upgrade their kernel \ndrivers, and to preferably upgrade their kernel as well. We'll see if \nthey accept the advice :-|.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n", "msg_date": "Mon, 19 Jun 2006 16:12:07 +1000", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, 2006-06-15 at 16:50, Tim Allen wrote:\n> \n>>We have a customer who are having performance problems. They have a \n>>large (36G+) postgres 8.1.3 database installed on an 8-way opteron with \n>>8G RAM, attached to an EMC SAN via fibre-channel (I don't have details \n>>of the EMC SAN model, or the type of fibre-channel card at the moment). \n>>They're running RedHat ES3 (which means a 2.4.something Linux kernel).\n>>\n>>They are unhappy about their query performance. We've been doing various \n>>things to try to work out what we can do. One thing that has been \n>>apparent is that autovacuum has not been able to keep the database \n>>sufficiently tamed. A pg_dump/pg_restore cycle reduced the total \n>>database size from 81G to 36G. Performing the restore took about 23 hours.\n> \n> Do you have the ability to do any simple IO performance testing, like\n> with bonnie++ (the old bonnie is not really capable of properly testing\n> modern equipment, but bonnie++ will give you some idea of the throughput\n> of the SAN) Or even just timing a dd write to the SAN?\n\nI've done some timed dd's. The timing results vary quite a bit, but it \nseems you can write to the SAN at about 20MB/s and read from it at about \n 12MB/s. 
Not an entirely scientific test, as I wasn't able to stop \nother activity on the machine, though I don't think much else was \nhappening. Certainly not impressive figures, compared with our machine \nwith the SATA disk (referred to below), which can get 161MB/s copying \nfiles on the same disk, and 48MB/s and 138Mb/s copying files from the \nsata disk respectively to and from a RAID5 array.\n\nThe customer is a large organisation, with a large IT department who \nguard their turf carefully, so there is no way I could get away with \ninstalling any heavier duty testing tools like bonnie++ on their machine.\n\n>>We tried restoring the pg_dump output to one of our machines, a \n>>dual-core pentium D with a single SATA disk, no raid, I forget how much \n>>RAM but definitely much less than 8G. The restore took five hours. So it \n>>would seem that our machine, which on paper should be far less \n>>impressive than the customer's box, does more than four times the I/O \n>>performance.\n>>\n>>To simplify greatly - single local SATA disk beats EMC SAN by factor of \n>>four.\n>>\n>>Is that expected performance, anyone? It doesn't sound right to me. Does \n>>anyone have any clues about what might be going on? Buggy kernel \n>>drivers? Buggy kernel, come to think of it? Does a SAN just not provide \n>>adequate performance for a large database?\n\n> Yes, this is not uncommon. It is very likely that your SATA disk is\n> lying about fsync.\n\nI guess a sustained write will flood the disk's cache and negate the \neffect of the write-completion dishonesty. But I have no idea how large \na copy would have to be to do that - can anyone suggest a figure? \nCertainly, the read performance of the SATA disk still beats the SAN, \nand there is no way to lie about read performance.\n\n> What kind of backup are you using? insert statements or copy\n> statements? If insert statements, then the difference is quite\n> believable. If copy statements, less so.\n\nA binary pg_dump, which amounts to copy statements, if I'm not mistaken.\n\n> Next time, on their big server, see if you can try a restore with fsync\n> turned off and see if that makes the restore faster. Note you should\n> turn fsync back on after the restore, as running without it is quite\n> dangerous should you suffer a power outage.\n> \n> How are you mounting to the EMC SAN? NFS, iSCSI? Other?\n\niSCSI, I believe. Some variant of SCSI, anyway, of that I'm certain.\n\nThe conclusion I'm drawing here is that this SAN does not perform at all \nwell, and is not a good database platform. It's sounding from replies \nfrom other people that this might be a general property of SAN's, or at \nleast the ones that are not stratospherically priced.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n", "msg_date": "Mon, 19 Jun 2006 20:09:47 +1000", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "On Mon, Jun 19, 2006 at 08:09:47PM +1000, Tim Allen wrote:\n>Certainly, the read performance of the SATA disk still beats the SAN, \n>and there is no way to lie about read performance.\n\nSure there is: you have the data cached in system RAM. 
I find it real \nhard to believe that you can sustain 161MB/s off a single SATA disk.\n\nMike Stone\n", "msg_date": "Mon, 19 Jun 2006 06:24:32 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "John Vincent wrote:\n> <snipped>\n> Is that expected performance, anyone? It doesn't sound right to me. Does\n> anyone have any clues about what might be going on? Buggy kernel\n> drivers? Buggy kernel, come to think of it? Does a SAN just not provide\n> adequate performance for a large database?\n> \n> Tim,\n> \n> Here are the areas I would look at first if we're considering hardware \n> to be the problem:\n> \n> HBA and driver:\n> Since this is a Intel/Linux system, the HBA is PROBABLY a qlogic. I \n> would need to know the SAN model to see what the backend of the SAN is \n> itself. EMC has some FC-attach models that actually have SATA disks \n> underneath. You also might want to look at the cache size of the \n> controllers on the SAN.\n\nAs I noted in another thread, the HBA is an Emulex LP1050, and they have \na rather old driver for it. I've recommended that they update ASAP. This \nhasn't happened yet.\n\nI know very little about the SAN itself - the customer hasn't provided \nany information other than the brand name, as they selected it and \ninstalled it themselves. I shall ask for more information.\n\n> - Something also to note is that EMC provides a add-on called \n> PowerPath for load balancing multiple HBAs. If they don't have this, it \n> might be worth investigating.\n\nOK, thanks, I'll ask the customer whether they've used PowerPath at all. \nThey do seem to have it installed on the machine, but I suppose that \ndoesn't guarantee it's being used correctly. However, it looks like they \nhave just the one HBA, so, if I've correctly understood what load \nbalancing means in this context, it's not going to help; right?\n\n> - As with anything, disk layout is important. With the lower end IBM \n> SAN (DS4000) you actually have to operate on physical spindle level. On \n> our 4300, when I create a LUN, I select the exact disks I want and which \n> of the two controllers are the preferred path. On our DS6800, I just ask \n> for storage. I THINK all the EMC models are the \"ask for storage\" type \n> of scenario. However with the 6800, you select your storage across \n> extent pools.\n> \n> Have they done any benchmarking of the SAN outside of postgres? Before \n> we settle on a new LUN configuration, we always do the \n> dd,umount,mount,dd routine. It's not a perfect test for databases but it \n> will help you catch GROSS performance issues.\n\nI've done some dd'ing myself, as described in another thread. The \nresults are not at all encouraging - their SAN seems to do about 20MB/s \nor less.\n\n> SAN itself:\n> - Could the SAN be oversubscribed? How many hosts and LUNs total do \n> they have and what are the queue_depths for those hosts? With the qlogic \n> card, you can set the queue depth in the BIOS of the adapter when the \n> system is booting up. CTRL-Q I think. If the system has enough local \n> DASD to relocate the database internally, it might be a valid test to do \n> so and see if you can isolate the problem to the SAN itself.\n\nThe SAN possibly is over-subscribed. Can you suggest any easy ways for \nme to find out? The customer has an IT department who look after their \nSANs, and they're not keen on outsiders poking their noses in. 
It's hard \nfor me to get any direct access to the SAN itself.\n\n> PG itself:\n> \n> If you think it's a pgsql configuration, I'm guessing you already \n> configured postgresql.conf to match thiers (or at least a fraction of \n> thiers since the memory isn't the same?). What about loading a \n> \"from-scratch\" config file and restarting the tuning process?\n\nThe pg configurations are not identical. However, given the differences \nin raw I/O speed observed, it doesn't seem likely that the difference in \nconfiguration is responsible. Yes, as you guessed, we set more \nconservative options on the less capable box. Doing proper double-blind \ntests on the customer box is difficult, as it is in production and the \ncustomer has a very low tolerance for downtime.\n\n> Just a dump of my thought process from someone who's been spending too \n> much time tuning his SAN and postgres lately.\n\nThanks for all the suggestions, John. I'll keep trying to follow some of \nthem up.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n", "msg_date": "Mon, 19 Jun 2006 20:28:07 +1000", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "* Tim Allen ([email protected]) wrote:\n> The conclusion I'm drawing here is that this SAN does not perform at all \n> well, and is not a good database platform. It's sounding from replies \n> from other people that this might be a general property of SAN's, or at \n> least the ones that are not stratospherically priced.\n\nI'd have to agree with you about the specific SAN/setup you're working\nwith there. I certainly disagree that it's a general property of SAN's\nthough. We've got a DS4300 with FC controllers and drives, hosts are\ngenerally dual-controller load-balanced and it works quite decently.\n\nIndeed, the EMC SANs are generally the high-priced ones too, so not\nreally sure what to tell you about the poor performance you're seeing\nout of it. Your IT folks and/or your EMC rep. should be able to resolve\nthat, really...\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Mon, 19 Jun 2006 08:41:54 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "On 6/19/06, Tim Allen <[email protected]> wrote:\n>\n>\n> As I noted in another thread, the HBA is an Emulex LP1050, and they have\n> a rather old driver for it. I've recommended that they update ASAP. This\n> hasn't happened yet.\n\n\nYeah, I saw that in a later thread. I would suggest also that the BIOS\nsettings on the HBA itself have been investigated. An example is the Qlogic\nHBAs have a profile of sorts, one for tape and one for disk. Could be\nsomething there.\n\n\n> OK, thanks, I'll ask the customer whether they've used PowerPath at all.\n> They do seem to have it installed on the machine, but I suppose that\n> doesn't guarantee it's being used correctly. However, it looks like they\n> have just the one HBA, so, if I've correctly understood what load\n> balancing means in this context, it's not going to help; right?\n\n\nIf they have a single HBA then no it won't help. I'm not very intimate on\npowerpath but it might even HURT if they have it enabled with one HBA. As an\nexample, we were in the process of migrating an AIX LPAR to our DS6800. We\nonly had one spare HBA to assign it. The default policy with the SDD driver\nis lb (load balancing). 
The problem is that with the SDD driver you see\nmultiple hdisks per HBA per controller port on the SAN. Since we had 4\ncontroller ports active on the SAN, our HBA saw 4 hdisks per LUN. The SDD\ndriver abstracts that out as a single vpath and you use the vpaths as your\npv on the system. The problem was that it was attempting to load balance\nacross a single hba which was NOT what we wanted.\n\n\n\n>\n> I've done some dd'ing myself, as described in another thread. The\n> results are not at all encouraging - their SAN seems to do about 20MB/s\n> or less.\n\n\nI saw that as well.\n\n\n> The SAN possibly is over-subscribed. Can you suggest any easy ways for\n> me to find out? The customer has an IT department who look after their\n> SANs, and they're not keen on outsiders poking their noses in. It's hard\n> for me to get any direct access to the SAN itself.\n\n\nWhen I say over-subscribed, you have to look at all the active LUNs and all\nof the systems attached as well. With the DS4300 (standard not turbo\noption), the SAN can handle 512 I/Os per second. If I have 4 LUNs assigned\nto four systems (1 per system), and each LUN has a queue_depth of 128 from\neach system, I''ll oversubscribe with the next host attach unless I back the\nqueue_depth off on each host. Contrast that with the Turbo controller option\nwhich does 1024 I/Os per sec and I can duplicate what I have now or add a\nsecond LUN per host. I can't even find how much our DS6800 supports.\n\n\n> Thanks for all the suggestions, John. I'll keep trying to follow some of\n> them up.\n\n\n From what I can tell, it sounds like the SATA problem other people have\nmentioned sounds like the culprit.\n\nOn 6/19/06, Tim Allen <[email protected]> wrote:\nAs I noted in another thread, the HBA is an Emulex LP1050, and they havea rather old driver for it. I've recommended that they update ASAP. Thishasn't happened yet.Yeah, I saw that in a later thread. I would suggest also that the BIOS settings on the HBA itself have been investigated. An example is the Qlogic HBAs have a profile of sorts, one for tape and one for disk. Could be something there.\nOK, thanks, I'll ask the customer whether they've used PowerPath at all.\nThey do seem to have it installed on the machine, but I suppose thatdoesn't guarantee it's being used correctly. However, it looks like theyhave just the one HBA, so, if I've correctly understood what loadbalancing means in this context, it's not going to help; right?\nIf they have a single HBA then no it won't help. I'm not very intimate on powerpath but it might even HURT if they have it enabled with one HBA. As an example, we were in the process of migrating an AIX LPAR to our DS6800. We only had one spare HBA to assign it. The default policy with the SDD driver is lb (load balancing). The problem is that with the SDD driver you see multiple hdisks per HBA per controller port on the SAN. Since we had 4 controller ports active on the SAN, our HBA saw 4 hdisks per LUN. The SDD driver abstracts that out as a single vpath and you use the vpaths as your pv on the system. The problem was that it was attempting to load balance across a single hba which was NOT what we wanted.\nI've done some dd'ing myself, as described in another thread. The\nresults are not at all encouraging - their SAN seems to do about 20MB/sor less.I saw that as well. \nThe SAN possibly is over-subscribed. Can you suggest any easy ways forme to find out? 
The customer has an IT department who look after theirSANs, and they're not keen on outsiders poking their noses in. It's hard\nfor me to get any direct access to the SAN itself.When I say over-subscribed, you have to look at all the active LUNs and all of the systems attached as well. With the DS4300 (standard not turbo option), the SAN can handle 512 I/Os per second. If I have 4 LUNs assigned to four systems (1 per system), and each LUN has a queue_depth of 128 from each system, I''ll oversubscribe with the next host attach unless I back the queue_depth off on each host. Contrast that with the Turbo controller option which does 1024 I/Os per sec and I can duplicate what I have now or add a second LUN per host. I can't even find how much our DS6800 supports.\nThanks for all the suggestions, John. I'll keep trying to follow some of\nthem up.From what I can tell, it sounds like the SATA problem other people have mentioned sounds like the culprit.", "msg_date": "Mon, 19 Jun 2006 08:54:18 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "> I'd have to agree with you about the specific SAN/setup you're working\n> with there. I certainly disagree that it's a general property of SAN's\n> though. We've got a DS4300 with FC controllers and drives, hosts are\n> generally dual-controller load-balanced and it works quite decently.\n\n\nHow are you guys doing the load balancing? IIRC, the RDAC driver only does\nfailover. Or are you using the OS level multipathing instead? While we were\non the 4300 for our AIX boxes, we just created two big RAID5 LUNs and\nassigned one to each controller. With 2 HBAs and LVM stripping that was\nabout the best we could get in terms of load balancing.\n\nIndeed, the EMC SANs are generally the high-priced ones too, so not\n> really sure what to tell you about the poor performance you're seeing\n> out of it. Your IT folks and/or your EMC rep. should be able to resolve\n> that, really...\n\n\nThe only exception I've heard to this is the Clarion AX150. We looked at one\nand we were warned off of it by some EMC gearheads.\n\n Enjoy,\n>\n> Stephen\n>\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.3 (GNU/Linux)\n>\n> iD8DBQFElpuRrzgMPqB3kigRAuo8AJ9vlxRK7VPMb9rN7AFm/qMNHLbdBwCfZiih\n> ZHApIcDhhj/J/Es9KPXEl/s=\n> =25MX\n> -----END PGP SIGNATURE-----\n>\n>\n>\n\nI'd have to agree with you about the specific SAN/setup you're working\nwith there.  I certainly disagree that it's a general property of SAN'sthough.  We've got a DS4300 with FC controllers and drives, hosts aregenerally dual-controller load-balanced and it works quite decently.\nHow are you guys doing the load balancing? IIRC, the RDAC driver only does failover. Or are you using the OS level multipathing instead? While we were on the 4300 for our AIX boxes, we just created two big RAID5 LUNs and assigned one to each controller. With 2 HBAs and LVM stripping that was about the best we could get in terms of load balancing.\nIndeed, the EMC SANs are generally the high-priced ones too, so notreally sure what to tell you about the poor performance you're seeing\nout of it.  Your IT folks and/or your EMC rep. should be able to resolvethat, really...The only exception I've heard to this is the Clarion AX150. 
We looked at one and we were warned off of it by some EMC gearheads.\n        Enjoy,                Stephen-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.3 (GNU/Linux)iD8DBQFElpuRrzgMPqB3kigRAuo8AJ9vlxRK7VPMb9rN7AFm/qMNHLbdBwCfZiihZHApIcDhhj/J/Es9KPXEl/s==25MX-----END PGP SIGNATURE-----", "msg_date": "Mon, 19 Jun 2006 08:58:47 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": ">\n> > I'd have to agree with you about the specific SAN/setup you're working\n> > with there. I certainly disagree that it's a general property of SAN's\n> > though. We've got a DS4300 with FC controllers and drives, hosts are\n> > generally dual-controller load-balanced and it works quite decently.\n> >\n> How are you guys doing the load balancing? IIRC, the RDAC driver only does\n> failover. Or are you using the OS level multipathing instead? While we were\n> on the 4300 for our AIX boxes, we just created two big RAID5 LUNs and\n> assigned one to each controller. With 2 HBAs and LVM stripping that was\n> about the best we could get in terms of load balancing.\n>\n> Indeed, the EMC SANs are generally the high-priced ones too, so not\n> > really sure what to tell you about the poor performance you're seeing\n> > out of it. Your IT folks and/or your EMC rep. should be able to resolve\n> > that, really...\n>\n>\n> The only exception I've heard to this is the Clarion AX150. We looked at\n> one and we were warned off of it by some EMC gearheads.\n>\n>\n>\n\n\nI'd have to agree with you about the specific SAN/setup you're working\nwith there.  I certainly disagree that it's a general property of SAN'sthough.  We've got a DS4300 with FC controllers and drives, hosts aregenerally dual-controller load-balanced and it works quite decently. \nHow are you guys doing the load balancing? IIRC, the RDAC driver only does failover. Or are you using the OS level multipathing instead? While we were on the 4300 for our AIX boxes, we just created two big RAID5 LUNs and assigned one to each controller. With 2 HBAs and LVM stripping that was about the best we could get in terms of load balancing.\nIndeed, the EMC SANs are generally the high-priced ones too, so not\nreally sure what to tell you about the poor performance you're seeing\nout of it.  Your IT folks and/or your EMC rep. should be able to resolvethat, really...The only exception I've heard to this is the Clarion AX150. We looked at one and we were warned off of it by some EMC gearheads.", "msg_date": "Mon, 19 Jun 2006 08:59:48 -0400", "msg_from": "\"John Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "* John Vincent ([email protected]) wrote:\n> >> I'd have to agree with you about the specific SAN/setup you're working\n> >> with there. I certainly disagree that it's a general property of SAN's\n> >> though. We've got a DS4300 with FC controllers and drives, hosts are\n> >> generally dual-controller load-balanced and it works quite decently.\n> >>\n> >How are you guys doing the load balancing? IIRC, the RDAC driver only does\n> >failover. Or are you using the OS level multipathing instead? While we were\n> >on the 4300 for our AIX boxes, we just created two big RAID5 LUNs and\n> >assigned one to each controller. With 2 HBAs and LVM stripping that was\n> >about the best we could get in terms of load balancing.\n\nWe're using the OS-level multipathing. 
I tend to prefer using things\nlike multipath over specific-driver options. I havn't spent a huge\namount of effort profiling the SAN, honestly, but it's definitely faster\nthan the direct-attached hardware-RAID5 SCSI system we used to use (from\nnStor), though that could have been because they were smaller, slower,\nregular SCSI disks (not FC).\n\nA simple bonnie++ run on one of the systems on the SAN gave me this:\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nvardamir 32200M 40205 15 22399 5 102572 10 288.4 0\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 2802 99 +++++ +++ +++++ +++ 2600 99 +++++ +++ 10205 100\n\nSo, 40MB/s out, 102MB/s in, or so. This was on an ext3 filesystem.\nUnderneath that array it's a 3-disk RAID5 of 300GB 10k RPM FC disks.\nWe also have a snapshot on that array, but it was disabled at the time.\n\n> >Indeed, the EMC SANs are generally the high-priced ones too, so not\n> >> really sure what to tell you about the poor performance you're seeing\n> >> out of it. Your IT folks and/or your EMC rep. should be able to resolve\n> >> that, really...\n> >\n> >\n> >The only exception I've heard to this is the Clarion AX150. We looked at\n> >one and we were warned off of it by some EMC gearheads.\n\nYeah, the Clarion is the EMC \"cheap\" line, and I think the AX150 was the\nextra-cheap one which Dell rebranded and sold.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 19 Jun 2006 11:04:25 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Michael Stone wrote:\n> On Mon, Jun 19, 2006 at 08:09:47PM +1000, Tim Allen wrote:\n>> Certainly, the read performance of the SATA disk still beats the SAN, \n>> and there is no way to lie about read performance.\n> \n> Sure there is: you have the data cached in system RAM. I find it real \n> hard to believe that you can sustain 161MB/s off a single SATA disk.\n> \n\nAgreed - approx 60-70Mb/s seems to be the ballpark for modern SATA \ndrives, so get get 161Mb/s you would need about 3 of them striped \ntogether (or a partially cached file as indicated).\n\nWhat is interesting is that (presumably) the same test is getting such \nuninspiring results on the SAN...\n\nHaving said that, I've been there too, about 4 years ago with a SAN that \nhad several 6 disk RAID5 arrays, and the best sequential *read* \nperformance we ever saw from them was about 50Mb/s. I recall trying to \nget performance data from the vendor - only to be told that if we were \ndoing benchmarks - could they have our results when we were finished!\n\nregards\n\nMark\n\n", "msg_date": "Tue, 20 Jun 2006 11:16:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Hi, Tim,\n\nTim Allen wrote:\n> One thing that has been\n> apparent is that autovacuum has not been able to keep the database\n> sufficiently tamed. 
A pg_dump/pg_restore cycle reduced the total\n> database size from 81G to 36G.\n\nTwo first shots:\n\n- Increase your free_space_map settings, until (auto)vacuum no longer\nwarns about a too-small FSM setting\n\n- Tune autovacuum to run more often, possibly with a higher delay\nsetting to lower the load.\n\nIf you still have the original database around,\n\n> Performing the restore took about 23 hours.\n\nTry to put the WAL on another spindle, and increase the WAL size /\ncheckpoint segments.\n\nIf most of the restore time was spent in index creation, increase the\nsort mem / maintenance work mem settings.\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 23 Jun 2006 13:56:16 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "Hi, Tim,\n\nSeems I sent my message too fast, cut off in the middle of a sentence:\n\nMarkus Schaber wrote:\n>> A pg_dump/pg_restore cycle reduced the total\n>> database size from 81G to 36G.\n\n> If you still have the original database around,\n\n... can you check whether VACUUM FULL and REINDEX achieve the same effect?\n\nThanks,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 23 Jun 2006 14:02:27 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" } ]
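For reference, a short SQL sketch of the maintenance commands suggested above; the table name is only a placeholder.

-- The tail of VACUUM VERBOSE output reports free-space-map usage and
-- warns when max_fsm_pages is too small for the amount of dead space.
VACUUM VERBOSE;

-- These reclaim roughly the space a dump/restore cycle would, at the
-- cost of exclusive locks on the table while they run.
VACUUM FULL VERBOSE some_big_table;
REINDEX TABLE some_big_table;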
[ { "msg_contents": "\nHello!\n\nI am trying to delete an entire table. There are about 41,000 rows in\nthe table (based on count(*)).\n\nI am using the SQL command: delete from table;\n\nThe operation seems to take in the order of hours, rather than seconds\nor minutes.\n\n\"Explain delete from table\" gives me:\n\n                          QUERY PLAN\n----------------------------------------------------------------\n Seq Scan on table  (cost=0.00..3967.74 rows=115374 width=6)\n(1 row)\n\n\nI am using an Intel Pentium D 2.8GHz CPU. My system has about 1.2GB of\nRAM. This should be ok... my database isn't that big, I think.\n\n\nAny ideas why this takes so long and how I could speed this up?\n\nOr alternatively, is there a better way to delete all the contents from\na table?\n\n\nThank you!\n\n\n", "msg_date": "Fri, 16 Jun 2006 15:58:46 +0900", "msg_from": "David Leangen <[email protected]>", "msg_from_op": true, "msg_subject": "Delete operation VERY slow..." }, { "msg_contents": "am 16.06.2006, um 15:58:46 +0900 mailte David Leangen folgendes:\n> \n> Hello!\n> \n> I am trying to delete an entire table. There are about 41,000 rows in\n> the table (based on count(*)).\n> \n> I am using the SQL command: delete from table;\n\nUse TRUNCATE table.\n\n\nAndreas\n-- \nAndreas Kretschmer    (Kontakt: siehe Header)\nHeynitz:  035242/47215,      D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n ===    Schollglas Unternehmensgruppe    === \n", "msg_date": "Fri, 16 Jun 2006 09:17:07 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete operation VERY slow..." }, { "msg_contents": "David,\n\nTruncate table would be a good idea if you want to delete all the data in the\ntable.\nYou need not perform vacuum in this case since there are no dead rows\ncreated.\n\n~gourish\n\n\nOn 6/16/06, David Leangen <[email protected]> wrote:\n>\n>\n> Hello!\n>\n> I am trying to delete an entire table. There are about 41,000 rows in\n> the table (based on count(*)).\n>\n> I am using the SQL command: delete from table;\n>\n> The operation seems to take in the order of hours, rather than seconds\n> or minutes.\n>\n> \"Explain delete from table\" gives me:\n>\n>                           QUERY PLAN\n> ----------------------------------------------------------------\n> Seq Scan on table  (cost=0.00..3967.74 rows=115374 width=6)\n> (1 row)\n>\n>\n> I am using an Intel Pentium D 2.8GHz CPU. My system has about 1.2GB of\n> RAM. This should be ok... my database isn't that big, I think.\n>\n>\n> Any ideas why this takes so long and how I could speed this up?\n>\n> Or alternatively, is there a better way to delete all the contents from\n> a table?\n>\n>\n> Thank you!\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n\n-- \nBest,\nGourish Singbal\n
", "msg_date": "Fri, 16 Jun 2006 12:52:49 +0530", "msg_from": "\"Gourish Singbal\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete operation VERY slow..." }, { "msg_contents": "\nWow! That was almost instantaneous. I can't believe the difference.\n\nThe only inconvenience is that I need to remove all the foreign key\nconstraints before truncating, then put them back after. 
But I suppose\n> it is a small price to pay for this incredible optimization.\n\n\tIn that case, your DELETE might have been slowed down by foreign key \nchecks.\n\n\tSuppose you have tables A and B, and table A has a column \"b_id \nREFERENCES B(id)\"\n\tWhen you delete from B postgres has to lookup in A which rows reference \nthe deleted rows in order to do the ON DELETE action you specified in the \nconstraint.\n\tIf you do not have an index on b_id, this can be quite slow... so you \nshould check if your foreign key relations that need indexes have them.\n", "msg_date": "Fri, 16 Jun 2006 13:23:15 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete operation VERY slow..." }, { "msg_contents": "David Leangen <[email protected]> writes:\n> The only inconvenience is that I need to remove all the foreign key\n> constraints before truncating, then put them back after.\n\nI was about to ask if you had any. Usually the reason for DELETE being\nslow is that you have foreign key references to (not from) the table and\nthe referencing columns aren't indexed. This forces a seqscan search\nof the referencing table for each row deleted :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2006 09:30:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete operation VERY slow... " } ]
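A minimal SQL sketch of the two points made in this thread; the table, column and constraint names are hypothetical, purely for illustration. The referencing column of a foreign key needs its own index if DELETEs on the referenced table are to be fast, and TRUNCATE avoids the per-row work entirely.

CREATE TABLE customers (id integer PRIMARY KEY);
CREATE TABLE orders (
    id          integer PRIMARY KEY,
    customer_id integer NOT NULL,
    CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id)
);

-- Without this index, every row deleted from customers forces a
-- sequential scan of orders to verify the foreign key constraint.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- The workaround described above for emptying both tables quickly:
-- drop the constraint, TRUNCATE, then restore the constraint.
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
TRUNCATE orders;
TRUNCATE customers;
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);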
[ { "msg_contents": "I have a producer/consumer setup where various producer processes \ninsert new records into a table and consumer processes mark those \nrecords as having been handled when they have dealt with them, but \nleave the records in the table so that we can generate reports later.\n\nThe records are added with a char(1) field specifying their state and \na timestamp, and a varchar(40) saying which class of consumer may \nhandle them.\n\nThere is a partial index on consumer and timestamp where the state \nfield says the record is new ... so there are no records in this \nindex except when a producer has just added them and no consumer has \nyet handled them.\n\nEach consumer polls the database with a query to select a batch of \nunhandled records (ordered by timestamp) ... the idea being that, \neven though the table has a huge number of historical records used \nfor reporting, the partial index based query should be tiny/quick as \nthere are usually few/no unhandled records.\n\nProblem 1 ... why does this polling query take 200-300 milliseconds \nwhen the partial index is empty, and what can be done about it?\nThis is on a fast modern machine and various other queries take under \na millisecond.\n\nI guess that the fact that records are constantly (and rapidly) added \nto and removed from the index may have caused the index to become \ninefficient somehow ...\nIf that's the case, dropping it and creating a new one might \ntemporarily fix the issue... but for how long?\nAs the actual table is huge (44 million records) and reading all the \nrecords to create a new index would take a long time (simply doing a \n'select count(*)' on the table takes some minutes) and lock the table \nwhile it's happening, I can't really experiment, though I could \nschedule/agree downtime for the system in the middle of the night at \nsome point, and try rebuilding the index then.\n\nProblem 2 ... tentative (not readily reproducible and haven't managed \nto rule out the possibility of a bug in my code yet) ... a long \nrunning consumer process (which establishes a connection to the \ndatabase using libpq, and keeps the connection open indefinitely) was \nreporting that the query in question was taking 4.5 seconds, but \nstarting up the psql command-line tool and running the same query \nreported a 200-300 millisecond duration. Could this be a problem in \nthe database server process handling the connection? or in the libpq \ncode handling it? The disparity between the times taken for queries \nas logged in psql and within the consumer application only seemed to \noccur for this polling query (which gets executed very often), not \nfor other queries the consumer program did on other tables.\nRestarting the consumer process 'cured' this ... now I'm waiting to \nsee if this behavior returns.\nAnyone seen anything like this or know what might cause it?\n\n", "msg_date": "Fri, 16 Jun 2006 08:17:14 +0100", "msg_from": "Richard Frith-Macdonald <[email protected]>", "msg_from_op": true, "msg_subject": "Why is my (empty) partial index query slow?" 
}, { "msg_contents": "Richard Frith-Macdonald <[email protected]> writes:\n> I have a producer/consumer setup where various producer processes \n> insert new records into a table and consumer processes mark those \n> records as having been handled when they have dealt with them, but \n> leave the records in the table so that we can generate reports later.\n\nHave you tried EXPLAIN ANALYZE on the problem queries?\n\nIf you want help here, you really need to show us the table and index\ndefinitions, the exact queries, and the EXPLAIN ANALYZE results. Oh,\nand mention the exact Postgres version you're using, too. Otherwise\nwe're just guessing at what's going on.\n\n> I guess that the fact that records are constantly (and rapidly) added \n> to and removed from the index may have caused the index to become \n> inefficient somehow ...\n\nHow often are you vacuuming the table? A heavily-updated table needs a\nlot of vacuuming to avoid becoming bloated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 12:04:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my (empty) partial index query slow? " }, { "msg_contents": "Richard Frith-Macdonald <[email protected]> writes:\n> What has confused me is why a query using an empty index should be \n> slow, irrespective of the state of the table that the index applies to.\n\nIs it actually empty, or have you just deleted-and-not-yet-vacuumed\nall the rows in the index?\n\nI had hoped to see comparative EXPLAIN ANALYZE output for the fast and\nslow cases. Maybe when it gets slow again you could redo the explain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 20:43:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my (empty) partial index query slow? " } ]
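To make the troubleshooting advice above concrete, here is a rough SQL sketch with hypothetical table, column and consumer names: the kind of schema being described, the polling query run under EXPLAIN ANALYZE, and the vacuuming that keeps dead entries out of the partial index.

CREATE TABLE work_queue (
    id       serial PRIMARY KEY,
    consumer varchar(40) NOT NULL,
    state    char(1)     NOT NULL,          -- 'N' = new, 'H' = handled
    added    timestamptz NOT NULL DEFAULT now()
);

-- Partial index that should contain only the (few) unhandled records.
CREATE INDEX work_queue_new_idx ON work_queue (consumer, added)
    WHERE state = 'N';

-- The consumer's polling query; EXPLAIN ANALYZE shows whether the
-- partial index is actually used and where the time goes.
EXPLAIN ANALYZE
SELECT id
  FROM work_queue
 WHERE state = 'N' AND consumer = 'mailer'
 ORDER BY added
 LIMIT 50;

-- Rows are inserted and flagged constantly, so vacuum aggressively to
-- keep the table and the partial index from bloating.
VACUUM ANALYZE work_queue;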
[ { "msg_contents": "We've seen similar results with our EMC CX200 (fully equipped) when\ncompared to a single (1) SCSI disk machine. For sequential reads/writes\n(import, export, updates on 5-10 30M+ row tables), performance is\ndownright awful. A big DB update took 5-6h in pre-prod (single SCSI),\nand 10-14?h (don't recall the exact details) in production (EMC SAN).\nAnd this was with a proprietary DB, btw - no fsync on/off affecting the\nresults here.\n\nFC isn't exactly known for great bandwidth, iirc a 2Gbit FC channel tops\nat 192Mb/s. So, especially if you mostly have DW/BI type of workloads,\ngo for DAD (Direct Attached Disks) instead.\n\n/Mikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Allen\nSent: den 15 juni 2006 23:50\nTo: [email protected]\nSubject: [PERFORM] SAN performance mystery\n\nWe have a customer who are having performance problems. They have a\nlarge (36G+) postgres 8.1.3 database installed on an 8-way opteron with\n8G RAM, attached to an EMC SAN via fibre-channel (I don't have details\nof the EMC SAN model, or the type of fibre-channel card at the moment). \nThey're running RedHat ES3 (which means a 2.4.something Linux kernel).\n\nThey are unhappy about their query performance. We've been doing various\nthings to try to work out what we can do. One thing that has been\napparent is that autovacuum has not been able to keep the database\nsufficiently tamed. A pg_dump/pg_restore cycle reduced the total\ndatabase size from 81G to 36G. Performing the restore took about 23\nhours.\n\nWe tried restoring the pg_dump output to one of our machines, a\ndual-core pentium D with a single SATA disk, no raid, I forget how much\nRAM but definitely much less than 8G. The restore took five hours. So it\nwould seem that our machine, which on paper should be far less\nimpressive than the customer's box, does more than four times the I/O\nperformance.\n\nTo simplify greatly - single local SATA disk beats EMC SAN by factor of\nfour.\n\nIs that expected performance, anyone? It doesn't sound right to me. Does\nanyone have any clues about what might be going on? Buggy kernel\ndrivers? Buggy kernel, come to think of it? Does a SAN just not provide\nadequate performance for a large database?\n\nI'd be grateful for any clues anyone can offer,\n\nTim\n\n\n\n", "msg_date": "Fri, 16 Jun 2006 13:45:09 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN performance mystery" }, { "msg_contents": "On 6/16/06, Mikael Carneholm <[email protected]> wrote:\n> We've seen similar results with our EMC CX200 (fully equipped) when\n> compared to a single (1) SCSI disk machine. For sequential reads/writes\n> (import, export, updates on 5-10 30M+ row tables), performance is\n> downright awful. A big DB update took 5-6h in pre-prod (single SCSI),\n> and 10-14?h (don't recall the exact details) in production (EMC SAN).\n> And this was with a proprietary DB, btw - no fsync on/off affecting the\n> results here.\n\nYou are in good company. We bought a Hitachi AMS200, 2gb FC and a\ngigabyte of cache. We were shocked and dismayed to find the unit\ncould do about 50 mb/sec measured from dd (yes, around the performance\nof a single consumer grade sata drive). It is my (unconfirmted)\nbelief that the unit was governed internally to encourage you to buy\nthe more expensive version, AMS500, etc.\n\nneedless to say, we sent the unit back, and are now waiting on a\nxyratex 4gb FC attached SAS unit. 
we spoke directly to their\nperformance people who told us to expect the unit to be network\nbandwidth bottlenecked as you would expect. they were even talking\nabout a special mode where you could bond the dual fc ports, now\nthat's power. If the unit really does what they claim, I will be back\nhere talking about it for sure ;)\n\nThe bottom line is that most SANs, even from some of the biggest\nvendors, are simply worthless from a performance angle. You have to\nbe really critical when you buy them, don't believe anything the sales\nrep tells you, and make sure to negotiate in advance a return policy\nif the unit does not perform. There are tons of b.s. out there, but so\nfar my impression of xyratex is really favorable (fingers crossed),\nand I'm hearing lots of great stuff about them from the channel.\n\nmerlin\n", "msg_date": "Fri, 16 Jun 2006 10:19:05 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN performance mystery" } ]
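As a crude cross-check of sequential throughput from inside the database (rather than dd at the OS level), something along these lines can be timed from psql; the table name and row count are arbitrary, and this is only a rough probe, not a substitute for a proper I/O benchmark such as bonnie++.

-- Write probe: builds roughly a gigabyte of heap data; time how long it takes.
CREATE TABLE io_probe AS
    SELECT g AS id, repeat('x', 1000) AS filler
      FROM generate_series(1, 1000000) AS g;

SELECT pg_size_pretty(pg_relation_size('io_probe'));

-- Read probe: a full sequential scan, ideally after the OS cache has
-- been cleared so the data really comes off the disks.
SELECT count(*) FROM io_probe;

DROP TABLE io_probe;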
[ { "msg_contents": "\nI am thrilled to inform you all that Sun has just donated a fully loaded \nT2000 system to the PostgreSQL community, and it's being set up by Corey \nShields at OSL (osuosl.org) and should be online probably early next \nweek. The system has\n\n* 8 cores, 4 hw threads/core @ 1.2 GHz. Solaris sees the system as \nhaving 32 virtual CPUs, and each can be enabled or disabled individually\n* 32 GB of DDR2 SDRAM memory\n* 2 @ 73GB internal SAS drives (10000 RPM)\n* 4 Gigabit ethernet ports\n\nFor the complete spec, visit \nhttp://www.sun.com/servers/coolthreads/t2000/specifications.jsp\n\nI think this system is well suited for PG scalability testing, among \nothers. We did an informal test using an internal OLTP benchmark and \nnoticed that PG can scale to around 8 CPUs. Would be really cool if all \n32 virtual CPUs can be utilized!!!\n\nAnyways, if you need to access the system for testing purposes, please \ncontact Josh Berkus.\n\nRegards,\n\nRobert Lor\nSun Microsystems, Inc.\n01-510-574-7189\n\n\n\n", "msg_date": "Fri, 16 Jun 2006 08:18:58 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "Folks,\n\n> I am thrilled to inform you all that Sun has just donated a fully loaded\n> T2000 system to the PostgreSQL community, and it's being set up by Corey\n> Shields at OSL (osuosl.org) and should be online probably early next\n> week. The system has\n\nSo this system will be hosted by Open Source Lab in Oregon. It's going to \nbe \"donated\" to Software In the Public Interest, who will own it for the \nPostgreSQL fund.\n\nWe'll want to figure out a scheduling system to schedule performance and \ncompatibility testing on this machine; I'm not sure exactly how that will \nwork. Suggestions welcome. As a warning, Gavin Sherry and I have a bunch \nof pending tests already to run.\n\nFirst thing as soon as I have a login, of course, is to set up a Buildfarm \ninstance.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 16 Jun 2006 10:01:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "On 16-6-2006 17:18, Robert Lor wrote:\n> \n> I think this system is well suited for PG scalability testing, among \n> others. We did an informal test using an internal OLTP benchmark and \n> noticed that PG can scale to around 8 CPUs. Would be really cool if all \n> 32 virtual CPUs can be utilized!!!\n\nI can already confirm very good scalability (with our workload) on \npostgresql on that machine. We've been testing a 32thread/16G-version \nand it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores \n(with all four threads enabled).\n\nThe threads are a bit less scalable, but still pretty good. Compared to \none thread per core, enabling 2 or 4 threads per core yields respectively \n60% and 130% extra performance.\n\nBest regards,\n\nArjen\n", "msg_date": "Sat, 17 Jun 2006 00:34:20 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "Arjen,\n\n> I can already confirm very good scalability (with our workload) on\n> postgresql on that machine. We've been testing a 32thread/16G-version\n> and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores\n> (with all four threads enabled).\n\nKeen. 
We're trying to keep the linear scaling going up to 32 cores of \ncourse (which doesn't happen, presently). Would you be interested in \nhelping us troubleshoot some of the performance issues?\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 16 Jun 2006 16:24:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL\n community" }, { "msg_contents": "> I am thrill to inform you all that Sun has just donated a fully loaded \n> T2000 system to the PostgreSQL community, and it's being setup by Corey \n> Shields at OSL (osuosl.org) and should be online probably early next \n> week. The system has\n> \n> * 8 cores, 4 hw threads/core @ 1.2 GHz. Solaris sees the system as \n> having 32 virtual CPUs, and each can be enabled or disabled individually\n> * 32 GB of DDR2 SDRAM memory\n> * 2 @ 73GB internal SAS drives (10000 RPM)\n> * 4 Gigabit ethernet ports\n> \n> For complete spec, visit \n> http://www.sun.com/servers/coolthreads/t2000/specifications.jsp\n> \n> I think this system is well suited for PG scalability testing, among \n> others. We did an informal test using an internal OLTP benchmark and \n> noticed that PG can scale to around 8 CPUs. Would be really cool if all \n> 32 virtual CPUs can be utilized!!!\n\nInteresting. We (some Japanese companies including SRA OSS,\nInc. Japan) did some PG scalability testing using a Unisys's big 16\n(physical) CPU machine and found PG scales up to 8 CPUs. However\nbeyond 8 CPU PG does not scale anymore. The result can be viewed at\n\"OSS iPedia\" web site (http://ossipedia.ipa.go.jp). Our conclusion was\nPG has a serious lock contention problem in the environment by\nanalyzing the oprofile result.\n\nYou can take a look at the detailed report at:\nhttp://ossipedia.ipa.go.jp/capacity/EV0604210111/\n(unfortunately only Japanese contents is available at the\nmoment. Please use some automatic translation services)\n\nEvalution environment was:\nPostgreSQL 8.1.2\nOSDL DBT-1 2.1\nMiracle Linux 4.0\nUnisys ES700 Xeon 2.8GHz CPU x 16 Mem 16GB(HT off)\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\n", "msg_date": "Sat, 17 Jun 2006 10:15:21 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > Interesting. We (some Japanese companies including SRA OSS,\n> > Inc. Japan) did some PG scalability testing using a Unisys's big 16\n> > (physical) CPU machine and found PG scales up to 8 CPUs. However\n> > beyond 8 CPU PG does not scale anymore. The result can be viewed at\n> > \"OSS iPedia\" web site (http://ossipedia.ipa.go.jp). Our conclusion was\n> > PG has a serious lock contention problem in the environment by\n> > analyzing the oprofile result.\n> \n> 18% in s_lock is definitely bad :-(. Were you able to determine which\n> LWLock(s) are accounting for the contention?\n\nYes. We were interested in that too. Some people did addtional tests\nto determin that. I don't have the report handy now. I will report\nback next week.\n\n> The test case seems to be spending a remarkable amount of time in LIKE\n> comparisons, too. That probably is not a representative condition.\n\nI know. I think point is 18% in s_lock only appears with 12 CPUs or more.\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\n", "msg_date": "Sat, 17 Jun 2006 11:18:38 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Interesting. We (some Japanese companies including SRA OSS,\n> Inc. Japan) did some PG scalability testing using a Unisys's big 16\n> (physical) CPU machine and found PG scales up to 8 CPUs. However\n> beyond 8 CPU PG does not scale anymore. The result can be viewed at\n> \"OSS iPedia\" web site (http://ossipedia.ipa.go.jp). Our conclusion was\n> PG has a serious lock contention problem in the environment by\n> analyzing the oprofile result.\n\n18% in s_lock is definitely bad :-(. Were you able to determine which\nLWLock(s) are accounting for the contention?\n\nThe test case seems to be spending a remarkable amount of time in LIKE\ncomparisons, too. That probably is not a representative condition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jun 2006 22:34:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Arjen van der Meijden wrote:\n\n>\n> I can already confirm very good scalability (with our workload) on \n> postgresql on that machine. We've been testing a 32thread/16G-version \n> and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores \n> (with all four threads enabled).\n>\n> The threads are a bit less scalable, but still pretty good. Enabling \n> 1, 2 or 4 threads for each core yields resp 60 and 130% extra \n> performance.\n\nWow, what type of workload is it? And did you do much tuning to get \nnear-linear scalability to 32 threads?\n\nRegards,\n-Robert\n", "msg_date": "Fri, 16 Jun 2006 21:17:04 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "Tom,\n\n> 18% in s_lock is definitely bad :-(.  Were you able to determine which\n> LWLock(s) are accounting for the contention?\n\nGavin Sherry and Tom Daly (Sun) are currently working on identifying the \nproblem lock using DLWLOCK_STATS. Any luck, Gavin?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Sat, 17 Jun 2006 12:19:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:\n\n> Folks,\n>\n>> I am thrill to inform you all that Sun has just donated a fully \n>> loaded\n>> T2000 system to the PostgreSQL community, and it's being setup by \n>> Corey\n>> Shields at OSL (osuosl.org) and should be online probably early next\n>> week. The system has\n>\n> So this system will be hosted by Open Source Lab in Oregon. It's \n> going to\n> be \"donated\" to Software In the Public Interest, who will own for the\n> PostgreSQL fund.\n>\n> We'll want to figure out a scheduling system to schedule \n> performance and\n> compatibility testing on this machine; I'm not sure exactly how \n> that will\n> work. Suggestions welcome. 
As a warning, Gavin Sherry and I have \n> a bunch\n> of pending tests already to run.\n>\n> First thing as soon as I have a login, of course, is to set up a \n> Buildfarm\n> instance.\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Sat, 17 Jun 2006 14:53:04 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:\n> First thing as soon as I have a login, of course, is to set up a \n> Buildfarm\n> instance.\n\nKeep in mind that buildfarm clients and benchmarking stuff don't \nusually mix well.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Sat, 17 Jun 2006 14:54:50 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL community" }, { "msg_contents": "\n\nJim Nasby wrote:\n\n> On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:\n>\n>> First thing as soon as I have a login, of course, is to set up a \n>> Buildfarm\n>> instance.\n>\n>\n> Keep in mind that buildfarm clients and benchmarking stuff don't \n> usually mix well.\n>\n\nOn a fast machine like this a buildfarm run is not going to take very \nlong. You could run those once a day at times of low demand. Or even \nonce or twice a week.\n\ncheers\n\nandrew\n", "msg_date": "Sat, 17 Jun 2006 17:46:40 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "On 17-6-2006 1:24, Josh Berkus wrote:\n> Arjen,\n> \n>> I can already confirm very good scalability (with our workload) on\n>> postgresql on that machine. We've been testing a 32thread/16G-version\n>> and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores\n>> (with all four threads enabled).\n> \n> Keen. We're trying to keep the linear scaling going up to 32 cores of \n> course (which doesn't happen, presently). Would you be interested in \n> helping us troubleshoot some of the performance issues?\n\nYou can ask your questions, if I happen to do know the answer, you're a \nstep further in the right direction.\n\nBut actually, I didn't do much to get this scalability... So I won't be \nof much help to you, its not that I spent hours on getting this performance.\nI just started out with the \"normal\" attempts to get a good config. \nCurrently the shared buffers is set to 30k. Larger settings didn't seem \nto differ much on our previous 4-core version, so I didn't even check it \nout on this one. I noticed I forgot to set the effective cache size to \nmore than 6G for this one too, but since our database is smaller than \nthat, that shouldn't make any difference. The work memory was increased \na bit to 2K. 
So there are no magic tricks here.\n\nI do have to add its a recent checkout of 8.2devel compiled using Sun \nStudio 11. It was compiled using this as CPPFLAGS: -xtarget=ultraT1 \n-fast -xnolibmopt\n\nThe -xnolibmopt was added because we couldn't figure out why it yielded \nseveral linking errors at the end of the compilation when the -xlibmopt \nfrom -fast was enabled, so we disabled that particular setting from the \n-fast macro.\n\n\nThe workload generated is an abstraction and simplification of our \nwebsite's workload, used for benchmarking. Its basically a news and \nprice comparision site and it runs on LAMP (with the M of MySQL), i.e. a \nlot of light queries, many primary-key or indexed \"foreign-key\" lookups \nfor little amounts of records. Some aggregations for summaries, etc. \nThere are little writes and hardly any on the most read tables.\nThe database easily fits in memory, the total size of the actively read \ntables is about 3G.\nThis PostgreSQL-version is not a direct copy of the queries and tables, \nbut I made an effort of getting it more PostgreSQL-minded as much as \npossible. I.e. I combined a few queries, I changed \"boolean\"-enum's in \nMySQL to real booleans in Postgres, I added specific indexes (including \npartials) etc.\n\nWe use apache+php as clients and just open X apache processes using 'ab' \nat the same time to generate various amounts of concurrent workloads. \nSolaris scales really well to higher concurrencies and PostgreSQL \ndoesn't seem to have problems with it either in our workload.\n\nSo its not really a real-life scenario, but its not a synthetic \nbenchmark either.\n\nHere is a graph of our performance measured on PostgreSQL:\nhttp://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png\n\nWhat you see are three lines. Each represents the amount of total \"page \nviews\" processed in 600 seconds for a specific amount of Niagara-cores \n(i.e. 1, 2, 4, 6 and 8). Each core had all its threads enabled, so its \nactually 4, 8, 16, 24 and 32 virtual cpu's you're looking at.\nThe \"Max\"-line displays the maximum generated \"page views\" on a specific \ncore-amount for any concurrency, respectively: 5, 13, 35, 45 and 60.\nThe \"Bij 50\" is the amount of \"page views\" it generated with 50 \napache-processes working at the same time (on two dual xeon machines, so \n25 each). I took 50 a bit arbitrary but all core-configs seemed to do \npretty well under that workload.\n\nThe \"perfect\" line is based on the \"Max\" value for 1 core and then just \nmultiplied by the amount of cores to have a linear reference. 
The \"Bij \n50\" and the \"perfect\" line don't differ too much in color, but the \ntop-one is the \"perfect\" line.\n\nIn the near future we'll be presenting an article on this on our \nwebsite, although that will be in dutch the graphs should still be easy \nto read for you guys.\nAnd because of that I can't promise too much detailed information until \nthen.\n\nI hope I clarified things a bit now, if not ask me about it,\nBest regards,\n\nArjen\n", "msg_date": "Sun, 18 Jun 2006 11:17:54 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Arjen van der Meijden wrote:\n>\n> Here is a graph of our performance measured on PostgreSQL:\n> http://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png \n>\n>\n...\n>\n> The \"perfect\" line is based on the \"Max\" value for 1 core and then \n> just multiplied by the amount of cores to have a linear reference. The \n> \"Bij 50\" and the \"perfect\" line don't differ too much in color, but \n> the top-one is the \"perfect\" line.\n\nSureky the 'perfect' line ought to be linear? If the performance was \nperfectly linear, then the 'pages generated' ought to be G times the \nnumber (virtual) processors, where G is the gradient of the graph. In \nsuch a case the graph will go through the origin (o,o), but you graph \ndoes not show this. \n\nI'm a bit confused, what is the 'perfect' supposed to be?\n\nThanks\n\nDavid\n\n\n\n\n\n\n Arjen van der Meijden\nwrote:\n\nHere is a graph of our performance measured on PostgreSQL: \nhttp://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png\n\n\n\n...\n\nThe \"perfect\" line is based on the \"Max\" value for 1 core and then just\nmultiplied by the amount of cores to have a linear reference. The \"Bij\n50\" and the \"perfect\" line don't differ too much in color, but the\ntop-one is the \"perfect\" line. \n\n\nSureky the 'perfect' line ought to be linear?  If the performance was\nperfectly linear, then the 'pages generated' ought to be G times the\nnumber (virtual) processors, where G is the gradient of the graph.  In\nsuch a case the graph will go through the origin (o,o), but you graph\ndoes not show this.  \n\nI'm a bit confused, what is the 'perfect' supposed to be?\n\nThanks\n\nDavid", "msg_date": "Thu, 22 Jun 2006 14:03:47 +0100", "msg_from": "David Roussel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Arjen van der Meijden wrote:\n> First of all, this graph has no origin. Its a bit difficult to test with \n> less than one cpu.\n\nSure it does. I ran all the tests. They all took infinite time, and I got zero results. And my results are 100% accurate and reliable. It's perfectly valid data. :-)\n\nCraig\n", "msg_date": "Thu, 22 Jun 2006 07:03:25 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "On 22-6-2006 15:03, David Roussel wrote:\n> Sureky the 'perfect' line ought to be linear? If the performance was \n> perfectly linear, then the 'pages generated' ought to be G times the \n> number (virtual) processors, where G is the gradient of the graph. In \n> such a case the graph will go through the origin (o,o), but you graph \n> does not show this. 
\n> \n> I'm a bit confused, what is the 'perfect' supposed to be?\n\nFirst of all, this graph has no origin. Its a bit difficult to test with \nless than one cpu.\n\nAnyway, the line actually is linear and would've gone through the \norigin, if there was one. What I did was take the level of the \n'max'-line at 1 and then multiply it by 2, 4, 6 and 8. So if at 1 the \nlevel would've been 22000, the 2 would be 44000 and the 8 176000.\n\nPlease do notice the distance between 1 and 2 on the x-axis is the same \nas between 2 and 4, which makes the graph a bit harder to read.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 22 Jun 2006 16:19:21 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "> > Tatsuo Ishii <[email protected]> writes:\n> > > Interesting. We (some Japanese companies including SRA OSS,\n> > > Inc. Japan) did some PG scalability testing using a Unisys's big 16\n> > > (physical) CPU machine and found PG scales up to 8 CPUs. However\n> > > beyond 8 CPU PG does not scale anymore. The result can be viewed at\n> > > \"OSS iPedia\" web site (http://ossipedia.ipa.go.jp). Our conclusion was\n> > > PG has a serious lock contention problem in the environment by\n> > > analyzing the oprofile result.\n> > \n> > 18% in s_lock is definitely bad :-(. Were you able to determine which\n> > LWLock(s) are accounting for the contention?\n> \n> Yes. We were interested in that too. Some people did addtional tests\n> to determin that. I don't have the report handy now. I will report\n> back next week.\n\nSorry for the delay. Finally I got the oprofile data. It's\nhuge(34MB). If you are interested, I can put somewhere. Please let me\nknow.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\n", "msg_date": "Thu, 13 Jul 2006 18:02:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>> 18% in s_lock is definitely bad :-(. Were you able to determine which\n>>> LWLock(s) are accounting for the contention?\n>> \n>> Yes. We were interested in that too. Some people did addtional tests\n>> to determin that. I don't have the report handy now. I will report\n>> back next week.\n\n> Sorry for the delay. Finally I got the oprofile data. It's\n> huge(34MB). If you are interested, I can put somewhere. Please let me\n> know.\n\nYes, please.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jul 2006 10:44:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>> 18% in s_lock is definitely bad :-(. Were you able to determine which\n>>> LWLock(s) are accounting for the contention?\n\n> Sorry for the delay. Finally I got the oprofile data. It's\n> huge(34MB). If you are interested, I can put somewhere. Please let me\n> know.\n\nI finally got a chance to look at this, and it seems clear that all the\ntraffic is on the BufMappingLock. This is essentially the same problem\nwe were discussing with respect to Gavin Hamill's report of poor\nperformance on an 8-way IBM PPC64 box (see hackers archives around\n2006-04-21). 
If your database is fully cached in shared buffers, then\nyou can do a whole lot of buffer accesses per unit time, and even though\nall the BufMappingLock acquisitions are in shared-LWLock mode, the\nLWLock's spinlock ends up being heavily contended on an SMP box.\n\nIt's likely that CVS HEAD would show somewhat better performance because\nof the btree change to cache local copies of index metapages (which\neliminates a fair fraction of buffer accesses, at least in Gavin's test\ncase). Getting much further than that seems to require partitioning\nthe buffer mapping table. The last discussion stalled on my concerns\nabout unpredictable shared memory usage, but I have some ideas on that\nwhich I'll post separately. In the meantime, thanks for sending along\nthe oprofile data!\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jul 2006 15:01:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tom Lane wrote:\n\n>Tatsuo Ishii <[email protected]> writes:\n> \n>\n>>>>18% in s_lock is definitely bad :-(. Were you able to determine which\n>>>>LWLock(s) are accounting for the contention?\n>>>> \n>>>>\n>\n> \n>\n>>Sorry for the delay. Finally I got the oprofile data. It's\n>>huge(34MB). If you are interested, I can put somewhere. Please let me\n>>know.\n>> \n>>\n>\n>I finally got a chance to look at this, and it seems clear that all the\n>traffic is on the BufMappingLock. This is essentially the same problem\n>we were discussing with respect to Gavin Hamill's report of poor\n>performance on an 8-way IBM PPC64 box (see hackers archives around\n>2006-04-21). If your database is fully cached in shared buffers, then\n>you can do a whole lot of buffer accesses per unit time, and even though\n>all the BufMappingLock acquisitions are in shared-LWLock mode, the\n>LWLock's spinlock ends up being heavily contended on an SMP box.\n>\n>It's likely that CVS HEAD would show somewhat better performance because\n>of the btree change to cache local copies of index metapages (which\n>eliminates a fair fraction of buffer accesses, at least in Gavin's test\n>case). Getting much further than that seems to require partitioning\n>the buffer mapping table. The last discussion stalled on my concerns\n>about unpredictable shared memory usage, but I have some ideas on that\n>which I'll post separately. 
In the meantime, thanks for sending along\n>the oprofile data!\n>\n>\t\t\tregards, tom lane\n> \n>\nI ran pgbench and fired up a DTrace script using the lwlock probes we've \nadded, and it looks like BufMappingLock is the most contended lock, but \nCheckpointStartLocks are held for longer duration!\n\n Lock Id Mode Count\n ControlFileLock Exclusive 1\n SubtransControlLock Exclusive 1\n BgWriterCommLock Exclusive 6\n FreeSpaceLock Exclusive 6\n FirstLockMgrLock Exclusive 48\n BufFreelistLock Exclusive 74\n BufMappingLock Exclusive 74\n CLogControlLock Exclusive 184\n XidGenLock Exclusive 184\n CheckpointStartLock Shared 185\n WALWriteLock Exclusive 185\n ProcArrayLock Exclusive 368\n CLogControlLock Shared 552\n SubtransControlLock Shared 1273\n WALInsertLock Exclusive 1476\n XidGenLock Shared 1842\n ProcArrayLock Shared 3160\n SInvalLock Shared 3684\n BufMappingLock Shared 14578\n\n Lock Id Combined Time (ns)\n ControlFileLock 7915\n BgWriterCommLock 43438\n FreeSpaceLock 111139\n BufFreelistLock 448530\n FirstLockMgrLock 2879957\n CLogControlLock 4237750\n SubtransControlLock 6378042\n XidGenLock 9500422\n WALInsertLock 16372040\n SInvalLock 23284554\n ProcArrayLock 32188638\n BufMappingLock 113128512\n WALWriteLock 142391501\n CheckpointStartLock 4171106665\n\n\nRegards,\n-Robert\n\n \n", "msg_date": "Fri, 21 Jul 2006 00:56:56 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Robert Lor <[email protected]> writes:\n> I ran pgbench and fired up a DTrace script using the lwlock probes we've \n> added, and it looks like BufMappingLock is the most contended lock, but \n> CheckpointStartLocks are held for longer duration!\n\nThose numbers look a bit suspicious --- I'd expect to see some of the\nLWLocks being taken in both shared and exclusive modes, but you don't\nshow any such cases. You sure your script is counting correctly?\nAlso, it'd be interesting to count time spent holding shared lock\nseparately from time spent holding exclusive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jul 2006 09:42:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "On Fri, Jul 21, 2006 at 12:56:56AM -0700, Robert Lor wrote:\n> I ran pgbench and fired up a DTrace script using the lwlock probes we've \n> added, and it looks like BufMappingLock is the most contended lock, but \n> CheckpointStartLocks are held for longer duration!\n \nNot terribly surprising given that that lock can generate a substantial\namount of IO (though looking at the numbers, you might want to make\nbgwriter more aggressive). 
Also, that's a shared lock, so it won't have\nnearly the impact that BufMappingLock does.\n\n> Lock Id Mode Count\n> ControlFileLock Exclusive 1\n> SubtransControlLock Exclusive 1\n> BgWriterCommLock Exclusive 6\n> FreeSpaceLock Exclusive 6\n> FirstLockMgrLock Exclusive 48\n> BufFreelistLock Exclusive 74\n> BufMappingLock Exclusive 74\n> CLogControlLock Exclusive 184\n> XidGenLock Exclusive 184\n> CheckpointStartLock Shared 185\n> WALWriteLock Exclusive 185\n> ProcArrayLock Exclusive 368\n> CLogControlLock Shared 552\n> SubtransControlLock Shared 1273\n> WALInsertLock Exclusive 1476\n> XidGenLock Shared 1842\n> ProcArrayLock Shared 3160\n> SInvalLock Shared 3684\n> BufMappingLock Shared 14578\n> \n> Lock Id Combined Time (ns)\n> ControlFileLock 7915\n> BgWriterCommLock 43438\n> FreeSpaceLock 111139\n> BufFreelistLock 448530\n> FirstLockMgrLock 2879957\n> CLogControlLock 4237750\n> SubtransControlLock 6378042\n> XidGenLock 9500422\n> WALInsertLock 16372040\n> SInvalLock 23284554\n> ProcArrayLock 32188638\n> BufMappingLock 113128512\n> WALWriteLock 142391501\n> CheckpointStartLock 4171106665\n> \n> \n> Regards,\n> -Robert\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 21 Jul 2006 08:56:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Hi,\n\nTom Lane schrieb:\n> Robert Lor <[email protected]> writes:\n>> I ran pgbench and fired up a DTrace script using the lwlock probes we've \n>> added, and it looks like BufMappingLock is the most contended lock, but \n>> CheckpointStartLocks are held for longer duration!\n> \n> Those numbers look a bit suspicious --- I'd expect to see some of the\n> LWLocks being taken in both shared and exclusive modes, but you don't\n> show any such cases. You sure your script is counting correctly?\n> Also, it'd be interesting to count time spent holding shared lock\n> separately from time spent holding exclusive.\n\nIs there a test case which shows the contention for this full cached\ntables? It would be nice to have measurable numbers like context\nswitches and queries per second.\n\nSven.\n", "msg_date": "Fri, 21 Jul 2006 16:59:49 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Tom Lane wrote:\n\n>Those numbers look a bit suspicious --- I'd expect to see some of the\n>LWLocks being taken in both shared and exclusive modes, but you don't\n>show any such cases. 
You sure your script is counting correctly?\n> \n>\nI'll double check to make sure no stupid mistakes were made!\n\n>Also, it'd be interesting to count time spent holding shared lock\n>separately from time spent holding exclusive.\n> \n>\nWill provide that data later today.\n\nRegards,\n-Robert\n\n", "msg_date": "Fri, 21 Jul 2006 08:11:58 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Tom Lane wrote:\n\n>Also, it'd be interesting to count time spent holding shared lock\n>separately from time spent holding exclusive.\n>\n> \n>\nTom,\n\nHere is the break down between exclusive & shared LWLocks. Do the \nnumbers look reasonable to you?\n\nRegards,\n-Robert\n\nbash-3.00# time ./Tom_lwlock_acquire.d `pgrep -n postgres`\n********** LWLock Count: Exclusive **********\n Lock Id Mode Count\n ControlFileLock Exclusive 1\n FreeSpaceLock Exclusive 9\n XidGenLock Exclusive 202\n CLogControlLock Exclusive 203\n WALWriteLock Exclusive 203\n BgWriterCommLock Exclusive 222\n BufFreelistLock Exclusive 305\n BufMappingLock Exclusive 305\n ProcArrayLock Exclusive 405\n FirstLockMgrLock Exclusive 670\n WALInsertLock Exclusive 1616\n\n********** LWLock Count: Shared **********\n Lock Id Mode Count\n CheckpointStartLock Shared 202\n CLogControlLock Shared 450\n SubtransControlLock Shared 776\n XidGenLock Shared 2020\n ProcArrayLock Shared 3778\n SInvalLock Shared 4040\n BufMappingLock Shared 40838\n\n********** LWLock Time: Exclusive **********\n Lock Id Combined Time (ns)\n ControlFileLock 8301\n FreeSpaceLock 80590\n CLogControlLock 1603557\n BgWriterCommLock 1607122\n BufFreelistLock 1997406\n XidGenLock 2312442\n BufMappingLock 3161683\n FirstLockMgrLock 5392575\n ProcArrayLock 6034396\n WALInsertLock 12277693\n WALWriteLock 324869744\n\n********** LWLock Time: Shared **********\n Lock Id Combined Time (ns)\n CLogControlLock 3183788\n SubtransControlLock 6956229\n XidGenLock 12012576\n SInvalLock 35567976\n ProcArrayLock 45400779\n BufMappingLock 300669441\n CheckpointStartLock 4056134243\n\n\nreal 0m24.718s\nuser 0m0.382s\nsys 0m0.181s\n\n\n", "msg_date": "Fri, 21 Jul 2006 20:05:28 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Robert Lor <[email protected]> writes:\n> Here is the break down between exclusive & shared LWLocks. Do the \n> numbers look reasonable to you?\n\nYeah, those seem plausible, although the hold time for\nCheckpointStartLock seems awfully high --- about 20 msec\nper transaction. Are you using a nonzero commit_delay?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jul 2006 14:10:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>> Interesting. We (some Japanese companies including SRA OSS,\n>>> Inc. Japan) did some PG scalability testing using a Unisys's big 16\n>>> (physical) CPU machine and found PG scales up to 8 CPUs. However\n>>> beyond 8 CPU PG does not scale anymore. The result can be viewed at\n>>> \"OSS iPedia\" web site (http://ossipedia.ipa.go.jp). Our conclusion was\n>>> PG has a serious lock contention problem in the environment by\n>>> analyzing the oprofile result.\n\nCan you retry this test case using CVS tip? 
I'm curious to see if\nhaving partitioned the BufMappingLock helps ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jul 2006 19:52:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tom Lane wrote:\n\n>Yeah, those seem plausible, although the hold time for\n>CheckpointStartLock seems awfully high --- about 20 msec\n>per transaction. Are you using a nonzero commit_delay?\n>\n>\n> \n>\nI didn't change commit_delay which defaults to zero.\n\n\nRegards,\n-Robert\n\n", "msg_date": "Sun, 23 Jul 2006 17:52:12 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" }, { "msg_contents": "Robert Lor <[email protected]> writes:\n> Tom Lane wrote:\n>> Yeah, those seem plausible, although the hold time for\n>> CheckpointStartLock seems awfully high --- about 20 msec\n>> per transaction. Are you using a nonzero commit_delay?\n>> \n> I didn't change commit_delay which defaults to zero.\n\nHmmm ... AFAICS this must mean that flushing the WAL data to disk\nat transaction commit time takes (most of) 20 msec on your hardware.\nWhich still seems high --- on most modern disks that'd be at least two\ndisk revolutions, maybe more. What's the disk hardware you're testing\non, particularly its RPM spec?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jul 2006 21:29:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL " }, { "msg_contents": "Tom Lane wrote:\n\n>Hmmm ... AFAICS this must mean that flushing the WAL data to disk\n>at transaction commit time takes (most of) 20 msec on your hardware.\n>Which still seems high --- on most modern disks that'd be at least two\n>disk revolutions, maybe more. What's the disk hardware you're testing\n>on, particularly its RPM spec?\n> \n>\nI actually ran the test on my laptop. It has an Ultra ATA/100 drive \n(5400 rpm). The test was just a quickie to show some data from the \nprobes. I'll collect and share data from the T2000 server later.\n\nRegards,\n-Robert\n", "msg_date": "Sun, 23 Jul 2006 20:34:25 -0700", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Donated a Sun Fire T2000 to the PostgreSQL" } ]
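A quick cross-check of the DTrace figures above: dividing each lock's combined hold time by its acquisition count gives the average hold per acquisition. The numbers below are simply re-typed from Robert's output, so treat this as a back-of-the-envelope sketch rather than part of anyone's actual tooling:

SELECT round(4056134243::numeric / 202   / 1000000, 2) AS checkpointstart_shared_ms,  -- ~20.08 ms
       round(324869744::numeric  / 203   / 1000000, 2) AS walwrite_exclusive_ms,      -- ~1.60 ms
       round(300669441::numeric  / 40838 / 1000000, 3) AS bufmapping_shared_ms;       -- ~0.007 ms

CheckpointStartLock works out to roughly 20 ms per shared acquisition, which is the figure Tom is reacting to. At 5400 rpm one platter revolution takes about 11 ms (60 s / 5400), so a WAL flush of ~20 ms per commit is on the order of two revolutions, consistent with the explanation above.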
[ { "msg_contents": "\nFor long involved reasons I'm hanging out late at work today, and rather \nthan doing real, productive work, I thought I'd run some benchmarks \nagainst our development PostgreSQL database server. My conclusions are \nat the end.\n\nThe purpose of the benchmarking was to find out how fast Postgres was, \nor to compare Postgres to other databases, but to instead answer the \nquestion: when does it become worthwhile to switch over to using COPYs \ninstead of INSERTS, and by how much? This benchmark should in no way be \nused to gauge absolute performance of PostgreSQL.\n\nThe machine in question: a new HP-145 rack mount server, with a \nsingle-socket dual-core 1.8GHz Opteron 275, 1M of cache per core, with \n4G of memory, running Redhat Linux (forget which version). Database was \non the local single SATA hard disk- no raid. From the numbers, I'm \nassuming the disk honors fsync. Some tuning of the database was done, \nspecifically shared_buffers was upped to 2500 and temp_buffers to 1500 \n(mental note to self: must increase these signifigantly more. Forgot \nthey were so low). fsync is definately on. Test program was written in \nOcaml, compiled to native code, using the Ocaml Postgresql connection \nlibrary (Ocaml bindings of the libpgsql library). The test was single \nthreaded- only one insert going on at a time, run over the local gigabit \nethernet network from a remote machine.\n\nThe table design was very simple:\nCREATE TABLE copytest (\n id SERIAL PRIMARY KEY NOT NULL,\n name VARCHAR(64),\n thread INT,\n block INT,\n num INT);\n\nThe id column was not specified either in the inserts or in the copies, \ninstead it just came from the sequence. Other than the id, there are no \nindexes on the table. Numbers are approximate.\n\nResults:\n\nInserts, 1 per transaction* 83 inserts/second\nInserts, 5 per transaction 419 inserts/second \nInserts, 10 per transaction 843 inserts/second\nInserts, 50 per transaction ~3,100 inserts/second\nInserts, 100 per transaction ~4,200 inserts/second\nInserts, 1,000 per transaction ~5,400 inserts/second\nCopy, 5 element blocks ~405 inserts/second\nCopy, 10 element blocks ~700 inserts/second\nCopy, 50 element blocks ~3,400 inserts/second\nCopy, 100 element blocks ~6,000 inserts/second\nCopy, 1,000 element blocks ~20,000 inserts/second\nCopy, 10,000 element blocks ~27,500 inserts/second\nCopy, 100,000 element blocks ~27,600 inserts/second\n\n* The singleton inserts were not done in an explicit begin/end block, \nbut were instead \"unadorned\" inserts.\n\nSome conclusions:\n\n1) Transaction time is a huge hit on the small block sizes. Going from \n1 insert per transaction to 10 inserts per transaction gives a 10x speed \nup. Once the block size gets large enough (10's to 100's of elements \nper block) the cost of a transaction becomes less of a problem. \n\n2) Both insert methods hit fairly hard walls of diminishing returns were \nlarger block sizes gave little performance advantage, tending to no \nperformance advantage.\n\n3) For small enough block sizes, inserts are actually faster than \ncopies- but not by much. There is a broad plateau, spanning at least \nthe 5 through 100 elements per block (more than an order of magnitude), \nwhere the performance of the two are roughly identical. For the general \ncase, I'd be inclined to switch to copies sooner (at 5 or so elements \nper block) rather than later.\n\n4) At the high end, copies vastly outperformed inserts. 
At 1,000 \nelements per block, the copy was almost 4x faster than inserts. This \nwidened to ~5x before copy started topping out.\n\n5) The performance of Postgres, at least on inserts, depends critically \non how you program it. One the same hardware, performance for me varied \nover a factor of over 300-fold, 2.5 orders of magnitude. Programs which \nare unaware of transactions and are designed to be highly portable are \nlikely to hit the abysmal side of performance, where the transaction \noverhead kills performance. I'm not sure there is a fix for this (let \nalone an easy fix)- simply dropping transactions is obviously not it. \nPrograms that are transaction aware and willing to use \nPostgreSQL-specific features can get surprisingly excellent \nperformance. Simply being transaction-aware and doing multiple inserts \nper transaction greatly increases performance, giving an easy order of \nmagnitude increase (wrapping 10 inserts in a transaction gives a 10x \nperformance boost).\n\nBrian\n\n\n", "msg_date": "Mon, 19 Jun 2006 20:09:42 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": true, "msg_subject": "Some performance numbers, with thoughts" }, { "msg_contents": "Brian Hurt <[email protected]> writes:\n> For long involved reasons I'm hanging out late at work today, and rather \n> than doing real, productive work, I thought I'd run some benchmarks \n> against our development PostgreSQL database server. My conclusions are \n> at the end.\n\nUmmm ... you forgot to mention Postgres version? Also, which client and\nserver encodings did you use (that starts to get to be a noticeable\nissue for high COPY rates)?\n\n> 1) Transaction time is a huge hit on the small block sizes.\n\nRight. For small transactions with a drive honoring fsync, you should\nexpect to get a max of about one commit per platter revolution. Your\nnumbers work out to a shade under 5000 commits/minute, from which I\nspeculate a 7200 RPM drive ... do you know what it really is?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jun 2006 21:17:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts " }, { "msg_contents": "Brian,\n\nAny idea what your bottleneck is? You can find out at a crude level by\nattaching an strace to the running backend, assuming it¹s running long\nenough to grab it, then look at what the system call breakdown is.\nBasically, run one of your long insert streams, do a ³top² to find which\nprocess id the backend is using (the <pid>), then run this:\n\n strace -p <pid> -c\n\nAnd CTRL-C after a few seconds to see a breakdown of system calls.\n\nI think what you'll see is that for the small number of inserts per TXN,\nyou'll be bottlenecked on fsync() calls, or fdatasync() if you defaulted it.\nThings might speed up a whole lot there depending on your choice of one or\nthe other. \n\n- Luke \n\n\nOn 6/19/06 5:09 PM, \"Brian Hurt\" <[email protected]> wrote:\n\n> \n> \n> For long involved reasons I'm hanging out late at work today, and rather\n> than doing real, productive work, I thought I'd run some benchmarks\n> against our development PostgreSQL database server. My conclusions are\n> at the end.\n> \n> The purpose of the benchmarking was to find out how fast Postgres was,\n> or to compare Postgres to other databases, but to instead answer the\n> question: when does it become worthwhile to switch over to using COPYs\n> instead of INSERTS, and by how much? 
This benchmark should in no way be\n> used to gauge absolute performance of PostgreSQL.\n> \n> The machine in question: a new HP-145 rack mount server, with a\n> single-socket dual-core 1.8GHz Opteron 275, 1M of cache per core, with\n> 4G of memory, running Redhat Linux (forget which version). Database was\n> on the local single SATA hard disk- no raid. From the numbers, I'm\n> assuming the disk honors fsync. Some tuning of the database was done,\n> specifically shared_buffers was upped to 2500 and temp_buffers to 1500\n> (mental note to self: must increase these signifigantly more. Forgot\n> they were so low). fsync is definately on. Test program was written in\n> Ocaml, compiled to native code, using the Ocaml Postgresql connection\n> library (Ocaml bindings of the libpgsql library). The test was single\n> threaded- only one insert going on at a time, run over the local gigabit\n> ethernet network from a remote machine.\n> \n> The table design was very simple:\n> CREATE TABLE copytest (\n> id SERIAL PRIMARY KEY NOT NULL,\n> name VARCHAR(64),\n> thread INT,\n> block INT,\n> num INT);\n> \n> The id column was not specified either in the inserts or in the copies,\n> instead it just came from the sequence. Other than the id, there are no\n> indexes on the table. Numbers are approximate.\n> \n> Results:\n> \n> Inserts, 1 per transaction* 83 inserts/second\n> Inserts, 5 per transaction 419 inserts/second\n> Inserts, 10 per transaction 843 inserts/second\n> Inserts, 50 per transaction ~3,100 inserts/second\n> Inserts, 100 per transaction ~4,200 inserts/second\n> Inserts, 1,000 per transaction ~5,400 inserts/second\n> Copy, 5 element blocks ~405 inserts/second\n> Copy, 10 element blocks ~700 inserts/second\n> Copy, 50 element blocks ~3,400 inserts/second\n> Copy, 100 element blocks ~6,000 inserts/second\n> Copy, 1,000 element blocks ~20,000 inserts/second\n> Copy, 10,000 element blocks ~27,500 inserts/second\n> Copy, 100,000 element blocks ~27,600 inserts/second\n> \n> * The singleton inserts were not done in an explicit begin/end block,\n> but were instead \"unadorned\" inserts.\n> \n> Some conclusions:\n> \n> 1) Transaction time is a huge hit on the small block sizes. Going from\n> 1 insert per transaction to 10 inserts per transaction gives a 10x speed\n> up. Once the block size gets large enough (10's to 100's of elements\n> per block) the cost of a transaction becomes less of a problem.\n> \n> 2) Both insert methods hit fairly hard walls of diminishing returns were\n> larger block sizes gave little performance advantage, tending to no\n> performance advantage.\n> \n> 3) For small enough block sizes, inserts are actually faster than\n> copies- but not by much. There is a broad plateau, spanning at least\n> the 5 through 100 elements per block (more than an order of magnitude),\n> where the performance of the two are roughly identical. For the general\n> case, I'd be inclined to switch to copies sooner (at 5 or so elements\n> per block) rather than later.\n> \n> 4) At the high end, copies vastly outperformed inserts. At 1,000\n> elements per block, the copy was almost 4x faster than inserts. This\n> widened to ~5x before copy started topping out.\n> \n> 5) The performance of Postgres, at least on inserts, depends critically\n> on how you program it. One the same hardware, performance for me varied\n> over a factor of over 300-fold, 2.5 orders of magnitude. 
Programs which\n> are unaware of transactions and are designed to be highly portable are\n> likely to hit the abysmal side of performance, where the transaction\n> overhead kills performance. I'm not sure there is a fix for this (let\n> alone an easy fix)- simply dropping transactions is obviously not it.\n> Programs that are transaction aware and willing to use\n> PostgreSQL-specific features can get surprisingly excellent\n> performance. Simply being transaction-aware and doing multiple inserts\n> per transaction greatly increases performance, giving an easy order of\n> magnitude increase (wrapping 10 inserts in a transaction gives a 10x\n> performance boost).\n> \n> Brian\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n> \n\n\n\n", "msg_date": "Mon, 19 Jun 2006 18:24:02 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts" }, { "msg_contents": "On Mon, 2006-06-19 at 20:09 -0400, Brian Hurt wrote:\n\n> 5) The performance of Postgres, at least on inserts, depends critically \n> on how you program it. One the same hardware, performance for me varied \n> over a factor of over 300-fold, 2.5 orders of magnitude. Programs which \n> are unaware of transactions and are designed to be highly portable are \n> likely to hit the abysmal side of performance, where the transaction \n> overhead kills performance. \n\nI'm quite interested in this comment. Transactions have always been part\nof the SQL standard, so being unaware of them when using SQL is strange\nto me. Can you talk more about what your expectations of what\nperformance \"should have been\" - I don't want to flame you, just to\nunderstand that viewpoint.\n\nWhat are you implicitly comparing against? With which options enabled?\n\nHow are you submitting these SQL statements? Through what API?\n\n> I'm not sure there is a fix for this (let \n> alone an easy fix)- simply dropping transactions is obviously not it.\n\nI'd like to see what other \"fixes\" we might think of.\n\nPerhaps we might consider a session-level mode that groups together\natomic INSERTs into the same table into a single larger transaction.\nThat might be something we can do at the client level, for example.\n \n> Programs that are transaction aware and willing to use \n> PostgreSQL-specific features can get surprisingly excellent \n> performance. Simply being transaction-aware and doing multiple inserts \n> per transaction greatly increases performance, giving an easy order of \n> magnitude increase (wrapping 10 inserts in a transaction gives a 10x \n> performance boost).\n\nThis is exactly the same as most other transactional-RDBMS.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 26 Jun 2006 20:33:34 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts" }, { "msg_contents": "On Mon, Jun 26, 2006 at 08:33:34PM +0100, Simon Riggs wrote:\n>of the SQL standard, so being unaware of them when using SQL is strange\n>to me. \n\nWelcome to the world of programs designed for mysql. You'll almost never \nsee them batch inserts, take advantage of referential integrity, etc. 
\nYou end up with lots of selects & inserts in loops that expect \nautocommit-like behavior because it doesn't matter in that world.\n\nMike Stone\n", "msg_date": "Mon, 26 Jun 2006 17:20:14 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts" }, { "msg_contents": "On Mon, 2006-06-26 at 17:20 -0400, Michael Stone wrote:\n> On Mon, Jun 26, 2006 at 08:33:34PM +0100, Simon Riggs wrote:\n> >of the SQL standard, so being unaware of them when using SQL is strange\n> >to me. \n> \n> Welcome to the world of programs designed for mysql. You'll almost never \n> see them batch inserts, take advantage of referential integrity, etc. \n> You end up with lots of selects & inserts in loops that expect \n> autocommit-like behavior because it doesn't matter in that world.\n\nYes, I suspected that was the case. I was interested in understanding\nwhy anybody thought it was acceptable, and in what conditions that might\nbe the case. Brian's open approach has helped explain things for me.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 27 Jun 2006 08:12:50 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts" }, { "msg_contents": "Combining the \"insert\" statements in a big concatenated string\njoined by semicolons - rather than sending each individually\ncan drastically speed up your inserts; making them much closer\nto the speed of copy.\n\nFor example, instead of sending them separately, it's much faster\nto send a single string like this\n \"insert into tbl (c1,c2) values (v1,v2);insert into tbl (c1,c2) values (v3,v4);...\"\npresumably due to the round-trip packets sending each insert takes.\n\nBrian Hurt wrote:\n> \n> Inserts, 1,000 per transaction ~5,400 inserts/second\n> Copy, 1,000 element blocks ~20,000 inserts/second\n> \n\nWhen I last measured it it was about a factor of 4 speedup\n(3 seconds vs 0.7 seconds) by concatenating the inserts with\nsample code shown her [1].\n\nIf the same ratio holds for your test case, these concatenated\ninserts would be almost the exact same speed as a copy.\n\n Ron M\n\n[1] http://archives.postgresql.org/pgsql-performance/2005-09/msg00327.php\n\n", "msg_date": "Wed, 28 Jun 2006 08:41:41 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance numbers, with thoughts" } ]
[ { "msg_contents": "Hi,\n\nI have some speed issues with a big array in a table. I hope you can\nhelp me to tune my query. \n\nMy table looks like this:\n\nId | timestamp | map\nPrimary key | timestamp | array of real [34][28]\n\nWith an index on timestamp \n\nMy query is the following:\n\nSelect map[1,1], map[1,2] .... Map[34,28] from table where timestamp > x\nand timestamp < y order by timestamp\n\nExpected return is about 5000 rows of the table. I have to run this\nquery multiple times with different x and y values\n\nThe table is huge (about 60000 entries) but will get even much more\nbigger.\n\nThe query takes ages on a 3.GhZ Xeon processor with 2 GB RAM. I'm using\npostgresql 7.4 .\n\nAny hints how I can speedup this ? (use postgres 8.1, change table\nsetup, query one row or column of the array )\n\nI use libpqxx to access the database. This might be another bottleneck,\nbut I assume my query and table setup is the bigger bottleneck. Would it\nmake sense to fetch the whole array ? (Select map from table where ...\nand parse the array manually)\n\nThanks for your help.\n\nMarcel\n\n\n\n\n\n\n\n\nBig array speed issues\n\n\n\nHi,\n\nI have some speed issues with a big array in a table. I hope you can help me to tune my query. \n\nMy table looks like this:\n\nId                |  timestamp  | map\nPrimary key |  timestamp  | array of real [34][28]\n\nWith an index on timestamp \n\nMy query is the following:\n\nSelect map[1,1], map[1,2] …. Map[34,28] from table where timestamp > x and timestamp < y order by timestamp\n\nExpected return is about 5000 rows of the table. I have to run this query multiple times with different x and y values\n\nThe table is huge (about 60000 entries) but will get even much more bigger.\n\nThe query takes ages on a 3.GhZ Xeon processor with 2 GB RAM.  I'm using postgresql 7.4 .\n\nAny hints how I can speedup this ?  (use postgres 8.1, change table setup, query one row or column of the array )\n\nI use libpqxx to access the database. This might be another bottleneck, but I assume my query and table setup is the bigger bottleneck. Would it make sense to fetch the whole array ? (Select map from table where …  and parse the array manually)\nThanks for your help.\n\nMarcel", "msg_date": "Tue, 20 Jun 2006 11:35:17 +0200", "msg_from": "\"Merkel Marcel (CR/AEM4)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big array speed issues" }, { "msg_contents": "On 6/20/06, Merkel Marcel (CR/AEM4) <[email protected]> wrote:\n\n> I use libpqxx to access the database. This might be another bottleneck, but\n> I assume my query and table setup is the bigger bottleneck. Would it make\n> sense to fetch the whole array ? (Select map from table where … and parse\n> the array manually)\n\nhave you tried similar approach without using arrays?\n\nmerlin\n", "msg_date": "Tue, 20 Jun 2006 22:56:32 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big array speed issues" }, { "msg_contents": " \n\nVon: Merlin Moncure [mailto:[email protected]] \nAn: Merkel Marcel (CR/AEM4)\nCc: [email protected]\nBetreff: Re: [PERFORM] Big array speed issues\n\nOn 6/20/06, Merkel Marcel (CR/AEM4) <[email protected]> wrote:\n\n> I use libpqxx to access the database. This might be another\nbottleneck, but\n> I assume my query and table setup is the bigger bottleneck. Would it\nmake\n> sense to fetch the whole array ? (Select map from table where ... 
and\nparse\n> the array manually)\n\nhave you tried similar approach without using arrays?\n\nMerlin\n\n\nNot yet. I would first like to know what is the time consuming part and\nwhat is a work around. If you are sure individual columns for every\nentry of the array solve the issue I will joyfully implement it. The\ndownsize of this approch is that the array dimensions are not always the\nsame in my scenario. But I have a workaround in mind for this issue.\n\nCheers\n\nMarcel\n\n\n\n", "msg_date": "Wed, 21 Jun 2006 09:29:03 +0200", "msg_from": "\"Merkel Marcel (CR/AEM4)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big array speed issues" }, { "msg_contents": "On Wed, Jun 21, 2006 at 09:29:03AM +0200, Merkel Marcel (CR/AEM4) wrote:\n> \n> \n> Von: Merlin Moncure [mailto:[email protected]] \n> An: Merkel Marcel (CR/AEM4)\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Big array speed issues\n> \n> On 6/20/06, Merkel Marcel (CR/AEM4) <[email protected]> wrote:\n> \n> > I use libpqxx to access the database. This might be another\n> bottleneck, but\n> > I assume my query and table setup is the bigger bottleneck. Would it\n> make\n> > sense to fetch the whole array ? (Select map from table where ... and\n> parse\n> > the array manually)\n> \n> have you tried similar approach without using arrays?\n> \n> Merlin\n> \n> \n> Not yet. I would first like to know what is the time consuming part and\n> what is a work around. If you are sure individual columns for every\n> entry of the array solve the issue I will joyfully implement it. The\n> downsize of this approch is that the array dimensions are not always the\n> same in my scenario. But I have a workaround in mind for this issue.\n\nBefore mucking about with the code, I'd absolutely try 8.1. I've\ngenerally seen it double the performance of 7.4.\n\nAlso, output from EXPLAIN ANALYZE would make it a lot easier to figure\nout what the issue is, and it would be good to try this without\nselecting any of the arrays.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 13:33:05 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big array speed issues" }, { "msg_contents": "> Not yet. I would first like to know what is the time consuming part and\n> what is a work around. If you are sure individual columns for every\n> entry of the array solve the issue I will joyfully implement it. The\n> downsize of this approch is that the array dimensions are not always the\n> same in my scenario. But I have a workaround in mind for this issue.\n\nThe first thing I would try would be to completely normalize te file, aka\n\ncreate table data as\n(\n id int,\n t timestamp,\n map_x int,\n map_y int,\n value float\n);\n\nand go with denormalized approach only when this doesn't work for some reason.\n\nmerlin\n", "msg_date": "Wed, 21 Jun 2006 15:49:04 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big array speed issues" } ]
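A concrete sketch of the normalized layout Merlin is suggesting, with the timestamp index and the equivalent range query (names and dates are illustrative, not taken from Marcel's real schema beyond what the thread shows):

CREATE TABLE map_value (
    id     bigint     NOT NULL,   -- references the original row's primary key
    t      timestamp  NOT NULL,
    map_x  integer    NOT NULL,   -- 1..34
    map_y  integer    NOT NULL,   -- 1..28
    value  real       NOT NULL
);

CREATE INDEX map_value_t_idx ON map_value (t);

SELECT t, map_x, map_y, value
FROM   map_value
WHERE  t > '2006-06-01' AND t < '2006-06-02'
ORDER  BY t, map_x, map_y;

One thing to weigh: 5000 matching rows become 5000 * 34 * 28 = 4,760,000 rows in this layout, so the win (if any) has to come from fetching only the cells actually needed. If the whole 34x28 map is always wanted, selecting just the map column and splitting it client-side (Marcel's other idea) at least keeps the 952-element target list out of the query.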
[ { "msg_contents": "Hi\nI have following table:\nCREATE TABLE alias (\n alias_id BIGSERIAL PRIMARY KEY,\n mask VARCHAR(20) NOT NULL DEFAULT '',\n);\n\nwith index:\nCREATE INDEX alias_mask_ind ON alias(mask);\n\n\nand this table has about 1 million rows.\n\n\nIn DB procedure I execute:\n LOOP\n <........>\n OPEN cursor1 FOR SELECT * FROM alias WHERE mask>=alias_out\nORDER BY mask;\n i:=0;\n LOOP\n i:=i+1;\n FETCH cursor1 INTO alias_row;\n EXIT WHEN i=10;\n END LOOP;\n CLOSE cursor1;\n EXIT WHEN end_number=10000;\n END LOOP;\n\n\nSuch construction is very slow (20 sec. per one iteration) but when I modify SQL\nto:\n OPEN cursor1 FOR SELECT * FROM alias WHERE mask>=alias_out\nORDER BY mask LIMIT 100;\n\n\nit works very fast(whole program executes in 4-7s). It is strange for me becuase\nI've understood so far\nthat when cursor is open select is executed but Postgres does not\nselect all rows - only cursor is positioned on first row, when you\nexecute fetch next row is read. But this example shows something\ndifferent.\n\n\nCan somebody clarify what is wrong with my example? I need select\nwithout LIMIT 100 part.\n\n\nRegards\nMichal Szymanski\nhttp://blog.szymanskich.net\n\n\n\n", "msg_date": "Tue, 20 Jun 2006 14:39:32 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Curson prbolem" }, { "msg_contents": "[email protected] writes:\n> [slow:]\n> OPEN cursor1 FOR SELECT * FROM alias WHERE mask>=alias_out\n> ORDER BY mask;\n> [fast:]\n> OPEN cursor1 FOR SELECT * FROM alias WHERE mask>=alias_out\n> ORDER BY mask LIMIT 100;\n\nThe difference is that in the first case the planner has to assume you\nintend to fetch all the rows with mask>=something (and I'll bet the\nsomething is a plpgsql variable, so the planner can't even see its\nvalue). In this case a sort-based plan looks like a winner. In the\nsecond case, since you only need to fetch 100 rows, it's clearly best to\nscan the index beginning at mask = alias_out.\n\n> Can somebody clarify what is wrong with my example? I need select\n> without LIMIT 100 part.\n\nWhy? You should always tell the SQL engine what it is that you really\nwant --- leaving it in the dark about your intentions is a good way to\ndestroy performance, as you are finding out. If I were you I would get\nrid of the row-counting inside the loop entirely, and use the \"LIMIT n\"\nclause to handle that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Jun 2006 10:28:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curson prbolem " } ]
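Rewritten along the lines Tom suggests, the inner block of Michal's procedure might look like this (a sketch only: the DECLARE section and the surrounding outer loop are assumed to stay as in the original post):

OPEN cursor1 FOR
    SELECT * FROM alias
    WHERE  mask >= alias_out
    ORDER  BY mask
    LIMIT  10;
LOOP
    FETCH cursor1 INTO alias_row;
    EXIT WHEN NOT FOUND;      -- plpgsql sets FOUND after a FETCH
    -- ... use alias_row ...
END LOOP;
CLOSE cursor1;

With the LIMIT visible in the cursor's query the planner knows only a handful of rows are wanted and scans alias_mask_ind starting at mask = alias_out, instead of preparing to sort everything above it -- which is where the gap between the 20 s-per-iteration and the 4-7 s runs Michal measured comes from.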
[ { "msg_contents": "This might not be 100% performance compilant but i guess its better than -hackers since\nthese day's there seem to be some big consern :)\n \nSo feel free to comment\n \n[Abstract: Underlyin plpgsql should remove all public user ACL's from Function,Table Sequence,View ... ]\n \n-elz\n \n---------------------------------------------\n---------------------------------------------\n \nCREATE OR REPLACE FUNCTION cleanup_public_perm_on_function()\n RETURNS int4 AS\n'\nDECLARE\nr_record record;\nv_record record;\nexec_string text;\nargument_string text;\ni int2;\nBEGIN\n FOR r_record IN SELECT * FROM pg_proc WHERE proowner !=''1'' LOOP\n \n exec_string = '''';\n argument_string = '''';\n \n exec_string = ''REVOKE ALL ON FUNCTION '' || r_record.proname || ''('';\n \n IF (r_record.pronargs > 0) THEN\n i = 0;\n WHILE (i < r_record.pronargs) LOOP\n IF i > 0 THEN\n argument_string = argument_string || '','' ;\n END IF;\n FOR v_record IN SELECT * from pg_type WHERE oid=r_record.proargtypes[i] LOOP \n argument_string = argument_string || v_record.typname ;\n END LOOP;\n i = i+1;\n END LOOP;\n END IF;\n \n \n exec_string = exec_string || argument_string || '') FROM public;'';\n \n IF exec_string != '''' THEN\n \n RAISE NOTICE ''exec_string is %'', exec_string;\n EXECUTE exec_string;\n END IF;\n END LOOP;\n RETURN 1;\nEND;\n'\n LANGUAGE 'plpgsql' VOLATILE;\n\nCREATE OR REPLACE FUNCTION cleaup_public_on_table_sequence_view()\n RETURNS int4 AS\n'\nDECLARE\nr_record record;\nexec_string text;\nBEGIN\n FOR r_record IN SELECT * FROM pg_class WHERE relowner !=''1'' LOOP\n \n exec_string = '''';\n IF (r_record.relkind::char = ''r''::char) THEN\n exec_string = ''REVOKE ALL ON TABLE '' || r_record.relname || '' FROM public'';\n END IF;\n IF (r_record.relkind::char = ''c''::char) THEN\n \n exec_string = ''REVOKE ALL ON TABLE '' || r_record.relname || '' FROM public'';\n \n END IF;\n IF (r_record.relkind::char = ''v''::char) THEN\n \n exec_string = ''REVOKE ALL ON TABLE '' || r_record.relname || '' FROM public'';\n \n END IF;\n \n IF (r_record.relkind::char = ''S''::char) THEN\n \n exec_string = ''REVOKE ALL ON TABLE '' || r_record.relname || '' FROM public'';\n END IF;\n \n IF exec_string != '''' THEN\n \n RAISE NOTICE ''exec_string is %'', exec_string;\n EXECUTE exec_string;\n END IF;\n END LOOP;\n RETURN 1;\nEND;\n'\n LANGUAGE 'plpgsql' VOLATILE;\n \n\nSELECT * FROM cleanup_public_perm_on_function();\nSELECT * FROM cleaup_public_on_table_sequence_view();\nDROP FUNCTION cleanup_public_perm_on_function();\nDROP FUNCTION cleaup_public_on_table_sequence_view();\n\nAVERTISSEMENT CONCERNANT LA CONFIDENTIALITÉ \n\nLe présent message est à l'usage exclusif du ou des destinataires mentionnés ci-dessus. Son contenu est confidentiel et peut être assujetti au secret professionnel. Si vous avez reçu le présent message par erreur, veuillez nous en aviser immédiatement et le détruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n\nCONFIDENTIALITY NOTICE\n\nThis communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n", "msg_date": "Wed, 21 Jun 2006 00:27:23 -0400", "msg_from": "\"Eric Lauzon\" <[email protected]>", "msg_from_op": true, "msg_subject": "ACL cleanup" } ]
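One small refinement worth considering for the functions above: the generated REVOKEs use bare object names, so anything outside the search_path or with a mixed-case name will either fail or hit the wrong object. A hedged alternative sketch (not from the original post) is to generate the statements with schema-qualified, quoted identifiers in plain SQL and run the output afterwards:

SELECT 'REVOKE ALL ON TABLE '
       || quote_ident(n.nspname) || '.' || quote_ident(c.relname)
       || ' FROM public;'
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relkind IN ('r', 'v', 'S')      -- tables, views, sequences
  AND  c.relowner <> 1;                  -- same owner filter as the original

On later releases (8.4 and up) the manual argument-type loop for functions can be replaced the same way:

SELECT 'REVOKE ALL ON FUNCTION '
       || quote_ident(n.nspname) || '.' || quote_ident(p.proname)
       || '(' || pg_get_function_identity_arguments(p.oid) || ') FROM public;'
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  p.proowner <> 1;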
[ { "msg_contents": "Hello People,\n\nI'm trying to solve a 'what i feel is a' performance/configuration/query \nerror on my side. I'm fairly new to configuring PostgreSQL so, i might \nbe completely wrong with my configuration.\n\nMy database consists of 44 tables, about 20GB. Two of those tables are \n'big/huge'. Table src.src_faktuur_verricht contains 43million records \n(9GB) and table src.src_faktuur_verrsec contains 55million records (6GB).\n\nBelow is the 'slow' query.\n\nINSERT INTO rpt.rpt_verrichting\n(verrichting_id\n,verrichting_secid\n,fout_status\n,patientnr\n,verrichtingsdatum\n,locatie_code\n,afdeling_code\n,uitvoerder_code\n,aanvrager_code\n,verrichting_code\n,dbcnr\n,aantal_uitgevoerd\n,kostenplaats_code\n,vc_patientnr\n,vc_verrichting_code\n,vc_dbcnr\n)\nSELECT t1.id\n, t0.secid\n, t1.status\n, t1.patientnr\n, t1.datum\n, t1.locatie\n, t1.afdeling\n, t1.uitvoerder\n, t1.aanvrager\n, t0.code\n, t1.casenr\n, t0.aantal\n, t0.kostplaats\n, null\n, null\n, null\nFROM src.src_faktuur_verrsec t0 JOIN\n src.src_faktuur_verricht t1 ON\n t0.id = t1.id\nWHERE substr(t0.code,1,2) not in ('14','15','16','17')\nAND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\nAND EXTRACT(YEAR from t1.datum) > 2004;\n\n\nOutput from explain\n\nHash Join (cost=1328360.12..6167462.76 rows=7197568 width=118)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n\n -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90 \nrows=40902852 width=52)\n Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND \n(substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1, \n2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND \n((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS NULL)))\n -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=80)\n -> Bitmap Heap Scan on src_faktuur_verricht t1 \n(cost=62392.02..1188102.97 rows=8942863 width=80)\n Recheck Cond: (date_part('year'::text, datum) > \n2004::double precision)\n -> Bitmap Index Scan on src_faktuur_verricht_idx1 \n(cost=0.00..62392.02 rows=8942863 width=0)\n Index Cond: (date_part('year'::text, datum) > \n2004::double precision)\n\n\nThe db server runs PostgreSQL 8.1.4 on FreeBSD 6.1-Stable. 2GB of RAM.\nIt contains two SATA150 disks, one contains PostgreSQL and the rest of \nthe operating system and the other disk holds the pg_xlog directory.\n\nChanged lines from my postgresql.conf file\n\nshared_buffers = 8192\ntemp_buffers = 4096\nwork_mem = 65536\nmaintenance_work_mem = 1048576\nmax_fsm_pages = 40000\nfsync = off\nwal_buffers = 64\neffective_cache_size = 174848\n\nThe query above takes around 42 minutes.\n\nHowever, i also have a wimpy desktop machine with 1gb ram. Windows with \nMSSQL 2000 (default installation), same database structure, same \nindexes, same query, etc and it takes 17 minutes. The big difference \nmakes me think that i've made an error with my PostgreSQL configuration. \nI just can't seem to figure it out.\n\nCould someone perhaps give me some pointers, advice?\n\nThanks in advance.\n\nNicky\n\n\n\n\n\n\n\n\n\n\nHello People, \n\nI'm trying to solve a 'what i feel is a'\nperformance/configuration/query error on my side. I'm fairly new to\nconfiguring PostgreSQL so, i might be completely wrong with my\nconfiguration. \n\nMy database consists of 44 tables, about 20GB. Two of those tables are\n'big/huge'. Table src.src_faktuur_verricht contains 43million records\n(9GB) and table src.src_faktuur_verrsec contains 55million records\n(6GB). \n\nBelow is the 'slow' query. 
\n\nINSERT INTO rpt.rpt_verrichting\n(verrichting_id\n,verrichting_secid\n,fout_status\n,patientnr\n,verrichtingsdatum\n,locatie_code\n,afdeling_code\n,uitvoerder_code\n,aanvrager_code\n,verrichting_code\n,dbcnr\n,aantal_uitgevoerd\n,kostenplaats_code\n,vc_patientnr\n,vc_verrichting_code\n,vc_dbcnr\n)\nSELECT  t1.id\n,       t0.secid\n,       t1.status\n,       t1.patientnr\n,       t1.datum\n,       t1.locatie\n,       t1.afdeling\n,       t1.uitvoerder\n,       t1.aanvrager\n,       t0.code\n,       t1.casenr\n,       t0.aantal\n,       t0.kostplaats\n,       null\n,       null\n,       null\nFROM    src.src_faktuur_verrsec t0 JOIN\n        src.src_faktuur_verricht t1 ON\n        t0.id = t1.id\nWHERE   substr(t0.code,1,2) not in ('14','15','16','17')\nAND     (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\nAND     EXTRACT(YEAR from t1.datum) > 2004;\n\n\nOutput from explain\n\nHash Join  (cost=1328360.12..6167462.76 rows=7197568 width=118)\n  Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n\n  ->  Seq Scan on src_faktuur_verrsec t0  (cost=0.00..2773789.90\nrows=40902852 width=52)\n        Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND\n(substr((code)::text, 1, 2) <> '15'::text) AND\n(substr((code)::text, 1, 2) <> '16'::text) AND\n(substr((code)::text, 1, 2) <> '17'::text) AND\n((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS\nNULL)))\n  ->  Hash  (cost=1188102.97..1188102.97 rows=8942863 width=80)\n        ->  Bitmap Heap Scan on src_faktuur_verricht t1 \n(cost=62392.02..1188102.97 rows=8942863 width=80)\n              Recheck Cond: (date_part('year'::text, datum) >\n2004::double precision)\n              ->  Bitmap Index Scan on src_faktuur_verricht_idx1 \n(cost=0.00..62392.02 rows=8942863 width=0)\n                    Index Cond: (date_part('year'::text, datum) >\n2004::double precision)\n\n\nThe db server runs PostgreSQL 8.1.4 on FreeBSD 6.1-Stable. 2GB of RAM. \nIt contains two SATA150 disks, one contains PostgreSQL and the rest of\nthe operating system and the other disk holds the pg_xlog directory.\n\nChanged lines from my postgresql.conf file\n\nshared_buffers = 8192\ntemp_buffers = 4096\nwork_mem = 65536\nmaintenance_work_mem = 1048576\nmax_fsm_pages = 40000\nfsync = off\nwal_buffers = 64\neffective_cache_size = 174848\n\nThe query above takes around 42 minutes. \n\nHowever, i also have a wimpy desktop machine with 1gb ram. Windows with\nMSSQL 2000 (default installation), same database structure, same\nindexes, same query, etc and it takes 17 minutes. The big difference\nmakes me think that i've made an error with my PostgreSQL\nconfiguration. I just can't seem to figure it out. \n\nCould someone perhaps give me some pointers, advice?\n\nThanks in advance. \n\nNicky", "msg_date": "Wed, 21 Jun 2006 15:47:19 +0200", "msg_from": "nicky <[email protected]>", "msg_from_op": true, "msg_subject": "Speeding up query, Joining 55mil and 43mil records. " }, { "msg_contents": "Could you post an explain analyze of the query? Just FYI, if you do an\nexplain analyze of the insert statement, it will actually do the insert.\nIf you don't want that just post an explain analyze of the select part.\n \nTo me it would be interesting to compare just the select parts of the\nquery between Postgres and MSSQL. 
That way you would know if your\nPostgres install is slower at the query or slower at the insert.\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of nicky\nSent: Wednesday, June 21, 2006 8:47 AM\nTo: [email protected]\nSubject: [PERFORM] Speeding up query, Joining 55mil and 43mil records. \n\n\nHello People, \n\nI'm trying to solve a 'what i feel is a' performance/configuration/query\nerror on my side. I'm fairly new to configuring PostgreSQL so, i might\nbe completely wrong with my configuration. \n\nMy database consists of 44 tables, about 20GB. Two of those tables are\n'big/huge'. Table src.src_faktuur_verricht contains 43million records\n(9GB) and table src.src_faktuur_verrsec contains 55million records\n(6GB). \n\nBelow is the 'slow' query. \n\nINSERT INTO rpt.rpt_verrichting\n(verrichting_id\n,verrichting_secid\n,fout_status\n,patientnr\n,verrichtingsdatum\n,locatie_code\n,afdeling_code\n,uitvoerder_code\n,aanvrager_code\n,verrichting_code\n,dbcnr\n,aantal_uitgevoerd\n,kostenplaats_code\n,vc_patientnr\n,vc_verrichting_code\n,vc_dbcnr\n)\nSELECT t1.id\n, t0.secid\n, t1.status\n, t1.patientnr\n, t1.datum\n, t1.locatie\n, t1.afdeling\n, t1.uitvoerder\n, t1.aanvrager\n, t0.code\n, t1.casenr\n, t0.aantal\n, t0.kostplaats\n, null\n, null\n, null\nFROM src.src_faktuur_verrsec t0 JOIN\n src.src_faktuur_verricht t1 ON\n t0.id = t1.id\nWHERE substr(t0.code,1,2) not in ('14','15','16','17')\nAND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\nAND EXTRACT(YEAR from t1.datum) > 2004;\n\n\nOutput from explain\n\nHash Join (cost=1328360.12..6167462.76 rows=7197568 width=118)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n\n -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90\nrows=40902852 width=52)\n Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND\n(substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1,\n2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND\n((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS NULL)))\n -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=80)\n -> Bitmap Heap Scan on src_faktuur_verricht t1\n(cost=62392.02..1188102.97 rows=8942863 width=80)\n Recheck Cond: (date_part('year'::text, datum) >\n2004::double precision)\n -> Bitmap Index Scan on src_faktuur_verricht_idx1\n(cost=0.00..62392.02 rows=8942863 width=0)\n Index Cond: (date_part('year'::text, datum) >\n2004::double precision)\n\n\nThe db server runs PostgreSQL 8.1.4 on FreeBSD 6.1-Stable. 2GB of RAM. \nIt contains two SATA150 disks, one contains PostgreSQL and the rest of\nthe operating system and the other disk holds the pg_xlog directory.\n\nChanged lines from my postgresql.conf file\n\nshared_buffers = 8192\ntemp_buffers = 4096\nwork_mem = 65536\nmaintenance_work_mem = 1048576\nmax_fsm_pages = 40000\nfsync = off\nwal_buffers = 64\neffective_cache_size = 174848\n\nThe query above takes around 42 minutes. \n\nHowever, i also have a wimpy desktop machine with 1gb ram. Windows with\nMSSQL 2000 (default installation), same database structure, same\nindexes, same query, etc and it takes 17 minutes. The big difference\nmakes me think that i've made an error with my PostgreSQL configuration.\nI just can't seem to figure it out. \n\nCould someone perhaps give me some pointers, advice?\n\nThanks in advance. \n\nNicky\n\n\n\n\n\n\n\n\n\nMessage\n\n\nCould \nyou post an explain analyze of the query?  
Just FYI, if you do an explain \nanalyze of the insert statement, it will actually do the insert.  If you \ndon't want that just post an explain analyze of the select \npart.\n \nTo me \nit would be interesting to compare just the select parts of the query between \nPostgres and MSSQL.  That way you would know if your Postgres install is \nslower at the query or slower at the insert.\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n nickySent: Wednesday, June 21, 2006 8:47 AMTo: \n [email protected]: [PERFORM] Speeding up \n query, Joining 55mil and 43mil records. Hello People, I'm trying to solve a 'what i feel is a' \n performance/configuration/query error on my side. I'm fairly new to \n configuring PostgreSQL so, i might be completely wrong with my configuration. \n My database consists of 44 tables, about 20GB. Two of those tables are \n 'big/huge'. Table src.src_faktuur_verricht contains 43million records (9GB) \n and table src.src_faktuur_verrsec contains 55million records (6GB). \n Below is the 'slow' query. INSERT INTO \n rpt.rpt_verrichting(verrichting_id,verrichting_secid,fout_status,patientnr,verrichtingsdatum,locatie_code,afdeling_code,uitvoerder_code,aanvrager_code,verrichting_code,dbcnr,aantal_uitgevoerd,kostenplaats_code,vc_patientnr,vc_verrichting_code,vc_dbcnr)SELECT  \n t1.id,       \n t0.secid,       \n t1.status,       \n t1.patientnr,       \n t1.datum,       \n t1.locatie,       \n t1.afdeling,       \n t1.uitvoerder,       \n t1.aanvrager,       \n t0.code,       \n t1.casenr,       \n t0.aantal,       \n t0.kostplaats,       \n null,       \n null,       nullFROM    \n src.src_faktuur_verrsec t0 JOIN        \n src.src_faktuur_verricht t1 ON        \n t0.id = t1.idWHERE   substr(t0.code,1,2) not in \n ('14','15','16','17')AND     (substr(t0.correctie,4,1) \n <> '1' OR t0.correctie is null)AND     \n EXTRACT(YEAR from t1.datum) > 2004;Output from \n explainHash Join  (cost=1328360.12..6167462.76 rows=7197568 \n width=118)  Hash Cond: ((\"outer\".id)::text = \n (\"inner\".id)::text)  ->  Seq Scan on src_faktuur_verrsec \n t0  (cost=0.00..2773789.90 rows=40902852 \n width=52)        Filter: \n ((substr((code)::text, 1, 2) <> '14'::text) AND (substr((code)::text, 1, \n 2) <> '15'::text) AND (substr((code)::text, 1, 2) <> '16'::text) \n AND (substr((code)::text, 1, 2) <> '17'::text) AND \n ((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS \n NULL)))  ->  Hash  (cost=1188102.97..1188102.97 \n rows=8942863 width=80)        \n ->  Bitmap Heap Scan on src_faktuur_verricht t1  \n (cost=62392.02..1188102.97 rows=8942863 \n width=80)              \n Recheck Cond: (date_part('year'::text, datum) > 2004::double \n precision)              \n ->  Bitmap Index Scan on src_faktuur_verricht_idx1  \n (cost=0.00..62392.02 rows=8942863 \n width=0)                    \n Index Cond: (date_part('year'::text, datum) > 2004::double \n precision)The db server runs PostgreSQL 8.1.4 on FreeBSD \n 6.1-Stable. 2GB of RAM. It contains two SATA150 disks, one contains \n PostgreSQL and the rest of the operating system and the other disk holds the \n pg_xlog directory.Changed lines from my postgresql.conf \n fileshared_buffers = 8192temp_buffers = 4096work_mem = \n 65536maintenance_work_mem = 1048576max_fsm_pages = 40000fsync = \n offwal_buffers = 64effective_cache_size = 174848The query \n above takes around 42 minutes. However, i also have a wimpy desktop \n machine with 1gb ram. 
Windows with MSSQL 2000 (default installation), same \n database structure, same indexes, same query, etc and it takes 17 minutes. The \n big difference makes me think that i've made an error with my PostgreSQL \n configuration. I just can't seem to figure it out. Could someone \n perhaps give me some pointers, advice?Thanks in advance. \n Nicky", "msg_date": "Wed, 21 Jun 2006 09:53:03 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records. " }, { "msg_contents": "Hi Nicky,\n\nI guess, you should try to upgrade the memory setting of PostgreSQL first.\n\nwork_mem = 65536\n\nIs a bit low for such large joins.\n\nDid you get a change to watch the directory \n<PGDATA>/base/<DBOID>/pgsql_tmp to see how large the temporary file is \nduring this query. I'm sure that there is large file.\n\nAnyhow, you can upgrade 'work_mem' to 1000000 which is 1 GB. Please note \nthat the parameter work_mem is per backend process. You will get \nproblems with multiple large queries at the same time.\nYou may move (link) the directory 'pgsql_tmp' to a very fast file system \nif you still get large files in this directory.\n\nYou also can try to increase this settings:\n\ncheckpoint_segments = 256\ncheckpoint_timeout = 3600 # range 30-3600, in seconds\ncheckpoint_warning = 0 # 0 is off\n\nPlease read the PostgreSQL documentation about the drawbacks of this \nsetting as well as your setting 'fsync=off'.\n\nCheers\nSven.\n\nnicky schrieb:\n> Hello People,\n> \n> I'm trying to solve a 'what i feel is a' performance/configuration/query \n> error on my side. I'm fairly new to configuring PostgreSQL so, i might \n> be completely wrong with my configuration.\n> \n> My database consists of 44 tables, about 20GB. Two of those tables are \n> 'big/huge'. 
Table src.src_faktuur_verricht contains 43million records \n> (9GB) and table src.src_faktuur_verrsec contains 55million records (6GB).\n> \n> Below is the 'slow' query.\n> \n> INSERT INTO rpt.rpt_verrichting\n> (verrichting_id\n> ,verrichting_secid\n> ,fout_status\n> ,patientnr\n> ,verrichtingsdatum\n> ,locatie_code\n> ,afdeling_code\n> ,uitvoerder_code\n> ,aanvrager_code\n> ,verrichting_code\n> ,dbcnr\n> ,aantal_uitgevoerd\n> ,kostenplaats_code\n> ,vc_patientnr\n> ,vc_verrichting_code\n> ,vc_dbcnr\n> )\n> SELECT t1.id\n> , t0.secid\n> , t1.status\n> , t1.patientnr\n> , t1.datum\n> , t1.locatie\n> , t1.afdeling\n> , t1.uitvoerder\n> , t1.aanvrager\n> , t0.code\n> , t1.casenr\n> , t0.aantal\n> , t0.kostplaats\n> , null\n> , null\n> , null\n> FROM src.src_faktuur_verrsec t0 JOIN\n> src.src_faktuur_verricht t1 ON\n> t0.id = t1.id\n> WHERE substr(t0.code,1,2) not in ('14','15','16','17')\n> AND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\n> AND EXTRACT(YEAR from t1.datum) > 2004;\n> \n> \n> Output from explain\n> \n> Hash Join (cost=1328360.12..6167462.76 rows=7197568 width=118)\n> Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n> \n> -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90 \n> rows=40902852 width=52)\n> Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND \n> (substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1, \n> 2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND \n> ((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS NULL)))\n> -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=80)\n> -> Bitmap Heap Scan on src_faktuur_verricht t1 \n> (cost=62392.02..1188102.97 rows=8942863 width=80)\n> Recheck Cond: (date_part('year'::text, datum) > \n> 2004::double precision)\n> -> Bitmap Index Scan on src_faktuur_verricht_idx1 \n> (cost=0.00..62392.02 rows=8942863 width=0)\n> Index Cond: (date_part('year'::text, datum) > \n> 2004::double precision)\n> \n> \n> The db server runs PostgreSQL 8.1.4 on FreeBSD 6.1-Stable. 2GB of RAM.\n> It contains two SATA150 disks, one contains PostgreSQL and the rest of \n> the operating system and the other disk holds the pg_xlog directory.\n> \n> Changed lines from my postgresql.conf file\n> \n> shared_buffers = 8192\n> temp_buffers = 4096\n> work_mem = 65536\n> maintenance_work_mem = 1048576\n> max_fsm_pages = 40000\n> fsync = off\n> wal_buffers = 64\n> effective_cache_size = 174848\n> \n> The query above takes around 42 minutes.\n> \n> However, i also have a wimpy desktop machine with 1gb ram. Windows with \n> MSSQL 2000 (default installation), same database structure, same \n> indexes, same query, etc and it takes 17 minutes. The big difference \n> makes me think that i've made an error with my PostgreSQL configuration. \n> I just can't seem to figure it out.\n> \n> Could someone perhaps give me some pointers, advice?\n> \n> Thanks in advance.\n> \n> Nicky\n> \n", "msg_date": "Wed, 21 Jun 2006 17:37:07 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "On Wed, 2006-06-21 at 08:47, nicky wrote:\n> Hello People, \n\nSNIPPAGE\n\n> The query above takes around 42 minutes. \n> \n> However, i also have a wimpy desktop machine with 1gb ram. Windows\n> with MSSQL 2000 (default installation), same database structure, same\n> indexes, same query, etc and it takes 17 minutes. 
The big difference\n> makes me think that i've made an error with my PostgreSQL\n> configuration. I just can't seem to figure it out. \n\nWhat is the difference between the two plans (i.e. explain on both boxes\nand compare)\n", "msg_date": "Wed, 21 Jun 2006 12:03:41 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> What is the difference between the two plans (i.e. explain on both boxes\n> and compare)\n\nEven more to the point, let's see EXPLAIN ANALYZE output from both boxes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2006 13:12:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records. " }, { "msg_contents": "On Wed, Jun 21, 2006 at 03:47:19PM +0200, nicky wrote:\n> WHERE substr(t0.code,1,2) not in ('14','15','16','17')\n> AND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\n> AND EXTRACT(YEAR from t1.datum) > 2004;\n\nHow much data do you expect to be getting back from that where clause?\nUnless you plan on inserting most of the table, some well-placed indexes\nwould probably help, and fixing the datum portion might as well\n(depending on how far back the data goes). Specifically:\n\nCREATE INDEX t0_code_partial ON t0(substr(code,1,2));\n(yeah, I know t0 is an alias, but I already snipped the table name)\n\nand\n\nAND t1.datum >= '1/1/2005'\n\n(might need to cast that to a date or whatever).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 13:48:03 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "Hello again,\n\nthanks for all the quick replies.\n\nIt seems i wasn't entirely correct on my previous post, i've mixed up \nsome times/numbers.\n\nBelow the correct numbers\n\nMSSQL: SELECT COUNT(*) from JOIN (without insert) 17 minutes\nPostgreSQL: SELECT COUNT(*) from JOIN (without insert) 33 minutes\nPostgreSQL: complete query 55 minutes\n\nThe part i'm really troubled with is the difference in performance for \nthe select part. Which takes twice as long on PostgreSQL even though it \nhas a better server then MSSQL.\n\nChanged i've made to postgressql.conf\n\nwork_mem = 524288 (1GB, results in out of memory error)\ncheckpoints_segments = 256\ncheckpoints_timeout = 3600\ncheckpoints_warning = 0\n\n\nI've ran the complete 'explain analyse query' twice. 
First with \npgsql_tmp on the same disk, then again with pgsql_tmp on a seperate disk.\n\n**** (PostgreSQL) (*pgsql_tmp on same disk*):\n\nHash Join (cost=1328360.12..6167462.76 rows=7197568 width=118) (actual \ntime=327982.425..1903423.769 rows=7551616 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90 \nrows=40902852 width=52) (actual time=8.935..613455.204 rows=37368390 \nloops=1)\n Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND \n(substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1, \n2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND \n((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS NULL)))\n -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=80) (actual \ntime=327819.698..327819.698 rows=8761024 loops=1)\n -> Bitmap Heap Scan on src_faktuur_verricht t1 \n(cost=62392.02..1188102.97 rows=8942863 width=80) (actual \ntime=75911.336..295510.647 rows=8761024 loops=1)\n Recheck Cond: (date_part('year'::text, datum) > \n2004::double precision)\n -> Bitmap Index Scan on src_faktuur_verricht_idx1 \n(cost=0.00..62392.02 rows=8942863 width=0) (actual \ntime=75082.080..75082.080 rows=8761024 loops=1)\n Index Cond: (date_part('year'::text, datum) > \n2004::double precision)\nTotal runtime: 3355696.015 ms\n\n\n**** (PostgreSQL) (*pgsql_tmp on seperate disk*)\n\nHash Join (cost=1328360.12..6167462.76 rows=7197568 width=118) (actual \ntime=172797.736..919869.708 rows=7551616 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90 \nrows=40902852 width=52) (actual time=0.015..362154.822 rows=37368390 \nloops=1)\n Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND \n(substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1, \n2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND \n((substr((correctie)::text, 4, 1) <> '1'::text) OR (correctie IS NULL)))\n -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=80) (actual \ntime=172759.255..172759.255 rows=8761024 loops=1)\n -> Bitmap Heap Scan on src_faktuur_verricht t1 \n(cost=62392.02..1188102.97 rows=8942863 width=80) (actual \ntime=4244.840..142144.606 rows=8761024 loops=1)\n Recheck Cond: (date_part('year'::text, datum) > \n2004::double precision)\n -> Bitmap Index Scan on src_faktuur_verricht_idx1 \n(cost=0.00..62392.02 rows=8942863 width=0) (actual \ntime=3431.361..3431.361 rows=8761024 loops=1)\n Index Cond: (date_part('year'::text, datum) > \n2004::double precision)\nTotal runtime: 2608316.714 ms\n\nA lot of difference in performance. 55 minutes to 42 minutes.\n\n\nI've ran the 'select count(*) from JOIN' to see the difference on that \npart.\n\n**** (PostgreSQL) Explain analyse from SELECT COUNT(*) from the JOIN. 
\n(*pgsql_tmp on seperate disk*)\n\nAggregate (cost=5632244.93..5632244.94 rows=1 width=0) (actual \ntime=631993.425..631993.427 rows=1 loops=1)\n -> Hash Join (cost=1258493.12..5614251.00 rows=7197568 width=0) \n(actual time=237999.277..620018.706 rows=7551616 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on src_faktuur_verrsec t0 (cost=0.00..2773789.90 \nrows=40902852 width=14) (actual time=23.449..200532.422 rows=37368390 \nloops=1)\n Filter: ((substr((code)::text, 1, 2) <> '14'::text) AND \n(substr((code)::text, 1, 2) <> '15'::text) AND (substr((code)::text, 1, \n2) <> '16'::text) AND (substr((code)::text, 1, 2) <> '17'::text) AND \n((substr((correctie)::text, 4, 1) <> '1'::tex (..)\n -> Hash (cost=1188102.97..1188102.97 rows=8942863 width=14) \n(actual time=237939.262..237939.262 rows=8761024 loops=1)\n -> Bitmap Heap Scan on src_faktuur_verricht t1 \n(cost=62392.02..1188102.97 rows=8942863 width=14) (actual \ntime=74713.092..216206.478 rows=8761024 loops=1)\n Recheck Cond: (date_part('year'::text, datum) > \n2004::double precision)\n -> Bitmap Index Scan on src_faktuur_verricht_idx1 \n(cost=0.00..62392.02 rows=8942863 width=0) (actual \ntime=73892.153..73892.153 rows=8761024 loops=1)\n Index Cond: (date_part('year'::text, datum) > \n2004::double precision)\nTotal runtime: 631994.172 ms\n\nA lot of improvement also in the select count: 33 minutes vs 10 minutes.\n\n\nTo us, the speeds are good. Very happy with the performance increase on \nthat select with join, since 90% of the queries are SELECT based.\n\nThe query results in 7551616 records, so that's about 4500 inserts per \nsecond. I'm not sure if that is fast or not. Any further tips would be \nwelcome.\n\nThanks everyone.\nNicky\n", "msg_date": "Thu, 22 Jun 2006 11:48:45 +0200", "msg_from": "nicky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "Hi Nicky,\n\nDid you tried to create an index to avoid the sequential scans?\n\nSeq Scan on src_faktuur_verrsec t0...\n\nI think, you should try\n\nCREATE INDEX src.src_faktuur_verrsec_codesubstr ON \nsrc.src_faktuur_verrsec (substr(src.src_faktuur_verrsec.code,1,2))\n\nCheers\nSven.\n\nnicky schrieb:\n> Hello again,\n> \n> thanks for all the quick replies.\n> \n> It seems i wasn't entirely correct on my previous post, i've mixed up \n> some times/numbers.\n> \n> Below the correct numbers\n> \n> MSSQL: SELECT COUNT(*) from JOIN (without insert) 17 minutes\n> PostgreSQL: SELECT COUNT(*) from JOIN (without insert) 33 minutes\n> PostgreSQL: complete query 55 minutes\n\n <snip snip snip>\n> \n> A lot of improvement also in the select count: 33 minutes vs 10 minutes.\n> \n> \n> To us, the speeds are good. Very happy with the performance increase on \n> that select with join, since 90% of the queries are SELECT based.\n> \n> The query results in 7551616 records, so that's about 4500 inserts per \n> second. I'm not sure if that is fast or not. Any further tips would be \n> welcome.\n", "msg_date": "Thu, 22 Jun 2006 13:29:41 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." 
}, { "msg_contents": "Hello Sven,\n\nWe have the following indexes on src_faktuur_verrsec\n/\n CREATE INDEX src_faktuur_verrsec_idx0\n ON src.src_faktuur_verrsec\n USING btree\n (id);\n\n CREATE INDEX src_faktuur_verrsec_idx1\n ON src.src_faktuur_verrsec\n USING btree\n (substr(code::text, 1, 2));\n\n CREATE INDEX src_faktuur_verrsec_idx2\n ON src.src_faktuur_verrsec\n USING btree\n (substr(correctie::text, 4, 1));/\n\nand another two on src_faktuur_verricht\n\n/ CREATE INDEX src_faktuur_verricht_idx0\n ON src.src_faktuur_verricht\n USING btree\n (id);\n\n CREATE INDEX src_faktuur_verricht_idx1\n ON src.src_faktuur_verricht\n USING btree\n (date_part('year'::text, datum))\n TABLESPACE src_index;/\n\nPostgreSQL elects not to use them. I assume, because it most likely \nneeds to traverse the entire table anyway.\n\nif i change: / substr(t0.code,1,2) not in \n('14','15','16','17')/\nto (removing the NOT): / substr(t0.code,1,2) in ('14','15','16','17')/\n\nit uses the index, but it's not the query that needs to be run anymore.\n\nGreetings,\nNick\n\n\n\n\nSven Geisler wrote:\n> Hi Nicky,\n>\n> Did you tried to create an index to avoid the sequential scans?\n>\n> Seq Scan on src_faktuur_verrsec t0...\n>\n> I think, you should try\n>\n> CREATE INDEX src.src_faktuur_verrsec_codesubstr ON \n> src.src_faktuur_verrsec (substr(src.src_faktuur_verrsec.code,1,2))\n>\n> Cheers\n> Sven.\n>\n> nicky schrieb:\n>> Hello again,\n>>\n>> thanks for all the quick replies.\n>>\n>> It seems i wasn't entirely correct on my previous post, i've mixed up \n>> some times/numbers.\n>>\n>> Below the correct numbers\n>>\n>> MSSQL: SELECT COUNT(*) from JOIN (without insert) 17 minutes\n>> PostgreSQL: SELECT COUNT(*) from JOIN (without insert) 33 minutes\n>> PostgreSQL: complete query 55 minutes\n>\n> <snip snip snip>\n>>\n>> A lot of improvement also in the select count: 33 minutes vs 10 minutes.\n>>\n>>\n>> To us, the speeds are good. Very happy with the performance increase \n>> on that select with join, since 90% of the queries are SELECT based.\n>>\n>> The query results in 7551616 records, so that's about 4500 inserts \n>> per second. I'm not sure if that is fast or not. Any further tips \n>> would be welcome.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\n", "msg_date": "Thu, 22 Jun 2006 14:10:50 +0200", "msg_from": "nicky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "> PostgreSQL elects not to use them. I assume, because it most \n> likely needs to traverse the entire table anyway.\n> \n> if i change: / substr(t0.code,1,2) not in \n> ('14','15','16','17')/\n> to (removing the NOT): / substr(t0.code,1,2) in \n> ('14','15','16','17')/\n> \n> it uses the index, but it's not the query that needs to be \n> run anymore.\n\nIf this is the only query that you're having problems with, you might be\nhelped with a partial index - depending on how much 14-17 really\nfilters. 
Try something like:\n\nCREATE INDEX foo ON src.src_faktuur_verrsec (id) WHERE\nsubstr(t0.code,1,2) not in ('14','15','16','17') AND\n(substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)\n\nThat index shuold be usable for the JOIN while filtering out all the\nunnecessary rows before you even get tehre.\nIn the same way, if it filters a lot of rows, you might want to try\nCREATE INDEX foo ON src.src_faktuur_verricht (id) WHERE EXTRACT(YEAR\nfrom t1.datum) > 2004\n\n\nBut this kind of requires that the partial indexes actually drop\nsignificant amounts of the table. If not, then they'll be of no help.\n\n//Magnus\n", "msg_date": "Thu, 22 Jun 2006 14:17:29 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." }, { "msg_contents": "Hi Nick,\n\nI'm not that good to advice how to get PostgreSQL to use an index to get \nyour results faster.\n\nDid you try \"not (substr(t0.code,1,2) in ('14','15','16','17'))\"?\n\nCheers\nSven.\n\nnicky schrieb:\n> Hello Sven,\n> \n> We have the following indexes on src_faktuur_verrsec\n> /\n> CREATE INDEX src_faktuur_verrsec_idx0\n> ON src.src_faktuur_verrsec\n> USING btree\n> (id);\n> \n> CREATE INDEX src_faktuur_verrsec_idx1\n> ON src.src_faktuur_verrsec\n> USING btree\n> (substr(code::text, 1, 2));\n> \n> CREATE INDEX src_faktuur_verrsec_idx2\n> ON src.src_faktuur_verrsec\n> USING btree\n> (substr(correctie::text, 4, 1));/\n> \n> and another two on src_faktuur_verricht\n> \n> / CREATE INDEX src_faktuur_verricht_idx0\n> ON src.src_faktuur_verricht\n> USING btree\n> (id);\n> \n> CREATE INDEX src_faktuur_verricht_idx1\n> ON src.src_faktuur_verricht\n> USING btree\n> (date_part('year'::text, datum))\n> TABLESPACE src_index;/\n> \n> PostgreSQL elects not to use them. I assume, because it most likely \n> needs to traverse the entire table anyway.\n> \n> if i change: / substr(t0.code,1,2) not in \n> ('14','15','16','17')/\n> to (removing the NOT): / substr(t0.code,1,2) in ('14','15','16','17')/\n> \n> it uses the index, but it's not the query that needs to be run anymore.\n> \n", "msg_date": "Thu, 22 Jun 2006 14:19:58 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up query, Joining 55mil and 43mil records." } ]
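One way to fold the suggestions from this thread into a single sketch; the index names are invented, and whether the partial index is worth it depends on how much of src_faktuur_verrsec the code/correctie filters really exclude (Magnus's caveat above). The date rewrite follows Jim Nasby's point: EXTRACT(YEAR FROM datum) > 2004 selects the same rows as datum >= '2005-01-01', but the range form can use an ordinary index on the column, which the existing functional index on date_part cannot provide.

-- hypothetical names throughout; partial index matching the filter on t0
CREATE INDEX src_faktuur_verrsec_join_part_idx
    ON src.src_faktuur_verrsec (id)
    WHERE substr(code, 1, 2) NOT IN ('14', '15', '16', '17')
      AND (substr(correctie, 4, 1) <> '1' OR correctie IS NULL);

-- ordinary index so the rewritten date range is indexable
CREATE INDEX src_faktuur_verricht_datum_idx
    ON src.src_faktuur_verricht (datum);

-- the select part of the original query with the sargable date filter
SELECT count(*)
FROM src.src_faktuur_verrsec t0
JOIN src.src_faktuur_verricht t1 ON t0.id = t1.id
WHERE substr(t0.code, 1, 2) NOT IN ('14', '15', '16', '17')
  AND (substr(t0.correctie, 4, 1) <> '1' OR t0.correctie IS NULL)
  AND t1.datum >= '2005-01-01';

Since roughly two thirds of t0 and about a fifth of t1 still qualify after filtering, the planner may quite reasonably stick with sequential scans and a hash join even with these indexes in place, so compare both shapes with EXPLAIN ANALYZE (and enough work_mem to keep the hash from spilling) before settling on one.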
[ { "msg_contents": "Hey - I am running into a data relation bloat problem which I believe is causing fairly significant \nslowdown of my updates. I am using version\n\n version\n-----------------------------------------------------------------------------\n PostgreSQL 8.1.4 on i586-trustix-linux-gnu, compiled by GCC gcc (GCC) 3.2.3\n\nAfter about 12 hours of running, my updates are causing lots of reads and iowait (45%) slowing \neverything down. The DB bloats from 259MB to 2.4 - 3.4GB.\n\nThe primary table which is troubled is called target and reaches a size of in mb of 834MB from its \nfreshly 'vacuum full analyze' size of 39MB.\n\nqradar=# select * from q_table_size;\n tablename | size\n--------------------------------+---------\n target | 834.496\n\nMy configuration includes.\n\nshared_buffers = 32767\nwork_mem = 20480\nmaintenance_work_mem = 32768\nmax_fsm_pages = 4024000\nmax_fsm_relations = 2000\nfsync = false\nwal_sync_method = fsync\nwal_buffers = 4096\ncheckpoint_segments = 32\ncheckpoint_timeout = 1200\ncheckpoint_warning = 60\ncommit_delay = 5000\ncommit_siblings = 5\neffective_cache_size = 175000\nrandom_page_cost = 2\nautovacuum = true\nautovacuum_naptime = 60\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.08\nautovacuum_analyze_scale_factor = 0.08\n#autovacuum_vacuum_cost_delay=100\n#autovacuum_vacuum_cost_limit=100\ndefault_statistics_target = 40\n\nFor the particular table I have pg_autovacuum overrides as \n\napp=# select * from pg_autovacuum where vacrelid = 16603;\n vacrelid | enabled | vac_base_thresh | vac_scale_factor | anl_base_thresh | anl_scale_factor | vac_cost_delay | vac_cost_limit\n----------+---------+-----------------+------------------+-----------------+------------------+----------------+----------------\n 16603 | t | 200 | 0.01 | 200 | 0.01 | 0 | 400\n\n\nWhat I am seeing is, after about 12 hours an update of a few thousand records takes about 2+ minutes as opposed the 100ms it used \nto take. I can restore performance only be stopping everything, perform a vacuum full analyze and restarting.\n\nAfter the vacuum full, my table returns to the expected 250+ MB from the previous size.\n\nqradar=# select * from q_table_size ;\n tablename | size\n--------------------------------+---------\n target | 841.536\n\nI can see autovacuum in top every 60 seconds as configured, but it is there and gone in the 1 second refresh. My table grows consistent \nevery transaction to no avail. To stop the growth, I had to perform a manual vacuum analyze. But at this point, performance is so poor \nI have to perform vacuum analyze full.\n\nAnyway, I am totally confused. My first cut at changing the autovacuum configuration was using Jim Nasby' advice by cutting all values in \nhalf leaving my tables at roughly 20% dead space, for this table, that would be just over 50k tuples. This however yields the same results \nas the above configuration with continous bloat. So, I was WAY more aggressive as shown above with no improvment. By calculation, Jims advice\nwould suffice for our system.\n\nI just checked a production box which is running 8.1.1 and it is behaving as expected. This configuration only specifies \"autovacuum = true\", \neverything else is left to the defaults.\n\nIs there something whacked about my configuration? 
Is there a way I can troubleshoot what autovacuum is doing or why it is not performing \nthe work?\n\nHere is the output for the vacuum full of target...\n\nqradar=# vacuum full analyze verbose target;\nINFO: vacuuming \"public.target\"\nINFO: \"target\": found 5048468 removable, 266778 nonremovable row versions in 96642 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 140 to 144 bytes long.\nThere were 1696 unused item pointers.\nTotal free space (including removable row versions) is 730074628 bytes.\n89347 pages are or will become empty, including 0 at the end of the table.\n95261 pages containing 730030436 free bytes are potential move destinations.\nCPU 2.31s/1.27u sec elapsed 6.46 sec.\nINFO: index \"target_pkey\" now contains 266778 row versions in 18991 pages\nDETAIL: 5048468 index row versions were removed.\n40 index pages have been deleted, 40 are currently reusable.\nCPU 0.91s/5.29u sec elapsed 6.24 sec.\nINFO: index \"target_network_key\" now contains 266778 row versions in 15159 pages\nDETAIL: 5048468 index row versions were removed.\n30 index pages have been deleted, 30 are currently reusable.\nCPU 0.45s/4.96u sec elapsed 5.43 sec.\nINFO: index \"target_tulu_idx\" now contains 266778 row versions in 19453 pages\nDETAIL: 5048468 index row versions were removed.\n17106 index pages have been deleted, 17106 are currently reusable.\nCPU 0.79s/3.31u sec elapsed 4.10 sec.\nINFO: \"target\": moved 266719 row versions, truncated 96642 to 4851 pages\nDETAIL: CPU 5.19s/8.86u sec elapsed 14.27 sec.\nINFO: index \"target_pkey\" now contains 266778 row versions in 18991 pages\nDETAIL: 266719 index row versions were removed.\n41 index pages have been deleted, 41 are currently reusable.\nCPU 0.78s/0.54u sec elapsed 1.32 sec.\nINFO: index \"target_network_key\" now contains 266778 row versions in 15159 pages\nDETAIL: 266719 index row versions were removed.\n31 index pages have been deleted, 31 are currently reusable.\nCPU 0.49s/0.44u sec elapsed 0.93 sec.\nINFO: index \"target_tulu_idx\" now contains 266778 row versions in 19453 pages\nDETAIL: 266719 index row versions were removed.\n16726 index pages have been deleted, 16726 are currently reusable.\nCPU 0.33s/0.38u sec elapsed 0.76 sec.\nINFO: analyzing \"public.target\"\nINFO: \"target\": scanned 4851 of 4851 pages, containing 266778 live rows and 0 dead rows; 12000 rows in sample, 266778 estimated total rows\nVACUUM\n\nA db wide vacuum full outputs this at the end.\n\nINFO: free space map contains 32848 pages in 159 relations\nDETAIL: A total of 24192 page slots are in use (including overhead).\n24192 page slots are required to track all free space.\nCurrent limits are: 4024000 page slots, 2000 relations, using 23705 KB.\n\nSo, it appears my autovacuum is just NOT working... I must have screwed something up, but I cannot see what. \n\nThanks again.\nJody\n\n\n\n", "msg_date": "Wed, 21 Jun 2006 10:52:42 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help tuning autovacuum - seeing lots of relation bloat" }, { "msg_contents": "> So, it appears my autovacuum is just NOT working... I must have screwed something up, but I cannot see what. \n\nIs it possible that you have long running transactions ? If yes, VACUUM\nis simply not efficient, as it won't eliminate the dead space\naccumulated during the long running transaction. 
In that case VACUUM\nFULL won't help you either as it also can't eliminate dead space still\nvisible by old transactions, but from what you say I guess you really\nstop everything before doing VACUUM FULL so you might as well stopped\nthe culprit transaction too... that's why the VACUUM FULL worked (if my\nassumption is correct).\n\nTo check if this is the case, look for \"idle in transaction\" in your\nprocess listing (ps auxww|grep \"idle in transaction\"). If you got one\n(or more) of that, you found your problem. If not, hopefully others will\nhelp you :-)\n\nCheers,\nCsaba.\n\n\n\n", "msg_date": "Wed, 21 Jun 2006 16:08:33 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relation" }, { "msg_contents": "Csaba Nagy <[email protected]> writes:\n>> So, it appears my autovacuum is just NOT working... I must have screwed something up, but I cannot see what. \n\n> Is it possible that you have long running transactions ?\n\nThe other question I was wondering about is if autovacuum is actually\nchoosing to vacuum the target table or not. The only way to check that\nin 8.1 is to crank log_min_messages up to DEBUG2 and then trawl through\nthe postmaster log looking for \"autovac\" messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2006 10:17:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relation " }, { "msg_contents": "Our application is broken down quite well. We have two main writing processes \nwriting to two separate sets of tables. No crossing over, nothign to prohibit the \nvacuuming in the nature which you describe.\n\nMy longest transaction on the tables in question are typically quite short until \nof course they begin to bloat.\n\n\n\nOn Wednesday 21 June 2006 11:08, Csaba Nagy wrote:\n> > So, it appears my autovacuum is just NOT working... I must have screwed something up, but I cannot see what. \n> \n> Is it possible that you have long running transactions ? If yes, VACUUM\n> is simply not efficient, as it won't eliminate the dead space\n> accumulated during the long running transaction. In that case VACUUM\n> FULL won't help you either as it also can't eliminate dead space still\n> visible by old transactions, but from what you say I guess you really\n> stop everything before doing VACUUM FULL so you might as well stopped\n> the culprit transaction too... that's why the VACUUM FULL worked (if my\n> assumption is correct).\n> \n> To check if this is the case, look for \"idle in transaction\" in your\n> process listing (ps auxww|grep \"idle in transaction\"). If you got one\n> (or more) of that, you found your problem. If not, hopefully others will\n> help you :-)\n> \n> Cheers,\n> Csaba.\n> \n> \n> \n> \n", "msg_date": "Wed, 21 Jun 2006 12:27:06 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "On Wed, 2006-06-21 at 17:27, jody brownell wrote:\n> Our application is broken down quite well. We have two main writing processes \n> writing to two separate sets of tables. No crossing over, nothign to prohibit the \n> vacuuming in the nature which you describe.\n\nIt really doesn't matter what table are you touching, as it doesn't\nmatter if you read or write either, what matters is how long ago was the\nlast \"begin\" without \"commit\" or \"rollback\". 
VACUUM will not touch\ntuples which were deleted after the oldest not yet finished transaction\nstarted, regardless if that transaction touched the vacuumed table or\nnot in any way...\n\n> My longest transaction on the tables in question are typically quite short until \n> of course they begin to bloat.\n\nWell, your application might be completely well behaved and still your\nDBA (or your favorite DB access tool for that matter) can leave open\ntransactions in an interactive session. It never hurts to check if you\nactually have \"idle in transaction\" sessions. It happened a few times to\nus, some of those were bad coding on ad-hoc tools written by us, others\nwere badly behaved DB access tools opening a transaction immediately\nafter connect and after each successful command, effectively leaving an\nopen transaction when leaving it open while having lunch...\n\nSo it might very well be that some interactive or ad hoc tools you're\nusing to manage the DB are your problem.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 21 Jun 2006 17:42:21 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "That is interesting.\n\nThere is one thread keeping a transaction open it appears from ps\n\npostgres: app app xxx(42644) idle in transaction\n\nhowever, I created a test table \"t\" not configured in pg_autovacuum. I inserted a whack of rows and saw this.\n\nJun 21 12:38:45 vanquish postgres[1525]: [8-1] LOG: autovacuum: processing database \"qradar\"\nJun 21 12:38:45 vanquish postgres[1525]: [9-1] DEBUG: autovac: will VACUUM ANALYZE t\nJun 21 12:38:45 vanquish postgres[1525]: [10-1] DEBUG: vacuuming \"public.t\"\nJun 21 12:38:48 vanquish postgres[1525]: [11-1] DEBUG: \"t\": removed 8104311 row versions in 51620 pages\nJun 21 12:38:48 vanquish postgres[1525]: [11-2] DETAIL: CPU 0.93s/0.70u sec elapsed 1.70 sec.\nJun 21 12:38:48 vanquish postgres[1525]: [12-1] DEBUG: \"t\": found 8104311 removable, 0 nonremovable row versions in 51620 pages\nJun 21 12:38:48 vanquish postgres[1525]: [12-2] DETAIL: 0 dead row versions cannot be removed yet.\n\nfollowed a later (after I did a similar insert op on target) by this\n\nJun 21 13:00:46 vanquish postgres[3311]: [12-1] LOG: autovacuum: processing database \"qradar\"\nJun 21 13:00:46 vanquish postgres[3311]: [13-1] DEBUG: autovac: will VACUUM target\nJun 21 13:00:46 vanquish postgres[3311]: [14-1] DEBUG: vacuuming \"public.target\"\nJun 21 13:01:51 vanquish postgres[3311]: [15-1] DEBUG: index \"target_pkey\" now contains 1296817 row versions in 25116 pages\nJun 21 13:01:51 vanquish postgres[3311]: [15-2] DETAIL: 5645230 index row versions were removed.\nJun 21 13:01:51 vanquish postgres[3311]: [15-3] ^I116 index pages have been deleted, 60 are currently reusable.\nJun 21 13:01:51 vanquish postgres[3311]: [15-4] ^ICPU 1.29s/7.44u sec elapsed 48.65 sec.\nJun 21 13:02:19 vanquish postgres[3311]: [16-1] DEBUG: index \"target_network_key\" now contains 1296817 row versions in 19849 pages\nJun 21 13:02:19 vanquish postgres[3311]: [16-2] DETAIL: 5645230 index row versions were removed.\nJun 21 13:02:19 vanquish postgres[3311]: [16-3] ^I32 index pages have been deleted, 0 are currently reusable.\nJun 21 13:02:19 vanquish postgres[3311]: [16-4] ^ICPU 0.89s/6.61u sec elapsed 27.77 sec.\nJun 21 13:02:47 vanquish postgres[3311]: [17-1] DEBUG: index \"target_network_details_id_idx\" now contains 1296817 row versions in 23935 pages\nJun 21 13:02:47 vanquish 
postgres[3311]: [17-2] DETAIL: 5645230 index row versions were removed.\nJun 21 13:02:47 vanquish postgres[3311]: [17-3] ^I17814 index pages have been deleted, 0 are currently reusable.\nJun 21 13:02:47 vanquish postgres[3311]: [17-4] ^ICPU 0.93s/7.52u sec elapsed 27.36 sec.\nJun 21 13:03:23 vanquish postgres[3311]: [18-1] DEBUG: index \"target_tulu_idx\" now contains 1296817 row versions in 24341 pages\nJun 21 13:03:23 vanquish postgres[3311]: [18-2] DETAIL: 5645230 index row versions were removed.\nJun 21 13:03:23 vanquish postgres[3311]: [18-3] ^I18495 index pages have been deleted, 0 are currently reusable.\nJun 21 13:03:23 vanquish postgres[3311]: [18-4] ^ICPU 1.37s/5.38u sec elapsed 36.95 sec.\nJun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: \"target\": removed 5645231 row versions in 106508 pages\nJun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec elapsed 40.63 sec.\nJun 21 13:04:04 vanquish postgres[3311]: [20-1] DEBUG: \"target\": found 5645231 removable, 1296817 nonremovable row versions in 114701 pages\nJun 21 13:04:04 vanquish postgres[3311]: [20-2] DETAIL: 0 dead row versions cannot be removed yet.\n\nthis was with the \"Idle in transaction\" though..... \n\nAh HA! Wondering, my autovacuum naptime is 60 seconds, that is also the interval which I wake up and begin persistence.\nWondering if I am simply locking autovacuum out of the tables b/c they are on a similar timeline.\n\nI will try a 30 second naptime, if this is it, that should increase the likely hood of falling on the right side of the TX more often.\n\nmake sense?\n\n\nOn Wednesday 21 June 2006 12:42, Csaba Nagy wrote:\n> On Wed, 2006-06-21 at 17:27, jody brownell wrote:\n> > Our application is broken down quite well. We have two main writing processes \n> > writing to two separate sets of tables. No crossing over, nothign to prohibit the \n> > vacuuming in the nature which you describe.\n> \n> It really doesn't matter what table are you touching, as it doesn't\n> matter if you read or write either, what matters is how long ago was the\n> last \"begin\" without \"commit\" or \"rollback\". VACUUM will not touch\n> tuples which were deleted after the oldest not yet finished transaction\n> started, regardless if that transaction touched the vacuumed table or\n> not in any way...\n> \n> > My longest transaction on the tables in question are typically quite short until \n> > of course they begin to bloat.\n> \n> Well, your application might be completely well behaved and still your\n> DBA (or your favorite DB access tool for that matter) can leave open\n> transactions in an interactive session. It never hurts to check if you\n> actually have \"idle in transaction\" sessions. It happened a few times to\n> us, some of those were bad coding on ad-hoc tools written by us, others\n> were badly behaved DB access tools opening a transaction immediately\n> after connect and after each successful command, effectively leaving an\n> open transaction when leaving it open while having lunch...\n> \n> So it might very well be that some interactive or ad hoc tools you're\n> using to manage the DB are your problem.\n> \n> Cheers,\n> Csaba.\n> \n> \n> \n", "msg_date": "Wed, 21 Jun 2006 13:21:05 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "Opps - that was confusing. 
The idle in transaction was from one box and the autovacuum was from another.\n\nSo, one question was answered, auto vacuum is running and selecting the tables but apparently not at the \nsame time as my app probably due to this \"idle in transaction\". I will track it down and see what the difference is.\n\nthanks\n\nOn Wednesday 21 June 2006 13:21, jody brownell wrote:\n> That is interesting.\n> \n> There is one thread keeping a transaction open it appears from ps\n> \n> postgres: app app xxx(42644) idle in transaction\n> \n> however, I created a test table \"t\" not configured in pg_autovacuum. I inserted a whack of rows and saw this.\n> \n> Jun 21 12:38:45 vanquish postgres[1525]: [8-1] LOG: autovacuum: processing database \"qradar\"\n> Jun 21 12:38:45 vanquish postgres[1525]: [9-1] DEBUG: autovac: will VACUUM ANALYZE t\n> Jun 21 12:38:45 vanquish postgres[1525]: [10-1] DEBUG: vacuuming \"public.t\"\n> Jun 21 12:38:48 vanquish postgres[1525]: [11-1] DEBUG: \"t\": removed 8104311 row versions in 51620 pages\n> Jun 21 12:38:48 vanquish postgres[1525]: [11-2] DETAIL: CPU 0.93s/0.70u sec elapsed 1.70 sec.\n> Jun 21 12:38:48 vanquish postgres[1525]: [12-1] DEBUG: \"t\": found 8104311 removable, 0 nonremovable row versions in 51620 pages\n> Jun 21 12:38:48 vanquish postgres[1525]: [12-2] DETAIL: 0 dead row versions cannot be removed yet.\n> \n> followed a later (after I did a similar insert op on target) by this\n> \n> Jun 21 13:00:46 vanquish postgres[3311]: [12-1] LOG: autovacuum: processing database \"qradar\"\n> Jun 21 13:00:46 vanquish postgres[3311]: [13-1] DEBUG: autovac: will VACUUM target\n> Jun 21 13:00:46 vanquish postgres[3311]: [14-1] DEBUG: vacuuming \"public.target\"\n> Jun 21 13:01:51 vanquish postgres[3311]: [15-1] DEBUG: index \"target_pkey\" now contains 1296817 row versions in 25116 pages\n> Jun 21 13:01:51 vanquish postgres[3311]: [15-2] DETAIL: 5645230 index row versions were removed.\n> Jun 21 13:01:51 vanquish postgres[3311]: [15-3] ^I116 index pages have been deleted, 60 are currently reusable.\n> Jun 21 13:01:51 vanquish postgres[3311]: [15-4] ^ICPU 1.29s/7.44u sec elapsed 48.65 sec.\n> Jun 21 13:02:19 vanquish postgres[3311]: [16-1] DEBUG: index \"target_network_key\" now contains 1296817 row versions in 19849 pages\n> Jun 21 13:02:19 vanquish postgres[3311]: [16-2] DETAIL: 5645230 index row versions were removed.\n> Jun 21 13:02:19 vanquish postgres[3311]: [16-3] ^I32 index pages have been deleted, 0 are currently reusable.\n> Jun 21 13:02:19 vanquish postgres[3311]: [16-4] ^ICPU 0.89s/6.61u sec elapsed 27.77 sec.\n> Jun 21 13:02:47 vanquish postgres[3311]: [17-1] DEBUG: index \"target_network_details_id_idx\" now contains 1296817 row versions in 23935 pages\n> Jun 21 13:02:47 vanquish postgres[3311]: [17-2] DETAIL: 5645230 index row versions were removed.\n> Jun 21 13:02:47 vanquish postgres[3311]: [17-3] ^I17814 index pages have been deleted, 0 are currently reusable.\n> Jun 21 13:02:47 vanquish postgres[3311]: [17-4] ^ICPU 0.93s/7.52u sec elapsed 27.36 sec.\n> Jun 21 13:03:23 vanquish postgres[3311]: [18-1] DEBUG: index \"target_tulu_idx\" now contains 1296817 row versions in 24341 pages\n> Jun 21 13:03:23 vanquish postgres[3311]: [18-2] DETAIL: 5645230 index row versions were removed.\n> Jun 21 13:03:23 vanquish postgres[3311]: [18-3] ^I18495 index pages have been deleted, 0 are currently reusable.\n> Jun 21 13:03:23 vanquish postgres[3311]: [18-4] ^ICPU 1.37s/5.38u sec elapsed 36.95 sec.\n> Jun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: \"target\": 
removed 5645231 row versions in 106508 pages\n> Jun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec elapsed 40.63 sec.\n> Jun 21 13:04:04 vanquish postgres[3311]: [20-1] DEBUG: \"target\": found 5645231 removable, 1296817 nonremovable row versions in 114701 pages\n> Jun 21 13:04:04 vanquish postgres[3311]: [20-2] DETAIL: 0 dead row versions cannot be removed yet.\n> \n> this was with the \"Idle in transaction\" though..... \n> \n> Ah HA! Wondering, my autovacuum naptime is 60 seconds, that is also the interval which I wake up and begin persistence.\n> Wondering if I am simply locking autovacuum out of the tables b/c they are on a similar timeline.\n> \n> I will try a 30 second naptime, if this is it, that should increase the likely hood of falling on the right side of the TX more often.\n> \n> make sense?\n> \n> \n> On Wednesday 21 June 2006 12:42, Csaba Nagy wrote:\n> > On Wed, 2006-06-21 at 17:27, jody brownell wrote:\n> > > Our application is broken down quite well. We have two main writing processes \n> > > writing to two separate sets of tables. No crossing over, nothign to prohibit the \n> > > vacuuming in the nature which you describe.\n> > \n> > It really doesn't matter what table are you touching, as it doesn't\n> > matter if you read or write either, what matters is how long ago was the\n> > last \"begin\" without \"commit\" or \"rollback\". VACUUM will not touch\n> > tuples which were deleted after the oldest not yet finished transaction\n> > started, regardless if that transaction touched the vacuumed table or\n> > not in any way...\n> > \n> > > My longest transaction on the tables in question are typically quite short until \n> > > of course they begin to bloat.\n> > \n> > Well, your application might be completely well behaved and still your\n> > DBA (or your favorite DB access tool for that matter) can leave open\n> > transactions in an interactive session. It never hurts to check if you\n> > actually have \"idle in transaction\" sessions. It happened a few times to\n> > us, some of those were bad coding on ad-hoc tools written by us, others\n> > were badly behaved DB access tools opening a transaction immediately\n> > after connect and after each successful command, effectively leaving an\n> > open transaction when leaving it open while having lunch...\n> > \n> > So it might very well be that some interactive or ad hoc tools you're\n> > using to manage the DB are your problem.\n> > \n> > Cheers,\n> > Csaba.\n> > \n> > \n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Wed, 21 Jun 2006 13:33:58 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "On Wed, 2006-06-21 at 18:21, jody brownell wrote:\n> That is interesting.\n> \n> There is one thread keeping a transaction open it appears from ps\n> \n> postgres: app app xxx(42644) idle in transaction\n\nThat shouldn't be a problem on itself, \"idle in transaction\" happens all\nthe time between 2 commands in the same transaction... you only have a\nproblem if you see the same PID always \"idle\", that means somebody left\nan open transaction and left for lunch.\n\n[snip]\n> this was with the \"Idle in transaction\" though..... 
\n\nThis probably means you don't have long running transactions currently.\nHowever, if you happen to have just one such long transaction, the dead\nspace accumulates and normal vacuum will not be able to clean that\nanymore. But I guess if you didn't find one now then you should take a\nlook at Tom's suggestion and bump up debug level to see if autovacuum\npicks your table at all...\n\n> Ah HA! Wondering, my autovacuum naptime is 60 seconds, that is also the interval which I wake up and begin persistence.\n> Wondering if I am simply locking autovacuum out of the tables b/c they are on a similar timeline.\n> \n> I will try a 30 second naptime, if this is it, that should increase the likely hood of falling on the right side of the TX more often.\n> \n> make sense?\n\nI don't think that's your problem... vacuum wouldn't be locked out by\nany activity which doesn't lock exclusively the table (and I guess\nyou're not doing that). If your persistence finishes quickly then that's\nnot the problem.\n\nOh, just occured to me... in order to use autovacuum you also need to\nenable the statistics collector on row level:\n\nstats_start_collector = on\nstats_row_level = on\n\nSee also:\nhttp://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n\nThis was not mentioned in the settings in your original post, so I guess\nyou didn't touch that, and I think they are disabled by default.\n\nIf this is disabled, you should enable it and \"pg_ctl reload ....\", that\nshould fix the problem.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 21 Jun 2006 18:36:39 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "that is exactly what I am seeing, one process, no change, always in idle while the others are constantly\nchanging their state.\n\nlooks like someone opened a tx then is blocking on a queue lock or something. dang.\n\n\n\nOn Wednesday 21 June 2006 13:36, Csaba Nagy wrote:\n> On Wed, 2006-06-21 at 18:21, jody brownell wrote:\n> > That is interesting.\n> > \n> > There is one thread keeping a transaction open it appears from ps\n> > \n> > postgres: app app xxx(42644) idle in transaction\n> \n> That shouldn't be a problem on itself, \"idle in transaction\" happens all\n> the time between 2 commands in the same transaction... you only have a\n> problem if you see the same PID always \"idle\", that means somebody left\n> an open transaction and left for lunch.\n> \n> [snip]\n> > this was with the \"Idle in transaction\" though..... \n> \n> This probably means you don't have long running transactions currently.\n> However, if you happen to have just one such long transaction, the dead\n> space accumulates and normal vacuum will not be able to clean that\n> anymore. But I guess if you didn't find one now then you should take a\n> look at Tom's suggestion and bump up debug level to see if autovacuum\n> picks your table at all...\n> \n> > Ah HA! Wondering, my autovacuum naptime is 60 seconds, that is also the interval which I wake up and begin persistence.\n> > Wondering if I am simply locking autovacuum out of the tables b/c they are on a similar timeline.\n> > \n> > I will try a 30 second naptime, if this is it, that should increase the likely hood of falling on the right side of the TX more often.\n> > \n> > make sense?\n> \n> I don't think that's your problem... 
vacuum wouldn't be locked out by\n> any activity which doesn't lock exclusively the table (and I guess\n> you're not doing that). If your persistence finishes quickly then that's\n> not the problem.\n> \n> Oh, just occured to me... in order to use autovacuum you also need to\n> enable the statistics collector on row level:\n> \n> stats_start_collector = on\n> stats_row_level = on\n> \n> See also:\n> http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n> \n> This was not mentioned in the settings in your original post, so I guess\n> you didn't touch that, and I think they are disabled by default.\n> \n> If this is disabled, you should enable it and \"pg_ctl reload ....\", that\n> should fix the problem.\n> \n> Cheers,\n> Csaba.\n> \n> \n> \n", "msg_date": "Wed, 21 Jun 2006 13:39:33 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "On Wed, 2006-06-21 at 18:39, jody brownell wrote:\n> that is exactly what I am seeing, one process, no change, always in idle while the others are constantly\n> changing their state.\n> \n> looks like someone opened a tx then is blocking on a queue lock or something. dang.\n\nDon't forget to check the statistics collector settings (see below), if\nthat is not correct then autovacuum is indeed not working correctly... I\nshould have put that on the beginning of the mail so you won't overlook\nit ;-)\n\n> > \n> > Oh, just occured to me... in order to use autovacuum you also need to\n> > enable the statistics collector on row level:\n> > \n> > stats_start_collector = on\n> > stats_row_level = on\n> > \n> > See also:\n> > http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n> > \n> > This was not mentioned in the settings in your original post, so I guess\n> > you didn't touch that, and I think they are disabled by default.\n> > \n> > If this is disabled, you should enable it and \"pg_ctl reload ....\", that\n> > should fix the problem.\n> > \n> > Cheers,\n> > Csaba.\n\n\n\n", "msg_date": "Wed, 21 Jun 2006 18:44:39 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "block and row are always configured on - they are my friend :)\n\nthanks\nOn Wednesday 21 June 2006 13:44, Csaba Nagy wrote:\n> On Wed, 2006-06-21 at 18:39, jody brownell wrote:\n> > that is exactly what I am seeing, one process, no change, always in idle while the others are constantly\n> > changing their state.\n> > \n> > looks like someone opened a tx then is blocking on a queue lock or something. dang.\n> \n> Don't forget to check the statistics collector settings (see below), if\n> that is not correct then autovacuum is indeed not working correctly... I\n> should have put that on the beginning of the mail so you won't overlook\n> it ;-)\n> \n> > > \n> > > Oh, just occured to me... 
in order to use autovacuum you also need to\n> > > enable the statistics collector on row level:\n> > > \n> > > stats_start_collector = on\n> > > stats_row_level = on\n> > > \n> > > See also:\n> > > http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n> > > \n> > > This was not mentioned in the settings in your original post, so I guess\n> > > you didn't touch that, and I think they are disabled by default.\n> > > \n> > > If this is disabled, you should enable it and \"pg_ctl reload ....\", that\n> > > should fix the problem.\n> > > \n> > > Cheers,\n> > > Csaba.\n> \n> \n> \n> \n", "msg_date": "Wed, 21 Jun 2006 13:49:49 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "On Wed, Jun 21, 2006 at 10:52:42AM -0300, jody brownell wrote:\n> A db wide vacuum full outputs this at the end.\n> \n> INFO: free space map contains 32848 pages in 159 relations\n> DETAIL: A total of 24192 page slots are in use (including overhead).\n> 24192 page slots are required to track all free space.\n> Current limits are: 4024000 page slots, 2000 relations, using 23705 KB.\n\nFWIW, the tail end of a db-wide vacuum FULL doesn't provide useful info\nabout FSM utilization, because it just made everything as compact as\npossible.\n\nMy suspicion is that it's taking too long for autovac to get around to\nthis database/table. Dropping the sleep time might help. I see that this\ntable is vacuumed with a delay setting of 0, but if there are other\ntables with a high delay that could pose a problem.\n\nGetting detailed output of what autovac is actually doing as Tom\nsuggested would be a good idea.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 13:57:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relation bloat" }, { "msg_contents": "On Wed, Jun 21, 2006 at 01:21:05PM -0300, jody brownell wrote:\n> Jun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: \"target\": removed 5645231 row versions in 106508 pages\n> Jun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec elapsed 40.63 sec.\n> Jun 21 13:04:04 vanquish postgres[3311]: [20-1] DEBUG: \"target\": found 5645231 removable, 1296817 nonremovable row versions in 114701 pages\n> Jun 21 13:04:04 vanquish postgres[3311]: [20-2] DETAIL: 0 dead row versions cannot be removed yet.\n\nSo the table contained 5.6M dead rows and 1.3M live rows.\n\nI think you should forget about having autovacuum keep this table\nin-check and add manual vacuum commands to your code. Autovac is\nintended to deal with 99% of use cases; this is pretty clearly in the 1%\nit can't handle.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 14:38:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "OK.... 
this was over a 12 - 16 hour period of not having anything done with it though right?\n\nI am assuming if autovacuum were active through out that period, we would be somewhat better off ...is that not accurate?\n\n\nOn Wednesday 21 June 2006 16:38, Jim C. Nasby wrote:\n> 5\n", "msg_date": "Wed, 21 Jun 2006 16:40:40 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "BTW, in production with a similar load - autovacuum with default out of the box \nsettings seems to work quite well.... \n\nI double checked this earlier today.\n\nOn Wednesday 21 June 2006 16:38, Jim C. Nasby wrote:\n> 5\n", "msg_date": "Wed, 21 Jun 2006 16:41:45 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "On Wed, Jun 21, 2006 at 04:41:45PM -0300, jody brownell wrote:\n> BTW, in production with a similar load - autovacuum with default out of the box \n> settings seems to work quite well.... \n> \n> I double checked this earlier today.\n\nSo what's different between production and the machine with the problem?\n\nThe issue with autovac is that it will only vacuum one table at a time,\nso if it's off vacuuming some other table for a long period of time it\nwon't be touching this table, which will be a problem. Now, if that's\nactually what's happening...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 17:11:50 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "Well, for one we did introduce a TX leak which was preventing autovac from running. I guess that was _the_ issue.\n\nI have since fixed it and an now testing.... looks much better, nothing concerning.... \n(fingers crossed until morning :)). debug logs are full of vac/anal of the tables... so, for now I am back \non track moving forward... Now that auto vac is actually running, the box is feeling slightly more sluggish.\n\nBTW - As soon as we deliver to QA, I will post the test case for the memory leak I was seeing the other day. \n(I have not forgotten, I am just swamped)\n\nThanks for the help all. Much appreciated. \nCheers.\n\nOn Wednesday 21 June 2006 19:11, Jim C. Nasby wrote:\n> On Wed, Jun 21, 2006 at 04:41:45PM -0300, jody brownell wrote:\n> > BTW, in production with a similar load - autovacuum with default out of the box \n> > settings seems to work quite well.... \n> > \n> > I double checked this earlier today.\n> \n> So what's different between production and the machine with the problem?\n> \n> The issue with autovac is that it will only vacuum one table at a time,\n> so if it's off vacuuming some other table for a long period of time it\n> won't be touching this table, which will be a problem. 
Now, if that's\n> actually what's happening...\n", "msg_date": "Wed, 21 Jun 2006 20:08:55 -0300", "msg_from": "\"jody brownell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "Hi, Csaba,\n\nCsaba Nagy wrote:\n\n> Well, your application might be completely well behaved and still your\n> DBA (or your favorite DB access tool for that matter) can leave open\n> transactions in an interactive session. It never hurts to check if you\n> actually have \"idle in transaction\" sessions. It happened a few times to\n> us, some of those were bad coding on ad-hoc tools written by us, others\n> were badly behaved DB access tools opening a transaction immediately\n> after connect and after each successful command, effectively leaving an\n> open transaction when leaving it open while having lunch...\n\nSome older JDBC driver versions had the bug that they always had an open\ntransaction, thus an application server having some pooled connections\nlingering around could block vacuum forever.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 23 Jun 2006 14:32:29 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Wed, Jun 21, 2006 at 01:21:05PM -0300, jody brownell wrote:\n>> Jun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: \"target\": removed 5645231 row versions in 106508 pages\n>> Jun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec elapsed 40.63 sec.\n>> Jun 21 13:04:04 vanquish postgres[3311]: [20-1] DEBUG: \"target\": found 5645231 removable, 1296817 nonremovable row versions in 114701 pages\n>> Jun 21 13:04:04 vanquish postgres[3311]: [20-2] DETAIL: 0 dead row versions cannot be removed yet.\n> \n> So the table contained 5.6M dead rows and 1.3M live rows.\n> \n> I think you should forget about having autovacuum keep this table\n> in-check and add manual vacuum commands to your code. Autovac is\n> intended to deal with 99% of use cases; this is pretty clearly in the 1%\n> it can't handle.\n\nMaybe your free space map is configured to small, can you watch out for\nlog messages telling to increase it?\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 23 Jun 2006 14:40:47 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tuning autovacuum - seeing lots of relationbloat" } ]
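The two checks discussed in this thread can both be run from psql. A minimal sketch against the 8.1-era catalogs (pg_stat_activity only shows query text when stats_command_string is on; the column set here is from that era and the output is illustrative):

SHOW stats_start_collector;
SHOW stats_row_level;

SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query LIKE '<IDLE> in transaction%'
ORDER BY query_start;

A backend that keeps showing up in this list with an old query_start is exactly the kind of forgotten open transaction that prevents VACUUM from reclaiming dead rows.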
[ { "msg_contents": "We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \nRHEL, postgres 8.1) and ported our old database over to it (single cpu, \n2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \nsome queries are super slow. One function in particular, which used to \ntake 15-30 minutes on the old server, has been running now for over 12 \nhours:\n BEGIN\n TRUNCATE stock.datacount;\n FOR rec IN SELECT itemID, item, hexValue FROM stock.activeitem LOOP\n histdate := (SELECT updatedate FROM stock.historical s WHERE \ns.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n IF histdate IS NOT NULL THEN\n funddate := (SELECT updatedate FROM stock.funddata s \nWHERE s.itemID=rec.itemID);\n techdate := (SELECT updatedate FROM stock.techsignals s \nWHERE s.itemID=rec.itemID);\n IF (histdate <> funddate) OR (histdate <> techdate) OR \n(funddate IS NULL) OR (techdate IS NULL) THEN\n counter := counter + 1;\n outrec.itemID := rec.itemID;\n outrec.item := rec.item;\n outrec.hexvalue := rec.hexvalue;\n RETURN NEXT outrec;\n END IF;\n END IF;\n END LOOP;\n INSERT INTO stock.datacount (itemcount) VALUES (counter);\n COPY stock.datacount TO ''/tmp/datacount'';\n RETURN;\n END;\n\n\"top\" shows:\nCPU states: cpu user nice system irq softirq iowait idle\n total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% 61.6%\nMem: 8152592k av, 8143012k used, 9580k free, 0k shrd, 179888k \nbuff\n 6342296k actv, 1206340k in_d, 137916k in_c\nSwap: 8385760k av, 259780k used, 8125980k free 7668624k \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n17027 postgres 25 0 566M 561M 560M R 24.9 7.0 924:34 1 postmaster\n\nI've likely set some parameter(s) to the wrong values, but I don't know \nwhich one(s). Here are my relevant postgresql.conf settings:\nshared_buffers = 70000\nwork_mem = 9192\nmaintenance_work_mem = 131072\nmax_fsm_pages = 70000\nfsync = off (temporarily, will be turned back on)\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\neffective_cache_size = 70000\n\n[root@new-server root]# cat /proc/sys/kernel/shmmax\n660000000\n\nWe want to put this into production soon, but this is a showstopper. Can \nanyone help me out with this?\n\n\nThanks\n\nRon St.Pierre\n", "msg_date": "Wed, 21 Jun 2006 08:37:51 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning New Server (slow function)" } ]
[ { "msg_contents": "We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \nRHEL, postgres 8.1) and ported our old database over to it (single cpu, \n2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \nsome queries are super slow. One function in particular, which used to \ntake 15-30 minutes on the old server, has been running now for over 12 \nhours:\n BEGIN\n TRUNCATE stock.datacount;\n FOR rec IN SELECT itemID, item, hexValue FROM stock.activeitem LOOP\n histdate := (SELECT updatedate FROM stock.historical s WHERE \ns.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n IF histdate IS NOT NULL THEN\n funddate := (SELECT updatedate FROM stock.funddata s \nWHERE s.itemID=rec.itemID);\n techdate := (SELECT updatedate FROM stock.techsignals s \nWHERE s.itemID=rec.itemID);\n IF (histdate <> funddate) OR (histdate <> techdate) OR \n(funddate IS NULL) OR (techdate IS NULL) THEN\n counter := counter + 1;\n outrec.itemID := rec.itemID;\n outrec.item := rec.item;\n outrec.hexvalue := rec.hexvalue;\n RETURN NEXT outrec;\n END IF;\n END IF;\n END LOOP;\n INSERT INTO stock.datacount (itemcount) VALUES (counter);\n COPY stock.datacount TO ''/tmp/datacount'';\n RETURN;\n END;\n\n\"top\" shows:\nCPU states: cpu user nice system irq softirq iowait idle\n total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% 61.6%\nMem: 8152592k av, 8143012k used, 9580k free, 0k shrd, 179888k \nbuff\n 6342296k actv, 1206340k in_d, 137916k in_c\nSwap: 8385760k av, 259780k used, 8125980k free 7668624k \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n17027 postgres 25 0 566M 561M 560M R 24.9 7.0 924:34 1 \npostmaster\n\nI've likely set some parameter(s) to the wrong values, but I don't know \nwhich one(s). Here are my relevant postgresql.conf settings:\nshared_buffers = 70000\nwork_mem = 9192\nmaintenance_work_mem = 131072\nmax_fsm_pages = 70000\nfsync = off (temporarily, will be turned back on)\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\neffective_cache_size = 70000\n\n[root@new-server root]# cat /proc/sys/kernel/shmmax\n660000000\n\nWe want to put this into production soon, but this is a showstopper. Can \nanyone help me out with this?\n\n\nThanks\n\nRon St.Pierre\n", "msg_date": "Wed, 21 Jun 2006 08:57:55 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning New Server (slow function)" }, { "msg_contents": "Ron St-Pierre <[email protected]> writes:\n> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \n> RHEL, postgres 8.1) and ported our old database over to it (single cpu, \n> 2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \n> some queries are super slow. One function in particular, which used to \n> take 15-30 minutes on the old server, has been running now for over 12 \n> hours:\n\nA fairly common gotcha in updating is to forget to ANALYZE all your\ntables after loading the data into the new server. My bet is that some\nof the queries in the function are using bad plans for lack of\nup-to-date statistics.\n\nIf ANALYZEing and then starting a fresh session (to get rid of the\nfunction's cached plans) doesn't help, you'll need to do some comparison\nof EXPLAIN plans between old and new server to try to figure out where\nthe problem is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 12:19:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning New Server (slow function) " } ]
[ { "msg_contents": "Howdy,\n\nDidn't see anything in the archives, so I thought I'd ask: has anyone \ndone any work to gauge the performance penalty of using DOMAINs? I'm \nthinking of something like Elein's email DOMAIN:\n\n http://www.varlena.com/GeneralBits/\n\nI figured that most simple domains that have a constraint check are \nno faster or slower than tables with constraints that validate a \nparticular column. Is that the case?\n\nBut I'm also interested in how Elein made the email domain case- \ninsensitive, since I'd like to have/create a truly case-insensitive \ntext type (ITEXT anyone?). The functions for the operator class there \nwere mainly written in SQL, and if it adds a significant overhead, \nI'm not sure it'd be a good idea to use that approach for a case- \ninsensitive text type, since I use it quite a lot in my apps, and \noften do LIKE queries against text data. Thoughts?\n\nMany TIA,\n\nDavid\n", "msg_date": "Wed, 21 Jun 2006 11:26:16 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of DOMAINs" }, { "msg_contents": "On Wed, Jun 21, 2006 at 11:26:16AM -0700, David Wheeler wrote:\n> Howdy,\n> \n> Didn't see anything in the archives, so I thought I'd ask: has anyone \n> done any work to gauge the performance penalty of using DOMAINs? I'm \n> thinking of something like Elein's email DOMAIN:\n> \n> http://www.varlena.com/GeneralBits/\n> \n> I figured that most simple domains that have a constraint check are \n> no faster or slower than tables with constraints that validate a \n> particular column. Is that the case?\n \nProbably. Only thing that might pose a difference is if you're doing a\nlot of manipulating of the domain that didn't involve table access;\npresumably PostgreSQL will perform the checks every time you cast\nsomething to a domain.\n\n> But I'm also interested in how Elein made the email domain case- \n> insensitive, since I'd like to have/create a truly case-insensitive \n> text type (ITEXT anyone?). The functions for the operator class there \n\nhttp://gborg.postgresql.org/project/citext/projdisplay.php\n\n> were mainly written in SQL, and if it adds a significant overhead, \n> I'm not sure it'd be a good idea to use that approach for a case- \n> insensitive text type, since I use it quite a lot in my apps, and \n> often do LIKE queries against text data. Thoughts?\n> \n> Many TIA,\n> \n> David\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 14:02:43 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of DOMAINs" }, { "msg_contents": "David Wheeler <[email protected]> writes:\n> Didn't see anything in the archives, so I thought I'd ask: has anyone \n> done any work to gauge the performance penalty of using DOMAINs?\n\nThere are some reports in the archives of particular usage patterns\nwhere they pretty much suck, because GetDomainConstraints() searches\npg_constraint every time it's called. We do what we can to avoid\ncalling that multiple times per query, but for something like a simple\nINSERT ... 
VALUES into a domain column, the setup overhead is still bad.\n\nI've been intending to try to fix things so that the search result can\nbe cached by typcache.c, but not gotten round to it. (The hard part,\nif anyone wants to tackle it, is figuring out a way to clear the cache\nentry when needed.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2006 16:08:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of DOMAINs " }, { "msg_contents": "David,\n\n> But I'm also interested in how Elein made the email domain case-\n> insensitive, since I'd like to have/create a truly case-insensitive\n> text type (ITEXT anyone?). The functions for the operator class there\n> were mainly written in SQL, and if it adds a significant overhead,\n> I'm not sure it'd be a good idea to use that approach for a case-\n> insensitive text type, since I use it quite a lot in my apps, and\n> often do LIKE queries against text data. Thoughts?\n\nWell, current case-insensitivity hacks definitely aren't compatible with \nLIKE as far as \"begins with\" indexes are concerned. Of course, floating \nLIKEs (%value%) are going to suck no matter what data type you're using.\n\nI created an operator for CI equality ... =~ ... which performs well on \nindexed columns. But it doesn't do \"begins with\".\n\nITEXT is a TODO, but there are reasons why it's harder than it looks.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 21 Jun 2006 18:19:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of DOMAINs" }, { "msg_contents": "\n> since I'd like to have/create a truly case-insensitive\n> text type (ITEXT anyone?).\n\nI haven't seen it mentioned in this thread yet, but have you looked \nat citext?\n\nhttp://gborg.postgresql.org/project/citext/projdisplay.php\n\nI don't have any experience with it, but perhaps it can do what \nyou're looking for.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Thu, 22 Jun 2006 11:24:22 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of DOMAINs" }, { "msg_contents": "On Jun 21, 2006, at 13:08, Tom Lane wrote:\n\n> There are some reports in the archives of particular usage patterns\n> where they pretty much suck, because GetDomainConstraints() searches\n> pg_constraint every time it's called. We do what we can to avoid\n> calling that multiple times per query, but for something like a simple\n> INSERT ... VALUES into a domain column, the setup overhead is still \n> bad.\n\nI assume that there's no domain thingy that you already have that \ncould cache it, eh?\n\nSorry, I ask this as someone who knows no C and less about \nPostgreSQL's internals.\n\nBest,\n\nDavid\n\n", "msg_date": "Thu, 22 Jun 2006 11:11:36 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of DOMAINs " }, { "msg_contents": "On Jun 21, 2006, at 18:19, Josh Berkus wrote:\n\n> Well, current case-insensitivity hacks definitely aren't compatible \n> with\n> LIKE as far as \"begins with\" indexes are concerned.\n\nYes, currently I use LOWER() for my indexes and for all LIKE, =, etc. \nqueries. This works well, but ORDER by of course isn't what I'd like. 
\nThat's one of the things that Elein's email domain addresses, albeit \nwith a USING keyword, which is unfortunate.\n\n> Of course, floating\n> LIKEs (%value%) are going to suck no matter what data type you're \n> using.\n\nYes, I know that. :-) I avoid that.\n\n> I created an operator for CI equality ... =~ ... which performs \n> well on\n> indexed columns. But it doesn't do \"begins with\".\n\nOops. So how could it perform well on indexed columns?\n\n> ITEXT is a TODO, but there are reasons why it's harder than it looks.\n\nI'm sure. I should bug potential future SoC students about it. ;-)\n\nBest,\n\nDavid\n", "msg_date": "Thu, 22 Jun 2006 11:18:12 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of DOMAINs" }, { "msg_contents": "On Jun 21, 2006, at 19:24, Michael Glaesemann wrote:\n\n> I haven't seen it mentioned in this thread yet, but have you looked \n> at citext?\n>\n> http://gborg.postgresql.org/project/citext/projdisplay.php\n>\n> I don't have any experience with it, but perhaps it can do what \n> you're looking for.\n\nYes, I've seen it. I haven't tried it, either. It'd be nice if it had \na compatible license with PostgreSQL, though.\n\nBest,\n\nDavid\n", "msg_date": "Thu, 22 Jun 2006 11:18:48 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of DOMAINs" } ]
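For the lower()-based approach David describes, a small sketch (the table and column names are made up, not from the thread): the plain expression index covers case-insensitive equality, and the text_pattern_ops variant keeps anchored LIKE searches indexable in a non-C locale.

CREATE TABLE people (email text);
CREATE INDEX people_email_lower_idx ON people ((lower(email)));
CREATE INDEX people_email_lower_ptn_idx
    ON people ((lower(email)) text_pattern_ops);

-- case-insensitive equality:
SELECT * FROM people WHERE lower(email) = lower('[email protected]');
-- "begins with", anchored LIKE can use the pattern_ops index:
SELECT * FROM people WHERE lower(email) LIKE 'dav%';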
[ { "msg_contents": "I have a really stupid question about top, what exactly is iowait CPU time?\n\nAlex\n\nI have a really stupid question about top, what exactly is iowait CPU time?Alex", "msg_date": "Wed, 21 Jun 2006 16:46:15 -0400", "msg_from": "\"Alex Turner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Quick question about top..." }, { "msg_contents": "On Wed, Jun 21, 2006 at 04:46:15PM -0400, Alex Turner wrote:\n> I have a really stupid question about top, what exactly is iowait CPU time?\n\nTime while the CPU is idle, but at least one I/O request is outstanding.\n\nIn other words, if you're at 100% I/O-wait, you're heavily I/O-bound and your\nprocessor is bored to death.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 21 Jun 2006 23:18:24 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quick question about top..." } ]
[ { "msg_contents": "We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \nRHEL, postgres 8.1) and ported our old database over to it (single cpu, \n2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \nsome queries are super slow. One function in particular, which used to \ntake 15-30 minutes on the old server, has been running now for over 12 \nhours:\n BEGIN\n TRUNCATE stock.datacount;\n FOR rec IN SELECT itemID, item, hexValue FROM stock.activeitem LOOP\n histdate := (SELECT updatedate FROM stock.historical s WHERE \ns.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n IF histdate IS NOT NULL THEN\n funddate := (SELECT updatedate FROM stock.funddata s WHERE \ns.itemID=rec.itemID);\n techdate := (SELECT updatedate FROM stock.techsignals s \nWHERE s.itemID=rec.itemID);\n IF (histdate <> funddate) OR (histdate <> techdate) OR \n(funddate IS NULL) OR (techdate IS NULL) THEN\n counter := counter + 1;\n outrec.itemID := rec.itemID;\n outrec.item := rec.item;\n outrec.hexvalue := rec.hexvalue;\n RETURN NEXT outrec;\n END IF;\n END IF;\n END LOOP;\n INSERT INTO stock.datacount (itemcount) VALUES (counter);\n COPY stock.datacount TO ''/tmp/datacount'';\n RETURN;\n END;\n\nnote: stock.activeitem contains about 75000 rows\n\n\n\"top\" shows:\nCPU states: cpu user nice system irq softirq iowait idle\n total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% 61.6%\nMem: 8152592k av, 8143012k used, 9580k free, 0k shrd, 179888k \nbuff\n 6342296k actv, 1206340k in_d, 137916k in_c\nSwap: 8385760k av, 259780k used, 8125980k free 7668624k \ncached\n\nPID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n17027 postgres 25 0 566M 561M 560M R 24.9 7.0 924:34 1 \npostmaster\n\nI've likely set some parameter(s) to the wrong values, but I don't know \nwhich one(s). Here are my relevant postgresql.conf settings:\nshared_buffers = 70000\nwork_mem = 9192\nmaintenance_work_mem = 131072\nmax_fsm_pages = 70000\nfsync = off (temporarily, will be turned back on)\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\neffective_cache_size = 70000\n\n[root@new-server root]# cat /proc/sys/kernel/shmmax\n660000000\n\nWe want to put this into production soon, but this is a showstopper. Can \nanyone help me out with this?\n\n\nThanks\n\nRon St.Pierre\n\n", "msg_date": "Wed, 21 Jun 2006 14:27:41 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning New Server (slow function)" }, { "msg_contents": "On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:\n> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \n> RHEL, postgres 8.1) and ported our old database over to it (single cpu, \n\nRAID *4*?\n\nIf you do any kind of updating at all, you're likely to be real unhappy\nwith that...\n\n> 2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \n> some queries are super slow. 
One function in particular, which used to \n> take 15-30 minutes on the old server, has been running now for over 12 \n> hours:\n> BEGIN\n> TRUNCATE stock.datacount;\n> FOR rec IN SELECT itemID, item, hexValue FROM stock.activeitem LOOP\n> histdate := (SELECT updatedate FROM stock.historical s WHERE \n> s.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n> IF histdate IS NOT NULL THEN\n> funddate := (SELECT updatedate FROM stock.funddata s WHERE \n> s.itemID=rec.itemID);\n> techdate := (SELECT updatedate FROM stock.techsignals s \n> WHERE s.itemID=rec.itemID);\n> IF (histdate <> funddate) OR (histdate <> techdate) OR \n> (funddate IS NULL) OR (techdate IS NULL) THEN\n> counter := counter + 1;\n> outrec.itemID := rec.itemID;\n> outrec.item := rec.item;\n> outrec.hexvalue := rec.hexvalue;\n> RETURN NEXT outrec;\n> END IF;\n> END IF;\n> END LOOP;\n> INSERT INTO stock.datacount (itemcount) VALUES (counter);\n> COPY stock.datacount TO ''/tmp/datacount'';\n> RETURN;\n> END;\n> \n> note: stock.activeitem contains about 75000 rows\n \nGetting EXPLAIN ANALYZE from the queries would be good. Adding debug\noutput via NOTICE to see how long each step is taking would be a good\nidea, too.\n\nOf course, even better would be to do away with the cursor...\n \n> \"top\" shows:\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% 61.6%\n> Mem: 8152592k av, 8143012k used, 9580k free, 0k shrd, 179888k \n> buff\n\nThe high system % (if I'm reading this correctly) makes me wonder if\nthis is some kind of locking issue.\n\n> 6342296k actv, 1206340k in_d, 137916k in_c\n> Swap: 8385760k av, 259780k used, 8125980k free 7668624k \n> cached\n> \n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n> 17027 postgres 25 0 566M 561M 560M R 24.9 7.0 924:34 1 \n> postmaster\n> \n> I've likely set some parameter(s) to the wrong values, but I don't know \n> which one(s). Here are my relevant postgresql.conf settings:\n> shared_buffers = 70000\n> work_mem = 9192\n> maintenance_work_mem = 131072\n> max_fsm_pages = 70000\n> fsync = off (temporarily, will be turned back on)\n> checkpoint_segments = 64\n> checkpoint_timeout = 1800\n> effective_cache_size = 70000\n> \n> [root@new-server root]# cat /proc/sys/kernel/shmmax\n> 660000000\n> \n> We want to put this into production soon, but this is a showstopper. Can \n> anyone help me out with this?\n> \n> \n> Thanks\n> \n> Ron St.Pierre\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 21 Jun 2006 17:21:42 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning New Server (slow function)" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:\n> \n>> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4, \n>> RHEL, postgres 8.1) and ported our old database over to it (single cpu, \n>> \n>\n> RAID *4*?\n> \noops, raid 5 (but we are getting good io throughput...)\n> If you do any kind of updating at all, you're likely to be real unhappy\n> with that...\n>\n> \n>> 2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however \n>> some queries are super slow. 
One function in particular, which used to \n>> take 15-30 minutes on the old server, has been running now for over 12 \n>> hours:\n>> BEGIN\n>> TRUNCATE stock.datacount;\n>> FOR rec IN SELECT itemID, item, hexValue FROM stock.activeitem LOOP\n>> histdate := (SELECT updatedate FROM stock.historical s WHERE \n>> s.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n>> IF histdate IS NOT NULL THEN\n>> funddate := (SELECT updatedate FROM stock.funddata s WHERE \n>> s.itemID=rec.itemID);\n>> techdate := (SELECT updatedate FROM stock.techsignals s \n>> WHERE s.itemID=rec.itemID);\n>> IF (histdate <> funddate) OR (histdate <> techdate) OR \n>> (funddate IS NULL) OR (techdate IS NULL) THEN\n>> counter := counter + 1;\n>> outrec.itemID := rec.itemID;\n>> outrec.item := rec.item;\n>> outrec.hexvalue := rec.hexvalue;\n>> RETURN NEXT outrec;\n>> END IF;\n>> END IF;\n>> END LOOP;\n>> INSERT INTO stock.datacount (itemcount) VALUES (counter);\n>> COPY stock.datacount TO ''/tmp/datacount'';\n>> RETURN;\n>> END;\n>>\n>> note: stock.activeitem contains about 75000 rows\n>> \n> \n> Getting EXPLAIN ANALYZE from the queries would be good. Adding debug\n> output via NOTICE to see how long each step is taking would be a good\n> idea, too.\n>\n> \nI set client_min_messages = debug2, log_min_messages = debug2 and \nlog_statement = 'all' and am running the query with EXPLAIN ANALYZE. I \ndon't know how long it will take until something useful returns, but I \nwill let it run for a while.\n> Of course, even better would be to do away with the cursor...\n> \n> \nHow would I rewrite it to do away with the cursor?\n>> \"top\" shows:\n>> CPU states: cpu user nice system irq softirq iowait idle\n>> total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% 61.6%\n>> Mem: 8152592k av, 8143012k used, 9580k free, 0k shrd, 179888k \n>> buff\n>> \n>\n> The high system % (if I'm reading this correctly) makes me wonder if\n> this is some kind of locking issue.\n>\n> \nBut it's the only postgres process running.\n>> 6342296k actv, 1206340k in_d, 137916k in_c\n>> Swap: 8385760k av, 259780k used, 8125980k free 7668624k \n>> cached\n>>\n>> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n>> 17027 postgres 25 0 566M 561M 560M R 24.9 7.0 924:34 1 \n>> postmaster\n>>\n>> I've likely set some parameter(s) to the wrong values, but I don't know \n>> which one(s). Here are my relevant postgresql.conf settings:\n>> shared_buffers = 70000\n>> work_mem = 9192\n>> maintenance_work_mem = 131072\n>> max_fsm_pages = 70000\n>> fsync = off (temporarily, will be turned back on)\n>> checkpoint_segments = 64\n>> checkpoint_timeout = 1800\n>> effective_cache_size = 70000\n>>\n>> [root@new-server root]# cat /proc/sys/kernel/shmmax\n>> 660000000\n>>\n>> We want to put this into production soon, but this is a showstopper. Can \n>> anyone help me out with this?\n>>\n>>\n>> Thanks\n>>\n>> Ron St.Pierre\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>> \n>\n> \n\n", "msg_date": "Wed, 21 Jun 2006 15:53:06 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning New Server (slow function)" }, { "msg_contents": "On Jun 21, 2006, at 5:53 PM, Ron St-Pierre wrote:\n> Jim C. 
Nasby wrote:\n>> On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:\n>>\n>>> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, \n>>> raid 4, RHEL, postgres 8.1) and ported our old database over to \n>>> it (single cpu,\n>>\n>> RAID *4*?\n>>\n> oops, raid 5 (but we are getting good io throughput...)\n\nJust remember that unless you have a really good battery-backed \ncontroller, writes to RAID5 pretty much suck.\n\n>>> BEGIN\n>>> TRUNCATE stock.datacount;\n>>> FOR rec IN SELECT itemID, item, hexValue FROM \n>>> stock.activeitem LOOP\n>>> histdate := (SELECT updatedate FROM stock.historical s \n>>> WHERE s.itemID=rec.itemID ORDER BY updatedate DESC LIMIT 1);\n>>> IF histdate IS NOT NULL THEN\n>>> funddate := (SELECT updatedate FROM stock.funddata s \n>>> WHERE s.itemID=rec.itemID);\n>>> techdate := (SELECT updatedate FROM \n>>> stock.techsignals s WHERE s.itemID=rec.itemID);\n>>> IF (histdate <> funddate) OR (histdate <> techdate) \n>>> OR (funddate IS NULL) OR (techdate IS NULL) THEN\n>>> counter := counter + 1;\n>>> outrec.itemID := rec.itemID;\n>>> outrec.item := rec.item;\n>>> outrec.hexvalue := rec.hexvalue;\n>>> RETURN NEXT outrec;\n>>> END IF;\n>>> END IF;\n>>> END LOOP;\n>>> INSERT INTO stock.datacount (itemcount) VALUES (counter);\n>>> COPY stock.datacount TO ''/tmp/datacount'';\n>>> RETURN;\n>>> END;\n> How would I rewrite it to do away with the cursor?\n\nSomething like...\n\nSELECT ...\n\tFROM (SELECT a...., f.updatedate AS funddate, t.updatedate AS \ntechdate, max(updatedate) hist_date\n\t\t\t\tFROM activeitem a\n\t\t\t\t\tJOIN historical h USING itemid\n\t\t\t\tGROUP BY a...., f.updatedate, t.updatedate) AS a\n\t\tLEFT JOIN funddate f USING itemid\n\t\tLEFT JOIN techsignals USING itemid\n\tWHERE f.updatedate <> hist_date OR t.updatedate <> hist_date OR \nf.updatedate IS NULL OR t.updatedate IS NULL\n;\n\nBTW, there's some trick that would let you include the NULL tests \nwith the other tests in the WHERE, but I can't remember it off the \ntop of my head...\n\n>>> \"top\" shows:\n>>> CPU states: cpu user nice system irq softirq \n>>> iowait idle\n>>> total 5.8% 0.6% 31.2% 0.0% 0.0% 0.5% \n>>> 61.6%\n>>> Mem: 8152592k av, 8143012k used, 9580k free, 0k shrd, \n>>> 179888k buff\n>>>\n>>\n>> The high system % (if I'm reading this correctly) makes me wonder if\n>> this is some kind of locking issue.\n>>\n>>\n> But it's the only postgres process running.\n\nSure, but PostgreSQL still acquires internal locks.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Thu, 22 Jun 2006 16:15:13 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning New Server (slow function)" } ]
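Jim's sketch above is explicitly a "something like" outline (funddate vs. funddata, USING without parentheses); a runnable version of the same set-based idea, using the table names from the original post and assuming stock.funddata and stock.techsignals hold at most one row per itemid (as the per-row subselects imply), might look like this:

SELECT a.itemid, a.item, a.hexvalue
FROM stock.activeitem a
JOIN (SELECT itemid, max(updatedate) AS histdate
      FROM stock.historical
      GROUP BY itemid) h ON h.itemid = a.itemid      -- histdate is never NULL here
LEFT JOIN stock.funddata    f ON f.itemid = a.itemid
LEFT JOIN stock.techsignals t ON t.itemid = a.itemid
WHERE f.updatedate IS NULL
   OR t.updatedate IS NULL
   OR f.updatedate <> h.histdate
   OR t.updatedate <> h.histdate;

One pass like this lets the planner pick hash or merge joins instead of issuing three index probes per activeitem row. (The NULL tests could also be folded into the inequalities with IS DISTINCT FROM, which is the trick Jim alludes to.)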
[ { "msg_contents": "I've recently configured a new high-performance database server:\n\t2xXeon 3.4G, 2G RAM, 4x15K SCSI disks in RAID 10, h/w RAID\n\nThis has been live for a couple of weeks.\n\nThe box is running Fedora Core 4.\n\nThe only thing running on this box is PostgreSQL 8.1.4 and some stub \napplications that handle the interface to Postgres (basically taking XML service \nrequests, translating into SQL and using libpq). The database is a backend for a \nbig web application. The web-server and processor intensive front-end run on a \nseparate server.\n\nPostgres has probably been running for 2 weeks now.\n\nI've just uploaded a CSV file that the web-application turns into the contents \ninto multiple requests to the database. Each row in the CSV file causes a few \ntransactions to fire. Bascially adding rows into a couple of table. The tables \nat the moment aren't huge (20,000 rows in on, 150,000 in the other).\n\nPerformance was appalling - taking 85 seconds to upload the CSV file and create \nthe records. A separate script to delete the rows took 45 seconds. While these \nactivities were taking place the Postgres process was using 97% CPU on the \nserver - nothing else much running.\n\nFor comparison, my test machine (750M Athlon, RedHat 8, 256M RAM, single IDE \nhard drive) created the records in 22 seconds and deleted them again in 17.\n\nI had autovacuum ON - but to make sure I did first a vacuum analyze (no \ndifference) then vacuum full (again no difference).\n\nI'd tweaked a couple of parameters in postgres.conf - the significant one I \nthought being random_page_cost, so I changed this back to default and did a \n'service postgresql reload' - no difference, but I wasn't sure whether this \ncould be changed via reload so I restarted Postgres.\n\nThe restart fixed the problem. The 85 second insert time dropped back down to 5 \nseconds!!!\n\nTo check whether the random_page_cost was making the difference I restored the \nold postgres.conf, restarted postgres and redid the upload. Rather suprisingly - \n the upload time was still at 5 seconds.\n\nAny thoughts? I find it hard to believe that Postgres performance could degrade \nover a couple of weeks. Read performance seemed to be fine. The postgres memory \nsize didn't seem to be huge. What else am I overlooking? What could I have \nchanged by simply restarting Postgres that could make such a drastic change in \nperformance?\n\nPete\n", "msg_date": "Thu, 22 Jun 2006 00:08:53 +0100", "msg_from": "Peter Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance - fixed by restart" }, { "msg_contents": "Peter Wilson <[email protected]> writes:\n> I'd tweaked a couple of parameters in postgres.conf - the significant one I \n> thought being random_page_cost, so I changed this back to default and did a \n> 'service postgresql reload' - no difference, but I wasn't sure whether this \n> could be changed via reload so I restarted Postgres.\n\n> The restart fixed the problem. The 85 second insert time dropped back down to 5 \n> seconds!!!\n\nUm, which parameters did you change *exactly*?\n\nAlso, depending on how you were submitting the queries, it's possible\nthat Postgres was using cached plans already made based on the old\nsettings. 
In that case the restart would've cleared the bad plans\n(but starting a fresh connection would've been sufficient for that).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 12:24:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance - fixed by restart " } ]
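To see which settings a session is actually running with after editing postgresql.conf, and which ones can change without a restart, a small sketch:

SELECT name, setting, context, source
FROM pg_settings
WHERE name IN ('random_page_cost', 'shared_buffers', 'work_mem');

SHOW random_page_cost;   -- what this session is really using

random_page_cost has context 'user', so a reload or a plain SET changes it; shared_buffers needs a full restart. Plans already cached inside an open connection (for example by plpgsql functions or PREPARE) only go away when that connection ends, which fits the behaviour Tom describes.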
[ { "msg_contents": "Hello,\n\n\n\nI am getting following error while inserting a row into the \"abc\" table:\n\n*ERROR: fmgr_info: function 2720768: cache lookup failed*\n\n* *\n\nTable \"abc\" has one trigger called \"abct\"\n\nDefinition is as follows:\n\n\n\nBEGIN;\n\n LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;\n\n\n\n create TRIGGER abct\n\n AFTER INSERT OR DELETE on abc\n\n FOR EACH ROW EXECUTE PROCEDURE abc_function();\n\n\n\nCOMMIT;\n\n\n\nabc_function() updates entry from the \"xyz\" table for every insert and\ndelete operations on table \"abc\".\n\n\n\n\"xyz\" table maintains the count of total number of rows in table \"abc\"\n\n\n\nCurrently \"abc\" table contains 1000090 rows. And same count is available in\ntable \"xyz\".\n\nBut now I am not able to insert any records into the \"abc\" table because of\nabove mentioned error.\n\n\n\nPlease provide me some help regarding this.\n\n\n\nThanks,\n\nSoni\n\nHello,\n \nI am getting following error while inserting a row into the \"abc\" table:\nERROR:  fmgr_info: function 2720768: cache lookup failed\n\n \nTable \"abc\" has one trigger called \"abct\"\nDefinition is as follows:\n \nBEGIN;\n   LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;\n \n   create TRIGGER abct\n      AFTER INSERT OR DELETE on abc \n      FOR EACH ROW EXECUTE PROCEDURE abc_function();\n \nCOMMIT;\n \nabc_function() updates entry from the \"xyz\" table for every insert and delete operations on table \"abc\".\n \n\"xyz\" table maintains the count of total number of rows in table \"abc\" \n \nCurrently \"abc\" table contains 1000090 rows. And same count is available in table \"xyz\". \nBut now I am not able to insert any records into the \"abc\" table because of above mentioned error.\n \nPlease provide me some help regarding this.\n \nThanks,\nSoni", "msg_date": "Thu, 22 Jun 2006 10:28:11 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding ERROR: fmgr_info: function 2720768: cache lookup failed" }, { "msg_contents": "\"soni de\" <[email protected]> writes:\n> I am getting following error while inserting a row into the \"abc\" table:\n> *ERROR: fmgr_info: function 2720768: cache lookup failed*\n\nWhat PG version is this? (I can tell from the spelling of the error\nmessage that it's older than 7.4.) If it's pre-7.3 then the answer is\nprobably that you dropped and re-created the function, and now need to\ndrop and re-create the trigger to match. 7.3 shouldn't have let you\ndrop a function that has a trigger depending on it, though.\n\nBTW this seems a bit off-topic for pgsql-performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 10:47:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding ERROR: fmgr_info: function 2720768: cache lookup failed" } ]
[ { "msg_contents": "How to speed the following query? It seems to run forever.\n\nexplain SELECT\nbilkaib.DB,\nCASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,\nCASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS \ndb2objekt,\nCASE WHEN dbkonto.objekt3='+' THEN bilkaib.DB3OBJEKT ELSE '' END AS \ndb3objekt,\nCASE WHEN dbkonto.objekt4='+' THEN bilkaib.DB4OBJEKT ELSE '' END AS \ndb4objekt,\nCASE WHEN dbkonto.objekt5='+' THEN bilkaib.DB5OBJEKT ELSE '' END AS \ndb5objekt,\nCASE WHEN dbkonto.objekt6='+' THEN bilkaib.DB6OBJEKT ELSE '' END AS \ndb6objekt,\nCASE WHEN dbkonto.objekt7='+' THEN bilkaib.DB7OBJEKT ELSE '' END AS \ndb7objekt,\nCASE WHEN dbkonto.objekt8='+' THEN bilkaib.DB8OBJEKT ELSE '' END AS \ndb8objekt,\nCASE WHEN dbkonto.objekt9='+' THEN bilkaib.DB9OBJEKT ELSE '' END AS \ndb9objekt,\nbilkaib.CR,\nCASE WHEN crkonto.objekt1='+' THEN bilkaib.crOBJEKT ELSE '' END AS crobjekt,\nCASE WHEN crkonto.objekt2='+' THEN bilkaib.cr2OBJEKT ELSE '' END AS \ncr2objekt,\nCASE WHEN crkonto.objekt3='+' THEN bilkaib.cr3OBJEKT ELSE '' END AS \ncr3objekt,\nCASE WHEN crkonto.objekt4='+' THEN bilkaib.cr4OBJEKT ELSE '' END AS \ncr4objekt,\nCASE WHEN crkonto.objekt5='+' THEN bilkaib.cr5OBJEKT ELSE '' END AS \ncr5objekt,\nCASE WHEN crkonto.objekt6='+' THEN bilkaib.cr6OBJEKT ELSE '' END AS \ncr6objekt,\nCASE WHEN crkonto.objekt7='+' THEN bilkaib.cr7OBJEKT ELSE '' END AS \ncr7objekt,\nCASE WHEN crkonto.objekt8='+' THEN bilkaib.cr8OBJEKT ELSE '' END AS \ncr8objekt,\nCASE WHEN crkonto.objekt9='+' THEN bilkaib.cr9OBJEKT ELSE '' END AS \ncr9objekt,\nbilkaib.RAHA,\nCASE WHEN crkonto.klienkaupa OR dbkonto.klienkaupa\n OR crkonto.tyyp IN ('K','I') OR dbkonto.tyyp IN ('K','I')\nTHEN\n bilkaib.KLIENT ELSE '' END AS klient,\n\nbilkaib.EXCHRATE,\n\nCASE WHEN crkonto.klienkaupa OR dbkonto.klienkaupa\n OR crkonto.tyyp IN ('K','I') OR dbkonto.tyyp IN ('K','I')\nTHEN\n '' ELSE '' END AS kliendinim, -- 24.\n\nCASE WHEN crkonto.arvekaupa OR dbkonto.arvekaupa\n OR (bilkaib.cr<>'00' AND crkonto.tyyp='K')\n OR (bilkaib.db<>'00' AND dbkonto.tyyp='K')\nTHEN bilkaib.doknr ELSE CAST('' AS CHAR(25) ) END AS doknr\n\n,CASE WHEN bilkaib.raha='EEK' THEN CAST('20060101' AS DATE) ELSE \nbilkaib.kuupaev END AS kuupaev\n,SUM(bilkaib.summa) AS summa\n,CAST( 0 as numeric(12,2)) as rhsumma\n from BILKAIB join KONTO CRKONTO ON bilkaib.cr=crkonto.kontonr AND\n crkonto.iseloom='A'\n join KONTO DBKONTO ON bilkaib.db=dbkonto.kontonr AND\n dbkonto.iseloom='A'\n where\n bilkaib.kuupaev BETWEEN '2006-01-01' AND '2006-12-31'\n GROUP BY \n1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26\n\n\"GroupAggregate (cost=83038.02..103020.42 rows=124890 width=759)\"\n\" -> Sort (cost=83038.02..83350.25 rows=124890 width=759)\"\n\" Sort Key: bilkaib.db, CASE WHEN (dbkonto.objekt1 = '+'::bpchar) \nTHEN bilkaib.dbobjekt ELSE ''::bpchar END, CASE WHEN (dbkonto.objekt2 = \n'+'::bpchar) THEN bilkaib.db2objekt ELSE ''::bpchar END, CASE WHEN \n(dbkonto.objekt3 = '+'::bpchar) THEN bilkaib. 
(..)\"\n\" -> Hash Join (cost=41.71..23348.23 rows=124890 width=759)\"\n\" Hash Cond: (\"outer\".cr = \"inner\".kontonr)\"\n\" -> Hash Join (cost=20.86..11676.02 rows=144696 width=707)\"\n\" Hash Cond: (\"outer\".db = \"inner\".kontonr)\"\n\" -> Seq Scan on bilkaib (cost=0.00..9369.99 \nrows=167643 width=655)\"\n\" Filter: ((kuupaev >= '2006-01-01'::date) AND \n(kuupaev <= '2006-12-31'::date))\"\n\" -> Hash (cost=20.29..20.29 rows=227 width=66)\"\n\" -> Seq Scan on konto dbkonto (cost=0.00..20.29 \nrows=227 width=66)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\" -> Hash (cost=20.29..20.29 rows=227 width=66)\"\n\" -> Seq Scan on konto crkonto (cost=0.00..20.29 \nrows=227 width=66)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\n\nIf I only replace column expressions with constant numbers, it runs fast:\n\nexplain analyze SELECT 1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6\n,SUM(bilkaib.summa) AS summa\n,CAST( 0 as numeric(12,2)) as rhsumma\n from BILKAIB join KONTO CRKONTO ON bilkaib.cr=crkonto.kontonr AND\n crkonto.iseloom='A'\n join KONTO DBKONTO ON bilkaib.db=dbkonto.kontonr AND\n dbkonto.iseloom='A'\n where\n bilkaib.kuupaev BETWEEN '2006-01-01' AND '2006-12-31'\n GROUP BY \n1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26\n\n\n\"HashAggregate (cost=22099.33..22099.34 rows=1 width=11) (actual \ntime=4518.820..4518.824 rows=1 loops=1)\"\n\" -> Hash Join (cost=41.71..13669.25 rows=124890 width=11) (actual \ntime=4.347..3445.650 rows=167349 loops=1)\"\n\" Hash Cond: (\"outer\".cr = \"inner\".kontonr)\"\n\" -> Hash Join (cost=20.86..11676.02 rows=144696 width=25) (actual \ntime=2.165..2076.951 rows=167349 loops=1)\"\n\" Hash Cond: (\"outer\".db = \"inner\".kontonr)\"\n\" -> Seq Scan on bilkaib (cost=0.00..9369.99 rows=167643 \nwidth=39) (actual time=0.012..725.813 rows=167349 loops=1)\"\n\" Filter: ((kuupaev >= '2006-01-01'::date) AND (kuupaev \n<= '2006-12-31'::date))\"\n\" -> Hash (cost=20.29..20.29 rows=227 width=14) (actual \ntime=2.112..2.112 rows=227 loops=1)\"\n\" -> Seq Scan on konto dbkonto (cost=0.00..20.29 \nrows=227 width=14) (actual time=0.011..1.126 rows=227 loops=1)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\" -> Hash (cost=20.29..20.29 rows=227 width=14) (actual \ntime=2.149..2.149 rows=227 loops=1)\"\n\" -> Seq Scan on konto crkonto (cost=0.00..20.29 rows=227 \nwidth=14) (actual time=0.022..1.152 rows=227 loops=1)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\"Total runtime: 4519.063 ms\"\n\nPostgres 8.1 on Gentoo Linux.\n\nAndrus. \n\n\n", "msg_date": "Thu, 22 Jun 2006 21:22:44 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "why group expressions cause query to run forever" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> How to speed the following query? It seems to run forever.\n> explain SELECT\n> bilkaib.DB,\n> CASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,\n> CASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS \n> db2objekt,\n> CASE WHEN dbkonto.objekt3='+' THEN bilkaib.DB3OBJEKT ELSE '' END AS \n> db3objekt,\n> ...\n> GROUP BY \n> 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26\n\nI think the problem is probably that you're sorting two dozen CHAR\ncolumns, and that in many of the rows all these entries are '' forcing\nthe sort code to compare all two dozen columns (not so)? So the sort\nends up doing lots and lots and lots of CHAR comparisons. Which can\nbe slow, especially in non-C locales. 
What's your locale setting?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 15:30:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why group expressions cause query to run forever " }, { "msg_contents": "Tom,\n\nthank you.\n\n> I think the problem is probably that you're sorting two dozen CHAR\n> columns, and that in many of the rows all these entries are '' forcing\n> the sort code to compare all two dozen columns (not so)? So the sort\n> ends up doing lots and lots and lots of CHAR comparisons. Which can\n> be slow, especially in non-C locales. What's your locale setting?\n\nshow all returns\n\n\"lc_collate\";\"en_US.UTF-8\"\n\"lc_ctype\";\"en_US.UTF-8\"\n\"lc_messages\";\"C\"\n\"lc_monetary\";\"et_EE.utf-8\"\n\"lc_numeric\";\"et_EE.utf-8\"\n\"lc_time\";\"et_EE.utf-8\"\n\nHow to speed up this query ?\nIs it possible to force the binary comparison for grouping ?\nShould I concatenate all the char columns into single column ?\n\nAndrus.\n\n\n\n", "msg_date": "Mon, 26 Jun 2006 11:48:47 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why group expressions cause query to run forever" }, { "msg_contents": "> I think the problem is probably that you're sorting two dozen CHAR\n> columns, and that in many of the rows all these entries are '' forcing\n> the sort code to compare all two dozen columns (not so)?\n\nYes, most of columns return empty strings.\n\nI changed empty strings to null, casted to varchar and simplyfied the \nstatment.\nHowever, this select statement runs forever.\n\nAny idea how to speed it up ?\n\nAndrus.\n\nSELECT\nbilkaib.DB,\nCASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE null \nEND::VARCHAR(10) AS dbobjekt,\nCASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE null \nEND::VARCHAR(10) AS db2objekt,\nCASE WHEN dbkonto.objekt3='+' THEN bilkaib.DB3OBJEKT ELSE null \nEND::VARCHAR(10) AS db3objekt,\nCASE WHEN dbkonto.objekt4='+' THEN bilkaib.DB4OBJEKT ELSE null \nEND::VARCHAR(10) AS db4objekt,\nCASE WHEN dbkonto.objekt5='+' THEN bilkaib.DB5OBJEKT ELSE null \nEND::VARCHAR(10) AS db5objekt,\nCASE WHEN dbkonto.objekt6='+' THEN bilkaib.DB6OBJEKT ELSE null \nEND::VARCHAR(10) AS db6objekt,\nCASE WHEN dbkonto.objekt7='+' THEN bilkaib.DB7OBJEKT ELSE null \nEND::VARCHAR(10) AS db7objekt,\nCASE WHEN dbkonto.objekt8='+' THEN bilkaib.DB8OBJEKT ELSE null \nEND::VARCHAR(10) AS db8objekt,\nCASE WHEN dbkonto.objekt9='+' THEN bilkaib.DB9OBJEKT ELSE null \nEND::VARCHAR(10) AS db9objekt\n from BILKAIB join KONTO CRKONTO ON bilkaib.cr=crkonto.kontonr\n join KONTO DBKONTO ON bilkaib.db=dbkonto.kontonr\n where\n bilkaib.kuupaev BETWEEN '2006-01-01' AND '2006-12-31'\n GROUP BY 1,2,3,4,5,6,7,8,9,10\n\n\n", "msg_date": "Tue, 27 Jun 2006 20:50:23 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why group expressions cause query to run forever" } ]
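The fast plan above uses HashAggregate while the slow one sorts two dozen CHAR columns under an en_US.UTF-8 collation. One thing worth trying before restructuring the query, offered only as a guess: give the session enough work_mem that the planner can hash-aggregate instead of sorting (8.1 takes the value in kilobytes).

SET work_mem = 131072;   -- 128 MB for this session only; assumes the RAM is available
-- re-run EXPLAIN ANALYZE on the grouped query here
RESET work_mem;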
[ { "msg_contents": "Hello,\n\n\nI see in the documentation that we can obtain the number of pages for a table with the view named pg_class.\n\nI would want if it is possible for each pages of a table to have the occupation of blocs in percentage in order to see if the page is good full or not.\n\nI don’t find anything in the doc and the archive.\n\nBest regards,\n\n\nSorry for my english\n\n\n", "msg_date": "Fri, 23 Jun 2006 12:12:03 +0200 (CEST)", "msg_from": "luchot <[email protected]>", "msg_from_op": true, "msg_subject": "Occupation bloc in pages of table" }, { "msg_contents": "luchot <[email protected]> writes:\n> I would want if it is possible for each pages of a table to have the occupation of blocs in percentage in order to see if the page is good full or not.\n\nThere is not any magic way of getting that information, but you could\nmodify contrib/pgstattuple to produce such a report.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2006 12:09:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occupation bloc in pages of table " } ]
[ { "msg_contents": "Hi for all,\n \n Please,\n Normaly when some SGBD exec the algoritm Nest-Loop Join, there are diferences about the space(buffer) for outer table and inner table. So, I want know where Postgres define the number for this spaces (buffers)? And can I change it?\n \n This is very important to me.\n \n Thanks, \n I hope that somebody can help me.\n By\n Daniel\n \n \n \t\t\n---------------------------------\n Abra sua conta no Yahoo! Mail - 1GB de espa�o, alertas de e-mail no celular e anti-spam realmente eficaz. \nHi for all, Please, Normaly when some SGBD exec the algoritm Nest-Loop Join, there are diferences about the space(buffer)  for  outer table  and  inner  table. So, I want know where Postgres define the number for this spaces (buffers)? And can I change it? This is very important to me. Thanks, I hope that somebody can help me. By Daniel \n\nAbra sua conta no Yahoo! Mail - 1GB de espa�o, alertas de e-mail no celular e anti-spam realmente eficaz.", "msg_date": "Fri, 23 Jun 2006 15:32:35 -0300 (ART)", "msg_from": "Daniel Xavier de Sousa <[email protected]>", "msg_from_op": true, "msg_subject": "Buffers to Nest Loop Join" } ]
[ { "msg_contents": "\nHello,\n\nI´m have some problems with a temporary table, i need create a table, insert\nsome values, make a select and at end of transaction the table must droped,\nbut after i created a table there not more exist, is this normal ?\n\nHow to reproduce :\n\n\n\tCREATE TEMP TABLE cademp (\n\t codemp INTEGER,\n\t codfil INTEGER,\n\t nomemp varchar(50)\n\t) ON COMMIT DROP;\n\n\tINSERT INTO cademp (codemp, codfil, nomemp) values (1,1,'TESTE');\n\tINSERT INTO cademp (codemp, codfil, nomemp) values (1,2,'TESTE1');\n\t\n\tSelect * from cademp;\n\n\n\nIn this case, the table cademp doesn´t exist at the first insert, in the\nsame transaction.\n\n\n\n\nTks,\n\nFranklin\n\n", "msg_date": "Fri, 23 Jun 2006 18:58:22 -0300", "msg_from": "\"Franklin Haut\" <[email protected]>", "msg_from_op": true, "msg_subject": "Temporary table" }, { "msg_contents": "\"Franklin Haut\" <[email protected]> writes:\n> How to reproduce :\n\n> \tCREATE TEMP TABLE cademp (\n> \t codemp INTEGER,\n> \t codfil INTEGER,\n> \t nomemp varchar(50)\n> \t) ON COMMIT DROP;\n\n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,1,'TESTE');\n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,2,'TESTE1');\n\t\n> \tSelect * from cademp;\n\nYou need a BEGIN/COMMIT around that, or else rethink using ON COMMIT DROP.\nAs is, the temp table goes away instantly when the CREATE commits.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jun 2006 18:04:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Temporary table " }, { "msg_contents": "Franklin Haut wrote:\n> Hello,\n> \n> I´m have some problems with a temporary table, i need create a table,\n> insert some values, make a select and at end of transaction the table\n> must droped, but after i created a table there not more exist, is\n> this normal ? \n> \n> How to reproduce :\n> \n> \n> \tCREATE TEMP TABLE cademp (\n> \t codemp INTEGER,\n> \t codfil INTEGER,\n> \t nomemp varchar(50)\n> \t) ON COMMIT DROP;\n> \n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,1,'TESTE');\n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,2,'TESTE1');\n> \n> \tSelect * from cademp;\n> \n> \n> \n> In this case, the table cademp doesn´t exist at the first insert, in\n> the same transaction.\n> \n\nIt is NOT the same transaction. 
By default, each STATEMENT is it's own\ntransaction.\n\nStick a BEGIN; before the create table, and a commit; after the select.\n\nLarry Rosenman\n> \n> \n> \n> Tks,\n> \n> Franklin\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 6: explain analyze is your\n> friend \n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 512-248-2683 E-Mail: [email protected]\nUS Mail: 430 Valona Loop, Round Rock, TX 78681-3893\n\n", "msg_date": "Fri, 23 Jun 2006 17:08:05 -0500", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Temporary table" }, { "msg_contents": "Ok, it works.\n\n\nThanks\n\nFranklin \n\n-----Mensagem original-----\nDe: Larry Rosenman [mailto:[email protected]] \nEnviada em: sexta-feira, 23 de junho de 2006 19:08\nPara: 'Franklin Haut'; [email protected]\nAssunto: RE: [PERFORM] Temporary table\n\nFranklin Haut wrote:\n> Hello,\n> \n> I´m have some problems with a temporary table, i need create a table, \n> insert some values, make a select and at end of transaction the table \n> must droped, but after i created a table there not more exist, is this \n> normal ?\n> \n> How to reproduce :\n> \n> \n> \tCREATE TEMP TABLE cademp (\n> \t codemp INTEGER,\n> \t codfil INTEGER,\n> \t nomemp varchar(50)\n> \t) ON COMMIT DROP;\n> \n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,1,'TESTE');\n> \tINSERT INTO cademp (codemp, codfil, nomemp) values (1,2,'TESTE1');\n> \n> \tSelect * from cademp;\n> \n> \n> \n> In this case, the table cademp doesn´t exist at the first insert, in \n> the same transaction.\n> \n\nIt is NOT the same transaction. By default, each STATEMENT is it's own\ntransaction.\n\nStick a BEGIN; before the create table, and a commit; after the select.\n\nLarry Rosenman\n> \n> \n> \n> Tks,\n> \n> Franklin\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 6: explain analyze is your \n> friend\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 512-248-2683 E-Mail: [email protected]\nUS Mail: 430 Valona Loop, Round Rock, TX 78681-3893\n\n", "msg_date": "Fri, 23 Jun 2006 21:27:40 -0300", "msg_from": "\"Franklin Haut\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: Temporary table" } ]
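For completeness, the working sequence looks like this (same table definition as in the original post); with ON COMMIT DROP the temporary table lives exactly as long as the explicit transaction:

    BEGIN;

    CREATE TEMP TABLE cademp (
        codemp INTEGER,
        codfil INTEGER,
        nomemp varchar(50)
    ) ON COMMIT DROP;

    INSERT INTO cademp (codemp, codfil, nomemp) VALUES (1, 1, 'TESTE');
    INSERT INTO cademp (codemp, codfil, nomemp) VALUES (1, 2, 'TESTE1');

    SELECT * FROM cademp;

    COMMIT;  -- cademp is dropped automatically at this point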
[ { "msg_contents": "Here is a subtle question about SQL. I have a one-to-many pair of tables (call them \"P\" and \"C\" for parent and child). For each row of P, there are many rows in C with data, and I want to sort P on the min(c.data). The basic query is simple:\n\n select p_id, min(data) as m from c group by p_id order by m;\n\nNow the problem: I also want to store this, in sorted order, as a \"hitlist\", so I have a table like this:\n\n create table hitlist(p_id integer, sortorder integer);\n\nand a sequence to go with it. The first thing I tried doesn't work:\n\n insert into hitlist(p_id, sortorder)\n (select p_id, nextval('hitlist_seq') from\n (select p_id, min(data) as m from c group by p_id order by m);\n\nApparently, the sort order returned by the innermost select is NOT maintained as you go through the next select statement -- the rows seem to come out in random order. This surprised me. But in thinking about the definition of SQL itself, I guess there's no guarantee that sort order is maintained across sub-selects. I was caught by this because in Oracle, this same query works \"correctly\" (i.e. the hitlist ends up in sorted order), but I suspect that was just the luck of their implementation.\n\nCan anyone confirm this, that the sort order is NOT guaranteed to be maintained through layers of SELECT statements?\n\nThe obvious solution is to make the hitlist.sortorder column have the nextval() as its default and eliminate the first sub-select. But I thought the two would be equivalent.\n\nThanks,\nCraig\n", "msg_date": "Sun, 25 Jun 2006 11:15:02 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sort order in sub-select" } ]
[ { "msg_contents": "Thank you for you quick answer i will try to see what i can do whith the file contrib/pgstattuple\n\nBest regards,\n\n> Message du 23/06/06 à 18h11\n> De : \"Tom Lane\" <[email protected]>\n> A : [email protected]\n> Copie à : [email protected]\n> Objet : Re: [PERFORM] Occupation bloc in pages of table \n> \n> luchot <[email protected]> writes:\n> > I would want if it is possible for each pages of a table to have the occupation of blocs in percentage in order to see if the page is good full or not.\n> \n> There is not any magic way of getting that information, but you could\n> modify contrib/pgstattuple to produce such a report.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n>\n\n", "msg_date": "Mon, 26 Jun 2006 09:52:16 +0200 (CEST)", "msg_from": "luchot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occupation bloc in pages of table" } ]
[ { "msg_contents": "Thank you for you quick answer i will try to see what i can do whith the file contrib/pgstattuple\n\nBest regards,\n\n\n> Message du 23/06/06 à 18h11\n> De : \"Tom Lane\" <[email protected]>\n> A : [email protected]\n> Copie à : [email protected]\n> Objet : Re: [PERFORM] Occupation bloc in pages of table \n> \n> luchot <[email protected]> writes:\n> > I would want if it is possible for each pages of a table to have the occupation of blocs in percentage in order to see if the page is good full or not.\n> \n> There is not any magic way of getting that information, but you could\n> modify contrib/pgstattuple to produce such a report.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n>\n\n", "msg_date": "Mon, 26 Jun 2006 10:03:48 +0200 (CEST)", "msg_from": "luchot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occupation bloc in pages of table" } ]
[ { "msg_contents": "\n> De : \"Tom Lane\" <[email protected]>\n> There is not any magic way of getting that information, but you could\n> modify contrib/pgstattuple to produce such a report.\n\nThank you for you speed answer , i will try to see what i can do in contrib/pgstattuple \n\nBest regards,\n\nLuc \n\n", "msg_date": "Mon, 26 Jun 2006 10:07:05 +0200 (CEST)", "msg_from": "luchot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occupation bloc in pages of table" } ]
[ { "msg_contents": "Hello all\n\nI have a big amount of phone calls data (280M records, 25 Gbytes).The best decision\nfor this task is partitioning and I use it now. But at first I tried put all\ndata in a single table indexed by call date&time. Because of nature of the\ndata the records clustered by date and near ordered by time.\n\nThe table definition:\n\nCREATE DOMAIN datetime AS timestamp NOT NULL;\nCREATE DOMAIN cause AS int2 DEFAULT 16 NOT NULL;\nCREATE DOMAIN conn_num AS varchar(34);\nCREATE DOMAIN dur AS int4 NOT NULL;\nCREATE DOMAIN lno AS int2;\nCREATE DOMAIN tgrp AS char(6);\n\nCREATE TABLE conn\n(\n datetime datetime,\n anum conn_num,\n bnum conn_num,\n dur dur,\n itgrp tgrp,\n ilno lno,\n otgrp tgrp,\n olno lno,\n cause cause\n) \nWITHOUT OIDS;\n\nCREATE INDEX conn_dt\n ON conn\n USING btree\n (datetime);\n\nUsual tasks on the table are export and search calls on one or more days. This\ncause the scan of 400K or more records, selected by 'conn_dt' index. The best data\naccess path is a bitmap heap scan. Tests I've made showed incredible bitmap scan\nperfomance almost equal to a seq scan. But PG always prefered simple index scan\nwhich is 20 times slower. Digging in the PG internals brought me to\nindexCorrelation. For the 'datetime' column it was about 0,999999. But why despite\nof this the index scan was so slow? In the next step I ran\n\nselect ctid from conn where ... order by datetime;\n\nResult showed up that there were no page seq scan at all - true random access\nonly.\nThe simple model which can explain the situation: the sequence of numbers 2, 1,\n4, 3, 6, 5, ..., 100, 99 has correlation about 0,9994. Let's imagine it's the page\norder of an index scan. H'm, bad choice, isn't it?\n\nI think indexCorrelation can help to estimate page count but not page\nfetch cost. Why not to use formula\n\nmin_IO_cost = ceil(indexSelectivity * T) * random_page_cost\n\ninstead of\n\nmin_IO_cost = ceil(indexSelectivity * T) ?\n\n\n\n\n", "msg_date": "Tue, 27 Jun 2006 16:14:08 +0400", "msg_from": "Andrew Sagulin <[email protected]>", "msg_from_op": true, "msg_subject": "Large index scan perfomance and indexCorrelation (PG 8.1.4 Win32)" }, { "msg_contents": "On Tue, 2006-06-27 at 16:14 +0400, Andrew Sagulin wrote:\n\n> Result showed up that there were no page seq scan at all - true random access\n> only.\n> The simple model which can explain the situation: the sequence of numbers 2, 1,\n> 4, 3, 6, 5, ..., 100, 99 has correlation about 0,9994. Let's imagine it's the page\n> order of an index scan. H'm, bad choice, isn't it?\n\nYour example is only possible if whole blocks of data were out of order,\nwhich I guess is possible within a multi-million row table. Slightly out\nof order values would be ignored, since I/O works at the block rather\nthan the tuple level.\n\nANALYZE doesn't cope well with tables as large as you have. It doesn't\nsample enough rows, nor does it look within single blocks/groups to\ndiscover anomalies such as yours. As a result, the data that is sampled\nlooks almost perfectly ordered, though the main bulk is not.\n\nI think what you are also pointing out is that the assumption of the\neffects of correlation doesn't match the current readahead logic of\nfilesystems. If we were to actively force a readahead stride of 2 for\nthis scan (somehow), then the lack of correlation would disappear\ncompletely. 
IIRC the current filesystem readahead logic would find that\nsuch a sequence would be seen as random, and so no readahead would be\nperformed at all - even though the data is highly correlated. That isn't\nPostgreSQL's fault directly, since the readahead ought to work better\nthan it does, but we fail indirectly by relying upon it in this case.\n\n> I think indexCorrelation can help to estimate page count but not page\n> fetch cost. Why not to use formula\n> \n> min_IO_cost = ceil(indexSelectivity * T) * random_page_cost\n> \n> instead of\n> \n> min_IO_cost = ceil(indexSelectivity * T) ?\n\nThat part is sensible. The min_IO_cost is when the access is sequential,\nwhich by definition has a cost of 1.0.\n\nThe bit you might have issue with is how we extrapolate from the\nmin_IO_cost and correlation to arrive at a cost.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 27 Jun 2006 23:04:17 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large index scan perfomance and indexCorrelation (PG" }, { "msg_contents": "Wednesday, June 28, 2006, 2:04:17 Simon Riggs, you wrote:\n\n> That part is sensible. The min_IO_cost is when the access is sequential,\n> which by definition has a cost of 1.0.\n\nIn general - yes. But we talk about the min_IO_cost of the index scan which is\nbarely sequential. Correct me if I'm wrong: index scan algorithm is something\nlike this: 'read couple of index pages, read some data pages, index pages, data\npages, ...'. So, the current assumption of min_IO_cost is too optimistic even in\na case of ideal tuple ordering.\n\n> The bit you might have issue with is how we extrapolate from the\n> min_IO_cost and correlation to arrive at a cost.\n\nNow index scan cost calculation use indexCorrelation as measure of a tuple\nclustering and a degree of their sequentiality (physical ordering). As far\nas I know there are cases when this approach is wrong, for example, my issue or\nany other case with high clustering without ordering, where bitmap heap scan is\nthe best way but PostgreSQL prefer index scan or even sequential scan.\n\nDoes PostgreSQL's development team plan to revise the index scan\ncost algorithm or issues like mine is too rare for taking into account?\n\n\n\n", "msg_date": "Wed, 28 Jun 2006 13:33:55 +0400", "msg_from": "Andrew Sagulin <[email protected]>", "msg_from_op": true, "msg_subject": "Large index scan perfomance and indexCorrelation (PG 8.1.4 Win32)" }, { "msg_contents": "Andrew Sagulin <[email protected]> writes:\n> Does PostgreSQL's development team plan to revise the index scan\n> cost algorithm or issues like mine is too rare for taking into account?\n\nThe algorithm is certainly open for discussion, but we're not changing\nit on the basis of just a single report ...\n\nYou're mistaken to be fingering min_IO_cost as the source of the issue,\nbecause there is also a correction for near-sequential access in\ncost_bitmap_heap_scan. If we were to bias the system as heavily against\nthe consideration as you propose, we would logically have to put a\nsimilar bias into cost_bitmap_heap_scan, and you'd probably still end up\nwith a plain indexscan. What you need to do is compare the two\nfunctions and figure out what part of the cost models are out of line\nwith reality. I tend to agree with the upthread comment that the\nnonlinear interpolation between min_IO_cost and max_IO_cost is suspect\n... 
but that may or may not have anything truly to do with your problem.\nIt might be that cost_index is fine and cost_bitmap_heap_scan is\novercharging.\n\nBTW there are already some changes in HEAD relating to this, please see\nthe pghackers archives from beginning of June (thread \"More thoughts\nabout planner's cost estimates\").\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jun 2006 10:37:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large index scan perfomance and indexCorrelation (PG 8.1.4 Win32)" }, { "msg_contents": "On Wed, Jun 28, 2006 at 10:37:24AM -0400, Tom Lane wrote:\n> with a plain indexscan. What you need to do is compare the two\n> functions and figure out what part of the cost models are out of line\n> with reality. I tend to agree with the upthread comment that the\n> nonlinear interpolation between min_IO_cost and max_IO_cost is suspect\n\nIf you're going to make such a comparison (which is badly needed, imho),\nhttp://stats.distributed.net/~decibel/ might be of use to you. It shows\nthat the nonlinear interpolation between the correlated and uncorrelated\nindex scans is way off base, at least for this example.\n\nBTW, you'll have a hard time convincing people to increase the cost\nestimates of index scans, because experience has shown that they're\nalready too high (all the discussions about dropping random_page_cost,\nfor example).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 28 Jun 2006 10:08:15 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large index scan perfomance and indexCorrelation (PG 8.1.4 Win32)" } ]
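A practical way to check what the planner believes about the column ordering, and to give ANALYZE the larger sample discussed above, is shown below (table and column names are from the schema in the first post; 1000 is the largest statistics target accepted in the 8.1 line and is only an example value):

    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'conn' AND attname = 'datetime';

    ALTER TABLE conn ALTER COLUMN datetime SET STATISTICS 1000;
    ANALYZE conn;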
[ { "msg_contents": "-- \nBest,\nGourish Singbal\n\n-- Best,Gourish Singbal", "msg_date": "Tue, 27 Jun 2006 19:34:51 +0530", "msg_from": "\"Gourish Singbal\" <[email protected]>", "msg_from_op": true, "msg_subject": "unregister" } ]
[ { "msg_contents": "unregister\n\n\n\n\n\nunregister", "msg_date": "Tue, 27 Jun 2006 13:46:56 -0300", "msg_from": "=?iso-8859-1?Q?Leandro_Guimar=E3es_dos_Santos?= <[email protected]>", "msg_from_op": true, "msg_subject": "unregister" } ]
[ { "msg_contents": "I have SP, which has a cursor iterations. Need to call another SP for\nevery loop iteration of the cursor. The pseudo code is as follows..\n\nCreate proc1 as\nBegin\n\nVariable declrations...\n\ndeclare EffectiveDate_Cursor cursor for\nselect field1,fld2 from tab1,tab2 where tab1.effectivedate<Getdate()\n---/////Assuming the above query would result in 3 records\nOpen EffectiveDate_Cursor\nFetch next From EffectiveDate_Cursor Into @FLD1,@FLD2\nbegin\n /*Calling my second stored proc with fld1 as a In parameter\nand Op1 and OP2 Out parameters*/\n Exec sp_minCheck @fld1, @OP1 output,@OP2 output\n Do something based on Op1 and Op2.\nend\nWhile @@Fetch_Status = 0\nFetch next From EffectiveDate_Cursor Into @FLD1,@FLD2\n/* Assume If loop count is 3.\n and If the Fetch stmt is below the begin Stmt, the loop iterations are\n4 else the loop iterations are 2*/\nbegin\n /*Calling my second stored proc with fld1 as a In parameter and Op1\nand OP2 Out parameters*/\n Exec sp_minCheck @fld1, @OP1 output,@OP2 output\n Do something based on Op1 and Op2.\nend\n\n\nThe problem I had been facing is that, the when a stored proc is called\nwithin the loop, the proc is getting into infinite loops.\nAny Help would be appreciated.\n\nSatish\n\n", "msg_date": "29 Jun 2006 10:00:35 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Calling a SP from Curosor loop" }, { "msg_contents": "On 29 Jun 2006 10:00:35 -0700, [email protected]\n<[email protected]> wrote:\n> I have SP, which has a cursor iterations. Need to call another SP for\n> every loop iteration of the cursor. The pseudo code is as follows..\n\ni would suggest converting your code to pl/pgsql and reposting. that\nlook awfully like t-sql stored procedure, you may as well be saying,\n'gobble de gook bak wakka bakka bak!', got it? :-)\n\n(aside: pg/pgsql functions support nested calls, recursion, etc and\nshould provide no problems when properly written).\n\nmerlin\n", "msg_date": "Fri, 7 Jul 2006 09:21:14 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling a SP from Curosor loop" } ]
[ { "msg_contents": "Here is a question about SQL. I have a one-to-many pair of tables (call them \"P\" and \"C\" for parent and child). For each row of P, there are many rows in C with data, and I want to sort P on the min(c.data). The basic query is simple:\n\n select p_id, min(data) as m from c group by p_id order by m;\n\nNow the problem: I also want to store this, in sorted order, as a \"hitlist\", so I have a table like this:\n\n create table hitlist(p_id integer, sortorder integer);\n\nand a sequence to go with it. The first thing I tried doesn't work:\n\n insert into hitlist(p_id, sortorder)\n (select p_id, nextval('hitlist_seq') from\n (select p_id, min(data) as m from c group by p_id order by m);\n\nApparently, the sort order returned by the innermost select is NOT maintained as you go through the next select statement -- the rows seem to come out in random order. This surprised me. But in thinking about the definition of SQL itself, I guess there's no guarantee that sort order is maintained across sub-selects. I was caught by this because in Oracle, this same query works \"correctly\" (i.e. the hitlist ends up in sorted order), but I suspect that was just the luck of their implementation.\n\nCan anyone confirm this, that the sort order is NOT guaranteed to be maintained through layers of SELECT statements?\n\nThe apparent solution is to make the hitlist.sortorder column have nextval() as its default and eliminate the first sub-select. But I thought the two would be equivalent.\n\nThanks,\nCraig\n\n", "msg_date": "Thu, 29 Jun 2006 21:55:10 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sort order in sub-select" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> insert into hitlist(p_id, sortorder)\n> (select p_id, nextval('hitlist_seq') from\n> (select p_id, min(data) as m from c group by p_id order by m);\n\n> Apparently, the sort order returned by the innermost select is NOT\n> maintained as you go through the next select statement -- the rows seem\n> to come out in random order. This surprised me.\n\nIt surprises me too. This is outside the SQL spec, because the spec\ndoesn't allow ORDER BY in subselects, but Postgres definitely does and\nwe expect it to be honored. Can you provide a complete example and the\nEXPLAIN plan that you're getting?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jun 2006 02:21:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort order in sub-select " } ]
[ { "msg_contents": "Folks,\n\nJozsef is having trouble posting to the list, but he's receiving \nmessages fine. So reply to the list and not to me. Message follows:\n\n\n-------- Original Message -------\nThe original post:\n\nTitle: Update touches unrelated indexes!?\n\nHi Everyone,\n\nI hope someone can explain what I'm seeing on our system. I've got a\ntable with about four million rows in it (see schema below). Almost\nevery column has one or two indexes. What I've found is that when I\nissue an update statement to zero out the content of a particular\ncolumn, the pg_locks table indicates that every other, seemingly\nunrelated index is locked/changed. The statement is this:\n\nUPDATE schema_1.test_table SET col_27 = 0;\n\nI expect the idx_test_table_col_27 index to have write locks during this\noperation but seeing RowExclusiveLock entries on every other index\npuzzles me. Interestingly enough these locks are not present if the\ntable is smaller.\n\nI see these \"extra\" locks even if I drop the idx_test_table_col_27 index\nbefore the update. The performance of this update is extremely slow. I'm\nmuch better off if I drop all indexes before the update and recreate\nthem after the update. However, deleting these indexes has a negative\nimpact on the performance of other queries that are concurrently being\nexecuted.\n\nIs there a way to limit the impact of the update to the actual column\nand index it is executed on?\n\nAny help is greatly appreciated!\n\nRegards,\nJozsef\n\n dfdata=# \\d test_table\n\n Table \"schema_1.test_table\"\n Column | Type | Modifiers\n-----------------+-----------------------------+--------------------\n col_1 | character varying | not null\n col_2 | character varying |\n col_3 | integer | not null\n col_4 | integer | not null\n col_5 | character varying | not null\n col_6 | character varying | not null\n col_7 | character(1) | not null\n col_8 | character varying | not null\n col_9 | character varying | not null\n col_10 | character varying |\n col_11 | bigint | not null\n col_12 | integer | not null\n col_13 | character varying |\n col_14 | integer | not null\n col_15 | character(38) | not null\n col_16 | character varying | not null\n col_17 | bigint | not null\n col_18 | character varying |\n col_19 | character varying |\n col_20 | integer | not null\n col_21 | integer | not null\n col_22 | integer | not null\n col_23 | integer | not null\n col_24 | timestamp without time zone | not null\n col_25 | timestamp without time zone | not null\n col_26 | timestamp without time zone | not null\n col_27 | integer | not null default 0\n col_28 | integer | not null default 0\n col_29 | integer | not null default 0\n\nIndexes:\n\n \"idx_test_table_col_1\" UNIQUE, btree (col_1)\n \"idx_test_table_col_27\" btree (col_27)\n \"idx_test_table_col_14\" btree (col_14)\n \"idx_test_table_col_12\" btree (col_12)\n \"idx_test_table_col_24\" btree (date_trunc('day'::text, col_24))\n \"idx_test_table_col_25\" btree (date_trunc('day'::text, col_25))\n \"idx_test_table_col_26\" btree (date_trunc('day'::text, col_26))\n \"idx_test_table_col_29\" btree (col_29)\n \"idx_test_table_col_6\" btree (col_6)\n \"idx_test_table_col_10\" btree (lower(col_10::text))\n \"idx_test_table_col_10_2\" btree (lower(col_10::text)\nvarchar_pattern_ops)\n \"idx_test_table_col_9\" btree (lower(col_9::text))\n \"idx_test_table_col_9_2\" btree (lower(col_9::text)\nvarchar_pattern_ops)\n \"idx_test_table_col_8\" btree (lower(col_8::text))\n \"idx_test_table_col_8_2\" btree 
(lower(col_8::text)\nvarchar_pattern_ops)\n \"idx_test_table_col_5\" btree (col_5)\n \"idx_test_table_col_17\" btree (col_17)\n \"idx_test_table_col_28\" btree (col_28)\n\n\n\n\nlocktype | relation | mode | transaction | pid | granted |\nnspname | relname\n\n----------+----------+------------------+-------------+------+---------+\n------------+-----------------------------------------------------\n\n relation | 1259 | AccessShareLock | 73112 | 7923 | t |\npg_catalog | pg_class\n relation | 10342 | AccessShareLock | 73112 | 7923 | t |\npg_catalog | pg_locks\n relation | 2615 | AccessShareLock | 73112 | 7923 | t |\npg_catalog | pg_namespace\n relation | 28344 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_27\n relation | 28354 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_14\n relation | 28353 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_12\n relation | 28356 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_24\n relation | 28357 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_25\n relation | 28358 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_26\n relation | 28346 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_29\n relation | 28343 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_6\n relation | 28351 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_10\n relation | 28352 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_10_2\n relation | 28349 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_9\n relation | 28350 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_9_2\n relation | 28347 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_8\n relation | 28348 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_8_2\n relation | 28341 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_1\n relation | 28342 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_5\n relation | 28355 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_17\n relation | 28345 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | idx_test_table_col_28\n relation | 27657 | AccessShareLock | 73109 | 7914 | t |\nschema_1 | test_table\n relation | 27657 | RowExclusiveLock | 73109 | 7914 | t |\nschema_1 | test_table\n\n\n", "msg_date": "Thu, 29 Jun 2006 23:33:20 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "FWD: Update touches unrelated indexes?" }, { "msg_contents": "Josh Berkus <[email protected]> forwards:\n> I hope someone can explain what I'm seeing on our system. I've got a\n> table with about four million rows in it (see schema below). Almost\n> every column has one or two indexes. What I've found is that when I\n> issue an update statement to zero out the content of a particular\n> column, the pg_locks table indicates that every other, seemingly\n> unrelated index is locked/changed.\n\nThis surprises you why?\n\n> I expect the idx_test_table_col_27 index to have write locks during this\n> operation but seeing RowExclusiveLock entries on every other index\n> puzzles me. Interestingly enough these locks are not present if the\n> table is smaller.\n\nThat last I don't believe at all --- PG updates every index on every row\nupdate. Most likely the OP is just not querying pg_locks fast enough to\nsee the locks. 
If he's really concerned about update performance then\nhe probably needs to think harder about whether every one of those\nindexes is really carrying its weight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jun 2006 02:43:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FWD: Update touches unrelated indexes? " } ]
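On that last point, the statistics views give a quick way to see which of those indexes are actually used by queries; indexes with a scan count near zero are candidates for dropping before bulk updates like this. A sketch (table name from the forwarded schema; assumes the statistics collector with row-level stats is enabled):

    SELECT indexrelname, idx_scan, idx_tup_read
    FROM pg_stat_user_indexes
    WHERE relname = 'test_table'
    ORDER BY idx_scan;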
[ { "msg_contents": "Good Morning\n\nI am new to postgres and have been asked to look at a server where we \ntruncate a table then load data.\n\nThe CPU has started to hit 100% usage during this process.\n\nCan you please describe what steps I could take to investigate and solve \nthis issue?\n So far all I have done is run a Vacuum Analyze command using PGAdmin \nIII.....which appears to have made little difference.\n\n\nThank you\n\n\nPete Newman\nInformation Systems \nInstitute of Physics\nTel No 0117 9301249\nFax No 0117 9301183\[email protected]\nhttp://www.iop.org\n<p>\n************************************************************************<p>\n\nThis email (and attachments) are confidential and intended for the addressee(s) only. If you are not the intended recipient please notify the sender, delete any copies and do not take action in reliance on it. Any views expressed are the author's and do not represent those of IOP, except where specifically stated. IOP takes reasonable precautions to protect against viruses but accepts no responsibility for loss or damage arising from virus infection. For the protection of IOP's systems and staff emails are scanned automatically.<p>\n\nIOP Publishing Limited Registered in England under Registration No 467514. Registered Office: Dirac House, Temple Back, Bristol BS1 6BE England\n\n\nGood Morning\n\nI am new to postgres and have been asked\nto look at a server where we truncate a table then load data.\n\nThe CPU has started to hit 100% usage\nduring this process.\n\nCan you please describe what steps I\ncould take to investigate and solve this issue?\n So far all I have done is run\na Vacuum Analyze command using PGAdmin III.....which appears to have made\nlittle difference.\n\n\nThank you\n\n\nPete Newman\nInformation Systems \nInstitute of Physics\nTel No 0117 9301249\nFax No 0117 9301183\[email protected]\nhttp://www.iop.org\n\n************************************************************************\n\nThis email (and attachments) are confidential and intended for the addressee(s) only. If you are not the intended recipient please notify the sender, delete any copies and do not take action in reliance on it. Any views expressed are the author's and do not represent those of IOP, except where specifically stated. IOP takes reasonable precautions to protect against viruses but accepts no responsibility for loss or damage arising from virus infection. For the protection of IOP's systems and staff emails are scanned automatically.\n\nIOP Publishing Limited Registered in England under Registration No 467514. Registered Office: Dirac House, Temple Back, Bristol BS1 6BE England", "msg_date": "Fri, 30 Jun 2006 08:32:39 +0100", "msg_from": "Peter Newman <[email protected]>", "msg_from_op": true, "msg_subject": "100% CPU" }, { "msg_contents": "moving to -performance\n\nOn Fri, Jun 30, 2006 at 08:32:39AM +0100, Peter Newman wrote:\n> Good Morning\n> \n> I am new to postgres and have been asked to look at a server where we \n> truncate a table then load data.\n> \n> The CPU has started to hit 100% usage during this process.\n> \n> Can you please describe what steps I could take to investigate and solve \n> this issue?\n> So far all I have done is run a Vacuum Analyze command using PGAdmin \n> III.....which appears to have made little difference.\n\nHow many indexes do you have on the table? How exactly are you loading\nthe data? What hardware is this? What version of the database?\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 30 Jun 2006 12:58:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 100% CPU" }, { "msg_contents": "Jim C. Nasby wrote:\n> moving to -performance\n> \n> On Fri, Jun 30, 2006 at 08:32:39AM +0100, Peter Newman wrote:\n>> Good Morning\n>>\n>> I am new to postgres and have been asked to look at a server where we \n>> truncate a table then load data.\n>>\n>> The CPU has started to hit 100% usage during this process.\n>>\n\nThis could be a good sign! - if you are using COPY into a table with no \nindexes and your machine has a good IO subsystem, then typically cpu \nbecomes the limiting factor. However as Jim suggested, more details \nwould be good, otherwise we are just guessing!\n\nCheers\n\nMark\n", "msg_date": "Sun, 02 Jul 2006 18:25:56 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgadmin-support] 100% CPU" } ]
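As a reference point for the questions above, a bulk reload of the kind described usually looks roughly like the sketch below; as noted, COPY into a table with no indexes tends to be CPU-bound on a machine with decent I/O, and any indexes are best created after the load. Table name and file path are placeholders, and CSV mode needs 8.0 or later (the file is read on the server side):

    BEGIN;
    TRUNCATE TABLE daily_load;
    COPY daily_load FROM '/path/to/extract.csv' WITH CSV;
    COMMIT;

    ANALYZE daily_load;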
[ { "msg_contents": "I have a index question. My table has 800K rows and I a doing a basic \nquery on an indexed integer field which takes over 2 seconds to \ncomplete because it's ignoring the index for some reason. Any ideas \nas to why it's ignoring the index? I'm using postgres 8.0.2.\n\nSELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0'\n\nEXPLAIN ANALYZE reveals that it's not using the index...\n\nAggregate (cost=22695.28..22695.28 rows=1 width=0) (actual \ntime=2205.688..2205.724 rows=1 loops=1)\n -> Seq Scan on purchase_order_items (cost=0.00..21978.08 \nrows=286882 width=0) (actual time=0.535..2184.405 rows=7458 loops=1)\n Filter: (expected_quantity > 0)\nTotal runtime: 2207.203 ms\n\nHowever, if I use the \"SET ENABLE_SEQSCAN TO OFF\" trick, then it does \nuse the index and is much faster.\n\nSET ENABLE_SEQSCAN TO OFF;\nEXPLAIN ANALYZE SELECT count(*) FROM purchase_order_items WHERE \nexpected_quantity > '0'\n\nAggregate (cost=1050659.46..1050659.46 rows=1 width=0) (actual \ntime=137.393..137.441 rows=1 loops=1)\n -> Index Scan using purchase_order_items_expected_quantity_idx on \npurchase_order_items (cost=0.00..1049942.25 rows=286882 width=0) \n(actual time=0.756..119.990 rows=7458 loops=1)\n Index Cond: (expected_quantity > 0)\nTotal runtime: 139.185 ms\n\nI could understand if this was a really complex query and the planner \ngot confused... but this is such a simple query. Is it OK to use \"SET \nENABLE_SEQSCAN TO OFF;\" in production code? Is there another solution?\n\nThanks!\n\n------------------------------\n\n-- Table Definition --\n\nCREATE TABLE purchase_order_items (\n id serial NOT NULL,\n purchase_order_id integer,\n manufacturer_id integer,\n quantity integer,\n product_name character varying(16),\n short_description character varying(60),\n expected_quantity integer,\n received_quantity integer,\n \"position\" real,\n created_at timestamp without time zone DEFAULT now(),\n updated_at timestamp without time zone\n);\n\n-- Index --\n\nCREATE INDEX purchase_order_items_expected_quantity_idx ON \npurchase_order_items USING btree (expected_quantity);\n\n\n\nI have a index question. My table has 800K rows and I a doing a basic query on an indexed integer field which takes over 2 seconds to complete because it's ignoring the index for some reason. Any ideas as to why it's ignoring the index? I'm using postgres 8.0.2.SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0' EXPLAIN ANALYZE reveals that it's not using the index...Aggregate  (cost=22695.28..22695.28 rows=1 width=0) (actual time=2205.688..2205.724 rows=1 loops=1)  ->  Seq Scan on purchase_order_items  (cost=0.00..21978.08 rows=286882 width=0) (actual time=0.535..2184.405 rows=7458 loops=1)        Filter: (expected_quantity > 0)Total runtime: 2207.203 msHowever, if I use the \"SET ENABLE_SEQSCAN TO OFF\" trick, then it does use the index and is much faster.SET ENABLE_SEQSCAN TO OFF;EXPLAIN ANALYZE SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0' Aggregate  (cost=1050659.46..1050659.46 rows=1 width=0) (actual time=137.393..137.441 rows=1 loops=1)  ->  Index Scan using purchase_order_items_expected_quantity_idx on purchase_order_items  (cost=0.00..1049942.25 rows=286882 width=0) (actual time=0.756..119.990 rows=7458 loops=1)        Index Cond: (expected_quantity > 0)Total runtime: 139.185 msI could understand if this was a really complex query and the planner got confused... but this is such a simple query. 
Is it OK to use \"SET ENABLE_SEQSCAN TO OFF;\" in production code? Is there another solution?Thanks!-------------------------------- Table Definition --CREATE TABLE purchase_order_items (    id serial NOT NULL,    purchase_order_id integer,    manufacturer_id integer,    quantity integer,    product_name character varying(16),    short_description character varying(60),    expected_quantity integer,    received_quantity integer,    \"position\" real,    created_at timestamp without time zone DEFAULT now(),    updated_at timestamp without time zone);-- Index --CREATE INDEX purchase_order_items_expected_quantity_idx ON purchase_order_items USING btree (expected_quantity);", "msg_date": "Fri, 30 Jun 2006 09:31:52 -0400", "msg_from": "Joe Lester <[email protected]>", "msg_from_op": true, "msg_subject": "Index Being Ignored?" }, { "msg_contents": "Joe Lester wrote:\n> I have a index question. My table has 800K rows and I a doing a basic \n> query on an indexed integer field which takes over 2 seconds to \n> complete because it's ignoring the index for some reason. Any ideas \n> as to why it's ignoring the index? I'm using postgres 8.0.2.\n> \n> SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0'\n> \n> EXPLAIN ANALYZE reveals that it's not using the index...\n> \n> Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual \n> time=2205.688..2205.724 rows=1 loops=1)\n> -> Seq Scan on purchase_order_items (cost=0.00..21978.08 \n> rows=286882 width=0) (actual time=0.535..2184.405 rows=7458 loops=1)\n> Filter: (expected_quantity > 0)\n> Total runtime: 2207.203 ms\n\nThe estimated rowcount is far off. When did you last run ANALYZE on\nthis table?\n\nBTW, you should upgrade (to 8.0.8) unless you want known bugs to destroy\nyour data.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 30 Jun 2006 10:14:55 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Being Ignored?" }, { "msg_contents": "Hi, Joe,\n\nJoe Lester wrote:\n> Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual\n> time=2205.688..2205.724 rows=1 loops=1)\n> -> Seq Scan on purchase_order_items (cost=0.00..21978.08 rows=286882\n> width=0) (actual time=0.535..2184.405 rows=7458 loops=1)\n> Filter: (expected_quantity > 0)\n\nThe query planner estimates that your filter will hit 286882 rows, while\nin reality it hits only 7458 rows. That's why the query planer chooses a\nsequential scan.\n\nIt seems that the statistics for the column expected_quantity are off.\n\nMy suggestions:\n\n- make shure that the statistics are current by analyzing the table\nappropriately (e. G. by using the autovacuum daemon from contrib).\n\n- increase the statistics target for this column.\n\n- if you run this query very often, an conditional index might make sense:\n\nCREATE INDEX purchase_order_having_quantity_idx ON purchase_order_items\n(expected_quantity) WHERE expected_quantity > 0;\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 30 Jun 2006 16:29:06 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Being Ignored?" 
}, { "msg_contents": "Joe Lester <[email protected]> writes:\n> SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0'\n\n> Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual \n> time=2205.688..2205.724 rows=1 loops=1)\n> -> Seq Scan on purchase_order_items (cost=0.00..21978.08 \n> rows=286882 width=0) (actual time=0.535..2184.405 rows=7458 loops=1)\n> Filter: (expected_quantity > 0)\n> Total runtime: 2207.203 ms\n\nWhy is the expected row count so far off --- have you analyzed the table\nlately? For such a simple WHERE condition the estimate should be pretty\naccurate, if the stats are sufficient. If this table is very large you\nmight need to increase the statistics targets, but more likely you just\nhaven't got up-to-date stats at all.\n\nThe planner *never* \"ignores\" an index. It may deliberately decide not\nto use it, if it thinks the seqscan plan will be faster, as it does in\nthis case --- note the much higher cost estimate for the indexscan:\n\n> SET ENABLE_SEQSCAN TO OFF;\n> EXPLAIN ANALYZE SELECT count(*) FROM purchase_order_items WHERE \n> expected_quantity > '0'\n\n> Aggregate (cost=1050659.46..1050659.46 rows=1 width=0) (actual \n> time=137.393..137.441 rows=1 loops=1)\n> -> Index Scan using purchase_order_items_expected_quantity_idx on \n> purchase_order_items (cost=0.00..1049942.25 rows=286882 width=0) \n> (actual time=0.756..119.990 rows=7458 loops=1)\n> Index Cond: (expected_quantity > 0)\n> Total runtime: 139.185 ms\n\nThe reason the cost estimate is out of line with reality is mainly that\nthe rows estimate is out of line with reality. There may be some index\norder correlation it's not aware of too.\n\nBTW you might want to think about updating to PG 8.1. Its \"bitmap\"\nindex scans are much better suited for queries that are using a\nrelatively unselective index condition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jun 2006 10:41:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Being Ignored? " }, { "msg_contents": "great!\n\nThanks Markus and Tom!\n\nOn Jun 30, 2006, at 10:29 AM, Markus Schaber wrote:\n\n> Hi, Joe,\n>\n> Joe Lester wrote:\n>> Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual\n>> time=2205.688..2205.724 rows=1 loops=1)\n>> -> Seq Scan on purchase_order_items (cost=0.00..21978.08 \n>> rows=286882\n>> width=0) (actual time=0.535..2184.405 rows=7458 loops=1)\n>> Filter: (expected_quantity > 0)\n>\n> The query planner estimates that your filter will hit 286882 rows, \n> while\n> in reality it hits only 7458 rows. That's why the query planer \n> chooses a\n> sequential scan.\n>\n> It seems that the statistics for the column expected_quantity are off.\n>\n> My suggestions:\n>\n> - make shure that the statistics are current by analyzing the table\n> appropriately (e. G. by using the autovacuum daemon from contrib).\n>\n> - increase the statistics target for this column.\n>\n> - if you run this query very often, an conditional index might make \n> sense:\n>\n> CREATE INDEX purchase_order_having_quantity_idx ON \n> purchase_order_items\n> (expected_quantity) WHERE expected_quantity > 0;\n>\n>\n> HTH,\n> Markus\n>\n> -- \n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in EU! 
www.ffii.org \n> www.nosoftwarepatents.org\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\n\n", "msg_date": "Fri, 30 Jun 2006 12:32:07 -0400", "msg_from": "Joe Lester <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Being Ignored?" } ]
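Pulling the suggestions above together into one sequence (the partial index is the one Markus proposed; 200 is just an example statistics target):

    ALTER TABLE purchase_order_items
        ALTER COLUMN expected_quantity SET STATISTICS 200;

    ANALYZE purchase_order_items;

    CREATE INDEX purchase_order_having_quantity_idx
        ON purchase_order_items (expected_quantity)
        WHERE expected_quantity > 0;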
[ { "msg_contents": "Hi,\n\nAlfter hours of adjusting performance of the queries in my Postgres\n7.3 database - reprogramming the queries, VACUUMing, changing value of\nenable_seqscan - I gived it up, recreated the database and transferred\nthe dump of the old database into it.\nThe queries went from 15 sec to 50 msec!! Wow.\nNow I would really love to know how the old database got that slow,\nand how can I avoid it in the future. Any tips are greatly\nappreciated!\n\nThanks!\nKsenia.\n", "msg_date": "Fri, 30 Jun 2006 16:13:52 +0200", "msg_from": "\"Ksenia Marasanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "newly created database makes queries run 300% faster" }, { "msg_contents": "Ksenia Marasanova wrote:\n> Hi,\n> \n> Alfter hours of adjusting performance of the queries in my Postgres\n> 7.3 database - reprogramming the queries, VACUUMing, changing value of\n> enable_seqscan - I gived it up, recreated the database and transferred\n> the dump of the old database into it.\n> The queries went from 15 sec to 50 msec!! Wow.\n> Now I would really love to know how the old database got that slow,\n> and how can I avoid it in the future. Any tips are greatly\n> appreciated!\n\nIf memory servers me (and it might not in this case), vacuum in 7.3 had\nissues with indexes. Reindexing or clustering your tables might have\nhelped. Both are blocking operations.\n\nHow to avoid it in the future is simple. Upgrade to a modern version of\nPostgres and vacuum your database properly. People work on this thing\nfor a reason :-)\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Fri, 30 Jun 2006 10:36:03 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: newly created database makes queries run 300% faster" } ]
[ { "msg_contents": "Hi Tom,\n\n>This surprises you why?\n\nI don't know anything about how PG stores keys along with their\nreferences to the actual rows but my assumption was that that reference\nis some sort of an index into a table that maps the reference to an\nactual disk/file address. So even if the row or the page with the row on\nit is physically moved to a different location in the disk file, the\nunrelated indexes would not have to be changed because only the\ndisk/file address changes but the reference does not. If PG does not\nwork in a similar fashion then I understand the locks. \n\n> That last I don't believe at all --- PG updates every index on every\nrow\n>update. Most likely the OP is just not querying pg_locks fast enough\nto\n>see the locks.\n\nI'm sure you are right, but I was doing the update in a transaction and\nI did not see those looks after the update was done but before the\nchanges were committed. \n\n>he probably needs to think harder about whether every one of those\n>indexes is really carrying its weight.\n\nUnfortunately all of those indexes are required by the application. It\nappears that the only viable option I have is to drop the indexes and\nrecreate them after the update.\n\n\nThanks for the help!\nJozsef\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Friday, June 30, 2006 1:44 AM\nTo: Josh Berkus\nCc: [email protected]\nSubject: Re: [PERFORM] FWD: Update touches unrelated indexes? \n\nJosh Berkus <[email protected]> forwards:\n> I hope someone can explain what I'm seeing on our system. I've got a\n> table with about four million rows in it (see schema below). Almost\n> every column has one or two indexes. What I've found is that when I\n> issue an update statement to zero out the content of a particular\n> column, the pg_locks table indicates that every other, seemingly\n> unrelated index is locked/changed.\n\nThis surprises you why?\n\n> I expect the idx_test_table_col_27 index to have write locks during\nthis\n> operation but seeing RowExclusiveLock entries on every other index\n> puzzles me. Interestingly enough these locks are not present if the\n> table is smaller.\n\nThat last I don't believe at all --- PG updates every index on every row\nupdate. Most likely the OP is just not querying pg_locks fast enough to\nsee the locks. If he's really concerned about update performance then\nhe probably needs to think harder about whether every one of those\nindexes is really carrying its weight.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Fri, 30 Jun 2006 10:26:04 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FWD: Update touches unrelated indexes? " }, { "msg_contents": "Jozsef Szalay wrote:\n\n> >he probably needs to think harder about whether every one of those\n> >indexes is really carrying its weight.\n> \n> Unfortunately all of those indexes are required by the application. It\n> appears that the only viable option I have is to drop the indexes and\n> recreate them after the update.\n\nNot at all -- the option is just continue to operate normally after the\nupdate, because all the indexes are always updated. 
If you see an index\nnot being updated, it's a bug and by all means report it, preferably\nwith a test case other people can reproduce.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 30 Jun 2006 13:27:49 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FWD: Update touches unrelated indexes?" }, { "msg_contents": "> >This surprises you why?\n>\n> I don't know anything about how PG stores keys along with their\n> references to the actual rows but my assumption was that that reference\n> is some sort of an index into a table that maps the reference to an\n> actual disk/file address. So even if the row or the page with the row on\n> it is physically moved to a different location in the disk file, the\n> unrelated indexes would not have to be changed because only the\n> disk/file address changes but the reference does not. If PG does not\n> work in a similar fashion then I understand the locks.\n>\n\nWhen you update a table postgres makes a copy of the row being updated\nso it has to create new index entries pointing to the new version of\nthe row... but it keeps old index entries pointing to the prior\nversion of the row because if there are concurrent queries to those\ntables that looks for that particular row and you haven't committed\nyet we still want the old version (old index entry)...\n\n-- \nregards,\nJaime Casanova\n\n\"Programming today is a race between software engineers striving to\nbuild bigger and better idiot-proof programs and the universe trying\nto produce bigger and better idiots.\nSo far, the universe is winning.\"\n Richard Cook\n", "msg_date": "Sun, 2 Jul 2006 09:36:20 -0500", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FWD: Update touches unrelated indexes?" } ]
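Regarding "not querying pg_locks fast enough": the RowExclusiveLock entries are only visible from another session while the updating transaction is still open; at commit they are released. Something like the query below, run from a second connection while the UPDATE's transaction is still uncommitted, should show one entry per index of the updated table:

    SELECT c.relname, l.mode, l.granted
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    WHERE l.relation IS NOT NULL
    ORDER BY c.relname;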
[ { "msg_contents": "Is there any way to create a reverse index on string columns so that\nqueries of the form:\n\nwhere column like '%2345';\n\ncan use an index and perform as fast as searching with like '2345%'?\n\nIs the only way to create a reverse function and create an index using\nthe reverse function and modify queries to use:\n\nwhere reverse(column) like reverse('%2345') ?\n\nthanks\n\n-- \nEugene Hart\nCell: 443-604-2679\n", "msg_date": "Sun, 2 Jul 2006 17:50:37 -0400", "msg_from": "Gene <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing LIKE '%2345' queries" }, { "msg_contents": "Am Sonntag, 2. Juli 2006 23:50 schrieb Gene:\n> Is there any way to create a reverse index on string columns so that\n> queries of the form:\n>\n> where column like '%2345';\n>\n> can use an index and perform as fast as searching with like '2345%'?\n>\n> Is the only way to create a reverse function and create an index using\n> the reverse function and modify queries to use:\n>\n> where reverse(column) like reverse('%2345') ?\n>\n> thanks\n\ncreate a trigger that computes this at insert/update time, index this fix, and \nrewrite the query this way:\nwhere inverted_column like '5432%';\n\n", "msg_date": "Mon, 3 Jul 2006 09:33:53 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing LIKE '%2345' queries" }, { "msg_contents": "Hi all,\n\n I've been working on my personal project for 3.5 years now. I \ndeveloped an ERP system in web/java. Now the people I will work with \nsuggest to offers it in Saas mode. Which means my customer will connect \nto my website and found they ERP software and data there. It's not the \ndeployment I planned initially so if you can just validate some \ntechnicals points to be sure it's not crazy using Postgresl here and not \na big $$$ db to do the job.\n\nTypically I will have 1db per client and around 150 tables per db. So \nsince I hope I didn`t work all those year for nothing .. I expect to \nhave bunch of clients witch means the same amount of db since I have 1 \ndb/client. \n\nCan I hope having several hundred of db on 1 db server? Like 250 dbs = \n250 client = 360 000 tables !!!\nSo is there a limit for the number of db in the db server ?(this spec is \nnot on the website)\nWhat about the performance? Can I expect to have the same performance? \n\nSince I put everything on the web I do needs an High Availability \ninfrastructure. I looked into SlonyI and Mammoth to replicate the db \nbut since SlonyI use triggers can I expect it to do the job? Is Mammoth \nis the only available solution?\n\nLast question and not the least I'm reading this performance list for \nseveral years now and know suggestion about hardware to run postgresl is \ndiscussed. Since I wrote software there severals points about hardware \nthat I don`t understand. Do you have any suggestion of platform to run \ninto my Saas configuration? I do need the WISE one! I'm pretty sure \nthat if I was a big company I would be able throw bunch of $$$$ but it's \nnot my case. I'm pretty sure it exists out there some piece of Hardware \nthat would do the job perfectly with a fair price.\n\nSo far I did understand that Postgresql loves Opteron and I have looked \ninto the dl145 series of HP. I did understand that Dell Hardware it`s \nnot reliable. 
But it's still not clear what should be my requirement \nfor memory, disk, nb cpu, cpu power, etc.\n\nI'm pretty sure it`s better to have more slower CPUs that having the \nlatest Opteron available on the market, or more slower servers that \nhaving the fastest one... am I right? But agains what it`s the optimal \nchoice?\n\nThanks you to share your knowledge on those point. I do consider using \nPostgresql is the Smart choice in my project since the beginning but \nbefore putting all the money (That I don`t have ..:-)) to buy some \nhardware I just want to be sure I'm not crazy!\n\nThanks for your help I really appreciate it!!\n\nBest Regards\n/David\n\n\n\n\n\n\n", "msg_date": "Mon, 03 Jul 2006 07:41:33 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Is postgresql ca do the job for software deployed in ASP ou SaaS\n mode?" }, { "msg_contents": "> Typically I will have 1db per client and around 150 tables per db. So \n> since I hope I didn`t work all those year for nothing .. I expect to \n> have bunch of clients witch means the same amount of db since I have 1 \n> db/client. \n> \n> Can I hope having several hundred of db on 1 db server? Like 250 dbs = \n> 250 client = 360 000 tables !!!\n> So is there a limit for the number of db in the db server ?(this spec is \n> not on the website)\n\nI'll take a stab at this question.\n\nEach table and database are referenced by an OID.\n\nSo the sum(tables) + sum(database) << max-size(OID). \nIn my case max-size of OID (I believe) is 9223372036854775807.\nSo if you limited yourself to 1% of the OIDs for use as tables and databases then you could\npotentially have 92233720368547758 table or database.\n\nEach database create produces a directory with the database OID:\n./data/base/10792\n./data/base/10793\n./data/base/16814\n...\n...\n\nsince the creation of a new db produces a directory, one limitation would come from your\nfile-systems limitation on the number of sub-directories that are allowed.\n\nEach table with-in the database is assigned an OID and is located inside the DB directory. So if\nthere is a file-system limitation on the number of files with-in a given directory it would also\nbe a limit to the number of tables that could be created for each database.\n\nThe only other limitation that I am aware of is the storage capacity of you DB server.\n\nIf there are additional limitations beyond these I would be interested in knowing about them and\nadding them to the http://www.postgresql.org/about/ we be helpful also.\n\nRegards,\n\nRichard Broersma Jr.\n", "msg_date": "Mon, 3 Jul 2006 10:00:27 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP ou SaaS\n\tmode?" }, { "msg_contents": "On 7/3/06, David Gagnon <[email protected]> wrote:\n>\n>\n> Can I hope having several hundred of db on 1 db server? Like 250 dbs =\n> 250 client = 360 000 tables !!!\n> So is there a limit for the number of db in the db server ?(this spec is\n> not on the website)\n> What about the performance? Can I expect to have the same performance?\n\n\n\nI am running a similar environment. Each of our customers has a seperate\ndatabase with serveral hundred tables per database. One of our servers is\nrunning over 200 customer databases with absolutely no problems.\n\nHTH,\n\nchris\n\nOn 7/3/06, David Gagnon <[email protected]> wrote:\nCan I hope having several hundred of db on 1 db server?  
Like 250 dbs =250 client = 360 000 tables !!!So is there a limit for the number of db in the db server ?(this spec isnot on the website)What about the performance? Can I expect to have the same performance?\nI am running a similar environment.  Each of our customers has a seperate database with serveral hundred tables per database.  One of our servers is running over 200 customer databases with absolutely no problems. \n HTH,chris", "msg_date": "Mon, 3 Jul 2006 13:20:20 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP ou SaaS\n\tmode?" }, { "msg_contents": "Richard Broersma Jr wrote:\n> Each table with-in the database is assigned an OID and is located inside the DB directory. So if\n> there is a file-system limitation on the number of files with-in a given directory it would also\n> be a limit to the number of tables that could be created for each database.\n\nYou could handle this with tablespaces. For example, create ten tablespaces, and then assign customer databases to them in round-robin fashion. This also lets you assign databases to different disks to balance the I/O load.\n\nCraig\n", "msg_date": "Mon, 03 Jul 2006 10:35:55 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in" }, { "msg_contents": "Hi, Chris,\n\nIn your deployment, can you put a bit more detail if available? Many thanks!\n\nMy questions are:\n a) How do you resolve the connection pool issue?\n b) If each client application has many threads of connections to the\nremote server, what is the likely performance penalty with compare to the\nDBMS hosted at the same host as the client?\n\nIndeed, the application requirements may be quite different for us, but\nabove two are my main concerns prior to doing a porting work for a large\napplication (from other vendor DBMS).\n\nWe have several idential applications on different servers, each has 250+\ndatabase connections, currently they are having a DBMS on each server but we\nwant them to share one DBMS at a dedicate DBMS server (in the same LAN) if\nperformance penalty is little. I wonder if anyone there can provide your\ncomments and experience on this. Many thanks.\n\nRegards,\nGuoping\n\n -----Original Message-----\n From: [email protected]\n[mailto:[email protected]]On Behalf Of Chris Hoover\n Sent: 2006��7��4�� 3:20\n To: David Gagnon\n Cc: [email protected]\n Subject: Re: [PERFORM] Is postgresql ca do the job for software deployed\nin ASP ou SaaS mode?\n\n\n On 7/3/06, David Gagnon <[email protected]> wrote:\n\n Can I hope having several hundred of db on 1 db server? Like 250 dbs =\n 250 client = 360 000 tables !!!\n So is there a limit for the number of db in the db server ?(this spec is\n not on the website)\n What about the performance? Can I expect to have the same performance?\n\n\n I am running a similar environment. Each of our customers has a seperate\ndatabase with serveral hundred tables per database. One of our servers is\nrunning over 200 customer databases with absolutely no problems.\n\n\n HTH,\n\n chris\n\n\n\n\n\n\n\nHi, \nChris,\n \nIn \nyour deployment, can you put a bit more detail if available? Many \nthanks! 
\n \nMy \nquestions are: \n  \na)  How do you resolve the connection \npool issue?\n  \nb)  If each client application has many threads of connections to the \nremote server, what is the likely performance penalty with compare to the DBMS \nhosted at the same host as the client?\n \nIndeed, the application requirements may be quite different for us, \nbut above two are my main concerns prior to doing a porting work for \na large application (from other vendor DBMS). \n \nWe \nhave several idential applications on different servers, each has 250+ database \nconnections, currently they are having a DBMS on each server but we \nwant them to share one DBMS at a dedicate DBMS server (in the same \nLAN) if performance penalty is little. I wonder if anyone there can \nprovide your comments and experience on this. Many \nthanks.  \n \nRegards,\nGuoping\n \n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]]On Behalf Of Chris \n HooverSent: 2006Äê7ÔÂ4ÈÕ 3:20To: David \n GagnonCc: [email protected]: Re: \n [PERFORM] Is postgresql ca do the job for software deployed in ASP ou SaaS \n mode?On 7/3/06, David \n Gagnon <[email protected]> \n wrote:\n \nCan \n I hope having several hundred of db on 1 db server?  Like 250 dbs \n =250 client = 360 000 tables !!!So is there a limit for the number \n of db in the db server ?(this spec isnot on the website)What about \n the performance? Can I expect to have the same performance? \nI am running a similar environment.  Each of our customers \n has a seperate database with serveral hundred tables per database.  One \n of our servers is running over 200 customer databases with absolutely no \n problems. \nHTH,chris", "msg_date": "Tue, 4 Jul 2006 14:35:37 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP ou SaaS\n\tmode?" }, { "msg_contents": "On Sun, 2 Jul 2006, Gene wrote:\n\n> can use an index and perform as fast as searching with like '2345%'?\n>\n> Is the only way to create a reverse function and create an index using\n> the reverse function and modify queries to use:\n>\n> where reverse(column) like reverse('%2345') ?\n\n \tHmm.. interesting.\n \tIf (and only if) the records stored in \"column\" column have fixed \nlength (say, all are 50 characters in length) you could create and index \non, say, substring(column,45,50), and use this in the WHERE clauses in \nyour queries.\n \tOr if the length of those records is not the same maybe it is \nfeasible to create an ondex on substring(column, length(column)-5, \nlength(column)).\n\n-- \nAny views or opinions presented within this e-mail are solely those of\nthe author and do not necessarily represent those of any company, unless\notherwise expressly stated.\n", "msg_date": "Tue, 4 Jul 2006 14:14:23 +0300 (EEST)", "msg_from": "Tarhon-Onu Victor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing LIKE '%2345' queries" }, { "msg_contents": "Thanks for the suggestion. Actually I went ahead and created a reverse\nfunction using plpgsql, created an index using reverse column and now\nmy queries use \"where reverse(column) like reverse('%2345') and it's\nusing the index like i hoped it would! Now if I could figure out how\nto optimize like '%2345%' queries. 
I don't want to create many\nindexes though the table is very write heavy.\n\n> > Is the only way to create a reverse function and create an index using\n> > the reverse function and modify queries to use:\n> >\n> > where reverse(column) like reverse('%2345') ?\n>\n> Hmm.. interesting.\n> If (and only if) the records stored in \"column\" column have fixed\n> length (say, all are 50 characters in length) you could create and index\n> on, say, substring(column,45,50), and use this in the WHERE clauses in\n> your queries.\n> Or if the length of those records is not the same maybe it is\n> feasible to create an ondex on substring(column, length(column)-5,\n> length(column)).\n>\n> --\n> Any views or opinions presented within this e-mail are solely those of\n> the author and do not necessarily represent those of any company, unless\n> otherwise expressly stated.\n>\n", "msg_date": "Tue, 4 Jul 2006 16:27:44 -0400", "msg_from": "Gene <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing LIKE '%2345' queries" }, { "msg_contents": "Gene wrote:\n> Thanks for the suggestion. Actually I went ahead and created a reverse\n> function using plpgsql, created an index using reverse column and now\n> my queries use \"where reverse(column) like reverse('%2345') and it's\n> using the index like i hoped it would! Now if I could figure out how\n> to optimize like '%2345%' queries. I don't want to create many\n> indexes though the table is very write heavy.\n\nYou can't because that text can be anywhere inside the database field, \nso the whole field basically has to be checked to see if it's there.\n\nYou could check out full text indexing (tsearch2).\n\n<shameless plug>\nhttp://www.designmagick.com/article/27/PostgreSQL/Introduction-to-Full-Text-Indexing\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 05 Jul 2006 10:22:42 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing LIKE '%2345' queries" } ]
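Putting the pieces of this thread together, a minimal sketch of the reverse-index trick might look like the following. The table and column names ("phones", "phone") are placeholders, and since PostgreSQL of this vintage has no built-in reverse(), a simple plpgsql version is assumed:

CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
DECLARE
    result text := '';
    i integer;
BEGIN
    -- walk the string backwards and rebuild it
    FOR i IN REVERSE length($1)..1 LOOP
        result := result || substr($1, i, 1);
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;

-- expression index on the reversed value; IMMUTABLE above is required
-- for the function to be usable in an index expression
CREATE INDEX phones_reverse_idx ON phones (reverse(phone));
-- (in a non-C locale, declaring the index with the text_pattern_ops
--  operator class may be needed before LIKE can use it)

-- the suffix search becomes a prefix search on the reversed value,
-- which can use the index above:
SELECT * FROM phones WHERE reverse(phone) LIKE reverse('%2345');
-- i.e.  ... WHERE reverse(phone) LIKE '5432%'

As noted above, this only helps anchored suffixes such as '%2345'; a pattern floating in the middle ('%2345%') still cannot use a btree index, which is where tsearch2 or similar full-text approaches come in.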
[ { "msg_contents": "I have a very slow query when enable_seqscan=on and very fast when\nenable_seqscan=off. My schema looks like this (relevant columns\nonly):\n\ncreate table organizations (\n\torganization_id serial primary key,\n\torganization varchar(200) not null,\n\torganization_location varchar(55) not null\n\t-- and several irrelevant columns\n); -- about 9000 records\ncreate table persons (\n\tperson_id serial primary key,\n\tsurname varchar(50) not null,\n\tforename varchar(35) not null,\n\torganization_id int references organizations,\n -- and several irrelevant columns\n); -- about 6500 records\ncreate index persons_surname_forename_person_id on persons (\n\tsurname,\n\tforename,\n\tlpad(person_id,10,'0')\n); -- I was hoping this would speed up array comparisions\n\n\nThe query looking for a position of a person of given person_id in a\nlist sorted by surname, forename and person_id and filtered by some\ncriteria. In this example person_id=1, forename~*'to' (about 400\npeople) and organization_location~*'warszawa' (about 2000\norganizations):\n\nselect count(*) as position from (select\n\t\tperson_id, surname, forename\n\t\tfrom persons\n\t\tnatural left join organizations\n\t\twhere forename~*'to' and organization_location~*'warszawa'\n\t) as person_filter\n\t\twhere array[surname, forename, lpad(person_id,10,'0')]\n\t\t<\n\t\t(select array[surname, forename, lpad(person_id,10,'0')]\n\t\t\tfrom persons where person_id=1);\n\nThis query take about 30 seconds when enable_seqscan=on and 65\nmilliseconds when off.\n\nWhen enable_seqscan=on:\n Aggregate (cost=785.72..785.73 rows=1 width=0) (actual time=27948.955..27948.956 rows=1 loops=1)\n InitPlan\n -> Index Scan using persons_pkey on persons (cost=0.00..3.11 rows=1 width=26) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (person_id = 1)\n -> Nested Loop (cost=0.00..782.60 rows=1 width=0) (actual time=27948.939..27948.939 rows=0 loops=1)\n Join Filter: (\"inner\".organization_id = \"outer\".organization_id)\n -> Seq Scan on organization (cost=0.00..480.95 rows=1 width=4) (actual time=0.071..69.702 rows=1892 loops=1)\n Filter: ((organization_location)::text ~* 'warszawa'::text)\n -> Seq Scan on persons (cost=0.00..296.10 rows=444 width=4) (actual time=14.720..14.720 rows=0 loops=1892)\n Filter: (((forename)::text ~* 'to'::text) AND (ARRAY[surname, forename, (lpad((person_id)::text, 10, '0'::text))::character varying] < $0))\n Total runtime: 27949.106 ms\n\nWhen enable_seqscan=off:\n Aggregate (cost=100001710.26..100001710.27 rows=1 width=0) (actual time=66.788..66.789 rows=1 loops=1)\n InitPlan\n -> Index Scan using persons_pkey on persons (cost=0.00..3.11 rows=1 width=26) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (person_id = 1)\n -> Hash Join (cost=100001408.81..100001707.14 rows=1 width=0) (actual time=66.756..66.756 rows=0 loops=1)\n Hash Cond: (\"outer\".organization_id = \"inner\".organization_id)\n -> Seq Scan on persons (cost=100000000.00..100000296.10 rows=444 width=4) (actual time=14.972..14.972 rows=0 loops=1)\n Filter: (((forename)::text ~* 'to'::text) AND (ARRAY[surname, forename, (lpad((person_id)::text, 10, '0'::text))::character varying] < $0))\n -> Hash (cost=1408.81..1408.81 rows=1 width=4) (actual time=51.763..51.763 rows=1892 loops=1)\n -> Index Scan using organizations_pkey on organizations (cost=0.00..1408.81 rows=1 width=4) (actual time=0.049..48.233 rows=1892 loops=1)\n Filter: ((organization_location)::text ~* 'warszawa'::text)\n Total runtime: 66.933 ms\n\n\nDatabase is properly 
analyzed. postgresql-8.1.4 on Fedora Core 4.\n\nRegards\nTometzky\n\nPS. Actual table and column names are different (they're in Polish)\nbut I've translated them for better readability for english-speaking.\n\nPS. I wonder if it makes sense to \"enable_seqscan=off\" for every client\nif a database is small enough to fit in OS cache.\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Mon, 3 Jul 2006 22:31:07 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": true, "msg_subject": "query very slow when enable_seqscan=on" }, { "msg_contents": "On Mon, 2006-07-03 at 22:31 +0200, Tomasz Ostrowski wrote:\n> I have a very slow query when enable_seqscan=on and very fast when\n> enable_seqscan=off. My schema looks like this (relevant columns\n> only):\n> PS. Actual table and column names are different (they're in Polish)\n> but I've translated them for better readability for english-speaking.\n\nThanks\n\n> PS. I wonder if it makes sense to \"enable_seqscan=off\" for every client\n> if a database is small enough to fit in OS cache.\n\nYou can set this for individual statements if you choose to.\n\n> -> Seq Scan on organization (cost=0.00..480.95 rows=1\n> width=4) (actual time=0.071..69.702 rows=1892 loops=1)\n> Filter: ((organization_location)::text ~*\n> 'warszawa'::text)\n\nThe issue is caused by the under-estimation of the number of rows in the\ntable as a result of the regular expression comparison. As a result the\nplanner thinks it can choose a nested loops scan, though ends up doing\n1892 seq scans of persons, when it thought it would do only one. \n\nThe under estimation is a known issue. Posting to -perform for the\nrecord. \n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 04 Jul 2006 00:05:42 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query very slow when enable_seqscan=on" }, { "msg_contents": "Tomasz Ostrowski <[email protected]> writes:\n> I have a very slow query when enable_seqscan=on and very fast when\n> enable_seqscan=off.\n\nHere's your problem:\n\n> -> Seq Scan on organization (cost=0.00..480.95 rows=1 width=4) (actual time=0.071..69.702 rows=1892 loops=1)\n> Filter: ((organization_location)::text ~* 'warszawa'::text)\n\nIf it were estimating something like the actual number of rows matching\nthat filter, it'd never have chosen a nestloop plan like that.\n\nHow many rows are there in the organization table?\n\nThis is probably the fault of the pattern-selectivity heuristic: it's\nfar too optimistic about long match strings eliminating a lot of rows.\nI think there's been some discussion of modifying that logic but no\none's really stepped up with a better idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Jul 2006 19:05:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query very slow when enable_seqscan=on " }, { "msg_contents": "On Mon, 03 Jul 2006, Tom Lane wrote:\n\n> > -> Seq Scan on organization (cost=0.00..480.95 rows=1 width=4) (actual time=0.071..69.702 rows=1892 loops=1)\n> > Filter: ((organization_location)::text ~* 'warszawa'::text)\n> \n> How many rows are there in the organization table?\n\nAbout 9000. And about 6500 persons. 
\"Warszawa\" is a biggest city in\nPoland and a capital - many organizations are located there.\n\n> This is probably the fault of the pattern-selectivity heuristic:\n> it's far too optimistic about long match strings eliminating a lot\n> of rows. I think there's been some discussion of modifying that\n> logic but no one's really stepped up with a better idea.\n\nI think because there is no good solution to this - no statistical\ninformation is going to predict how much data will match a regular\nexpression. Maybe in this situation an algorithm should be\npessimistic - that it will return all rows, or all non-null rows or\nall rows no shorter than matching string (if it's a string and not\nfor example regex like [abcdefghijklmnopqrstuvwxyz] which is long but\nwill match basicaly everything). In my opinion it is better to\noverestimate most of the time than to risk underestimation by a\nfactor of 1000 and more.\n\nFor now I'm turning off seqscans. This is a second time I got\nterrible permormance with seqscans turned on because of bad\nestimation. And my database will probably fit in cache.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Tue, 4 Jul 2006 10:37:33 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query very slow when enable_seqscan=on" }, { "msg_contents": "Tomasz Ostrowski <[email protected]> writes:\n> I think because there is no good solution to this - no statistical\n> information is going to predict how much data will match a regular\n> expression.\n\nWell, it's certainly hard to imagine simple stats that would let the\ncode guess that, say, \"warsa\" and \"warsaw\" match nearly the same\n(large) number of rows while \"warsawq\" matches nothing.\n\nI think the real problem here is that regex matching is the wrong tool\nfor the job. Have you looked into a full-text index (tsearch2)?\nWith something like that, the index operator has at least got the\ncorrect conceptual model, ie, looking for indexed words. I'm not sure\nif they have any decent statistical support for it :-( but in theory\nthat seems doable, whereas regex estimation will always be a crapshoot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Jul 2006 09:56:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query very slow when enable_seqscan=on " }, { "msg_contents": "On Tue, 04 Jul 2006, Tom Lane wrote:\n\n> I think the real problem here is that regex matching is the wrong\n> tool for the job. Have you looked into a full-text index\n> (tsearch2)?\n\nSo much to do with so little time...\n\nI've briefly looked into it but:\n\n- it's complicated;\n\n- it is not needed - basic scan is good enough for the amount of data\n we have (if a sane query plan is chosen by a database);\n\n- we have data in many languages (including based on cyryllic\n alphabet) - languages which use different forms of the same word\n based on context, for example:\n Warszawa\n Warszawy\n Warszawie\n Warszawďż˝\n Warszawďż˝\n Warszawo\n All of the above could be translated to \"Warsaw\". So we need to\n support matching parts of words (\"warszaw\"), which I haven't seen\n in tsearch2 (maybe I've overlooked). 
We also have words, which\n different forms look like this: \"stďż˝\" \"stole\" \"stoďż˝u\" (Polish for\n \"table\") - when we need to find it we'd need to list every possible\n form (about 10) or use a regex like: 'st[oďż˝][lďż˝]'.\n\n> With something like that, the index operator has at least got the\n> correct conceptual model, ie, looking for indexed words. I'm not sure\n> if they have any decent statistical support for it :-( but in theory\n> that seems doable, whereas regex estimation will always be a crapshoot.\n\nSo why estimate regex expressions if there is no estimation possible?\nLet's set this estimate to be pessimistic (match everything or\neverything not null) and it will choose better plans. At least until\nsomebody will figure out better approach.\n\nPozdrawiam\nTometzky\n-- \nBest of prhn - najzabawniejsze teksty polskiego UseNet-u\nhttp://prhn.dnsalias.org/\n Chaos zawsze pokonuje porzďż˝dek, gdyďż˝ jest lepiej zorganizowany.\n [ Terry Pratchett ]\n", "msg_date": "Tue, 4 Jul 2006 16:44:08 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query very slow when enable_seqscan=on" }, { "msg_contents": "On Tue, Jul 04, 2006 at 04:44:08PM +0200, Tomasz Ostrowski wrote:\n> On Tue, 04 Jul 2006, Tom Lane wrote:\n> \n> > I think the real problem here is that regex matching is the wrong\n> > tool for the job. Have you looked into a full-text index\n> > (tsearch2)?\n> \n> So much to do with so little time...\n\nFor what it's worth, I've got pretty good results (at least taking the\nlittle amount of work I put into it) with trigram indexes, courtesy of\n$PGCONTRIB/pg_trgm.sql, just another amazing little piece by Bartunov\nand Sigaev.\n\nLet me know if you'd like to hear more.\n\nRegards\n-- tomás", "msg_date": "Tue, 4 Jul 2006 16:24:34 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: query very slow when enable_seqscan=on" }, { "msg_contents": "Tomasz Ostrowski <[email protected]> writes:\n> So why estimate regex expressions if there is no estimation possible?\n> Let's set this estimate to be pessimistic (match everything or\n> everything not null) and it will choose better plans.\n\nBetter plans for this specific example, worse plans for other cases.\nLife is not that simple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Jul 2006 14:15:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query very slow when enable_seqscan=on " }, { "msg_contents": "On Tue, 04 Jul 2006, Tom Lane wrote:\n\n> Tomasz Ostrowski <[email protected]> writes:\n> > So why estimate regex expressions if there is no estimation possible?\n> > Let's set this estimate to be pessimistic (match everything or\n> > everything not null) and it will choose better plans.\n> \n> Better plans for this specific example, worse plans for other cases.\n> Life is not that simple.\n\nIt isn't. This worse plans will be choosen only when pattern/regex\nmatching is used and will be, say, 2 times worse.\n\nWhat I'm trying to point out is that some people use regular\nexpressions for filtering rows. When the program is written it is\noften impossible to know what data will be put into it. And when a\nprogram is unexpectedly 2000 times slower than normal it is much\nworse than if it is 2 times slower, but predictable.\n\nI know Postgres uses probabilistic approach so there's always a\nprobability that the planner chooses very wrong. But this probability\nis so small that it can be ignored. 
With pattern/regex matching it is\nnot.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Wed, 5 Jul 2006 10:33:59 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query very slow when enable_seqscan=on" } ]
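For what it's worth, the planner override mentioned earlier in this thread does not have to be global: it can be scoped to one transaction or one session, which avoids penalising the queries whose estimates are fine. A rough sketch:

BEGIN;
SET LOCAL enable_seqscan = off;   -- reverts automatically at COMMIT/ROLLBACK
-- ... run the problematic query here ...
COMMIT;

-- or, for the current session only:
SET enable_seqscan = off;
-- ... run the affected queries ...
RESET enable_seqscan;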
[ { "msg_contents": "Do you really need to create one *DB* per client - that is, is one\nschema (in the same DB) per client out of the question? If not, I would\nlook into moving all reference tables (read-only data, constants and\nsuch) into a common schema (with read permission granted to each\nclient/role), that way reducing the amount of objects needed to be\ncreated/maintained and at the same time reducing the memory requirements\n(lots of shared objects == lots of reused shared buffers). Set the\ndefault_tablespace variable per client (login role) also so that the I/O\nload can be balanced. A system based on Opterons such as the HP DL385 or\nDL585 with two CPUs (or four if you go for the 585), 8-16Gb of RAM and a\ndecent storage system with 14-28 disks could be worth evaluating.\n\n/Mikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of David\nGagnon\nSent: den 3 juli 2006 13:42\nTo: [email protected]\nSubject: [PERFORM] Is postgresql ca do the job for software deployed in\nASP ou SaaS mode?\n\nHi all,\n\n I've been working on my personal project for 3.5 years now. I\ndeveloped an ERP system in web/java. Now the people I will work with\nsuggest to offers it in Saas mode. Which means my customer will connect\nto my website and found they ERP software and data there. It's not the\ndeployment I planned initially so if you can just validate some\ntechnicals points to be sure it's not crazy using Postgresl here and not\na big $$$ db to do the job.\n\nTypically I will have 1db per client and around 150 tables per db. So\nsince I hope I didn`t work all those year for nothing .. I expect to\nhave bunch of clients witch means the same amount of db since I have 1\ndb/client. \n\nCan I hope having several hundred of db on 1 db server? Like 250 dbs =\n250 client = 360 000 tables !!!\nSo is there a limit for the number of db in the db server ?(this spec is\nnot on the website) What about the performance? Can I expect to have the\nsame performance? \n\nSince I put everything on the web I do needs an High Availability\ninfrastructure. I looked into SlonyI and Mammoth to replicate the db\nbut since SlonyI use triggers can I expect it to do the job? Is Mammoth\nis the only available solution?\n\nLast question and not the least I'm reading this performance list for\nseveral years now and know suggestion about hardware to run postgresl is\ndiscussed. Since I wrote software there severals points about hardware\nthat I don`t understand. Do you have any suggestion of platform to run\ninto my Saas configuration? I do need the WISE one! I'm pretty sure\nthat if I was a big company I would be able throw bunch of $$$$ but it's\nnot my case. I'm pretty sure it exists out there some piece of Hardware\nthat would do the job perfectly with a fair price.\n\nSo far I did understand that Postgresql loves Opteron and I have looked\ninto the dl145 series of HP. I did understand that Dell Hardware it`s\nnot reliable. But it's still not clear what should be my requirement\nfor memory, disk, nb cpu, cpu power, etc.\n\nI'm pretty sure it`s better to have more slower CPUs that having the\nlatest Opteron available on the market, or more slower servers that\nhaving the fastest one... am I right? But agains what it`s the optimal\nchoice?\n\nThanks you to share your knowledge on those point. 
I do consider using\nPostgresql is the Smart choice in my project since the beginning but\nbefore putting all the money (That I don`t have ..:-)) to buy some\nhardware I just want to be sure I'm not crazy!\n\nThanks for your help I really appreciate it!!\n\nBest Regards\n/David\n\n\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Tue, 4 Jul 2006 10:03:45 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP ou SaaS\n\tmode?" }, { "msg_contents": "Hi All,\n \n First thanks for your help everyone! \n\n\n\n\nMikael Carneholm wrote:\n> Do you really need to create one *DB* per client - that is, is one\n> schema (in the same DB) per client out of the question? If not, I would\n> look into moving all reference tables (read-only data, constants and\n> such) into a common schema (with read permission granted to each\n> client/role), that way reducing the amount of objects needed to be\n> created/maintained and at the same time reducing the memory requirements\n> (lots of shared objects == lots of reused shared buffers). \nFor my application there is very little info I can share. Maybe less \nthan 10 on 100 actually so I not sure it worth it ...\n\n\n\n> Set the\n> default_tablespace variable per client (login role) also so that the I/O\n> load can be balanced. A system based on Opterons such as the HP DL385 or\n> DL585 with two CPUs (or four if you go for the 585), 8-16Gb of RAM and a\n> decent storage system with 14-28 disks could be worth evaluating.\n>\n> /Mikael\n> \nI look into the HP DL385 and DL585 on HP site and they are price between \n3000 and 15000$$ (base price). Thats quite a difference? So is the HP \nDL385 with 2 cpus will do the job ?\n\nhttp://h71016.www7.hp.com/dstore/ctoBases.asp?jumpid=re_NSS_dl585storageserver&oi=E9CED&BEID=19701&SBLID=&ProductLineId=450&FamilyId=2230&LowBaseId=&LowPrice=&familyviewgroup=405&viewtype=Matrix\nhttp://h71016.www7.hp.com/dstore/ctoBases.asp?ProductLineId=431&FamilyId=2048&jumpid=re_hphqiss/Ovw_Buy/DL385\n\nI will look more deeply into them in detail trying to understand \nsomething ...\n\nThanks for your help!\nBest Regards\n/David\n\n\n\n\n\n\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of David\n> Gagnon\n> Sent: den 3 juli 2006 13:42\n> To: [email protected]\n> Subject: [PERFORM] Is postgresql ca do the job for software deployed in\n> ASP ou SaaS mode?\n>\n> Hi all,\n>\n> I've been working on my personal project for 3.5 years now. I\n> developed an ERP system in web/java. Now the people I will work with\n> suggest to offers it in Saas mode. Which means my customer will connect\n> to my website and found they ERP software and data there. It's not the\n> deployment I planned initially so if you can just validate some\n> technicals points to be sure it's not crazy using Postgresl here and not\n> a big $$$ db to do the job.\n>\n> Typically I will have 1db per client and around 150 tables per db. So\n> since I hope I didn`t work all those year for nothing .. I expect to\n> have bunch of clients witch means the same amount of db since I have 1\n> db/client. \n>\n> Can I hope having several hundred of db on 1 db server? 
Like 250 dbs =\n> 250 client = 360 000 tables !!!\n> So is there a limit for the number of db in the db server ?(this spec is\n> not on the website) What about the performance? Can I expect to have the\n> same performance? \n>\n> Since I put everything on the web I do needs an High Availability\n> infrastructure. I looked into SlonyI and Mammoth to replicate the db\n> but since SlonyI use triggers can I expect it to do the job? Is Mammoth\n> is the only available solution?\n>\n> Last question and not the least I'm reading this performance list for\n> several years now and know suggestion about hardware to run postgresl is\n> discussed. Since I wrote software there severals points about hardware\n> that I don`t understand. Do you have any suggestion of platform to run\n> into my Saas configuration? I do need the WISE one! I'm pretty sure\n> that if I was a big company I would be able throw bunch of $$$$ but it's\n> not my case. I'm pretty sure it exists out there some piece of Hardware\n> that would do the job perfectly with a fair price.\n>\n> So far I did understand that Postgresql loves Opteron and I have looked\n> into the dl145 series of HP. I did understand that Dell Hardware it`s\n> not reliable. But it's still not clear what should be my requirement\n> for memory, disk, nb cpu, cpu power, etc.\n>\n> I'm pretty sure it`s better to have more slower CPUs that having the\n> latest Opteron available on the market, or more slower servers that\n> having the fastest one... am I right? But agains what it`s the optimal\n> choice?\n>\n> Thanks you to share your knowledge on those point. I do consider using\n> Postgresql is the Smart choice in my project since the beginning but\n> before putting all the money (That I don`t have ..:-)) to buy some\n> hardware I just want to be sure I'm not crazy!\n>\n> Thanks for your help I really appreciate it!!\n>\n> Best Regards\n> /David\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n>\n> \n\n\n", "msg_date": "Tue, 04 Jul 2006 07:55:58 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP" }, { "msg_contents": "Hi, Mikael,\n\nJust my 2 cents:\n\nMikael Carneholm wrote:\n> Do you really need to create one *DB* per client - that is, is one\n> schema (in the same DB) per client out of the question?\n\nSometimes, schemas would work _technically_, but not politically, as a\npostgresql user cannot be prevented from listing all schemas (or even\nall databases in the same user), regardless whether he/she has access\nrights.\n\nBut it is not always acceptable that a customer knows which other\ncustomers one has.\n\nThis forces the use of the \"one cluster per customer\" paradigm.\n\n\nThanks,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 05 Jul 2006 12:38:20 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in" }, { "msg_contents": "A sodden late night idea ... schemas don't need to have names that are meaningful to outsiders.\n\nStill, the point about \"political\" aspects is an important one. 
OTH, schemas provide an elegant way of segregating data.\n\nMy $0.02 (not worth what it was)\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Markus Schaber\nSent:\tWed 7/5/2006 3:38 AM\nTo:\[email protected]\nCc:\t\nSubject:\tRe: [PERFORM] Is postgresql ca do the job for software deployed in\n\nHi, Mikael,\n\nJust my 2 cents:\n\nMikael Carneholm wrote:\n> Do you really need to create one *DB* per client - that is, is one\n> schema (in the same DB) per client out of the question?\n\nSometimes, schemas would work _technically_, but not politically, as a\npostgresql user cannot be prevented from listing all schemas (or even\nall databases in the same user), regardless whether he/she has access\nrights.\n\nBut it is not always acceptable that a customer knows which other\ncustomers one has.\n\nThis forces the use of the \"one cluster per customer\" paradigm.\n\n\nThanks,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n!DSPAM:44ab96fb98231804284693!\n\n\n\n\n", "msg_date": "Wed, 5 Jul 2006 04:06:23 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in" }, { "msg_contents": "Hi, Gregory,\n\nGregory S. Williamson wrote:\n> A sodden late night idea ... schemas don't need to have names that\n> are meaningful to outsiders.\n\nYes, but having schema names like A34FZ37 not only qualifies for\nthedailywtf.com, but also tends to produce maintainance nightmares.\n\nAnd it still allows the customer to estimate the amount of customers.\n\n> Still, the point about \"political\" aspects is an important one. OTH,\n> schemas provide an elegant way of segregating data.\n\nYes, they do, and in the ASP case (where we have control over all\nsoftware that connects to PostgreSQL) we use the one schema per customer\nparadigm quite successfully.\n\n> My $0.02 (not worth what it was)\n\nOh, I think the're at least $0.03 cents worth. :-)\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 05 Jul 2006 21:18:07 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in" } ]
[ { "msg_contents": "> For my application there is very little info I can share. Maybe less\nthan 10 on 100 actually so I not sure it worth it ...\n\nOk, so 90% of the tables are being written to - this either means that\nyour application uses very little constants, or that it has access to\nconstans that are stored somewhere else (eg, a JMX Mbean that's\ninitialized from property files on application startup). Would it be too\nmuch work to redesign the DB model to support more than one client? \n\n>I look into the HP DL385 and DL585 on HP site and they are price\nbetween \n>3000 and 15000$$ (base price). Thats quite a difference? So is the HP\n\n>DL385 with 2 cpus will do the job ?\n\nYeah, there's quite a difference on the price tags between those two.\nI'd vote for the DL385 since the sockets for the two extra CPU's won't\ngive you linear scalability per $ in the end. A single machine may be\ncheaper to administrate, but if administration costs are\nirrelevant/negligible I'd go for several 2-socket machines instead of\none 4-socket machine.\n\n/Mikael\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of David\n> Gagnon\n> Sent: den 3 juli 2006 13:42\n> To: [email protected]\n> Subject: [PERFORM] Is postgresql ca do the job for software deployed\nin\n> ASP ou SaaS mode?\n>\n> Hi all,\n>\n> I've been working on my personal project for 3.5 years now. I\n> developed an ERP system in web/java. Now the people I will work with\n> suggest to offers it in Saas mode. Which means my customer will\nconnect\n> to my website and found they ERP software and data there. It's not\nthe\n> deployment I planned initially so if you can just validate some\n> technicals points to be sure it's not crazy using Postgresl here and\nnot\n> a big $$$ db to do the job.\n>\n> Typically I will have 1db per client and around 150 tables per db. So\n> since I hope I didn`t work all those year for nothing .. I expect to\n> have bunch of clients witch means the same amount of db since I have 1\n> db/client. \n>\n> Can I hope having several hundred of db on 1 db server? Like 250 dbs\n=\n> 250 client = 360 000 tables !!!\n> So is there a limit for the number of db in the db server ?(this spec\nis\n> not on the website) What about the performance? Can I expect to have\nthe\n> same performance? \n>\n> Since I put everything on the web I do needs an High Availability\n> infrastructure. I looked into SlonyI and Mammoth to replicate the db\n> but since SlonyI use triggers can I expect it to do the job? Is\nMammoth\n> is the only available solution?\n>\n> Last question and not the least I'm reading this performance list for\n> several years now and know suggestion about hardware to run postgresl\nis\n> discussed. Since I wrote software there severals points about\nhardware\n> that I don`t understand. Do you have any suggestion of platform to\nrun\n> into my Saas configuration? I do need the WISE one! I'm pretty sure\n> that if I was a big company I would be able throw bunch of $$$$ but\nit's\n> not my case. I'm pretty sure it exists out there some piece of\nHardware\n> that would do the job perfectly with a fair price.\n>\n> So far I did understand that Postgresql loves Opteron and I have\nlooked\n> into the dl145 series of HP. I did understand that Dell Hardware it`s\n> not reliable. 
But it's still not clear what should be my requirement\n> for memory, disk, nb cpu, cpu power, etc.\n>\n> I'm pretty sure it`s better to have more slower CPUs that having the\n> latest Opteron available on the market, or more slower servers that\n> having the fastest one... am I right? But agains what it`s the\noptimal\n> choice?\n>\n> Thanks you to share your knowledge on those point. I do consider\nusing\n> Postgresql is the Smart choice in my project since the beginning but\n> before putting all the money (That I don`t have ..:-)) to buy some\n> hardware I just want to be sure I'm not crazy!\n>\n> Thanks for your help I really appreciate it!!\n>\n> Best Regards\n> /David\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n>\n> \n\n\n\n", "msg_date": "Tue, 4 Jul 2006 15:05:42 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP ou SaaS\n\tmode?" }, { "msg_contents": "Mikael Carneholm wrote:\n>> For my application there is very little info I can share. Maybe less\n>> \n> than 10 on 100 actually so I not sure it worth it ...\n>\n> Ok, so 90% of the tables are being written to - this either means that\n> your application uses very little constants, or that it has access to\n> constans that are stored somewhere else (eg, a JMX Mbean that's\n> initialized from property files on application startup). Would it be too\n> much work to redesign the DB model to support more than one client? \n> \nYes configuration are in property files or somewhere else. I will keep \nthis solution in mind but for now I really think that would really \ncomplicated for what it will give in return...\n\n\n\n> \n>> I look into the HP DL385 and DL585 on HP site and they are price\n>> \n> between \n> \n>> 3000 and 15000$$ (base price). Thats quite a difference? So is the HP\n>> \n>\n> \n>> DL385 with 2 cpus will do the job ?\n>> \n>\n> Yeah, there's quite a difference on the price tags between those two.\n> I'd vote for the DL385 since the sockets for the two extra CPU's won't\n> give you linear scalability per $ in the end. A single machine may be\n> cheaper to administrate, but if administration costs are\n> irrelevant/negligible I'd go for several 2-socket machines instead of\n> one 4-socket machine.\n> \nI do need 2 machines since I need an HA solution. So on top of those \nquestion I try to figure out if Slony-I can do the job in my scenario or \ndo I need the Mammoth solution. I'm searching the list right now and \nthere is not a lot of info... :-( Any Idea?\n\nSo thanks for the info about the DL385 I will look deeply into it !\n\nBest Regards\n/David\n\n\n> /Mikael\n>\n> \n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of David\n>> Gagnon\n>> Sent: den 3 juli 2006 13:42\n>> To: [email protected]\n>> Subject: [PERFORM] Is postgresql ca do the job for software deployed\n>> \n> in\n> \n>> ASP ou SaaS mode?\n>>\n>> Hi all,\n>>\n>> I've been working on my personal project for 3.5 years now. I\n>> developed an ERP system in web/java. Now the people I will work with\n>> suggest to offers it in Saas mode. Which means my customer will\n>> \n> connect\n> \n>> to my website and found they ERP software and data there. 
It's not\n>> \n> the\n> \n>> deployment I planned initially so if you can just validate some\n>> technicals points to be sure it's not crazy using Postgresl here and\n>> \n> not\n> \n>> a big $$$ db to do the job.\n>>\n>> Typically I will have 1db per client and around 150 tables per db. So\n>> since I hope I didn`t work all those year for nothing .. I expect to\n>> have bunch of clients witch means the same amount of db since I have 1\n>> db/client. \n>>\n>> Can I hope having several hundred of db on 1 db server? Like 250 dbs\n>> \n> =\n> \n>> 250 client = 360 000 tables !!!\n>> So is there a limit for the number of db in the db server ?(this spec\n>> \n> is\n> \n>> not on the website) What about the performance? Can I expect to have\n>> \n> the\n> \n>> same performance? \n>>\n>> Since I put everything on the web I do needs an High Availability\n>> infrastructure. I looked into SlonyI and Mammoth to replicate the db\n>> but since SlonyI use triggers can I expect it to do the job? Is\n>> \n> Mammoth\n> \n>> is the only available solution?\n>>\n>> Last question and not the least I'm reading this performance list for\n>> several years now and know suggestion about hardware to run postgresl\n>> \n> is\n> \n>> discussed. Since I wrote software there severals points about\n>> \n> hardware\n> \n>> that I don`t understand. Do you have any suggestion of platform to\n>> \n> run\n> \n>> into my Saas configuration? I do need the WISE one! I'm pretty sure\n>> that if I was a big company I would be able throw bunch of $$$$ but\n>> \n> it's\n> \n>> not my case. I'm pretty sure it exists out there some piece of\n>> \n> Hardware\n> \n>> that would do the job perfectly with a fair price.\n>>\n>> So far I did understand that Postgresql loves Opteron and I have\n>> \n> looked\n> \n>> into the dl145 series of HP. I did understand that Dell Hardware it`s\n>> not reliable. But it's still not clear what should be my requirement\n>> for memory, disk, nb cpu, cpu power, etc.\n>>\n>> I'm pretty sure it`s better to have more slower CPUs that having the\n>> latest Opteron available on the market, or more slower servers that\n>> having the fastest one... am I right? But agains what it`s the\n>> \n> optimal\n> \n>> choice?\n>>\n>> Thanks you to share your knowledge on those point. I do consider\n>> \n> using\n> \n>> Postgresql is the Smart choice in my project since the beginning but\n>> before putting all the money (That I don`t have ..:-)) to buy some\n>> hardware I just want to be sure I'm not crazy!\n>>\n>> Thanks for your help I really appreciate it!!\n>>\n>> Best Regards\n>> /David\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> ---------------------------(end of\n>> \n> broadcast)---------------------------\n> \n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>>\n>>\n>>\n>> \n>> \n>\n>\n>\n>\n>\n>\n> \n\n\n", "msg_date": "Tue, 04 Jul 2006 09:33:20 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is postgresql ca do the job for software deployed in ASP" } ]
[ { "msg_contents": "Hi all,\nI got this query, I'm having indexes for PropertyId and Dates columns across\nall the tables, but still it takes ages to get me the result. What indexes\nwould be proposed on this, or I'm helpless?\n\nFROM STG_Property a\n\n FULL OUTER JOIN\n STG_PropConfirmedLogs b\n ON (a.PropertyId = b.PropertyId AND a.p_LastModified = b.p_Modified_Date\n)\n\n FULL OUTER JOIN\n STG_PropConnectionFeesLogs c\n ON ((a.PropertyId = c.PropertyId AND a.p_LastModified = c.p_ChangedOn)\n OR (b.PropertyId = c.PropertyId AND b.p_Modified_Date = c.p_ChangedOn))\n\n FULL OUTER JOIN\n STG_PropDeletedLogs d\n ON ((a.PropertyId = d.PropertyId AND a.p_LastModified = d.p_DeletedOn)\n OR (b.PropertyId = d.PropertyId AND b.p_Modified_Date = d.p_DeletedOn)\n OR (c.PropertyId = d.PropertyId AND c.p_ChangedOn = d.p_DeletedOn))\n\n FULL OUTER JOIN\n STG_PropFEWALogs e\n ON ((a.PropertyId = e.PropertyId AND a.p_LastModified =\ne.p_Modified_Date)\n OR (b.PropertyId = e.PropertyId AND b.p_Modified_Date =\ne.p_Modified_Date) OR (c.PropertyId = e.PropertyId AND c.p_ChangedOn =\ne.p_Modified_Date)\n OR (d.PropertyId = e.PropertyId AND d.p_DeletedOn = e.p_Modified_Date))\n\n FULL OUTER JOIN\n STG_PropInSewerNetworkLogs f\n ON ((a.PropertyId = f.PropertyId AND a.p_LastModified =\nf.p_Modified_Date)\n OR (b.PropertyId = f.PropertyId AND b.p_Modified_Date =\nf.p_Modified_Date)\n OR (c.PropertyId = f.PropertyId AND c.p_ChangedOn = f.p_Modified_Date)\n OR (d.PropertyId = f.PropertyId AND d.p_DeletedOn = f.p_Modified_Date)\n OR (e.PropertyId = f.PropertyId AND e.p_Modified_Date =\nf.p_Modified_Date)) FULL OUTER JOIN\n STG_PropTypeLogs g\n ON ((a.PropertyId = g.PropertyId AND a.p_LastModified = g\n.p_LastModified)\n OR (b.PropertyId = g.PropertyId AND b.p_Modified_Date = g\n.p_LastModified)\n OR (c.PropertyId = g.PropertyId AND c.p_ChangedOn = g.p_LastModified)\n OR (d.PropertyId = g.PropertyId AND d.p_DeletedOn = g.p_LastModified)\n OR (e.PropertyId = g.PropertyId AND e.p_Modified_Date = g\n.p_LastModified)\n OR (f.PropertyId = g.PropertyId AND f.p_Modified_Date = g\n.p_LastModified))\n\n-- Luckys\n\nHi all,\nI got this query, I'm having indexes for PropertyId and Dates columns across all the tables, but still it takes ages to get me the result. 
What indexes would be proposed on this, or I'm helpless?\n \nFROM  STG_Property a   FULL OUTER JOIN     STG_PropConfirmedLogs b    \nON (a.PropertyId = b.PropertyId AND a.p_LastModified = b.p_Modified_Date)   FULL OUTER JOIN\n    STG_PropConnectionFeesLogs c    ON ((a.PropertyId = c.PropertyId AND a.p_LastModified = \nc.p_ChangedOn)    OR  (b.PropertyId = c.PropertyId AND b.p_Modified_Date = c.p_ChangedOn))\n   FULL OUTER JOIN    STG_PropDeletedLogs d    ON ((a.PropertyId = d.PropertyId \nAND a.p_LastModified = d.p_DeletedOn)    OR  (b.PropertyId = d.PropertyId AND b.p_Modified_Date = d.p_DeletedOn)    OR  (\nc.PropertyId = d.PropertyId AND c.p_ChangedOn = d.p_DeletedOn))   FULL OUTER\n JOIN    STG_PropFEWALogs e    ON ((a.PropertyId = e.PropertyId AND a.p_LastModified = e.p_Modified_Date)    \nOR  (b.PropertyId = e.PropertyId AND b.p_Modified_Date = e.p_Modified_Date) OR  (c.PropertyId = e.PropertyId \nAND c.p_ChangedOn = e.p_Modified_Date)    OR  (d.PropertyId = e.PropertyId AND d.p_DeletedOn = e.p_Modified_Date)) \n  FULL OUTER JOIN    STG_PropInSewerNetworkLogs f    ON ((a.PropertyId = f.PropertyId \nAND a.p_LastModified = f.p_Modified_Date)    OR  (b.PropertyId = f.PropertyId AND b.p_Modified_Date = f.p_Modified_Date)    OR\n  (c.PropertyId = f.PropertyId AND c.p_ChangedOn = f.p_Modified_Date)    OR  (d.PropertyId = f.PropertyId\n AND d.p_DeletedOn = f.p_Modified_Date)     OR  (e.PropertyId = f.PropertyId AND e.p_Modified_Date = f.p_Modified_Date))   \nFULL OUTER JOIN    STG_PropTypeLogs g    ON ((a.PropertyId = g\n.PropertyId AND a.p_LastModified = g.p_LastModified)    OR  (b.PropertyId = g.PropertyId \nAND b.p_Modified_Date = g.p_LastModified)    OR  (c.PropertyId = g.PropertyId \nAND c.p_ChangedOn = g.p_LastModified)    OR  (d.PropertyId = g.PropertyId \nAND d.p_DeletedOn = g.p_LastModified)    OR  (e.PropertyId = g.PropertyId AND e.p_Modified_Date\n = g.p_LastModified)    OR  (f.PropertyId = g.PropertyId AND f.p_Modified_Date = \ng.p_LastModified))\n \n-- Luckys", "msg_date": "Tue, 4 Jul 2006 20:00:02 +0400", "msg_from": "Luckys <[email protected]>", "msg_from_op": true, "msg_subject": "how to tune this query." }, { "msg_contents": "I don't think indexes are going to help you here - with the FULL OUTER \nJOINs, the query will have to look at and include each row from each \ntable you query from anyway, so it's going to choose sequential scans. \nIn addition, some of the lower join conditions are going to take forever.\n\nWhat's is your goal? The volume of data that I imagine this query would \nproduce can't possibly be useful. I'm guessing at the very least you'll \nwant to LEFT OUTER JOIN everything back against STG_Property, and leave \nthe other join conditions out of each ON statement.\n\nLuckys wrote:\n\n> Hi all,\n> I got this query, I'm having indexes for PropertyId and Dates columns \n> across all the tables, but still it takes ages to get me the result. 
\n> What indexes would be proposed on this, or I'm helpless?\n> \n> FROM STG_Property a\n> \n> FULL OUTER JOIN\n> STG_PropConfirmedLogs b\n> ON (a.PropertyId = b.PropertyId AND a.p_LastModified = \n> b.p_Modified_Date)\n> \n> FULL OUTER JOIN\n> STG_PropConnectionFeesLogs c\n> ON ((a.PropertyId = c.PropertyId AND a.p_LastModified = c.p_ChangedOn)\n> OR (b.PropertyId = c.PropertyId AND b.p_Modified_Date = \n> c.p_ChangedOn))\n> \n> FULL OUTER JOIN\n> STG_PropDeletedLogs d\n> ON ((a.PropertyId = d.PropertyId AND a.p_LastModified = d.p_DeletedOn)\n> OR (b.PropertyId = d.PropertyId AND b.p_Modified_Date = \n> d.p_DeletedOn)\n> OR ( c.PropertyId = d.PropertyId AND c.p_ChangedOn = d.p_DeletedOn))\n> \n> FULL OUTER JOIN\n> STG_PropFEWALogs e\n> ON ((a.PropertyId = e.PropertyId AND a.p_LastModified = \n> e.p_Modified_Date)\n> OR (b.PropertyId = e.PropertyId AND b.p_Modified_Date = \n> e.p_Modified_Date) OR (c.PropertyId = e.PropertyId AND c.p_ChangedOn \n> = e.p_Modified_Date)\n> OR (d.PropertyId = e.PropertyId AND d.p_DeletedOn = \n> e.p_Modified_Date))\n> \n> FULL OUTER JOIN\n> STG_PropInSewerNetworkLogs f\n> ON ((a.PropertyId = f.PropertyId AND a.p_LastModified = \n> f.p_Modified_Date)\n> OR (b.PropertyId = f.PropertyId AND b.p_Modified_Date = \n> f.p_Modified_Date)\n> OR (c.PropertyId = f.PropertyId AND c.p_ChangedOn = \n> f.p_Modified_Date)\n> OR (d.PropertyId = f.PropertyId AND d.p_DeletedOn = \n> f.p_Modified_Date)\n> OR (e.PropertyId = f.PropertyId AND e.p_Modified_Date = \n> f.p_Modified_Date)) FULL OUTER JOIN\n> STG_PropTypeLogs g\n> ON ((a.PropertyId = g .PropertyId AND a.p_LastModified = \n> g.p_LastModified)\n> OR (b.PropertyId = g.PropertyId AND b.p_Modified_Date = \n> g.p_LastModified)\n> OR (c.PropertyId = g.PropertyId AND c.p_ChangedOn = g.p_LastModified)\n> OR (d.PropertyId = g.PropertyId AND d.p_DeletedOn = g.p_LastModified)\n> OR (e.PropertyId = g.PropertyId AND e.p_Modified_Date = \n> g.p_LastModified)\n> OR (f.PropertyId = g.PropertyId AND f.p_Modified_Date = \n> g.p_LastModified))\n> \n> -- Luckys\n\n-- \n\nNolan Cafferky\nSoftware Developer\nIT Department\nRBS Interactive\[email protected]\n\n", "msg_date": "Tue, 04 Jul 2006 09:18:43 -0700", "msg_from": "Nolan Cafferky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to tune this query." }, { "msg_contents": "On 7/4/06, Luckys <[email protected]> wrote:\n>\n> Hi all,\n> I got this query, I'm having indexes for PropertyId and Dates columns across\n> all the tables, but still it takes ages to get me the result. What indexes\n> would be proposed on this, or I'm helpless?\n>\n\nI would suggest posting your table schemas and describe what you want\nthe results to look like. After years of following this list, I\nregard your query as something of a classic. There simply has to be\nan easier way of writing it.\n\nmerlin\n", "msg_date": "Fri, 7 Jul 2006 13:42:16 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to tune this query." } ]
[ { "msg_contents": "Hi,\n\nIf I run the query\n\nexplain analyze select * from ind_uni_100 where a=1 and b=1 and c=1\n\nI get the following plan:\n\nBitmap Heap Scan on ind_uni_100 (cost=942.50..1411.12 rows=125 width=104)\n(actual time=72.556..72.934 rows=116 loops=1)\n Recheck Cond: ((c = 1) AND (a = 1) AND (b = 1))\n -> BitmapAnd (cost=942.50..942.50 rows=125 width=0) (actual\ntime=72.421..72.421 rows=0 loops=1)\n -> Bitmap Index Scan on index_c_ind_uni_100 (cost=0.00..314.00\nrows=50000 width=0) (actual time=21.854..21.854 rows=49832 loops=1)\n Index Cond: (c = 1)\n -> Bitmap Index Scan on index_a_ind_uni_100 (cost=0.00..314.00\nrows=50000 width=0) (actual time=22.371..22.371 rows=50319 loops=1)\n Index Cond: (a = 1)\n -> Bitmap Index Scan on index_b_ind_uni_100 (cost=0.00..314.00\nrows=50000 width=0) (actual time=14.226..14.226 rows=49758 loops=1)\n Index Cond: (b = 1)\nTotal runtime: 73.395 ms\n\nWhich is quite reasonable.The table has 1.000.000 rows (17.242 pages). From\npg_stat_get_blocks_fetched I can see that there were 102 page requests for\ntable. So all things seem to work great here!\n\nBut if I multiply the size of the table ten-times (10.000.000 rows - 172.414\npages) and run the same query I get:\n \nexplain analyze select * from ind_uni_1000 where a=1 and b=1 and c=1\n\nBitmap Heap Scan on ind_uni_1000 (cost=9369.50..14055.74 rows=1250 width=104)\n(actual time=18111.415..176747.937 rows=1251 loops=1)\n Recheck Cond: ((c = 1) AND (a = 1) AND (b = 1))\n -> BitmapAnd (cost=9369.50..9369.50 rows=1250 width=0) (actual\ntime=17684.587..17684.587 rows=0 loops=1)\n -> Bitmap Index Scan on index_c_ind_uni_1000 (cost=0.00..3123.00\nrows=500000 width=0) (actual time=5704.624..5704.624 rows=500910 loops=1)\n Index Cond: (c = 1)\n -> Bitmap Index Scan on index_a_ind_uni_1000 (cost=0.00..3123.00\nrows=500000 width=0) (actual time=6147.962..6147.962 rows=500080 loops=1)\n Index Cond: (a = 1)\n -> Bitmap Index Scan on index_b_ind_uni_1000 (cost=0.00..3123.00\nrows=500000 width=0) (actual time=5767.754..5767.754 rows=500329 loops=1)\n Index Cond: (b = 1)\nTotal runtime: 176753.200 ms\n\nwhich is slower even than a seq scan. Now I get that there were 131.398 page\nrequests for table in order to retrieve almost 1250 tuples!Can someone explain\nwhy this is happening? All memory parameters are set to default.\n\nThanks!\n\n\n", "msg_date": "Wed, 5 Jul 2006 08:54:44 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problem with bitmap-index-scan plan" }, { "msg_contents": "[email protected] writes:\n> ... is quite reasonable.The table has 1.000.000 rows (17.242 pages). From\n> pg_stat_get_blocks_fetched I can see that there were 102 page requests for\n> table. So all things seem to work great here!\n\n> But if I multiply the size of the table ten-times (10.000.000 rows - 172.414\n> pages) and run the same query I get:\n> ...\n> which is slower even than a seq scan. Now I get that there were 131.398 page\n> requests for table in order to retrieve almost 1250 tuples!Can someone explain\n> why this is happening? All memory parameters are set to default.\n\nYou probably need to increase work_mem so that the bitmaps don't become\nlossy ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jul 2006 18:32:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with bitmap-index-scan plan " }, { "msg_contents": "\nYes that was the problem! 
Thank you very much\n\nOn Thu, 13 Jul 2006, Tom Lane wrote:\n\n> [email protected] writes:\n>> ... is quite reasonable.The table has 1.000.000 rows (17.242 pages). From\n>> pg_stat_get_blocks_fetched I can see that there were 102 page requests for\n>> table. So all things seem to work great here!\n>\n>> But if I multiply the size of the table ten-times (10.000.000 rows - 172.414\n>> pages) and run the same query I get:\n>> ...\n>> which is slower even than a seq scan. Now I get that there were 131.398 page\n>> requests for table in order to retrieve almost 1250 tuples!Can someone explain\n>> why this is happening? All memory parameters are set to default.\n>\n> You probably need to increase work_mem so that the bitmaps don't become\n> lossy ...\n>\n> \t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Jul 2006 12:25:44 +0300 (EEST)", "msg_from": "Kapadaidakis Yannis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with bitmap-index-scan plan " } ]
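A minimal way to test Tom's suggestion, without touching postgresql.conf, is to raise work_mem for a single session and re-check the plan; the figure below is only an illustration (the bare number is interpreted as kilobytes):

SET work_mem = 65536;  -- roughly 64 MB, for this session only
EXPLAIN ANALYZE
SELECT * FROM ind_uni_1000 WHERE a = 1 AND b = 1 AND c = 1;

If the extra memory keeps the three per-index bitmaps exact rather than lossy, the Bitmap Heap Scan only has to visit the pages that actually contain matching rows, instead of rechecking every page flagged in a lossy chunk, and the huge number of heap page requests should disappear.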
[ { "msg_contents": "Hello!\n\nI am facing some strange problems with PostgreSQL 8.0 performance.\nI have an application which handles a lot of tasks; each task is kept in a separate\ntable. Those tables are dropped and created again periodically (precisely -\nwhen new task results come back from the remote server). Each table can also have\nhundreds of thousands of records (but mostly they have just a few\nthousand).\n\nSometimes I see a performance loss when working with the database, and after I\nperformed vacuuming on the entire database, I saw that some tables and indexes in pg_*\nschemas were optimized and hundreds of thousands of records were deleted. Could\nthat be the reason for the performance loss, and if so - how can I fix it?\n\nI have pg_autovacuum up and running all the time\n\npg_autovacuum -d 3 -D -L /dev/null\n\nbut it seems pg_autovacuum does not do vacuuming on system tables.\n\n-- \nEugene Dzhurinsky\n", "msg_date": "Wed, 5 Jul 2006 16:07:03 +0300", "msg_from": "Eugeny N Dzhurinsky <[email protected]>", "msg_from_op": true, "msg_subject": "managing database with thousands of tables" }, { "msg_contents": "Eugeny N Dzhurinsky <[email protected]> writes:\n> but it seems pg_autovacuum does not do vacuuming on system tables.\n\nThere was a bug awhile back whereby autovac failed to notice temp table\ncleanup at connection end --- maybe you need to update?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2006 09:39:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: managing database with thousands of tables " }, { "msg_contents": "On Wed, Jul 05, 2006 at 09:39:31AM -0400, Tom Lane wrote:\n> Eugeny N Dzhurinsky <[email protected]> writes:\n> > but it seems pg_autovacuum does not do vacuuming on system tables.\n> \n> There was a bug awhile back whereby autovac failed to notice temp table\n> cleanup at connection end --- maybe you need to update?\n\nMaybe. So should I update to the newer postgres 8.1, or just upgrade\npg_autovacuum somehow (I don't know how, btw ;) )?\n\n-- \nEugene N Dzhurinsky\n", "msg_date": "Wed, 5 Jul 2006 18:57:27 +0300", "msg_from": "Eugeny N Dzhurinsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: managing database with thousands of tables" }, { "msg_contents": "Eugeny N Dzhurinsky wrote:\n> On Wed, Jul 05, 2006 at 09:39:31AM -0400, Tom Lane wrote:\n>> Eugeny N Dzhurinsky <[email protected]> writes:\n>>> but it seems pg_autovacuum does not do vacuuming on system tables.\n>> There was a bug awhile back whereby autovac failed to notice temp table\n>> cleanup at connection end --- maybe you need to update?\n> \n> Maybe. So should I update to the newer postgres 8.1, or just upgrade\n> pg_autovacuum somehow (I don't know how, btw ;) )?\n\nUpdate the whole lot. You should be able to do the upgrade \"in place\" \nbut take a backup \"just in case\".\n\nhttp://www.postgresql.org/docs/8.1/interactive/release.html\n\nwill list all changes between versions.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 06 Jul 2006 10:33:20 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: managing database with thousands of tables" } ]
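Given that the workload here drops and recreates tables every few days, it is also worth checking whether the system catalogs themselves have bloated; a rough sketch of how to look, and how to clean up by hand if autovacuum has been missing them (the vacuums need appropriate privileges, and relpages is only refreshed by VACUUM/ANALYZE, so stale numbers are possible):

SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend', 'pg_type', 'pg_index')
 ORDER BY relpages DESC;

VACUUM ANALYZE pg_class;
VACUUM ANALYZE pg_attribute;
VACUUM ANALYZE pg_depend;

If those catalogs run to many thousands of pages for a database of this size, catalog bloat from the constant CREATE/DROP cycle is a plausible part of the slowdown, independent of the autovacuum bug Tom mentions.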
[ { "msg_contents": "We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3. \nOur PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 Ghz \n64-bit processors. There are two internal drives and an external \nenclosure containing 14 drives (configured as 7 pairs of mirrored drives \n- four pairs for table spaces, one pair for dbcluster, two pairs for \npoint in time recovery). The operating system is FreeBSD 6.0-RELEASE #10\n\nThe output from ulimit -a is:\n\nulimit -a\ncore file size (blocks, -c) unlimited\ndata seg size (kbytes, -d) 33554432\nfile size (blocks, -f) unlimited\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 11095\npipe size (512 bytes, -p) 1\nstack size (kbytes, -s) 524288\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 5547\nvirtual memory (kbytes, -v) unlimited\n\nShared memory kernel parameters are set to:\n\nshmmax 1073741000\nshmmin 1\nshmall 262144\nshmseg 128\nshmmni 192\nsemmni 256\nsemmns 512\nsemmsl 256\nsemmap 256\nsemvmx 32767\nshm_use_phys 1\n\nThe postgresql.conf file contains:\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n\n#external_pid_file = '(none)' # write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*' # what IP address(es) to listen on;\n # comma-separated list of \naddresses;\n # defaults to 'localhost', '*' \n= all\nport = 5432\nmax_connections = 102\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\nsuperuser_reserved_connections = 2\nunix_socket_directory = ''\nunix_socket_group = ''\nunix_socket_permissions = 0777 # octal\nbonjour_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\nauthentication_timeout = 60 # 1-600, in seconds\nssl = off\npassword_encryption = on\ndb_user_namespace = off\n\n# Kerberos\n\nkrb_server_keyfile = ''\nkrb_srvname = 'postgres'\nkrb_server_hostname = '' # empty string matches any \nkeytab entry\nkrb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\ntcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\ntcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\ntcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 125000 # min 16 or max_connections*2, \n8KB each\ntemp_buffers = 1000 # min 100, 8KB each\nmax_prepared_transactions = 0 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared \nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10000 # min 64, size in KB\nmaintenance_work_mem = 50000 # min 1024, size in KB\nmax_stack_depth = 500000 # in 100, size in KB\n # ulimit -a or ulimit -s\n\n# - Free Space Map -\n\nmax_fsm_pages = 600000 # min max_fsm_relations*16, 6 \nbytes each\nmax_fsm_relations = 1000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 1000 # min 25\npreload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 0 # 0-1000 milliseconds\nvacuum_cost_page_hit = 1 # 0-10000 credits\nvacuum_cost_page_miss = 10 # 0-10000 credits\nvacuum_cost_page_dirty = 20 # 0-10000 credits\nvacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 200 # 10-10000 milliseconds between \nrounds\nbgwriter_lru_percent = 1.0 # 0-100% of LRU buffers \nscanned/round\nbgwriter_lru_maxpages = 1000 # 0-1000 buffers max written/round\nbgwriter_all_percent = 0.333 # 0-100% of all buffers \nscanned/round\nbgwriter_all_maxpages = 1000 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on # turns forced synchronization \non or off\nwal_sync_method = fsync # the default is the first option\n # supported by the operating \nsystem:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\nfull_page_writes = on # recover from partial page writes\nwal_buffers = 64 # min 4, 8KB each\ncommit_delay = 0 # range 0-100000, in microseconds\ncommit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 120 # in logfile segments, min 1, \n16MB each\ncheckpoint_timeout = 900 # range 30-3600, in seconds\ncheckpoint_warning = 900 # in seconds, 0 is off\n\n# - Archiving -\n\narchive_command = 'archive_wal -email -txtmsg \"%p\" \"%f\"' # \ncommand to use\n # to archive a logfile segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - 
Planner Method Configuration -\n\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 27462 # typically 8KB each\n # On TRU64 do /sbin/sysconfig \n-q advfs\n # to get the size o \nAdvfsCacheMaxPercent\n # (default is 7% of RAM). On \nFreeBSD set\n # to sysctl -n vfs.hibufspace / \n8192\nrandom_page_cost = 2 # units are one sequential page \nfetch\n # cost\ncpu_tuple_cost = 0.01 # (same)\ncpu_index_tuple_cost = 0.001 # (same)\ncpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\ngeqo = on\ngeqo_threshold = 12\ngeqo_effort = 5 # range 1-10\ngeqo_pool_size = 0 # selects default based on effort\ngeqo_generations = 0 # selects default based on effort\ngeqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100 # range 1-1000\nconstraint_exclusion = off\nfrom_collapse_limit = 8\njoin_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\nlog_destination = 'stderr' # Valid values are combinations of\n # stderr, syslog and eventlog,\n # depending on platform.\n\n# This is used when logging to stderr:\nredirect_stderr = on # Enable capturing of stderr \ninto log\n # files\n\n# These are only used if redirect_stderr is on:\nlog_directory = 'pg_log' # Directory where log files are \nwritten\n # Can be absolute or relative \nto PGDATA\nlog_filename = 'postgresql_log.%a' # Log file name pattern.\n # Can include strftime() escapes\nlog_truncate_on_rotation = on # If on, any existing log file \nof the\n # same name as the new log file \nwill be\n # truncated rather than \nappended to. But\n # such truncation only occurs on\n # time-driven rotation, not on \nrestarts\n # or size-driven rotation. \nDefault is\n # off, meaning append to \nexisting files\n # in all cases.\nlog_rotation_age = 1440 # Automatic rotation of logfiles \nwill\n # happen after so many minutes. \n 0 to\n # disable.\nlog_rotation_size = 10240 # Automatic rotation of logfiles \nwill\n # happen after so many \nkilobytes of log\n # output. 
0 to disable.\n\n# These are relevant when logging to syslog:\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n\n# - When to Log -\n\nclient_min_messages = log # Values, in order of decreasing \ndetail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\nlog_min_messages = notice # Values, in order of decreasing \ndetail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\nlog_error_verbosity = default # terse, default, or verbose \nmessages\n\nlog_error_verbosity = default # terse, default, or verbose \nmessages\n\n#log_min_error_statement = notice # Values in order of increasing \nseverity:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # panic(off)\n\nlog_min_duration_statement = 1000 # -1 is disabled, 0 logs all \nstatements\n # and their durations, in \nmilliseconds.\n\nsilent_mode = off # DO NOT USE without syslog or\n # redirect_stderr\n\n# - What to Log -\n\ndebug_print_parse = off\ndebug_print_rewritten = off\ndebug_print_plan = off\ndebug_pretty_print = off\nlog_connections = off\nlog_disconnections = off\nlog_duration = off\nlog_line_prefix = '%t' # Special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = PID\n # %t = timestamp (no \nmilliseconds)\n # %m = timestamp with \nmilliseconds\n # %i = command tag\n # %c = session id\n # %l = session line number\n # %s = session start timestamp\n # %x = transaction id\n # %q = stop here in non-session\n # processes\n # %% = '%'\n # e.g. '<%u%%%d> '\nlog_statement = 'none' # none, mod, ddl, all\nlog_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\nlog_parser_stats = off\nlog_planner_stats = off\nlog_executor_stats = off\nlog_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = on\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\nstats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = off # enable autovacuum subprocess?\nautovacuum_naptime = 60 # time between autovacuum runs, \nin secs\nautovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n # vacuum\nautovacuum_analyze_threshold = 500 # min # of tuple updates before\n # analyze\nautovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n # vacuum\nautovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n # analyze\nautovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n # vacuum_cost_delay\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovac, -1 means use\n # vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\nsearch_path = '$user' # schema names\ndefault_tablespace = '' # a tablespace name, '' uses\n # the default\ncheck_function_bodies = on\ndefault_transaction_isolation = 'read committed'\ndefault_transaction_read_only = off\nstatement_timeout = 
7200000 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\ntimezone = unknown # actually, defaults to TZ\n # environment setting\naustralian_timezones = off\nextra_float_digits = 0 # min -15, max 2\nclient_encoding = sql_ascii # actually, defaults to database\n # encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'C' # locale for system error message\n # strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\nexplain_pretty_print = on\ndynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\ndeadlock_timeout = 7200000 # in milliseconds\nmax_locks_per_transaction = 500 # min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\nadd_missing_from = off\nregex_flavor = advanced # advanced, extended, or basic\nsql_inheritance = on\ndefault_with_oids = off\nescape_string_warning = off\n\n# - Other Platforms & Clients -\n\ntransform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable class \nnames\n\nNOTE - raising shared_buffers had no effect\nNOTE - changing stats_command_string had no effect\n\nThe pg_hba.conf file contains:\n\n# TYPE DATABASE USER CIDR-ADDRESS METHOD\n\n# \"local\" is for Unix domain socket connections only\nlocal all postgre trust\n# IPv4 local connections:\nhost all postgre 127.0.0.1/32 trust\nhostnossl operations opps,opps_adm 10.1.10.0/8 trust\nhostnossl development,work all 10.1.10.0/8 trust\n# IPv6 local connections:\nhost all postgre ::1/128 trust\n\nThe pg_ident.conf file, other than comments, is empty.\n\nWe're running an OLTP database with a small number of connections (<50) \nperforming mostly reads and inserts on modest sized tables (largest is < \n2,000,000 records).\n\nThe symptoms are:\n\na) All 4 CPUs are nearly always 0% idle;\nb) The system load level is nearly always in excess of 20;\nc) the output from vmstat -w 10 looks like:\n procs memory page disks faults cpu\n r b w avm fre flt re pi po fr sr aa0 aa1 in sy cs us \nsy id\n21 0 3 1242976 327936 2766 0 0 0 2264 0 2 2 17397 140332 \n104846 18 82 1\n21 0 3 1242932 313312 2761 0 0 0 1223 0 1 1 15989 128666 \n107019 13 86 1\n19 3 0 1245908 275356 3762 0 0 0 1962 0 3 3 16397 131584 \n105792 14 85 1\n21 0 2 1243968 262616 2006 0 0 0 2036 0 1 1 15260 122801 \n107406 14 85 1\n 4 19 0 1240996 247004 1589 0 0 0 984 0 1 0 15403 121323 \n108331 12 87 2\n17 1 2 1230744 252888 2440 0 0 0 1807 0 1 0 17977 142618 \n105600 15 84 2\nNOTE - small user demands and high system demands\nd) Running top indicates a significant number or sblock states and \noccasional smwai states;\ne) ps auxww | grep postgres doesn't show anything abnormal;\nf) ESQL applications are very slow.\n\nWe VACUUM ANALYZE user databases 
every four hours. We VACUUM template1 \nevery 4 hours. We make a copy of the current WAL every minute. We create \na PIT recovery archive daily daily. None of these, individually seem to \nplace much strain on the server.\n\nHopefully I've supplied enough information to start diagnosing the \nproblem. Any ideas, thoughts, suggestions are greatly appreciated ...\n\n\nAndy\n\n-- \n--------------------------------------------------------------------------------\nAndrew Rost\nNational Operational Hydrologic Remote Sensing Center (NOHRSC)\nNational Weather Service, NOAA\n1735 Lake Dr. West, Chanhassen, MN 55317-8582\nVoice: (952)361-6610 x 234\nFax: (952)361-6634\[email protected]\nhttp://www.nohrsc.noaa.gov\n--------------------------------------------------------------------------------\n\n\n", "msg_date": "Wed, 05 Jul 2006 09:43:00 -0500", "msg_from": "andy rost <[email protected]>", "msg_from_op": true, "msg_subject": "Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "* andy rost ([email protected]) wrote:\n> We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3. \n> Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 Ghz \n> 64-bit processors. There are two internal drives and an external \n> enclosure containing 14 drives (configured as 7 pairs of mirrored drives \n> - four pairs for table spaces, one pair for dbcluster, two pairs for \n> point in time recovery). The operating system is FreeBSD 6.0-RELEASE #10\n\nNot sure it matters, but is the mirroring done with a hardware\ncontroller or in software?\n\n> shared_buffers = 125000 # min 16 or max_connections*2, \n> 8KB each\n> temp_buffers = 1000 # min 100, 8KB each\n> max_prepared_transactions = 0 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared \n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 10000 # min 64, size in KB\n> maintenance_work_mem = 50000 # min 1024, size in KB\n> max_stack_depth = 500000 # in 100, size in KB\n> # ulimit -a or ulimit -s\n\nThese seem kind of.. backwards... Just an example of one system I've\ngot shows:\n\nshared_buffers = 10000\nwork_mem = 32768\nmaintenance_work_mem = 65535\n\nDefaults for the rest. This is more of a data-warehouse than an OLTP,\nso I'm sure these aren't perfect for you, but you might try playing with\nthem some.\n\n> # - Free Space Map -\n> max_fsm_pages = 600000 # min max_fsm_relations*16, 6 \n> bytes each\n\nThis seems somewhat hgih from the default of 20,000, but for a very\nfrequently changing database it may make sense.\n\n> archive_command = 'archive_wal -email -txtmsg \"%p\" \"%f\"' # \n> command to use\n\nAre WALs being archived very frequently? Any idea if this takes much\ntime? I wouldn't really think it'd be an issue, but might be useful to\nknow.\n\n> effective_cache_size = 27462 # typically 8KB each\n\nThis seems like it might be a little low... How much memory do you have\nin the system? Then again, with your shared_mem set so high, perhaps\nit's not that bad, but it might make sense to swap those two settings,\nor at least that'd be a more common PG setup.\n\n> random_page_cost = 2 # units are one sequential page \n\nThat's quite a bit lower than the default of 4... 
May make sense for\nyou but it's certainly something to look at.\n\n> We're running an OLTP database with a small number of connections (<50) \n> performing mostly reads and inserts on modest sized tables (largest is < \n> 2,000,000 records).\n> \n> The symptoms are:\n> \n> a) All 4 CPUs are nearly always 0% idle;\n> b) The system load level is nearly always in excess of 20;\n\nAt a guess I'd say that the system is doing lots of sequential scans\nrather than using indexes, and that's why the processes are ending up in\na disk-wait state, which makes the load go up. Have you looked at the\nplans which are being generated for the most common queries to see what\nthey're doing?\n\nI'd also wonder if the shared_mem setting isn't set *too* high and\ncausing problems with the IPC or something... Not something I've heard\nof (generally, going up with shared_mem doesn't degrade performance,\njust doesn't improve it) but might be possible.\n\n> We VACUUM ANALYZE user databases every four hours. We VACUUM template1 \n> every 4 hours. We make a copy of the current WAL every minute. We create \n> a PIT recovery archive daily daily. None of these, individually seem to \n> place much strain on the server.\n\nThis doesn't sound too bad at all. How long do the vacuum's run for?\nIf it's 3 hours, then that might start to be an issue with disk I/O\ncontention...\n\n> Hopefully I've supplied enough information to start diagnosing the \n> problem. Any ideas, thoughts, suggestions are greatly appreciated ...\n\nJust my 2c, hopefully you'll get some better answers too. :)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 5 Jul 2006 11:46:56 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "\nOn Jul 5, 2006, at 10:43 AM, andy rost wrote:\n\n> We're in the process of porting from Informix 9.4 to PostgreSQL \n> 8.1.3. Our PostgreSQL server is an AMD Opteron Dual Core 275 with \n> two 2.2 Ghz 64-bit processors. There are two internal drives and an \n> external enclosure containing 14 drives (configured as 7 pairs of \n> mirrored drives - four pairs for table spaces, one pair for \n> dbcluster, two pairs for point in time recovery). The operating \n> system is FreeBSD 6.0-RELEASE #10\n\nWhat RAID card are you hooked up to? My best machines have LSI \nMegaRAID 320-2X cards in them, with the pairs of each mirrored set on \nopposing channels. My second best machine (into which the above card \nwill not fit) uses an Adaptec 2230SLP card similarly configured.\n\nThe database is mirrored from one box to the other, and a vacuum \nanalyze takes ~10 hours.\n\nYou seem to have done the right things in your postgresql.conf file.\n\nThings I'd like to know before offering advice: how big is your \ndatabase? Ie, what is the \"df\" output for your various partitions \nholding Pg data? also, how much RAM is in this box? And finally, \ncan you show the output of \"iostsat -w3\" for a few rows during times \nwhen you're having poor performance.\n\n\n", "msg_date": "Wed, 5 Jul 2006 13:55:28 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "\nHi Stephen,\n\nThanks for your input. My follow ups are interleaved below ...\n\nStephen Frost wrote:\n> * andy rost ([email protected]) wrote:\n> \n>>We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3. 
\n>>Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 Ghz \n>>64-bit processors. There are two internal drives and an external \n>>enclosure containing 14 drives (configured as 7 pairs of mirrored drives \n>>- four pairs for table spaces, one pair for dbcluster, two pairs for \n>>point in time recovery). The operating system is FreeBSD 6.0-RELEASE #10\n> \n> \n> Not sure it matters, but is the mirroring done with a hardware\n> controller or in software?\n>\n\nI'll have to check on this when our system administrator returns \ntomorrow. I performed a quick test while the server was under load by \nmoving a couple of Gigs of data while running iostat.I was getting disk \nI/O rates of about 125 KB per transaction, 250 transactions per second, \nand 35 Mg per second on all drives.\n\n> \n>>shared_buffers = 125000 # min 16 or max_connections*2, \n>>8KB each\n>>temp_buffers = 1000 # min 100, 8KB each\n>>max_prepared_transactions = 0 # can be 0 or more\n>># note: increasing max_prepared_transactions costs ~600 bytes of shared \n>>memory\n>># per transaction slot, plus lock space (see max_locks_per_transaction).\n>>work_mem = 10000 # min 64, size in KB\n>>maintenance_work_mem = 50000 # min 1024, size in KB\n>>max_stack_depth = 500000 # in 100, size in KB\n>> # ulimit -a or ulimit -s\n> \n> \n> These seem kind of.. backwards... Just an example of one system I've\n> got shows:\n> \n> shared_buffers = 10000\n> work_mem = 32768\n> maintenance_work_mem = 65535\n> \n> Defaults for the rest. This is more of a data-warehouse than an OLTP,\n> so I'm sure these aren't perfect for you, but you might try playing with\n> them some.\n\nOriginally shared_buffers was set to 32768. I set it to its current \nvalue out of desperations (newby response).\n\n> \n> \n>># - Free Space Map -\n>>max_fsm_pages = 600000 # min max_fsm_relations*16, 6 \n>>bytes each\n> \n> \n> This seems somewhat hgih from the default of 20,000, but for a very\n> frequently changing database it may make sense.\n> \n\nThis value is based on the output from VACUUM ANALYZE\n\n> \n>>archive_command = 'archive_wal -email -txtmsg \"%p\" \"%f\"' # \n>>command to use\n> \n> \n> Are WALs being archived very frequently? Any idea if this takes much\n> time? I wouldn't really think it'd be an issue, but might be useful to\n> know.\n> \n\nYes, about 100 times per hour. No, I don't think it takes much time\n\n> \n>>effective_cache_size = 27462 # typically 8KB each\n> \n> \n> This seems like it might be a little low... How much memory do you have\n> in the system? Then again, with your shared_mem set so high, perhaps\n> it's not that bad, but it might make sense to swap those two settings,\n> or at least that'd be a more common PG setup.\n\nOops, forgot to mention that we have 6 Gigs of memory. This value was \nset based on sysctl -n vfs.hibufspace / 8192\n\n> \n> \n>>random_page_cost = 2 # units are one sequential page \n> \n> \n> That's quite a bit lower than the default of 4... 
May make sense for\n> you but it's certainly something to look at.\n> \n\nThis value set per web page entitiled \"Annotated POSTGRESQL.CONF Guide \nfor PostgreSQL\"\n\n> \n>>We're running an OLTP database with a small number of connections (<50) \n>>performing mostly reads and inserts on modest sized tables (largest is < \n>>2,000,000 records).\n>>\n>>The symptoms are:\n>>\n>>a) All 4 CPUs are nearly always 0% idle;\n>>b) The system load level is nearly always in excess of 20;\n> \n> \n> At a guess I'd say that the system is doing lots of sequential scans\n> rather than using indexes, and that's why the processes are ending up in\n> a disk-wait state, which makes the load go up. Have you looked at the\n> plans which are being generated for the most common queries to see what\n> they're doing?\n\nWe thought of that too. However, executing:\nselect * from pg_stat_user_tables\nsuggests that we are using indexes where needed. We confirmed this by \nchecking and running manually queries reported by\nselect * from pg_stat_activity\nwhile the server is suffering\n\n> \n> I'd also wonder if the shared_mem setting isn't set *too* high and\n> causing problems with the IPC or something... Not something I've heard\n> of (generally, going up with shared_mem doesn't degrade performance,\n> just doesn't improve it) but might be possible.\n> \n\nPossible I suppose but we had the same trouble while the server was \nconfigured with 32768 buffers\n\n> \n>>We VACUUM ANALYZE user databases every four hours. We VACUUM template1 \n>>every 4 hours. We make a copy of the current WAL every minute. We create \n>>a PIT recovery archive daily daily. None of these, individually seem to \n>>place much strain on the server.\n> \n> \n> This doesn't sound too bad at all. How long do the vacuum's run for?\n> If it's 3 hours, then that might start to be an issue with disk I/O\n> contention...\n> \n\nVACUUM ANALYZE lasts about an hour and fifteen minutes\n\n> \n>>Hopefully I've supplied enough information to start diagnosing the \n>>problem. Any ideas, thoughts, suggestions are greatly appreciated ...\n> \n> \n> Just my 2c, hopefully you'll get some better answers too. :)\n>\n\nAgain, many thanks. Is this the proper mail list for this problem or \nshould I also be addressing the administation mail list as well?\n\n> \tThanks,\n> \n> \t\tStephen\n\n-- \n--------------------------------------------------------------------------------\nAndrew Rost\nNational Operational Hydrologic Remote Sensing Center (NOHRSC)\nNational Weather Service, NOAA\n1735 Lake Dr. West, Chanhassen, MN 55317-8582\nVoice: (952)361-6610 x 234\nFax: (952)361-6634\[email protected]\nhttp://www.nohrsc.noaa.gov\n--------------------------------------------------------------------------------\n\n\n", "msg_date": "Wed, 05 Jul 2006 13:11:01 -0500", "msg_from": "andy rost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "andy rost wrote:\n> \n\n>>> effective_cache_size = 27462 # typically 8KB each\n>>\n>>\n>> This seems like it might be a little low... How much memory do you have\n>> in the system? Then again, with your shared_mem set so high, perhaps\n>> it's not that bad, but it might make sense to swap those two settings,\n>> or at least that'd be a more common PG setup.\n> \n> Oops, forgot to mention that we have 6 Gigs of memory. 
This value was \n> set based on sysctl -n vfs.hibufspace / 8192\n> \n\nThat vfs.hibufspace sysctl is a little deceptive IMHO - e.g on my \nFreeBSD 6.1 system with 2G of ram it says 117276672 (i.e. about 112M), \nbut I have a 1G file cached entirely in ram at the moment... In FreeBSD \nfile pages are actually kept in the 'Inactive' section of memory, the \n'Buffer' section is used as a 'window' to read 'em. For instance on my \nsystem I see:\n\nMem: 4192K Active, 1303M Inact, 205M Wired, 12K Cache, 112M Buf, 491M Free\n\nSo my 1G file is cached in the 1303M of 'Inactive', but I have 112M of \nbuffer window for accessing this (and other) cached files. Now, I may \nnot have explained this that well, and it is quite confusing... but \nhopefully you get the idea!\n\nNow on the basis of the figures provided:\n- max_connections=102 , each with work_mem=10000 (approx 1G in total)\n- shared buffers=125000 (1G total)\n\nit looks like you are only using about 2G of your 6G, so there is a lot \nleft for caching file pages (lets say 3-4G or so).\n\nI would think you can happily set effective_cache_size=393216 (i.e. \n3G/8192). This will have the side effect of encouraging more index scans \n(probably what you want I think).\n\nBest wishes\n\nMark\n", "msg_date": "Thu, 06 Jul 2006 11:49:40 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "On 7/5/06, andy rost <[email protected]> wrote:\n\n> fsync = on # turns forced synchronization\n\nhave you tried turning this off and measuring performance?\n\n> stats_command_string = on\n\nI would turn this off unless you absoltely require it. It is\nexpensive for what it does.\n\n> a) All 4 CPUs are nearly always 0% idle;\n> b) The system load level is nearly always in excess of 20;\n\nI am guessing your system is spending all it's time syncing. If so,\nit's solvable (again, just run fsync=off for a bit and compare).\n\n> c) the output from vmstat -w 10 looks like:\n> procs memory page disks faults cpu\n> r b w avm fre flt re pi po fr sr aa0 aa1 in sy cs us\n> sy id\n> 21 0 3 1242976 327936 2766 0 0 0 2264 0 2 2 17397 140332\n> 104846 18 82 1\n\nis that 100k context switches over 10 seconds or one second? that\nmight be something to check out. pg 8.1 is regarded as the solution\nto any cs problem, though.\n\n> NOTE - small user demands and high system demands\n> d) Running top indicates a significant number or sblock states and\n> occasional smwai states;\n> e) ps auxww | grep postgres doesn't show anything abnormal;\n> f) ESQL applications are very slow.\n>\n> We VACUUM ANALYZE user databases every four hours. We VACUUM template1\n> every 4 hours. We make a copy of the current WAL every minute. We create\n> a PIT recovery archive daily daily. None of these, individually seem to\n> place much strain on the server.\n\nyour server should be able to handle this easily.\n\n> Hopefully I've supplied enough information to start diagnosing the\n> problem. 
Any ideas, thoughts, suggestions are greatly appreciated ...\n>\n\ncan you please approximate roughly how many transactions per second\nyour server is handling while you are getting the 20 load condition\n(and, if possible, broken down into read and write transactions)?\n\nmerlin\n", "msg_date": "Thu, 6 Jul 2006 10:44:26 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "Mark,\n\nThanks for the insight. I increased the value of effective_cache_size to \n3 Gigs and will monitor the performance over the weekend. Prior to this \nchange we discovered that we are filling up WALs to the tune of 2400 per \nday. Moving the pg_xlog subdirectory to its own drive seemed to boost \nthe performance significantly. We're taking this one step at a time. On \nMonday we plan to drop the number of shared memory buffers down to 50000 \nfrom its current value of 125000 (per the large number of \nrecommendations that this value should be held fairly low and \nsuggestions that vales in excess of 50000 may hamper performance).\n\nThanks again ...\n\nAndy\n\nMark Kirkwood wrote:\n> andy rost wrote:\n> \n>>\n> \n>>>> effective_cache_size = 27462 # typically 8KB each\n>>>\n>>>\n>>>\n>>> This seems like it might be a little low... How much memory do you have\n>>> in the system? Then again, with your shared_mem set so high, perhaps\n>>> it's not that bad, but it might make sense to swap those two settings,\n>>> or at least that'd be a more common PG setup.\n>>\n>>\n>> Oops, forgot to mention that we have 6 Gigs of memory. This value was \n>> set based on sysctl -n vfs.hibufspace / 8192\n>>\n> \n> That vfs.hibufspace sysctl is a little deceptive IMHO - e.g on my \n> FreeBSD 6.1 system with 2G of ram it says 117276672 (i.e. about 112M), \n> but I have a 1G file cached entirely in ram at the moment... In FreeBSD \n> file pages are actually kept in the 'Inactive' section of memory, the \n> 'Buffer' section is used as a 'window' to read 'em. For instance on my \n> system I see:\n> \n> Mem: 4192K Active, 1303M Inact, 205M Wired, 12K Cache, 112M Buf, 491M Free\n> \n> So my 1G file is cached in the 1303M of 'Inactive', but I have 112M of \n> buffer window for accessing this (and other) cached files. Now, I may \n> not have explained this that well, and it is quite confusing... but \n> hopefully you get the idea!\n> \n> Now on the basis of the figures provided:\n> - max_connections=102 , each with work_mem=10000 (approx 1G in total)\n> - shared buffers=125000 (1G total)\n> \n> it looks like you are only using about 2G of your 6G, so there is a lot \n> left for caching file pages (lets say 3-4G or so).\n> \n> I would think you can happily set effective_cache_size=393216 (i.e. \n> 3G/8192). This will have the side effect of encouraging more index scans \n> (probably what you want I think).\n> \n> Best wishes\n> \n> Mark\n\n-- \n--------------------------------------------------------------------------------\nAndrew Rost\nNational Operational Hydrologic Remote Sensing Center (NOHRSC)\nNational Weather Service, NOAA\n1735 Lake Dr. 
West, Chanhassen, MN 55317-8582\nVoice: (952)361-6610 x 234\nFax: (952)361-6634\[email protected]\nhttp://www.nohrsc.noaa.gov\n--------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 07 Jul 2006 15:38:26 -0500", "msg_from": "andy rost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "Hi Merlin,\n\nThanks for the input. Please see below ...\n\nMerlin Moncure wrote:\n> On 7/5/06, andy rost <[email protected]> wrote:\n> \n>> fsync = on # turns forced synchronization\n> \n> \n> have you tried turning this off and measuring performance?\n\nNo, not yet. We're trying a couple of outher avenues before manipulating \nthis parameter.\n\n> \n>> stats_command_string = on\n> \n> \n> I would turn this off unless you absoltely require it. It is\n> expensive for what it does.\n\nWe've turned this off\n\n> \n>> a) All 4 CPUs are nearly always 0% idle;\n>> b) The system load level is nearly always in excess of 20;\n> \n> \n> I am guessing your system is spending all it's time syncing. If so,\n> it's solvable (again, just run fsync=off for a bit and compare).\n> \n\nWe've reduced the load significantly primarily by moving pg_xlog to its \nown drive and by increasing the effective cache size. While we still see \n high load levels, they don't last very long. We're trying improve \nperformance from several angles but are taking it one step at a time. \nEventually we'll experiment with fsynch\n\n>> c) the output from vmstat -w 10 looks like:\n>> procs memory page disks faults \n>> cpu\n>> r b w avm fre flt re pi po fr sr aa0 aa1 in sy cs us\n>> sy id\n>> 21 0 3 1242976 327936 2766 0 0 0 2264 0 2 2 17397 140332\n>> 104846 18 82 1\n> \n> \n> is that 100k context switches over 10 seconds or one second? that\n> might be something to check out. pg 8.1 is regarded as the solution\n> to any cs problem, though.\n\nAccording to man top, that's 100K per second. I'm interested in your \nrecommendation but am not sure what \"pg 8.1\" references\n\n> \n>> NOTE - small user demands and high system demands\n>> d) Running top indicates a significant number or sblock states and\n>> occasional smwai states;\n>> e) ps auxww | grep postgres doesn't show anything abnormal;\n>> f) ESQL applications are very slow.\n>>\n>> We VACUUM ANALYZE user databases every four hours. We VACUUM template1\n>> every 4 hours. We make a copy of the current WAL every minute. We create\n>> a PIT recovery archive daily daily. None of these, individually seem to\n>> place much strain on the server.\n> \n> \n> your server should be able to handle this easily.\n> \n>> Hopefully I've supplied enough information to start diagnosing the\n>> problem. Any ideas, thoughts, suggestions are greatly appreciated ...\n>>\n> \n> can you please approximate roughly how many transactions per second\n> your server is handling while you are getting the 20 load condition\n> (and, if possible, broken down into read and write transactions)?\n\nDo you have any suggestions on how I might obtain these metrics?\n\n> \n> merlin\n\nThanks again Merlin ...\n\nAndy\n\n-- \n--------------------------------------------------------------------------------\nAndrew Rost\nNational Operational Hydrologic Remote Sensing Center (NOHRSC)\nNational Weather Service, NOAA\n1735 Lake Dr. 
West, Chanhassen, MN 55317-8582\nVoice: (952)361-6610 x 234\nFax: (952)361-6634\[email protected]\nhttp://www.nohrsc.noaa.gov\n--------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 07 Jul 2006 16:11:25 -0500", "msg_from": "andy rost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "On Fri, 7 Jul 2006, andy rost wrote:\n\n>> is that 100k context switches over 10 seconds or one second? that\n>> might be something to check out. pg 8.1 is regarded as the solution\n>> to any cs problem, though.\n>\n> According to man top, that's 100K per second. I'm interested in your \n> recommendation but am not sure what \"pg 8.1\" references\n\npg 8.1 means PostgreSQL 8.1.x (preferably 8.1.4) which is said to resolve many \ncontext switch issues.\n\n>> \n>> can you please approximate roughly how many transactions per second\n>> your server is handling while you are getting the 20 load condition\n>> (and, if possible, broken down into read and write transactions)?\n>\n> Do you have any suggestions on how I might obtain these metrics?\n>\n\nEvery five minutes do:\n\nselect sum(xact_commit) + sum(xact_rollback) as transactions from pg_stat_database;\n\nand then divide the difference by 300 and that's your transactions per second:\n\nselect sum(xact_commit) + sum(xact_rollback) as transactions from \npg_stat_database;\n\n transactions\n--------------\n 231894522\n(1 row)\n\n\n<wait 300 seconds>\n\nselect sum(xact_commit) + sum(xact_rollback) as transactions from \npg_stat_database;\n\n transactions\n--------------\n 231907346\n(1 row)\n\n(231907346-231894522)/300 = 42.74666666666666666666 TPS\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 7 Jul 2006 15:05:31 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" }, { "msg_contents": "On 7/7/06, andy rost <[email protected]> wrote:\n> Hi Merlin,\n> Thanks for the input. Please see below ...\n\nno problem!\n\n[aside: jeff, great advice on tps determination]\n\n> >> fsync = on # turns forced synchronization\n> > have you tried turning this off and measuring performance?\n>\n> No, not yet. We're trying a couple of outher avenues before manipulating\n> this parameter.\n\nok. just keep in mind that keeping fsync on keeps an aboslute upper\nlimit on your tps that is largely driven by hardware. with a raid\ncontroller write caching controller caching writes the penalty is\nextremely low but without write caching you might get ~150 tps on a\nrelatively high end raid system. moreover, your disks are totally\nutilized providing those 150 tps. (transactions with writing, that\nis)\n\nsymptoms of sync bottleneck are little/no cpu utilization, sustained\nhigh iowait, and extremely poor performance and an unresponsive\nserver. the best solution to sync issues is to have a\nhardware/software strategy designed to deal with it.\n\nif you are having periodic performance issues, you might be having\ncheckpoint storms. Thse are controllable by tuning the wal and\nespecially the bgwriter. These are easy to spot: you can do manual\ncheckpoints in psql and closely monitor the load.\n\n> We've reduced the load significantly primarily by moving pg_xlog to its\n> own drive and by increasing the effective cache size. 
While we still see\n> high load levels, they don't last very long. We're trying improve\n> performance from several angles but are taking it one step at a time.\n> Eventually we'll experiment with fsynch\n\ngood!\n\nmerlin\n", "msg_date": "Sun, 9 Jul 2006 10:47:52 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opteron/FreeBSD/PostgreSQL performance poor" } ]
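Related to andy's check of pg_stat_user_tables: a slightly more targeted form of that query makes it easier to spot tables where sequential scans dominate, which is the usual first suspect when every CPU is pegged at 0% idle. It relies on the statistics settings already enabled in the posted postgresql.conf (stats_start_collector and stats_row_level):

SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC
 LIMIT 20;

Tables near the top with a large seq_tup_read and little or no idx_scan activity are the candidates for new indexes or query rewrites; resetting the counters before a busy period (pg_stat_reset(), or stats_reset_on_server_start) makes the numbers easier to interpret.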
[ { "msg_contents": "I am currently running FreeBSD 4.11 (due to IT requirements for now) and\nAdaptec's 2200S RAID controller running in RAID5. I was advised in the past\nthat the 2200S is actually a poor performing controller and obviously the\nRAID5 is less than ideal for databases. I chose to run the controller in\nRAID5 as the tech I talked to suggested that the 2200S was primarily designed\nfor RAID5 and it would operate the best that way. My server is a dual Xeon\n 3.06Ghz box running on a motherboard approximately 2-3 years old now. I'd\nlike to know what an ideal RAID controller that would be compatible with\nFreeBSD 6.1 would be these days.\n\nThanks in advance,\nKenji\n", "msg_date": "Wed, 5 Jul 2006 16:46:17 -0700", "msg_from": "Kenji Morishige <[email protected]>", "msg_from_op": true, "msg_subject": "suggested RAID controller for FreeBSD 6.1 + PostgreSQL 8.1" }, { "msg_contents": "Kenji Morishige wrote:\n> I am currently running FreeBSD 4.11 (due to IT requirements for now) and\n> Adaptec's 2200S RAID controller running in RAID5. I was advised in the past\n> that the 2200S is actually a poor performing controller and obviously the\n> RAID5 is less than ideal for databases. I chose to run the controller in\n> RAID5 as the tech I talked to suggested that the 2200S was primarily designed\n> for RAID5 and it would operate the best that way. My server is a dual Xeon\n> 3.06Ghz box running on a motherboard approximately 2-3 years old now. I'd\n> like to know what an ideal RAID controller that would be compatible with\n> FreeBSD 6.1 would be these days.\n> \n>\n\nI've had good experience with 3Ware on FreeBSD - painless installs and \nexcellent performance! (in my case the older 7506 models for (P)ATA on \n4.10, 5.4, 6.0 and 6.1).\n\nIt looks like the new 9550 series is supported in 6.1 onwards. Given \nthat this has been added recently I would suggest mailing \nfreebsd-hardware to see what experiences folks are having (I don't \nrecall seeing any postings about issues with the 9000 or 9550 series).\n\nCheers\n\nMark\n", "msg_date": "Thu, 06 Jul 2006 12:52:09 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 + PostgreSQL" }, { "msg_contents": "Thanks for the suggestion Mark, though the server chassis I am trying to\nutilize already has 4 10,000 RPM SCSI drives with SCA interfaces. Ideally I\nwould like to use the existing drives and chassis and find another SCSI RAID\ncontroller. It looks like 3Ware only makes ATA controllers.\n-Kenji\n\nOn Thu, Jul 06, 2006 at 12:52:09PM +1200, Mark Kirkwood wrote:\n> Kenji Morishige wrote:\n> >I am currently running FreeBSD 4.11 (due to IT requirements for now) and\n> >Adaptec's 2200S RAID controller running in RAID5. I was advised in the \n> >past\n> >that the 2200S is actually a poor performing controller and obviously the\n> >RAID5 is less than ideal for databases. I chose to run the controller in\n> >RAID5 as the tech I talked to suggested that the 2200S was primarily \n> >designed\n> >for RAID5 and it would operate the best that way. My server is a dual Xeon\n> > 3.06Ghz box running on a motherboard approximately 2-3 years old now. \n> > I'd\n> >like to know what an ideal RAID controller that would be compatible with\n> >FreeBSD 6.1 would be these days.\n> >\n> >\n> \n> I've had good experience with 3Ware on FreeBSD - painless installs and \n> excellent performance! 
(in my case the older 7506 models for (P)ATA on \n> 4.10, 5.4, 6.0 and 6.1).\n> \n> It looks like the new 9550 series is supported in 6.1 onwards. Given \n> that this has been added recently I would suggest mailing \n> freebsd-hardware to see what experiences folks are having (I don't \n> recall seeing any postings about issues with the 9000 or 9550 series).\n> \n> Cheers\n> \n> Mark\n", "msg_date": "Thu, 6 Jul 2006 11:10:19 -0700", "msg_from": "Kenji Morishige <[email protected]>", "msg_from_op": true, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 + PostgreSQL 8.1" }, { "msg_contents": "On Thu, 2006-07-06 at 13:10, Kenji Morishige wrote:\n> Thanks for the suggestion Mark, though the server chassis I am trying to\n> utilize already has 4 10,000 RPM SCSI drives with SCA interfaces. Ideally I\n> would like to use the existing drives and chassis and find another SCSI RAID\n> controller. It looks like 3Ware only makes ATA controllers.\n\nTake a look at LSI. They make solid and reliable SCSI RAID\ncontrollers. be sure and get the BBU cache.\n", "msg_date": "Thu, 06 Jul 2006 13:17:55 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 +" }, { "msg_contents": "Take a look at: http://www.icp-vortex.com/english/index_e.htm\n\nThey have always made good RAID controllers.\n\n\n\nCheers\nPaul\n\nOn Thu, 2006-07-06 at 11:10 -0700, Kenji Morishige wrote:\n> Thanks for the suggestion Mark, though the server chassis I am trying to\n> utilize already has 4 10,000 RPM SCSI drives with SCA interfaces. Ideally I\n> would like to use the existing drives and chassis and find another SCSI RAID\n> controller. It looks like 3Ware only makes ATA controllers.\n> -Kenji\n> \n> On Thu, Jul 06, 2006 at 12:52:09PM +1200, Mark Kirkwood wrote:\n> > Kenji Morishige wrote:\n> > >I am currently running FreeBSD 4.11 (due to IT requirements for now) and\n> > >Adaptec's 2200S RAID controller running in RAID5. I was advised in the \n> > >past\n> > >that the 2200S is actually a poor performing controller and obviously the\n> > >RAID5 is less than ideal for databases. I chose to run the controller in\n> > >RAID5 as the tech I talked to suggested that the 2200S was primarily \n> > >designed\n> > >for RAID5 and it would operate the best that way. My server is a dual Xeon\n> > > 3.06Ghz box running on a motherboard approximately 2-3 years old now. \n> > > I'd\n> > >like to know what an ideal RAID controller that would be compatible with\n> > >FreeBSD 6.1 would be these days.\n> > >\n> > >\n> > \n> > I've had good experience with 3Ware on FreeBSD - painless installs and \n> > excellent performance! (in my case the older 7506 models for (P)ATA on \n> > 4.10, 5.4, 6.0 and 6.1).\n> > \n> > It looks like the new 9550 series is supported in 6.1 onwards. 
Given \n> > that this has been added recently I would suggest mailing \n> > freebsd-hardware to see what experiences folks are having (I don't \n> > recall seeing any postings about issues with the 9000 or 9550 series).\n> > \n> > Cheers\n> > \n> > Mark\n", "msg_date": "Thu, 6 Jul 2006 14:21:15 -0400", "msg_from": "\"Paul Khavkine\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 +PostgreSQL 8.1" }, { "msg_contents": "Kenji Morishige wrote:\n> Thanks for the suggestion Mark, though the server chassis I am trying to\n> utilize already has 4 10,000 RPM SCSI drives with SCA interfaces. Ideally I\n> would like to use the existing drives and chassis and find another SCSI RAID\n> controller. It looks like 3Ware only makes ATA controllers.\n> \n\nI should have checked that the 2200S was a *SCSI* RAID controller, sorry!\n\n\nCheers\n\nMark\n", "msg_date": "Fri, 07 Jul 2006 10:51:54 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 + PostgreSQL" }, { "msg_contents": "\n\n--On July 6, 2006 2:21:15 PM -0400 Paul Khavkine \n<[email protected]> wrote:\n\n>\n>\n> Take a look at: http://www.icp-vortex.com/english/index_e.htm\n>\n> They have always made good RAID controllers.\n\n\nI'll give a huge thumbs up for the ICPs, and also make sure you get the BBU \ncache as well (battery back-up); otherwise TURN OFF YOUR WRITE CACHE ON THE \nCARD. You will hose your database severely if you don't, the one time the \nmachine accidentally loses power.\n\nI'm getting ready to put together a pretty beefy server with either an LSI \nor ICP card (we're discussing options at this point) -- I've had really \ngood experience with all of the ICP cards we've used. LSI used to be \nworthless (you couldn't do anything online at all in any form of *nix), but \nat least in Linux you can now; I don't know about FreeBSD, so look into that. \nDrives fail, and it bites to have to take your server down and leave it \ndown for a rebuild.\n", "msg_date": "Thu, 06 Jul 2006 23:08:11 -0600", "msg_from": "Michael Loftis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 +PostgreSQL" }, { "msg_contents": "On Jul 5, 2006, at 7:46 PM, Kenji Morishige wrote:\n\n> like to know what an ideal RAID controller that would be compatible \n> with\n> FreeBSD 6.1 would be these days.\n\nLSI MegaRAID 320-2X and put half disks on one channel, half on the \nother, and MIRROR+STRIPE them (ie, RAID10).\n\nThere is nothing faster for FreeBSD 6.\n\nJust make sure you don't buy a Sun Fire X4100 to put it in, as it \nwill not fit :-(", "msg_date": "Mon, 10 Jul 2006 10:52:18 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 + PostgreSQL 8.1" } ]
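A crude way to see what the battery-backed-cache advice in this thread means in practice is to measure how many tiny commits the storage will absorb; a sketch using a throwaway table (names are arbitrary, and each INSERT runs as its own transaction under normal autocommit):

CREATE TABLE commit_rate_test (n int);
INSERT INTO commit_rate_test VALUES (1);
-- repeat a few hundred single-row inserts from the client, timing the whole run,
-- then clean up:
DROP TABLE commit_rate_test;

With fsync on, each commit forces a WAL flush, so the achievable rate approximates the fsync ceiling of the controller and disks: a battery-backed write cache can acknowledge the flush from RAM and typically sustains thousands per second, while with the cache disabled (the safe setting when there is no battery) the rate is bounded by how fast the disks can physically commit, which makes the difference between the candidate setups obvious.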
[ { "msg_contents": "Hello!\n\nI have a postgresql server serving thousands of tables. Sometime there are\nqueries which involves several tables.\n\nIn postgresql.conf I have these settings:\n\nshared_buffers = 40000\nwork_mem = 8192\nmaintenance_work_mem = 16384\nmax_stack_depth = 2048\n\nall other settings are left by default (except ones needed for pg_autovacuum).\n\nIs there anything I can tune to get better performance?\n\n-- \nEugene N Dzhurinsky\n", "msg_date": "Thu, 6 Jul 2006 09:40:16 +0300", "msg_from": "Eugeny N Dzhurinsky <[email protected]>", "msg_from_op": true, "msg_subject": "getting better performance" }, { "msg_contents": "am 06.07.2006, um 9:40:16 +0300 mailte Eugeny N Dzhurinsky folgendes:\n> In postgresql.conf I have these settings:\n> \n> shared_buffers = 40000\n> work_mem = 8192\n> maintenance_work_mem = 16384\n> max_stack_depth = 2048\n> \n> all other settings are left by default (except ones needed for pg_autovacuum).\n> \n> Is there anything I can tune to get better performance?\n\nYou can set \"log_min_duration_statement\" to log slow querys and then\nanalyse this querys.\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\nHeynitz: 035242/47215, D1: 0160/7141639\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Thu, 6 Jul 2006 08:48:07 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" }, { "msg_contents": "On 7/6/06, A. Kretschmer <[email protected]> wrote:\n> am 06.07.2006, um 9:40:16 +0300 mailte Eugeny N Dzhurinsky folgendes:\n> > In postgresql.conf I have these settings:\n> >\n> > shared_buffers = 40000\n> > work_mem = 8192\n> > maintenance_work_mem = 16384\n> > max_stack_depth = 2048\n> >\n> > all other settings are left by default (except ones needed for pg_autovacuum).\n> >\n> > Is there anything I can tune to get better performance?\n>\n> You can set \"log_min_duration_statement\" to log slow querys and then\n> analyse this querys.\n\nWhen you collect your logs try PgFouine\nhttp://pgfouine.projects.postgresql.org/\nto understand what queries should be optimized and what's the reason\nof poor performance.\n", "msg_date": "Thu, 6 Jul 2006 12:48:09 +0400", "msg_from": "\"Ivan Zolotukhin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" }, { "msg_contents": "On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:\n> Hello!\n> \n> I have a postgresql server serving thousands of tables. Sometime there are\n> queries which involves several tables.\n\nDo you add / remove tables a lot? Could be you've got system catalog\nbloat.\n\ndo you have autovacuum running?\n\nWhat version of pgsql are you running?\n\nWhat OS?\n\nWhat file system?\n\nWhat kind of machine are you using?\n\nHow much memory does it have?\n\nHow many disk drives?\n\nAre you using RAID? hardware / software? 
battery backed cache or no?\n\nDo you have one or two, or thousands of connections at a time?\n\nDo you use connection pooling if you have lots of connections to handle?\n\nThere's a lot of info needed to make a decision on how to performance\ntune a system.\n", "msg_date": "Thu, 06 Jul 2006 09:28:39 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" }, { "msg_contents": "On Thu, Jul 06, 2006 at 09:28:39AM -0500, Scott Marlowe wrote:\n> On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:\n> Do you add / remove tables a lot? Could be you've got system catalog\n> bloat.\n\nYes, almost each table is dropped and re-created in 3-5 days.\n\n> do you have autovacuum running?\n\nYes.\n\n> What version of pgsql are you running?\n\npsql -V\npsql (PostgreSQL) 8.0.0\n\n> What OS?\n\nCentOS release 3.7 (Final)\n\n> What file system?\n\next3\n\n> What kind of machine are you using?\n\nPentium IV, 1.8 GHz\n\n> How much memory does it have?\n\n512 Mb RAM\n\n> How many disk drives?\n\nsingle\n\n> Are you using RAID? hardware / software? battery backed cache or no?\n\nno\n\n> Do you have one or two, or thousands of connections at a time?\n\nsomething like 20 connections (peak).\n\n> Do you use connection pooling if you have lots of connections to handle?\n\nMy application uses Jakarta Commons DBCP module.\n\n-- \nEugene Dzhurinsky\n", "msg_date": "Thu, 6 Jul 2006 18:11:26 +0300", "msg_from": "Eugeny N Dzhurinsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: getting better performance" }, { "msg_contents": "On 7/6/06, Eugeny N Dzhurinsky <[email protected]> wrote:\n> Hello!\n>\n> I have a postgresql server serving thousands of tables. Sometime there are\n> queries which involves several tables.\n\n> In postgresql.conf I have these settings:\n>\n> shared_buffers = 40000\n> work_mem = 8192\n> maintenance_work_mem = 16384\n> max_stack_depth = 2048\n>\n> all other settings are left by default (except ones needed for pg_autovacuum).\n>\n> Is there anything I can tune to get better performance?\n\nyou may want to explore upping your FSM settings with that many tables.\n\nMerlin\n", "msg_date": "Thu, 6 Jul 2006 11:26:54 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" }, { "msg_contents": "On Thu, 2006-07-06 at 10:11, Eugeny N Dzhurinsky wrote:\n> On Thu, Jul 06, 2006 at 09:28:39AM -0500, Scott Marlowe wrote:\n> > On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:\n> > Do you add / remove tables a lot? Could be you've got system catalog\n> > bloat.\n> \n> Yes, almost each table is dropped and re-created in 3-5 days.\n\n> > do you have autovacuum running?\n> \n> Yes.\n\nHopefully, that will keep the system catalogs from getting overly fat. \nYou still need to check their size every so often to make sure they're\nnot getting out of line. \n\nyou might wanna run a vacuum verbose and see what it has to say.\n\n> > What version of pgsql are you running?\n> \n> psql -V\n> psql (PostgreSQL) 8.0.0\n\nYou should update. x.0.0 versions often have the nastiest of the data\nloss bugs of any release versions of postgresql. 8.0 is up to 8.0.8\nnow. The upgrades are generally painless (backup anyway, just in case)\nand can be done in place. 
just down postgres, rpm -Uvh the packages and\nrestart postgres\n\n> > What OS?\n> \n> CentOS release 3.7 (Final)\n> \n> > What file system?\n> \n> ext3\n> \n> > What kind of machine are you using?\n> \n> Pentium IV, 1.8 GHz\n> \n> > How much memory does it have?\n> \n> 512 Mb RAM\n\nThat's kind of small for a database server. If your data set is fairly\nsmall it's alright, but if you're working on gigs of data in your\ndatabase, the more memory the better.\n\n> > How many disk drives?\n> \n> single\n\nOTOH, if you're doing a lot of committing / writing to the hard drives,\na single disk drive is suboptimal\n\nIs this SCSI, PATA or SATA? If it's [SP]ATA, then you've likely got no\nreal fsyncing, and while performance won't be a problem, reliability\nshould the machine crash would be. If it's SCSI, then it could be a\nbottle neck for writes.\n\n> > Are you using RAID? hardware / software? battery backed cache or no?\n> \n> no\n\nI'd recommend looking into it, unless you're CPU bound. A decent RAID\ncontroller with battery backed cache and a pair of drives in a mirror\nsetup can be a marked improved to start with, and you can add more\ndrives as time goes by if needs be.\n\nMy guess is that you've got sys catalog bloat. You might have to down\nthe database to single user mode and run a vacuum on the system catalogs\nfrom there. That's how it was in the old days of 7.x databases anyway.\n\nIf you don't have sys cat bloat, then you're probably CPU / memory bound\nright now. unless you're writing a lot, then you're likely disk i/o\nbound, but I kinda doubt it.\n", "msg_date": "Thu, 06 Jul 2006 10:32:28 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" }, { "msg_contents": "Hi, Eugeny,\n\nEugeny N Dzhurinsky wrote:\n\n>> Do you add / remove tables a lot? Could be you've got system catalog\n>> bloat.\n> \n> Yes, almost each table is dropped and re-created in 3-5 days.\n\nIf your really recreate the same table, TRUNCATE may be a better\nsolution than dropping and recreation.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 07 Jul 2006 10:28:54 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting better performance" } ]
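One way to act on the TRUNCATE suggestion in the thread above, as a minimal sketch (the table name work_table is illustrative, not from the thread): TRUNCATE keeps the existing relation and its indexes, so the system catalogs are not churned the way a DROP/CREATE cycle every few days churns them, and a follow-up ANALYZE refreshes the planner statistics for the refilled table.

TRUNCATE TABLE work_table;
-- reload the table here, e.g. with COPY or INSERT ... SELECT
ANALYZE work_table;

A rough way to watch for system-catalog growth from heavy DDL is to track the page counts of the catalogs themselves; relpages that keep climbing here despite regular vacuuming is the usual sign of catalog bloat:

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend', 'pg_index')
ORDER BY relpages DESC;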
[ { "msg_contents": "with all these unsubscribe requests, we can only extrapolate that the\nserver has no serious performance issues left to solve. good work!\n:-)\n\nmerlin\n", "msg_date": "Thu, 6 Jul 2006 13:53:42 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "victory!" }, { "msg_contents": "On Thu, July 6, 2006 1:53 pm, Merlin Moncure wrote:\n> with all these unsubscribe requests, we can only extrapolate that the\n> server has no serious performance issues left to solve. good work! :-)\n\nWell, either that or the performance issues are so severe that users are\ndropping like flies...\n\n-M\n\n\n", "msg_date": "Thu, 6 Jul 2006 14:01:24 -0400 (EDT)", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: victory!" } ]
[ { "msg_contents": "I have a problem with a query that in postgres 7.4 and 8.12 has an acceptable response time but in postgres 8.14 is very slow.\n \n \n This is the table I use:\n \n create table TEST (\n TESTID INT8 not null,\n TESTTYPE INT4 null,\n constraint PK_TESTID primary key (TESTID));\n create index IX_TEST_TESTTYPE on TEST (TESTTYPE);\n \n And this is the query with the problem:\n \n explain select max(TESTID) from TEST where TESTTYPE = 1577;\n \n The query plan in postgres 7.4 and 8.12 is using the index by TESTTYPE field, which is what I want in this case.\n \n QUERY PLAN \n Aggregate (cost=25.97..25.97 rows=1 width=8) \n -> Index Scan using ix_test_testtype on test (cost=0.00..25.95 rows=9 width=8) \n Index Cond: (testtype = 1577) \n \n \n With postgres 8.14 the query plan uses the primary key PK_TESTID with filter by TESTTYPE, which it takes almost 10 minutes to execute:\n \n QUERY PLAN \n Limit (cost=0.00..41.46 rows=1 width=8) \n -> Index Scan Backward using pk_testid on test (cost=�) \n Filter: ((testid IS NOT NULL) and (testtype = 1577))\n \n When replacing the index \n create index IX_TEST_TESTTYPE on TEST (TESTTYPE);\n with \n create index IX_TEST_TESTTYPE on TEST (TESTTYPE, TESTID);\n the query plan uses this index and the execution of this select is extremely fast. \n \n From what I can see, the query plan for 8.14 is using a index scan by the field used with max() function with a filter by the field in where condition.\n Should not the query plan use an index scan by the field in where condition (which in my case is a small range) and come up with the max value in that range?\n \n Is this a bug, am I missing a configuration step or this is how it is supposed to work? \n \n Thank you very much,\n Ioana \n \n \t\t\t\t\n---------------------------------\nMake free worldwide PC-to-PC calls. Try the new Yahoo! Canada Messenger with Voice\nI have a problem with a query that in postgres 7.4 and 8.12 has an acceptable response time but in postgres 8.14 is very slow. This is the table I use: create table TEST ( TESTID    INT8 not null, TESTTYPE  INT4     null, constraint PK_TESTID primary key (TESTID)); create index IX_TEST_TESTTYPE on TEST (TESTTYPE);   And this is the query with the problem:   explain select max(TESTID)\n from TEST where TESTTYPE = 1577;   The query plan in postgres 7.4 and 8.12 is using the index by TESTTYPE field, which is what I want in this case.   QUERY PLAN  Aggregate  (cost=25.97..25.97 rows=1 width=8)    \n  ->  Index Scan using ix_test_testtype on test  (cost=0.00..25.95 rows=9 width=8)            Index Cond: (testtype = 1577)     With postgres 8.14 the query plan uses the primary key PK_TESTID with filter by TESTTYPE, which it takes almost 10 minutes to execute:   QUERY PLAN  Limit  (cost=0.00..41.46 rows=1 width=8)      ->  Index Scan Backward using pk_testid on test  (cost=�)            Filter: ((testid IS NOT NULL) and (testtype = 1577))   When replacing the index create index IX_TEST_TESTTYPE on TEST (TESTTYPE); with create index\n IX_TEST_TESTTYPE on TEST (TESTTYPE, TESTID); the query plan uses this index and the execution of this select is extremely fast.   From what I can see, the query plan for 8.14 is using a index scan by the field used with max() function with a filter by the field in where condition. Should not the query plan use an index scan by the field in where condition (which in my case is a small range) and come up with the max value in that range?   
Is this a bug, am I missing a\n configuration step or this is how it is supposed to work?   Thank you very much, Ioana \nMake free worldwide PC-to-PC calls. Try the new Yahoo! Canada Messenger with Voice", "msg_date": "Thu, 6 Jul 2006 16:06:52 -0400 (EDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan issue when upgrading to postgres 8.14 (from postgres 8.12\n\tor 7.4)" }, { "msg_contents": "If I insert a bunch of rows, then truncate, then insert a bunch more rows, do I need to vacuum? I've been assuming that TRUNCATE TABLE is a brute-force technique that more-or-less tosses the old table and starts fresh so that no vacuum is necessary.\n\nSecond question: Same scenario as above, but now the table has indexes. Is a reindex needed, or are the indexes they \"truncated\" too?\n\nThanks,\nCraig\n\n", "msg_date": "Thu, 06 Jul 2006 18:49:44 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "need vacuum after insert/truncate/insert?" }, { "msg_contents": "Ioana Danes wrote:\n> I have a problem with a query that in postgres 7.4 and 8.12 has an \n> acceptable response time but in postgres 8.14 is very slow.\n> \n> This is the table I use:\n> *\n> create* *table* TEST (\n> TESTID INT8 *not* *null*,\n> TESTTYPE INT4 *null*,\n> *constraint* PK_TESTID *primary* *key* (TESTID));\n> *create* *index* IX_TEST_TESTTYPE *on* TEST (TESTTYPE);\n> \n> And this is the query with the problem:\n> \n> *explain select* *max*(TESTID) *from* TEST *where* TESTTYPE = 1577;\n> \n> The query plan in postgres 7.4 and 8.12 is using the index by TESTTYPE \n> field, which is what I want in this case.\n> \n> QUERY PLAN \n> Aggregate (cost=25.97..25.97 rows=1 width=8) \n> -> Index Scan using ix_test_testtype on test (cost=0.00..25.95 \n> rows=9 width=8) \n> Index Cond: (testtype = 1577)\n> \n> \n> With postgres 8.14 the query plan uses the primary key PK_TESTID with \n> filter by TESTTYPE, which it takes almost 10 minutes to execute:\n> \n> QUERY PLAN \n> Limit (cost=0.00..41.46 rows=1 width=8) \n> -> Index Scan Backward using pk_testid on test (cost=�) \n> Filter: ((testid IS NOT NULL) and (testtype = 1577))\n> \n> When replacing the index\n> *create* *index* IX_TEST_TESTTYPE *on* TEST (TESTTYPE);\n> with\n> *create* *index* IX_TEST_TESTTYPE *on* TEST (TESTTYPE, TESTID);\n> the query plan uses this index and the execution of this select is \n> extremely fast.\n> \n> From what I can see, the query plan for 8.14 is using a index scan by \n> the field used with max() function with a filter by the field in where \n> condition.\n> Should not the query plan use an index scan by the field in where \n> condition (which in my case is a small range) and come up with the max \n> value in that range?\n> \n> Is this a bug, am I missing a configuration step or this is how it is \n> supposed to work?\n\nYou've left out the best details. Post an 'explain analyze' from both \nversions, and don't cut anything out :)\n\nI'm guessing postgres is seeing an index on the table is faster because \nit doesn't think you have many rows in the table. How many are there, \nand have you done an analyze of the table after loading the data in?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Fri, 07 Jul 2006 11:51:00 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan issue when upgrading to postgres 8.14 (from" }, { "msg_contents": "Hi, Craig,\n\nCraig A. 
James wrote:\n> If I insert a bunch of rows, then truncate, then insert a bunch more\n> rows, do I need to vacuum? I've been assuming that TRUNCATE TABLE is a\n> brute-force technique that more-or-less tosses the old table and starts\n> fresh so that no vacuum is necessary.\n> \n> Second question: Same scenario as above, but now the table has indexes. \n> Is a reindex needed, or are the indexes they \"truncated\" too?\n\nAFAIK, both table and indices are \"cut down\" nicely.\n\nBut you will need an ANALYZE after refilling of the table, to have\ncurrent statistics.\n\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 07 Jul 2006 10:32:07 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need vacuum after insert/truncate/insert?" }, { "msg_contents": "Hi Chris,\n \n Here are the results of my query for postgresql 8.0.3 and 8.1.4. For postgresql 8.1.4 there are 2 results, one for test table having the same indexes as in 8.0.3 and the second one for a new index on test table by (testtype,testid) that will speed up my query. This last index will fix my problem for this particular query. \n \n In the Test table there are 19,494,826 records and 11,090 records have testtype = 1455. The data on both servers is identical. And on both servers I run vacuum analyze prior executing this queries.\n \n As it can be seen the result in postgresql 8.1.4 is very slow and I am wondering why is that. Bug, missing configuration, ... \n \n 1. Result on Postgresql 8.0.3:\n -------------------------------------\n # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;\n \n Aggregate (cost=391.56..391.56 rows=1 width=8) (actual time=94.707..94.711 rows=1 loops=1)\n -> Index Scan using ix_test_testtype on test (cost=0.00..355.18 rows=14551 width=8) (actual time=0.036..51.089 rows=11090 loops=1)\n Index Cond: (testtype = 1455)\n Total runtime: 94.778 ms\n (4 rows)\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n (1 row)\n \n Time: 13.447 ms\n \n\n 2. Result on Postgresql 8.1.4 (with the same indexes as in 8.0.3):\n ------------------------------------------------------------------------------------------\n Result (cost=32.78..32.79 rows=1 width=0) (actual time=1865.406..1865.408 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..32.78 rows=1 width=8) (actual time=1865.378..1865.381 rows=1 loops=1)\n -> Index Scan Backward using pk_testid on test (cost=0.00..464069.25 rows=14155 width=8) (actual time=1865.371..1865.371 rows=1 loops=1)\n Filter: ((testid IS NOT NULL) AND (testtype = 1455))\n Total runtime: 1865.522 ms\n (6 rows)\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n \n Time: 1858.076 ms\n \n \n 3. 
Result on Postgresql 8.1.4 (after creating an index by testtype, testid ):\n -----------------------------------------------------------------------------------------------------\n # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;\n Result (cost=1.71..1.72 rows=1 width=0) (actual time=0.069..0.070 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..1.71 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)\n -> Index Scan Backward using ix_test2 on test (cost=0.00..24248.92 rows=14155 width=8) (actual time=0.050..0.050 rows=1 loops=1)\n Index Cond: (testtype = 1455)\n Filter: (testid IS NOT NULL)\n Total runtime: 0.159 ms\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n \n Time: 1.029 ms\n \n \n Thank you very much,\n Ioana Danes\n \nChris <[email protected]> wrote:\nYou've left out the best details. Post an 'explain analyze' from both \nversions, and don't cut anything out :)\n\nI'm guessing postgres is seeing an index on the table is faster because \nit doesn't think you have many rows in the table. How many are there, \nand have you done an analyze of the table after loading the data in?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n \t\t\n---------------------------------\nNow you can have a huge leap forward in email: get the new Yahoo! Mail. \nHi Chris, Here are the results of my query for postgresql 8.0.3 and 8.1.4. For postgresql 8.1.4 there are 2 results, one for test table having the same indexes as in 8.0.3 and the second one for a new index on test table by (testtype,testid) that will speed up my query. This last index will fix my problem for this particular query.  In the Test table there are 19,494,826 records and 11,090 records have testtype = 1455. The data on both servers is identical. And on both servers I run vacuum analyze prior executing this queries. As it can be seen the result in postgresql 8.1.4 is very slow and I am wondering why is that. Bug, missing configuration, ... 1. Result on Postgresql 8.0.3: ------------------------------------- # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;  Aggregate  (cost=391.56..391.56 rows=1 width=8) (actual time=94.707..94.711 rows=1 loops=1) \n    ->  Index Scan using ix_test_testtype on test  (cost=0.00..355.18 rows=14551 width=8) (actual time=0.036..51.089 rows=11090 loops=1)          Index Cond: (testtype = 1455)  Total runtime: 94.778 ms (4 rows) # select max(TESTID) from TEST where TESTTYPE = 1455;    max ----------  18527829 (1 row) Time: 13.447 ms 2. Result on Postgresql 8.1.4 (with the same indexes as in 8.0.3): ------------------------------------------------------------------------------------------  Result  (cost=32.78..32.79 rows=1 width=0) (actual time=1865.406..1865.408 rows=1 loops=1)    InitPlan      ->  Limit  (cost=0.00..32.78 rows=1 width=8) (actual time=1865.378..1865.381 rows=1 loops=1)            \n ->  Index Scan Backward using pk_testid on test  (cost=0.00..464069.25 rows=14155 width=8) (actual time=1865.371..1865.371 rows=1 loops=1)                  Filter: ((testid IS NOT NULL) AND (testtype = 1455))  Total runtime: 1865.522 ms (6 rows) # select max(TESTID) from TEST where TESTTYPE = 1455;      max ----------  18527829 Time: 1858.076 ms 3. 
Result on Postgresql 8.1.4 (after creating an index by testtype, testid ): ----------------------------------------------------------------------------------------------------- # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;  Result  (cost=1.71..1.72 rows=1 width=0) (actual time=0.069..0.070 rows=1 loops=1)    InitPlan     \n ->  Limit  (cost=0.00..1.71 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)            ->  Index Scan Backward using ix_test2 on test  (cost=0.00..24248.92 rows=14155 width=8) (actual time=0.050..0.050 rows=1 loops=1)                  Index Cond: (testtype = 1455)                  Filter: (testid IS NOT NULL)  Total runtime: 0.159 ms # select max(TESTID) from TEST where TESTTYPE = 1455;    max ----------  18527829 Time: 1.029 ms Thank you very much, Ioana Danes Chris <[email protected]> wrote:You've left out the best details. Post an 'explain analyze' from both versions, and don't cut anything out :)I'm guessing postgres is seeing an index on the table is faster because it doesn't think you have many rows in the table. How many are there, and have you done an analyze of the table after loading the data in?-- Postgresql & php tutorialshttp://www.designmagick.com/---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster\nNow you can have a huge leap forward in email: get the new Yahoo! Mail.", "msg_date": "Tue, 18 Jul 2006 12:02:53 -0400 (EDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan issue when upgrading to postgres 8.14 (from" }, { "msg_contents": "Hi everyone,\n \n I posted this question some time ago and I did not get any answer so here I am again. \n Does anyone now what the problem is with the following select when upgrading to postgresql 8.1.4 the query plan does not use the indexes as in postgresql 8.0.3.\n \n Here are the results of my query for postgresql 8.0.3 and 8.1.4. For postgresql 8.1.4 there are 2 results, one for test table having the same indexes as in 8.0.3 and the second one for a new index on test table by (testtype,testid) that will speed up my query. This last index will fix my problem for this particular query.\n \n In the Test table there are 19,494,826 records and 11,090 records have testtype = 1455. The data on both servers is identical. And on both servers I run vacuum analyze prior executing this queries.\n \n As it can be seen the result in postgresql 8.1.4 is very slow and I am wondering why is that. Bug, missing configuration, ...\n \n 1. Result on Postgresql 8.0.3:\n -------------------------------------\n # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;\n \n Aggregate (cost=391.56..391.56 rows=1 width=8) (actual time=94.707..94.711 rows=1 loops=1)\n -> Index Scan using ix_test_testtype on test (cost=0.00..355.18 rows=14551 width=8) (actual time=0.036..51.089 rows=11090 loops=1)\n Index Cond: (testtype = 1455)\n Total runtime: 94.778 ms\n (4 rows)\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n (1 row)\n \n Time: 13.447 ms\n \n \n 2. 
Result on Postgresql 8.1.4 (with the same indexes as in 8.0.3):\n ------------------------------------------------------------------------------------------\n Result (cost=32.78..32.79 rows=1 width=0) (actual time=1865.406..1865.408 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..32.78 rows=1 width=8) (actual time=1865.378..1865.381 rows=1 loops=1)\n -> Index Scan Backward using pk_testid on test (cost=0.00..464069.25 rows=14155 width=8) (actual time=1865.371..1865.371 rows=1 loops=1)\n Filter: ((testid IS NOT NULL) AND (testtype = 1455))\n Total runtime: 1865.522 ms\n (6 rows)\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n \n Time: 1858.076 ms\n \n \n 3. Result on Postgresql 8.1.4 (after creating an index by testtype, testid ):\n -----------------------------------------------------------------------------------------------------\n # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;\n Result (cost=1.71..1.72 rows=1 width=0) (actual time=0.069..0.070 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..1.71 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)\n -> Index Scan Backward using ix_test2 on test (cost=0.00..24248.92 rows=14155 width=8) (actual time=0.050..0.050 rows=1 loops=1)\n Index Cond: (testtype = 1455)\n Filter: (testid IS NOT NULL)\n Total runtime: 0.159 ms\n \n # select max(TESTID) from TEST where TESTTYPE = 1455;\n \n max\n ----------\n 18527829\n \n Time: 1.029 ms\n \n Thank you in advance,\n Ioana \n\n \t\t\n---------------------------------\nBe smarter than spam. See how smart SpamGuard is at giving junk email the boot with the All-new Yahoo! Mail \nHi everyone, I posted this question some time ago and I did not get any answer so here I am again. Does anyone now what the problem is with the following select when upgrading to postgresql 8.1.4 the query plan does not use the indexes as in postgresql 8.0.3. Here are the results of my query for postgresql 8.0.3 and 8.1.4. For postgresql 8.1.4 there are 2 results, one for test table having the same indexes as in 8.0.3 and the second one for a new index on test table by (testtype,testid) that will speed up my query. This last index will fix my problem for this particular query. In the Test table there are 19,494,826 records and 11,090 records have testtype = 1455. The data on both servers is identical. And on both servers I run vacuum analyze prior executing this queries. As it can be seen the result in postgresql 8.1.4 is very slow and I am wondering why is that. Bug, missing configuration, ... 1. Result\n on Postgresql 8.0.3: ------------------------------------- # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;  Aggregate  (cost=391.56..391.56 rows=1 width=8) (actual time=94.707..94.711 rows=1 loops=1)    ->  Index Scan using ix_test_testtype on test  (cost=0.00..355.18 rows=14551 width=8) (actual time=0.036..51.089 rows=11090 loops=1)          Index Cond: (testtype = 1455)  Total runtime: 94.778 ms (4 rows) # select max(TESTID) from TEST where TESTTYPE = 1455;    max ----------  18527829 (1 row) Time: 13.447 ms 2. 
Result on Postgresql 8.1.4 (with the same indexes as in 8.0.3): ------------------------------------------------------------------------------------------  Result  (cost=32.78..32.79 rows=1 width=0) (actual\n time=1865.406..1865.408 rows=1 loops=1)    InitPlan      ->  Limit  (cost=0.00..32.78 rows=1 width=8) (actual time=1865.378..1865.381 rows=1 loops=1)            ->  Index Scan Backward using pk_testid on test  (cost=0.00..464069.25 rows=14155 width=8) (actual time=1865.371..1865.371 rows=1 loops=1)                  Filter: ((testid IS NOT NULL) AND (testtype = 1455))  Total runtime: 1865.522 ms (6 rows) # select max(TESTID) from TEST where TESTTYPE = 1455;      max ----------  18527829 Time: 1858.076 ms 3. Result on Postgresql 8.1.4 (after creating an index by testtype, testid ): \n ----------------------------------------------------------------------------------------------------- # explain analyze select max(TESTID) from TEST where TESTTYPE = 1455;  Result  (cost=1.71..1.72 rows=1 width=0) (actual time=0.069..0.070 rows=1 loops=1)    InitPlan      ->  Limit  (cost=0.00..1.71 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)            ->  Index Scan Backward using ix_test2 on test  (cost=0.00..24248.92 rows=14155 width=8) (actual time=0.050..0.050 rows=1 loops=1)                  Index Cond: (testtype = 1455)                  Filter: (testid IS NOT NULL)  Total runtime: 0.159 ms # select max(TESTID) from TEST\n where TESTTYPE = 1455;    max ----------  18527829 Time: 1.029 ms Thank you in advance, Ioana \nBe smarter than spam. See how smart SpamGuard is at giving junk email the boot with the All-new Yahoo! Mail", "msg_date": "Thu, 27 Jul 2006 14:55:06 -0400 (EDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan issue when upgrading to postgres 8.14 (from" }, { "msg_contents": "Ioana Danes <[email protected]> writes:\n> Does anyone now what the problem is with the following select when upgrading to postgresql 8.1.4 the query plan does not use the indexes as in postgresql 8.0.3.\n \nThe planner doesn't have enough information about the correlation\nbetween testtype and testid to guess that the index-driven max()\noptimization doesn't work well in this case. But I see you've\nalready found the solution ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jul 2006 18:22:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan issue when upgrading to postgres 8.14 (from " } ]
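A short sketch of the fix Ioana arrived at above (the column names and the literal 1455 follow the thread's TEST table; the index name here is illustrative): with a two-column index the 8.1 max() rewrite can descend the index directly, and the equivalent explicit form behaves the same way.

CREATE INDEX ix_test_testtype_testid ON test (testtype, testid);

SELECT testid
FROM test
WHERE testtype = 1455
ORDER BY testid DESC
LIMIT 1;

The explicit ORDER BY ... LIMIT 1 form is handy because it makes the intended plan visible in the query itself: an equality match on the leading index column, then a backward scan within that range to pick off the largest testid.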
[ { "msg_contents": "I have a problem with a query that in postgres 7.4 and 8.12 has an acceptable response time but in postgres 8.14 is very slow.\n \nThis is the table I use:\n \n create table TEST (\n TESTID INT8 not null,\n TESTTYPE INT4 null,\n constraint PK_TESTID primary key (TESTID));\n create index IX_TEST_TESTTYPE on TEST (TESTTYPE);\n \nAnd this is the query with the problem:\n explain select max(TESTID) from TEST where TESTTYPE = 1577;\n \nThe query plan in postgres 7.4 and 8.12 is using the index by TESTTYPE field, which is what I want in this case.\n QUERY PLAN \n Aggregate (cost=25.97..25.97 rows=1 width=8) \n -> Index Scan using ix_test_testtype on test (cost=0.00..25.95 rows=9 width=8) \n Index Cond: (testtype = 1577) \n With postgres 8.14 the query plan uses the primary key PK_TESTID with filter by TESTTYPE, which it takes almost 10 minutes to execute:\n QUERY PLAN \n Limit (cost=0.00..41.46 rows=1 width=8) \n -> Index Scan Backward using pk_testid on test (cost=�) \n Filter: ((testid IS NOT NULL) and (testtype = 1577))\n \nWhen replacing the index \n create index IX_TEST_TESTTYPE on TEST (TESTTYPE);\nwith \n create index IX_TEST_TESTTYPE on TEST (TESTTYPE, TESTID);\nthe query plan uses this index and the execution of this select is extremely fast. \n \n>From what I can see, the query plan for 8.14 is using a index scan by the field used with max() function with a filter by the field in the where condition.\nShould not the query plan use an index scan by the field in where condition (which in my case is a small range) and come up with the max value in that range?\n \nIs this a bug, am I missing a configuration step or this is how it is supposed to work? \n \nThank you very much,\nIoana \n\n\n \t\t\n---------------------------------\n All new Yahoo! Mail - \n---------------------------------\nGet a sneak peak at messages with a handy reading pane.\nI have a problem with a query that in postgres 7.4  and 8.12 has an acceptable response time but in postgres 8.14 is very slow.      This is the table I use:             create table  TEST (      TESTID    INT8 not null,      TESTTYPE  INT4     null,      constraint  PK_TESTID primary key (TESTID));      create index  IX_TEST_TESTTYPE on TEST (TESTTYPE);       And this is the query with the problem:       explain select max(TESTID) from TEST where TESTTYPE = 1577;       The query plan in postgres 7.4 and 8.12 is using the  index by TESTTYPE field, which is what I want in this\n case.      QUERY PLAN        Aggregate   (cost=25.97..25.97 rows=1 width=8)            ->  Index Scan using ix_test_testtype on  test  (cost=0.00..25.95 rows=9 width=8)                  Index Cond:  (testtype = 1577)        With postgres 8.14 the query plan uses the primary  key PK_TESTID with filter by TESTTYPE, which  it takes almost 10 minutes to execute:      QUERY PLAN        Limit  (cost=0.00..41.46  rows=1 width=8)            ->  Index Scan Backward using pk_testid on  test  (cost=�)   \n               Filter: ((testid IS  NOT NULL) and (testtype = 1577))       When replacing the index      create index IX_TEST_TESTTYPE on TEST (TESTTYPE);with      create index IX_TEST_TESTTYPE on TEST (TESTTYPE, TESTID);the query plan uses this index and the execution of this select is  extremely fast.        
From what I can see, the query plan for 8.14 is using a index scan  by the field used with max() function with a filter by the field in the where  condition.Should not the query plan use an index scan by the field in where  condition (which in my case is a small range) and come up with the max value in that range?       Is this a bug, am I missing a configuration step or this is how it  is supposed\n to work?        Thank you very much,Ioana \n All new Yahoo! Mail - \nGet a sneak peak at messages with a handy reading pane.", "msg_date": "Thu, 6 Jul 2006 17:34:46 -0400 (EDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan issue when upgrading to postgres 8.14 (from postgres 8.12\n\tor 7.4)" } ]
[ { "msg_contents": "I'm using PostgreSQL 8.1.4 in a Hibernate Application and I am attempting to\nuse partitioning via Inherited tables. At first I was going to create a rule\nper sub-table based on a date range, but found out with multiple rules\npostgres will only return the affected-row count on the last rule which\ngives Hibernate problems. So now I'm thinking the way to do it is just have\none rule at a time and when I want to start appending data to a new\npartition, just change the rule on the parent table and also update the\nconstraint on the last table to reflect the date ranges contained so that\nconstraint_exclusion will work. this should perform better also. For\ninstance\n\nStarting off with:\n\nParent (Rule on insert instead insert into Child2)\n Child1 (Constraint date <= somedate1)\n Child2 (Constraint date > somedate1)\n\nNow I want to create another Partition:\n\n Create Table Child3\nBEGIN\nUpdate Parent Rule( instead insert into Child3)\nsomedate2 = max(date) from Child2\nUpdate Child2 Constraint( date > somedate1 AND date <= somedate2 )\nSet Constraint Child3 (date > somedate2)\nEND\n\nWhich ends up with:\n\nParent (Rule on insert instead insert into Child2)\n Child1 (Constraint date <= somedate1)\n Child2 (Constraint date > somedate1 AND date <= somedate2)\n Child3 (Constraint date > somedate2)\n\nAnyone else tried this or expect it to work consistently (without stopping\ndb)? Is it possible that there could be a race condition for the insertion\nand constraints or will the transaction prevent that from occurring? I've\ndone some testing and it seems to work but I could just get lucky so far and\nnot lose any data :)\n\nThanks for any help,\nGene\n\nI'm using PostgreSQL 8.1.4 in a Hibernate Application and I am\nattempting to use partitioning via Inherited tables. At first I was\ngoing to create a rule per sub-table based on a date range, but found\nout with multiple rules postgres will only return the affected-row\ncount on the last rule which gives Hibernate problems. So now I'm\nthinking the way to do it is just have one rule at a time and when I\nwant to start appending data to a new partition, just change the rule\non the parent table and also update the constraint on the last table to\nreflect the date ranges contained so that constraint_exclusion will\nwork. this should perform better also. For instance\nStarting off with:\nParent (Rule on insert instead insert into Child2)\n  Child1 (Constraint date <= somedate1)\n  Child2 (Constraint date > somedate1)\n\nNow I want to create another Partition:\n\n\nCreate Table Child3\nBEGIN\nUpdate Parent Rule( instead insert into Child3)\nsomedate2 = max(date) from Child2\nUpdate Child2 Constraint( date > somedate1 AND date <= somedate2 )\nSet Constraint Child3 (date > somedate2)\nEND\n\nWhich ends up with:\nParent (Rule on insert instead insert into Child2)\n\n  Child1 (Constraint date <= somedate1)\n\n  Child2 (Constraint date > somedate1 AND date <= somedate2)  Child3 (Constraint date > somedate2)Anyone else tried this or expect it to work consistently (without stopping db)? Is it possible that there could be a race condition for the insertion and constraints or will the transaction prevent that from occurring? 
I've done some testing and it seems to work but I could just get lucky so far and not lose any data :)\nThanks for any help,Gene", "msg_date": "Fri, 7 Jul 2006 03:51:38 -0400", "msg_from": "Gene <[email protected]>", "msg_from_op": true, "msg_subject": "Update INSERT RULE while running for Partitioning" }, { "msg_contents": "Hi, Gene,\n\nGene wrote:\n> I'm using PostgreSQL 8.1.4 in a Hibernate Application and I am\n> attempting to use partitioning via Inherited tables. At first I was\n> going to create a rule per sub-table based on a date range, but found\n> out with multiple rules postgres will only return the affected-row count\n> on the last rule which gives Hibernate problems.\n\nThis could be considered a PostgreSQL bug - maybe you should discuss\nthis on the appropriate list (general, hackers)?\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 07 Jul 2006 10:33:31 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update INSERT RULE while running for Partitioning" }, { "msg_contents": "On Fri, Jul 07, 2006 at 03:51:38AM -0400, Gene wrote:\n> Starting off with:\n> \n> Parent (Rule on insert instead insert into Child2)\n> Child1 (Constraint date <= somedate1)\n> Child2 (Constraint date > somedate1)\n> \n> Now I want to create another Partition:\n> \n> Create Table Child3\n> BEGIN\n> Update Parent Rule( instead insert into Child3)\n> somedate2 = max(date) from Child2\n> Update Child2 Constraint( date > somedate1 AND date <= somedate2 )\n> Set Constraint Child3 (date > somedate2)\n> END\n\nBe aware that adding a constraint with ALTER TABLE will involve a whole\ntable scan (at least in 8.1.2 or earlier). This is true even if if you\nhave an index such that \"EXPLAIN SELECT EXISTS (SELECT date > somedate1\nAND date <= somedate2 FROM Child2)\" claims it will run fast. ALTER\nTABLE is coded to always do a heap scan for constraint changes.\n\n\nTo avoid this, this I've made a minor modification to my local\nPostgreSQL to give a construct similar to Oracle's NOVALIDATE. I allow\n\"ALTER TABLE ... ADD CONSTRAINT ... [CHECK EXISTING | IGNORE EXISTING]\".\nTo use this safely without any race conditions I setup my last partition\nwith an explicit end time and possible extend it if needed.\n\nE.g.\n child1 CHECK(ts >= '-infinity' and ts < t1)\n child2 CHECK(ts >= t1 and ts < t2)\n child3 CHECK(ts >= t2 and ts < t3)\n\nHere doing:\n ALTER TABLE child3 ADD CONSTRAINT new_cstr\n CHECK(ts >= t2 and ts < t4) IGNORE EXISTING;\n ALTER TABLE child3 DROP CONSTRAINT old_cstr;\nis safe if t4 >= t3.\n\nI have a regular cron job that makes sure if CURRENT_TIMESTAMP\napproaches tn (the highest constraint time) it either makes a new\npartition (actually, in my case, recycles an old partition) or extends\nthe last partition. My data is such that inserts with a timestamp in\nthe future make no sense.\n\n\n> Anyone else tried this or expect it to work consistently (without stopping\n> db)?\n\nNote that using ALTER TABLE to add a constraint as well as using DROP\nTABLE or TRUNCATE to remove/recycle partitions are DDL commands that\nrequire exclusive locks. This will block both readers and writers to\nthe table(s) and can also cause readers and writers to now interfere\nwith each other.\n\nFor example, my work load is a lot of continuous small inserts with\nsome long running queries (reports). 
MVCC allows these to not block\neach other at all. However, if my cron job comes along and naively\nattempts to do DROP TABLE, TRUNCATE, or ALTER TABLE it will block on the\nlong running queries. This in turn will cause new INSERT transactions\nto queue up behind my waiting exclusive lock and now I effectively have\nreports blocking inserts.\n\nAlways think twice about running DDL commands on a live database;\nespecially in an automated fashion.\n\nThere are methods to alleviate or work around some of the issues of\ngetting an exclusive lock but I haven't found a true solution yet.\nI'd imagine that implementing true partitioning within the PostgreSQL\nback-end would solve this. Presumably because it would know that adding\na new partition, etc can be done without locking out readers at all\nand it would use something other than an exclusive lock to do the DDL\nchanges.\n\n> Is it possible that there could be a race condition for the insertion\n> and constraints or will the transaction prevent that from occurring?\n\nThe required exclusive locks will prevent race conditions. (If you were\nto use something like my IGNORE EXISTING you'd need to make sure you\nmanually got an exclusive lock before looking up the maximum value to\nset as the new constraint.)\n\n-- \nDave Chapeskie\n", "msg_date": "Fri, 7 Jul 2006 12:05:56 -0400", "msg_from": "Dave Chapeskie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update INSERT RULE while running for Partitioning" } ]
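A hedged sketch of the partition-rollover step discussed in this thread, with illustrative names and placeholder dates (parent, child3, ts, parent_insert and the date literals are not taken from the thread). Creating the new child with its CHECK constraint already in place sidesteps adding a constraint to it afterwards; as Dave notes, tightening the constraint on the previous, already-filled child with ALTER TABLE still takes an exclusive lock and scans that whole table on 8.1.

BEGIN;
CREATE TABLE child3 (
    CHECK (ts >= '2006-08-01' AND ts < '2006-09-01')
) INHERITS (parent);
CREATE OR REPLACE RULE parent_insert AS
    ON INSERT TO parent
    DO INSTEAD INSERT INTO child3 VALUES (NEW.*);
COMMIT;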
[ { "msg_contents": "Hi, all.\n\ni'm trying to tune application which makes alots of queries\nwith semantics(find LONGEST PREFIX MATCH in a string) like:\n\nSELECT cost\nFROM tarif\nWHERE $1 LIKE prefix\nORDER BY length(prefix) DESC\nLIMIT 1\n\nfrom table like:\n\nCREATE TABLE tarif (\n id bigint NOT NULL,\n prefix varchar(55) NOT NULL,\n cost numeric(x, x) not null\n) WITHOUT OIDS;\n\nwhere $1 is the phone numbers.. for example.\nit's common task for voice billing applications.\n\n\nso, generally i can do it that ways:\n\nWHERE $1 LIKE prefix\nWHERE $1 SIMILAR TO prefix\nWHERE $1 ~ prefix\nWHERE position(prefix in $1) = 0\n\n(\nsurely i must adopt prefix for matching rules,\ne.g. LIKE prefix || '%'\nand the best is to create trigger which modifies prefix on\ninsert/update time\n)\n\nBUT! this methods doesn't use indexes!!\nthis is the main disadvantage.\n\nvoip3a=# EXPLAIN ANALYZE SELECT cost FROM tarif WHERE '78123319060' like prefix ORDER BY length(prefix) LIMIT 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3028.90..3028.90 rows=1 width=22) (actual time=162.189..162.192 rows=1 loops=1)\n -> Sort (cost=3028.90..3030.43 rows=612 width=22) (actual time=162.181..162.181 rows=1 loops=1)\n Sort Key: length((prefix)::text)\n -> Seq Scan on tarif (cost=0.00..3000.57 rows=612 width=22) (actual time=4.132..161.715 rows=39 loops=1)\n Filter: ('78123319060'::text ~~ (prefix)::text)\n Total runtime: 162.340 ms\n(6 rows)\n\nvoip3a=# SELECT count(*) from tarif;\n count\n--------\n 122323\n(1 row)\n\n\n\n\nAND there are many more effective algorithms for searching LONGEST PREFIX\nMATCH in a string.. like\nhttp://search.cpan.org/~avif/Tree-Trie-1.1/Trie.pm\nfor example\n\n\n\n\nIs there any ways to improve perfomance?\nMay be implement indexes using Tire algoritm ?\n(if so can you please show me some url's to start...)\n\n\nThanks, Sergey\n\n\n", "msg_date": "Fri, 7 Jul 2006 12:48:49 +0400", "msg_from": "Hripchenko Sergey <[email protected]>", "msg_from_op": true, "msg_subject": "longest prefix match querries" } ]
[ { "msg_contents": "Adaptecs RAID controllers as all underwhelming.\n\nThe best commodity RAID controllers in terms of performance, size of available BBC, connectivity technologies (all of IDE, SCSI, SATA and FC are supported), etc are made by Areca.\n\nGet one of Areca's RAID controllers that hold up to 2 GB of BBC.\nARC-11xx are the PCI-X based products.\nARC-12xx are the PCI-E based products.\n\nReviews at places like tweakers.net\nAreca is based in Taiwan, but has European and US offices as well\n\nRon Peacetree\n\n-----Original Message-----\n>From: Kenji Morishige <[email protected]>\n>Sent: Jul 5, 2006 7:46 PM\n>To: [email protected]\n>Cc: [email protected]\n>Subject: [PERFORM] suggested RAID controller for FreeBSD 6.1 + PostgreSQL 8.1\n>\n>I am currently running FreeBSD 4.11 (due to IT requirements for now) and\n>Adaptec's 2200S RAID controller running in RAID5. I was advised in the past\n>that the 2200S is actually a poor performing controller and obviously the\n>RAID5 is less than ideal for databases. I chose to run the controller in\n>RAID5 as the tech I talked to suggested that the 2200S was primarily designed\n>for RAID5 and it would operate the best that way. My server is a dual Xeon\n> 3.06Ghz box running on a motherboard approximately 2-3 years old now. I'd\n>like to know what an ideal RAID controller that would be compatible with\n>FreeBSD 6.1 would be these days.\n>\n>Thanks in advance,\n>Kenji\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Fri, 7 Jul 2006 09:57:53 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: suggested RAID controller for FreeBSD 6.1 +" } ]
[ { "msg_contents": "I've got a problem where Deletes on a certain table are taking very long (>5 \nsec) (PG 8.1.3, linux). Explain Analyze on the delete shows that two \n(automatically created) triggers caused by foreign keys are responsible for \n99% of the time.\n* The two tables are large (>1.5mm and >400k rows), so sequential scans do \ntake a long time.\n* I've got indices on these tables, but PG doesn't appear to be using them \nduring the delete.\n* If I run the same SELECT in psql, it does use the index and responds very \nquickly.\n\nFor example, I interrupted the Delete, and it appears that it was executing \na select from an FK table:\n SELECT 1 FROM ONLY \"public\".\"party_aliases\" x WHERE \"owner_party_id\" = $1 \nFOR SHARE OF x;\n\nOK, that's fine. There's an index on that column:\n CREATE INDEX party_aliases_owner_party_idx ON party_aliases USING btree \n(owner_party_id, id);\n\nI've run ANALYZE, and that doesn't appear to make any difference. Why would \nPG use the index when I run the select myself, but do a sequential scan when \nthe same statement is run by the delete trigger?\n\nI looked through the mailing lists, but most suggestions appeared to be 1) \ncreate an index, or 2) run analyze. Any ideas?\n\nThanks in advance,\nKian Wright\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today - it's FREE! \nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n", "msg_date": "Fri, 07 Jul 2006 16:59:07 -0500", "msg_from": "\"K-Bob body\" <[email protected]>", "msg_from_op": true, "msg_subject": "Delete is very slow;\n PG not using existing index to check foreign keys" } ]
[ { "msg_contents": "Hi all!\n\nCan anyone explain to me what VACUUM does that REINDEX doesn't? We \nhave a frequently updated table on Postgres 7.4 on FC3 with about \n35000 rows which we VACUUM hourly and VACUUM FULL once per day. It \nseem like the table still slows to a crawl every few weeks. Running \na REINDEX by itself or a VACUUM FULL by itself doesn't seem to help, \nbut running a REINDEX followed immediately by a VACUUM FULL seems to \nsolve the problem.\n\nI'm trying to decide now if we need to include a daily REINDEX along \nwith our daily VACUUM FULL, and more importantly I'm just curious to \nknow why we should or shouldn't do that.\n\nAny information on this subject would be appreciated.\n\n-Scott\n\n", "msg_date": "Fri, 07 Jul 2006 15:29:19 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM vs. REINDEX" }, { "msg_contents": "> I'm trying to decide now if we need to include a daily REINDEX along \n> with our daily VACUUM FULL, and more importantly I'm just curious to \n> know why we should or shouldn't do that.\n> \n> Any information on this subject would be appreciated.\n\nMy understanding is that vaccum full frees all of the unused space from deprecated tuples in the\ntable. This effective reduces that amount of tuples that will be sequencially scanned which\ndeceases sequencial scan times.\n\nreindex rebuilds the index to removed all of the deprecated tuple references. This free memory\nand reduces that time it takes to scan the index.\n\nThats how I understand it.\n\nRegards,\n\nRichard Broersma Jr.\n", "msg_date": "Fri, 7 Jul 2006 16:15:27 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "On Fri, 7 Jul 2006, William Scott Jordan wrote:\n\n> Hi all!\n>\n> Can anyone explain to me what VACUUM does that REINDEX doesn't? We have a \n> frequently updated table on Postgres 7.4 on FC3 with about 35000 rows which \n> we VACUUM hourly and VACUUM FULL once per day. It seem like the table still \n> slows to a crawl every few weeks. Running a REINDEX by itself or a VACUUM \n> FULL by itself doesn't seem to help, but running a REINDEX followed \n> immediately by a VACUUM FULL seems to solve the problem.\n>\n> I'm trying to decide now if we need to include a daily REINDEX along with our \n> daily VACUUM FULL, and more importantly I'm just curious to know why we \n> should or shouldn't do that.\n>\n> Any information on this subject would be appreciated.\n\nWilliam,\n\nIf you're having to VACUUM FULL that often, then it's likely your FSM settings \nare too low. What does the last few lines of VACUUM VERBOSE say? Also, are \nyou running ANALYZE with the vacuums or just running VACUUM? You still need \nto run ANALYZE to update the planner statistics, otherwise things might slowly \ngrind to a halt. Also, you should probably consider setting up autovacuum and \nupgrading to 8.0 or 8.1 for better performance overall.\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 7 Jul 2006 16:18:40 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "Hi Jeff,\n\nWe are running ANALYZE with the hourly VACUUMs. 
Most of the time the \nVACUUM for this table looks like this:\n\n----------------------------\nINFO: vacuuming \"public.event_sums\"\nINFO: index \"event_sums_event_available\" now contains 35669 row \nversions in 1524 pages\nDETAIL: 22736 index row versions were removed.\n1171 index pages have been deleted, 1142 are currently reusable.\nCPU 0.03s/0.04u sec elapsed 0.06 sec.\nINFO: index \"event_sums_date_available\" now contains 35669 row \nversions in 3260 pages\nDETAIL: 22736 index row versions were removed.\n1106 index pages have been deleted, 1086 are currently reusable.\nCPU 0.06s/0.14u sec elapsed 0.20 sec.\nINFO: index \"event_sums_price_available\" now contains 35669 row \nversions in 2399 pages\nDETAIL: 22736 index row versions were removed.\n16 index pages have been deleted, 16 are currently reusable.\nCPU 0.05s/0.13u sec elapsed 0.17 sec.\nINFO: \"event_sums\": removed 22736 row versions in 1175 pages\nDETAIL: CPU 0.03s/0.05u sec elapsed 0.08 sec.\nINFO: \"event_sums\": found 22736 removable, 35669 nonremovable row \nversions in 27866 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 767199 unused item pointers.\n0 pages are entirely empty.\nCPU 0.49s/0.45u sec elapsed 0.93 sec.\n----------------------------\n\nWithout any increase in table traffic, every few weeks, things start \nto look like this:\n\n----------------------------\nINFO: vacuuming \"public.event_sums\"\nINFO: index \"event_sums_event_available\" now contains 56121 row \nversions in 2256 pages\nDETAIL: 102936 index row versions were removed.\n1777 index pages have been deleted, 1635 are currently reusable.\nCPU 0.03s/0.16u sec elapsed 1.04 sec.\nINFO: index \"event_sums_date_available\" now contains 56121 row \nversions in 5504 pages\nDETAIL: 102936 index row versions were removed.\n2267 index pages have been deleted, 2202 are currently reusable.\nCPU 0.15s/0.25u sec elapsed 13.91 sec.\nINFO: index \"event_sums_price_available\" now contains 56121 row \nversions in 4929 pages\nDETAIL: 102936 index row versions were removed.\n149 index pages have been deleted, 149 are currently reusable.\nCPU 0.13s/0.33u sec elapsed 0.51 sec.\nINFO: \"event_sums\": removed 102936 row versions in 3796 pages\nDETAIL: CPU 0.31s/0.26u sec elapsed 0.92 sec.\nINFO: \"event_sums\": found 102936 removable, 35972 nonremovable row \nversions in 170937 pages\nDETAIL: 8008 dead row versions cannot be removed yet.\nThere were 4840134 unused item pointers.\n0 pages are entirely empty.\nCPU 5.13s/1.68u sec elapsed 209.38 sec.\nINFO: analyzing \"public.event_sums\"\nINFO: \"event_sums\": 171629 pages, 3000 rows sampled, 7328 estimated total rows\n----------------------------\n\nThere are a few things in the second vacuum results that catch my \neye, but I don't have the skill set to diagnose the problem. I do \nknow, however, that a REINDEX followed by a VACUUM FULL seems to make \nthe symptoms go away for a while.\n\nAnd I agree that we should upgrade to an 8.x version of PG, but as \nwith many things in life time, money, and risk conspire against me.\n\n-William\n\n\n\n\nAt 04:18 PM 7/7/2006, you wrote:\n>On Fri, 7 Jul 2006, William Scott Jordan wrote:\n>\n>>Hi all!\n>>\n>>Can anyone explain to me what VACUUM does that REINDEX doesn't? We \n>>have a frequently updated table on Postgres 7.4 on FC3 with about \n>>35000 rows which we VACUUM hourly and VACUUM FULL once per day. It \n>>seem like the table still slows to a crawl every few \n>>weeks. 
Running a REINDEX by itself or a VACUUM FULL by itself \n>>doesn't seem to help, but running a REINDEX followed immediately by \n>>a VACUUM FULL seems to solve the problem.\n>>\n>>I'm trying to decide now if we need to include a daily REINDEX \n>>along with our daily VACUUM FULL, and more importantly I'm just \n>>curious to know why we should or shouldn't do that.\n>>\n>>Any information on this subject would be appreciated.\n>\n>William,\n>\n>If you're having to VACUUM FULL that often, then it's likely your \n>FSM settings are too low. What does the last few lines of VACUUM \n>VERBOSE say? Also, are you running ANALYZE with the vacuums or just \n>running VACUUM? You still need to run ANALYZE to update the planner \n>statistics, otherwise things might slowly grind to a halt. Also, \n>you should probably consider setting up autovacuum and upgrading to \n>8.0 or 8.1 for better performance overall.\n>\n>\n>--\n>Jeff Frost, Owner <[email protected]>\n>Frost Consulting, LLC http://www.frostconsultingllc.com/\n>Phone: 650-780-7908 FAX: 650-649-1954\n\n", "msg_date": "Fri, 07 Jul 2006 17:02:53 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "On Fri, 7 Jul 2006, William Scott Jordan wrote:\n\n> Hi Jeff,\n>\n> We are running ANALYZE with the hourly VACUUMs. Most of the time the VACUUM \n> for this table looks like this:\n\n> INFO: vacuuming \"public.event_sums\"\n> INFO: index \"event_sums_event_available\" now contains 56121 row versions in \n> 2256 pages\n> DETAIL: 102936 index row versions were removed.\n> 1777 index pages have been deleted, 1635 are currently reusable.\n> CPU 0.03s/0.16u sec elapsed 1.04 sec.\n> INFO: index \"event_sums_date_available\" now contains 56121 row versions in \n> 5504 pages\n> DETAIL: 102936 index row versions were removed.\n> 2267 index pages have been deleted, 2202 are currently reusable.\n> CPU 0.15s/0.25u sec elapsed 13.91 sec.\n> INFO: index \"event_sums_price_available\" now contains 56121 row versions in \n> 4929 pages\n> DETAIL: 102936 index row versions were removed.\n> 149 index pages have been deleted, 149 are currently reusable.\n> CPU 0.13s/0.33u sec elapsed 0.51 sec.\n> INFO: \"event_sums\": removed 102936 row versions in 3796 pages\n> DETAIL: CPU 0.31s/0.26u sec elapsed 0.92 sec.\n> INFO: \"event_sums\": found 102936 removable, 35972 nonremovable row versions \n> in 170937 pages\n> DETAIL: 8008 dead row versions cannot be removed yet.\n> There were 4840134 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 5.13s/1.68u sec elapsed 209.38 sec.\n> INFO: analyzing \"public.event_sums\"\n> INFO: \"event_sums\": 171629 pages, 3000 rows sampled, 7328 estimated total \n> rows\n\nHmmm..I was looking for something that looks like this:\n\nINFO: free space map: 109 relations, 204 pages stored; 1792 total pages \nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared \nmemory.\nVACUUM\n\nMaybe 7.4 doesn't give this? Or maybe you need to run vacuumdb -a -v to get \nit?\n\n\n\n> ----------------------------\n>\n> There are a few things in the second vacuum results that catch my eye, but I \n> don't have the skill set to diagnose the problem. 
I do know, however, that a \n> REINDEX followed by a VACUUM FULL seems to make the symptoms go away for a \n> while.\n>\n> And I agree that we should upgrade to an 8.x version of PG, but as with many \n> things in life time, money, and risk conspire against me.\n\nYou should still be able to use autovacuum, which might make you a little \nhappier. Which 7.4 version are you using?\n\n\n>\n> -William\n>\n>\n>\n>\n> At 04:18 PM 7/7/2006, you wrote:\n>> On Fri, 7 Jul 2006, William Scott Jordan wrote:\n>> \n>>> Hi all!\n>>> \n>>> Can anyone explain to me what VACUUM does that REINDEX doesn't? We have a \n>>> frequently updated table on Postgres 7.4 on FC3 with about 35000 rows \n>>> which we VACUUM hourly and VACUUM FULL once per day. It seem like the \n>>> table still slows to a crawl every few weeks. Running a REINDEX by itself \n>>> or a VACUUM FULL by itself doesn't seem to help, but running a REINDEX \n>>> followed immediately by a VACUUM FULL seems to solve the problem.\n>>> \n>>> I'm trying to decide now if we need to include a daily REINDEX along with \n>>> our daily VACUUM FULL, and more importantly I'm just curious to know why \n>>> we should or shouldn't do that.\n>>> \n>>> Any information on this subject would be appreciated.\n>> \n>> William,\n>> \n>> If you're having to VACUUM FULL that often, then it's likely your FSM \n>> settings are too low. What does the last few lines of VACUUM VERBOSE say? \n>> Also, are you running ANALYZE with the vacuums or just running VACUUM? You \n>> still need to run ANALYZE to update the planner statistics, otherwise \n>> things might slowly grind to a halt. Also, you should probably consider \n>> setting up autovacuum and upgrading to 8.0 or 8.1 for better performance \n>> overall.\n>> \n>> \n>> --\n>> Jeff Frost, Owner <[email protected]>\n>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>> Phone: 650-780-7908 FAX: 650-649-1954\n>\n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 7 Jul 2006 17:22:08 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "Hi Jeff,\n\nAh, okay. I see what information you were looking for. Doing a \nVACUUM on the full DB, we get the following results:\n\n----------------------------\nINFO: free space map: 885 relations, 8315 pages stored; 177632 total \npages needed\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB \nshared memory.\n----------------------------\n\n-William\n\n\nAt 05:22 PM 7/7/2006, you wrote:\n>On Fri, 7 Jul 2006, William Scott Jordan wrote:\n>\n>>Hi Jeff,\n>>\n>>We are running ANALYZE with the hourly VACUUMs. 
Most of the time \n>>the VACUUM for this table looks like this:\n>\n>>INFO: vacuuming \"public.event_sums\"\n>>INFO: index \"event_sums_event_available\" now contains 56121 row \n>>versions in 2256 pages\n>>DETAIL: 102936 index row versions were removed.\n>>1777 index pages have been deleted, 1635 are currently reusable.\n>>CPU 0.03s/0.16u sec elapsed 1.04 sec.\n>>INFO: index \"event_sums_date_available\" now contains 56121 row \n>>versions in 5504 pages\n>>DETAIL: 102936 index row versions were removed.\n>>2267 index pages have been deleted, 2202 are currently reusable.\n>>CPU 0.15s/0.25u sec elapsed 13.91 sec.\n>>INFO: index \"event_sums_price_available\" now contains 56121 row \n>>versions in 4929 pages\n>>DETAIL: 102936 index row versions were removed.\n>>149 index pages have been deleted, 149 are currently reusable.\n>>CPU 0.13s/0.33u sec elapsed 0.51 sec.\n>>INFO: \"event_sums\": removed 102936 row versions in 3796 pages\n>>DETAIL: CPU 0.31s/0.26u sec elapsed 0.92 sec.\n>>INFO: \"event_sums\": found 102936 removable, 35972 nonremovable row \n>>versions in 170937 pages\n>>DETAIL: 8008 dead row versions cannot be removed yet.\n>>There were 4840134 unused item pointers.\n>>0 pages are entirely empty.\n>>CPU 5.13s/1.68u sec elapsed 209.38 sec.\n>>INFO: analyzing \"public.event_sums\"\n>>INFO: \"event_sums\": 171629 pages, 3000 rows sampled, 7328 \n>>estimated total rows\n>\n>Hmmm..I was looking for something that looks like this:\n>\n>INFO: free space map: 109 relations, 204 pages stored; 1792 total \n>pages needed\n>DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB \n>shared memory.\n>VACUUM\n>\n>Maybe 7.4 doesn't give this? Or maybe you need to run vacuumdb -a \n>-v to get it?\n>\n>\n>\n>>----------------------------\n>>\n>>There are a few things in the second vacuum results that catch my \n>>eye, but I don't have the skill set to diagnose the problem. I do \n>>know, however, that a REINDEX followed by a VACUUM FULL seems to \n>>make the symptoms go away for a while.\n>>\n>>And I agree that we should upgrade to an 8.x version of PG, but as \n>>with many things in life time, money, and risk conspire against me.\n>\n>You should still be able to use autovacuum, which might make you a \n>little happier. Which 7.4 version are you using?\n>\n>\n>>\n>>-William\n>>\n>>\n>>\n>>\n>>At 04:18 PM 7/7/2006, you wrote:\n>>>On Fri, 7 Jul 2006, William Scott Jordan wrote:\n>>>\n>>>>Hi all!\n>>>>Can anyone explain to me what VACUUM does that REINDEX \n>>>>doesn't? We have a frequently updated table on Postgres 7.4 on \n>>>>FC3 with about 35000 rows which we VACUUM hourly and VACUUM FULL \n>>>>once per day. It seem like the table still slows to a crawl \n>>>>every few weeks. Running a REINDEX by itself or a VACUUM FULL by \n>>>>itself doesn't seem to help, but running a REINDEX followed \n>>>>immediately by a VACUUM FULL seems to solve the problem.\n>>>>I'm trying to decide now if we need to include a daily REINDEX \n>>>>along with our daily VACUUM FULL, and more importantly I'm just \n>>>>curious to know why we should or shouldn't do that.\n>>>>Any information on this subject would be appreciated.\n>>>William,\n>>>If you're having to VACUUM FULL that often, then it's likely your \n>>>FSM settings are too low. What does the last few lines of VACUUM \n>>>VERBOSE say? Also, are you running ANALYZE with the vacuums or \n>>>just running VACUUM? You still need to run ANALYZE to update the \n>>>planner statistics, otherwise things might slowly grind to a \n>>>halt. 
Also, you should probably consider setting up autovacuum \n>>>and upgrading to 8.0 or 8.1 for better performance overall.\n>>>\n>>>--\n>>>Jeff Frost, Owner <[email protected]>\n>>>Frost Consulting, LLC http://www.frostconsultingllc.com/\n>>>Phone: 650-780-7908 FAX: 650-649-1954\n>>\n>>\n>\n>--\n>Jeff Frost, Owner <[email protected]>\n>Frost Consulting, LLC http://www.frostconsultingllc.com/\n>Phone: 650-780-7908 FAX: 650-649-1954\n\n", "msg_date": "Fri, 07 Jul 2006 17:48:09 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "On Friday 07 July 2006 17:48, William Scott Jordan wrote:\n> Hi Jeff,\n>\n> Ah, okay. I see what information you were looking for. Doing a\n> VACUUM on the full DB, we get the following results:\n>\n> ----------------------------\n> INFO: free space map: 885 relations, 8315 pages stored; 177632 total\n> pages needed\n> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n> shared memory.\n> ----------------------------\n>\n\nThere is one problem right there. Your max_fsm_pages is not enough, or at \nleast you aren't vacuuming enough.\n\nEither increase your max_fsm_pages or vacuum more often.\n\nAlso, honestly -- upgrade to 8.1 :)\n\nJoshua D. Drake\n\n\n> -William\n>\n> At 05:22 PM 7/7/2006, you wrote:\n> >On Fri, 7 Jul 2006, William Scott Jordan wrote:\n> >>Hi Jeff,\n> >>\n> >>We are running ANALYZE with the hourly VACUUMs. Most of the time\n> >>the VACUUM for this table looks like this:\n> >>\n> >>INFO: vacuuming \"public.event_sums\"\n> >>INFO: index \"event_sums_event_available\" now contains 56121 row\n> >>versions in 2256 pages\n> >>DETAIL: 102936 index row versions were removed.\n> >>1777 index pages have been deleted, 1635 are currently reusable.\n> >>CPU 0.03s/0.16u sec elapsed 1.04 sec.\n> >>INFO: index \"event_sums_date_available\" now contains 56121 row\n> >>versions in 5504 pages\n> >>DETAIL: 102936 index row versions were removed.\n> >>2267 index pages have been deleted, 2202 are currently reusable.\n> >>CPU 0.15s/0.25u sec elapsed 13.91 sec.\n> >>INFO: index \"event_sums_price_available\" now contains 56121 row\n> >>versions in 4929 pages\n> >>DETAIL: 102936 index row versions were removed.\n> >>149 index pages have been deleted, 149 are currently reusable.\n> >>CPU 0.13s/0.33u sec elapsed 0.51 sec.\n> >>INFO: \"event_sums\": removed 102936 row versions in 3796 pages\n> >>DETAIL: CPU 0.31s/0.26u sec elapsed 0.92 sec.\n> >>INFO: \"event_sums\": found 102936 removable, 35972 nonremovable row\n> >>versions in 170937 pages\n> >>DETAIL: 8008 dead row versions cannot be removed yet.\n> >>There were 4840134 unused item pointers.\n> >>0 pages are entirely empty.\n> >>CPU 5.13s/1.68u sec elapsed 209.38 sec.\n> >>INFO: analyzing \"public.event_sums\"\n> >>INFO: \"event_sums\": 171629 pages, 3000 rows sampled, 7328\n> >>estimated total rows\n> >\n> >Hmmm..I was looking for something that looks like this:\n> >\n> >INFO: free space map: 109 relations, 204 pages stored; 1792 total\n> >pages needed\n> >DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB\n> >shared memory.\n> >VACUUM\n> >\n> >Maybe 7.4 doesn't give this? Or maybe you need to run vacuumdb -a\n> >-v to get it?\n> >\n> >>----------------------------\n> >>\n> >>There are a few things in the second vacuum results that catch my\n> >>eye, but I don't have the skill set to diagnose the problem. 
I do\n> >>know, however, that a REINDEX followed by a VACUUM FULL seems to\n> >>make the symptoms go away for a while.\n> >>\n> >>And I agree that we should upgrade to an 8.x version of PG, but as\n> >>with many things in life time, money, and risk conspire against me.\n> >\n> >You should still be able to use autovacuum, which might make you a\n> >little happier. Which 7.4 version are you using?\n> >\n> >>-William\n> >>\n> >>At 04:18 PM 7/7/2006, you wrote:\n> >>>On Fri, 7 Jul 2006, William Scott Jordan wrote:\n> >>>>Hi all!\n> >>>>Can anyone explain to me what VACUUM does that REINDEX\n> >>>>doesn't? We have a frequently updated table on Postgres 7.4 on\n> >>>>FC3 with about 35000 rows which we VACUUM hourly and VACUUM FULL\n> >>>>once per day. It seem like the table still slows to a crawl\n> >>>>every few weeks. Running a REINDEX by itself or a VACUUM FULL by\n> >>>>itself doesn't seem to help, but running a REINDEX followed\n> >>>>immediately by a VACUUM FULL seems to solve the problem.\n> >>>>I'm trying to decide now if we need to include a daily REINDEX\n> >>>>along with our daily VACUUM FULL, and more importantly I'm just\n> >>>>curious to know why we should or shouldn't do that.\n> >>>>Any information on this subject would be appreciated.\n> >>>\n> >>>William,\n> >>>If you're having to VACUUM FULL that often, then it's likely your\n> >>>FSM settings are too low. What does the last few lines of VACUUM\n> >>>VERBOSE say? Also, are you running ANALYZE with the vacuums or\n> >>>just running VACUUM? You still need to run ANALYZE to update the\n> >>>planner statistics, otherwise things might slowly grind to a\n> >>>halt. Also, you should probably consider setting up autovacuum\n> >>>and upgrading to 8.0 or 8.1 for better performance overall.\n> >>>\n> >>>--\n> >>>Jeff Frost, Owner <[email protected]>\n> >>>Frost Consulting, LLC http://www.frostconsultingllc.com/\n> >>>Phone: 650-780-7908 FAX: 650-649-1954\n> >\n> >--\n> >Jeff Frost, Owner <[email protected]>\n> >Frost Consulting, LLC http://www.frostconsultingllc.com/\n> >Phone: 650-780-7908 FAX: 650-649-1954\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Fri, 7 Jul 2006 18:15:40 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "On 7/7/06, William Scott Jordan <[email protected]> wrote:\n>\n> Hi Jeff,\n>\n> Ah, okay. I see what information you were looking for. Doing a\n> VACUUM on the full DB, we get the following results:\n>\n> ----------------------------\n> INFO: free space map: 885 relations, 8315 pages stored; 177632 total\n> pages needed\n> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n> shared memory.\n> ----------------------------\n>\n> -William\n\nWilliam,\n\nYou need to increase your fsm settings. The database is telling you it is\ntrying to store 177K+ pages, but you have only provided it with 20K. Since\nthese pages are cheap, I would set your fsm up with at least the following.\n\nmax_fsm_pages 500000\nmax_fsm_relations 5000\n\nThis should provide PostgreSQL with enough space to work. 
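To spell out where those go (only a sketch - pick values that comfortably
exceed whatever your own VACUUM VERBOSE reports as needed), the relevant
postgresql.conf lines would look roughly like this, and I believe both of
these only take effect after a full postmaster restart, not a plain reload:

max_fsm_pages = 500000        # should be larger than the total-pages-needed
                              # figure printed at the end of VACUUM VERBOSE
max_fsm_relations = 5000      # one slot per table/index with free space to track
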
You still might\nneed to run one more vacuum full once you change the setting so that you can\nrecover the space that was lost due to your fsm begin to small. Keep an eye\non these last couple of lines from vacuum and adjust your setting\naccordingly. It may take a couple of tries to get PostgreSQL happy. Once\nyour fsm is large enough, you should be able to dispense with the vacuum\nfulls and reindexes and just do normal vacuuming.\n\nAlso in regards to the vacuum vs reindex. Reindexing is great and gives you\nnice clean \"virgin\" indexes, however, if you do not run an analyze (or\nvacuum analyze), the database will not have statistics for the new indexes.\nThis will cause the planner to make bad choices.\n\nWhat I used to do before upgrading to 8.1 was run a vacuum full, reindexdb,\nvacuum analyze every weekend (we were on 7.3.4). This gave me pristine\nindexes and tables for Monday's start of the week.\n\nIf you can, look hard at upgrading to 8.1.x as it will fix a lot of the\nissues you are having with autovacuum (along with a ton of other\nimprovements).\n\nHTH,\n\nChris\n\nOn 7/7/06, William Scott Jordan <[email protected]> wrote:\nHi Jeff,Ah, okay.  I see what information you were looking for.  Doing aVACUUM on the full DB, we get the following results:----------------------------INFO:  free space map: 885 relations, 8315 pages stored; 177632 total\npages neededDETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 178 kBshared memory.-----------------------------WilliamWilliam,You need to increase your fsm settings.  The database is telling you it is trying to store 177K+ pages, but you have only provided it with 20K.  Since these pages are cheap, I would set your fsm up with at least the following.\nmax_fsm_pages 500000max_fsm_relations 5000This should provide PostgreSQL with enough space to work.  You still might need to run one more vacuum full once you change the setting so that you can recover the space that was lost due to your fsm begin to small.  Keep an eye on these last couple of lines from vacuum and adjust your setting accordingly.  It may take a couple of tries to get PostgreSQL happy.  Once your fsm is large enough, you should be able to dispense with the vacuum fulls and reindexes and just do normal vacuuming.\nAlso in regards to the vacuum vs reindex.  Reindexing is great and gives you nice clean \"virgin\" indexes, however, if you do not run an analyze (or vacuum analyze), the database will not have statistics for the new indexes.  This will cause the planner to make bad choices.\nWhat I used to do before upgrading to 8.1 was run a vacuum full, reindexdb, vacuum analyze every weekend (we were on 7.3.4).  This gave me pristine indexes and tables for Monday's start of the week.If you can, look hard at upgrading to \n8.1.x as it will fix a lot of the issues you are having with autovacuum (along with a ton of other improvements).HTH,Chris", "msg_date": "Fri, 7 Jul 2006 21:28:52 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "\n> William,\n>\n> You need to increase your fsm settings. The database is telling you it is\n> trying to store 177K+ pages, but you have only provided it with 20K. Since\n> these pages are cheap, I would set your fsm up with at least the following.\n>\n> max_fsm_pages 500000\n> max_fsm_relations 5000\n>\n> This should provide PostgreSQL with enough space to work. 
You still might\n> need to run one more vacuum full once you change the setting so that you\n> can recover the space that was lost due to your fsm begin to small. \nYes he will need to run a vacuum full but I actually doubt he needs to \nincrease his max_fsm_pages that much, he just needs to vacuum more.\n\nJoshua D. Drake\n", "msg_date": "Fri, 7 Jul 2006 20:24:24 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "On Fri, Jul 07, 2006 at 09:28:52PM -0400, Chris Hoover wrote:\n> You need to increase your fsm settings. The database is telling you it is\n> trying to store 177K+ pages, but you have only provided it with 20K. Since\n> these pages are cheap, I would set your fsm up with at least the following.\n\nWhile we're at it, is there a good reason why we simply aren't upping the FSM\ndefaults? It seems like a lot of people are being bitten by it, and adding\nmore pages and relations is as you say cheap...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 8 Jul 2006 10:13:16 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Fri, Jul 07, 2006 at 09:28:52PM -0400, Chris Hoover wrote:\n>> You need to increase your fsm settings. The database is telling you it is\n>> trying to store 177K+ pages, but you have only provided it with 20K. Since\n>> these pages are cheap, I would set your fsm up with at least the following.\n> \n> While we're at it, is there a good reason why we simply aren't upping the FSM\n> defaults? It seems like a lot of people are being bitten by it, and adding\n> more pages and relations is as you say cheap...\n\nthat is already done in -HEAD at the initdb stage:\n\n...\nselecting default shared_buffers/max_fsm_pages ... 4000/200000\n...\n\nStefan\n", "msg_date": "Sat, 08 Jul 2006 11:16:46 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs. REINDEX" } ]
[ { "msg_contents": "Hi,\n\nI am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz\nwith 1GB of RAM) and seeing very high CPU usage (normally over 90%)\nwhen I am running the following queries, and the queries take a long\ntime to return; over an hour!\n\nCREATE TEMPORARY TABLE fttemp1600384653 AS SELECT * FROM ftoneway LIMIT 0;\n\nINSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,\n'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,\nSUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <\nTO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\nHH24:00:00.0')::timestamp;\n\nDELETE FROM ONLY ftone WHERE ftoneway.epId= fttemp1600384653.epId;\n\nThe only changes I've made to the default postgresql.comf file are listed below:\n\nLC_MESSAGES = 'en_US'\nLC_MONETARY = 'en_US'\nLC_NUMERIC = 'en_US'\nLC_TIME = 'en_US'\ntcpip_socket = true\nmax_connections = 20\neffective_cache_size = 32768\nwal_buffers = 128\nfsync = false\nshared_buffers = 3000\nmax_fsm_relations = 10000\nmax_fsm_pages = 100000\n\nThe tables are around a million rows but when when I run against\ntables of a few hundred thousand rows it still takes tens of minutes\nwith high CPU. My database does have a lot of tables (can be several\nthousand), can that cause performance issues?\n\nThanks,\n Neil\n", "msg_date": "Mon, 10 Jul 2006 10:52:42 +1000", "msg_from": "\"Neil Hepworth\" <[email protected]>", "msg_from_op": true, "msg_subject": "High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Mon, 10 Jul 2006, Neil Hepworth wrote:\n\n> I am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz\n> with 1GB of RAM) and seeing very high CPU usage (normally over 90%)\n> when I am running the following queries, and the queries take a long\n> time to return; over an hour!\n\nFirst off, when is the last time you vacuum analyzed this DB and how often \ndoes the vacuum analyze happen. Please post the EXPLAIN ANALYZE output for \neach of the queries below.\n\nAlso, I would strongly urge you to upgrade to a more recent version of \npostgresql. We're currently up to 8.1.4 and it has tons of excellent \nperformance enhancements as well as helpful features such as integrated \nautovacuum, point in time recovery backups, etc.\n\nAlso, I see that you're running with fsync = false. That's quite dangerous \nespecially on a production system.\n\n\n>\n> CREATE TEMPORARY TABLE fttemp1600384653 AS SELECT * FROM ftoneway LIMIT 0;\n>\n> INSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,\n> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,\n> SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <\n> TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\n> HH24:00:00.0')::timestamp;\n>\n> DELETE FROM ONLY ftone WHERE ftoneway.epId= fttemp1600384653.epId;\n>\n> The only changes I've made to the default postgresql.comf file are listed \n> below:\n>\n> LC_MESSAGES = 'en_US'\n> LC_MONETARY = 'en_US'\n> LC_NUMERIC = 'en_US'\n> LC_TIME = 'en_US'\n> tcpip_socket = true\n> max_connections = 20\n> effective_cache_size = 32768\n> wal_buffers = 128\n> fsync = false\n> shared_buffers = 3000\n> max_fsm_relations = 10000\n> max_fsm_pages = 100000\n>\n> The tables are around a million rows but when when I run against\n> tables of a few hundred thousand rows it still takes tens of minutes\n> with high CPU. 
My database does have a lot of tables (can be several\n> thousand), can that cause performance issues?\n>\n> Thanks,\n> Neil\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Sun, 9 Jul 2006 18:24:57 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "Thanks for the reply.\n\nThe database is vacuum analysed regularly and during my testing I\ntried running the vacuum analyse full immediately before the running\nthrough the set of queries (which does help a bit - reduces the time\nto about 80% but is is still over an hour, with basically 100% CPU).\n\nI'll get back to you with the full explain analyse output (I need to\nre-create my test database back to its original state and that takes a\nwhile) but I assume the part you're after is that all queries are\nsequential scans, which I initially thought was the problem. But it\nis my understanding that I cannot make them index scans because a\nlarge percentage of the table is being returned by the query\n(typically 30%) so the planner will favour a sequential scan over an\nindex scan for such a query, correct? If the queries had been disk\nbound (due to retrieving large amounts of data) I would have\nunderstood but I am confused as to why a sequential scan would cause\nsuch high CPU and not high disk activity.\n\nYes, I wish I could upgrade to the latest version of PostgreSQL but at\nthe moment my hands are tied due to dependencies on other applications\nrunning on our server (obviously we need to update certain queries,\ne.g. delete .. using.. and test with 8.1 first) - I will be pushing\nfor an upgrade as soon as possible. And the fsync=false is a\n\"compromise\" to try to improve performance (moving to 8.1 would be\nbetter compromise).\n\nNeil\n\n\nOn 10/07/06, Jeff Frost <[email protected]> wrote:\n> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n>\n> > I am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz\n> > with 1GB of RAM) and seeing very high CPU usage (normally over 90%)\n> > when I am running the following queries, and the queries take a long\n> > time to return; over an hour!\n>\n> First off, when is the last time you vacuum analyzed this DB and how often\n> does the vacuum analyze happen. Please post the EXPLAIN ANALYZE output for\n> each of the queries below.\n>\n> Also, I would strongly urge you to upgrade to a more recent version of\n> postgresql. We're currently up to 8.1.4 and it has tons of excellent\n> performance enhancements as well as helpful features such as integrated\n> autovacuum, point in time recovery backups, etc.\n>\n> Also, I see that you're running with fsync = false. 
That's quite dangerous\n> especially on a production system.\n>\n>\n> >\n> > CREATE TEMPORARY TABLE fttemp1600384653 AS SELECT * FROM ftoneway LIMIT 0;\n> >\n> > INSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,\n> > 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,\n> > SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <\n> > TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\n> > HH24:00:00.0')::timestamp;\n> >\n> > DELETE FROM ONLY ftone WHERE ftoneway.epId= fttemp1600384653.epId;\n> >\n> > The only changes I've made to the default postgresql.comf file are listed\n> > below:\n> >\n> > LC_MESSAGES = 'en_US'\n> > LC_MONETARY = 'en_US'\n> > LC_NUMERIC = 'en_US'\n> > LC_TIME = 'en_US'\n> > tcpip_socket = true\n> > max_connections = 20\n> > effective_cache_size = 32768\n> > wal_buffers = 128\n> > fsync = false\n> > shared_buffers = 3000\n> > max_fsm_relations = 10000\n> > max_fsm_pages = 100000\n> >\n> > The tables are around a million rows but when when I run against\n> > tables of a few hundred thousand rows it still takes tens of minutes\n> > with high CPU. My database does have a lot of tables (can be several\n> > thousand), can that cause performance issues?\n> >\n> > Thanks,\n> > Neil\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n> >\n>\n> --\n> Jeff Frost, Owner <[email protected]>\n> Frost Consulting, LLC http://www.frostconsultingllc.com/\n> Phone: 650-780-7908 FAX: 650-649-1954\n>\n", "msg_date": "Mon, 10 Jul 2006 17:55:38 +1000", "msg_from": "\"Neil Hepworth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "I should also explain that I run through these queries on multiple\ntables and with some slightly different parameters for the\n\"consolidation\" so I run through those 3 queries (or similar) 9 times\nand this takes a total of about 2 hours, with high CPU usage. And I\nam running the queries from a remote Java application (using JDBC),\nthe client is using postgresql-8.0-311.jdbc3.jar. The explain analyse\nresults I have provided below are from running via pgAdmin, not the\nJava app (I did a vacuum analyse of the db before running them):\n\n\n*** For the create ***:\n\n-- Executing query:\n\nBEGIN;\nEXPLAIN ANALYZE CREATE TABLE fttemp1643 AS SELECT * FROM ftone LIMIT 0;\n;\nROLLBACK;\n\nERROR: parser: parse error at or near \"CREATE\" at character 25\n\nNow that surprised me! I hadn't done an explain on that query before\nas it was so simple. Perhaps not permitted for creates? 
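(My guess is that if I actually wanted a plan for that step I'd have to
EXPLAIN just the SELECT half on its own, something along the lines of

EXPLAIN ANALYZE SELECT * FROM ftone LIMIT 0;

though that's only a guess, and with the LIMIT 0 there's nothing interesting
for the planner to do anyway.)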
If I just\nrun the create:\n\n-- Executing query:\nCREATE TABLE fttemp1643 AS SELECT * FROM ftone LIMIT 0;\n\n\nQuery returned successfully with no result in 48 ms.\n\n\n\n*** For the insert ***:\n\nSubquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\nwidth=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\ntime=16861.72..34243.63 rows=560094 loops=1)\n -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n(actual time=16861.62..20920.12 rows=709461 loops=1)\n -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n(actual time=16861.62..18081.07 rows=709461 loops=1)\n Sort Key: eppairdefnid, \"start\"\n -> Seq Scan on ftone (cost=0.00..36446.66\nrows=234827 width=16) (actual time=0.45..10320.91 rows=709461 loops=1)\n Filter: ((consolidation = 60) AND (\"start\" <\n(to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\nTotal runtime: 55378.68 msec\n\n\n*** For the delete ***:\n\nHash Join (cost=0.00..30020.31 rows=425 width=14) (actual\ntime=3767.47..3767.47 rows=0 loops=1)\n Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n(actual time=0.04..2299.94 rows=1286333 loops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\ntime=206.01..206.01 rows=0 loops=1)\n -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\nwidth=4) (actual time=206.00..206.00 rows=0 loops=1)\nTotal runtime: 3767.52 msec\n\n\nThanks,\nNeil\n\nOn 10/07/06, Neil Hepworth <[email protected]> wrote:\n> Thanks for the reply.\n>\n> The database is vacuum analysed regularly and during my testing I\n> tried running the vacuum analyse full immediately before the running\n> through the set of queries (which does help a bit - reduces the time\n> to about 80% but is is still over an hour, with basically 100% CPU).\n>\n> I'll get back to you with the full explain analyse output (I need to\n> re-create my test database back to its original state and that takes a\n> while) but I assume the part you're after is that all queries are\n> sequential scans, which I initially thought was the problem. But it\n> is my understanding that I cannot make them index scans because a\n> large percentage of the table is being returned by the query\n> (typically 30%) so the planner will favour a sequential scan over an\n> index scan for such a query, correct? If the queries had been disk\n> bound (due to retrieving large amounts of data) I would have\n> understood but I am confused as to why a sequential scan would cause\n> such high CPU and not high disk activity.\n>\n> Yes, I wish I could upgrade to the latest version of PostgreSQL but at\n> the moment my hands are tied due to dependencies on other applications\n> running on our server (obviously we need to update certain queries,\n> e.g. delete .. using.. and test with 8.1 first) - I will be pushing\n> for an upgrade as soon as possible. 
And the fsync=false is a\n> \"compromise\" to try to improve performance (moving to 8.1 would be\n> better compromise).\n>\n> Neil\n>\n>\n> On 10/07/06, Jeff Frost <[email protected]> wrote:\n> > On Mon, 10 Jul 2006, Neil Hepworth wrote:\n> >\n> > > I am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz\n> > > with 1GB of RAM) and seeing very high CPU usage (normally over 90%)\n> > > when I am running the following queries, and the queries take a long\n> > > time to return; over an hour!\n> >\n> > First off, when is the last time you vacuum analyzed this DB and how often\n> > does the vacuum analyze happen. Please post the EXPLAIN ANALYZE output for\n> > each of the queries below.\n> >\n> > Also, I would strongly urge you to upgrade to a more recent version of\n> > postgresql. We're currently up to 8.1.4 and it has tons of excellent\n> > performance enhancements as well as helpful features such as integrated\n> > autovacuum, point in time recovery backups, etc.\n> >\n> > Also, I see that you're running with fsync = false. That's quite dangerous\n> > especially on a production system.\n> >\n> >\n> > >\n> > > CREATE TEMPORARY TABLE fttemp1600384653 AS SELECT * FROM ftoneway LIMIT 0;\n> > >\n> > > INSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,\n> > > 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,\n> > > SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <\n> > > TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\n> > > HH24:00:00.0')::timestamp;\n> > >\n> > > DELETE FROM ONLY ftone WHERE ftoneway.epId= fttemp1600384653.epId;\n> > >\n> > > The only changes I've made to the default postgresql.comf file are listed\n> > > below:\n> > >\n> > > LC_MESSAGES = 'en_US'\n> > > LC_MONETARY = 'en_US'\n> > > LC_NUMERIC = 'en_US'\n> > > LC_TIME = 'en_US'\n> > > tcpip_socket = true\n> > > max_connections = 20\n> > > effective_cache_size = 32768\n> > > wal_buffers = 128\n> > > fsync = false\n> > > shared_buffers = 3000\n> > > max_fsm_relations = 10000\n> > > max_fsm_pages = 100000\n> > >\n> > > The tables are around a million rows but when when I run against\n> > > tables of a few hundred thousand rows it still takes tens of minutes\n> > > with high CPU. My database does have a lot of tables (can be several\n> > > thousand), can that cause performance issues?\n> > >\n> > > Thanks,\n> > > Neil\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/docs/faq\n> > >\n> > >\n> >\n> > --\n> > Jeff Frost, Owner <[email protected]>\n> > Frost Consulting, LLC http://www.frostconsultingllc.com/\n> > Phone: 650-780-7908 FAX: 650-649-1954\n> >\n>\n", "msg_date": "Mon, 10 Jul 2006 19:12:51 +1000", "msg_from": "\"Neil Hepworth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "\n\nOn Mon, 10 Jul 2006, Neil Hepworth wrote:\n\n> I should also explain that I run through these queries on multiple\n> tables and with some slightly different parameters for the\n> \"consolidation\" so I run through those 3 queries (or similar) 9 times\n> and this takes a total of about 2 hours, with high CPU usage. And I\n> am running the queries from a remote Java application (using JDBC),\n> the client is using postgresql-8.0-311.jdbc3.jar. 
The explain analyse\n> results I have provided below are from running via pgAdmin, not the\n> Java app (I did a vacuum analyse of the db before running them):\n>\n>\n\nNeil, did you ever answer which version of 7.3 this is?\n\nBTW, you mentioned that this takes 2 hours, but even looping over this 9 times \nseems like it would only take 9 minutes (55 seconds for the SELECT and 4 \nseconds for the DELETE = 59 seconds times 9). Perhaps you should post the \nexplain analyze for the actual query that takes so long as the planner output \nwill likely be quite different.\n\nOne thing I noticed is that the planner seems quite incorrect about the number \nof rows it expects in the SELECT. If you ran vacuum analyze before this, \nperhaps your fsm settings are incorrect? What does vacuumdb -a -v output at \nthe end? I'm looking for something that looks like this:\n\nINFO: free space map: 109 relations, 204 pages stored; 1792 total pages \nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared \nmemory.\n\nI see your fsm settings are non-default, so it's also possible I'm not used to \nreading 7.3's explain analyze output. :-)\n\nAlso, what does vmstat output look like while the query is running? Perhaps \nyou're running into some context switching problems. It would be interesting \nto know how the query runs on 8.1.x just to know if we're chasing an \noptimization that's fixed already in a later version.\n\n\n> Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n> width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n> -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n> time=16861.72..34243.63 rows=560094 loops=1)\n> -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n> (actual time=16861.62..20920.12 rows=709461 loops=1)\n> -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n> (actual time=16861.62..18081.07 rows=709461 loops=1)\n> Sort Key: eppairdefnid, \"start\"\n> -> Seq Scan on ftone (cost=0.00..36446.66\n> rows=234827 width=16) (actual time=0.45..10320.91 rows=709461 loops=1)\n> Filter: ((consolidation = 60) AND (\"start\" <\n> (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n> 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n> Total runtime: 55378.68 msec\n\n> *** For the delete ***:\n>\n> Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n> time=3767.47..3767.47 rows=0 loops=1)\n> Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n> -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n> (actual time=0.04..2299.94 rows=1286333 loops=1)\n> -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n> time=206.01..206.01 rows=0 loops=1)\n> -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n> width=4) (actual time=206.00..206.00 rows=0 loops=1)\n> Total runtime: 3767.52 msec\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Mon, 10 Jul 2006 08:26:51 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Sun, 2006-07-09 at 19:52, Neil Hepworth wrote:\n> Hi,\n> \n> I am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz\n> with 1GB of RAM) and seeing very high CPU usage (normally over 90%)\n> when I am running the following queries, and the queries take a long\n> time to return; over an hour!\n> \n> CREATE TEMPORARY TABLE fttemp1600384653 AS 
SELECT * FROM ftoneway LIMIT 0;\n> \n> INSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,\n> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,\n> SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <\n> TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\n> HH24:00:00.0')::timestamp;\n\nI don't need to see an explain analyze to make a guess here...\n\nstart < TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\nHH24:00:00.0')::timestamp\n\nis gonna be a problem because while you and I know that to_timestamp...\nis gonna be a constant, pg 7.3 doesn't. I've run into this before.\n\nJust run a query ahead of time with a simple:\n\nselect TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD\nHH24:00:00.0')::timestamp as starttime\n\nand then pull that out and stick it into your query. do the same for\nany other parts of the query like that.\n\nThat's assuming the issue here is that you're getting seq scans cause of\nthat part of the query.\n", "msg_date": "Mon, 10 Jul 2006 11:04:18 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Mon, Jul 10, 2006 at 17:55:38 +1000,\n Neil Hepworth <[email protected]> wrote:\n> \n> running on our server (obviously we need to update certain queries,\n> e.g. delete .. using.. and test with 8.1 first) - I will be pushing\n> for an upgrade as soon as possible. And the fsync=false is a\n\nYou can set add_missing_from if you want to delay rewriting queries that\nuse that feature.\n", "msg_date": "Tue, 11 Jul 2006 20:37:21 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "Thanks for the tip; I'll try that when we initially upgrade, hopefully soon.\n\nNeil\n\n\nOn 12/07/06, Bruno Wolff III <[email protected]> wrote:\n>\n> On Mon, Jul 10, 2006 at 17:55:38 +1000,\n> Neil Hepworth <[email protected]> wrote:\n> >\n> > running on our server (obviously we need to update certain queries,\n> > e.g. delete .. using.. and test with 8.1 first) - I will be pushing\n> > for an upgrade as soon as possible. And the fsync=false is a\n>\n> You can set add_missing_from if you want to delay rewriting queries that\n> use that feature.\n>\n\nThanks for the tip; I'll try that when we initially upgrade, hopefully soon.\n \nNeil \nOn 12/07/06, Bruno Wolff III <[email protected]> wrote:\nOn Mon, Jul 10, 2006 at 17:55:38 +1000,Neil Hepworth <[email protected]\n> wrote:>> running on our server (obviously we need to update certain queries,> e.g. delete .. using.. and test with 8.1 first) - I will be pushing> for an upgrade as soon as possible.  And the fsync=false is a\nYou can set add_missing_from if you want to delay rewriting queries thatuse that feature.", "msg_date": "Wed, 12 Jul 2006 13:10:50 +1000", "msg_from": "\"Neil Hepworth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Wed, 12 Jul 2006, Neil Hepworth wrote:\n\n> I am using version PostgreSQL 7.3.10 (RPM:\n> postgresql73-rhel21-7.3.10-2). Unfortunately vacuumdb -a -v does not\n> give the FSM info at the end (need a newer version of postgres for\n> that). 
Running the same queries on 8.1 reduces the time taken to\n> about 16 minutes, though I didn't run the test on the same hardware or\n> OS as I want to keep my test server as close to production as\n> possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino\n> Duo with 2GB of RAM, yes the laptop is brand new :).\n\nWell, looks like you're at least fairly up to date, but there is a fix in \n7.3.11 that you might want to get by upgrading to 7.3.15:\n\n * Fix race condition in transaction log management\n There was a narrow window in which an I/O operation could be\n initiated for the wrong page, leading to an Assert failure or data\n corruption.\n\nIt also appears that you can run autovacuum with 7.3 (I thought maybe it only \nwent back as far as 7.4).\n\nSo, is the 16 minutes on your laptop with 8.1 for windows vs 1hr on the server \nfor the whole set of loops? If so, 4x isn't a bad improvement. :-) So, \nassuming you dumped/loaded the same DB onto your laptop's postgresql server, \nwhat does the vacuumdb -a -v say on the laptop? Perhaps we can use it to see \nif your fsm settings are ok.\n\nBTW, did you see Scott's posting here:\n\nhttp://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php\n\nSince we didn't hear from you for a while, I thought perhaps Scott had hit on \nthe fix. Have you tried that yet? It certainly would help the planner out.\n\nYou might also want to turn on autovacuum and see if that helps.\n\nWhat's your disk subsystem like? In fact, what's the entire DB server \nhardware like?\n\n>\n> I run through a loop, executing the following or similar queries 8\n> times (well actually 12 but the last 4 don't do anything) - Jeff I've\n> attached complete outputs as files.\n>\n> A debug output further below (numbers after each method call name,\n> above each SQL statement, are times to run that statement in\n> milliseconds, the times on the lines \"\" are cumulative). 
So total for\n> one loop is 515 seconds, multiple by 8 and that gets me to over an\n> hour); it is actually the deletes that take the most time; 179 seconds\n> and 185 seconds each loop.\n>\n> ----------------------------------------------------\n>\n> CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0\n> INSERT INTO fttemp670743219 ( epId, start, direction, classid,\n> consolidation, cnt ) SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD\n> HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS\n> consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND\n> start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD\n> HH24:00:00.0')::timestamp GROUP BY epId, direction,\n> TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid\n> DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND\n> ftone.direction = fttemp670743219.direction AND ftone.start =\n> fttemp670743219.start AND ftone.consolidation =\n> fttemp670743219.consolidation AND ftone.classid =\n> fttemp670743219.classid\n> INSERT INTO ftone ( epId, start, consolidation, direction, classid,\n> cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM\n> fttemp670743219\n> DROP TABLE fttemp670743219\n> DELETE FROM ftone WHERE consolidation = 0 AND start <\n> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',\n> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n> 'YYYY-MM-DD 00:00:00.0')::timestamp\n>\n> ----------------------------------------------------\n>\n> ftone: 0:\n> createConsolidatedInTemporary: 188:\n> CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0\n> createConsolidatedInTemporary: 76783:\n> INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,\n> consolidation, cnt ) SELECT epPairdefnid, TO_TIMESTAMP(start,\n> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, direction, classid, 60\n> AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0\n> AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n> 'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,\n> direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,\n> classid\n> replaceConsolidatedInMainTable: 179178:\n> DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =\n> fttemp678233382.epPairdefnid AND ftone.direction =\n> fttemp678233382.direction AND ftone.start = fttemp678233382.start AND\n> ftone.consolidation = fttemp678233382.consolidation AND ftone.classid\n> = fttemp678233382.classid\n> replaceConsolidatedInMainTable: 61705:\n> INSERT INTO ftone ( epPairdefnid, start, consolidation, direction,\n> classid, cnt ) SELECT epPairdefnid, start, consolidation, direction,\n> classid, cnt FROM fttemp678233382\n> consolidate: 2656:\n> DROP TABLE fttemp678233382\n> MAIN LOOP TOTAL consolidate: 320526\n> deleteOlderThan: 184616:\n> DELETE FROM ftone WHERE consolidation = 0 AND start <\n> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n> 'YYYY-MM-DD 00:00:00.0')::timestamp\n> MAIN LOOP TOTAL deleteExpiredData: 505142\n> MAIN LOOP TOTAL generateStatistics: 515611\n>\n> ----------------------------------------------------\n>\n> Thanks again,\n> Neil\n>\n>\n> On 11/07/06, Jeff Frost <[email protected]> wrote:\n>> \n>> \n>> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n>> \n>> > I should also explain that I run through these queries on multiple\n>> > tables and with some slightly different parameters for the\n>> > \"consolidation\" so I run through 
those 3 queries (or similar) 9 times\n>> > and this takes a total of about 2 hours, with high CPU usage. And I\n>> > am running the queries from a remote Java application (using JDBC),\n>> > the client is using postgresql-8.0-311.jdbc3.jar. The explain analyse\n>> > results I have provided below are from running via pgAdmin, not the\n>> > Java app (I did a vacuum analyse of the db before running them):\n>> >\n>> >\n>> \n>> Neil, did you ever answer which version of 7.3 this is?\n>> \n>> BTW, you mentioned that this takes 2 hours, but even looping over this 9 \n>> times\n>> seems like it would only take 9 minutes (55 seconds for the SELECT and 4\n>> seconds for the DELETE = 59 seconds times 9). Perhaps you should post the\n>> explain analyze for the actual query that takes so long as the planner \n>> output\n>> will likely be quite different.\n>> \n>> One thing I noticed is that the planner seems quite incorrect about the \n>> number\n>> of rows it expects in the SELECT. If you ran vacuum analyze before this,\n>> perhaps your fsm settings are incorrect? What does vacuumdb -a -v output \n>> at\n>> the end? I'm looking for something that looks like this:\n>> \n>> INFO: free space map: 109 relations, 204 pages stored; 1792 total pages\n>> needed\n>> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared\n>> memory.\n>> \n>> I see your fsm settings are non-default, so it's also possible I'm not used \n>> to\n>> reading 7.3's explain analyze output. :-)\n>> \n>> Also, what does vmstat output look like while the query is running? \n>> Perhaps\n>> you're running into some context switching problems. It would be \n>> interesting\n>> to know how the query runs on 8.1.x just to know if we're chasing an\n>> optimization that's fixed already in a later version.\n>> \n>> \n>> > Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n>> > width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n>> > -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n>> > time=16861.72..34243.63 rows=560094 loops=1)\n>> > -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n>> > (actual time=16861.62..20920.12 rows=709461 loops=1)\n>> > -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n>> > (actual time=16861.62..18081.07 rows=709461 loops=1)\n>> > Sort Key: eppairdefnid, \"start\"\n>> > -> Seq Scan on ftone (cost=0.00..36446.66\n>> > rows=234827 width=16) (actual time=0.45..10320.91 rows=709461 loops=1)\n>> > Filter: ((consolidation = 60) AND (\"start\" <\n>> > (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n>> > 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n>> > Total runtime: 55378.68 msec\n>> \n>> > *** For the delete ***:\n>> >\n>> > Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n>> > time=3767.47..3767.47 rows=0 loops=1)\n>> > Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n>> > -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n>> > (actual time=0.04..2299.94 rows=1286333 loops=1)\n>> > -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n>> > time=206.01..206.01 rows=0 loops=1)\n>> > -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n>> > width=4) (actual time=206.00..206.00 rows=0 loops=1)\n>> > Total runtime: 3767.52 msec\n>> \n>> --\n>> Jeff Frost, Owner <[email protected]>\n>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>> Phone: 650-780-7908 FAX: 650-649-1954\n>> \n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC 
\thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Tue, 11 Jul 2006 20:27:43 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Tue, 11 Jul 2006, Jeff Frost wrote:\n\n> On Wed, 12 Jul 2006, Neil Hepworth wrote:\n>\n> You might also want to turn on autovacuum and see if that helps.\n>\n> What's your disk subsystem like? In fact, what's the entire DB server \n> hardware like?\n\nBy the way, how big does the temp table get? If it's large, it might make the \nDELETE slow because it doesn't have any indexes on any of the comparison \ncolumns.\n\nDELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND \nftone.direction = fttemp670743219.direction AND ftone.start = \nfttemp670743219.start AND ftone.consolidation = fttemp670743219.consolidation \nAND ftone.classid = fttemp670743219.classid\n\nIn your explain analyze from before, it seems that there were 0 rows in that \ntable:\n\n> -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n> width=4) (actual time=206.00..206.00 rows=0 loops=1)\n> Total runtime: 3767.52 msec\n\nbut that was with the smaller set size I believe.\n\n>\n>> \n>> I run through a loop, executing the following or similar queries 8\n>> times (well actually 12 but the last 4 don't do anything) - Jeff I've\n>> attached complete outputs as files.\n>> \n>> A debug output further below (numbers after each method call name,\n>> above each SQL statement, are times to run that statement in\n>> milliseconds, the times on the lines \"\" are cumulative). So total for\n>> one loop is 515 seconds, multiple by 8 and that gets me to over an\n>> hour); it is actually the deletes that take the most time; 179 seconds\n>> and 185 seconds each loop.\n>>\n>> ----------------------------------------------------\n>> \n>> CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0\n>> INSERT INTO fttemp670743219 ( epId, start, direction, classid,\n>> consolidation, cnt ) SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD\n>> HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS\n>> consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND\n>> start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD\n>> HH24:00:00.0')::timestamp GROUP BY epId, direction,\n>> TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid\n>> DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND\n>> ftone.direction = fttemp670743219.direction AND ftone.start =\n>> fttemp670743219.start AND ftone.consolidation =\n>> fttemp670743219.consolidation AND ftone.classid =\n>> fttemp670743219.classid\n>> INSERT INTO ftone ( epId, start, consolidation, direction, classid,\n>> cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM\n>> fttemp670743219\n>> DROP TABLE fttemp670743219\n>> DELETE FROM ftone WHERE consolidation = 0 AND start <\n>> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',\n>> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>> 'YYYY-MM-DD 00:00:00.0')::timestamp\n>>\n>> ----------------------------------------------------\n>> \n>> ftone: 0:\n>> createConsolidatedInTemporary: 188:\n>> CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0\n>> createConsolidatedInTemporary: 76783:\n>> INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,\n>> consolidation, cnt ) SELECT epPairdefnid, TO_TIMESTAMP(start,\n>> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS 
start, direction, classid, 60\n>> AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0\n>> AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>> 'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,\n>> direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,\n>> classid\n>> replaceConsolidatedInMainTable: 179178:\n>> DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =\n>> fttemp678233382.epPairdefnid AND ftone.direction =\n>> fttemp678233382.direction AND ftone.start = fttemp678233382.start AND\n>> ftone.consolidation = fttemp678233382.consolidation AND ftone.classid\n>> = fttemp678233382.classid\n>> replaceConsolidatedInMainTable: 61705:\n>> INSERT INTO ftone ( epPairdefnid, start, consolidation, direction,\n>> classid, cnt ) SELECT epPairdefnid, start, consolidation, direction,\n>> classid, cnt FROM fttemp678233382\n>> consolidate: 2656:\n>> DROP TABLE fttemp678233382\n>> MAIN LOOP TOTAL consolidate: 320526\n>> deleteOlderThan: 184616:\n>> DELETE FROM ftone WHERE consolidation = 0 AND start <\n>> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>> 'YYYY-MM-DD 00:00:00.0')::timestamp\n>> MAIN LOOP TOTAL deleteExpiredData: 505142\n>> MAIN LOOP TOTAL generateStatistics: 515611\n>>\n>> ----------------------------------------------------\n>> \n>> Thanks again,\n>> Neil\n>> \n>> \n>> On 11/07/06, Jeff Frost <[email protected]> wrote:\n>>> \n>>> \n>>> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n>>> \n>>> > I should also explain that I run through these queries on multiple\n>>> > tables and with some slightly different parameters for the\n>>> > \"consolidation\" so I run through those 3 queries (or similar) 9 times\n>>> > and this takes a total of about 2 hours, with high CPU usage. And I\n>>> > am running the queries from a remote Java application (using JDBC),\n>>> > the client is using postgresql-8.0-311.jdbc3.jar. The explain analyse\n>>> > results I have provided below are from running via pgAdmin, not the\n>>> > Java app (I did a vacuum analyse of the db before running them):\n>>> >\n>>> >\n>>> \n>>> Neil, did you ever answer which version of 7.3 this is?\n>>> \n>>> BTW, you mentioned that this takes 2 hours, but even looping over this 9 \n>>> times\n>>> seems like it would only take 9 minutes (55 seconds for the SELECT and 4\n>>> seconds for the DELETE = 59 seconds times 9). Perhaps you should post the\n>>> explain analyze for the actual query that takes so long as the planner \n>>> output\n>>> will likely be quite different.\n>>> \n>>> One thing I noticed is that the planner seems quite incorrect about the \n>>> number\n>>> of rows it expects in the SELECT. If you ran vacuum analyze before this,\n>>> perhaps your fsm settings are incorrect? What does vacuumdb -a -v output \n>>> at\n>>> the end? I'm looking for something that looks like this:\n>>> \n>>> INFO: free space map: 109 relations, 204 pages stored; 1792 total pages\n>>> needed\n>>> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared\n>>> memory.\n>>> \n>>> I see your fsm settings are non-default, so it's also possible I'm not \n>>> used to\n>>> reading 7.3's explain analyze output. :-)\n>>> \n>>> Also, what does vmstat output look like while the query is running? \n>>> Perhaps\n>>> you're running into some context switching problems. 
It would be \n>>> interesting\n>>> to know how the query runs on 8.1.x just to know if we're chasing an\n>>> optimization that's fixed already in a later version.\n>>> \n>>> \n>>> > Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n>>> > width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n>>> > -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n>>> > time=16861.72..34243.63 rows=560094 loops=1)\n>>> > -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n>>> > (actual time=16861.62..20920.12 rows=709461 loops=1)\n>>> > -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n>>> > (actual time=16861.62..18081.07 rows=709461 loops=1)\n>>> > Sort Key: eppairdefnid, \"start\"\n>>> > -> Seq Scan on ftone (cost=0.00..36446.66\n>>> > rows=234827 width=16) (actual time=0.45..10320.91 rows=709461 loops=1)\n>>> > Filter: ((consolidation = 60) AND (\"start\" <\n>>> > (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n>>> > 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n>>> > Total runtime: 55378.68 msec\n>>> \n>>> > *** For the delete ***:\n>>> >\n>>> > Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n>>> > time=3767.47..3767.47 rows=0 loops=1)\n>>> > Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n>>> > -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n>>> > (actual time=0.04..2299.94 rows=1286333 loops=1)\n>>> > -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n>>> > time=206.01..206.01 rows=0 loops=1)\n>>> > -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n>>> > width=4) (actual time=206.00..206.00 rows=0 loops=1)\n>>> > Total runtime: 3767.52 msec\n>>> \n>>> --\n>>> Jeff Frost, Owner <[email protected]>\n>>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>>> Phone: 650-780-7908 FAX: 650-649-1954\n>>> \n>> \n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Tue, 11 Jul 2006 20:43:07 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "Yes, it was the same DB so, yes 8.1 gives roughly a four fold improvement\n(assuming hardware and OS differences aren't that significant - I'd expect\nthe Linux version to be faster if anything); which certainly ain't bad! :)\n\nGood idea for the vacuumdb -a -v on the laptop, I re imported the database\nand than ran it output below:\n\nINFO: free space map contains 949 pages in 537 relations\nDETAIL: A total of 9024 page slots are in use (including overhead).\n9024 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 186 KB.\nVACUUM\n\nI am about to start testing Scott's suggestion now (thanks Scott - wasn't\nignoring you, just didn't have time yesterday), and I'll get back with the\nresults.\n\nBefore I posted the problem to this list I was focusing more on the settings\nin postgresql.conf than optimising the query as I thought this might be a\ngeneral problem, for all my DB updates/queries, with the way the planner was\noptimising queries; maybe assuming CPU cost was too cheap? Do you think I\nwas off track in my initial thinking? 
Optimising these queries is\ncertainly beneficial but I don't want postgres to hog the CPU for any\nextended period (other apps also run on the server), so I was wondering if\nthe general config settings could to be tuned to always prevent this\n(regardless of how poorly written my queries are :)?\n\nNeil\n\n\nOn 12/07/06, Jeff Frost <[email protected]> wrote:\n>\n> On Wed, 12 Jul 2006, Neil Hepworth wrote:\n>\n> > I am using version PostgreSQL 7.3.10 (RPM:\n> > postgresql73-rhel21-7.3.10-2). Unfortunately vacuumdb -a -v does not\n> > give the FSM info at the end (need a newer version of postgres for\n> > that). Running the same queries on 8.1 reduces the time taken to\n> > about 16 minutes, though I didn't run the test on the same hardware or\n> > OS as I want to keep my test server as close to production as\n> > possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino\n> > Duo with 2GB of RAM, yes the laptop is brand new :).\n>\n> Well, looks like you're at least fairly up to date, but there is a fix in\n> 7.3.11 that you might want to get by upgrading to 7.3.15:\n>\n> * Fix race condition in transaction log management\n> There was a narrow window in which an I/O operation could be\n> initiated for the wrong page, leading to an Assert failure or data\n> corruption.\n>\n> It also appears that you can run autovacuum with 7.3 (I thought maybe it\n> only\n> went back as far as 7.4).\n>\n> So, is the 16 minutes on your laptop with 8.1 for windows vs 1hr on the\n> server\n> for the whole set of loops? If so, 4x isn't a bad improvement. :-) So,\n> assuming you dumped/loaded the same DB onto your laptop's postgresql\n> server,\n> what does the vacuumdb -a -v say on the laptop? Perhaps we can use it to\n> see\n> if your fsm settings are ok.\n>\n> BTW, did you see Scott's posting here:\n>\n> http://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php\n>\n> Since we didn't hear from you for a while, I thought perhaps Scott had hit\n> on\n> the fix. Have you tried that yet? It certainly would help the planner\n> out.\n>\n> You might also want to turn on autovacuum and see if that helps.\n>\n> What's your disk subsystem like? In fact, what's the entire DB server\n> hardware like?\n>\n> >\n> > I run through a loop, executing the following or similar queries 8\n> > times (well actually 12 but the last 4 don't do anything) - Jeff I've\n> > attached complete outputs as files.\n> >\n> > A debug output further below (numbers after each method call name,\n> > above each SQL statement, are times to run that statement in\n> > milliseconds, the times on the lines \"\" are cumulative). 
So total for\n> > one loop is 515 seconds, multiple by 8 and that gets me to over an\n> > hour); it is actually the deletes that take the most time; 179 seconds\n> > and 185 seconds each loop.\n> >\n> > ----------------------------------------------------\n> >\n> > CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0\n> > INSERT INTO fttemp670743219 ( epId, start, direction, classid,\n> > consolidation, cnt ) SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD\n> > HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS\n> > consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND\n> > start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD\n> > HH24:00:00.0')::timestamp GROUP BY epId, direction,\n> > TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid\n> > DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND\n> > ftone.direction = fttemp670743219.direction AND ftone.start =\n> > fttemp670743219.start AND ftone.consolidation =\n> > fttemp670743219.consolidation AND ftone.classid =\n> > fttemp670743219.classid\n> > INSERT INTO ftone ( epId, start, consolidation, direction, classid,\n> > cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM\n> > fttemp670743219\n> > DROP TABLE fttemp670743219\n> > DELETE FROM ftone WHERE consolidation = 0 AND start <\n> > TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',\n> > 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n> > 'YYYY-MM-DD 00:00:00.0')::timestamp\n> >\n> > ----------------------------------------------------\n> >\n> > ftone: 0:\n> > createConsolidatedInTemporary: 188:\n> > CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0\n> > createConsolidatedInTemporary: 76783:\n> > INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,\n> > consolidation, cnt ) SELECT epPairdefnid, TO_TIMESTAMP(start,\n> > 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, direction, classid, 60\n> > AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0\n> > AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n> > 'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,\n> > direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,\n> > classid\n> > replaceConsolidatedInMainTable: 179178:\n> > DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =\n> > fttemp678233382.epPairdefnid AND ftone.direction =\n> > fttemp678233382.direction AND ftone.start = fttemp678233382.start AND\n> > ftone.consolidation = fttemp678233382.consolidation AND ftone.classid\n> > = fttemp678233382.classid\n> > replaceConsolidatedInMainTable: 61705:\n> > INSERT INTO ftone ( epPairdefnid, start, consolidation, direction,\n> > classid, cnt ) SELECT epPairdefnid, start, consolidation, direction,\n> > classid, cnt FROM fttemp678233382\n> > consolidate: 2656:\n> > DROP TABLE fttemp678233382\n> > MAIN LOOP TOTAL consolidate: 320526\n> > deleteOlderThan: 184616:\n> > DELETE FROM ftone WHERE consolidation = 0 AND start <\n> > TO_TIMESTAMP((TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n> > 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n> > 'YYYY-MM-DD 00:00:00.0')::timestamp\n> > MAIN LOOP TOTAL deleteExpiredData: 505142\n> > MAIN LOOP TOTAL generateStatistics: 515611\n> >\n> > ----------------------------------------------------\n> >\n> > Thanks again,\n> > Neil\n> >\n> >\n> > On 11/07/06, Jeff Frost <[email protected]> wrote:\n> >>\n> >>\n> >> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n> >>\n> >> > I should also explain that 
I run through these queries on multiple\n> >> > tables and with some slightly different parameters for the\n> >> > \"consolidation\" so I run through those 3 queries (or similar) 9 times\n> >> > and this takes a total of about 2 hours, with high CPU usage. And I\n> >> > am running the queries from a remote Java application (using JDBC),\n> >> > the client is using postgresql-8.0-311.jdbc3.jar. The explain\n> analyse\n> >> > results I have provided below are from running via pgAdmin, not the\n> >> > Java app (I did a vacuum analyse of the db before running them):\n> >> >\n> >> >\n> >>\n> >> Neil, did you ever answer which version of 7.3 this is?\n> >>\n> >> BTW, you mentioned that this takes 2 hours, but even looping over this\n> 9\n> >> times\n> >> seems like it would only take 9 minutes (55 seconds for the SELECT and\n> 4\n> >> seconds for the DELETE = 59 seconds times 9). Perhaps you should post\n> the\n> >> explain analyze for the actual query that takes so long as the planner\n> >> output\n> >> will likely be quite different.\n> >>\n> >> One thing I noticed is that the planner seems quite incorrect about the\n> >> number\n> >> of rows it expects in the SELECT. If you ran vacuum analyze before\n> this,\n> >> perhaps your fsm settings are incorrect? What does vacuumdb -a -v\n> output\n> >> at\n> >> the end? I'm looking for something that looks like this:\n> >>\n> >> INFO: free space map: 109 relations, 204 pages stored; 1792 total\n> pages\n> >> needed\n> >> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB\n> shared\n> >> memory.\n> >>\n> >> I see your fsm settings are non-default, so it's also possible I'm not\n> used\n> >> to\n> >> reading 7.3's explain analyze output. :-)\n> >>\n> >> Also, what does vmstat output look like while the query is running?\n> >> Perhaps\n> >> you're running into some context switching problems. 
It would be\n> >> interesting\n> >> to know how the query runs on 8.1.x just to know if we're chasing an\n> >> optimization that's fixed already in a later version.\n> >>\n> >>\n> >> > Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n> >> > width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n> >> > -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n> >> > time=16861.72..34243.63 rows=560094 loops=1)\n> >> > -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n> >> > (actual time=16861.62..20920.12 rows=709461 loops=1)\n> >> > -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n> >> > (actual time=16861.62..18081.07 rows=709461 loops=1)\n> >> > Sort Key: eppairdefnid, \"start\"\n> >> > -> Seq Scan on ftone (cost=0.00..36446.66\n> >> > rows=234827 width=16) (actual time=0.45..10320.91 rows=709461\n> loops=1)\n> >> > Filter: ((consolidation = 60) AND (\"start\" <\n> >> > (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n> >> > 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n> >> > Total runtime: 55378.68 msec\n> >>\n> >> > *** For the delete ***:\n> >> >\n> >> > Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n> >> > time=3767.47..3767.47 rows=0 loops=1)\n> >> > Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n> >> > -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n> >> > (actual time=0.04..2299.94 rows=1286333 loops=1)\n> >> > -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n> >> > time=206.01..206.01 rows=0 loops=1)\n> >> > -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n> >> > width=4) (actual time=206.00..206.00 rows=0 loops=1)\n> >> > Total runtime: 3767.52 msec\n> >>\n> >> --\n> >> Jeff Frost, Owner <[email protected]>\n> >> Frost Consulting, LLC http://www.frostconsultingllc.com/\n> >> Phone: 650-780-7908 FAX: 650-649-1954\n> >>\n> >\n>\n> --\n> Jeff Frost, Owner <[email protected]>\n> Frost Consulting, LLC http://www.frostconsultingllc.com/\n> Phone: 650-780-7908 FAX: 650-649-1954\n>\n", "msg_date": "Wed, 12 Jul 2006 
13:48:40 +1000", "msg_from": "\"Neil Hepworth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "Please Cc: the list when replying to things like this so everyone can see (and \nlikely help).\n\nI'm not sure what you're response is actually regarding. Could you give some \nmore detail?\n\nOn Wed, 12 Jul 2006, Rizal wrote:\n\n> so, i must upgrade my PostgreSQL 803 which i have with a new version ?\n>\n> ----- Original Message -----\n> From: \"Jeff Frost\" <[email protected]>\n> To: \"Neil Hepworth\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Wednesday, July 12, 2006 10:27 AM\n> Subject: Re: [PERFORM] High CPU Usage - PostgreSQL 7.3\n>\n>\n>> On Wed, 12 Jul 2006, Neil Hepworth wrote:\n>>\n>>> I am using version PostgreSQL 7.3.10 (RPM:\n>>> postgresql73-rhel21-7.3.10-2). Unfortunately vacuumdb -a -v does not\n>>> give the FSM info at the end (need a newer version of postgres for\n>>> that). Running the same queries on 8.1 reduces the time taken to\n>>> about 16 minutes, though I didn't run the test on the same hardware or\n>>> OS as I want to keep my test server as close to production as\n>>> possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino\n>>> Duo with 2GB of RAM, yes the laptop is brand new :).\n>>\n>> Well, looks like you're at least fairly up to date, but there is a fix in\n>> 7.3.11 that you might want to get by upgrading to 7.3.15:\n>>\n>> * Fix race condition in transaction log management\n>> There was a narrow window in which an I/O operation could be\n>> initiated for the wrong page, leading to an Assert failure or data\n>> corruption.\n>>\n>> It also appears that you can run autovacuum with 7.3 (I thought maybe it\n> only\n>> went back as far as 7.4).\n>>\n>> So, is the 16 minutes on your laptop with 8.1 for windows vs 1hr on the\n> server\n>> for the whole set of loops? If so, 4x isn't a bad improvement. :-) So,\n>> assuming you dumped/loaded the same DB onto your laptop's postgresql\n> server,\n>> what does the vacuumdb -a -v say on the laptop? Perhaps we can use it to\n> see\n>> if your fsm settings are ok.\n>>\n>> BTW, did you see Scott's posting here:\n>>\n>> http://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php\n>>\n>> Since we didn't hear from you for a while, I thought perhaps Scott had hit\n> on\n>> the fix. Have you tried that yet? It certainly would help the planner\n> out.\n>>\n>> You might also want to turn on autovacuum and see if that helps.\n>>\n>> What's your disk subsystem like? In fact, what's the entire DB server\n>> hardware like?\n>>\n>>>\n>>> I run through a loop, executing the following or similar queries 8\n>>> times (well actually 12 but the last 4 don't do anything) - Jeff I've\n>>> attached complete outputs as files.\n>>>\n>>> A debug output further below (numbers after each method call name,\n>>> above each SQL statement, are times to run that statement in\n>>> milliseconds, the times on the lines \"\" are cumulative). 
So total for\n>>> one loop is 515 seconds, multiple by 8 and that gets me to over an\n>>> hour); it is actually the deletes that take the most time; 179 seconds\n>>> and 185 seconds each loop.\n>>>\n>>> ----------------------------------------------------\n>>>\n>>> CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0\n>>> INSERT INTO fttemp670743219 ( epId, start, direction, classid,\n>>> consolidation, cnt ) SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD\n>>> HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS\n>>> consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND\n>>> start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD\n>>> HH24:00:00.0')::timestamp GROUP BY epId, direction,\n>>> TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid\n>>> DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND\n>>> ftone.direction = fttemp670743219.direction AND ftone.start =\n>>> fttemp670743219.start AND ftone.consolidation =\n>>> fttemp670743219.consolidation AND ftone.classid =\n>>> fttemp670743219.classid\n>>> INSERT INTO ftone ( epId, start, consolidation, direction, classid,\n>>> cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM\n>>> fttemp670743219\n>>> DROP TABLE fttemp670743219\n>>> DELETE FROM ftone WHERE consolidation = 0 AND start <\n>>> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',\n>>> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>>> 'YYYY-MM-DD 00:00:00.0')::timestamp\n>>>\n>>> ----------------------------------------------------\n>>>\n>>> ftone: 0:\n>>> createConsolidatedInTemporary: 188:\n>>> CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0\n>>> createConsolidatedInTemporary: 76783:\n>>> INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,\n>>> consolidation, cnt ) SELECT epPairdefnid, TO_TIMESTAMP(start,\n>>> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, direction, classid, 60\n>>> AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0\n>>> AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>>> 'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,\n>>> direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,\n>>> classid\n>>> replaceConsolidatedInMainTable: 179178:\n>>> DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =\n>>> fttemp678233382.epPairdefnid AND ftone.direction =\n>>> fttemp678233382.direction AND ftone.start = fttemp678233382.start AND\n>>> ftone.consolidation = fttemp678233382.consolidation AND ftone.classid\n>>> = fttemp678233382.classid\n>>> replaceConsolidatedInMainTable: 61705:\n>>> INSERT INTO ftone ( epPairdefnid, start, consolidation, direction,\n>>> classid, cnt ) SELECT epPairdefnid, start, consolidation, direction,\n>>> classid, cnt FROM fttemp678233382\n>>> consolidate: 2656:\n>>> DROP TABLE fttemp678233382\n>>> MAIN LOOP TOTAL consolidate: 320526\n>>> deleteOlderThan: 184616:\n>>> DELETE FROM ftone WHERE consolidation = 0 AND start <\n>>> TO_TIMESTAMP((TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>>> 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>>> 'YYYY-MM-DD 00:00:00.0')::timestamp\n>>> MAIN LOOP TOTAL deleteExpiredData: 505142\n>>> MAIN LOOP TOTAL generateStatistics: 515611\n>>>\n>>> ----------------------------------------------------\n>>>\n>>> Thanks again,\n>>> Neil\n>>>\n>>>\n>>> On 11/07/06, Jeff Frost <[email protected]> wrote:\n>>>>\n>>>>\n>>>> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n>>>>\n>>>>> I should also explain that I 
run through these queries on multiple\n>>>>> tables and with some slightly different parameters for the\n>>>>> \"consolidation\" so I run through those 3 queries (or similar) 9 times\n>>>>> and this takes a total of about 2 hours, with high CPU usage. And I\n>>>>> am running the queries from a remote Java application (using JDBC),\n>>>>> the client is using postgresql-8.0-311.jdbc3.jar. The explain\n> analyse\n>>>>> results I have provided below are from running via pgAdmin, not the\n>>>>> Java app (I did a vacuum analyse of the db before running them):\n>>>>>\n>>>>>\n>>>>\n>>>> Neil, did you ever answer which version of 7.3 this is?\n>>>>\n>>>> BTW, you mentioned that this takes 2 hours, but even looping over this\n> 9\n>>>> times\n>>>> seems like it would only take 9 minutes (55 seconds for the SELECT and\n> 4\n>>>> seconds for the DELETE = 59 seconds times 9). Perhaps you should post\n> the\n>>>> explain analyze for the actual query that takes so long as the planner\n>>>> output\n>>>> will likely be quite different.\n>>>>\n>>>> One thing I noticed is that the planner seems quite incorrect about the\n>>>> number\n>>>> of rows it expects in the SELECT. If you ran vacuum analyze before\n> this,\n>>>> perhaps your fsm settings are incorrect? What does vacuumdb -a -v\n> output\n>>>> at\n>>>> the end? I'm looking for something that looks like this:\n>>>>\n>>>> INFO: free space map: 109 relations, 204 pages stored; 1792 total\n> pages\n>>>> needed\n>>>> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB\n> shared\n>>>> memory.\n>>>>\n>>>> I see your fsm settings are non-default, so it's also possible I'm not\n> used\n>>>> to\n>>>> reading 7.3's explain analyze output. :-)\n>>>>\n>>>> Also, what does vmstat output look like while the query is running?\n>>>> Perhaps\n>>>> you're running into some context switching problems. 
It would be\n>>>> interesting\n>>>> to know how the query runs on 8.1.x just to know if we're chasing an\n>>>> optimization that's fixed already in a later version.\n>>>>\n>>>>\n>>>>> Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n>>>>> width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n>>>>> -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n>>>>> time=16861.72..34243.63 rows=560094 loops=1)\n>>>>> -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n>>>>> (actual time=16861.62..20920.12 rows=709461 loops=1)\n>>>>> -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n>>>>> (actual time=16861.62..18081.07 rows=709461 loops=1)\n>>>>> Sort Key: eppairdefnid, \"start\"\n>>>>> -> Seq Scan on ftone (cost=0.00..36446.66\n>>>>> rows=234827 width=16) (actual time=0.45..10320.91 rows=709461\n> loops=1)\n>>>>> Filter: ((consolidation = 60) AND (\"start\" <\n>>>>> (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n>>>>> 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n>>>>> Total runtime: 55378.68 msec\n>>>>\n>>>>> *** For the delete ***:\n>>>>>\n>>>>> Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n>>>>> time=3767.47..3767.47 rows=0 loops=1)\n>>>>> Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n>>>>> -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n>>>>> (actual time=0.04..2299.94 rows=1286333 loops=1)\n>>>>> -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n>>>>> time=206.01..206.01 rows=0 loops=1)\n>>>>> -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n>>>>> width=4) (actual time=206.00..206.00 rows=0 loops=1)\n>>>>> Total runtime: 3767.52 msec\n>>>>\n>>>> --\n>>>> Jeff Frost, Owner <[email protected]>\n>>>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>>>> Phone: 650-780-7908 FAX: 650-649-1954\n>>>>\n>>>\n>>\n>> --\n>> Jeff Frost, Owner <[email protected]>\n>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>> Phone: 650-780-7908 FAX: 650-649-1954\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>\n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Wed, 12 Jul 2006 07:52:08 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" }, { "msg_contents": "On Wed, 12 Jul 2006, Neil Hepworth wrote:\n\n> Yes, it was the same DB so, yes 8.1 gives roughly a four fold improvement\n> (assuming hardware and OS differences aren't that significant - I'd expect\n> the Linux version to be faster if anything); which certainly ain't bad! 
:)\n>\n> Good idea for the vacuumdb -a -v on the laptop, I re imported the database\n> and than ran it output below:\n>\n> INFO: free space map contains 949 pages in 537 relations\n> DETAIL: A total of 9024 page slots are in use (including overhead).\n> 9024 page slots are required to track all free space.\n> Current limits are: 20000 page slots, 1000 relations, using 186 KB.\n> VACUUM\n\nWell, this looks like it's probably on track already even though it'll change \nas there are updates/deletes, but I suspect you're more or less ok with the \nFSM settings you have.\n\n>\n> I am about to start testing Scott's suggestion now (thanks Scott - wasn't\n> ignoring you, just didn't have time yesterday), and I'll get back with the\n> results.\n>\n> Before I posted the problem to this list I was focusing more on the settings\n> in postgresql.conf than optimising the query as I thought this might be a\n> general problem, for all my DB updates/queries, with the way the planner was\n> optimising queries; maybe assuming CPU cost was too cheap? Do you think I\n> was off track in my initial thinking? Optimising these queries is\n> certainly beneficial but I don't want postgres to hog the CPU for any\n> extended period (other apps also run on the server), so I was wondering if\n> the general config settings could to be tuned to always prevent this\n> (regardless of how poorly written my queries are :)?\n>\n\nI guess you could nice the postmaster, on startup or renice after startup but \nI'm not aware of any conf settings that would tune postgres to avoid using the \nCPU.\n\n> Neil\n>\n>\n> On 12/07/06, Jeff Frost <[email protected]> wrote:\n>> \n>> On Wed, 12 Jul 2006, Neil Hepworth wrote:\n>> \n>> > I am using version PostgreSQL 7.3.10 (RPM:\n>> > postgresql73-rhel21-7.3.10-2). Unfortunately vacuumdb -a -v does not\n>> > give the FSM info at the end (need a newer version of postgres for\n>> > that). Running the same queries on 8.1 reduces the time taken to\n>> > about 16 minutes, though I didn't run the test on the same hardware or\n>> > OS as I want to keep my test server as close to production as\n>> > possible, so I ran the 8.1 server on my Windows laptop (2GHz Centrino\n>> > Duo with 2GB of RAM, yes the laptop is brand new :).\n>> \n>> Well, looks like you're at least fairly up to date, but there is a fix in\n>> 7.3.11 that you might want to get by upgrading to 7.3.15:\n>>\n>> * Fix race condition in transaction log management\n>> There was a narrow window in which an I/O operation could be\n>> initiated for the wrong page, leading to an Assert failure or data\n>> corruption.\n>> \n>> It also appears that you can run autovacuum with 7.3 (I thought maybe it\n>> only\n>> went back as far as 7.4).\n>> \n>> So, is the 16 minutes on your laptop with 8.1 for windows vs 1hr on the\n>> server\n>> for the whole set of loops? If so, 4x isn't a bad improvement. :-) So,\n>> assuming you dumped/loaded the same DB onto your laptop's postgresql\n>> server,\n>> what does the vacuumdb -a -v say on the laptop? Perhaps we can use it to\n>> see\n>> if your fsm settings are ok.\n>> \n>> BTW, did you see Scott's posting here:\n>> \n>> http://archives.postgresql.org/pgsql-performance/2006-07/msg00091.php\n>> \n>> Since we didn't hear from you for a while, I thought perhaps Scott had hit\n>> on\n>> the fix. Have you tried that yet? It certainly would help the planner\n>> out.\n>> \n>> You might also want to turn on autovacuum and see if that helps.\n>> \n>> What's your disk subsystem like? 
In fact, what's the entire DB server\n>> hardware like?\n>> \n>> >\n>> > I run through a loop, executing the following or similar queries 8\n>> > times (well actually 12 but the last 4 don't do anything) - Jeff I've\n>> > attached complete outputs as files.\n>> >\n>> > A debug output further below (numbers after each method call name,\n>> > above each SQL statement, are times to run that statement in\n>> > milliseconds, the times on the lines \"\" are cumulative). So total for\n>> > one loop is 515 seconds, multiple by 8 and that gets me to over an\n>> > hour); it is actually the deletes that take the most time; 179 seconds\n>> > and 185 seconds each loop.\n>> >\n>> > ----------------------------------------------------\n>> >\n>> > CREATE TABLE fttemp670743219 AS SELECT * FROM ftone LIMIT 0\n>> > INSERT INTO fttemp670743219 ( epId, start, direction, classid,\n>> > consolidation, cnt ) SELECT epId, TO_TIMESTAMP(start, 'YYYY-MM-DD\n>> > HH24:00:00.0')::timestamp AS start, direction, classid, 60 AS\n>> > consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND\n>> > start < TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000', 'YYYY-MM-DD\n>> > HH24:00:00.0')::timestamp GROUP BY epId, direction,\n>> > TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp, classid\n>> > DELETE FROM ONLY ftone WHERE ftone.epId = fttemp670743219.epId AND\n>> > ftone.direction = fttemp670743219.direction AND ftone.start =\n>> > fttemp670743219.start AND ftone.consolidation =\n>> > fttemp670743219.consolidation AND ftone.classid =\n>> > fttemp670743219.classid\n>> > INSERT INTO ftone ( epId, start, consolidation, direction, classid,\n>> > cnt ) SELECT epId, start, consolidation, direction, classid, cnt FROM\n>> > fttemp670743219\n>> > DROP TABLE fttemp670743219\n>> > DELETE FROM ftone WHERE consolidation = 0 AND start <\n>> > TO_TIMESTAMP((TO_TIMESTAMP('2006-07-11 14:04:34.156433+1000',\n>> > 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>> > 'YYYY-MM-DD 00:00:00.0')::timestamp\n>> >\n>> > ----------------------------------------------------\n>> >\n>> > ftone: 0:\n>> > createConsolidatedInTemporary: 188:\n>> > CREATE TABLE fttemp678233382 AS SELECT * FROM ftone LIMIT 0\n>> > createConsolidatedInTemporary: 76783:\n>> > INSERT INTO fttemp678233382 ( epPairdefnid, start, direction, classid,\n>> > consolidation, cnt ) SELECT epPairdefnid, TO_TIMESTAMP(start,\n>> > 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, direction, classid, 60\n>> > AS consolidation, SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0\n>> > AND start < TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>> > 'YYYY-MM-DD HH24:00:00.0')::timestamp GROUP BY epPairdefnid,\n>> > direction, TO_TIMESTAMP(start, 'YYYY-MM-DD HH24:00:00.0')::timestamp,\n>> > classid\n>> > replaceConsolidatedInMainTable: 179178:\n>> > DELETE FROM ONLY ftone WHERE ftone.epPairdefnid =\n>> > fttemp678233382.epPairdefnid AND ftone.direction =\n>> > fttemp678233382.direction AND ftone.start = fttemp678233382.start AND\n>> > ftone.consolidation = fttemp678233382.consolidation AND ftone.classid\n>> > = fttemp678233382.classid\n>> > replaceConsolidatedInMainTable: 61705:\n>> > INSERT INTO ftone ( epPairdefnid, start, consolidation, direction,\n>> > classid, cnt ) SELECT epPairdefnid, start, consolidation, direction,\n>> > classid, cnt FROM fttemp678233382\n>> > consolidate: 2656:\n>> > DROP TABLE fttemp678233382\n>> > MAIN LOOP TOTAL consolidate: 320526\n>> > deleteOlderThan: 184616:\n>> > DELETE FROM ftone WHERE consolidation = 0 AND start <\n>> > 
TO_TIMESTAMP((TO_TIMESTAMP('2006-07-12 11:02:13.865444+1000',\n>> > 'YYYY-MM-DD 00:00:00.0')::timestamp - INTERVAL '10080 MINUTE'),\n>> > 'YYYY-MM-DD 00:00:00.0')::timestamp\n>> > MAIN LOOP TOTAL deleteExpiredData: 505142\n>> > MAIN LOOP TOTAL generateStatistics: 515611\n>> >\n>> > ----------------------------------------------------\n>> >\n>> > Thanks again,\n>> > Neil\n>> >\n>> >\n>> > On 11/07/06, Jeff Frost <[email protected]> wrote:\n>> >>\n>> >>\n>> >> On Mon, 10 Jul 2006, Neil Hepworth wrote:\n>> >>\n>> >> > I should also explain that I run through these queries on multiple\n>> >> > tables and with some slightly different parameters for the\n>> >> > \"consolidation\" so I run through those 3 queries (or similar) 9 times\n>> >> > and this takes a total of about 2 hours, with high CPU usage. And I\n>> >> > am running the queries from a remote Java application (using JDBC),\n>> >> > the client is using postgresql-8.0-311.jdbc3.jar. The explain\n>> analyse\n>> >> > results I have provided below are from running via pgAdmin, not the\n>> >> > Java app (I did a vacuum analyse of the db before running them):\n>> >> >\n>> >> >\n>> >>\n>> >> Neil, did you ever answer which version of 7.3 this is?\n>> >>\n>> >> BTW, you mentioned that this takes 2 hours, but even looping over this\n>> 9\n>> >> times\n>> >> seems like it would only take 9 minutes (55 seconds for the SELECT and\n>> 4\n>> >> seconds for the DELETE = 59 seconds times 9). Perhaps you should post\n>> the\n>> >> explain analyze for the actual query that takes so long as the planner\n>> >> output\n>> >> will likely be quite different.\n>> >>\n>> >> One thing I noticed is that the planner seems quite incorrect about the\n>> >> number\n>> >> of rows it expects in the SELECT. If you ran vacuum analyze before\n>> this,\n>> >> perhaps your fsm settings are incorrect? What does vacuumdb -a -v\n>> output\n>> >> at\n>> >> the end? I'm looking for something that looks like this:\n>> >>\n>> >> INFO: free space map: 109 relations, 204 pages stored; 1792 total\n>> pages\n>> >> needed\n>> >> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB\n>> shared\n>> >> memory.\n>> >>\n>> >> I see your fsm settings are non-default, so it's also possible I'm not\n>> used\n>> >> to\n>> >> reading 7.3's explain analyze output. :-)\n>> >>\n>> >> Also, what does vmstat output look like while the query is running?\n>> >> Perhaps\n>> >> you're running into some context switching problems. 
It would be\n>> >> interesting\n>> >> to know how the query runs on 8.1.x just to know if we're chasing an\n>> >> optimization that's fixed already in a later version.\n>> >>\n>> >>\n>> >> > Subquery Scan \"*SELECT*\" (cost=59690.11..62038.38 rows=23483\n>> >> > width=16) (actual time=16861.73..36473.12 rows=560094 loops=1)\n>> >> > -> Aggregate (cost=59690.11..62038.38 rows=23483 width=16) (actual\n>> >> > time=16861.72..34243.63 rows=560094 loops=1)\n>> >> > -> Group (cost=59690.11..61451.32 rows=234827 width=16)\n>> >> > (actual time=16861.62..20920.12 rows=709461 loops=1)\n>> >> > -> Sort (cost=59690.11..60277.18 rows=234827 width=16)\n>> >> > (actual time=16861.62..18081.07 rows=709461 loops=1)\n>> >> > Sort Key: eppairdefnid, \"start\"\n>> >> > -> Seq Scan on ftone (cost=0.00..36446.66\n>> >> > rows=234827 width=16) (actual time=0.45..10320.91 rows=709461\n>> loops=1)\n>> >> > Filter: ((consolidation = 60) AND (\"start\" <\n>> >> > (to_timestamp('2006-07-10 18:43:27.391103+1000'::text,\n>> >> > 'YYYY-MM-DDHH24:00:00.0'::text))::timestamp without time zone))\n>> >> > Total runtime: 55378.68 msec\n>> >>\n>> >> > *** For the delete ***:\n>> >> >\n>> >> > Hash Join (cost=0.00..30020.31 rows=425 width=14) (actual\n>> >> > time=3767.47..3767.47 rows=0 loops=1)\n>> >> > Hash Cond: (\"outer\".eppairdefnid = \"inner\".eppairdefnid)\n>> >> > -> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)\n>> >> > (actual time=0.04..2299.94 rows=1286333 loops=1)\n>> >> > -> Hash (cost=0.00..0.00 rows=1 width=4) (actual\n>> >> > time=206.01..206.01 rows=0 loops=1)\n>> >> > -> Seq Scan on fttemp1600384653 (cost=0.00..0.00 rows=1\n>> >> > width=4) (actual time=206.00..206.00 rows=0 loops=1)\n>> >> > Total runtime: 3767.52 msec\n>> >>\n>> >> --\n>> >> Jeff Frost, Owner <[email protected]>\n>> >> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>> >> Phone: 650-780-7908 FAX: 650-649-1954\n>> >>\n>> >\n>> \n>> --\n>> Jeff Frost, Owner <[email protected]>\n>> Frost Consulting, LLC http://www.frostconsultingllc.com/\n>> Phone: 650-780-7908 FAX: 650-649-1954\n>> \n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Wed, 12 Jul 2006 13:40:45 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Usage - PostgreSQL 7.3" } ]
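A minimal sketch of the tuning idea raised in this thread -- giving the scratch table an index on the comparison columns and fresh statistics before the join-style DELETE. The table and column names are taken from the posts above, the index name is made up, and this is an illustration only, not necessarily the exact change Scott or Jeff had in mind; the same reasoning would apply to ftone itself if it has no index on these columns:

    -- after populating the scratch table with the consolidated rows
    CREATE INDEX fttemp678233382_idx
        ON fttemp678233382 (eppairdefnid, start, direction, consolidation, classid);
    ANALYZE fttemp678233382;

    -- the DELETE from the thread, unchanged (7.3 accepts this implicit join;
    -- newer releases want an explicit USING clause); with an index and fresh
    -- statistics the planner at least has an alternative to repeated seq scans
    DELETE FROM ONLY ftone
     WHERE ftone.eppairdefnid  = fttemp678233382.eppairdefnid
       AND ftone.direction     = fttemp678233382.direction
       AND ftone.start         = fttemp678233382.start
       AND ftone.consolidation = fttemp678233382.consolidation
       AND ftone.classid       = fttemp678233382.classid;

Whether this actually helps would need to be confirmed with EXPLAIN ANALYZE against the real data and PostgreSQL version.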
[ { "msg_contents": "There have been dozens, perhaps hundreds, of entries in the pg-admin, pg-general, and pg-performance lists regarding killing a session, but as far as I can tell, there is no Postgres solution. Did I miss something?\n\nThis raises the question: Why doesn't Postgres have a \"kill session\" command that works? Oracle has it, and it's invaluable; there is no substitute. Various writers to these PG lists have raised the question repeatedly. Is it just a matter that nobody has had the time to do it (which I respect!), or is there a reason why the Postgres team decided a \"kill session\" is a bad idea?\n\nThe rest of this email is just to illustrate the convoluted solution I've had to adopt, and even with this, I can't get it to work quite right.\n\nBackground: In our web app, we give our users a fair amount of power to formulate difficult queries. These long-running queries are fork/exec'd from the Apache CGI, and we give the user a \"job status\" page, with the option to kill the job.\n\nI can kill off the CGI, since Apache owns the process. But the \"stock answer\" of\n\n kill -2 backend-pid\n\nwon't work, because I don't want my Apache jobs running as super-user (!) or as setuid processes.\n\nSo here's my solution: Install a couple of C extensions like this:\n\n Datum get_session_id(PG_FUNCTION_ARGS)\n {\n PG_RETURN_INT32(getpid());\n }\n\n Datum kill_session(PG_FUNCTION_ARGS)\n {\n int4 session_id, status;\n session_id = PG_GETARG_INT32(0);\n fprintf(stderr, \"KILLING SESSION: %d, 15\\n\", session_id);\n status = kill(session_id, 15);\n PG_RETURN_BOOL((status == 0) ? true : false);\n }\n\nThese are installed with the appropriate \"CREATE OR REPLACE ...\" sql. Although this is dangerous (anyone who can log in to Postgres can kill any Postgres job!), its safe enough in a controlled enviroment. It allows an Apache CGI to issue the kill(2) command through the Postgres backend, which is running as the Postgres user, and thus has permission to do the deed. When I start a job, I record the backend's PID, which allows another process to connect and kill the first one. Alright, it's a hack, but it's the best I could think of.\n\nBut in spite earlier posting in these forums that say the killing the backend was the way to go, this doesn't really work. First, even though the \"postgres\" backend job is properly killed, a \"postmaster\" job keeps running at 99% CPU, which is pretty useless. Killing the client's backend didn't kill the process actually doing the work!\n\nSecond, the \"KILLING SESSION\" message to stderr is only printed in the PG log file sporadically. This confuses me, since the \"KILLING SESSION\" is printed by a *different* process than the one being killed, so it shouldn't be affected. So what happens to fprintf()'s output? Most of the time, I just get \"unexpected EOF on client connection\" in the log which is presumably the postmaster complaining that the postgres child process died.\n\nI know the kill_session() is working because it returns \"true\", and the job is in fact killed. But the query keeps running in postmaster (or is it something else, like a rollback?), and the stderr output disappears.\n\nThanks,\nCraig\n", "msg_date": "Mon, 10 Jul 2006 22:50:40 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Kill a session" }, { "msg_contents": "Craig A. 
James wrote:\n> There have been dozens, perhaps hundreds, of entries in the pg-admin, \n> pg-general, and pg-performance lists regarding killing a session, but as \n> far as I can tell, there is no Postgres solution. Did I miss something?\n> \n> This raises the question: Why doesn't Postgres have a \"kill session\" \n> command that works? Oracle has it, and it's invaluable; there is no \n> substitute. Various writers to these PG lists have raised the question \n> repeatedly. Is it just a matter that nobody has had the time to do it \n> (which I respect!), or is there a reason why the Postgres team decided a \n> \"kill session\" is a bad idea?\n\nYou are sure you read:\n\n\nhttp://www.postgresql.org/docs/8.1/interactive/protocol-flow.html#AEN60635\n\n?\n\n\nRegards\nTino Wildenhain\n", "msg_date": "Tue, 11 Jul 2006 09:39:40 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "> There have been dozens, perhaps hundreds, of entries in the \n> pg-admin, pg-general, and pg-performance lists regarding \n> killing a session, but as far as I can tell, there is no \n> Postgres solution. Did I miss something?\n> \n> This raises the question: Why doesn't Postgres have a \"kill \n> session\" command that works? Oracle has it, and it's \n> invaluable; there is no substitute. Various writers to these \n> PG lists have raised the question repeatedly. Is it just a \n> matter that nobody has had the time to do it (which I \n> respect!), or is there a reason why the Postgres team decided \n> a \"kill session\" is a bad idea?\n\n[snip]\n\nI beleive the function to kill a backend is actually in the codebase,\nit's just commented out because it's considered dangerous. There are\nsome possible issues (see -hackers archives) about sending SIGTERM\nwithout actually shutting down the whole cluster.\n\nDoing the client-side function to call is the easy part.\n\nIn many cases you just need to cancel a query, in which case you can use\npg_cancel_backend() for exmaple. If you need to actually kill it, your\nonly supported way is to restart postgresql. \n\n> But in spite earlier posting in these forums that say the \n> killing the backend was the way to go, this doesn't really \n> work. First, even though the \"postgres\" backend job is \n> properly killed, a \"postmaster\" job keeps running at 99% CPU, \n> which is pretty useless. Killing the client's backend didn't \n> kill the process actually doing the work!\n\nThen you killed the wrong backend...\n\n\n> Second, the \"KILLING SESSION\" message to stderr is only \n> printed in the PG log file sporadically. This confuses me, \n> since the \"KILLING SESSION\" is printed by a *different* \n> process than the one being killed, so it shouldn't be \n> affected. So what happens to fprintf()'s output? Most of \n> the time, I just get \"unexpected EOF on client connection\" in \n> the log which is presumably the postmaster complaining that \n> the postgres child process died.\n\nNo, that's the postgres chlid process complaining that your client\n(CGI?) died without sending a close message.\n\n\n> I know the kill_session() is working because it returns \n> \"true\", and the job is in fact killed. But the query keeps \n> running in postmaster (or is it something else, like a \n> rollback?), and the stderr output disappears.\n\nNo queries run in postmaster. They all run in postgres backends. 
The\npostmaster does very little actual work, other than keeping track of\neverybody else.\n\n//Magnus\n", "msg_date": "Tue, 11 Jul 2006 13:39:07 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Magnus Hagander wrote:\n>> This raises the question: Why doesn't Postgres have a \"kill \n>> session\" command that works? Oracle has it, and it's \n>> invaluable; there is no substitute. Various writers to these \n>> PG lists have raised the question repeatedly. Is it just a \n>> matter that nobody has had the time to do it (which I \n>> respect!), or is there a reason why the Postgres team decided \n>> a \"kill session\" is a bad idea?\n> \n> I beleive the function to kill a backend is actually in the codebase,\n> it's just commented out because it's considered dangerous. There are\n> some possible issues (see -hackers archives) about sending SIGTERM\n> without actually shutting down the whole cluster.\n> \n> Doing the client-side function to call is the easy part.\n> \n> In many cases you just need to cancel a query, in which case you can use\n> pg_cancel_backend() for exmaple. If you need to actually kill it, your\n> only supported way is to restart postgresql. \n\nIn other words, are you confirming that there is no way to kill a query from another process, other than shutting down the database? My understanding of the documentation tells me I can't use cancel, because the process doing the killing isn't the original process.\n\n>> But in spite earlier posting in these forums that say the \n>> killing the backend was the way to go, this doesn't really \n>> work. First, even though the \"postgres\" backend job is \n>> properly killed, a \"postmaster\" job keeps running at 99% CPU, \n>> which is pretty useless. Killing the client's backend didn't \n>> kill the process actually doing the work!\n> \n> Then you killed the wrong backend...\n> No queries run in postmaster. They all run in postgres backends. The\n> postmaster does very little actual work, other than keeping track of\n> everybody else.\n\nIt turns out I was confused by this: ps(1) reports a process called \"postgres\", but top(1) reports a process called \"postmaster\", but they both have the same pid. I guess postmaster replaces its own name in the process table when it's executing a query, and it's not really the postmaster even though top(1) calls it postmaster.\n\nSo \"kill -15 <pid>\" is NOT killing the process -- to kill the process, I have to use signal 9. But if I do that, ALL queries in progress are aborted. I might as well shut down and restart the database, which is an unacceptable solution for a web site.\n\nI'm back to my original question: How do you kill a runaway query without bringing down the whole database? Is there really no answer to this?\n\nThanks,\nCraig\n", "msg_date": "Wed, 12 Jul 2006 08:43:18 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Kill a session" }, { "msg_contents": "On Wed, Jul 12, 2006 at 08:43:18AM -0700, Craig A. James wrote:\n>> Then you killed the wrong backend...\n>> No queries run in postmaster. They all run in postgres backends. The\n>> postmaster does very little actual work, other than keeping track of\n>> everybody else.\n> \n> It turns out I was confused by this: ps(1) reports a process called \n> \"postgres\", but top(1) reports a process called \"postmaster\", but they both \n> have the same pid. 
I guess postmaster replaces its own name in the process \n> table when it's executing a query, and it's not really the postmaster even \n> though top(1) calls it postmaster.\n> \n> So \"kill -15 <pid>\" is NOT killing the process -- to kill the process, I \n> have to use signal 9. But if I do that, ALL queries in progress are \n> aborted. I might as well shut down and restart the database, which is an \n> unacceptable solution for a web site.\n\nI don't follow your logic here. If you do \"kill -15 <pid>\" of the postmaster\ndoing the work, the query should be aborted without taking down the entire\ncluster. I don't see why you'd need -9 (which is a really bad idea anyhow)...\n\n> I'm back to my original question: How do you kill a runaway query without \n> bringing down the whole database? Is there really no answer to this?\n\nKill it with -15. If you're worried about your CGI scripts, use sudo or some\nsort of client/server solution.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 12 Jul 2006 19:04:04 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "> > I beleive the function to kill a backend is actually in the \n> codebase, \n> > it's just commented out because it's considered dangerous. \n> There are \n> > some possible issues (see -hackers archives) about sending SIGTERM \n> > without actually shutting down the whole cluster.\n> > \n> > Doing the client-side function to call is the easy part.\n> > \n> > In many cases you just need to cancel a query, in which \n> case you can \n> > use\n> > pg_cancel_backend() for exmaple. If you need to actually \n> kill it, your \n> > only supported way is to restart postgresql.\n> \n> In other words, are you confirming that there is no way to \n> kill a query from another process, other than shutting down \n> the database? My understanding of the documentation tells me \n> I can't use cancel, because the process doing the killing \n> isn't the original process.\n\nYou can't kill another backend, no.\nYou can *cancel* a query on it and return it to idle state. See\nhttp://www.postgresql.org/docs/8.1/interactive/functions-admin.html,\npg_cancel_backend().\n\n\n> So \"kill -15 <pid>\" is NOT killing the process -- to kill the \n> process, I have to use signal 9. But if I do that, ALL \n> queries in progress are aborted. I might as well shut down \n> and restart the database, which is an unacceptable solution \n> for a web site.\n> \n> I'm back to my original question: How do you kill a runaway \n> query without bringing down the whole database? Is there \n> really no answer to this?\n\nRunaway queries can be killed with pg_cancel_backend(), or from the\ncommandline using kill -INT <pid>. The backend will still be around, but\nit will have cancelled the query.\n\n//Magnus\n", "msg_date": "Wed, 12 Jul 2006 19:20:38 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Craig A. James wrote:\n> Magnus Hagander wrote:\n>>> This raises the question: Why doesn't Postgres have a \"kill session\"\n>>> command that works? Oracle has it, and it's invaluable; there is no\n>>> substitute. Various writers to these PG lists have raised the\n>>> question repeatedly. 
Is it just a matter that nobody has had the\n>>> time to do it (which I respect!), or is there a reason why the\n>>> Postgres team decided a \"kill session\" is a bad idea?\n>>\n>> I beleive the function to kill a backend is actually in the codebase,\n>> it's just commented out because it's considered dangerous. There are\n>> some possible issues (see -hackers archives) about sending SIGTERM\n>> without actually shutting down the whole cluster.\n>>\n>> Doing the client-side function to call is the easy part.\n>>\n>> In many cases you just need to cancel a query, in which case you can use\n>> pg_cancel_backend() for exmaple. If you need to actually kill it, your\n>> only supported way is to restart postgresql. \n> \n> In other words, are you confirming that there is no way to kill a query\n> from another process, other than shutting down the database? My\n> understanding of the documentation tells me I can't use cancel, because\n> the process doing the killing isn't the original process.\n> \n>>> But in spite earlier posting in these forums that say the killing the\n>>> backend was the way to go, this doesn't really work. First, even\n>>> though the \"postgres\" backend job is properly killed, a \"postmaster\"\n>>> job keeps running at 99% CPU, which is pretty useless. Killing the\n>>> client's backend didn't kill the process actually doing the work!\n>>\n>> Then you killed the wrong backend...\n>> No queries run in postmaster. They all run in postgres backends. The\n>> postmaster does very little actual work, other than keeping track of\n>> everybody else.\n> \n> It turns out I was confused by this: ps(1) reports a process called\n> \"postgres\", but top(1) reports a process called \"postmaster\", but they\n> both have the same pid. I guess postmaster replaces its own name in the\n> process table when it's executing a query, and it's not really the\n> postmaster even though top(1) calls it postmaster.\n> \n> So \"kill -15 <pid>\" is NOT killing the process -- to kill the process, I\n> have to use signal 9. But if I do that, ALL queries in progress are\n> aborted. I might as well shut down and restart the database, which is\n> an unacceptable solution for a web site.\n> \n> I'm back to my original question: How do you kill a runaway query\n> without bringing down the whole database? Is there really no answer to\n> this?\n\nare you maybe looking for pg_cancel_backend() ?\n\nhttp://www.postgresql.org/docs/current/interactive/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE\n\nStefan\n", "msg_date": "Wed, 12 Jul 2006 19:39:30 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Craig A. James wrote:\n>\n> \n> I'm back to my original question: How do you kill a runaway query \n> without bringing down the whole database? Is there really no answer to \n> this?\n> \n\nAs others have mentioned, pg_cancel_backend(pid) will stop query \nexecution by backend process id 'pid'.\n\nWhile this is often enough, if you actually want to disconnect a backend \nprocess then there is nothing to let you do this remotely. I recently \ndid a patch for Bizgres that just implements the \npg_terminate_backend(pid) function (currently #ifdef'ed out of the \ncodebase) as a contrib so it can be easily installed. See \nhttp://pgfoundry.org/pipermail/bizgres-general/2006-May/000484.html\n\nIf you want to try it out, please read the README (it discusses possible \ndangers associated with sending SIGTERM to backends). 
And I would \ncertainly be interested in hearing what level of success (or otherwise) \nyou have with it!\n\nBest wishes\n\nMark\n\n\n", "msg_date": "Fri, 14 Jul 2006 13:30:39 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Thanks for your reply, Mark:\n>> I'm back to my original question: How do you kill a runaway query \n>> without bringing down the whole database? Is there really no answer \n>> to this?\n>\n> ... if you actually want to disconnect a backend \n> process then there is nothing to let you do this remotely. I recently \n> did a patch for Bizgres that just implements the \n> pg_terminate_backend(pid) function (currently #ifdef'ed out of the \n> codebase) as a contrib so it can be easily installed. See \n> http://pgfoundry.org/pipermail/bizgres-general/2006-May/000484.html\n\nThis answers my question. I've finally got a statement in concrete terms, Postgres has no way to kill a backend process via an SQL statement. \"If Mark had to resort to this, then there is no other way.\"\n\n> If you want to try it out, please read the README (it discusses possible \n> dangers associated with sending SIGTERM to backends). And I would \n> certainly be interested in hearing what level of success (or otherwise) \n> you have with it!\n\nThanks, but I've already implemented my own, which is essentially identical in concept to yours, but simpler in the sense of being even less safe than yours -- I just let anyone send the signal, since I have no users other than my own app. I'll keep my version since it's embedded in my own plug-in. That way I won't have to keep remembering to modify the Postgres code when I upgrade. I like to keep Postgres \"stock\".\n\nCraig\n", "msg_date": "Thu, 13 Jul 2006 19:17:14 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Wed, Jul 12, 2006 at 08:43:18AM -0700, Craig A. James wrote:\n>>> Then you killed the wrong backend...\n>>> No queries run in postmaster. They all run in postgres backends. The\n>>> postmaster does very little actual work, other than keeping track of\n>>> everybody else.\n>> It turns out I was confused by this: ps(1) reports a process called \n>> \"postgres\", but top(1) reports a process called \"postmaster\", but they both \n>> have the same pid. I guess postmaster replaces its own name in the process \n>> table when it's executing a query, and it's not really the postmaster even \n>> though top(1) calls it postmaster.\n>>\n>> So \"kill -15 <pid>\" is NOT killing the process -- to kill the process, I \n>> have to use signal 9. But if I do that, ALL queries in progress are \n>> aborted. I might as well shut down and restart the database, which is an \n>> unacceptable solution for a web site.\n> \n> I don't follow your logic here. If you do \"kill -15 <pid>\" of the postmaster\n> doing the work, the query should be aborted without taking down the entire\n> cluster. I don't see why you'd need -9 (which is a really bad idea anyhow)...\n\nI've solved this mystery. \"kill -15\" doesn't immediately kill the job -- it aborts the query, but it might take 15-30 seconds to clean up.\n\nThis confused me, because the query I was using to test took about 30 seconds, so the SIGTERM didn't seem to make a difference. 
But when I used a harder query, one that would run for 5-10 minutes, SIGTERM still stopped it after 15 seconds, which isn't great but it's acceptable. \n\nBottom line is that I was expecting \"instant death\" with SIGTERM, but instead got an agonizing, drawn out -- but safe -- death of the query. At least that's my deduction based on experiments. I haven't dug into the source to confirm.\n\nThanks everyone for your answers. My \"kill this query\" feature is now acceptable.\n\nCraig\n", "msg_date": "Thu, 13 Jul 2006 19:23:19 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Kill a session" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Bottom line is that I was expecting \"instant death\" with SIGTERM, but\n> instead got an agonizing, drawn out -- but safe -- death of the query.\n\nWhat was the query exactly?\n\nOur expectation is that all or at least most queries should respond to\nSIGINT or SIGTERM interrupts pretty rapidly, say on a less-than-a-second\ntimescale. However there are various loops in the backend that fail to\nexecute CHECK_FOR_INTERRUPTS sufficiently often :-(. We've been\ngradually finding and fixing these, and will be glad to fix your case\nif you provide enough details to pin it down. You might be interested\nin this current thread about a similar problem:\n\nhttp://archives.postgresql.org/pgsql-patches/2006-07/msg00039.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Jul 2006 01:50:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session " }, { "msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n> Our expectation is that all or at least most queries should respond to\n> SIGINT or SIGTERM interrupts pretty rapidly, say on a less-than-a-second\n> timescale. However there are various loops in the backend that fail to\n> execute CHECK_FOR_INTERRUPTS sufficiently often :-(. \n\nThe same is true for user-defined C functions.\n\nThe PostGIS GEOS geometry functions come to mind: for complex\ngeometries, they can take hours to complete. And as GEOS is a third-party\nlibrary, I don't see an easy way to make them call CHECK_FOR_INTERRUPTS.\n\nDoes anybody know how this works for plpgsql, pljava and plpython?\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 14 Jul 2006 13:28:29 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session" }, { "msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> Bottom line is that I was expecting \"instant death\" with SIGTERM, but\n>> instead got an agonizing, drawn out -- but safe -- death of the query.\n> \n> What was the query exactly?\n> \n> Our expectation is that all or at least most queries should respond to\n> SIGINT or SIGTERM interrupts pretty rapidly, say on a less-than-a-second\n> timescale. However there are various loops in the backend that fail to\n> execute CHECK_FOR_INTERRUPTS sufficiently often :-(. We've been\n> gradually finding and fixing these, and will be glad to fix your case\n> if you provide enough details to pin it down. You might be interested\n> in this current thread about a similar problem:\n> \n> http://archives.postgresql.org/pgsql-patches/2006-07/msg00039.php\n\nThanks, this is good information. The qsort is a distinct possibility. 
The query is a big\n\n insert into some_hitlist (select id from another_hitlist join data_table on (...))\n\nwhere the hitlists are unindexed. So it may be using a merge-join with qsort. When I have a few minutes, I'll turn on logging in the app and find the exact SQL, and run an EXPLAIN ANALYZE and see what's really happening.\n\nIt's also possible that the INSERT itself is the problem, or adds to the problem. The SIGINT may come after a few million rows have been inserted, so it would have to clean that up, right?\n\nThanks,\nCraig\n\n", "msg_date": "Fri, 14 Jul 2006 08:17:38 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Kill a session" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> It's also possible that the INSERT itself is the problem, or adds to the problem. The SIGINT may come after a few million rows have been inserted, so it would have to clean that up, right?\n\nNo, because we don't use UNDO. The next VACUUM would have a bit of a\nmess to clean up, but you wouldn't pay for it at the time of the abort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Jul 2006 12:25:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Kill a session " } ]
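The practical takeaway from the thread above fits in a few lines of SQL. The sketch below is illustrative only: the PID 12345 is hypothetical, the pg_stat_activity column names assume a modern release (8.1-era servers use procpid and current_query rather than pid and query), and pg_terminate_backend() is only built in from 8.4 onward -- on the 8.1 servers discussed in the thread it exists solely as Mark Kirkwood's contrib patch.

-- List backends ordered by how long their current query has been running.
SELECT pid, usename, now() - query_start AS runtime, query
  FROM pg_stat_activity
 ORDER BY runtime DESC;

-- Cancel the running query on backend 12345; the session stays connected
-- and simply returns to idle.
SELECT pg_cancel_backend(12345);

-- Disconnect backend 12345 entirely (built in from 8.4 onward; earlier
-- servers need the contrib function discussed above).
SELECT pg_terminate_backend(12345);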