[ { "msg_contents": "[Reposted from General section with updated information]\nPg 7.4.5\n\nI'm running an update statement on about 12 million records using the\nfollowing query:\n\nUpdate table_A\nset F1 = b.new_data\nfrom table_B b\nwhere b.keyfield = table_A.keyfield\n\nboth keyfields are indexed, all other keys in table_A were dropped, yet this job has been running over 15 hours. Is\nthis normal?\n\nI stopped the process the first time after 3 hours of running due to excessive log rotation and reset the conf file to these settings:\n\n\nwal_buffers = 64 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 128 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 1800 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\nWould it just be quicker to run a JOIN statement to a temp file and then reinsert? \n\nTIA \nPatrick\n\n", "msg_date": "Sat, 06 Aug 2005 06:16:02 -0700", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Slow update statement" }, { "msg_contents": "Patrick Hatcher wrote:\n> [Reposted from General section with updated information]\n> Pg 7.4.5\n>\n> I'm running an update statement on about 12 million records using the\n> following query:\n>\n> Update table_A\n> set F1 = b.new_data\n> from table_B b\n> where b.keyfield = table_A.keyfield\n>\n> both keyfields are indexed, all other keys in table_A were dropped, yet\n> this job has been running over 15 hours. Is\n> this normal?\n\nCan you do an EXPLAIN UPDATE so that we can have an idea what the\nplanner is trying to do?\n\nMy personal concern is if it doing something like pulling in all rows\nfrom b, and then one by one updating table_A, but as it is going, it\ncan't retire any dead rows, because you are still in a transaction. So\nyou are getting a lot of old rows, which it has to pull in to realize it\nwas old.\n\nHow many rows are in table_B?\n\nI can see that possibly doing it in smaller chunks might be faster, as\nwould inserting into another table. 
But I would do more of a test and\nsee what happens.\n\nJohn\n=:->\n\n>\n> I stopped the process the first time after 3 hours of running due to\n> excessive log rotation and reset the conf file to these settings:\n>\n>\n> wal_buffers = 64 # min 4, 8KB each\n>\n> # - Checkpoints -\n>\n> checkpoint_segments = 128 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 1800 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # 0 is off, in seconds\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n>\n>\n> Would it just be quicker to run a JOIN statement to a temp file and then\n> reinsert?\n> TIA Patrick\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>", "msg_date": "Sat, 06 Aug 2005 08:34:25 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update statement" }, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> I'm running an update statement on about 12 million records using the\n> following query:\n\n> Update table_A\n> set F1 = b.new_data\n> from table_B b\n> where b.keyfield = table_A.keyfield\n\nWhat does EXPLAIN show for this?\n\nDo you have any foreign key references to table_A from elsewhere?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Aug 2005 10:12:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update statement " }, { "msg_contents": "Sorry went out of town for the weekend. The update did occur, but I \nhave no idea when it finished.\n\nHere's the actual query and the explain Update:\ncdm.bcp_ddw_ck_cus = 12.7 M\ncdm.cdm_ddw_customer = 12.8M\n\nexplain\nupdate cdm.cdm_ddw_customer\n set indiv_fkey = b.indiv_fkey\n from cdm.bcp_ddw_ck_cus b\n where \ncdm.cdm_ddw_customer.cus_nbr = b.cus_num;\n\n\n Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)\n Hash Cond: (\"outer\".cus_num = \"inner\".cus_nbr)\n -> Seq Scan on bcp_ddw_ck_cus b (cost=0.00..195690.76 rows=12702676 \nwidth=16)\n -> Hash (cost=874854.34..874854.34 rows=12880834 width=192)\n -> Seq Scan on cdm_ddw_customer (cost=0.00..874854.34 \nrows=12880834 width=192)\n\n\nJohn A Meinel wrote:\n\n>Patrick Hatcher wrote:\n> \n>\n>>[Reposted from General section with updated information]\n>>Pg 7.4.5\n>>\n>>I'm running an update statement on about 12 million records using the\n>>following query:\n>>\n>>Update table_A\n>>set F1 = b.new_data\n>>from table_B b\n>>where b.keyfield = table_A.keyfield\n>>\n>>both keyfields are indexed, all other keys in table_A were dropped, yet\n>>this job has been running over 15 hours. Is\n>>this normal?\n>> \n>>\n>\n>Can you do an EXPLAIN UPDATE so that we can have an idea what the\n>planner is trying to do?\n>\n>My personal concern is if it doing something like pulling in all rows\n>from b, and then one by one updating table_A, but as it is going, it\n>can't retire any dead rows, because you are still in a transaction. So\n>you are getting a lot of old rows, which it has to pull in to realize it\n>was old.\n>\n>How many rows are in table_B?\n>\n>I can see that possibly doing it in smaller chunks might be faster, as\n>would inserting into another table. 
But I would do more of a test and\n>see what happens.\n>\n>John\n>=:->\n>\n> \n>\n>>I stopped the process the first time after 3 hours of running due to\n>>excessive log rotation and reset the conf file to these settings:\n>>\n>>\n>>wal_buffers = 64 # min 4, 8KB each\n>>\n>># - Checkpoints -\n>>\n>>checkpoint_segments = 128 # in logfile segments, min 1, 16MB each\n>>checkpoint_timeout = 1800 # range 30-3600, in seconds\n>>#checkpoint_warning = 30 # 0 is off, in seconds\n>>#commit_delay = 0 # range 0-100000, in microseconds\n>>#commit_siblings = 5 # range 1-1000\n>>\n>>\n>>Would it just be quicker to run a JOIN statement to a temp file and then\n>>reinsert?\n>>TIA Patrick\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>> \n>>\n>\n> \n>\n", "msg_date": "Sun, 07 Aug 2005 19:00:51 -0700", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow update statement" }, { "msg_contents": "Sorry went out of town for the weekend. The update did occur, but I \nhave no idea when it finished.\n\nHere's the actual query and the explain Update:\ncdm.bcp_ddw_ck_cus = 12.7 M\ncdm.cdm_ddw_customer = 12.8M\n\nexplain\nupdate cdm.cdm_ddw_customer\n set indiv_fkey = b.indiv_fkey\n from cdm.bcp_ddw_ck_cus b\n where \ncdm.cdm_ddw_customer.cus_nbr = b.cus_num;\n\n\nHere's the table layout. It's the first time I noticed this, but there \nis a PK on the cus_nbr and an index. Does really need to be both and \ncould this be causing the issue? I thought that if a primary key was \ndesignated, it was automatically indexed.:\n\nCREATE TABLE cdm.cdm_ddw_customer\n(\n cus_nbr int8 NOT NULL,\n ph_home int8,\n ph_day int8,\n email_adr varchar(255),\n name_prefix varchar(5),\n name_first varchar(20),\n name_middle varchar(20),\n name_last varchar(30),\n name_suffix varchar(5),\n addr1 varchar(40),\n addr2 varchar(40),\n addr3 varchar(40),\n city varchar(25),\n state varchar(7),\n zip varchar(10),\n country varchar(16),\n gender varchar(1),\n lst_dte date,\n add_dte date,\n reg_id int4,\n indiv_fkey int8,\n CONSTRAINT ddwcus_pk PRIMARY KEY (cus_nbr)\n)\nWITH OIDS;\n\nCREATE INDEX cdm_ddwcust_id_idx\n ON cdm.cdm_ddw_customer\n USING btree\n (cus_nbr);\n\n\nCREATE TABLE cdm.bcp_ddw_ck_cus\n(\n cus_num int8,\n indiv_fkey int8 NOT NULL\n)\nWITHOUT OIDS;\n\nTom Lane wrote:\n\n>Patrick Hatcher <[email protected]> writes:\n> \n>\n>>I'm running an update statement on about 12 million records using the\n>>following query:\n>> \n>>\n>\n> \n>\n>>Update table_A\n>>set F1 = b.new_data\n>>from table_B b\n>>where b.keyfield = table_A.keyfield\n>> \n>>\n>\n>What does EXPLAIN show for this?\n>\n>Do you have any foreign key references to table_A from elsewhere?\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n", "msg_date": "Sun, 07 Aug 2005 19:09:04 -0700", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow update statement" }, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)\n> Hash Cond: (\"outer\".cus_num = \"inner\".cus_nbr)\n> -> Seq Scan on bcp_ddw_ck_cus b (cost=0.00..195690.76 rows=12702676 \n> width=16)\n> -> Hash (cost=874854.34..874854.34 rows=12880834 width=192)\n> -> Seq Scan on cdm_ddw_customer (cost=0.00..874854.34 \n> rows=12880834 width=192)\n\nYipes, that's a bit of a large hash 
table, if the planner's estimates\nare on-target. What do you have work_mem (sort_mem if pre 8.0) set to,\nand how does that compare to actual available RAM? I'm thinking you\nmight have set work_mem too large and the thing is now swap-thrashing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Aug 2005 23:48:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update statement " }, { "msg_contents": "At the time this was the only process running on the box so I set \nsort_mem= 228000;\nIt's a 12G box.\n\nTom Lane wrote:\n\n>Patrick Hatcher <[email protected]> writes:\n> \n>\n>> Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)\n>> Hash Cond: (\"outer\".cus_num = \"inner\".cus_nbr)\n>> -> Seq Scan on bcp_ddw_ck_cus b (cost=0.00..195690.76 rows=12702676 \n>>width=16)\n>> -> Hash (cost=874854.34..874854.34 rows=12880834 width=192)\n>> -> Seq Scan on cdm_ddw_customer (cost=0.00..874854.34 \n>>rows=12880834 width=192)\n>> \n>>\n>\n>Yipes, that's a bit of a large hash table, if the planner's estimates\n>are on-target. What do you have work_mem (sort_mem if pre 8.0) set to,\n>and how does that compare to actual available RAM? I'm thinking you\n>might have set work_mem too large and the thing is now swap-thrashing.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n", "msg_date": "Sun, 07 Aug 2005 21:35:36 -0700", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow update statement" }, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> Here's the table layout. It's the first time I noticed this, but there \n> is a PK on the cus_nbr and an index. Does really need to be both and \n> could this be causing the issue? I thought that if a primary key was \n> designated, it was automatically indexed.:\n\nThe duplicate index is certainly a waste, but it's no more expensive to\nmaintain than any other index would be; it doesn't seem likely that that\nwould account for any huge slowdown.\n\nA long-shot theory occurs to me upon noticing that your join keys are\nint8: 7.4 had a pretty bad hash function for int8, to wit it took the\nlow order half of the integer and ignored the high order half. For\nordinary distributions of key values this made no difference, but I\nrecall seeing at least one real-world case where the information was\nall in the high half of the key, and so the hash join degenerated to a\nsequential search because all the entries went into the same hash\nbucket. Were you assigning cus_nbrs nonsequentially by any chance?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 11:18:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update statement " } ]
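Two of the suggestions in this thread can be sketched concretely. Tom points out that cdm_ddwcust_id_idx duplicates the index the primary key already provides, and John suggests doing the work in smaller chunks so the dead row versions can be reclaimed as you go. The statements below are an illustration only: the table and column names come from the messages above, but the batch size and key range are invented for the example and were not part of the thread.

-- Drop the redundant index; the primary key ddwcus_pk already indexes cus_nbr.
DROP INDEX cdm.cdm_ddwcust_id_idx;

-- Update one cus_nbr range at a time instead of all 12.7M rows in one statement.
UPDATE cdm.cdm_ddw_customer
   SET indiv_fkey = b.indiv_fkey
  FROM cdm.bcp_ddw_ck_cus b
 WHERE cdm.cdm_ddw_customer.cus_nbr = b.cus_num
   AND cdm.cdm_ddw_customer.cus_nbr >= 1
   AND cdm.cdm_ddw_customer.cus_nbr <  1000000;

-- VACUUM between batches so later batches can reuse the space freed from the
-- dead row versions created by earlier ones, then repeat with the next
-- cus_nbr range.
VACUUM cdm.cdm_ddw_customer;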
[ { "msg_contents": "> Kari Lavikka <[email protected]> writes:\n> > samples % symbol name\n> > 13513390 16.0074 AtEOXact_CatCache\n> \n> That seems quite odd --- I'm not used to seeing that function at the\ntop\n> of a profile. What is the workload being profiled, exactly?\n\nHe is running a commit_delay of 80000. Could that be playing a role?\n\nMerlin\n", "msg_date": "Mon, 8 Aug 2005 11:24:42 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> Kari Lavikka <[email protected]> writes:\n>>> samples % symbol name\n>>> 13513390 16.0074 AtEOXact_CatCache\n>> \n>> That seems quite odd --- I'm not used to seeing that function at the top\n>> of a profile. What is the workload being profiled, exactly?\n\n> He is running a commit_delay of 80000. Could that be playing a role?\n\nIt wouldn't cause AtEOXact_CatCache to suddenly get expensive. (I have\nlittle or no faith in the value of nonzero commit_delay, though.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 11:37:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "\nActually I modified postgresql.conf a bit and there isn't commit delay any \nmore. That didn't make noticeable difference though..\n\nWorkload is generated by a website with about 1000 dynamic page views a \nsecond. Finland's biggest site among youths btw.\n\nAnyway, there are about 70 tables and here's some of the most important:\n relname | reltuples\n----------------------------------+-------------\n comment | 1.00723e+08\n comment_archive | 9.12764e+07\n channel_comment | 6.93912e+06\n image | 5.80314e+06\n admin_event | 5.1936e+06\n user_channel | 3.36877e+06\n users | 325929\n channel | 252267\n\nQueries to \"comment\" table are mostly IO-bound but are performing quite \nwell. Here's an example:\n\n(SELECT u.nick, c.comment, c.private, c.admin, c.visible, c.parsable, \nc.uid_sender, to_char(c.stamp, 'DD.MM.YY HH24:MI') AS stamp, c.comment_id \nFROM comment c INNER JOIN users u ON u.uid = c.uid_sender WHERE u.status = \n'a' AND c.image_id = 15500900 AND c.uid_target = 780345 ORDER BY \nuid_target DESC, image_id DESC, c.comment_id DESC) LIMIT 36\n\nAnd explain analyze:\n Limit (cost=0.00..6.81 rows=1 width=103) (actual time=0.263..17.522 rows=12 loops=1)\n -> Nested Loop (cost=0.00..6.81 rows=1 width=103) (actual time=0.261..17.509 rows=12 loops=1)\n -> Index Scan Backward using comment_uid_target_image_id_comment_id_20050527 on \"comment\" c (cost=0.00..3.39 rows=1 width=92) (actual time=0.129..16.213 rows=12 loops=1)\n Index Cond: ((uid_target = 780345) AND (image_id = 15500900))\n -> Index Scan using users_pkey on users u (cost=0.00..3.40 rows=1 width=15) (actual time=0.084..0.085 rows=1 loops=12)\n Index Cond: (u.uid = \"outer\".uid_sender)\n Filter: (status = 'a'::bpchar)\n Total runtime: 17.653 ms\n\n\nWe are having performance problems with some smaller tables and very \nsimple queries. 
For example:\n\nSELECT u.uid, u.nick, extract(epoch from uc.stamp) AS stamp FROM \nuser_channel uc INNER JOIN users u USING (uid) WHERE channel_id = 281321 \nAND u.status = 'a' ORDER BY uc.channel_id, upper(uc.nick)\n\nAnd explain analyze:\n Nested Loop (cost=0.00..200.85 rows=35 width=48) (actual time=0.414..38.128 rows=656 loops=1)\n -> Index Scan using user_channel_channel_id_nick on user_channel uc (cost=0.00..40.18 rows=47 width=27) (actual time=0.090..0.866 rows=667 loops=1)\n Index Cond: (channel_id = 281321)\n -> Index Scan using users_pkey on users u (cost=0.00..3.40 rows=1 width=25) (actual time=0.048..0.051 rows=1 loops=667)\n Index Cond: (\"outer\".uid = u.uid)\n Filter: (status = 'a'::bpchar)\n Total runtime: 38.753 ms\n\nUnder heavy load these queries tend to take several minutes to execute \nalthough there's plenty of free cpu available. There aren't any blocking \nlocks in pg_locks.\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\nOn Mon, 8 Aug 2005, Merlin Moncure wrote:\n\n>> Kari Lavikka <[email protected]> writes:\n>>> samples % symbol name\n>>> 13513390 16.0074 AtEOXact_CatCache\n>>\n>> That seems quite odd --- I'm not used to seeing that function at the\n> top\n>> of a profile. What is the workload being profiled, exactly?\n>\n> He is running a commit_delay of 80000. Could that be playing a role?\n>\n> Merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Mon, 8 Aug 2005 19:19:09 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "Kari Lavikka <[email protected]> writes:\n> We are having performance problems with some smaller tables and very \n> simple queries. For example:\n\n> SELECT u.uid, u.nick, extract(epoch from uc.stamp) AS stamp FROM \n> user_channel uc INNER JOIN users u USING (uid) WHERE channel_id = 281321 \n> AND u.status = 'a' ORDER BY uc.channel_id, upper(uc.nick)\n\n> And explain analyze:\n> Nested Loop (cost=0.00..200.85 rows=35 width=48) (actual time=0.414..38.128 rows=656 loops=1)\n> -> Index Scan using user_channel_channel_id_nick on user_channel uc (cost=0.00..40.18 rows=47 width=27) (actual time=0.090..0.866 rows=667 loops=1)\n> Index Cond: (channel_id = 281321)\n> -> Index Scan using users_pkey on users u (cost=0.00..3.40 rows=1 width=25) (actual time=0.048..0.051 rows=1 loops=667)\n> Index Cond: (\"outer\".uid = u.uid)\n> Filter: (status = 'a'::bpchar)\n> Total runtime: 38.753 ms\n\n> Under heavy load these queries tend to take several minutes to execute \n> although there's plenty of free cpu available.\n\nWhat that sounds like to me is a machine with inadequate disk I/O bandwidth.\nYour earlier comment that checkpoint drives the machine into the ground\nfits right into that theory, too. You said there is \"almost no IO-wait\"\nbut are you sure you are measuring that correctly?\n\nSomething else just struck me from your first post:\n\n> Queries accumulate and when checkpointing is over, there can be\n> something like 400 queries running but over 50% of cpu is just idling.\n\n400 queries? Are you launching 400 separate backends to do that?\nSome sort of connection pooling seems like a good idea, if you don't\nhave it in place already. 
If the system's effective behavior in the\nface of heavy load is to start even more concurrent backends, that\ncould easily drive things into the ground.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 12:56:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "On Mon, 8 Aug 2005, Tom Lane wrote:\n> What that sounds like to me is a machine with inadequate disk I/O bandwidth.\n> Your earlier comment that checkpoint drives the machine into the ground\n> fits right into that theory, too. You said there is \"almost no IO-wait\"\n> but are you sure you are measuring that correctly?\n\nCurrently there's some iowait caused by \"fragmentation\" of the comment \ntable. Periodic clustering helps a lot.\n\nDisk configurations looks something like this:\n sda: data (10 spindles, raid10)\n sdb: xlog & clog (2 spindles, raid1)\n sdc: os and other stuff\n\nUsually iostat (2 second interval) says:\n avg-cpu: %user %nice %sys %iowait %idle\n 32.38 0.00 12.88 11.62 43.12\n\n Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n sda 202.00 1720.00 0.00 3440 0\n sdb 152.50 4.00 2724.00 8 5448\n sdc 0.00 0.00 0.00 0 0\n\nAnd during checkpoint:\n avg-cpu: %user %nice %sys %iowait %idle\n 31.25 0.00 14.75 54.00 0.00\n\n Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n sda 3225.50 1562.00 35144.00 3124 70288\n sdb 104.50 10.00 2348.00 20 4696\n sdc 0.00 0.00 0.00 0 0\n\nI think (insufficiency of) disk IO shouldn't cause those lingering queries \nbecause dataset is rather small and it's continuously accessed. It should \nfit into cache and stay there(?)\n\n> 400 queries? Are you launching 400 separate backends to do that?\n\nWell yes. That's the common problem with php and persistent connections.\n\n> Some sort of connection pooling seems like a good idea, if you don't\n> have it in place already.\n\npg_pool for example? I'm planning to give it a try.\n\n> \t\t\tregards, tom lane\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\n", "msg_date": "Mon, 8 Aug 2005 20:54:38 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "Kari Lavikka <[email protected]> writes:\n> Disk configurations looks something like this:\n> sda: data (10 spindles, raid10)\n> sdb: xlog & clog (2 spindles, raid1)\n> sdc: os and other stuff\n\nThat's definitely wrong. Put clog on the data disk. The entire point\nof giving xlog its own spindle is that you don't ever want the disk\nheads moving off the current xlog file. I'm not sure how much this is\nhurting you, given that clog is relatively low volume, but if you're\ngoing to go to the trouble of putting xlog on a separate spindle then\nit should be a completely dedicated spindle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 15:27:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "On Mon, 8 Aug 2005, Tom Lane wrote:\n> What that sounds like to me is a machine with inadequate disk I/O bandwidth.\n> Your earlier comment that checkpoint drives the machine into the ground\n> fits right into that theory, too. You said there is \"almost no IO-wait\"\n> but are you sure you are measuring that correctly?\n\nReducing checkpoint_timeout to 600 seconds had a positive effect. 
Previous \nvalue was 1800 seconds.\n\nWe have a spare disk array from the old server and I'm planning to use it \nas a tablespace for the comment table (the 100M+ rows one) as Ron \nsuggested.\n\n>> Queries accumulate and when checkpointing is over, there can be\n>> something like 400 queries running but over 50% of cpu is just idling.\n>\n> 400 queries? Are you launching 400 separate backends to do that?\n> Some sort of connection pooling seems like a good idea, if you don't\n> have it in place already. If the system's effective behavior in the\n> face of heavy load is to start even more concurrent backends, that\n> could easily drive things into the ground.\n\nOk, I implemented connection pooling using pgpool and it increased \nperformance a lot! We are now delivering about 1500 dynamic pages a second \nwithout problems. Each of the eight single-cpu webservers are running a \npgpool instance with 20 connections.\n\nHowever, those configuration changes didn't have significant effect to \noprofile results. AtEOXact_CatCache consumes even more cycles. This isn't \na problem right now but it may be in the future...\n\nCPU: AMD64 processors, speed 2190.23 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 100000\nsamples % symbol name\n1147870 21.1602 AtEOXact_CatCache\n187466 3.4558 hash_seq_search\n174357 3.2142 AllocSetAlloc\n170896 3.1504 nocachegetattr\n131724 2.4282 ExecMakeFunctionResultNoSets\n125292 2.3097 SearchCatCache\n117264 2.1617 StrategyDirtyBufferList\n105741 1.9493 hash_search\n98245 1.8111 FunctionCall2\n97878 1.8043 yyparse\n90932 1.6763 LWLockAcquire\n83555 1.5403 LWLockRelease\n81045 1.4940 _bt_compare\n... and so on ...\n\n----->8 Signigicant rows from current postgresql.conf 8<-----\n\nmax_connections = 768 # unnecessarily large with connection \npooling\nshared_buffers = 15000\nwork_mem = 2048\nmaintenance_work_mem = 32768\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nbgwriter_percent = 2\nfsync = true\nwal_buffers = 512\ncheckpoint_segments = 200 # less would probably be enuff with 600sec \ntimeout\ncheckpoint_timeout = 600\neffective_cache_size = 500000\nrandom_page_cost = 1.5\ndefault_statistics_target = 150\nstats_start_collector = true\nstats_command_string = true\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\n", "msg_date": "Fri, 19 Aug 2005 14:34:47 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "Kari Lavikka <[email protected]> writes:\n> However, those configuration changes didn't have significant effect to \n> oprofile results. AtEOXact_CatCache consumes even more cycles.\n\nI believe I've fixed that for 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Aug 2005 09:05:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " } ]
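A side note for anyone reproducing Kari's setup: with stats_command_string = true (as in the settings above), the query pile-up he describes during checkpoints can be watched from a spare connection. The query below is only an illustration, not something posted in the thread; the column names are those of the 8.0-era pg_stat_activity view.

-- List the statements currently executing, oldest first, to see what has
-- queued up while a checkpoint is in progress.
SELECT procpid,
       usename,
       query_start,
       current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>'
 ORDER BY query_start;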
[ { "msg_contents": "\n<[email protected]> writes\n>\n>\n> so, if I do a qry like \"EXPLAIN ANALYZE select * from pridecdr where\n> idsede=8977758488\" it tooks a lot of time before i get back any result:\n>\n> Index Scan using prd_id_sede on pridecdr (cost=0.00..699079.90\n> rows=181850 width=138) (actual time=51.241..483068.255 rows=150511\n> loops=1)\n> Index Cond: (idsede = 8977758488::bigint)\n> Total runtime: 483355.325 ms\n>\n\nThe query plan looks ok. Try to do EXPLAIN ANALYZE twice and see if there is\nany difference. This could reduce the IO time to read your index/data since\nyou got enough RAM.\n\nAlso, if you haven't done VACUUM FULL for a long time, do so and compare the\ndifference.\n\nRegards,\nQingqing\n\n\n", "msg_date": "Tue, 9 Aug 2005 12:30:41 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: QRY seems not using indexes" }, { "msg_contents": "Qingqing Zhou wrote:\n> <[email protected]> writes\n> \n>>\n>>so, if I do a qry like \"EXPLAIN ANALYZE select * from pridecdr where\n>>idsede=8977758488\" it tooks a lot of time before i get back any result:\n>>\n>>Index Scan using prd_id_sede on pridecdr (cost=0.00..699079.90\n>>rows=181850 width=138) (actual time=51.241..483068.255 rows=150511\n>>loops=1)\n>> Index Cond: (idsede = 8977758488::bigint)\n>> Total runtime: 483355.325 ms\n>>\n> \n> \n> The query plan looks ok. Try to do EXPLAIN ANALYZE twice and see if there is\n> any difference. This could reduce the IO time to read your index/data since\n> you got enough RAM.\n> \n> Also, if you haven't done VACUUM FULL for a long time, do so and compare the\n> difference.\n> \n\nCould also be libpq buffering all 150000 rows before showing any.\n\nIt might be worthwhile using a CURSOR and doing 1 FETCH. If that is \nquick, then buffering is probably the issue. BTW - do you really want \nall the rows?\n\nCheers\n\nMark\n", "msg_date": "Tue, 09 Aug 2005 17:05:39 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: QRY seems not using indexes" } ]
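Mark's cursor test can be spelled out as below; the cursor name is invented for the example, while the table, column and constant come from the query in the thread. If the first FETCH returns quickly while the plain SELECT takes roughly 480 seconds, the time is going into returning and buffering all 150,000 rows rather than into the index scan itself.

BEGIN;
-- Declare a cursor for the same query and pull back just the first row.
DECLARE cdr_cur CURSOR FOR
    SELECT * FROM pridecdr WHERE idsede = 8977758488;
FETCH 1 FROM cdr_cur;
CLOSE cdr_cur;
COMMIT;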
[ { "msg_contents": "I thought I would send this to pg-performance since so many people \nhelped me with my speed issues recently. I was definitely IO- \nbottlenecked.\n\nSince then, I have installed 2 RAID arrays with 7 15k drives in them \nin RAID 0+1 as well as add a new controller card with 512MB of cache \non it. I also created this new partition on the RAID as XFS instead \nof ext3.\n\nThese changes have definitely improved performance, but I am now \nfinding some trouble with UPDATE or DELETE queries \"hanging\" and \nnever releasing their locks. As this happens, other statements queue \nup behind it. It seems to occur at times of very high loads on the \nbox. Is my only option to kill the query ( which usually takes down \nthe whole postmaster with it! ouch ).\n\nCould these locking issues be related to the other changes I made? \nI'm really scared that this is related to choosing XFS, but I sure \nhope not. How should I go about troubleshooting the \"problem\" \nqueries? They don't seem to be specific to a single table or single \ndatabase.\n\nI'm running 8.0.1 on kernel 2.6.12-3 on 64-bit Opterons if that \nmatters..\n\n\n-Dan\n", "msg_date": "Tue, 9 Aug 2005 12:04:11 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Table locking problems?" }, { "msg_contents": "\n> Could these locking issues be related to the other changes I made? I'm \n> really scared that this is related to choosing XFS, but I sure hope \n> not. How should I go about troubleshooting the \"problem\" queries? \n> They don't seem to be specific to a single table or single database.\n\nMy experience is that when this type of thing happens it is typically \nspecific queries that cause the problem. If you turn on statement \nlogging you can get the exact queries and debug from there.\n\nHere are some things to look for:\n\nIs it a large table (and thus large indexes) that it is updating?\nIs the query using indexes?\nIs the query modifying ALOT of rows?\n\nOf course there is also the RTFM of are you analyzing and vacuuming?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> I'm running 8.0.1 on kernel 2.6.12-3 on 64-bit Opterons if that matters..\n> \n> \n> -Dan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Tue, 09 Aug 2005 11:33:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems?" }, { "msg_contents": "On Tue, Aug 09, 2005 at 12:04:11PM -0600, Dan Harris wrote:\n> These changes have definitely improved performance, but I am now \n> finding some trouble with UPDATE or DELETE queries \"hanging\" and \n> never releasing their locks. As this happens, other statements queue \n> up behind it.\n\nHave you examined pg_locks to see if the UPDATE or DELETE is blocked\nbecause of a lock another session holds?\n\nAre you using foreign keys? 
When updating referencing rows, released\nversions of PostgreSQL acquire a lock on the referenced row that can\nhurt concurrency or cause deadlock (this will be improved in 8.1).\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 9 Aug 2005 12:52:01 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems?" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> My experience is that when this type of thing happens it is typically \n> specific queries that cause the problem. If you turn on statement \n> logging you can get the exact queries and debug from there.\n\n> Here are some things to look for:\n\n> Is it a large table (and thus large indexes) that it is updating?\n> Is the query using indexes?\n> Is the query modifying ALOT of rows?\n\nAnother thing to look at is foreign keys. Dan could be running into\nproblems with an update on one side of an FK being blocked by locks\non the associated rows on the other side.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Aug 2005 15:08:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems? " }, { "msg_contents": "\nOn Aug 9, 2005, at 1:08 PM, Tom Lane wrote:\n\n> \"Joshua D. Drake\" <[email protected]> writes:\n>\n>> My experience is that when this type of thing happens it is typically\n>> specific queries that cause the problem. If you turn on statement\n>> logging you can get the exact queries and debug from there.\n>>\n>\n>\n>> Here are some things to look for:\n>>\n>\n>\n>> Is it a large table (and thus large indexes) that it is updating?\n>> Is the query using indexes?\n>> Is the query modifying ALOT of rows?\n>>\n>\n> Another thing to look at is foreign keys. Dan could be running into\n> problems with an update on one side of an FK being blocked by locks\n> on the associated rows on the other side.\n>\n> regards, tom lane\n>\n\nTom, Steve, Josh:\n\nThank you for your ideas. The updates are only on a single table, no \njoins. I had stats collection turned off. I have turned that on \nagain so that I can try and catch one while the problem is \noccurring. The last table it did this on was about 3 million \nrecords. 4 single-column indexes on it.\n\nThe problem I had with statement logging is that if the query never \nfinishes, it doesn't get logged as far as I can tell. So everything \nthat did get logged was normal and would run with no isses in psql by \ncopy and pasting it. The rows updated will certainly vary by query. \nI really need to \"catch it in the act\" with stats collection on so I \ncan get the query from pg_stat_activity. Once I get it, I will play \nwith explains and see if I can reproduce it outside the wild.\n\nThanks again for your help.\n\n-Dan\n\n", "msg_date": "Tue, 9 Aug 2005 14:42:57 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table locking problems? " }, { "msg_contents": "\nOn Aug 10, 2005, at 12:49 AM, Steve Poe wrote:\n\n> Dan,\n>\n> Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a\n> clarification, since RAID 0 is still a single-point of failure even if\n> RAID1 is on top of RAID0.\n\nWell, you tell me if I stated incorrectly. There are two raid \nenclosures with 7 drives in each. Each is on its own bus on a dual- \nchannel controller. Each box has a stripe across its drives and the \nenclosures are mirrors of each other. 
I understand the controller \ncould be a single point of failure, but I'm not sure I understand \nyour concern about the RAID structure itself.\n\n>\n> How many users are connected when your update / delete queries are\n> hanging? Have you done an analyze verbose on those queries?\n\nMost of the traffic is from programs we run to do analysis of the \ndata and managing changes. At the time I noticed it this morning, \nthere were 10 connections open to the database. That rarely goes \nabove 20 concurrent. As I said in my other response, I believe that \nthe log will only contain the query at the point the query finishes, \nso if it never finishes...\n\n>\n> Have you made changes to the postgresql.conf? kernel.vm settings? IO\n> scheduler?\n\nI set shmmax appropriately for my shared_buffers setting, but that's \nthe only kernel tweak.\n\n>\n> If you're not doing so already, you may consider running sar \n> (iostat) to\n> monitor when the hanging occurs if their is a memory / IO bottleneck\n> somewhere.\n>\n\nI will try that. Thanks\n\n\n", "msg_date": "Tue, 9 Aug 2005 14:51:18 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table locking problems?" }, { "msg_contents": "Dan Harris wrote:\n> \n> On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:\n> \n>> Dan,\n>>\n>> Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a\n>> clarification, since RAID 0 is still a single-point of failure even if\n>> RAID1 is on top of RAID0.\n> \n> \n> Well, you tell me if I stated incorrectly. There are two raid \n> enclosures with 7 drives in each. Each is on its own bus on a dual- \n> channel controller. Each box has a stripe across its drives and the \n> enclosures are mirrors of each other. I understand the controller \n> could be a single point of failure, but I'm not sure I understand your \n> concern about the RAID structure itself.\n\nIn this configuration, if you have a drive fail on both controllers, the \nentire RAID dies. Lets label them A1-7, B1-7, because you stripe within \na set, if a single one of A dies, and a single one of B dies, you have \nlost your entire mirror.\n\nThe correct way of doing it, is to have A1 be a mirror of B1, and then \nstripe above that. Since you are using 2 7-disk enclosures, I'm not sure \nhow you can do it well, since it is not an even number of disks. Though \nif you are using software RAID, there should be no problem.\n\nThe difference is that in this scenario, *all* of the A drives can die, \nand you haven't lost any data. The only thing you can't lose is a \nmatched pair (eg losing both A1 and B1 will cause complete data loss)\n\nI believe the correct notation for this last form is RAID 1 + 0 (RAID10) \nsince you have a set of RAID1 drives, with a RAID0 on-top of them.\n\n> \n>>\n>> How many users are connected when your update / delete queries are\n>> hanging? Have you done an analyze verbose on those queries?\n> \n> \n> Most of the traffic is from programs we run to do analysis of the data \n> and managing changes. At the time I noticed it this morning, there \n> were 10 connections open to the database. That rarely goes above 20 \n> concurrent. As I said in my other response, I believe that the log \n> will only contain the query at the point the query finishes, so if it \n> never finishes...\n> \n>>\n>> Have you made changes to the postgresql.conf? kernel.vm settings? 
IO\n>> scheduler?\n> \n> \n> I set shmmax appropriately for my shared_buffers setting, but that's \n> the only kernel tweak.\n> \n>>\n>> If you're not doing so already, you may consider running sar (iostat) to\n>> monitor when the hanging occurs if their is a memory / IO bottleneck\n>> somewhere.\n>>\n> \n> I will try that. Thanks\n> \n\nWhen you discover that an update is hanging, can you get into the \ndatabase, and see what locks currently exist? (SELECT * FROM pg_locks)\n\nThat might help you figure out what is being locked and possibly \npreventing your updates.\n\nIt is also possible that your UPDATE query is trying to do something \nfunny (someone just recently was talking about an UPDATE that wanted to \ndo a hash join against 12M rows). Which probably meant that it had to \nspill to disk, where a merge join would have worked better.\n\nJohn\n=:->", "msg_date": "Tue, 09 Aug 2005 16:51:22 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems?" }, { "msg_contents": "\nOn Aug 9, 2005, at 3:51 PM, John A Meinel wrote:\n\n> Dan Harris wrote:\n>\n>> On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:\n>>\n>>> Dan,\n>>>\n>>> Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a\n>>> clarification, since RAID 0 is still a single-point of failure \n>>> even if\n>>> RAID1 is on top of RAID0.\n>>>\n>> Well, you tell me if I stated incorrectly. There are two raid \n>> enclosures with 7 drives in each. Each is on its own bus on a \n>> dual- channel controller. Each box has a stripe across its drives \n>> and the enclosures are mirrors of each other. I understand the \n>> controller could be a single point of failure, but I'm not sure I \n>> understand your concern about the RAID structure itself.\n>>\n>\n> In this configuration, if you have a drive fail on both \n> controllers, the entire RAID dies. Lets label them A1-7, B1-7, \n> because you stripe within a set, if a single one of A dies, and a \n> single one of B dies, you have lost your entire mirror.\n>\n> The correct way of doing it, is to have A1 be a mirror of B1, and \n> then stripe above that. Since you are using 2 7-disk enclosures, \n> I'm not sure how you can do it well, since it is not an even number \n> of disks. Though if you are using software RAID, there should be no \n> problem.\n>\n> The difference is that in this scenario, *all* of the A drives can \n> die, and you haven't lost any data. The only thing you can't lose \n> is a matched pair (eg losing both A1 and B1 will cause complete \n> data loss)\n>\n> I believe the correct notation for this last form is RAID 1 + 0 \n> (RAID10) since you have a set of RAID1 drives, with a RAID0 on-top \n> of them.\n>\n\nI have read up on the difference now. I don't understand why it's a \n\"single point of failure\". Technically any array could be a \"single \npoint\" depending on your level of abstraction. In retrospect, I \nprobably should have gone 8 drives in each and used RAID 10 instead \nfor the better fault-tolerance, but it's online now and will require \nsome planning to see if I want to reconfigure that in the future. I \nwish HP's engineer would have promoted that method instead of 0+1..\n\n-Dan\n\n", "msg_date": "Tue, 9 Aug 2005 16:05:34 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table locking problems?" 
}, { "msg_contents": "Dan Harris wrote:\n> \n> On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:\n> \n>> Dan Harris wrote:\n>>\n>>> On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:\n>>>\n>>>> Dan,\n>>>>\n>>>> Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a\n>>>> clarification, since RAID 0 is still a single-point of failure even if\n>>>> RAID1 is on top of RAID0.\n>>>>\n>>> Well, you tell me if I stated incorrectly. There are two raid \n>>> enclosures with 7 drives in each. Each is on its own bus on a dual- \n>>> channel controller. Each box has a stripe across its drives and \n>>> the enclosures are mirrors of each other. I understand the \n>>> controller could be a single point of failure, but I'm not sure I \n>>> understand your concern about the RAID structure itself.\n>>>\n>>\n>> In this configuration, if you have a drive fail on both controllers, \n>> the entire RAID dies. Lets label them A1-7, B1-7, because you stripe \n>> within a set, if a single one of A dies, and a single one of B dies, \n>> you have lost your entire mirror.\n>>\n>> The correct way of doing it, is to have A1 be a mirror of B1, and \n>> then stripe above that. Since you are using 2 7-disk enclosures, I'm \n>> not sure how you can do it well, since it is not an even number of \n>> disks. Though if you are using software RAID, there should be no \n>> problem.\n>>\n>> The difference is that in this scenario, *all* of the A drives can \n>> die, and you haven't lost any data. The only thing you can't lose is \n>> a matched pair (eg losing both A1 and B1 will cause complete data loss)\n>>\n>> I believe the correct notation for this last form is RAID 1 + 0 \n>> (RAID10) since you have a set of RAID1 drives, with a RAID0 on-top of \n>> them.\n>>\n> \n> I have read up on the difference now. I don't understand why it's a \n> \"single point of failure\". Technically any array could be a \"single \n> point\" depending on your level of abstraction. In retrospect, I \n> probably should have gone 8 drives in each and used RAID 10 instead for \n> the better fault-tolerance, but it's online now and will require some \n> planning to see if I want to reconfigure that in the future. I wish \n> HP's engineer would have promoted that method instead of 0+1..\n\nI wouldn't say that it is a single point of failure, but I *can* say \nthat it is much more likely to fail. (2 drives rather than on average n \ndrives)\n\nIf your devices will hold 8 drives, you could simply do 1 8-drive, and \none 6-drive. And then do RAID1 with pairs, and RAID0 across the \nresultant 7 RAID1 sets.\n\nI'm really surprised that someone promoted RAID 0+1 over RAID10. I think \nI've heard that there is a possible slight performance improvement, but \nreally the failure mode makes it a poor tradeoff.\n\nJohn\n=:->\n\n> \n> -Dan\n>", "msg_date": "Tue, 09 Aug 2005 17:30:17 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems?" }, { "msg_contents": "Dan,\n\nDo you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a\nclarification, since RAID 0 is still a single-point of failure even if\nRAID1 is on top of RAID0. \n\nHow many users are connected when your update / delete queries are\nhanging? Have you done an analyze verbose on those queries?\n\nHave you made changes to the postgresql.conf? kernel.vm settings? 
IO\nscheduler?\n\nIf you're not doing so already, you may consider running sar (iostat) to\nmonitor when the hanging occurs if their is a memory / IO bottleneck\nsomewhere.\n\nGood luck.\n\nSteve Poe\n\n\nOn Tue, 2005-08-09 at 12:04 -0600, Dan Harris wrote:\n> I thought I would send this to pg-performance since so many people \n> helped me with my speed issues recently. I was definitely IO- \n> bottlenecked.\n> \n> Since then, I have installed 2 RAID arrays with 7 15k drives in them \n> in RAID 0+1 as well as add a new controller card with 512MB of cache \n> on it. I also created this new partition on the RAID as XFS instead \n> of ext3.\n> \n> These changes have definitely improved performance, but I am now \n> finding some trouble with UPDATE or DELETE queries \"hanging\" and \n> never releasing their locks. As this happens, other statements queue \n> up behind it. It seems to occur at times of very high loads on the \n> box. Is my only option to kill the query ( which usually takes down \n> the whole postmaster with it! ouch ).\n> \n> Could these locking issues be related to the other changes I made? \n> I'm really scared that this is related to choosing XFS, but I sure \n> hope not. How should I go about troubleshooting the \"problem\" \n> queries? They don't seem to be specific to a single table or single \n> database.\n> \n> I'm running 8.0.1 on kernel 2.6.12-3 on 64-bit Opterons if that \n> matters..\n> \n> \n> -Dan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Wed, 10 Aug 2005 06:49:07 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table locking problems?" } ]
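To expand on Michael's and John's suggestion to look at pg_locks: once stats collection is back on, ungranted lock requests can be matched to the statements involved. This is only a sketch, not a query from the thread; the column names assume Dan's 8.0-era catalogs, where pg_locks.pid joins against pg_stat_activity.procpid.

-- Show each lock, the table it is on, and what that backend is currently
-- running. Rows with granted = false are the blocked requests.
SELECT l.pid,
       l.relation::regclass AS locked_table,
       l.mode,
       l.granted,
       a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 ORDER BY l.granted, l.pid;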
[ { "msg_contents": "So, I have a table game with a timestamp attribute 'game_end', ranging from\njan-2005 to present. The game table also have an attribute state, with live\ngames beeing in state 2, and ended games beeing in state 4 (so,\ngame_end+delta>now() usually means state=4). There are also an insignificant\nnumber of games in states 1,3.\n\nThis query puzzles me:\n\n select * from game where game_end>'2005-07-30' and state in (3,4);\n \nNow, one (at least me) should believe that the best index would be a partial\nindex,\n\n \"resolved_game_by_date\" btree (game_end) WHERE ((state = 3) OR (state = 4))\n \nNBET=> explain analyze select * from game where game_end>'2005-07-30' and state in (3,4);\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using resolved_game_by_date on game (cost=0.00..7002.87 rows=7147 width=555) (actual time=0.220..86.234 rows=3852 loops=1)\n Index Cond: (game_end > '2005-07-30 00:00:00'::timestamp without time zone)\n Filter: ((state = 3) OR (state = 4))\n Total runtime: 90.568 ms\n(4 rows)\n \nSince state has only two significant states, I wouldn't believe this index\nto be any good:\n\n \"game_by_state\" btree (state)\n \n\n...and it seems like I'm right:\n\nNBET=> explain analyze select * from game where game_end>'2005-07-30' and\nstate in (3,4);\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using game_by_state, game_by_state on game (cost=0.00..4413.78 rows=7147 width=555) (actual time=0.074..451.771 rows=3851 loops=1)\n Index Cond: ((state = 3) OR (state = 4))\n Filter: (game_end > '2005-07-30 00:00:00'::timestamp without time zone)\n Total runtime: 457.132 ms\n(4 rows)\n\nNow, how can the planner believe the game_by_state-index to be better?\n\n('vacuum analyze game' did not significantly impact the numbers, and I've\ntried running the queries some times with and without the\ngame_by_state-index to rule out cacheing effects)\n\n-- \nTobias Brox\nThis signature has been virus scanned, and is probably safe to read.\nThis mail may contain confidential information, please keep your eyes closed.\n", "msg_date": "Wed, 10 Aug 2005 19:52:08 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "partial index regarded more expensive" }, { "msg_contents": "\n\twhy not simply create an index on (game_end, state) ?\n", "msg_date": "Wed, 10 Aug 2005 20:15:13 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partial index regarded more expensive" }, { "msg_contents": "[PFC - Wed at 08:15:13PM +0200]\n> \twhy not simply create an index on (game_end, state) ?\n\nNo, the planner prefers to use the partial index (I dropped the index on\ngame(state)).\n\n-- \nTobias Brox, Nordicbet IT dept\nThis signature has been virus scanned, and is probably safe to read.\nThis mail may contain confidential information, please keep your eyes closed.\n", "msg_date": "Wed, 10 Aug 2005 20:31:42 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partial index regarded more expensive" }, { "msg_contents": "Tobias Brox <[email protected]> writes:\n> This query puzzles me:\n> select * from game where game_end>'2005-07-30' and state in (3,4);\n> ...\n> Now, how can the planner believe the game_by_state-index to be better?\n\nI 
suspect the problem has to do with lack of cross-column statistics.\nThe planner does not know that state=4 is correlated with game_end,\nand it's probably coming up with some bogus guesses about the numbers\nof index rows visited in each case. You haven't given enough info to\nquantify this, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Aug 2005 23:14:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partial index regarded more expensive " } ]
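PFC's alternative from earlier in the thread, written out so it can be compared head to head with the partial index: one two-column index covers both conditions of the query. The index name is invented for the example, and whether it actually beats resolved_game_by_date has to be checked with EXPLAIN ANALYZE against the real data, especially given the cross-column correlation Tom describes.

-- Ordinary composite index on the same columns the partial index covers.
CREATE INDEX game_by_end_state ON game (game_end, state);
ANALYZE game;

-- Re-run the problem query and compare the plan against the two shown above.
EXPLAIN ANALYZE
SELECT * FROM game
 WHERE game_end > '2005-07-30' AND state IN (3, 4);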
[ { "msg_contents": "I have a web page for my customers that shows them count of records \nand some min/max date ranges in each table of a database, as this is \nhow we bill them for service. They can log in and check the counts \nat any time. I'd like for the counts to be as fresh as possible by \nkeeping this dynamic, but I will use a periodic 'snapshot'/cron job \nif that is the only option to speed this up. I have thought about \nusing the table statistics, but the estimate error is probably \nunacceptable because of the billing purposes.\n\nFor some reason, the SQL Server we migrated the app from can return \ncount(*) in a split second on multi-million row tables, even though \nit is a MUCH slower box hardware-wise, but it's now taking many \nseconds to run. I have read in the archives the problems MVCC brings \ninto the count(*) dilemma forcing Pg to run a seq scan to get \ncounts. Does SQLServer not use MVCC or have they found another \napproach for arriving at this number? Compounding all the min/max \nand counts from other tables and all those queries take about a \nminute to run. The tables will contain anywhere from 1 million to 40 \nmillion rows.\n\nAlso, I am using \"select ... group by ... order by .. limit 1\" to get \nthe min/max since I have already been bit by the issue of min() max() \nbeing slower.\n\n\n-Dan\n\n\n", "msg_date": "Wed, 10 Aug 2005 17:37:49 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Speedier count(*)" }, { "msg_contents": "\n> Also, I am using \"select ... group by ... order by .. limit 1\" to get \n> the min/max since I have already been bit by the issue of min() max() \n> being slower.\n\nThis specific instance is fixed in 8.1\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n> -Dan\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Wed, 10 Aug 2005 17:23:39 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "On Wed, Aug 10, 2005 at 05:37:49PM -0600, Dan Harris wrote:\n> Also, I am using \"select ... group by ... order by .. limit 1\" to get \n> the min/max since I have already been bit by the issue of min() max() \n> being slower.\n\nPostgreSQL 8.1 will have optimizations for certain MIN and MAX\nqueries.\n\nhttp://archives.postgresql.org/pgsql-committers/2005-04/msg00163.php\nhttp://archives.postgresql.org/pgsql-committers/2005-04/msg00168.php\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 10 Aug 2005 18:36:35 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Dan Harris wrote:\n> I have a web page for my customers that shows them count of records and\n> some min/max date ranges in each table of a database, as this is how we\n> bill them for service. They can log in and check the counts at any\n> time. I'd like for the counts to be as fresh as possible by keeping\n> this dynamic, but I will use a periodic 'snapshot'/cron job if that is\n> the only option to speed this up. 
I have thought about using the\n> table statistics, but the estimate error is probably unacceptable\n> because of the billing purposes.\n>\n> For some reason, the SQL Server we migrated the app from can return\n> count(*) in a split second on multi-million row tables, even though it\n> is a MUCH slower box hardware-wise, but it's now taking many seconds to\n> run. I have read in the archives the problems MVCC brings into the\n> count(*) dilemma forcing Pg to run a seq scan to get counts. Does\n> SQLServer not use MVCC or have they found another approach for arriving\n> at this number? Compounding all the min/max and counts from other\n> tables and all those queries take about a minute to run. The tables\n> will contain anywhere from 1 million to 40 million rows.\n\nI believe SQL Server doesn't use MVCC in the same way. At the very\nleast, it stores some row information in the index, so it can get some\ninfo from just an index, without having to go to the actual page (MVCC\nrequires a main page visit to determine visibility.)\n\nDepending on how much it impacts performance, you can create an\nINSERT/UPDATE trigger so that whenever a new entry is added, it\nautomatically updates a statistics table. It would be maintained as you\ngo, rather than periodically like a cron job.\n\nI would go Cron if things can be slightly out of date (like 1 hour at\nleast), and you need updates & inserts to not be slowed down.\nOtherwise I think the trigger is nicer, since it doesn't do redundant\nwork, and means everything stays up-to-date.\n\n\n>\n> Also, I am using \"select ... group by ... order by .. limit 1\" to get\n> the min/max since I have already been bit by the issue of min() max()\n> being slower.\n>\n>\n> -Dan\n\nJohn\n=:->", "msg_date": "Wed, 10 Aug 2005 20:07:50 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Hi Dan,\n\nOn Wed, 10 Aug 2005, Dan Harris wrote:\n\n> I have a web page for my customers that shows them count of records\n> and some min/max date ranges in each table of a database, as this is\n> how we bill them for service. They can log in and check the counts\n> at any time. I'd like for the counts to be as fresh as possible by\n> keeping this dynamic, but I will use a periodic 'snapshot'/cron job\n> if that is the only option to speed this up. I have thought about\n> using the table statistics, but the estimate error is probably\n> unacceptable because of the billing purposes.\n>\n> For some reason, the SQL Server we migrated the app from can return\n> count(*) in a split second on multi-million row tables, even though\n> it is a MUCH slower box hardware-wise, but it's now taking many\n> seconds to run. I have read in the archives the problems MVCC brings\n> into the count(*) dilemma forcing Pg to run a seq scan to get\n> counts. Does SQLServer not use MVCC or have they found another\n\nSQL Server probably jumps through a lot of hoops to do fast count(*)s. I'm\nsure we could do something similar -- it's just a question of complexity,\nresources, desirability, etc. The are other solutions, which makes the\nidea of doing it less attractive still.\n\n> approach for arriving at this number? Compounding all the min/max\n> and counts from other tables and all those queries take about a\n> minute to run. The tables will contain anywhere from 1 million to 40\n> million rows.\n>\n> Also, I am using \"select ... group by ... order by .. 
limit 1\" to get\n> the min/max since I have already been bit by the issue of min() max()\n> being slower.\n\nI generally pre generate the results. There are two ways to do this: the\n'snapshot'/cronjon you mentioned or using rules and triggers to maintain\n'count' tables. The idea is that if data is added, modified or removed\nfrom your table, you modify counters in these other tables.\n\nAlternatively, feel free to post your schema and sample queries with\nexplain analyze results to this list. Alternatively, jump on irc at\nirc.freenode.net #postgresql and someone will be more than happy to look\nthrough the problem in more detail.\n\nThanks,\n\nGavin\n", "msg_date": "Thu, 11 Aug 2005 13:52:04 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Here's a trigger I wrote to perform essentially the same purpose. The nice\nthing about this is it keeps the number up to date for you, but you do incur\nslight overhead.\n\nCREATE TABLE test (id serial primary key, name varchar(20));\n\nCREATE TABLE rowcount (tablename varchar(50), rowcount bigint default 0);\nCREATE INDEX rowcount_tablename ON rowcount(tablename);\n\nCREATE OR REPLACE FUNCTION del_rowcount() RETURNS trigger AS $$\nBEGIN\n UPDATE rowcount SET rowcount = rowcount-1 WHERE tablename = TG_RELNAME;\n RETURN OLD;\nEND;\n$$ LANGUAGE PLPGSQL;\n\nCREATE OR REPLACE FUNCTION add_rowcount() RETURNS trigger AS $$\nBEGIN\n UPDATE rowcount SET rowcount = rowcount+1 WHERE tablename = TG_RELNAME;\n RETURN NEW;\nEND;\n$$ LANGUAGE PLPGSQL;\n\nCREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE\n PROCEDURE del_rowcount();\nCREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE\n PROCEDURE add_rowcount();\n\nINSERT INTO rowcount (tablename) VALUES ('test');\n\nroot=# select * from test;\n id | name \n----+------\n(0 rows)\n\nTime: 0.934 ms\nroot=# select * from rowcount;\n tablename | rowcount\n-----------+----------\n test | 0\n(1 row)\n\nTime: 0.630 ms\nroot=# insert into test (name) values ('blah');\nINSERT 1190671626 1\nTime: 3.278 ms\nroot=# select * from test;\n id | name \n----+------\n 5 | blah\n(1 row)\n\nTime: 0.612 ms\nroot=# select * from rowcount;\n tablename | rowcount\n-----------+----------\n test | 1\n(1 row)\n\nTime: 0.640 ms\nroot=# insert into test (name) values ('blah');\nINSERT 1190671627 1\nTime: 1.677 ms\nroot=# select * from test;\n id | name \n----+------\n 5 | blah\n 6 | blah\n(2 rows)\n\nTime: 0.653 ms\nroot=# select * from rowcount;\n tablename | rowcount\n-----------+----------\n test | 2\n(1 row)\n\nTime: 0.660 ms\nroot=# delete from test where id = 6;\nDELETE 1\nTime: 2.412 ms\nroot=# select * from test;\n id | name \n----+------\n 5 | blah\n(1 row)\n\nTime: 0.631 ms\nroot=# select * from rowcount;\n tablename | rowcount\n-----------+----------\n test | 1\n(1 row)\n\nTime: 0.609 ms\n\nOne thing to be mindful of . . . Truncate is NOT accounted for with this,\nand unfortunately the rule system doesn't allow truncate operations so you\ncan't work around it that way.\n\n'njoy,\nMark\n\n\nOn 8/10/05 11:52 PM, \"Gavin Sherry\" <[email protected]> wrote:\n\n> Hi Dan,\n> \n> On Wed, 10 Aug 2005, Dan Harris wrote:\n> \n>> I have a web page for my customers that shows them count of records\n>> and some min/max date ranges in each table of a database, as this is\n>> how we bill them for service. They can log in and check the counts\n>> at any time. 
I'd like for the counts to be as fresh as possible by\n>> keeping this dynamic, but I will use a periodic 'snapshot'/cron job\n>> if that is the only option to speed this up. I have thought about\n>> using the table statistics, but the estimate error is probably\n>> unacceptable because of the billing purposes.\n>> \n>> For some reason, the SQL Server we migrated the app from can return\n>> count(*) in a split second on multi-million row tables, even though\n>> it is a MUCH slower box hardware-wise, but it's now taking many\n>> seconds to run. I have read in the archives the problems MVCC brings\n>> into the count(*) dilemma forcing Pg to run a seq scan to get\n>> counts. Does SQLServer not use MVCC or have they found another\n> \n> SQL Server probably jumps through a lot of hoops to do fast count(*)s. I'm\n> sure we could do something similar -- it's just a question of complexity,\n> resources, desirability, etc. The are other solutions, which makes the\n> idea of doing it less attractive still.\n> \n>> approach for arriving at this number? Compounding all the min/max\n>> and counts from other tables and all those queries take about a\n>> minute to run. The tables will contain anywhere from 1 million to 40\n>> million rows.\n>> \n>> Also, I am using \"select ... group by ... order by .. limit 1\" to get\n>> the min/max since I have already been bit by the issue of min() max()\n>> being slower.\n> \n> I generally pre generate the results. There are two ways to do this: the\n> 'snapshot'/cronjon you mentioned or using rules and triggers to maintain\n> 'count' tables. The idea is that if data is added, modified or removed\n> from your table, you modify counters in these other tables.\n> \n> Alternatively, feel free to post your schema and sample queries with\n> explain analyze results to this list. Alternatively, jump on irc at\n> irc.freenode.net #postgresql and someone will be more than happy to look\n> through the problem in more detail.\n> \n> Thanks,\n> \n> Gavin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "Thu, 11 Aug 2005 00:40:23 -0400", "msg_from": "Mark Cotner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Am Donnerstag, den 11.08.2005, 00:40 -0400 schrieb Mark Cotner:\n> Here's a trigger I wrote to perform essentially the same purpose. The nice\n> thing about this is it keeps the number up to date for you, but you do incur\n> slight overhead.\n...\n> \n> CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE\n> PROCEDURE del_rowcount();\n> CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE\n> PROCEDURE add_rowcount();\n> \n> INSERT INTO rowcount (tablename) VALUES ('test');\n...\n\nbeware of problems with concurrency and even what happens\nif transactions roll back. Maybe you can \"fix\" it a bit\nby regulary correcting the count via cronjob or so.\n\n", "msg_date": "Thu, 11 Aug 2005 09:24:08 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "On Thu, 11 Aug 2005, Tino Wildenhain wrote:\n\n> Am Donnerstag, den 11.08.2005, 00:40 -0400 schrieb Mark Cotner:\n> > Here's a trigger I wrote to perform essentially the same purpose. 
The nice\n> > thing about this is it keeps the number up to date for you, but you do incur\n> > slight overhead.\n> ...\n> >\n> > CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE\n> > PROCEDURE del_rowcount();\n> > CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE\n> > PROCEDURE add_rowcount();\n> >\n> > INSERT INTO rowcount (tablename) VALUES ('test');\n> ...\n>\n> beware of problems with concurrency and even what happens\n> if transactions roll back. Maybe you can \"fix\" it a bit\n> by regulary correcting the count via cronjob or so.\n\nWhat problems? MVCC takes care of this.\n\nGavin\n", "msg_date": "Thu, 11 Aug 2005 20:36:58 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Am Donnerstag, den 11.08.2005, 20:36 +1000 schrieb Gavin Sherry:\n> On Thu, 11 Aug 2005, Tino Wildenhain wrote:\n> \n> > Am Donnerstag, den 11.08.2005, 00:40 -0400 schrieb Mark Cotner:\n> > > Here's a trigger I wrote to perform essentially the same purpose. The nice\n> > > thing about this is it keeps the number up to date for you, but you do incur\n> > > slight overhead.\n> > ...\n> > >\n> > > CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE\n> > > PROCEDURE del_rowcount();\n> > > CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE\n> > > PROCEDURE add_rowcount();\n> > >\n> > > INSERT INTO rowcount (tablename) VALUES ('test');\n> > ...\n> >\n> > beware of problems with concurrency and even what happens\n> > if transactions roll back. Maybe you can \"fix\" it a bit\n> > by regulary correcting the count via cronjob or so.\n> \n> What problems? MVCC takes care of this.\n\nActually in this case MVCC works against you.\nJust imagine some competing transactions to insert\nend delete at will. \n\nYou could lock the count table to prevent the problem\nwhere 2 competing transactions do an insert, read the\nstart value and add 1 to it and then write the result\n- which is n+1 rather then n+2 - so you are off by one.\nThink of the same when one transaction inserts 100\nand the other 120. Then you could even be off by 100.\n\nBut locking probably gets your worser performance then\nsimply count(*) all the time if you insert a lot. Also\nprepare for the likeness of deadlocks.\n\n", "msg_date": "Thu, 11 Aug 2005 12:52:16 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "\n\n> You could lock the count table to prevent the problem\n> where 2 competing transactions do an insert, read the\n> start value and add 1 to it and then write the result\n> - which is n+1 rather then n+2 - so you are off by one.\n> Think of the same when one transaction inserts 100\n> and the other 120. Then you could even be off by 100.\n\n\tNiet.\n\n\tIf your trigger does UPDATE counts_cache SET cached_count = \ncached_count+N WHERE ...\n\tThen all locking is taken care of by Postgres.\n\tOf course if you use 2 queries then you have locking issues.\n\n\tHowever the UPDATE counts_cache has a problem, ie. 
it locks this row FOR \nUPDATE for the whole transaction, and all transactions which want to \nupdate the same row must wait to see if the update commits or rollbacks, \nso if you have one count cache row for the whole table you get MySQL style \nscalability...\n\n\tTo preserve scalability you could, instead of UPDATE, INSERT the delta of \nrows inserted/deleted in a table (which has no concurrencies issues) and \ncompute the current count with the sum() of the deltas, then with a cron, \nconsolidate the deltas and update the count_cache table so that the deltas \ntable stays very small.\n", "msg_date": "Thu, 11 Aug 2005 14:08:34 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Am Donnerstag, den 11.08.2005, 14:08 +0200 schrieb PFC:\n> \n> > You could lock the count table to prevent the problem\n> > where 2 competing transactions do an insert, read the\n> > start value and add 1 to it and then write the result\n> > - which is n+1 rather then n+2 - so you are off by one.\n> > Think of the same when one transaction inserts 100\n> > and the other 120. Then you could even be off by 100.\n> \n> \tNiet.\n> \n> \tIf your trigger does UPDATE counts_cache SET cached_count = \n> cached_count+N WHERE ...\n> \tThen all locking is taken care of by Postgres.\n> \tOf course if you use 2 queries then you have locking issues.\n\nYes, in the case you use just the UPDATE statement you are right. This\ndoes the locking I was talking about.\n\nIn either case I'd use an after trigger and not before to minimize\nthe impact.\n\n> \tHowever the UPDATE counts_cache has a problem, ie. it locks this row FOR \n> UPDATE for the whole transaction, and all transactions which want to \n> update the same row must wait to see if the update commits or rollbacks, \n> so if you have one count cache row for the whole table you get MySQL style \n> scalability...\n> \n> \tTo preserve scalability you could, instead of UPDATE, INSERT the delta of \n> rows inserted/deleted in a table (which has no concurrencies issues) and \n> compute the current count with the sum() of the deltas, then with a cron, \n> consolidate the deltas and update the count_cache table so that the deltas \n> table stays very small.\n\nYes, this is in fact a better approach to this problem.\n\n(All this provided you want an unqualified count() - as the \n original poster)\n\n\n\n", "msg_date": "Thu, 11 Aug 2005 14:34:00 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speedier count(*)" }, { "msg_contents": "Thanks for all the great ideas. I have more options to evaluate now.\n\n-Dan\n\n", "msg_date": "Thu, 11 Aug 2005 09:33:00 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speedier count(*)" } ]
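[Editor's note] PFC's delta-table refinement is only described in prose in the thread above, so here is a minimal sketch of how it could look, building on the `test` table and rowcount idea shown earlier in the thread. The helper names (rowcount_cache, rowcount_deltas, note_rowcount_delta) and the consolidation job are hypothetical, and the short exclusive lock during consolidation is just one way to keep the sum and the delete consistent.

-- one consolidated value per table, plus an append-only deltas table
CREATE TABLE rowcount_cache  (tablename varchar(50) PRIMARY KEY, rowcount bigint DEFAULT 0);
CREATE TABLE rowcount_deltas (tablename varchar(50), delta integer);
INSERT INTO rowcount_cache (tablename) VALUES ('test');

CREATE OR REPLACE FUNCTION note_rowcount_delta() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO rowcount_deltas VALUES (TG_RELNAME, 1);
        RETURN NEW;
    ELSE
        INSERT INTO rowcount_deltas VALUES (TG_RELNAME, -1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_rowcount_tr AFTER INSERT OR DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE note_rowcount_delta();

-- current count: consolidated value plus whatever deltas are outstanding
SELECT c.rowcount + COALESCE((SELECT sum(d.delta)
                                FROM rowcount_deltas d
                               WHERE d.tablename = c.tablename), 0) AS rowcount
  FROM rowcount_cache c
 WHERE c.tablename = 'test';

-- consolidation, run periodically (e.g. cron + psql); the brief exclusive
-- lock stops new deltas from slipping in between the UPDATE and the DELETE
BEGIN;
LOCK TABLE rowcount_deltas IN EXCLUSIVE MODE;
UPDATE rowcount_cache
   SET rowcount = rowcount + COALESCE((SELECT sum(d.delta)
                                         FROM rowcount_deltas d
                                        WHERE d.tablename = rowcount_cache.tablename), 0);
DELETE FROM rowcount_deltas;
COMMIT;

As with the simpler trigger earlier in the thread, TRUNCATE on the counted table is not seen by the row triggers, while rolled-back transactions discard their delta rows automatically, which is what keeps the figure correct under MVCC without locking the counter row on every insert.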
[ { "msg_contents": ">\n>hi, i got one situation here, i create one pl/pgsql function that using temp table to store temporary data.\n>wherever i execute my function, i need to delete all the data inside the temp table, but this will slow down the \n>searching function if i conitnue to run the server because old tuples are not really clear if just using delete command.\n>so i use drop table command and recreate the table. my question is, would it slow down the postmaster speed if i continue to \n>run this searching function more than 300 time per day?, cause the speed for execute searching function will graduatelly increase \n>after i test it for few day? anyway to test it is causing by the drop temp table and create temp table command?\n>\n>regards\n>ivan\n\n\n\n\n\n\n>\n>hi, i got one situation here, i create one \npl/pgsql function that using temp table to store temporary data.\n>wherever i execute my function, i need to \ndelete all the data inside the temp table, but this will slow down the \n\n>searching function if i conitnue to run the \nserver because old tuples are not really clear if just using delete \ncommand.\n>so i use drop table command and recreate the \ntable. my question is, would it slow down the postmaster speed if i continue to \n\n>run this searching function more than 300 time \nper day?, cause the speed for execute searching function will graduatelly \nincrease \n>after i test it for few day? anyway to test it \nis causing by the drop temp table and create temp table command?\n>\n>regards\n>ivan", "msg_date": "Thu, 11 Aug 2005 11:37:57 +0800", "msg_from": "\"Chun Yit(Chronos)\" <[email protected]>", "msg_from_op": true, "msg_subject": "it is always delete temp table will slow down the postmaster?" }, { "msg_contents": "\n\"\"Chun Yit(Chronos)\"\" <[email protected]> writes\n>\n>hi, i got one situation here, i create one pl/pgsql function that using\ntemp table to store temporary data.\n>wherever i execute my function, i need to delete all the data inside the\ntemp table, but this will slow down the\n>searching function if i conitnue to run the server because old tuples are\nnot really clear if just using delete command.\n>so i use drop table command and recreate the table.\n\nA better way to empty a table fast is \"truncate table\".\n\nRegards,\nQingqing\n\n\n", "msg_date": "Thu, 11 Aug 2005 14:37:20 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: it is always delete temp table will slow down the postmaster?" 
}, { "msg_contents": "\n----- Original Message ----- \nFrom: \"Qingqing Zhou\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, August 11, 2005 2:37 PM\nSubject: Re: [PERFORM] it is always delete temp table will slow down the \npostmaster?\n\n\n>\n> \"\"Chun Yit(Chronos)\"\" <[email protected]> writes\n>>\n>>hi, i got one situation here, i create one pl/pgsql function that using\n> temp table to store temporary data.\n>>wherever i execute my function, i need to delete all the data inside the\n> temp table, but this will slow down the\n>>searching function if i conitnue to run the server because old tuples are\n> not really clear if just using delete command.\n>>so i use drop table command and recreate the table.\n>\n> A better way to empty a table fast is \"truncate table\".\n>\n> Regards,\n> Qingqing\n>\n\n>sorry, but truncate table cannot use inside function, any other way ?\n>Regards\n>ivan\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.338 / Virus Database: 267.10.5/68 - Release Date: 10/Aug/05\n> \n\n", "msg_date": "Thu, 11 Aug 2005 15:03:30 +0800", "msg_from": "\"Chun Yit(Chronos)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: it is always delete temp table will slow down the postmaster?" } ]
[ { "msg_contents": "Hi everyone,\n\nI have some problems with a quite long query and the plan postgreSQL is \nchoosing. The query joins 12 tables and in the WHERE clause I use a IN \nexpression with a lot of identifiers (up to 2000). The problem is that \nthe planner is proposing a seq_scan on two tables 2M rows each \n(internalexpressionprofile and expressionprofile)\n\nI have just try this query (after doing a vacuum analyze), in the 'IN' \nclause there are 1552 identifiers, and the query should return 14K rows.\nI'm using a PostgreSQL 8.0.2 on a SuSE 8.1 with 1GB of RAM.\n\nexplain analyze SELECT DISTINCT rset.replicatesetid, tra.value as value, \ntra.expressionprofileid, rep.*, epg.expprogeneid, con.ordinal\nFROM expprogene epg JOIN reporter rep ON \n(epg.reporterid=rep.reporterid), expressionprofile epro,\ntransformedexpressionprofile tra, internalexpressionprofile int,\nmeanvalues mea, replicateset rset, replicateset_condition rsco, \ncondition con,\n\"CLUSTER\" clu, clustertree tre, clusteranalysis an\nWHERE epg.expprogeneid IN (80174,84567,...) AND \nepg.expprogeneid=epro.expprogeneid\nAND epro.expressionprofileid=tra.expressionprofileid AND \ntra.expressionprofileid=int.expressionprofileid\nAND int.meanvaluesid=mea.meanvaluesid AND \nmea.replicatesetid=rset.replicatesetid\nAND rset.replicatesetid=rsco.replicatesetid AND \nrsco.conditionid=con.conditionid\nAND tra.clusterid=clu.clusterid AND clu.clustertreeid=tre.clustertreeid \nAND tre.clustertreeid=an.genetreeid\nAND an.clusteranalysisid=1 AND con.clusteranalysisid = an.clusteranalysisid\nORDER BY epg.expprogeneid, con.ordinal;\n\nThe plan...\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=129132.53..129132.59 rows=2 width=150) (actual \ntime=12637.224..12676.016 rows=13968 loops=1)\n -> Sort (cost=129132.53..129132.54 rows=2 width=150) (actual \ntime=12637.217..12646.484 rows=13968 loops=1)\n Sort Key: epg.expprogeneid, con.ordinal, rset.replicatesetid, \ntra.value, tra.expressionprofileid, rep.reporterid, rep.name, \nrep.anotation, rep.otherinfo, rep.incidences\n -> Nested Loop (cost=62927.42..129132.52 rows=2 width=150) \n(actual time=7112.942..12586.314 rows=13968 loops=1)\n Join Filter: (\"outer\".genetreeid = \"inner\".clustertreeid)\n -> Nested Loop (cost=62927.42..127893.86 rows=409 \nwidth=162) (actual time=7112.864..11960.324 rows=41904 loops=1)\n -> Nested Loop (cost=62927.42..125727.31 rows=369 \nwidth=154) (actual time=7112.825..11500.645 rows=13968 loops=1)\n -> Merge Join (cost=3.02..7.70 rows=1 \nwidth=12) (actual time=0.057..0.073 rows=1 loops=1)\n Merge Cond: (\"outer\".clustertreeid = \n\"inner\".genetreeid)\n -> Index Scan using clustertree_pk on \nclustertree tre (cost=0.00..4.35 rows=123 width=4) (actual \ntime=0.017..0.024 rows=2 loops=1)\n -> Sort (cost=3.02..3.03 rows=1 \nwidth=8) (actual time=0.028..0.030 rows=1 loops=1)\n Sort Key: an.genetreeid\n -> Index Scan using \nclusteranalysis_pk on clusteranalysis an (cost=0.00..3.01 rows=1 \nwidth=8) (actual time=0.015..0.018 rows=1 loops=1)\n Index Cond: \n(clusteranalysisid = 1)\n -> Hash Join (cost=62924.39..125715.53 \nrows=408 width=150) (actual time=7112.758..11455.797 rows=13968 loops=1)\n Hash Cond: (\"outer\".expressionprofileid \n= \"inner\".expressionprofileid)\n -> Hash Join (cost=15413.58..78079.33 \nrows=24339 width=134) (actual time=1489.347..5721.306 
rows=41904 loops=1)\n Hash Cond: (\"outer\".expprogeneid \n= \"inner\".expprogeneid)\n -> Seq Scan on expressionprofile \nepro (cost=0.00..48263.24 rows=2831824 width=8) (actual \ntime=0.039..3097.656 rows=2839676 loops=1)\n -> Hash \n(cost=15409.72..15409.72 rows=1546 width=130) (actual \ntime=43.365..43.365 rows=0 loops=1)\n -> Nested Loop \n(cost=0.00..15409.72 rows=1546 width=130) (actual time=0.056..40.637 \nrows=1552 loops=1)\n -> Index Scan using \nexpprogene_pk, expprogene_pk, [......] on expprogene epg \n(cost=0.00..10698.83 rows=1546 width=8) (actual time=0.027..15.907 \nrows=1552 loops=1)\n Index Cond: \n((expprogeneid = 80174) OR (expprogeneid = 84567) OR (expprogeneid = \n83608) OR [OR ....])\n -> Index Scan using \nreporter_pkey on reporter rep (cost=0.00..3.03 rows=1 width=126) \n(actual time=0.009..0.010 rows=1 loops=1552)\n Index Cond: \n(\"outer\".reporterid = rep.reporterid)\n -> Hash (cost=47403.68..47403.68 \nrows=42853 width=16) (actual time=5623.174..5623.174 rows=0 loops=1)\n -> Hash Join \n(cost=2369.91..47403.68 rows=42853 width=16) (actual \ntime=346.040..5538.571 rows=75816 loops=1)\n Hash Cond: \n(\"outer\".meanvaluesid = \"inner\".meanvaluesid)\n -> Seq Scan on \ninternalexpressionprofile \"int\" (cost=0.00..34506.16 rows=2019816 \nwidth=8) (actual time=0.003..2231.427 rows=2019816 loops=1)\n -> Hash \n(cost=2262.78..2262.78 rows=42853 width=16) (actual \ntime=345.803..345.803 rows=0 loops=1)\n -> Nested Loop \n(cost=17.49..2262.78 rows=42853 width=16) (actual time=1.965..259.363 \nrows=75816 loops=1)\n -> Hash Join \n(cost=17.49..28.42 rows=6 width=16) (actual time=1.881..2.387 rows=9 \nloops=1)\n Hash \nCond: (\"outer\".replicatesetid = \"inner\".replicatesetid)\n -> Seq \nScan on replicateset rset (cost=0.00..9.58 rows=258 width=4) (actual \ntime=0.003..0.295 rows=258 loops=1)\n -> Hash \n(cost=17.47..17.47 rows=6 width=12) (actual time=1.575..1.575 rows=0 \nloops=1)\n -> \nHash Join (cost=3.17..17.47 rows=6 width=12) (actual time=0.315..1.557 \nrows=9 loops=1)\n \nHash Cond: (\"outer\".conditionid = \"inner\".conditionid)\n \n-> Seq Scan on replicateset_condition rsco (cost=0.00..10.83 rows=683 \nwidth=8) (actual time=0.004..0.688 rows=683 loops=1)\n \n-> Hash (cost=3.14..3.14 rows=9 width=12) (actual time=0.059..0.059 \nrows=0 loops=1)\n \n-> Index Scan using clustering_analysis_fk on condition con \n(cost=0.00..3.14 rows=9 width=12) (actual time=0.019..0.039 rows=9 loops=1)\n \nIndex Cond: (clusteranalysisid = 1)\n -> Index Scan \nusing has_meanvalues_fk on meanvalues mea (cost=0.00..264.03 rows=8669 \nwidth=8) (actual time=0.027..13.032 rows=8424 loops=9)\n Index \nCond: (\"outer\".replicatesetid = mea.replicatesetid)\n -> Index Scan using comes_from_raw_fk on \ntransformedexpressionprofile tra (cost=0.00..5.86 rows=1 width=16) \n(actual time=0.010..0.018 rows=3 loops=13968)\n Index Cond: (tra.expressionprofileid = \n\"outer\".expressionprofileid)\n -> Index Scan using _cluster__pk on \"CLUSTER\" clu \n(cost=0.00..3.01 rows=1 width=8) (actual time=0.009..0.010 rows=1 \nloops=41904)\n Index Cond: (\"outer\".clusterid = clu.clusterid)\n Total runtime: 12696.289 ms\n(48 rows)\n\nI tried setting the enable_seq_scan to off and the query's runtime \nreturned by the explain analyze is 4000ms.\nWhy postgre is not using the indexes?\nWhat is the real impact of having such a big 'IN' clause?\n\n\nThanks in advance,\n\nLuis Cornide\n\n\n\n\n\n\n\nHi everyone,\n\nI have some problems with a quite long query and the plan postgreSQL is\nchoosing. 
The query joins 12 tables and in the WHERE clause I use a IN\nexpression with a lot of identifiers (up to 2000). The problem is that\nthe planner is proposing a seq_scan on two tables 2M rows each (internalexpressionprofile and\nexpressionprofile) \n\nI have just try this query (after doing a vacuum analyze), in the 'IN'\nclause there are 1552 identifiers, and the query should return 14K rows.\nI'm using a PostgreSQL 8.0.2 on a SuSE 8.1 with 1GB of RAM.\n\nexplain analyze SELECT DISTINCT rset.replicatesetid, tra.value as\nvalue, tra.expressionprofileid, rep.*, epg.expprogeneid,  con.ordinal \nFROM expprogene epg JOIN reporter rep ON \n(epg.reporterid=rep.reporterid), expressionprofile epro, \ntransformedexpressionprofile tra, internalexpressionprofile int, \nmeanvalues mea, replicateset rset, replicateset_condition rsco,\ncondition con, \n\"CLUSTER\" clu, clustertree tre, clusteranalysis an\nWHERE epg.expprogeneid IN (80174,84567,...) AND\nepg.expprogeneid=epro.expprogeneid \nAND epro.expressionprofileid=tra.expressionprofileid AND\ntra.expressionprofileid=int.expressionprofileid \nAND int.meanvaluesid=mea.meanvaluesid AND\nmea.replicatesetid=rset.replicatesetid \nAND rset.replicatesetid=rsco.replicatesetid AND\nrsco.conditionid=con.conditionid \nAND tra.clusterid=clu.clusterid AND clu.clustertreeid=tre.clustertreeid\nAND tre.clustertreeid=an.genetreeid\nAND an.clusteranalysisid=1 AND con.clusteranalysisid =\nan.clusteranalysisid \nORDER BY epg.expprogeneid, con.ordinal;\n\nThe plan...\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=129132.53..129132.59 rows=2 width=150) (actual\ntime=12637.224..12676.016 rows=13968 loops=1)\n   ->  Sort  (cost=129132.53..129132.54 rows=2 width=150) (actual\ntime=12637.217..12646.484 rows=13968 loops=1)\n         Sort Key: epg.expprogeneid, con.ordinal, rset.replicatesetid,\ntra.value, tra.expressionprofileid, rep.reporterid, rep.name,\nrep.anotation, rep.otherinfo, rep.incidences\n         ->  Nested Loop  (cost=62927.42..129132.52 rows=2\nwidth=150) (actual time=7112.942..12586.314 rows=13968 loops=1)\n               Join Filter: (\"outer\".genetreeid = \"inner\".clustertreeid)\n               ->  Nested Loop  (cost=62927.42..127893.86 rows=409\nwidth=162) (actual time=7112.864..11960.324 rows=41904 loops=1)\n                     ->  Nested Loop  (cost=62927.42..125727.31\nrows=369 width=154) (actual time=7112.825..11500.645 rows=13968 loops=1)\n                           ->  Merge Join  (cost=3.02..7.70 rows=1\nwidth=12) (actual time=0.057..0.073 rows=1 loops=1)\n                                 Merge Cond: (\"outer\".clustertreeid =\n\"inner\".genetreeid)\n                                 ->  Index Scan using clustertree_pk\non clustertree tre  (cost=0.00..4.35 rows=123 width=4) (actual\ntime=0.017..0.024 rows=2 loops=1)\n                                 ->  Sort  (cost=3.02..3.03 rows=1\nwidth=8) (actual time=0.028..0.030 rows=1 loops=1)\n                                       Sort Key: an.genetreeid\n                                       ->  Index Scan using\nclusteranalysis_pk on clusteranalysis an  (cost=0.00..3.01 rows=1\nwidth=8) (actual time=0.015..0.018 rows=1 loops=1)\n                                             Index Cond:\n(clusteranalysisid = 1)\n                           ->  Hash Join  (cost=62924.39..125715.53\nrows=408 width=150) (actual 
time=7112.758..11455.797 rows=13968 loops=1)\n                                 Hash Cond:\n(\"outer\".expressionprofileid = \"inner\".expressionprofileid)\n                                 ->  Hash Join \n(cost=15413.58..78079.33 rows=24339 width=134) (actual\ntime=1489.347..5721.306 rows=41904 loops=1)\n                                       Hash Cond: (\"outer\".expprogeneid\n= \"inner\".expprogeneid)\n                                       ->  Seq Scan on\nexpressionprofile epro  (cost=0.00..48263.24 rows=2831824 width=8)\n(actual time=0.039..3097.656 rows=2839676 loops=1)\n                                       ->  Hash \n(cost=15409.72..15409.72 rows=1546 width=130) (actual\ntime=43.365..43.365 rows=0 loops=1)\n                                             ->  Nested Loop \n(cost=0.00..15409.72 rows=1546 width=130) (actual time=0.056..40.637\nrows=1552 loops=1)\n                                                   ->  Index Scan\nusing expprogene_pk, expprogene_pk, [......] on expprogene epg \n(cost=0.00..10698.83 rows=1546 width=8) (actual time=0.027..15.907\nrows=1552 loops=1)\n                                                         Index Cond:\n((expprogeneid = 80174) OR (expprogeneid = 84567) OR (expprogeneid =\n83608) OR [OR ....])\n                                                   ->  Index Scan\nusing reporter_pkey on reporter rep  (cost=0.00..3.03 rows=1 width=126)\n(actual time=0.009..0.010 rows=1 loops=1552)\n                                                         Index Cond:\n(\"outer\".reporterid = rep.reporterid)\n                                 ->  Hash  (cost=47403.68..47403.68\nrows=42853 width=16) (actual time=5623.174..5623.174 rows=0 loops=1)\n                                       ->  Hash Join \n(cost=2369.91..47403.68 rows=42853 width=16) (actual\ntime=346.040..5538.571 rows=75816 loops=1)\n                                             Hash Cond:\n(\"outer\".meanvaluesid = \"inner\".meanvaluesid)\n                                             ->  Seq Scan on\ninternalexpressionprofile \"int\"  (cost=0.00..34506.16 rows=2019816\nwidth=8) (actual time=0.003..2231.427 rows=2019816 loops=1)\n                                             ->  Hash \n(cost=2262.78..2262.78 rows=42853 width=16) (actual\ntime=345.803..345.803 rows=0 loops=1)\n                                                   ->  Nested Loop \n(cost=17.49..2262.78 rows=42853 width=16) (actual time=1.965..259.363\nrows=75816 loops=1)\n                                                         ->  Hash\nJoin  (cost=17.49..28.42 rows=6 width=16) (actual time=1.881..2.387\nrows=9 loops=1)\n                                                               Hash\nCond: (\"outer\".replicatesetid = \"inner\".replicatesetid)\n                                                               -> \nSeq Scan on replicateset rset  (cost=0.00..9.58 rows=258 width=4)\n(actual time=0.003..0.295 rows=258 loops=1)\n                                                               -> \nHash  (cost=17.47..17.47 rows=6 width=12) (actual time=1.575..1.575\nrows=0 loops=1)\n                                                                    \n->  Hash Join  (cost=3.17..17.47 rows=6 width=12) (actual\ntime=0.315..1.557 rows=9 loops=1)\n                                                                          \nHash Cond: (\"outer\".conditionid = \"inner\".conditionid)\n                                                                          \n->  Seq Scan on replicateset_condition rsco  (cost=0.00..10.83\nrows=683 width=8) (actual 
time=0.004..0.688 rows=683 loops=1)\n                                                                          \n->  Hash  (cost=3.14..3.14 rows=9 width=12) (actual\ntime=0.059..0.059 rows=0 loops=1)\n                                                                                \n->  Index Scan using clustering_analysis_fk on condition con \n(cost=0.00..3.14 rows=9 width=12) (actual time=0.019..0.039 rows=9\nloops=1)\n                                                                                      \nIndex Cond: (clusteranalysisid = 1)\n                                                         ->  Index\nScan using has_meanvalues_fk on meanvalues mea  (cost=0.00..264.03\nrows=8669 width=8) (actual time=0.027..13.032 rows=8424 loops=9)\n                                                               Index\nCond: (\"outer\".replicatesetid = mea.replicatesetid)\n                     ->  Index Scan using comes_from_raw_fk on\ntransformedexpressionprofile tra  (cost=0.00..5.86 rows=1 width=16)\n(actual time=0.010..0.018 rows=3 loops=13968)\n                           Index Cond: (tra.expressionprofileid =\n\"outer\".expressionprofileid)\n               ->  Index Scan using _cluster__pk on \"CLUSTER\" clu \n(cost=0.00..3.01 rows=1 width=8) (actual time=0.009..0.010 rows=1\nloops=41904)\n                     Index Cond: (\"outer\".clusterid = clu.clusterid)\n Total runtime: 12696.289 ms\n(48 rows)\n\nI tried setting the enable_seq_scan to off and the query's runtime\nreturned by the explain analyze is 4000ms.\nWhy postgre is not using the indexes?\nWhat is the real impact of having such a big 'IN' clause?\n\n\nThanks in advance,\n\nLuis Cornide", "msg_date": "Thu, 11 Aug 2005 13:33:37 +0200", "msg_from": "Luis Cornide Arce <[email protected]>", "msg_from_op": true, "msg_subject": "Why is not using the index" }, { "msg_contents": "Luis Cornide Arce wrote:\n> Hi everyone,\n> \n> I have some problems with a quite long query and the plan postgreSQL is \n> choosing. The query joins 12 tables and in the WHERE clause I use a IN \n> expression with a lot of identifiers (up to 2000). The problem is that \n> the planner is proposing a seq_scan on two tables 2M rows each \n> (internalexpressionprofile and expressionprofile)\n> \n> I have just try this query (after doing a vacuum analyze), in the 'IN' \n> clause there are 1552 identifiers, and the query should return 14K rows.\n> I'm using a PostgreSQL 8.0.2 on a SuSE 8.1 with 1GB of RAM.\n\n> WHERE epg.expprogeneid IN (80174,84567,...) AND \n> epg.expprogeneid=epro.expprogeneid\n\n-> Hash Join\n\t(cost=15413.58..78079.33 rows=24339 width=134)\n\t(actual time=1489.347..5721.306 rows=41904 loops=1)\n\tHash Cond: (\"outer\".expprogeneid = \"inner\".expprogeneid)\n -> Seq Scan on expressionprofile epro\n\t\t(cost=0.00..48263.24 rows=2831824 width=8)\n\t\t(actual time=0.039..3097.656 rows=2839676 loops=1)\n\n-> Index Scan using\nexpprogene_pk, expprogene_pk, [......] on expprogene epg\n(cost=0.00..10698.83 rows=1546 width=8) (actual time=0.027..15.907\nrows=1552 loops=1)\n\tIndex Cond: ((expprogeneid = 80174) OR (expprogeneid = 84567)\n\tOR (expprogeneid = 83608) OR [OR ....])\n\nOK - it looks like the \"IN\" clause is using your index. 
The fact that \nit's using a Seq-scan on \"expressionprofile epro\" looks odd though, \nespecially since it expects 24339 matches (out of 2.8 million rows - \nthat should favour an index).\n\nOf course, I've not considered the context of the rest of the query, but \nI'd expect the index to be used.\n\nDo you have any unusual config settings?\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 11 Aug 2005 13:49:10 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is not using the index" }, { "msg_contents": "\nWell I have change the next setting in the postgresql.conf\n\nshared_buffers= 16384\nwork_mem =32768\nmaintenance_work_mem= 65536\nbgwriter_delay =800\nbgwriter_maxpages= 100\nwal_buffers =64\nefective_cache_size= 20000\n\nThe rest of the settings are the default.\n\nThanks,\n\nLuis\n\nRichard Huxton escribi�:\n\n> Luis Cornide Arce wrote:\n>\n>> Hi everyone,\n>>\n>> I have some problems with a quite long query and the plan postgreSQL \n>> is choosing. The query joins 12 tables and in the WHERE clause I use \n>> a IN expression with a lot of identifiers (up to 2000). The problem \n>> is that the planner is proposing a seq_scan on two tables 2M rows \n>> each (internalexpressionprofile and expressionprofile)\n>>\n>> I have just try this query (after doing a vacuum analyze), in the \n>> 'IN' clause there are 1552 identifiers, and the query should return \n>> 14K rows.\n>> I'm using a PostgreSQL 8.0.2 on a SuSE 8.1 with 1GB of RAM.\n>\n>\n>> WHERE epg.expprogeneid IN (80174,84567,...) AND \n>> epg.expprogeneid=epro.expprogeneid\n>\n>\n> -> Hash Join\n> (cost=15413.58..78079.33 rows=24339 width=134)\n> (actual time=1489.347..5721.306 rows=41904 loops=1)\n> Hash Cond: (\"outer\".expprogeneid = \"inner\".expprogeneid)\n> -> Seq Scan on expressionprofile epro\n> (cost=0.00..48263.24 rows=2831824 width=8)\n> (actual time=0.039..3097.656 rows=2839676 loops=1)\n>\n> -> Index Scan using\n> expprogene_pk, expprogene_pk, [......] on expprogene epg\n> (cost=0.00..10698.83 rows=1546 width=8) (actual time=0.027..15.907\n> rows=1552 loops=1)\n> Index Cond: ((expprogeneid = 80174) OR (expprogeneid = 84567)\n> OR (expprogeneid = 83608) OR [OR ....])\n>\n> OK - it looks like the \"IN\" clause is using your index. The fact that \n> it's using a Seq-scan on \"expressionprofile epro\" looks odd though, \n> especially since it expects 24339 matches (out of 2.8 million rows - \n> that should favour an index).\n>\n> Of course, I've not considered the context of the rest of the query, \n> but I'd expect the index to be used.\n>\n> Do you have any unusual config settings?\n\n", "msg_date": "Thu, 11 Aug 2005 15:59:31 +0200", "msg_from": "Luis Cornide Arce <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is not using the index" } ]
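[Editor's note] Since the thread stops at the config listing, here is a sketch of the follow-up experiments usually suggested in cases like this. The settings are tried per session, so nothing has to be reloaded, and the two-table count(*) query is only a cut-down stand-in for Luis's real twelve-table join (the id values are the ones quoted in the thread; wanted_genes and the specific setting values are illustrative). The temp-table variant is one way to get rid of the 1,500-element IN list.

-- planner experiments, effective for the current session only
SET random_page_cost = 2;           -- default is 4; lower values favour index scans
SET effective_cache_size = 65536;   -- in 8K pages (~512MB); size it to the real OS cache
EXPLAIN ANALYZE
SELECT count(*)
  FROM expprogene epg
  JOIN expressionprofile epro ON epro.expprogeneid = epg.expprogeneid
 WHERE epg.expprogeneid IN (80174, 84567, 83608);

-- alternative to the long IN list: join against an analyzed temp table
CREATE TEMP TABLE wanted_genes (expprogeneid integer PRIMARY KEY);
INSERT INTO wanted_genes VALUES (80174);
INSERT INTO wanted_genes VALUES (84567);
INSERT INTO wanted_genes VALUES (83608);
ANALYZE wanted_genes;

EXPLAIN ANALYZE
SELECT count(*)
  FROM wanted_genes w
  JOIN expprogene epg ON epg.expprogeneid = w.expprogeneid
  JOIN expressionprofile epro ON epro.expprogeneid = epg.expprogeneid;

Comparing the estimated row counts in the resulting plans against the actual ones is what tells you whether the statistics or the cost settings are the thing to chase next.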
[ { "msg_contents": "Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\nCPUs running Solaris 10. The DB cluster is on an external fibre-attached\nSun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n\nThe system is for the sole use of a couple of data warehouse developers,\nhence we are keen to use 'aggressive' tuning options to maximise\nperformance.\n\nSo far we have made the following changes and measured the impact on our\ntest suite:\n\n1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\nin some cases.\n\n2) Increase work_mem from 1,024 to 524,288.\n\n3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\nsetting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n\nQuestion - can Postgres only use 2GB RAM, given that shared_buffers can\nonly be set as high as 262,143 (8K pages)?\n\nSo far so good...\n\n4) Move /pg_xlog to an internal disk within the V250. This has had a\nsevere *negative* impact on performance. Copy job has gone from 2 mins to\n12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\njobs.\n\nI'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\na single spindle disk?\n\nIn cases such as this, where an external storage array with a hardware\nRAID controller is used, the normal advice to separate the data from the\npg_xlog seems to come unstuck, or are we missing something?\n\nCheers,\n\nPaul Johnson.\n", "msg_date": "Thu, 11 Aug 2005 13:23:21 +0100 (BST)", "msg_from": "\"Paul Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "PG8 Tuning" }, { "msg_contents": "Paul Johnson wrote:\n> Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\n> CPUs running Solaris 10. The DB cluster is on an external fibre-attached\n> Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n> \n> The system is for the sole use of a couple of data warehouse developers,\n> hence we are keen to use 'aggressive' tuning options to maximise\n> performance.\n> \n> So far we have made the following changes and measured the impact on our\n> test suite:\n> \n> 1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\n> in some cases.\n\nOK\n\n> 2) Increase work_mem from 1,024 to 524,288.\n\nDon't forget you can use multiples of this in a single query. Might want \nto reign it back a bit. I *think* you can set it per-query if you want \nanyway.\n\n> 3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\n> setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n> \n> Question - can Postgres only use 2GB RAM, given that shared_buffers can\n> only be set as high as 262,143 (8K pages)?\n\nWell, normally you'd want to keep a fair bit for the O.S. to cache data. \nOne quarter of your RAM seems very high. Did you try 5000,10000,50000 \ntoo or go straight to the top end?\n\n> So far so good...\n> \n> 4) Move /pg_xlog to an internal disk within the V250. This has had a\n> severe *negative* impact on performance. Copy job has gone from 2 mins to\n> 12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\n> jobs.\n> \n> I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\n> a single spindle disk?\n\nThe key limitation will be one commit per rotation of the disk. 
Multiple \nspindles, or better still with a battery-backed write-cache will give \nyou peak transactions.\n\n> In cases such as this, where an external storage array with a hardware\n> RAID controller is used, the normal advice to separate the data from the\n> pg_xlog seems to come unstuck, or are we missing something?\n\nWell, I think the advice then is actually \"get 2 external arrays...\"\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 11 Aug 2005 13:55:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:\n>I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\n>a single spindle disk?\n>\n>In cases such as this, where an external storage array with a hardware\n>RAID controller is used, the normal advice to separate the data from the\n>pg_xlog seems to come unstuck\n\nYes. That's the downside to dogma. If you're writing pg_xlog to a\nbattery-backed ram buffer you'll see faster commits than you will with a\nwrite to a disk, even if you've got a dedicated spindle, unless you've\ngot constant write activity. (Because once the buffer fills you're\nlimited to disk speed as you wait for buffer flushes.) If you've got a\nlot of system RAM, a battery-backed disk buffer, an OS/filesystem than\neffectively delays writes, and bursty transactional writes it's quite\npossible you'll get better performance putting everything on one array\nrather than breaking it up to follow the \"rules\". You might get a\nperformance boost by putting the transaction log on a seperate partition\nor lun on the external array, depending on how the fs implements syncs\nor whether you can optimize the filsystem choice for each partition. The\ncorrect approach is to run comparative benchmarks of each configuration.\n:-)\n\nMike Stone\n", "msg_date": "Thu, 11 Aug 2005 10:07:04 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "On Fri, 2005-08-12 at 08:47 +0000, Steve Poe wrote:\n> Paul,\n> \n> Before I say anything else, one online document which may be of\n> assistance to you is:\n> http://www.powerpostgresql.com/PerfList/\n> \n> Some thoughts I have:\n> \n> 3) You're shared RAM setting seems overkill to me. Part of the challenge\n> is you're going from 1000 to 262K with no assessment in between. Each\n> situation can be different, but try in the range of 10 - 50K.\n> \n> 4) pg_xlog: If you're pg_xlog is on a spindle is *only* for pg_xlog\n> you're better off.\n\nLike Mr. Stone said earlier, this is pure dogma. In my experience,\nxlogs on the same volume with data is much faster if both are on\nbattery-backed write-back RAID controller memory. Moving from this\nsituation to xlogs on a single normal disk is going to be much slower in\nmost cases.\n\n-jwb\n", "msg_date": "Thu, 11 Aug 2005 09:58:44 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] 
Re: PG8 Tuning" }, { "msg_contents": "(Musing, trying to think of a general-purpose performance-tuning rule\nthat applies here):\n\nActually, it seems to me that with the addition of the WAL in PostgreSQL\nand the subsequent decreased need to fsync the data files themselves\n(only during checkpoints?), that the only time a battery-backed write\ncache would make a really large performance difference would be on the\ndrive(s) hosting the WAL.\n\nSo although it is in general good to have a dedicated spindle for the\nWAL, for many workloads it is in fact significantly better to have the\nWAL written to a battery-backed write cache. The exception would be for\napplications with fewer, larger transactions, in which case you could\nactually use the dedicated spindle.\n\nHmmm, on second thought, now I think I understand the rationale behind\nhaving a non-zero commit delay setting-- the problem with putting\npg_xlog on a single disk without a write cache is that frequent fsync()\ncalls might cause it to spend most of its time seeking instead of\nwriting (as seems to be happening to Paul here). Then again, the OS IO\nscheduler should take care of this for you, making this a non-issue.\nPerhaps Solaris 10 just has really poor IO scheduling performance with\nthis particular hardware and workload?\n\nAh well. Thought myself in circles and have no real conclusions to show\nfor it. Posting anyway, maybe this will give somebody some ideas to\nwork with.\n\n-- Mark Lewis\n\nOn Fri, 2005-08-12 at 08:47 +0000, Steve Poe wrote:\n> Paul,\n> \n> Before I say anything else, one online document which may be of\n> assistance to you is:\n> http://www.powerpostgresql.com/PerfList/\n> \n> Some thoughts I have:\n> \n> 3) You're shared RAM setting seems overkill to me. Part of the challenge\n> is you're going from 1000 to 262K with no assessment in between. Each\n> situation can be different, but try in the range of 10 - 50K.\n> \n> 4) pg_xlog: If you're pg_xlog is on a spindle is *only* for pg_xlog\n> you're better off. If it is sharing with any other OS/DB resource, the\n> performance will be impacted.\n> \n> >From what I have learned from others on this list, RAID5 is not the best\n> choice for the database. RAID10 would be a better solution (using 8 of\n> your disks) then take the remaining disk and do mirror with your pg_xlog\n> if possible.\n> \n> Best of luck,\n> \n> Steve Poe\n> \n> On Thu, 2005-08-11 at 13:23 +0100, Paul Johnson wrote:\n> > Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\n> > CPUs running Solaris 10. The DB cluster is on an external fibre-attached\n> > Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n> > \n> > The system is for the sole use of a couple of data warehouse developers,\n> > hence we are keen to use 'aggressive' tuning options to maximise\n> > performance.\n> > \n> > So far we have made the following changes and measured the impact on our\n> > test suite:\n> > \n> > 1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\n> > in some cases.\n> > \n> > 2) Increase work_mem from 1,024 to 524,288.\n> > \n> > 3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\n> > setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n> > \n> > Question - can Postgres only use 2GB RAM, given that shared_buffers can\n> > only be set as high as 262,143 (8K pages)?\n> > \n> > So far so good...\n> > \n> > 4) Move /pg_xlog to an internal disk within the V250. 
This has had a\n> > severe *negative* impact on performance. Copy job has gone from 2 mins to\n> > 12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\n> > jobs.\n> > \n> > I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\n> > a single spindle disk?\n> > \n> > In cases such as this, where an external storage array with a hardware\n> > RAID controller is used, the normal advice to separate the data from the\n> > pg_xlog seems to come unstuck, or are we missing something?\n> > \n> > Cheers,\n> > \n> > Paul Johnson.\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Thu, 11 Aug 2005 10:18:44 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "I think the T-3 RAID at least breaks some of these rules -- I've got 2 \nT-3's, 1 configured as RAID-10 and the other as RAID5, and they both \nseem to perform about the same. I use RAID5 with a hot spare, so it's \nusing 8 spindles.\n\nI got a lot of performance improvement out of mount the fs noatime and \nturning journaling off. Of course it takes a *long* time to recover \nfrom a crash.\n\nSteve Poe wrote:\n> Paul,\n> \n> Before I say anything else, one online document which may be of\n> assistance to you is:\n> http://www.powerpostgresql.com/PerfList/\n> \n> Some thoughts I have:\n> \n> 3) You're shared RAM setting seems overkill to me. Part of the challenge\n> is you're going from 1000 to 262K with no assessment in between. Each\n> situation can be different, but try in the range of 10 - 50K.\n> \n> 4) pg_xlog: If you're pg_xlog is on a spindle is *only* for pg_xlog\n> you're better off. If it is sharing with any other OS/DB resource, the\n> performance will be impacted.\n> \n>>From what I have learned from others on this list, RAID5 is not the best\n> choice for the database. RAID10 would be a better solution (using 8 of\n> your disks) then take the remaining disk and do mirror with your pg_xlog\n> if possible.\n> \n> Best of luck,\n> \n> Steve Poe\n> \n> On Thu, 2005-08-11 at 13:23 +0100, Paul Johnson wrote:\n> \n>>Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\n>>CPUs running Solaris 10. The DB cluster is on an external fibre-attached\n>>Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n>>\n>>The system is for the sole use of a couple of data warehouse developers,\n>>hence we are keen to use 'aggressive' tuning options to maximise\n>>performance.\n>>\n>>So far we have made the following changes and measured the impact on our\n>>test suite:\n>>\n>>1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\n>>in some cases.\n>>\n>>2) Increase work_mem from 1,024 to 524,288.\n>>\n>>3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\n>>setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n>>\n>>Question - can Postgres only use 2GB RAM, given that shared_buffers can\n>>only be set as high as 262,143 (8K pages)?\n>>\n>>So far so good...\n>>\n>>4) Move /pg_xlog to an internal disk within the V250. This has had a\n>>severe *negative* impact on performance. 
Copy job has gone from 2 mins to\n>>12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\n>>jobs.\n>>\n>>I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\n>>a single spindle disk?\n>>\n>>In cases such as this, where an external storage array with a hardware\n>>RAID controller is used, the normal advice to separate the data from the\n>>pg_xlog seems to come unstuck, or are we missing something?\n>>\n>>Cheers,\n>>\n>>Paul Johnson.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n> \n", "msg_date": "Thu, 11 Aug 2005 11:00:21 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:\n>Actually, it seems to me that with the addition of the WAL in PostgreSQL\n>and the subsequent decreased need to fsync the data files themselves\n>(only during checkpoints?), that the only time a battery-backed write\n>cache would make a really large performance difference would be on the\n>drive(s) hosting the WAL.\n\nWrite cache on a raid array helps in the general case too, because\nit allows the controller to aggregate & reorder write requests. The OS\nprobably tries to do this to some degree, but it can't do as well as the\nraid controller because it doesn't know the physical disk layout. \n\n>Hmmm, on second thought, now I think I understand the rationale behind\n>having a non-zero commit delay setting-- the problem with putting\n>pg_xlog on a single disk without a write cache is that frequent fsync()\n>calls might cause it to spend most of its time seeking instead of\n>writing (as seems to be happening to Paul here). Then again, the OS IO\n>scheduler should take care of this for you, making this a non-issue.\n\nThe OS can't do anything much in terms of IO scheduling for synchronous\nwrites. Either it handles them in suboptimal order or you get hideous\nlatency while requests are queued for reordering. Neither option is\nreally great. \n\nMike Stone\n", "msg_date": "Thu, 11 Aug 2005 19:54:44 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "Paul,\n\nBefore I say anything else, one online document which may be of\nassistance to you is:\nhttp://www.powerpostgresql.com/PerfList/\n\nSome thoughts I have:\n\n3) You're shared RAM setting seems overkill to me. Part of the challenge\nis you're going from 1000 to 262K with no assessment in between. Each\nsituation can be different, but try in the range of 10 - 50K.\n\n4) pg_xlog: If you're pg_xlog is on a spindle is *only* for pg_xlog\nyou're better off. If it is sharing with any other OS/DB resource, the\nperformance will be impacted.\n\n>From what I have learned from others on this list, RAID5 is not the best\nchoice for the database. RAID10 would be a better solution (using 8 of\nyour disks) then take the remaining disk and do mirror with your pg_xlog\nif possible.\n\nBest of luck,\n\nSteve Poe\n\nOn Thu, 2005-08-11 at 13:23 +0100, Paul Johnson wrote:\n> Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\n> CPUs running Solaris 10. 
The DB cluster is on an external fibre-attached\n> Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n> \n> The system is for the sole use of a couple of data warehouse developers,\n> hence we are keen to use 'aggressive' tuning options to maximise\n> performance.\n> \n> So far we have made the following changes and measured the impact on our\n> test suite:\n> \n> 1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\n> in some cases.\n> \n> 2) Increase work_mem from 1,024 to 524,288.\n> \n> 3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\n> setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n> \n> Question - can Postgres only use 2GB RAM, given that shared_buffers can\n> only be set as high as 262,143 (8K pages)?\n> \n> So far so good...\n> \n> 4) Move /pg_xlog to an internal disk within the V250. This has had a\n> severe *negative* impact on performance. Copy job has gone from 2 mins to\n> 12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\n> jobs.\n> \n> I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\n> a single spindle disk?\n> \n> In cases such as this, where an external storage array with a hardware\n> RAID controller is used, the normal advice to separate the data from the\n> pg_xlog seems to come unstuck, or are we missing something?\n> \n> Cheers,\n> \n> Paul Johnson.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Fri, 12 Aug 2005 08:47:08 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "[SPAM?] Re: PG8 Tuning" }, { "msg_contents": "\nOn Aug 11, 2005, at 12:58 PM, Jeffrey W. Baker wrote:\n\n> Like Mr. Stone said earlier, this is pure dogma. In my experience,\n> xlogs on the same volume with data is much faster if both are on\n> battery-backed write-back RAID controller memory. Moving from this\n> situation to xlogs on a single normal disk is going to be much \n> slower in\n> most cases.\n>\n\nThis does also point one important point about performance. Which is \na touch unfortunate (and expensive to test): Your milage may vary on \nany of these improvements. Some people have 0 problems and \nincredible performance with say, 1000 shared_bufs and the WAL on the \nsame disk.. Others need 10k shared bufs and wal split over a 900 \nspindle raid with data spread across 18 SAN's...\nUnfortunately there is no one true way :(\n\nThe best bet (which is great if you can): Try out various settings.. \nif you still run into problems look into some more hardware.. see if \nyou can borrow any or fabricate a \"poor man\"'s equivalent for testing.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 12 Aug 2005 08:18:27 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "Jeff,\n\n> > 4) pg_xlog: If you're pg_xlog is on a spindle is *only* for pg_xlog\n> > you're better off.\n>\n> Like Mr. Stone said earlier, this is pure dogma.  In my experience,\n> xlogs on the same volume with data is much faster if both are on\n> battery-backed write-back RAID controller memory.  
Moving from this\n> situation to xlogs on a single normal disk is going to be much slower in\n> most cases.\n\nThe advice on separate drives for xlog (as is all advice on that web page) is \nbased on numerous, repeatable tests at OSDL. \n\nHowever, you are absolutely correct in that it's *relative* advice, not \nabsolute advice. If, for example, you're using a $100,000 EMC SAN as your \nstorage you'll probably be better off giving it everything and letting its \ncontroller and cache handle disk allocation etc. On the other hand, if \nyou're dealing with the 5 drives in a single Dell 6650, I've yet to encounter \na case where a separate xlog disk did not benefit an OLTP application.\n\nFor Solaris, the advantage of using a separate disk or partition is that the \nmount options you want for the xlog (including forcedirectio) are \nconsiderably different from what you'd use with the main database.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 16 Aug 2005 09:12:31 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:\n\n> However, you are absolutely correct in that it's *relative* advice, not \n> absolute advice. If, for example, you're using a $100,000 EMC SAN as your \n> storage you'll probably be better off giving it everything and letting its \n> controller and cache handle disk allocation etc. On the other hand, if \n> you're dealing with the 5 drives in a single Dell 6650, I've yet to encounter \n> a case where a separate xlog disk did not benefit an OLTP application.\n\nI've been asked this a couple of times and I don't know the answer: what\nhappens if you give XLog a single drive (unmirrored single spindle), and\nthat drive dies? So the question really is, should you be giving two\ndisks to XLog?\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n", "msg_date": "Tue, 16 Aug 2005 12:25:31 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "\n> I've been asked this a couple of times and I don't know the answer: what\n> happens if you give XLog a single drive (unmirrored single spindle), and\n> that drive dies? So the question really is, should you be giving two\n> disks to XLog?\n\nIf that drive dies your restoring from backup. You would need to run at \nleast RAID 1, preferrably RAID 10.\n\nSincerely,\n\nJoshua D. Drkae\n\n\n> \n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Tue, 16 Aug 2005 09:31:47 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "Alvaro Herrera wrote:\n> On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:\n>\n>\n>>However, you are absolutely correct in that it's *relative* advice, not\n>>absolute advice. If, for example, you're using a $100,000 EMC SAN as your\n>>storage you'll probably be better off giving it everything and letting its\n>>controller and cache handle disk allocation etc. 
On the other hand, if\n>>you're dealing with the 5 drives in a single Dell 6650, I've yet to encounter\n>>a case where a separate xlog disk did not benefit an OLTP application.\n>\n>\n> I've been asked this a couple of times and I don't know the answer: what\n> happens if you give XLog a single drive (unmirrored single spindle), and\n> that drive dies? So the question really is, should you be giving two\n> disks to XLog?\n>\n\nI can propose a simple test. Create a test database. Run postgres,\ninsert a bunch of stuff. Stop postgres. Delete everything in the pg_xlog\ndirectory. Start postgres again, what does it do?\n\nI suppose to simulate more of a failure mode, you could kill -9 the\npostmaster (and all children processes) perhaps during an insert, and\nthen delete pg_xlog.\n\nBut I would like to hear from the postgres folks what they *expect*\nwould happen if you ever lost pg_xlog.\n\nWhat about something like keeping pg_xlog on a ramdisk, and then\nrsyncing it to a hard-disk every 5 minutes. If you die in the middle,\ndoes it just restore back to the 5-minutes ago point, or does it get\nmore thoroughly messed up?\nFor some people, a 5-minute old restore would be okay, as long as you\nstill have transaction safety, so that you can figure out what needs to\nbe restored.\n\nJohn\n=:->", "msg_date": "Tue, 16 Aug 2005 11:33:43 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:\n>However, you are absolutely correct in that it's *relative* advice, not \n>absolute advice. If, for example, you're using a $100,000 EMC SAN as your \n>storage you'll probably be better off giving it everything and letting its \n>controller and cache handle disk allocation etc. \n\nWell, you don't have to spend *quite* that much to get a decent storage\narray. :) \n\n>On the other hand, if you're dealing with the 5 drives in a single Dell\n>6650, I've yet to encounter a case where a separate xlog disk did not\n>benefit an OLTP application.\n\nIIRC, that's an older raid controller that tops out at 128MB write\ncache, and 5 spindles ain't a lot--so it makes sense that it would\nbenefit from a seperate spindle for xlog. Also note that I said the\nwrite cache advice goes out the window if you have a workload that\ninvolves constant writing (or if your xlog writes come in faster than\nyour write cache can drain) because at that point you essentially drop\nback to raw disk speed; I assume the OLTP apps you mention are fairly\nwrite-intensive. OTOH, in a reasonably safe configuration I suppose\nyou'd end up with a 3 disk raid5 / 2 disk raid1 or 2 raid 1 pairs on\nthat dell 6650; is that how you test? Once you're down to that small a\ndata set I'd expect the system's ram cache to be a much larger\npercentage of the working set, which would tend to make the xlog just\nabout the *only* latency-critical i/o. That's a different creature from\na data mining app that might really benefit from having additional\nspindles to accelerate read performance from indices much larger than\nRAM. At any rate, this just underscores the need for testing a\nparticular workload on particular hardware. Things like the disk speed,\nraid configuration, write cache size, transaction size, data set size,\nworking set size, concurrent transactions, read vs write emphasis, etc.,\nare each going to have a fairly large impact on performance. 
\n\n>For Solaris, the advantage of using a separate disk or partition is that the \n>mount options you want for the xlog (including forcedirectio) are \n>considerably different from what you'd use with the main database.\n\nYeah, having a seperate partition is often good even if you do have\neverything on the same disks.\n\nMike Stone\n", "msg_date": "Tue, 16 Aug 2005 13:30:47 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM?] Re: PG8 Tuning" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> Alvaro Herrera wrote:\n>> I've been asked this a couple of times and I don't know the answer: what\n>> happens if you give XLog a single drive (unmirrored single spindle), and\n>> that drive dies? So the question really is, should you be giving two\n>> disks to XLog?\n\n> I can propose a simple test. Create a test database. Run postgres,\n> insert a bunch of stuff. Stop postgres. Delete everything in the pg_xlog\n> directory. Start postgres again, what does it do?\n\nThat test would really be completely unrelated to the problem.\n\nIf you are able to shut down the database cleanly, then you do not need\npg_xlog anymore --- everything is on disk in the data area. You might\nhave to use pg_resetxlog to get going again, but you won't lose anything\nby doing so.\n\nThe question of importance is: if the xlog drive dies while the database\nis running, are you going to be able to get the postmaster to shut down\ncleanly? My suspicion is \"no\" --- if the kernel is reporting write\nfailures on WAL, that's going to prevent writes to the data drives (good\nol' WAL-before-data rule). You could imagine failure modes where the\ndrive is toast but isn't actually reporting any errors ... but one hopes\nthat's not a common scenario.\n\nIn a scenario like this, it might be interesting to have a shutdown mode\nthat deliberately ignores writing to WAL and just does its best to get\nall the dirty pages down onto the data drives.\n\nIn the meantime ... use a mirrored drive for WAL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Aug 2005 18:49:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning " }, { "msg_contents": "Tom Lane wrote:\n> John A Meinel <[email protected]> writes:\n>\n>>Alvaro Herrera wrote:\n>>\n>>>I've been asked this a couple of times and I don't know the answer: what\n>>>happens if you give XLog a single drive (unmirrored single spindle), and\n>>>that drive dies? So the question really is, should you be giving two\n>>>disks to XLog?\n>\n>\n>>I can propose a simple test. Create a test database. Run postgres,\n>>insert a bunch of stuff. Stop postgres. Delete everything in the pg_xlog\n>>directory. Start postgres again, what does it do?\n>\n>\n> That test would really be completely unrelated to the problem.\n>\n> If you are able to shut down the database cleanly, then you do not need\n> pg_xlog anymore --- everything is on disk in the data area. You might\n> have to use pg_resetxlog to get going again, but you won't lose anything\n> by doing so.\n\nSo pg_xlog is really only needed for a dirty shutdown. 
So what about the\nidea of having pg_xlog on a ramdisk that is syncronized periodically to\na real disk.\n\nI'm guessing you would get corruption of the database, or at least you\ndon't know what is clean and what is dirty, since there would be no WAL\nentry for some of the things that completed, but also no WAL entry for\nthings that were not completed.\n\nSo what is actually written to the WAL? Is it something like:\n\"I am writing these pages, and when page X has a certain value, I am\nfinished\"\n\nI'm just curious, because I don't believe you write to the WAL when you\ncomplete the writing the data, you only make a note about what you are\ngoing to do before you do it. So there needs to be a way to detect if\nyou actually finished (which would be in the actual data).\n\nJohn\n=:->\n\n>\n> The question of importance is: if the xlog drive dies while the database\n> is running, are you going to be able to get the postmaster to shut down\n> cleanly? My suspicion is \"no\" --- if the kernel is reporting write\n> failures on WAL, that's going to prevent writes to the data drives (good\n> ol' WAL-before-data rule). You could imagine failure modes where the\n> drive is toast but isn't actually reporting any errors ... but one hopes\n> that's not a common scenario.\n>\n> In a scenario like this, it might be interesting to have a shutdown mode\n> that deliberately ignores writing to WAL and just does its best to get\n> all the dirty pages down onto the data drives.\n>\n> In the meantime ... use a mirrored drive for WAL.\n>\n> \t\t\tregards, tom lane\n>", "msg_date": "Tue, 16 Aug 2005 18:14:41 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> So pg_xlog is really only needed for a dirty shutdown. So what about the\n> idea of having pg_xlog on a ramdisk that is syncronized periodically to\n> a real disk.\n\nWell, if \"periodically\" means \"at every transaction commit\", that's\npretty much what we do now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Aug 2005 22:27:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning " }, { "msg_contents": "Michael,\n\n> Well, you don't have to spend *quite* that much to get a decent storage\n> array. :)\n\nYes, I'm just pointing out that it's only the extreme cases which are \nclear-cut. Middle cases are a lot harder to define. For example, we've \nfound that on DBT2 running of a 14-drive JBOD, seperating off WAL boosts \nperformance about 8% to 14%. On DBT3 (DSS) seperate WAL (and seperate \ntablespaces) helps considerably during data load,but not otherwise. So it \nall depends.\n\n> That's a different creature from\n> a data mining app that might really benefit from having additional\n> spindles to accelerate read performance from indices much larger than\n> RAM. \n\nYes, although the data mining app benefits from the special xlog disk \nduring ETL. So it's a tradeoff. \n\n> At any rate, this just underscores the need for testing a \n> particular workload on particular hardware\n\nYes, absolutely.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Aug 2005 13:39:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG8 Tuning" } ]
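To try the pg_xlog placement experiments discussed in the thread above, the usual 8.0-era procedure is to move the directory and symlink it back, with the cluster shut down. The mount point /wal and data directory /pgdata below are hypothetical stand-ins; the Solaris forcedirectio option mentioned in the thread should cover only the WAL filesystem, not the data files.

# stop the cluster before touching pg_xlog
pg_ctl -D /pgdata stop

# move WAL to its own (ideally mirrored) spindle and symlink it back
mv /pgdata/pg_xlog /wal/pg_xlog
ln -s /wal/pg_xlog /pgdata/pg_xlog

pg_ctl -D /pgdata start

# Solaris: direct I/O for the WAL filesystem only
mount -o remount,forcedirectio /wal

Using a mirrored pair for /wal also addresses the failure case raised above: losing an unmirrored WAL disk while the server is running generally means falling back to pg_resetxlog and possibly losing recent commits that had not yet reached the data files.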
[ { "msg_contents": "Hi,\n\nI have a perfomance issue :\n\nI run PG (8.0.3) and SQLServer2000 on a Windows2000 Server (P4 1,5Ghz 512Mo)\nI have a table (3200000 rows) and I run this single query :\n\nselect cod from mytable group by cod\nI have an index on cod (char(4) - 88 different values)\n\nPG = ~ 20 sec.\nSQLServer = < 8 sec\n\n\nthe explain is :\n\nHashAggregate (cost=64410.09..64410.09 rows=55 width=8)\n -> Seq Scan on mytable (cost=0.00..56325.27 rows=3233927 width=8)\n\n\nif I switch to \"enable_hashagg = false\" (just for a try...)\nthe planner will choose my index :\n\nGroup (cost=0.00..76514.01 rows=55 width=8)\n -> Index Scan using myindex on mytable (cost=0.00..68429.20 rows=3233927\nwidth=8)\n\nbut performance will be comparable to previous test.\n\nSo with or without using Index I have the same result.\n\n\nThanks for help.\n \nStéphane COEZ\n\n\n\n", "msg_date": "Thu, 11 Aug 2005 15:19:06 +0200", "msg_from": "=?iso-8859-1?Q?St=E9phane_COEZ?= <[email protected]>", "msg_from_op": true, "msg_subject": "Performance pb vs SQLServer." }, { "msg_contents": "Stéphane COEZ wrote:\n\n>Hi,\n>\n>I have a perfomance issue :\n>\n>I run PG (8.0.3) and SQLServer2000 on a Windows2000 Server (P4 1,5Ghz 512Mo)\n>I have a table (3200000 rows) and I run this single query :\n>\n>select cod from mytable group by cod\n>I have an index on cod (char(4) - 88 different values)\n>\n>PG = ~ 20 sec.\n>SQLServer = < 8 sec\n>\n>\n>the explain is :\n>\n>HashAggregate (cost=64410.09..64410.09 rows=55 width=8)\n> -> Seq Scan on mytable (cost=0.00..56325.27 rows=3233927 width=8)\n>\n>\n>if I switch to \"enable_hashagg = false\" (just for a try...)\n>the planner will choose my index :\n>\n>Group (cost=0.00..76514.01 rows=55 width=8)\n> -> Index Scan using myindex on mytable (cost=0.00..68429.20 rows=3233927\n>width=8)\n>\n>but performance will be comparable to previous test.\n>\n>So with or without using Index I have the same result.\n> \n>\n\nMy guess is that this is part of a larger query. There isn't really much\nyou can do. If you want all 3.2M rows, then you have to wait for them to\nbe pulled in.\n\nWhat you generally can do for performance, is to restructure things, so\nthat you *don't* have to touch all 3.2M rows.\nIf you are just trying to determine what the unique entries are for cod,\nyou probably are better off doing some normalization, and keeping a\nseparate table of cod values.\n\nI'm guessing the reason your query is faster with SQLServer is because\nof how postgres handles MVCC. Basically, it still has to fetch the main\npage to determine if a row exists. While SQL server doesn't do MVCC, so\nit can just look things up in the index.\n\nYou might also try a different query, something like:\n\nSELECT DISTINCT cod FROM mytable ORDER BY cod GROUP BY cod;\n(You may or may not want order by, or group by, try the different\ncombinations.)\nIt might be possible to have the planner realize that all you want is\nunique rows, just doing a group by doesn't give you that.\n\nJohn\n=:->\n\n>\n>Thanks for help.\n> \n>Stéphane COEZ\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n> \n>", "msg_date": "Sun, 14 Aug 2005 19:27:38 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." 
}, { "msg_contents": "On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:\n> My guess is that this is part of a larger query. There isn't really much\n> you can do. If you want all 3.2M rows, then you have to wait for them to\n> be pulled in.\n\nTo me, it looks like he'll get 88 rows, not 3.2M. Surely we must be able to\ndo something better than a full sequential scan in this case?\n\ntest=# create table foo ( bar char(4) );\nCREATE TABLE\ntest=# insert into foo values ('0000');\nINSERT 24773320 1\ntest=# insert into foo values ('0000');\nINSERT 24773321 1\ntest=# insert into foo values ('1111');\nINSERT 24773322 1\ntest=# select * from foo group by bar;\n bar \n------\n 1111\n 0000\n(2 rows)\n\nI considered doing some odd magic with generate_series() and subqueries with\nLIMIT 1, but it was a bit too weird in the end :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 15 Aug 2005 03:01:59 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:\n> If you are just trying to determine what the unique entries are for cod,\n> you probably are better off doing some normalization, and keeping a\n> separate table of cod values.\n\nPah, I missed this part of the e-mail -- you can ignore most of my (other)\nreply, then :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 15 Aug 2005 03:04:15 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "Steinar H. Gunderson wrote:\n\n>On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:\n> \n>\n>>My guess is that this is part of a larger query. There isn't really much\n>>you can do. If you want all 3.2M rows, then you have to wait for them to\n>>be pulled in.\n>> \n>>\n>\n>To me, it looks like he'll get 88 rows, not 3.2M. Surely we must be able to\n>do something better than a full sequential scan in this case?\n>\n>test=# create table foo ( bar char(4) );\n>CREATE TABLE\n>test=# insert into foo values ('0000');\n>INSERT 24773320 1\n>test=# insert into foo values ('0000');\n>INSERT 24773321 1\n>test=# insert into foo values ('1111');\n>INSERT 24773322 1\n>test=# select * from foo group by bar;\n> bar \n>------\n> 1111\n> 0000\n>(2 rows)\n>\n>I considered doing some odd magic with generate_series() and subqueries with\n>LIMIT 1, but it was a bit too weird in the end :-)\n>\n>/* Steinar */\n> \n>\nI think a plain \"GROUP BY\" is not smart enough to detect it doesn't need\nall rows (since it is generally used because you want to get aggregate\nvalues of other columns).\nI think you would want something like SELECT DISTINCT, possibly with an\nORDER BY rather than a GROUP BY (which was my final suggestion).\n\nJohn\n=:->", "msg_date": "Sun, 14 Aug 2005 20:05:58 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> To me, it looks like he'll get 88 rows, not 3.2M. Surely we must be able to\n> do something better than a full sequential scan in this case?\n\nNot really. 
There's been some speculation about implementing index\n\"skip search\" --- once you've verified there's at least one visible\nrow of a given index value, tell the index to skip to the next different\nvalue instead of handing back any of the remaining entries of the\ncurrent value. But it'd be a lot of work and AFAICS not useful for\nvery many kinds of queries besides this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Aug 2005 21:18:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer. " }, { "msg_contents": "On Sun, Aug 14, 2005 at 09:18:45PM -0400, Tom Lane wrote:\n> Not really. There's been some speculation about implementing index\n> \"skip search\" --- once you've verified there's at least one visible\n> row of a given index value, tell the index to skip to the next different\n> value instead of handing back any of the remaining entries of the\n> current value. But it'd be a lot of work and AFAICS not useful for\n> very many kinds of queries besides this.\n\nThis is probably a completely wrong way of handling it all, but could it be\ndone in a PL/PgSQL query like this? (Pseudo-code, sort of; I'm not very well\nversed in the actual syntax, but I'd guess you get the idea.)\n\nx = ( SELECT foo FROM table ORDER BY foo LIMIT 1 );\nWHILE x IS NOT NULL\n RETURN NEXT x;\n x = ( SELECT foo FROM table WHERE foo > x ORDER BY foo LIMIT 1 );\nEND;\n\n(Replace with max() and min() for 8.1, of course.)\n\n/* Steinar */\n- fond of horrible hacks :-)\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 15 Aug 2005 03:37:41 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "John Arbash Meinel wrote : \n> \n> You might also try a different query, something like:\n> \n> SELECT DISTINCT cod FROM mytable ORDER BY cod GROUP BY cod; \n> (You may or may not want order by, or group by, try the different\n> combinations.)\n> It might be possible to have the planner realize that all you \n> want is unique rows, just doing a group by doesn't give you that.\n> \n> John\n> =:->\n> \nThanks John, but using SELECT DISTINCT with or without Order nor Group by is\nworth...\n30 sec (with index) - stopped at 200 sec without index...\n\nSo Hash Aggregate is much better than index scan ...\n\n\n> >\n> >Thanks for help.\n> > \n> >Stéphane COEZ\n> >\n> >\n> >\n> >\n> >---------------------------(end of \n> >broadcast)---------------------------\n> >TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n> > \n> >\n> \n> \n> \n\n\n\n", "msg_date": "Mon, 15 Aug 2005 11:05:11 +0200", "msg_from": "=?iso-8859-1?Q?St=E9phane_COEZ?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance pb vs SQLServer." } ]
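The loop Steinar sketches can be made runnable as a set-returning function; below is a minimal version using the names from Stéphane's original post (mytable, cod, with the btree index on cod). It only pays off when the number of distinct values is tiny compared to the row count, since it does one index probe per distinct value instead of scanning 3.2M rows; on much later releases the same "loose scan" idea can be written with a recursive CTE instead of plpgsql.

CREATE OR REPLACE FUNCTION distinct_cod() RETURNS SETOF text AS $$
DECLARE
    x mytable.cod%TYPE;
BEGIN
    -- cheapest value first: a single probe of the index on cod
    SELECT cod INTO x FROM mytable ORDER BY cod LIMIT 1;
    WHILE x IS NOT NULL LOOP
        RETURN NEXT x;
        -- jump straight to the next distinct value
        SELECT cod INTO x FROM mytable WHERE cod > x ORDER BY cod LIMIT 1;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- 88 index probes instead of a 3.2M-row scan:
SELECT * FROM distinct_cod();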
[ { "msg_contents": "Hi Paul,\n\nI was passed your message... regarding DSS workload with Postgres on Solaris. (I am not in the alias).\n\nPerformance is relative to your workload. Can you actually send us what you are doing in your queries, updates etc?\n\nI have been running few tests myself and here are my rules of thumbs, your mileage can vary..\n\nhttp://blogs.sun.com/roller/page/jkshah?entry=tuning_postgresql_8_0_2\n\n* Increasing checkpoint certainly helps. (I went as far as actually going to increase LOGFILE size from 16MB to 256MB and recompiling it and then using lower number of checkpoints (appropriately).. (file rotations also decreases performance)\n\n* Moving pg_xlog to a different file system and mounting that file system with \"forcedirectio\" also helps a lot (This increases the througput by another 2x to 5x or more.) (This can be done either by adding forcedirectio in your /etc/vfstab mount options or for existing mounts as follows:\nmount -o remount,forcedirectio /filesystem\n(Note: Database files should not be using forcedirectio otherwise file system cache will not be used for it)\n\n* I actually reduced the PG Bufferpool to 1G or less since it seemed to decrease performance as I increased its bufferpool size (depending on your workload)\n\n* If you are using SPARC then following etc commands will help..\n\nset segmap_percent=60\nset ufs:freebehind=0\n\n\nThis will allocate 60% of RAM for file system buffer (database files) and also cache all files (since PostgreSQL files are 1G by default)\n\nThis will help your repeat queries significantly. \n\nOther things depends on what you queries you are running? If you send me few samples, I can send you appropriate DTrace scripts (Solaris 10 or higher) to run to figure out what's happening\n\nRegards,\nJignesh\n\n\n\n____________________________________________________\n\nJignesh K. Shah MTS Software Engineer \nSun Microsystems, Inc MDE-Horizontal Technologies \nEmail: [email protected] Phone: (781) 442 3052\nhttp://blogs.sun.com/jkshah\n____________________________________________________\n\n----- Original Message -----\n>From \tPaul Johnson <[email protected]>\nDate \tThu, 11 Aug 2005 13:23:21 +0100 (BST)\nTo \[email protected]\nSubject \t[PERFORM] PG8 Tuning\nHi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\nCPUs running Solaris 10. The DB cluster is on an external fibre-attached\nSun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n\nThe system is for the sole use of a couple of data warehouse developers,\nhence we are keen to use 'aggressive' tuning options to maximise\nperformance.\n\nSo far we have made the following changes and measured the impact on our\ntest suite:\n\n1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\nin some cases.\n\n2) Increase work_mem from 1,024 to 524,288.\n\n3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\nsetting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n\nQuestion - can Postgres only use 2GB RAM, given that shared_buffers can\nonly be set as high as 262,143 (8K pages)?\n\nSo far so good...\n\n4) Move /pg_xlog to an internal disk within the V250. This has had a\nsevere *negative* impact on performance. Copy job has gone from 2 mins to\n12 mins, simple SQL job gone from 1 min to 7 mins. 
Not even run long SQL\njobs.\n\nI'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\na single spindle disk?\n\nIn cases such as this, where an external storage array with a hardware\nRAID controller is used, the normal advice to separate the data from the\npg_xlog seems to come unstuck, or are we missing something?\n\nCheers,\n\nPaul Johnson.\n\n\n\nHi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC\nCPUs running Solaris 10. The DB cluster is on an external fibre-attached\nSun T3 array that has 9*36GB drives configured as a single RAID5 LUN.\n\nThe system is for the sole use of a couple of data warehouse developers,\nhence we are keen to use 'aggressive' tuning options to maximise\nperformance.\n\nSo far we have made the following changes and measured the impact on our\ntest suite:\n\n1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement\nin some cases.\n\n2) Increase work_mem from 1,024 to 524,288.\n\n3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required\nsetting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.\n\nQuestion - can Postgres only use 2GB RAM, given that shared_buffers can\nonly be set as high as 262,143 (8K pages)?\n\nSo far so good...\n\n4) Move /pg_xlog to an internal disk within the V250. This has had a\nsevere *negative* impact on performance. Copy job has gone from 2 mins to\n12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL\njobs.\n\nI'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to\na single spindle disk?\n\nIn cases such as this, where an external storage array with a hardware\nRAID controller is used, the normal advice to separate the data from the\npg_xlog seems to come unstuck, or are we missing something?\n\nCheers,\n\nPaul Johnson.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org", "msg_date": "Thu, 11 Aug 2005 10:30:13 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: PG8 Tuning]" } ]
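For reference, the /etc/vfstab route Jignesh mentions (as opposed to the remount command shown above) looks roughly like the line below. The device names and the /wal mount point are made up, and again this should cover only the filesystem holding pg_xlog, not the data files that benefit from the filesystem cache.

# hypothetical Solaris /etc/vfstab entry for a UFS filesystem dedicated to pg_xlog
/dev/dsk/c1t1d0s0   /dev/rdsk/c1t1d0s0   /wal   ufs   2   yes   forcedirectio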
[ { "msg_contents": "Hello all,\n\nI just was running strace in the writer process and I noticed this pattern:\n\nselect(0, NULL, NULL, NULL, {0, 200000}) = 0 (Timeout)\ngetppid() = 4240\ntime(NULL) = 1123773324\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0x81000) = 0x69ea3000\nsemop(1409034, 0xffffc0bc, 1) = 0\n<...seeks and writes...>\nmunmap(0x69ea3000, 528384) = 0\nselect(0, NULL, NULL, NULL, {0, 200000}) = 0 (Timeout)\ngetppid() = 4240\ntime(NULL) = 1123773324\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0x81000) = 0x69ea3000\nsemop(1605648, 0xffffc0bc, 1) = 0\n<...seeks and writes...>\nmunmap(0x69ea3000, 528384) = 0\nselect(0, NULL, NULL, NULL, {0, 200000}) = 0 (Timeout)\n\n\nwhy mmap and munmap each time? mmap and munmap are fairly expensive \noperations (on some systems), especially on multi cpu machines. munmap \nin particular generally needs to issue cross calls to the other cpus to \nensure any page mappings are invalidated. \n\nJust curious.\n\nThanks!\n\n-- Alan\n", "msg_date": "Thu, 11 Aug 2005 11:25:27 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "BG writer question?" }, { "msg_contents": "On Thu, Aug 11, 2005 at 11:25:27AM -0400, Alan Stange wrote:\n\n> why mmap and munmap each time? mmap and munmap are fairly expensive \n> operations (on some systems), especially on multi cpu machines. munmap \n> in particular generally needs to issue cross calls to the other cpus to \n> ensure any page mappings are invalidated. \n\nThere are no mmap/munmap calls in our code. The problematic code is\nprobably somewhere in the libc. Maybe it'd be useful to figure out\nwhere it's called and why, with an eye on working around that.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n", "msg_date": "Thu, 11 Aug 2005 12:43:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BG writer question?" } ]
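One plausible explanation for the pattern above, consistent with Alvaro's guess that it lives in libc: glibc's malloc hands any request larger than M_MMAP_THRESHOLD (128KB by default) straight to mmap and returns it with munmap on free, so a ~516KB buffer allocated and freed once per bgwriter round would produce exactly this trace. The sketch below is not PostgreSQL code, just a way to reproduce the behaviour under strace and, via mallopt, suppress it to test the hypothesis.

#include <malloc.h>
#include <stdlib.h>

int main(void)
{
    int i;

    /* Raising the threshold keeps allocations of this size on the heap;
     * mallopt(M_MMAP_MAX, 0) would disable mmap-backed malloc entirely. */
    /* mallopt(M_MMAP_THRESHOLD, 1024 * 1024); */

    for (i = 0; i < 10; i++) {
        void *p = malloc(528384);   /* same size as in the strace output */
        free(p);                    /* older glibc: one mmap+munmap per pass;
                                     * newer glibc adapts the threshold after
                                     * the first free and stops doing it */
    }
    return 0;
}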
[ { "msg_contents": "I'm having an odd case where my system is locking such that if I insert\ninto a table during a transaction, if I start a new connection and\ntransaction, it blocks while trying to do a similar insert until the\nfirst transaction is committed or rolled back.\n\nThe schema is rather complex (currently 157 tables, 200 views), and I\nstill haven't been able to create a small test case. Everything I've\ntried so far just works.\n\nThe data is private, but the schema is open source, so I probably could\nwork with someone on it. When I look at the pg_locks table, I seem to be\nblocked on:\nSELECT * FROM pg_locks WHERE granted = false;\n relation | database | transaction | pid | mode | granted\n----------+----------+-------------+-------+------------------+---------\n | | 1525932 | 30175 | ShareLock | f\n...\n\nWhich if I understand correctly, means that the current transaction is\nintentionally blocking waiting for the other transaction to finish.\n\nI'm currently running 8.0.3, but the database was first created under\n7.4.? I confirmed this behavior on both systems.\n\nUnder what circumstances would this occur?\n\nTo try and outline the situation there is a main object table, which is\nthe root object. It contains a group column which is used for access\nrights. There is a groupref table, which keeps track of the group rights\nfor each user. (Each user has specific insert,update,select rights per\ngroup).\n\nThe select rights are enforced by views (the tables are not publicly\naccessible, the views join against the groupref table to check for\nselect permission).\nInsert and update rights are validated by BEFORE INSERT triggers.\n\nMost tables references the object table. Basically it is OO, but doesn't\nuse the postgres inheritance (in our testing postgres inheritance didn't\nscale well for deep inheritance, and wasn't able to enforce uniqueness\nanyway.) The views present an OO appearance, and behind the scenes\ndirect table foreign keys maintain referential integrity.\n\nI have checked using RAISE NOTICE and the BEFORE INSERT trigger gets all\nthe way to the RETURN statement before things hang, so I haven't figured\nout what is actually hanging.\n\nI have a bzip'd version of the schema and just enough data to be useful\navailable here:\nhttp://www.arbash-meinel.com/extras/schema_and_data.sql.bz2\n\nThis is the commands to replicate the locking:\n\n-- Connect as postgres\n\n-- Required before any inserts, so that the TEMP env table is\n-- created and filled out.\nselect mf_setup_env();\n\n-- Begin a transaction and insert some data\nBEGIN;\nINSERT INTO object(vgroup,otype,oname) VALUES ('test',1,'test');\n\n-- Start a new shell, and connect again and do exactly the same thing\n-- as the above.\n-- It should hang until you either do END/ROLLBACK in the first\n-- connection.\n\nThanks for any help,\nJohn\n=:->", "msg_date": "Thu, 11 Aug 2005 15:36:31 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Odd Locking Problem" }, { "msg_contents": "On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:\n> I'm having an odd case where my system is locking such that if I insert\n> into a table during a transaction, if I start a new connection and\n> transaction, it blocks while trying to do a similar insert until the\n> first transaction is committed or rolled back.\n\nAre there foreign keys here? 
I can duplicate the problem easily with\nthem:\n\n-- session 1\ncreate table a (a serial primary key);\ncreate table b (a int references a);\ninsert into a values (1);\n\nbegin;\ninsert into b values (1);\n\n\n-- session 2\ninsert into b values (1);\n-- hangs\n\n\nIf I commit on session 1, session 2 is unlocked.\n\nThis is a known problem, solved in 8.1. A workaround for previous\nreleases is to defer FK checks until commit:\n\ncreate table b (a int references a initially deferred);\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\nDios hizo a Ad�n, pero fue Eva quien lo hizo hombre.\n", "msg_date": "Thu, 11 Aug 2005 17:08:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd Locking Problem" }, { "msg_contents": "Alvaro Herrera wrote:\n> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:\n>\n>>I'm having an odd case where my system is locking such that if I insert\n>>into a table during a transaction, if I start a new connection and\n>>transaction, it blocks while trying to do a similar insert until the\n>>first transaction is committed or rolled back.\n>\n>\n> Are there foreign keys here? I can duplicate the problem easily with\n> them:\n>\n> -- session 1\n> create table a (a serial primary key);\n> create table b (a int references a);\n> insert into a values (1);\n>\n> begin;\n> insert into b values (1);\n>\n>\n> -- session 2\n> insert into b values (1);\n> -- hangs\n>\n\nActually, there are but the insert is occurring into table 'a' not table\n'b'.\n'a' refers to other tables, but these should not be modified.\n\n>\n> If I commit on session 1, session 2 is unlocked.\n>\n> This is a known problem, solved in 8.1. A workaround for previous\n> releases is to defer FK checks until commit:\n>\n> create table b (a int references a initially deferred);\n\nI'll try one of the CVS entries and see if it happens there. Good to\nhear there has been work done.\n\nJohn\n=:->\n\n>", "msg_date": "Thu, 11 Aug 2005 16:11:58 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd Locking Problem" }, { "msg_contents": "Alvaro Herrera wrote:\n> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:\n>\n\n...\n\n>\n> This is a known problem, solved in 8.1. A workaround for previous\n> releases is to defer FK checks until commit:\n\nSo I don't know exactly what the fix was, but I just tested, and my\nproblem is indeed fixed with the latest CVS head. It no longer blocks.\n\n>\n> create table b (a int references a initially deferred);\n>\n\nJohn\n=:->", "msg_date": "Thu, 11 Aug 2005 18:30:28 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd Locking Problem" }, { "msg_contents": "On Thu, 11 Aug 2005 16:11:58 -0500, John A Meinel\n<[email protected]> wrote:\n>the insert is occurring into table 'a' not table 'b'.\n>'a' refers to other tables, but these should not be modified.\n\nSo your \"a\" is Alvaro's \"b\", and one of your referenced tables is\nAlvaro's \"a\". This is further supported by the fact that the problem\ndoesn't occur with 8.1.\n\nServus\n Manfred\n\n", "msg_date": "Mon, 15 Aug 2005 17:19:44 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd Locking Problem" } ]
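When this sort of blocking turns up, it helps to see which backend holds the transaction lock the waiter is queued on. A sketch against the 7.4/8.0-era pg_locks layout shown at the top of the thread (the view's columns changed in 8.1 and later); in the pre-8.1 foreign-key case the waiter shows up wanting ShareLock on the other session's transaction id, just as in the output above.

-- pair each ungranted transaction-lock wait with the session that holds it
SELECT w.pid  AS waiting_pid,
       h.pid  AS holding_pid,
       w.transaction AS waited_xid,
       w.mode AS wanted_mode
  FROM pg_locks w
  JOIN pg_locks h ON h.transaction = w.transaction
                 AND h.granted
                 AND h.pid <> w.pid
 WHERE NOT w.granted
   AND w.transaction IS NOT NULL;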
[ { "msg_contents": "> Actually, it seems to me that with the addition of the WAL in\nPostgreSQL\n> and the subsequent decreased need to fsync the data files themselves\n> (only during checkpoints?), that the only time a battery-backed write\n> cache would make a really large performance difference would be on the\n> drive(s) hosting the WAL.\n\nIt still helps. In my experience a good BBU Raid controller is only\nslightly slower than fsync=false. Fear the checkpoint storm if you\ndon't have some write caching. Beyond that I don't really care about\nwrite delay.\n\nAnother thing to watch out for is that some sync modes (varying by\nplatform) can do >1 seeks per sync. This will absolutely kill your\ncommit performance on the WAL without write caching.\n \n> So although it is in general good to have a dedicated spindle for the\n> WAL, for many workloads it is in fact significantly better to have the\n> WAL written to a battery-backed write cache. The exception would be\nfor\n> applications with fewer, larger transactions, in which case you could\n> actually use the dedicated spindle.\n\nExactly.\n\n \n> Hmmm, on second thought, now I think I understand the rationale behind\n> having a non-zero commit delay setting-- the problem with putting\n\nI don't trust commit_delay. Get a good raid controller and make sure pg\nis properly using it. Now, if you can't (or won't) do some type of\nwrite caching bbu or no, your system has to be very carefully designed\nto get any performance at all, especially with high transaction volumes.\n\n\nMerlin\n", "msg_date": "Thu, 11 Aug 2005 16:38:53 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG8 Tuning" } ]
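As a concrete starting point for the knobs touched on above, the 8.0-era WAL settings look like the fragment below. These are values to benchmark on the actual hardware rather than recommendations; the fastest wal_sync_method is platform-specific, and the advice above is to leave fsync on and let a battery-backed controller absorb the latency rather than reaching for commit_delay.

# postgresql.conf (8.0.x) -- starting points to benchmark, not prescriptions
fsync = true                  # keep on; rely on the BBU write cache for latency
wal_sync_method = fdatasync   # try fsync / fdatasync / open_sync / open_datasync
wal_buffers = 64
commit_delay = 0              # per the advice above, prefer write caching instead
checkpoint_segments = 64      # spread checkpoints out to soften checkpoint storms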
[ { "msg_contents": "I have a largely table-append-only application where most transactions \nare read-intensive and many are read-only. The transactions may span \nmany tables, and in some cases might need to pull 70 MB of data out of a \ncouple of the larger tables.\n\n\nIn 7.3, I don't seem to see any file system or other caching that helps \nwith repeated reads of the 70MB of data. Secondary fetches are pretty \nmuch as slow as the first fetch. (The 70MB in this example might take \nplace via 2000 calls to a parameterized statement via JDBC).\n\nWere there changes after 7.3 w.r.t. caching of data? I read this list \nand see people saying that 8.0 will use the native file system cache to \ngood effect. Is this true? Is it supposed to work with 7.3? Is there \nsomething I need to do to get postgresql to take advatage of large ram \nsystems?\n\nThanks for any advice.\n", "msg_date": "Thu, 11 Aug 2005 18:21:01 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Mostly read performance" }, { "msg_contents": "Jeffrey Tenny wrote:\n> I have a largely table-append-only application where most transactions\n> are read-intensive and many are read-only. The transactions may span\n> many tables, and in some cases might need to pull 70 MB of data out of a\n> couple of the larger tables.\n>\n>\n> In 7.3, I don't seem to see any file system or other caching that helps\n> with repeated reads of the 70MB of data. Secondary fetches are pretty\n> much as slow as the first fetch. (The 70MB in this example might take\n> place via 2000 calls to a parameterized statement via JDBC).\n>\n> Were there changes after 7.3 w.r.t. caching of data? I read this list\n> and see people saying that 8.0 will use the native file system cache to\n> good effect. Is this true? Is it supposed to work with 7.3? Is there\n> something I need to do to get postgresql to take advatage of large ram\n> systems?\n>\n> Thanks for any advice.\n>\n\nWell, first off, the general recommendation is probably that 7.3 is\nreally old, and you should try to upgrade to at least 7.4, though\nrecommended to 8.0.\n\nThe bigger questions: How much RAM do you have? How busy is your system?\n\n8.0 doesn't really do anything to do make the system cache the data.\nWhat kernel are you using?\n\nAlso, if your tables are small enough, and your RAM is big enough, you\nmight already have everything cached.\n\nOne way to flush the caches, is to allocate a bunch of memory, and then\nscan through it. Or maybe mmap a really big file, and access every byte.\nBut if your kernel is smart enough, it could certainly deallocate pages\nafter you stopped accessing them, so I can't say for sure that you can\nflush the memory cache. Usually, I believe these methods are sufficient.\n\nJohn\n=:->", "msg_date": "Thu, 11 Aug 2005 17:38:47 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mostly read performance" }, { "msg_contents": "John A Meinel wrote:\n > Well, first off, the general recommendation is probably that 7.3 is\n> really old, and you should try to upgrade to at least 7.4, though\n> recommended to 8.0.\n\nThere have been issues with each release that led me to wait.\nEven now I'm waiting for some things to settle in the 8.0 JDBC driver\n(timezones), and 7.3 has behaved well for me. But yes, I'd like to upgrade.\n\n> \n> The bigger questions: How much RAM do you have? How busy is your system?\n\nThe system for testing was 512MB. 
I'm in the process of buying some \nadditional memory. However there was no swap activity on that system, \nso I doubt memory was the limiting factor.\n\n> \n> 8.0 doesn't really do anything to do make the system cache the data.\n> What kernel are you using?\n\n2.4.X for various large x. (Multiple systems). Gonna try 2.6.x soon.\n\n> \n> Also, if your tables are small enough, and your RAM is big enough, you\n> might already have everything cached.\n\nWell, that's what you'd expect. But a first time 70MB fetch on a \nfreshly rebooted system took just as long as all secondary times. (Took \nover a minute to fetch, which is too long for my needs, at least on \nsecondary attempts).\n\n> One way to flush the caches, is to allocate a bunch of memory, and then\n> scan through it. Or maybe mmap a really big file, and access every byte.\n> But if your kernel is smart enough, it could certainly deallocate pages\n> after you stopped accessing them, so I can't say for sure that you can\n> flush the memory cache. Usually, I believe these methods are sufficient.\n\nNot sure how that would really help. It doesn't seem like the database \nor file system is caching the table content either way, which led me to \nthis inquiry.\n", "msg_date": "Thu, 11 Aug 2005 19:13:27 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mostly read performance" }, { "msg_contents": "On Thu, Aug 11, 2005 at 07:13:27PM -0400, Jeffrey Tenny wrote:\n>The system for testing was 512MB\n\nThat's definately *not* a \"large ram\" system. If you're reading a subset\nof data that totals 70MB I'm going to guess that your data set is larger\nthan or at least a large fraction of 512MB.\n\n>additional memory. However there was no swap activity on that system, \n>so I doubt memory was the limiting factor.\n\nThe system won't swap if your data set is larger than your memory, it\njust won't cache the data.\n\n>Well, that's what you'd expect. But a first time 70MB fetch on a \n>freshly rebooted system took just as long as all secondary times. (Took \n>over a minute to fetch, which is too long for my needs, at least on \n>secondary attempts).\n\nIf the query involves a table scan and the data set is larger than your\navailable memory, you'll need a full scan every time. If you do a table\nscan and the table fits in RAM, subsequent runs should be faster. If you\nhave an index and only need to look at a subset of the table, subsequent\nruns should be faster. Without knowing more about your queries it's not\nclear what your situation is.\n\nMike Stone\n", "msg_date": "Thu, 11 Aug 2005 20:30:31 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mostly read performance" }, { "msg_contents": "\nMichael Stone <[email protected]> writes:\n\n> > Well, that's what you'd expect. But a first time 70MB fetch on a freshly\n> > rebooted system took just as long as all secondary times. (Took over a\n> > minute to fetch, which is too long for my needs, at least on secondary\n> > attempts).\n\nThat's not impressively fast even for the disk. You should get up to about\n40Mbit/s or 5MByte/s from the disk. Add some overhead for postgres; so I would\nexpect a full table scan of 70MB to take more like 15-30s, not over a minute.\n\nWhat is your shared_buffers setting? Perhaps you have it set way too high or\nway too low?\n\nAlso, you probably should post the \"explain analyze\" output of the actual\nquery you're trying to optimize. 
Even if you're not looking for a better plan\nhaving hard numbers is better than guessing.\n\nAnd the best way to tell if the data is cached is having a \"vmstat 1\" running\nin another window. Start the query and look at the bi/bo columns. If you see\nbi spike upwards then it's reading from disk.\n\n-- \ngreg\n\n", "msg_date": "12 Aug 2005 03:32:37 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mostly read performance" } ]
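A minimal way to run the vmstat check Greg describes: capture vmstat while the problem query runs and watch the bi column. The database name mydb, the table file_data and the WHERE clause are hypothetical stand-ins for the real 70MB fetch.

vmstat 1 > /tmp/vmstat.log &
VMSTAT_PID=$!

psql -c "EXPLAIN ANALYZE SELECT * FROM file_data WHERE file_id = 42 ORDER BY record_id;" mydb

kill $VMSTAT_PID
# a bi spike during the run means the rows came off disk; a flat bi on an
# immediate re-run means the OS page cache held them after all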
[ { "msg_contents": "I have database of company data, and some of them is table of information \nabout employees. I need each employee to have access only to his own row. \nPostgre cannot do this by system of privileges, because that can give \nprivileges only to whole tables.\n\nPossibility is to create a view for each employee that chooses only his data \nand give employee privileges to this view. But I am not sure if such number \nof views does not have some performance drawbacks or even if postgre can \nsupport it (I expect i can). I would need several tables protected like this \nand it can result in, say 1000 views in maximum.\n\nBecause access to DB will go through PHP information system, other \npossibility to protect data is to let IS connect as more privileged than \nuser really is, but let it retrieve only data for that user.\n\nView-approach seems far more clear than this, but im not sure if postgre can \nhandle it without problems.\n\nThanks for any reply :-)\n\n-----------------------------------------------------------\nPetr Kavan\nDatabase Development\n\n\n", "msg_date": "Fri, 12 Aug 2005 08:53:33 +0200", "msg_from": "\"Petr Kavan\" <[email protected]>", "msg_from_op": true, "msg_subject": "How many views is ok?" }, { "msg_contents": "Petr Kavan wrote:\n\n> I have database of company data, and some of them is table of\n> information about employees. I need each employee to have access only\n> to his own row. Postgre cannot do this by system of privileges,\n> because that can give privileges only to whole tables.\n>\n> Possibility is to create a view for each employee that chooses only\n> his data and give employee privileges to this view. But I am not sure\n> if such number of views does not have some performance drawbacks or\n> even if postgre can support it (I expect i can). I would need several\n> tables protected like this and it can result in, say 1000 views in\n> maximum.\n>\n> Because access to DB will go through PHP information system, other\n> possibility to protect data is to let IS connect as more privileged\n> than user really is, but let it retrieve only data for that user.\n>\n> View-approach seems far more clear than this, but im not sure if\n> postgre can handle it without problems.\n\nWe do a similar thing tying user to per-row permissions. We have 1 view\nper table, and it works fine.\nI would recommend that you do something similar. Basically, just make\nthe view:\n\nCREATE VIEW just_me SECURITY DEFINER AS\n SELECT * FROM user_table WHERE username=session_user;\nREVOKE ALL FROM user_table;\nGRANT SELECT TO just_me TO PUBLIC;\n\nsecurity definer, means that the 'just_me' view will be executed as the\nuser who created the function (superuser).\nThe REVOKE ALL (my syntax might be wrong) prevents users from querying\nthe user tables directly.\nThe 'session_user' makes the view use the name of the actual connected\nuser (because of security definer, plain 'user' is the superuser)\nThis should allow a user to see only their own row in the database.\n(Whichever rows that have username matching the connected name).\n\nNow, this only works if the php front end connects specifically as the\ngiven user (our system is setup to do this).\n\nIf you don't do it this way, you'll always be stuck with the IS layer\ndoing the restriction. 
Even if you create a view per user, if your PHP\nlayer has the right to look at other tables/views, it doesn't really help.\n\nGood luck,\nJohn\n=:->\n\n>\n> Thanks for any reply :-)\n>\n> -----------------------------------------------------------\n> Petr Kavan\n> Database Development\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>", "msg_date": "Sun, 14 Aug 2005 19:22:12 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many views is ok?" }, { "msg_contents": "\"Petr Kavan\" <[email protected]> writes:\n> Possibility is to create a view for each employee that chooses only his data \n> and give employee privileges to this view. But I am not sure if such number \n> of views does not have some performance drawbacks or even if postgre can \n> support it (I expect i can).\n\nDo you really need more than one view? I'd consider something like\n\n\tcreate view emp_view as select * from emp where name = current_user;\n\nThis requires that your Postgres usernames match up with something in\nthe underlying table, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Aug 2005 20:38:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many views is ok? " }, { "msg_contents": "Hey, that trick with session_user is great! :-) Thank you all very much, \nthis will certainly help.\n\n\n-----------------------------------------------------------\nPetr Kavan\nDatabase Development\n\n\n----- Original Message ----- \nFrom: \"John Arbash Meinel\" <[email protected]>\nTo: \"Petr Kavan\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, August 15, 2005 2:22 AM\nSubject: Re: [PERFORM] How many views is ok?\n\n\n", "msg_date": "Mon, 15 Aug 2005 07:23:23 +0200", "msg_from": "\"Petr Kavan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many views is ok?" } ]
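John's sketch above has the right shape but, as he suspected, not quite the right syntax: SECURITY DEFINER belongs to functions, while a view is already executed with its owner's privileges, so creating the view as the table owner gives the intended effect. A corrected, runnable version using the same hypothetical user_table/username names:

-- run as the owner of user_table (or a superuser)
CREATE VIEW just_me AS
    SELECT * FROM user_table WHERE username = session_user;

REVOKE ALL ON user_table FROM PUBLIC;   -- no direct access to the base table
GRANT SELECT ON just_me TO PUBLIC;      -- each login sees only its own row

As both John and Tom note, this only restricts sessions that connect as their own database role; if the PHP layer connects as a single privileged user, the filtering has to happen in that layer instead.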
[ { "msg_contents": "(Pardon my replying two two replies at once, I only get the digest and \nthis was easier).\n\nMichael Stone wrote:\n[...]\n>> Well, that's what you'd expect. But a first time 70MB fetch on a freshly rebooted system took just as long as all secondary times. (Took over a minute to fetch, which is too long for my needs, at least on secondary attempts).\n> \n> \n> If the query involves a table scan and the data set is larger than your\n> available memory, you'll need a full scan every time. If you do a table\n> scan and the table fits in RAM, subsequent runs should be faster. If you\n> have an index and only need to look at a subset of the table, subsequent\n> runs should be faster. Without knowing more about your queries it's not\n> clear what your situation is.\n\nI must amend my original statement. I'm not using a parameterized \nstatement. The system is effectively fetching file content stored in \nthe database for portions of one or more files. It attempts to batch\nthe records being fetched into as few non-parameterized queries as \npossible, while balancing the rowset retrieval memory impact.\n\nCurrently that means it will request up to 16K records in a query that \nis assembled using a combination of IN (recids...) , BETWEEN ranges, and\nUNION ALL for multiple file IDs. I do this to minimize the latency of\ndbclient/dbserver requests, while at the same time capping the maximum \ndata returned by a DBIO to about 1.2MB per maximum retrieved record set.\n(I'm trying not to pound the java app server via jdbc memory usage).\nThere's an ORDER BY on the file id column too.\n\nIt sounds like a simple enough thing to do, but this \"pieces of many \nfiles in a database\" problem is actually pretty hard to optimize.\nFetching all records for all files, even though I don't need all \nrecords, is both inefficient and likely to use too much memory. \nFetching 1 file at a time is likely to result in too many queries \n(latency overhead). So right now I err on the side of large but record \nlimited queries. That let's me process many files in one query, unless \nthe pieces of the files I need are substantial.\n(I've been burned by trying to use setFetchSize so many times it isn't \nfunny, I never count on that any more).\n\nAn index is in place to assist with record selection, I'll double check \nthat it's being used. It's a joint index on file-id and \nrecord-id-within-the-file. I'll check to be sure it's being used.\n\n------------------------\n\n\nGreg Stark wrote:\n[...]\n\n> What is your shared_buffers setting? Perhaps you have it set way too high or\n> way too low?\n\nI generally run with the conservative installation default. I did some \nexperimenting with larger values but didn't see any improvement (and \nyes, I restarted postmaster). This testing was done a while ago, I \ndon't have the numbers in memory any more so I can't tell you what they \nwere.\n\n> \n> Also, you probably should post the \"explain analyze\" output of the actual\n> query you're trying to optimize. Even if you're not looking for a better plan\n> having hard numbers is better than guessing.\n\nA good suggestion. I'll look into it.\n\n\n> And the best way to tell if the data is cached is having a \"vmstat 1\" running\n> in another window. Start the query and look at the bi/bo columns. 
If you see\n> bi spike upwards then it's reading from disk.\n\nAnother good suggestion.\n\nI'll look into getting further data from the above suggestions.\n\nI'm also looking into getting a gig or two of ram to make sure that \nisn't an issue.\n\nThe basis of my post originally was to make sure that, all things being \nequal, there's no reason those disk I/Os on behalf of the database \nshouldn't be cached by the operating/file system so that repeated reads \nmight benefit from in-memory data.\n\n", "msg_date": "Fri, 12 Aug 2005 18:37:45 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mostly read performance (2 replies)" } ]
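To go with the explain analyze suggestion from the earlier thread, one batch of the shape described above could be checked as follows. The table file_content and its columns are hypothetical stand-ins; the point is simply to confirm each branch uses the joint (file_id, record_id) index rather than a sequential scan.

EXPLAIN ANALYZE
SELECT file_id, record_id, content
  FROM file_content
 WHERE file_id = 17 AND record_id BETWEEN 1 AND 16384
UNION ALL
SELECT file_id, record_id, content
  FROM file_content
 WHERE file_id = 23 AND record_id IN (5, 9, 130, 4096)
 ORDER BY file_id, record_id;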
[ { "msg_contents": "This is a multi-part message in MIME format.\n\n--bound1124085115\nContent-Type: text/plain; charset=iso-8859-1\nContent-Transfer-Encoding: 7bit\n\nOne little thing. Did you shutdown sql2000 while testing postgresql? Remember that postgresql uses system cache. Sql2000 uses a large part of memory as buffer and it will not be available to operating system. I must say that, probably, results will be the same, but it will be a better test.\n\n> I'm guessing the reason your query is faster with SQLServer is because\n> of how postgres handles MVCC. Basically, it still has to fetch the main\n> page to determine if a row exists. While SQL server doesn't do MVCC, so\n> it can just look things up in the index.\n\nAnother thing [almost offtopic]:\nI would like to add something to understand what does MVCC means and what are the consecuences.\nMVCC: multiversion concurrency control. (ehhh...)\n\nJust do this.\n\nOpen two psql sessions. Do this:\nSession 1:\n begin;\n update any_table set any_column = 'value_a' where other_column = 'value_b'\n -- do not commit\nSession 2:\n select any_table where other_column = 'value_b'\n Watch the result.\nSession 1:\n commit;\nSession 2:\n select any_table where other_column = 'value_b'\n Watch the result.\n\nNow open two session in query analyzer. Do the same thing:\nSession 1:\n begin tran\n update any_table set any_column = 'value_a' where other_column = 'value_b'\n -- do not commit\nSession 2:\n select any_table where other_column = 'value_b'\n Wait for result.\n Wait... wait... (Oh, a lock! Ok, when you get tired, go back to session 1.)\nSession 1:\n commit\nSession 2:\n Then watch the result. \n\nWhich one was faster?\n\n[\"very, very offtopic\"]\nOk. This comparition is just as useless as the other one, because it's comparing oranges with apples (It's funny anyway). I was just choosing an example in which you can see the best of postgresql against 'not so nice' behavior of mssql2000 (no service pack, it's my desktop system, I'll do the same test later with SP4 and different isolation levels and I'll check results). Furthermore, MSSQL2000 is 5 years old now. Does anybody has the same cellular phone, or computer? (I don't want to know :-) ). The big question is 'What do you need?'. No system can give you all. That's marketing 'sarasa'.\n\nSorry for my english and the noise. [End of offtopic]\n\nLong life, little spam and prosperity.\n\n--bound1124085115--\n\n", "msg_date": "Mon, 15 Aug 2005 01:51:55 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": " \n> \n> One little thing. Did you shutdown sql2000 while testing \n> postgresql? Remember that postgresql uses system cache. \n> Sql2000 uses a large part of memory as buffer and it will not \n> be available to operating system. I must say that, probably, \n> results will be the same, but it will be a better test.\n> \n\nShutting done SQL2000 has no effect on PG performancies.\n\nStephane.\n\n\n\n", "msg_date": "Mon, 15 Aug 2005 11:08:06 +0200", "msg_from": "=?iso-8859-1?Q?St=E9phane_COEZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." } ]
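A concrete version of the two-session experiment described above, with an actual table in place of any_table (names are illustrative); on the PostgreSQL side both selects return immediately, which is the MVCC behaviour being demonstrated.

-- setup, once
CREATE TABLE mvcc_demo (id int PRIMARY KEY, val text);
INSERT INTO mvcc_demo VALUES (1, 'value_b');

-- session 1
BEGIN;
UPDATE mvcc_demo SET val = 'value_a' WHERE id = 1;   -- leave uncommitted

-- session 2: not blocked, still sees 'value_b'
SELECT * FROM mvcc_demo WHERE id = 1;

-- session 1
COMMIT;

-- session 2: now sees 'value_a'
SELECT * FROM mvcc_demo WHERE id = 1;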
[ { "msg_contents": "> Hi,\n> \n> I have a perfomance issue :\n> \n> I run PG (8.0.3) and SQLServer2000 on a Windows2000 Server \n> (P4 1,5Ghz 512Mo) I have a table (3200000 rows) and I run \n> this single query :\n> \n> select cod from mytable group by cod\n> I have an index on cod (char(4) - 88 different values)\n> \n> PG = ~ 20 sec.\n> SQLServer = < 8 sec\n> \n> \n> the explain is :\n> \n> HashAggregate (cost=64410.09..64410.09 rows=55 width=8)\n> -> Seq Scan on mytable (cost=0.00..56325.27 rows=3233927 width=8)\n> \n> \n> if I switch to \"enable_hashagg = false\" (just for a try...) \n> the planner will choose my index :\n> \n> Group (cost=0.00..76514.01 rows=55 width=8)\n> -> Index Scan using myindex on mytable \n> (cost=0.00..68429.20 rows=3233927\n> width=8)\n> \n> but performance will be comparable to previous test.\n> \n> So with or without using Index I have the same result.\n\nOut of curiosity, what plan do you get from SQLServer? I bet it's a clustered index scan...\n\n\n//Magnus\n", "msg_date": "Mon, 15 Aug 2005 10:18:03 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "> De : Magnus Hagander [mailto:[email protected]] \n> Out of curiosity, what plan do you get from SQLServer? I bet \n> it's a clustered index scan...\n> \n> \n> //Magnus\n> \n\nI have a Table scan and Hashaggregate...\nStephane\n \n\n\n\n", "msg_date": "Mon, 15 Aug 2005 11:08:06 +0200", "msg_from": "=?iso-8859-1?Q?St=E9phane_COEZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." } ]
[ { "msg_contents": "> [\"very, very offtopic\"]\n> Ok. This comparition is just as useless as the other one, \n> because it's comparing oranges with apples (It's funny \n> anyway). I was just choosing an example in which you can see \n> the best of postgresql against 'not so nice' behavior of \n> mssql2000 (no service pack, it's my desktop system, I'll do \n> the same test later with SP4 and different isolation levels \n> and I'll check results).\n\nThere will be no difference in the service packs.\nSQL 2005 has \"MVCC\" (they call it something different, of course, but\nthat's basicallyi what it is)\n\n> Furthermore, MSSQL2000 is 5 years \n> old now. Does anybody has the same cellular phone, or \n> computer? (I don't want to know :-) ). The big question is\n\nThere is a big difference between your database and your cellphone.\nThere are a lot of systems out there running very solidly on older\nproducts like MSSQL 7 (probably even some on 6.x), as well as Oracle 7,8\nand 9...\nI'd say there is generally a huge difference in reliabilty in your\ncellphone hw/sw than there is in your db hw/sw. I have yet to see a\ncellphone that can run for a year without a reboot (or with a lot of\nbrands, complete replacement).\n \n//Magnus\n", "msg_date": "Mon, 15 Aug 2005 10:25:47 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "On Mon, Aug 15, 2005 at 10:25:47AM +0200, Magnus Hagander wrote:\n\n> SQL 2005 has \"MVCC\" (they call it something different, of course, but\n> that's basicallyi what it is)\n\nInteresting; do they use an overwriting storage manager like Oracle, or\na non-overwriting one like Postgres?\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n", "msg_date": "Mon, 15 Aug 2005 12:18:59 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "\n\"Alvaro Herrera\" <[email protected]> writes\n>\n> Interesting; do they use an overwriting storage manager like Oracle, or\n> a non-overwriting one like Postgres?\n>\n\nThey call this MVCC \"RLV(row level versioning)\". I think they use rollback\nsegment like Oracle (a.k.a \"version store\" or tempdb in SQL Server). Some\ndetails are explained in their white paper:\"Database concurrency and row\nlevel versioning in SQL Server 2005\".\n\nRegards,\nQingqing\n\n\n", "msg_date": "Thu, 18 Aug 2005 16:56:41 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." }, { "msg_contents": "Qingqing Zhou wrote:\n> \"Alvaro Herrera\" <[email protected]> writes\n>\n>>Interesting; do they use an overwriting storage manager like Oracle, or\n>>a non-overwriting one like Postgres?\n>>\n>\n>\n> They call this MVCC \"RLV(row level versioning)\". I think they use rollback\n> segment like Oracle (a.k.a \"version store\" or tempdb in SQL Server). 
Some\n> details are explained in their white paper:\"Database concurrency and row\n> level versioning in SQL Server 2005\".\n>\n> Regards,\n> Qingqing\n>\n\nI found the paper here:\nhttp://www.microsoft.com/technet/prodtechnol/sql/2005/cncrrncy.mspx\n\nAnd it does sound like they are doing it the Oracle way:\n\nWhen a record in a table or index is updated, the new record is stamped\nwith the transaction sequence_number of the transaction that is doing\nthe update. The previous version of the record is stored in the version\nstore, and the new record contains a pointer to the old record in the\nversion store. Old records in the version store may contain pointers to\neven older versions. All the old versions of a particular record are\nchained in a linked list, and SQL Server may need to follow several\npointers in a list to reach the right version. Version records need to\nbe kept in the version store only as long as there are there are\noperations that might require them.\n\nJohn\n=:->", "msg_date": "Thu, 18 Aug 2005 09:44:34 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance pb vs SQLServer." } ]
[ { "msg_contents": "The system is a dual Xenon with 6Gig of ram and 14 73Gig 15K u320 scsi\ndrives. Plus 2 raid 1 system dives.\n\nRedHat EL ES4 is the OS. \n\n\nAny1 have any suggestions as to the configuration? The database is about\n60 Gig's. Should jump to 120 here quite soon. Mus of the searches\ninvolve people's names. Through a website. My current setup just doesn't\nseem to have resulted in the performance kick I wanted. I don't know if\nit's LVM or what. The strang thing is that My Memory usage stays very\nLOW for some reason. While on my current production server it stays very\nhigh. Also looking for ideas on stipe and extent size. The below is run\noff of a RAID 10. I have not moved my WAL file yet, but there were no\nincoming transactions at the time the query was run. My stats on the\nidentity table are set to 1000.\n\n\n\n> explain analyze select distinct case_category,identity_id,court.name,litigant_details.case_id,case_year,date_of_birth,assigned_case_role,litigant_details.court_ori,full_name,litigant_details.actor_id,case_data.type_code,case_data.subtype_code,litigant_details.impound_litigant_data, to_number(trim(leading case_data.type_code from trim(leading case_data.case_year from case_data.case_id)),'999999') as seq from identity,court,litigant_details,case_data where identity.court_ori = litigant_details.court_ori and identity.case_id = litigant_details.case_id and identity.actor_id = litigant_details.actor_id and court.id = identity.court_ori and identity.court_ori = case_data.court_ori and case_data.case_id = identity.case_id and identity.court_ori = 'IL081025J' and full_name like 'SMITH%' order by full_name;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=34042.46..34042.57 rows=3 width=173) (actual time=63696.896..63720.193 rows=8086 loops=1)\n> -> Sort (cost=34042.46..34042.47 rows=3 width=173) (actual time=63696.892..63702.239 rows=8086 loops=1)\n> Sort Key: identity.full_name, case_data.case_category, identity.identity_id, court.name, litigant_details.case_id, case_data.case_year, identity.date_of_birth, litigant_details.assigned_case_role, litigant_details.court_ori, litigant_details.actor_id, case_data.type_code, case_data.subtype_code, litigant_details.impound_litigant_data, to_number(ltrim(ltrim((case_data.case_id)::text, (case_data.case_year)::text), (case_data.type_code)::text), '999999'::text)\n> -> Nested Loop (cost=0.00..34042.43 rows=3 width=173) (actual time=135.498..63655.542 rows=8086 loops=1)\n> -> Nested Loop (cost=0.00..34037.02 rows=1 width=159) (actual time=95.760..34637.611 rows=8086 loops=1)\n> -> Nested Loop (cost=0.00..34033.72 rows=1 width=138) (actual time=89.222..34095.763 rows=8086 loops=1)\n> Join Filter: ((\"outer\".case_id)::text = (\"inner\".case_id)::text)\n> -> Index Scan using name_speed on identity (cost=0.00..1708.26 rows=8152 width=82) (actual time=42.589..257.818 rows=8092 loops=1)\n> Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n> Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 
'SMITH%'::text))\n> -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..3.95 rows=1 width=81) (actual time=4.157..4.170 rows=1 loops=8092)\n> Index Cond: ((\"outer\".actor_id)::text = (litigant_details.actor_id)::text)\n> Filter: ('IL081025J'::text = (court_ori)::text)\n> -> Seq Scan on court (cost=0.00..3.29 rows=1 width=33) (actual time=0.051..0.058 rows=1 loops=8086)\n> Filter: ('IL081025J'::text = (id)::text)\n> -> Index Scan using case_data_pkey on case_data (cost=0.00..5.36 rows=2 width=53) (actual time=3.569..3.572 rows=1 loops=8086)\n> Index Cond: (('IL081025J'::text = (case_data.court_ori)::text) AND ((case_data.case_id)::text = (\"outer\".case_id)::text))\n> Total runtime: 63727.873 ms\n> \n> \n\n\n\n> tcpip_socket = true\n> max_connections = 100\n> shared_buffers = 50000 # min 16, at least max_connections*2, 8KB each\n> sort_mem = 2024000 # min 64, size in KB\n> vacuum_mem = 819200 # min 1024, size in KB\n> checkpoint_segments = 20 # in logfile segments, min 1, 16MB each\n> effective_cache_size = 3600000 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n> log_min_duration_statement = 10000 # Log all statements whose\n> lc_messages = 'C' # locale for system error message strings\n> lc_monetary = 'C' # locale for monetary formatting\n> lc_numeric = 'C' # locale for number formatting\n> lc_time = 'C' # locale for time formatting\n\n\n\n\n\nIngrate, n.: A man who bites the hand that feeds him, and then complains\nof indigestion.", "msg_date": "Mon, 15 Aug 2005 11:24:57 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "I'm configuraing a new system (Bigish) and need some advice." }, { "msg_contents": "that took a little while to get through the system didn't it. Please\nignore.\n\n\n> Ingrate, n.: A man who bites the hand that feeds him, and then complains\n> of indigestion.\n-- \nA free society is one where it is safe to be unpopular.\n -- Adlai Stevenson\n\n", "msg_date": "Fri, 19 Aug 2005 10:26:25 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I'm configuraing a new system (Bigish) and need some" } ]
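One setting in the configuration above deserves a second look independently of the query plan: sort_mem is measured in kilobytes and can be claimed once per sort or hash operation, so 2024000 (about 2 GB) may be requested several times by a single complex query, and with max_connections = 100 the theoretical total is far beyond the 6 GB of RAM in the box. A more conservative pattern is a modest default plus a per-session increase for known-heavy jobs; the numbers below are only illustrative:

    -- postgresql.conf:  sort_mem = 16384     (16 MB default, in KB)

    -- inside the session that runs a big report:
    SET sort_mem = 262144;   -- 256 MB for this session only
    -- ... run the reporting query ...
    RESET sort_mem;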
[ { "msg_contents": "7.4 is the pg version BTW....going to switch to 8 if it's worth it.\n\n\nIngrate, n.: A man who bites the hand that feeds him, and then complains\nof indigestion.\n-- \n\"Don't say yes until I finish talking.\"\n -- Darryl F. Zanuck", "msg_date": "Mon, 15 Aug 2005 11:29:04 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I'm configuraing a new system (Bigish) and need some advice." } ]
[ { "msg_contents": "7.4 is the pg version BTW....going to switch to 8 if it's worth it.\n\n\nIngrate, n.: A man who bites the hand that feeds him, and then complains\nof indigestion.\n-- \n\"Don't say yes until I finish talking.\"\n -- Darryl F. Zanuck\n-- \n\"Don't say yes until I finish talking.\"\n -- Darryl F. Zanuck\n\n", "msg_date": "Mon, 15 Aug 2005 11:29:23 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I'm configuraing a new system (Bigish) and need some advice." } ]
[ { "msg_contents": "Hi,\n\n \n\nOne simple question. For 125 or more checkpoint segments\n(checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n170MB) on a very busy database, what is more suitable, a separate 6 disk\nRAID5 volume, or a RAID10 volume? Databases will be on separate\nspindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\nparamount, but I don't want to use RAID0.\n\n \n\nPG7.4.7 on RHAS 4.0\n\n \n\nI can provide more info if needed.\n\n \n\nAppreciate some recommendations!\n\n \n\nThanks,\n\nAnjan\n\n \n\n \n---\nThis email message and any included attachments constitute confidential\nand privileged information intended exclusively for the listed\naddressee(s). If you are not the intended recipient, please notify\nVantage by immediately telephoning 215-579-8390, extension 1158. In\naddition, please reply to this message confirming your receipt of the\nsame in error. A copy of your email reply can also be sent to\[email protected]. Please do not disclose, copy, distribute or take\nany action in reliance on the contents of this information. Kindly\ndestroy all copies of this message and any attachments. Any other use of\nthis email is prohibited. Thank you for your cooperation. For more\ninformation about Vantage, please visit our website at\nhttp://www.vantage.com <http://www.vantage.com/> .\n---\n\n \n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nOne simple question. For 125 or more checkpoint segments (checkpoint_timeout\nis 600 seconds, shared_buffers are at 21760 or 170MB) on a very busy database,\nwhat is more suitable, a separate 6 disk RAID5 volume, or a RAID10 volume?\nDatabases will be on separate spindles. Disks are 36GB 15KRPM, 2Gb Fiber\nChannel. Performance is paramount, but I don’t want to use RAID0.\n \nPG7.4.7 on RHAS 4.0\n \nI can provide more info\nif needed.\n \nAppreciate some recommendations!\n \nThanks,\nAnjan\n \n ---This email message and any included attachments constitute confidential and privileged information intended exclusively for the listed addressee(s). If you are not the intended recipient, please notify Vantage by immediately telephoning 215-579-8390, extension 1158. In addition, please reply to this message confirming your receipt of the same in error. A copy of your email reply can also be sent to [email protected]. Please do not disclose, copy, distribute or take any action in reliance on the contents of this information. Kindly destroy all copies of this message and any attachments. Any other use of this email is prohibited. Thank you for your cooperation. For more information about Vantage, please visit our website at http://www.vantage.com.---", "msg_date": "Mon, 15 Aug 2005 16:35:05 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "choosing RAID level for xlogs" }, { "msg_contents": "Quoting Anjan Dave <[email protected]>:\n\n> Hi,\n> \n> \n> \n> One simple question. For 125 or more checkpoint segments\n> (checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n> 170MB) on a very busy database, what is more suitable, a separate 6 disk\n> RAID5 volume, or a RAID10 volume? Databases will be on separate\n> spindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\n> paramount, but I don't want to use RAID0.\n> \n\nRAID10 -- no question. xlog activity is overwhelmingly sequential 8KB writes. \nIn order for RAID5 to perform a write, the host (or controller) needs to perform\nextra calculations for parity. This turns into latency. 
RAID10 does not\nperform those extra calculations.\n\n> \n> \n> PG7.4.7 on RHAS 4.0\n> \n> \n> \n> I can provide more info if needed.\n> \n> \n> \n> Appreciate some recommendations!\n> \n> \n> \n> Thanks,\n> \n> Anjan\n> \n> \n> \n> \n> ---\n> This email message and any included attachments constitute confidential\n> and privileged information intended exclusively for the listed\n> addressee(s). If you are not the intended recipient, please notify\n> Vantage by immediately telephoning 215-579-8390, extension 1158. In\n> addition, please reply to this message confirming your receipt of the\n> same in error. A copy of your email reply can also be sent to\n> [email protected]. Please do not disclose, copy, distribute or take\n> any action in reliance on the contents of this information. Kindly\n> destroy all copies of this message and any attachments. Any other use of\n> this email is prohibited. Thank you for your cooperation. For more\n> information about Vantage, please visit our website at\n> http://www.vantage.com <http://www.vantage.com/> .\n> ---\n> \n> \n> \n> \n\n\n", "msg_date": "Tue, 16 Aug 2005 11:00:08 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" } ]
[ { "msg_contents": "i have problem in database encoding with indexing\n\ncase :1\nwhen i initdb -E win874 mydb and create database with createdb -E win874 \ndbname\nand create table emp with index in empname field\n\n\ni can select sorting name correct in thai alphabet\nsuch select empName from emp order by empName;\nbut in\nselect empName from emp where empname like 'xxx%'\nit not using index scan , it use seq scan so it may slow in find name\n\ncase :2\nwhen i initdb mydb (use default) and create database with createdb -E win874 \ndbname\nand create table emp with index in empname field\n\n\ni can not select sorting name correct in thai alphabet\nbut in\nselect empName from emp where empname like 'xxx%'\nit using index scan , very fast in find name\n\nproblem:\nhow can i configure database that can correct in sorting name and using \nindex scan in like 'xxxx%' search\n\n\nusing FreeBSD 5.4\npostgreql 8.0.\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE! \nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n", "msg_date": "Tue, 16 Aug 2005 07:01:00 +0000", "msg_from": "\"wisan watcharinporn\" <[email protected]>", "msg_from_op": true, "msg_subject": "database encoding with index search problem" }, { "msg_contents": "wisan watcharinporn wrote:\n> problem:\n> how can i configure database that can correct in sorting name and using \n> index scan in like 'xxxx%' search\n\nI think you'll want to read the following then have a quick search of \nthe mailing list archives for \"opclass\" for some examples.\n http://www.postgresql.org/docs/8.0/static/sql-createindex.html\n http://www.postgresql.org/docs/8.0/static/indexes-opclass.html\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Aug 2005 09:23:58 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database encoding with index search problem" } ]
[ { "msg_contents": "Hello,\n\nI would like to test the performance of my Java/PostgreSQL applications\nespecially when making full text searches.\nFor this I am looking for a database with 50 to 300 MB having text fields.\ne.g. A table with books with fields holding a comment, table of content\nor example chapters\nor what ever else.\n\nDoes anybody have an idea where I can find a database like this or does\neven have something like this?\n\n-- \nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB \n\nGet support, education and consulting for these technologies - uncomplicated and cheap.\n\n", "msg_date": "Tue, 16 Aug 2005 09:29:32 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for a large database for testing" }, { "msg_contents": "Sebastian Hennebrueder schrieb:\n> Hello,\n> \n> I would like to test the performance of my Java/PostgreSQL applications\n> especially when making full text searches.\n> For this I am looking for a database with 50 to 300 MB having text fields.\n> e.g. A table with books with fields holding a comment, table of content\n> or example chapters\n> or what ever else.\n> \n> Does anybody have an idea where I can find a database like this or does\n> even have something like this?\n> \nYou can download the wikipedia content. Just browse the wikimedia site.\nIts some work to change the data to be able to import into postgres,\nbut at least you have a lot real world data - in many languages.\n\n\n", "msg_date": "Tue, 16 Aug 2005 10:07:48 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a large database for testing" }, { "msg_contents": "Tino Wildenhain schrieb:\n\n> Sebastian Hennebrueder schrieb:\n>\n>> Hello,\n>>\n>> I would like to test the performance of my Java/PostgreSQL applications\n>> especially when making full text searches.\n>> For this I am looking for a database with 50 to 300 MB having text\n>> fields.\n>> e.g. A table with books with fields holding a comment, table of content\n>> or example chapters\n>> or what ever else.\n>>\n>> Does anybody have an idea where I can find a database like this or does\n>> even have something like this?\n>>\n> You can download the wikipedia content. Just browse the wikimedia site.\n> Its some work to change the data to be able to import into postgres,\n> but at least you have a lot real world data - in many languages.\n\nI have just found it. Here there is a link\nhttp://download.wikimedia.org/\nThey have content in multiple languages and dumps up to 20 GB.\n\n-- \nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB\n\nGet support, education and consulting for these technologies -\nuncomplicated and cheap.\n", "msg_date": "Tue, 16 Aug 2005 10:23:56 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for a large database for testing" }, { "msg_contents": "On Tue, Aug 16, 2005 at 09:29:32AM +0200, Sebastian Hennebrueder wrote:\n> I would like to test the performance of my Java/PostgreSQL applications\n> especially when making full text searches.\n> For this I am looking for a database with 50 to 300 MB having text fields.\n> e.g. 
A table with books with fields holding a comment, table of content\n> or example chapters\n> or what ever else.\n\nYou could try the OMIM database, which is currently 100M\nIt contains both journal references and large sections of\n'plain' text. It also contains a large amount of technical\nterms which will really test any kind of soundex matching\nif you are using that.\n\nhttp://www.ncbi.nlm.nih.gov/Omim/omimfaq.html#download\n\nUnfortunately it only comes as a flat text file, but is\nvery easy to parse.\n\nAnd if you start reading it, you'll probably learn quite\na lot of things you really didn't want to know!! :-D\n\n -Mark\n", "msg_date": "Tue, 16 Aug 2005 09:39:10 +0100", "msg_from": "Mark Rae <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a large database for testing" }, { "msg_contents": "Sebastian,\n\nyou can try document generator. I used \nhttp://www.cs.rmit.edu.au/~jz/resources/finnegan.zip\nyuo can play with freq. of words and document length distribution.\nAlso, I have SentenceGenerator.java which could be used for\ngeneration of synthetic texts.\n\n \tOleg\nOn Tue, 16 Aug 2005, Sebastian Hennebrueder wrote:\n\n> Hello,\n>\n> I would like to test the performance of my Java/PostgreSQL applications\n> especially when making full text searches.\n> For this I am looking for a database with 50 to 300 MB having text fields.\n> e.g. A table with books with fields holding a comment, table of content\n> or example chapters\n> or what ever else.\n>\n> Does anybody have an idea where I can find a database like this or does\n> even have something like this?\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 16 Aug 2005 13:38:41 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a large database for testing" }, { "msg_contents": "Sebastian Hennebrueder schrieb:\n\n>Tino Wildenhain schrieb:\n>\n>\n> \n>\n>>You can download the wikipedia content. Just browse the wikimedia site.\n>>Its some work to change the data to be able to import into postgres,\n>>but at least you have a lot real world data - in many languages.\n>> \n>>\n>\n>I have just found it. Here there is a link\n>http://download.wikimedia.org/\n>They have content in multiple languages and dumps up to 20 GB.\n>\n> \n>\nJust if anybody wants to import the wikipedia data. I had considerable\nproblems to get the proper encoding working. I downloaded the german\ncontent from wikipedia, which is a dump of a unicode encoded database of\nmysql (utf8)\n\nI used MySql 4.1 on Windows 2000 to read the dump and then copied the\ndata with a small application to postgreSQL\nIn\nmysql.ini you should configure the setting\nmax_allowed_packet = 10M\nI set it to 10, wich worked out. Else you can not import the dump into\nmysql. 
The error message was something like lost connection ....\nThe default encoding of mysql was latin1 which worked.\n\nThen I imported the dump\nmysql -uYourUserName -pPassword --default-character-set=utf8 database <\ndownloadedAndUnzippedFile\nThe default-character-set is very important\n\nCreate table in postgres (not with all the columns)\nCREATE TABLE content\n(\n cur_id int4 NOT NULL DEFAULT nextval('public.cur_cur_id_seq'::text),\n cur_namespace int2 NOT NULL DEFAULT (0)::smallint,\n cur_title varchar(255) NOT NULL DEFAULT ''::character varying,\n cur_text text NOT NULL,\n cur_comment text,\n cur_user int4 NOT NULL DEFAULT 0,\n cur_user_text varchar(255) NOT NULL DEFAULT ''::character varying,\n cur_timestamp varchar(14) NOT NULL DEFAULT ''::character varying\n) ;\n\nAfter this I copied the data from mySql to postgres with a small Java\napplication. The code is not beautiful.\n\n private void copyEntries() throws Exception {\n Class.forName(\"org.postgresql.Driver\");\n Class.forName(\"com.mysql.jdbc.Driver\");\n Connection conMySQL = DriverManager.getConnection(\n \"jdbc:mysql://localhost/wikidb\", \"root\", \"mysql\");\n Connection conPostgreSQL = DriverManager.getConnection(\n \"jdbc:postgresql://localhost/wiki\", \"postgres\", \"p\");\n Statement selectStatement = conMySQL.createStatement();\n StringBuffer sqlQuery = new StringBuffer();\n sqlQuery.append(\"insert into content (\");\n sqlQuery\n .append(\"cur_id, cur_namespace, cur_title, cur_text,\ncur_comment, cur_user, \");\n sqlQuery.append(\"cur_user_text , cur_timestamp) \");\n sqlQuery.append(\"values (?,?,?,?,?,?,?,?)\");\n\n PreparedStatement insertStatement = conPostgreSQL\n .prepareStatement(sqlQuery.toString());\n\n // get total rows\n java.sql.ResultSet resultSet = selectStatement\n .executeQuery(\"select count(*) from cur\");\n resultSet.next();\n int iMax = resultSet.getInt(1);\n\n \n int i = 0;\n while (i < iMax) {\n resultSet = selectStatement\n .executeQuery(\"select * from cur limit \"+i +\", 2000\");\n while (resultSet.next()) {\n i++;\n if (i % 100 == 0)\n System.out.println(\"\" + i + \" von \" + iMax);\n insertStatement.setInt(1, resultSet.getInt(1));\n insertStatement.setInt(2, resultSet.getInt(2));\n insertStatement.setString(3, resultSet.getString(3));\n insertStatement.setString(4, resultSet.getString(4));\n// this blob field is utf-8 encoded\n byte comment[] = resultSet.getBytes(5);\n\n insertStatement.setString(5, new String(comment, \"UTF-8\"));\n insertStatement.setInt(6, resultSet.getInt(6));\n insertStatement.setString(7, resultSet.getString(7));\n insertStatement.setString(8, resultSet.getString(8));\n insertStatement.execute();\n }\n }\n }\n\n-- \nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB \n\nGet support, education and consulting for these technologies.\n\n", "msg_date": "Tue, 16 Aug 2005 23:58:56 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for a large database for testing" }, { "msg_contents": "On Tue, Aug 16, 2005 at 09:29:32AM +0200, Sebastian Hennebrueder wrote:\n> Hello,\n> \n> I would like to test the performance of my Java/PostgreSQL applications\n> especially when making full text searches.\n> For this I am looking for a database with 50 to 300 MB having text fields.\n> e.g. 
A table with books with fields holding a comment, table of content\n> or example chapters\n> or what ever else.\n> \n> Does anybody have an idea where I can find a database like this or does\n> even have something like this?\n\nMost benchmarks (such as dbt* and pgbench) have data generators you\ncould use.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Mon, 22 Aug 2005 19:43:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a large database for testing" } ]
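If no ready-made dump fits, a few hundred megabytes of throwaway text can also be generated inside the database itself on 8.0 or later, where generate_series() is available. This gives realistic volume but not a realistic vocabulary, so it complements rather than replaces the document generators mentioned above; the books table and row counts are invented:

    CREATE TABLE books (
        id      serial PRIMARY KEY,
        title   varchar(255),
        comment text
    );

    -- ~100,000 rows with roughly 1.6 KB of pseudo-random text each
    INSERT INTO books (title, comment)
    SELECT 'book ' || i::text,
           repeat(md5(i::text) || ' ', 50)
    FROM generate_series(1, 100000) AS g(i);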
[ { "msg_contents": "Hello,\n\none of our services is click counting for on line advertising. We do \nthis by importing Apache log files every five minutes. This results in a \nlot of insert and delete statements. At the same time our customers \nshall be able to do on line reporting.\n\nWe have a box with\nLinux Fedora Core 3, Postgres 7.4.2\nIntel(R) Pentium(R) 4 CPU 2.40GHz\n2 scsi 76GB disks (15.000RPM, 2ms)\n\nI did put pg_xlog on another file system on other discs.\n\nStill when several users are on line the reporting gets very slow. \nQueries can take more then 2 min.\n\nI need some ideas how to improve performance in some orders of \nmagnitude. I already thought of a box with the whole database on a ram \ndisc. So really any idea is welcome.\n\nUlrich\n\n\n\n-- \nUlrich Wisser / System Developer\n\nRELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden\nDirect (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769\n________________________________________________________________\nhttp://www.relevanttraffic.com\n", "msg_date": "Tue, 16 Aug 2005 17:39:26 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": true, "msg_subject": "Need for speed" }, { "msg_contents": "Ulrich Wisser wrote:\n> Hello,\n> \n> one of our services is click counting for on line advertising. We do \n> this by importing Apache log files every five minutes. This results in a \n> lot of insert and delete statements. At the same time our customers \n> shall be able to do on line reporting.\n\n> I need some ideas how to improve performance in some orders of \n> magnitude. I already thought of a box with the whole database on a ram \n> disc. So really any idea is welcome.\n\nSo what's the problem - poor query plans? CPU saturated? I/O saturated? \nToo much context-switching?\n\nWhat makes it worse - adding another reporting user, or importing \nanother logfile?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Aug 2005 17:03:55 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Ulrich Wisser wrote:\n> Hello,\n>\n> one of our services is click counting for on line advertising. We do\n> this by importing Apache log files every five minutes. This results in a\n> lot of insert and delete statements. At the same time our customers\n> shall be able to do on line reporting.\n\nWhat are you deleting? I can see having a lot of updates and inserts,\nbut I'm trying to figure out what the deletes would be.\n\nIs it just that you completely refill the table based on the apache log,\nrather than doing only appending?\nOr are you deleting old rows?\n\n>\n> We have a box with\n> Linux Fedora Core 3, Postgres 7.4.2\n> Intel(R) Pentium(R) 4 CPU 2.40GHz\n> 2 scsi 76GB disks (15.000RPM, 2ms)\n>\n> I did put pg_xlog on another file system on other discs.\n>\n> Still when several users are on line the reporting gets very slow.\n> Queries can take more then 2 min.\n\nIf it only gets slow when you have multiple clients it sounds like your\nselect speed is the issue, more than conflicting with your insert/deletes.\n\n>\n> I need some ideas how to improve performance in some orders of\n> magnitude. I already thought of a box with the whole database on a ram\n> disc. So really any idea is welcome.\n\nHow much ram do you have in the system? 
It sounds like you only have 1\nCPU, so there is a lot you can do to make the box scale.\n\nA dual Opteron (possibly a dual motherboard with dual core (but only\nfill one for now)), with 16GB of ram, and an 8-drive RAID10 system would\nperform quite a bit faster.\n\nHow big is your database on disk? Obviously it isn't very large if you\nare thinking to hold everything in RAM (and only have 76GB of disk\nstorage to put it in anyway).\n\nIf your machine only has 512M, an easy solution would be to put in a\nbunch more memory.\n\nIn general, your hardware is pretty low in overall specs. So if you are\nwilling to throw money at the problem, there is a lot you can do.\n\nAlternatively, turn on statement logging, and then post the queries that\nare slow. This mailing list is pretty good at fixing poor queries.\n\nOne thing you are probably hitting is a lot of sequential scans on the\nmain table.\n\nIf you are doing mostly inserting, make sure you are in a transaction,\nand think about doing a COPY.\n\nThere is a lot more that can be said, we just need to have more\ninformation about what you want.\n\nJohn\n=:->\n\n>\n> Ulrich\n>\n>\n>", "msg_date": "Tue, 16 Aug 2005 11:12:24 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "On Tue, 2005-08-16 at 17:39 +0200, Ulrich Wisser wrote:\n> Hello,\n> \n> one of our services is click counting for on line advertising. We do \n> this by importing Apache log files every five minutes. This results in a \n> lot of insert and delete statements. At the same time our customers \n> shall be able to do on line reporting.\n> \n> We have a box with\n> Linux Fedora Core 3, Postgres 7.4.2\n> Intel(R) Pentium(R) 4 CPU 2.40GHz\n\nThis is not a good CPU for this workload. Try an Opteron or Xeon. Also\nof major importance is the amount of memory. If possible, you would\nlike to have memory larger than the size of your database.\n\n> 2 scsi 76GB disks (15.000RPM, 2ms)\n\nIf you decide your application is I/O bound, here's an obvious place for\nimprovement. More disks == faster.\n\n> I did put pg_xlog on another file system on other discs.\n\nDid that have a beneficial effect?\n\n> Still when several users are on line the reporting gets very slow. \n> Queries can take more then 2 min.\n\nIs this all the time or only during the insert?\n\n> I need some ideas how to improve performance in some orders of \n> magnitude. I already thought of a box with the whole database on a ram \n> disc. So really any idea is welcome.\n\nYou don't need a RAM disk, just a lot of RAM. Your operating system\nwill cache disk contents in memory if possible. You have a very small\nconfiguration, so more CPU, more memory, and especially more disks will\nprobably all yield improvements.\n", "msg_date": "Tue, 16 Aug 2005 09:43:01 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Are you calculating aggregates, and if so, how are you doing it (I ask\nthe question from experience of a similar application where I found\nthat my aggregating PGPLSQL triggers were bogging the system down, and\nchanged them so scheduled jobs instead).\n\nAlex Turner\nNetEconomist\n\nOn 8/16/05, Ulrich Wisser <[email protected]> wrote:\n> Hello,\n> \n> one of our services is click counting for on line advertising. We do\n> this by importing Apache log files every five minutes. This results in a\n> lot of insert and delete statements. 
At the same time our customers\n> shall be able to do on line reporting.\n> \n> We have a box with\n> Linux Fedora Core 3, Postgres 7.4.2\n> Intel(R) Pentium(R) 4 CPU 2.40GHz\n> 2 scsi 76GB disks (15.000RPM, 2ms)\n> \n> I did put pg_xlog on another file system on other discs.\n> \n> Still when several users are on line the reporting gets very slow.\n> Queries can take more then 2 min.\n> \n> I need some ideas how to improve performance in some orders of\n> magnitude. I already thought of a box with the whole database on a ram\n> disc. So really any idea is welcome.\n> \n> Ulrich\n> \n> \n> \n> --\n> Ulrich Wisser / System Developer\n> \n> RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden\n> Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769\n> ________________________________________________________________\n> http://www.relevanttraffic.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Tue, 16 Aug 2005 13:59:53 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "On Tue, 16 Aug 2005, Ulrich Wisser wrote:\n\n> Still when several users are on line the reporting gets very slow. \n> Queries can take more then 2 min.\n\nCould you show an exampleof such a query and the output of EXPLAIN ANALYZE\non that query (preferably done when the database is slow).\n\nIt's hard to say what is wrong without more information.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Tue, 16 Aug 2005 20:47:50 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Hello,\n\nthanks for all your suggestions.\n\nI can see that the Linux system is 90% waiting for disc io. At that time \nall my queries are *very* slow. My scsi raid controller and disc are \nalready the fastest available. The query plan uses indexes and \"vacuum \nanalyze\" is run once a day.\n\nTo avoid aggregating to many rows, I already made some aggregation \ntables which will be updated after the import from the Apache logfiles.\nThat did help, but only to a certain level.\n\nI believe the biggest problem is disc io. Reports for very recent data \nare quite fast, these are used very often and therefor already in the \ncache. But reports can contain (and regulary do) very old data. In that \ncase the whole system slows down. 
To me this sounds like the recent data \nis flushed out of the cache and now all data for all queries has to be \nfetched from disc.\n\nMy machine has 2GB memory, please find postgresql.conf below.\n\nUlrich\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 20000 # min 16, at least max_connections*2, \nsort_mem = 4096 # min 64, size in KB\nvacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 50000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 3000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = false # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\nwal_buffers = 128 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 16 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n", "msg_date": "Wed, 17 Aug 2005 11:15:39 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Ulrich Wisser <[email protected]> writes:\n> My machine has 2GB memory, please find postgresql.conf below.\n\n> max_fsm_pages = 50000 # min max_fsm_relations*16, 6 bytes each\n\nFWIW, that index I've been groveling through in connection with your\nother problem contains an astonishingly large amount of dead space ---\nalmost 50%. I suspect that you need a much larger max_fsm_pages\nsetting, and possibly more-frequent vacuuming, in order to keep a lid\non the amount of wasted space.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Aug 2005 10:52:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed " }, { "msg_contents": "On Wed, 2005-08-17 at 11:15 +0200, Ulrich Wisser wrote:\n> Hello,\n> \n> thanks for all your suggestions.\n> \n> I can see that the Linux system is 90% waiting for disc io. At that time \n> all my queries are *very* slow. My scsi raid controller and disc are \n> already the fastest available.\n\nWhat RAID controller? Initially you said you have only 2 disks, and\nsince you have your xlog on a separate spindle, I assume you have 1 disk\nfor the xlog and 1 for the data. Even so, if you have a RAID, I'm going\nto further assume you are using RAID 1, since no sane person would use\nRAID 0. In those cases you are getting the performance of a single\ndisk, which is never going to be very impressive. You need a RAID.\n\nPlease be more precise when describing your system to this list.\n\n-jwb\n\n", "msg_date": "Wed, 17 Aug 2005 09:49:37 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Ulrich,\n\n> I believe the biggest problem is disc io. Reports for very recent data\n> are quite fast, these are used very often and therefor already in the\n> cache. But reports can contain (and regulary do) very old data. 
In that\n> case the whole system slows down. To me this sounds like the recent data\n> is flushed out of the cache and now all data for all queries has to be\n> fetched from disc.\n\nHow large is the database on disk?\n\n> My machine has 2GB memory, please find postgresql.conf below.\n\nhmmmm ...\neffective_cache_size?\nrandom_page_cost?\ncpu_tuple_cost?\netc.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Aug 2005 10:28:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "At 05:15 AM 8/17/2005, Ulrich Wisser wrote:\n>Hello,\n>\n>thanks for all your suggestions.\n>\n>I can see that the Linux system is 90% waiting for disc io.\n\nA clear indication that you need to improve your HD IO subsystem.\n\n>At that time all my queries are *very* slow.\n\nTo be more precise, your server performance at that point is \nessentially equal to your HD IO subsystem performance.\n\n\n> My scsi raid controller and disc are already the fastest available.\n\nOh, REALLY? This is the description of the system you gave us:\n\n\"We have a box with\nLinux Fedora Core 3, Postgres 7.4.2\nIntel(R) Pentium(R) 4 CPU 2.40GHz\n2 scsi 76GB disks (15.000RPM, 2ms)\"\n\nThe is far, Far, FAR from the \"the fastest available\" in terms of SW, \nOS, CPU host, _or_ HD subsystem.\n\nThe \"fastest available\" means\n1= you should be running 8.0.3\n2= you should be running the latest stable 2.6 based kernel\n3= you should be running an Opteron based server\n4= Fibre Channel HDs are higher performance than SCSI ones.\n5= (and this is the big one) YOU NEED MORE SPINDLES AND A HIGHER END \nRAID CONTROLLER.\n\nThe absolute \"top of the line\" for RAID controllers is something \nbased on Fibre Channel from Xyratex (who make the RAID engines for \nEMC and NetApps), Engino (the enterprise division of LSI Logic who \nsell mostly to IBM. Apple has a server based on an Engino card), \ndot-hill (who bought Chaparral among others). I suspect you can't \nafford them even if they would do business with you. The ante for a \nFC-based RAID subsystem in this class is in the ~$32K to ~$128K \nrange, even if you buy direct from the actual RAID HW manufacturer \nrather than an OEM like\n\nIn the retail commodity market, the current best RAID controllers are \nprobably the 16 and 24 port versions of the Areca cards ( \nwww.areca.us ). They come darn close to saturating the the Real \nWorld Peak Bandwidth of a 64b 133MHz PCI-X bus.\n\nI did put pg_xlog on another file system on other discs.\n\n> The query plan uses indexes and \"vacuum analyze\" is run once a day.\n\nThat\n\n\n>To avoid aggregating to many rows, I already made some aggregation \n>tables which will be updated after the import from the Apache \n>logfiles. That did help, but only to a certain level.\n>\n>I believe the biggest problem is disc io. Reports for very recent \n>data are quite fast, these are used very often and therefor already \n>in the cache. But reports can contain (and regulary do) very old \n>data. In that case the whole system slows down. 
To me this sounds \n>like the recent data is flushed out of the cache and now all data \n>for all queries has to be fetched from disc.\n>\n>My machine has 2GB memory,\n\n\n\n", "msg_date": "Wed, 17 Aug 2005 14:00:14 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "RRS (http://rrs.decibel.org) might be of use in this case.\n\nOn Tue, Aug 16, 2005 at 01:59:53PM -0400, Alex Turner wrote:\n> Are you calculating aggregates, and if so, how are you doing it (I ask\n> the question from experience of a similar application where I found\n> that my aggregating PGPLSQL triggers were bogging the system down, and\n> changed them so scheduled jobs instead).\n> \n> Alex Turner\n> NetEconomist\n> \n> On 8/16/05, Ulrich Wisser <[email protected]> wrote:\n> > Hello,\n> > \n> > one of our services is click counting for on line advertising. We do\n> > this by importing Apache log files every five minutes. This results in a\n> > lot of insert and delete statements. At the same time our customers\n> > shall be able to do on line reporting.\n> > \n> > We have a box with\n> > Linux Fedora Core 3, Postgres 7.4.2\n> > Intel(R) Pentium(R) 4 CPU 2.40GHz\n> > 2 scsi 76GB disks (15.000RPM, 2ms)\n> > \n> > I did put pg_xlog on another file system on other discs.\n> > \n> > Still when several users are on line the reporting gets very slow.\n> > Queries can take more then 2 min.\n> > \n> > I need some ideas how to improve performance in some orders of\n> > magnitude. I already thought of a box with the whole database on a ram\n> > disc. So really any idea is welcome.\n> > \n> > Ulrich\n> > \n> > \n> > \n> > --\n> > Ulrich Wisser / System Developer\n> > \n> > RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden\n> > Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769\n> > ________________________________________________________________\n> > http://www.relevanttraffic.com\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Mon, 22 Aug 2005 19:48:56 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" } ]
[ { "msg_contents": "Hi,\n\nHow much Ram do you have ?\nCould you give us your postgresql.conf ? (shared buffer parameter)\n\nIf you do lots of deletes/inserts operations you HAVE to vacuum analyze \nyour table (especially if you have indexes). \n\nI'm not sure if vacuuming locks your table with pg 7.4.2 (it doesn't with \n8.0), you might consider upgrading your pg version. \nAnyway, your \"SELECT\" performance while vacuuming is going to be altered. \n\n\nI don't know your application but I would certainly try to split your \ntable. it would result in one table for inserts/vaccum and one for \nselects. You would have to switch from one to the other every five \nminutes.\n\nBenjamin.\n\n\n\n\n\nUlrich Wisser <[email protected]>\nEnvoyé par : [email protected]\n16/08/2005 17:39\n\n \n Pour : [email protected]\n cc : \n Objet : [PERFORM] Need for speed\n\n\nHello,\n\none of our services is click counting for on line advertising. We do \nthis by importing Apache log files every five minutes. This results in a \nlot of insert and delete statements. At the same time our customers \nshall be able to do on line reporting.\n\nWe have a box with\nLinux Fedora Core 3, Postgres 7.4.2\nIntel(R) Pentium(R) 4 CPU 2.40GHz\n2 scsi 76GB disks (15.000RPM, 2ms)\n\nI did put pg_xlog on another file system on other discs.\n\nStill when several users are on line the reporting gets very slow. \nQueries can take more then 2 min.\n\nI need some ideas how to improve performance in some orders of \nmagnitude. I already thought of a box with the whole database on a ram \ndisc. So really any idea is welcome.\n\nUlrich\n\n\n\n-- \nUlrich Wisser / System Developer\n\nRELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden\nDirect (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769\n________________________________________________________________\nhttp://www.relevanttraffic.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "Tue, 16 Aug 2005 18:02:53 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re. : Need for speed" } ]
[ { "msg_contents": "> Ulrich Wisser wrote:\r\n> >\r\n> > one of our services is click counting for on line advertising. We do\r\n> > this by importing Apache log files every five minutes. This results in a\r\n> > lot of insert and delete statements. \r\n...\r\n> If you are doing mostly inserting, make sure you are in a transaction,\r\n\r\nWell, yes, but you may need to make sure that a single transaction doesn't have too many inserts in it.\r\nI was having a performance problem when doing transactions with a huge number of inserts\r\n(tens of thousands), and I solved the problem by putting a simple counter in the loop (in the Java import code, \r\nthat is) and doing a commit every 100 or so inserts.\r\n\r\n-Roger\r\n\r\n> John\r\n>\r\n> > Ulrich\r\n", "msg_date": "Tue, 16 Aug 2005 10:01:14 -0700", "msg_from": "\"Roger Hand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed" }, { "msg_contents": ">> Ulrich Wisser wrote:\n>> >\n>> > one of our services is click counting for on line advertising. We do\n>> > this by importing Apache log files every five minutes. This results in a\n>> > lot of insert and delete statements. \n> ...\n>> If you are doing mostly inserting, make sure you are in a transaction,\n>\n> Well, yes, but you may need to make sure that a single transaction\n> doesn't have too many inserts in it. I was having a performance\n> problem when doing transactions with a huge number of inserts (tens\n> of thousands), and I solved the problem by putting a simple counter\n> in the loop (in the Java import code, that is) and doing a commit\n> every 100 or so inserts.\n\nAre you sure that was an issue with PostgreSQL?\n\nI have certainly observed that issue with Oracle, but NOT with\nPostgreSQL.\n\nI have commonly done data loads where they loaded 50K rows at a time,\nthe reason for COMMITting at that point being \"programming paranoia\"\nat the possibility that some data might fail to load and need to be\nretried, and I'd rather have less fail...\n\nIt would seem more likely that the issue would be on the Java side; it\nmight well be that the data being loaded might bloat JVM memory usage,\nand that the actions taken at COMMIT time might keep the size of the\nJava-side memory footprint down.\n-- \n(reverse (concatenate 'string \"moc.liamg\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/\nIf we were meant to fly, we wouldn't keep losing our luggage.\n", "msg_date": "Fri, 19 Aug 2005 08:00:26 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" } ]
[ { "msg_contents": "Summary\n=======\nWe are writing to the db pretty much 24 hours a day.\nRecently the amount of data we write has increased, and the query speed, formerly okay, has taken a dive.\nThe query is using the indexes as expected, so I don't _think_ I have a query tuning issue, just an io problem. \nThe first time a query is done it takes about 60 seconds. The second time it runs in about 6 seconds.\nWhat I know I need advice on is io settings and various buffer settings. \nI may also need advice on other things, but just don't know it yet!\n\nBelow is ...\n- an explain analyze\n- details of the db setup and hardware\n- some vmstat and iostat output showing the disks are very busy\n- the SHOW ALL output for the db config.\n\nDetails\n=======\nPostgres 8.0.3\n\nBelow is a sample query. (This is actually implemented as a prepared statement. Here I fill in the '?'s with actual values.)\n\nelectric=# EXPLAIN ANALYZE\nelectric-# SELECT datavalue, logfielddatatype, timestamp FROM logdata_recent \nelectric-# WHERE (logfielddatatype = 70 OR logfielddatatype = 71 OR logfielddatatype = 69) \nelectric-# AND graphtargetlog = 1327 \nelectric-# AND timestamp >= 1123052400 AND timestamp <= 1123138800 \nelectric-# ORDER BY timestamp;\n QUERY PLAN \n--------------------------------------------------\n Sort (cost=82.48..82.50 rows=6 width=14) (actual time=60208.968..60211.232 rows=2625 loops=1)\n Sort Key: public.logdata_recent.\"timestamp\"\n -> Result (cost=0.00..82.41 rows=6 width=14) (actual time=52.483..60200.868 rows=2625 loops=1)\n -> Append (cost=0.00..82.41 rows=6 width=14) (actual time=52.476..60189.929 rows=2625 loops=1)\n -> Seq Scan on logdata_recent (cost=0.00..46.25 rows=1 width=14) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: (((logfielddatatype = 70) OR (logfielddatatype = 71) OR (logfielddatatype = 69)) AND (graphtargetlog = 1327) AND (\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800))\n -> Index Scan using logdata_recent_1123085306_ix_t_fld_gtl, logdata_recent_1123085306_ix_t_fld_gtl, logdata_recent_1123085306_ix_t_fld_gtl on logdata_recent_stale logdata_recent (cost=0.00..18.08 rows=3 width=14) (actual time=52.465..60181.624 rows=2625 loops=1)\n Index Cond: (((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 70) AND (graphtargetlog = 1327)) OR ((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 71) AND (graphtargetlog = 1327)) OR ((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 69) AND (graphtargetlog = 1327)))\n Filter: (((logfielddatatype = 70) OR (logfielddatatype = 71) OR (logfielddatatype = 69)) AND (graphtargetlog = 1327) AND (\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800))\n -> Index Scan using logdata_recent_1123139634_ix_t_fld_gtl, logdata_recent_1123139634_ix_t_fld_gtl, logdata_recent_1123139634_ix_t_fld_gtl on logdata_recent_active logdata_recent (cost=0.00..18.08 rows=2 width=14) (actual time=0.178..0.178 rows=0 loops=1)\n Index Cond: (((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 70) AND (graphtargetlog = 1327)) OR ((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 71) AND (graphtargetlog = 1327)) OR ((\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800) AND (logfielddatatype = 69) AND (graphtargetlog = 1327)))\n Filter: (((logfielddatatype = 70) OR (logfielddatatype = 71) OR (logfielddatatype = 69)) AND (graphtargetlog = 
1327) AND (\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800))\n Total runtime: 60214.545 ms\n(13 rows)\n\n60 seconds is much longer than it used to be. I would guess it used to be under 10 seconds. The second time the above query is run we see the magic of caching as the time goes down to 6 seconds.\n\nlogdata_recent_active and logdata_recent_stale are inherited tables of logdata_recent, which never has any data. (This is pseudo-partitioning in action!)\nSo the very quick seq_scan on the empty logdata_recent parent table is okay with me.\n\nThe index is built on timestamp, logfielddatatype, graphtargetlog. I am curious as to why the same index shows up 3 times in the \"using\" clause, but can live without knowing the details as long as it doesn't indicate that something's wrong.\n\nThe logdata_recent_stale table has 5 millions rows. The size of the table itself, on disk, is 324MB. The size of the index is 210MB.\n\nThe disks are ext3 with journalling type of ordered, but this was later changed to writeback with no apparent change in speed.\n\nThey're on a Dell poweredge 6650 with LSI raid card, setup as follows:\n4 disks raid 10 for indexes (145GB) - sdc1\n6 disks raid 10 for data (220GB) - sdd1\n2 mirrored disks for logs - sdb1\n\nstripe size is 32k\ncache policy: cached io (am told the controller has bbu)\nwrite policy: write-back\nread policy: readahead\n\nThe partition names do what they say ...\n[root@rage-db2 /dbdata01]$ df\nFilesystem 1K-blocks Used Available Use% Mounted on\n/dev/sdb1 70430588 729324 66123592 2% /dblog01\n/dev/sdc1 140861236 19472588 114233300 15% /dbindex01\n/dev/sdd1 211299960 157159988 43406548 79% /dbdata01\n...\n\nUsing iostat (the version from http://linux.inet.hr/) I saw at one point that the data disk was 100% busy.\nI believe this was when running the above query, or similar, but in any case the system is always busy with both reads and (usually) writes.\n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 61 0.1 15.5 0.4 305.7 19.6 0.1 5.8 4.9 8 \nsdc1 21 22 20.6 17.7 164.6 158.6 8.4 1.1 28.3 6.2 24 \nsdd1 1742 11 1904.7 6.6 14585.6 71.5 7.7 20.6 10.8 0.5 100 \n\nAnother time, when I was running the query above, the index partition went to 90+% busy for 40 seconds:\n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 0 0.0 0.2 0.0 0.8 4.0 0.0 20.0 15.0 0 \nsdc1 366 53 687.0 66.1 4213.1 483.0 6.2 11.8 15.7 1.3 96 \nsdd1 8 17 16.6 13.9 99.5 125.4 7.4 0.7 23.0 1.9 6 \n\nOn another occasion (when the query took 24 seconds) I ran vmstat and iostat every 5 seconds\nfrom just before the query until just after. 
About the first two outputs are before the query.\nIn this case the index disk is maxed.\n\n[root@rage-db2 ~]$ vmstat 5 16\nprocs memory swap io system cpu\n r b swpd free buff cache si so bi bo in cs us sy wa id\n 0 0 92 1233500 225692 9578564 0 0 0 0 1 1 2 2 1 1\n 0 1 92 1218460 225748 9595136 0 0 3322 18 655 898 0 0 20 79\n 0 1 92 1202124 225780 9616140 0 0 4204 58 920 1291 0 1 24 76\n 0 1 92 1172876 225820 9645348 0 0 5847 120 1053 1482 0 1 23 76\n 1 0 92 1151712 225836 9666504 0 0 4234 7 847 1239 2 1 18 78\n 1 0 92 1140860 225844 9677436 0 0 2153 500 575 2027 13 2 11 73\n 1 0 92 1140852 225848 9677636 0 0 0 506 213 442 10 1 0 89\n\n[root@rage-db2 ~]$ /usr/local/bin/iostat -Px 5 16\n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 1 243 0.1 105.5 2.7 37.6 0.4 0.0 0.2 0.1 1 \nsdc1 6 111 3.7 75.9 38.3 769.3 10.1 0.0 0.3 0.2 1 \nsdd1 255 107 85.2 37.6 4.9 581.0 4.8 0.0 0.1 0.0 1 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 \nsdc1 273 0 414.0 0.4 2747.7 2.4 6.6 1.6 3.9 1.7 69 \nsdd1 0 1 1.4 0.4 7.2 5.6 7.1 0.0 10.0 6.7 1 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 0 0.0 0.4 0.0 1.6 4.0 0.0 10.0 5.0 0 \nsdc1 225 4 777.1 4.6 4011.0 35.1 5.2 2.5 3.2 1.3 99 \nsdd1 0 2 0.0 2.6 0.0 16.8 6.5 0.0 8.5 0.8 0 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 \nsdc1 508 7 917.8 7.4 5703.0 58.3 6.2 2.2 2.4 1.1 98 \nsdd1 0 4 0.0 6.8 0.0 44.7 6.6 0.1 15.6 0.6 0 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 \nsdc1 361 0 737.5 0.4 4391.7 2.4 6.0 1.8 2.4 1.0 76 \nsdd1 0 0 0.0 0.4 0.0 2.4 6.0 0.0 0.0 0.0 0 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 87 0.0 17.8 0.0 418.3 23.6 0.0 1.3 1.2 2 \nsdc1 216 2 489.5 0.4 2821.7 11.2 5.8 1.2 2.4 1.1 56 \nsdd1 2 4 7.2 0.6 37.5 18.4 7.2 0.0 6.2 3.3 3 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 89 0.0 22.4 0.0 446.3 20.0 0.0 1.1 0.8 2 \nsdc1 0 4 0.0 1.0 0.0 18.4 18.4 0.0 0.0 0.0 0 \nsdd1 0 6 0.0 1.0 0.0 27.1 27.2 0.0 0.0 0.0 0 \n\ndevice mgr/s mgw/s r/s w/s kr/s kw/s size queue wait svc_t %b \nsdb1 0 89 0.0 22.5 0.0 446.2 19.8 0.0 0.4 0.3 1 \nsdc1 0 2 0.0 0.4 0.0 9.6 24.0 0.0 0.0 0.0 0 \nsdd1 0 4 0.0 0.6 0.0 20.0 33.3 0.0 0.0 0.0 0 \n\n\nFinally, here's a show all:\n\nadd_missing_from | on\narchive_command | unset\naustralian_timezones | off\nauthentication_timeout | 60\nbgwriter_delay | 200\nbgwriter_maxpages | 100\nbgwriter_percent | 1\nblock_size | 8192\ncheck_function_bodies | on\ncheckpoint_segments | 20\ncheckpoint_timeout | 300\ncheckpoint_warning | 30\nclient_encoding | UNICODE\nclient_min_messages | notice\ncommit_delay | 350\ncommit_siblings | 5\nconfig_file | /dbdata01/pgdata/postgresql.conf\ncpu_index_tuple_cost | 0.001\ncpu_operator_cost | 0.0025\ncpu_tuple_cost | 0.01\ncustom_variable_classes | unset\ndata_directory | /dbdata01/pgdata\nDateStyle | ISO, MDY\ndb_user_namespace | off\ndeadlock_timeout | 1000\ndebug_pretty_print | off\ndebug_print_parse | off\ndebug_print_plan | off\ndebug_print_rewritten | off\ndebug_shared_buffers | 0\ndefault_statistics_target | 50\ndefault_tablespace | unset\ndefault_transaction_isolation | read committed\ndefault_transaction_read_only | off\ndefault_with_oids | off\ndynamic_library_path | $libdir\neffective_cache_size | 48000\nenable_hashagg | on\nenable_hashjoin | on\nenable_indexscan | on\nenable_mergejoin | 
on\nenable_nestloop | on\nenable_seqscan | on\nenable_sort | on\nenable_tidscan | on\nexplain_pretty_print | on\nexternal_pid_file | unset\nextra_float_digits | 0\nfrom_collapse_limit | 8\nfsync | on\ngeqo | on\ngeqo_effort | 5\ngeqo_generations | 0\ngeqo_pool_size | 0\ngeqo_selection_bias | 2\ngeqo_threshold | 12\nhba_file | /dbdata01/pgdata/pg_hba.conf\nident_file | /dbdata01/pgdata/pg_ident.conf\ninteger_datetimes | off\njoin_collapse_limit | 8\nkrb_server_keyfile | unset\nlc_collate | en_US.UTF-8\nlc_ctype | en_US.UTF-8\nlc_messages | en_US.UTF-8\nlc_monetary | en_US.UTF-8\nlc_numeric | en_US.UTF-8\nlc_time | en_US.UTF-8\nlisten_addresses | *\nlog_connections | off\nlog_destination | stderr\nlog_directory | /dblog01\nlog_disconnections | off\nlog_duration | off\nlog_error_verbosity | default\nlog_executor_stats | off\nlog_filename | postgresql-%Y-%m-%d_%H%M%S.log\nlog_hostname | off\nlog_line_prefix | unset\nlog_min_duration_statement | -1\nlog_min_error_statement | panic\nlog_min_messages | notice\nlog_parser_stats | off\nlog_planner_stats | off\nlog_rotation_age | 1440\nlog_rotation_size | 10240\nlog_statement | none\nlog_statement_stats | off\nlog_truncate_on_rotation | off\nmaintenance_work_mem | 262144\nmax_connections | 40\nmax_files_per_process | 1000\nmax_fsm_pages | 100000\nmax_fsm_relations | 1000\nmax_function_args | 32\nmax_identifier_length | 63\nmax_index_keys | 32\nmax_locks_per_transaction | 64\nmax_stack_depth | 2048\npassword_encryption | on\nport | 5432\npre_auth_delay | 0\npreload_libraries | unset\nrandom_page_cost | 4\nredirect_stderr | on\nregex_flavor | advanced\nrendezvous_name | unset\nsearch_path | $user,public\nserver_encoding | UNICODE\nserver_version | 8.0.3\nshared_buffers | 10240\nsilent_mode | off\nsql_inheritance | on\nssl | off\nstatement_timeout | 0\nstats_block_level | off\nstats_command_string | off\nstats_reset_on_server_start | on\nstats_row_level | on\nstats_start_collector | on\nsuperuser_reserved_connections | 2\nsyslog_facility | LOCAL0\nsyslog_ident | postgres\nTimeZone | PST8PDT\ntrace_notify | off\ntransaction_isolation | read committed\ntransaction_read_only | off\ntransform_null_equals | off\nunix_socket_directory | unset\nunix_socket_group | unset\nunix_socket_permissions | 511\nvacuum_cost_delay | 180\nvacuum_cost_limit | 200\nvacuum_cost_page_dirty | 20\nvacuum_cost_page_hit | 1\nvacuum_cost_page_miss | 10\nwal_buffers | 8\nwal_sync_method | fdatasync\nwork_mem | 98304\nzero_damaged_pages | off\n", "msg_date": "Tue, 16 Aug 2005 10:46:46 -0700", "msg_from": "\"Roger Hand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan looks OK, but slow I/O - settings advice?" }, { "msg_contents": "On Tue, 2005-08-16 at 10:46 -0700, Roger Hand wrote:\n> The disks are ext3 with journalling type of ordered, but this was later changed to writeback with no apparent change in speed.\n> \n> They're on a Dell poweredge 6650 with LSI raid card, setup as follows:\n> 4 disks raid 10 for indexes (145GB) - sdc1\n> 6 disks raid 10 for data (220GB) - sdd1\n> 2 mirrored disks for logs - sdb1\n> \n> stripe size is 32k\n> cache policy: cached io (am told the controller has bbu)\n> write policy: write-back\n> read policy: readahead\n\nI assume you are using Linux 2.6. Have you considered booting your\nmachine with elevator=deadline? You can also change this at runtime\nusing sysfs.\n\nThese read speeds are not too impressive. Perhaps this is a slow\ncontroller. 
Alternately you might need bigger CPUs.\n\nThere's a lot of possibilities, obviously :) I'd start with the\nelevator, since that's easily tested.\n\n-jwb\n\n", "msg_date": "Thu, 18 Aug 2005 23:55:35 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan looks OK, but slow I/O - settings advice?" }, { "msg_contents": "The query plan does *not* look okay.\n\n> electric=# EXPLAIN ANALYZE\n> electric-# SELECT datavalue, logfielddatatype, timestamp FROM logdata_recent\n> electric-# WHERE (logfielddatatype = 70 OR logfielddatatype = 71 OR logfielddatatype = 69)\n> electric-# AND graphtargetlog = 1327\n> electric-# AND timestamp >= 1123052400 AND timestamp <= 1123138800\n> electric-# ORDER BY timestamp;\n> QUERY PLAN\n> --------------------------------------------------\n> Sort (cost=82.48..82.50 rows=6 width=14) (actual time=60208.968..60211.232 rows=2625 loops=1)\n> Sort Key: public.logdata_recent.\"timestamp\"\n> -> Result (cost=0.00..82.41 rows=6 width=14) (actual time=52.483..60200.868 rows=2625 loops=1)\n> -> Append (cost=0.00..82.41 rows=6 width=14) (actual time=52.476..60189.929 rows=2625 loops=1)\n> -> Seq Scan on logdata_recent (cost=0.00..46.25 rows=1 width=14) (actual time=0.003..0.003 rows=0 loops=1)\n> Filter: (((logfielddatatype = 70) OR (logfielddatatype = 71) OR (logfielddatatype = 69)) AND (graphtargetlog = 1327) AND (\"timestamp\" >= 1123052400) AND (\"timestamp\" <= 1123138800))\n> -> Index Scan using logdata_recent_1123085306_ix_t_fld_gtl, logdata_recent_1123085306_ix_t_fld_gtl, logdata_recent_1123085306_ix_t_fld_gtl on logdata_recent_stale logdata_recent (cost=0.00..18.08 rows=3 width=14) (actual time=52.465..60181.624 rows=2625 loops=1)\n\nNotice here that expected rows is 3, but actual rows is a hell of a lot\nhigher. Try increasing stats collections for the columns on which\nlogdata_recent_1123085306_ix_t_fld_gtl is declared.\n\nAlso, the actual index scan is taking a long time. How recently have you\nvacuum full'd?\n\nThanks,\n\nGavin\n", "msg_date": "Fri, 19 Aug 2005 17:17:45 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan looks OK, but slow I/O - settings advice?" }, { "msg_contents": "\nOn Aug 19, 2005, at 12:55 AM, Jeffrey W. Baker wrote:\n\n> On Tue, 2005-08-16 at 10:46 -0700, Roger Hand wrote:\n> Have you considered booting your\n> machine with elevator=deadline?\n\nAlthough I'm not the OP for this problem, I thought I'd try it out. \nWOW.. this should be in a Pg tuning guide somewhere. I added this to \nmy server tonight just for kicks and saw a pronounced improvement in \nIO performance. Thank you very much for mentioning this on the list.\n\nI didn't have a long enough maintenance window to do solid \nbenchmarking, but I can say for certain that the change was \nnoticeable, especially in VACUUM operations.\n\nSpecs for the server:\n\nPG 8.0.1\nLinux 2.6.12-3 kernel\n4xOpteron 2.2\n12GB RAM\n16-drive RAID 10\nXFS mounted with noatime\npg_xlog on separate RAID controller\n\n-Dan\n\n", "msg_date": "Sat, 20 Aug 2005 00:52:08 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan looks OK, but slow I/O - settings advice?" }, { "msg_contents": "On Sat, Aug 20, 2005 at 12:52:08AM -0600, Dan Harris wrote:\n>On Aug 19, 2005, at 12:55 AM, Jeffrey W. 
Baker wrote:\n>> Have you considered booting your\n>>machine with elevator=deadline?\n>\n>Although I'm not the OP for this problem, I thought I'd try it out. \n>WOW.. this should be in a Pg tuning guide somewhere.\n[snip]\n>16-drive RAID 10\n\nYeah, the default scheduler tries to optimize disk access patterns for a\nsingle-spindle setup, and actually makes things worse if you have a\ndevice with multiple spindles.\n\nMike Stone\n", "msg_date": "Sat, 20 Aug 2005 08:15:04 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan looks OK, but slow I/O - settings advice?" } ]
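A minimal sketch of Gavin's statistics suggestion from the thread above, using the table and column names that appear in Roger's EXPLAIN output (the target of 200 is an arbitrary step up from the configured default_statistics_target of 50, not a recommendation):

ALTER TABLE logdata_recent_stale ALTER COLUMN graphtargetlog SET STATISTICS 200;
ALTER TABLE logdata_recent_stale ALTER COLUMN logfielddatatype SET STATISTICS 200;
ALTER TABLE logdata_recent_stale ALTER COLUMN "timestamp" SET STATISTICS 200;
ANALYZE logdata_recent_stale;

-- re-check how far the row estimates are from the ~2625 rows actually returned
EXPLAIN ANALYZE
SELECT datavalue, logfielddatatype, "timestamp"
  FROM logdata_recent
 WHERE logfielddatatype IN (69, 70, 71)
   AND graphtargetlog = 1327
   AND "timestamp" BETWEEN 1123052400 AND 1123138800
 ORDER BY "timestamp";

Better estimates may or may not change the plan here; the elevator=deadline change discussed in the replies is still the more likely cure for the raw index-scan I/O time.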
[ { "msg_contents": "Yes, that's true, though, I am a bit confused because the Clariion array\ndocument I am reading talks about how the write cache can eliminate the\nRAID5 Write Penalty for sequential and large IOs...resulting in better\nsequential write performance than RAID10.\n\nanjan\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Tuesday, August 16, 2005 2:00 PM\nTo: [email protected]\nSubject: Re: [PERFORM] choosing RAID level for xlogs\n\nQuoting Anjan Dave <[email protected]>:\n\n> Hi,\n> \n> \n> \n> One simple question. For 125 or more checkpoint segments\n> (checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n> 170MB) on a very busy database, what is more suitable, a separate 6\ndisk\n> RAID5 volume, or a RAID10 volume? Databases will be on separate\n> spindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\n> paramount, but I don't want to use RAID0.\n> \n\nRAID10 -- no question. xlog activity is overwhelmingly sequential 8KB\nwrites. \nIn order for RAID5 to perform a write, the host (or controller) needs to\nperform\nextra calculations for parity. This turns into latency. RAID10 does\nnot\nperform those extra calculations.\n\n> \n> \n> PG7.4.7 on RHAS 4.0\n> \n> \n> \n> I can provide more info if needed.\n> \n> \n> \n> Appreciate some recommendations!\n> \n> \n> \n> Thanks,\n> \n> Anjan\n> \n> \n> \n> \n> ---\n> This email message and any included attachments constitute\nconfidential\n> and privileged information intended exclusively for the listed\n> addressee(s). If you are not the intended recipient, please notify\n> Vantage by immediately telephoning 215-579-8390, extension 1158. In\n> addition, please reply to this message confirming your receipt of the\n> same in error. A copy of your email reply can also be sent to\n> [email protected]. Please do not disclose, copy, distribute or take\n> any action in reliance on the contents of this information. Kindly\n> destroy all copies of this message and any attachments. Any other use\nof\n> this email is prohibited. Thank you for your cooperation. 
For more\n> information about Vantage, please visit our website at\n> http://www.vantage.com <http://www.vantage.com/> .\n> ---\n> \n> \n> \n> \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 16 Aug 2005 14:37:38 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "Anjan Dave wrote:\n> Yes, that's true, though, I am a bit confused because the Clariion array\n> document I am reading talks about how the write cache can eliminate the\n> RAID5 Write Penalty for sequential and large IOs...resulting in better\n> sequential write performance than RAID10.\n>\n> anjan\n>\n\nWell, if your stripe size is 128k, and you have N disks in the RAID (N\nmust be even and > 4 for RAID10).\n\nWith RAID5 you have a stripe across N-1 disks, and 1 parity entry.\nWith RAID10 you have a stripe across N/2 disks, replicated on the second\nset.\n\nSo if the average write size is >128k*N/2, then you will generally be\nusing all of the disks during a write, and you can expect a the maximum\nscale up of about N/2 for RAID10.\n\nIf your average write size is >128k*(N-1) then you can again write an\nentire stripe at a time and even the parity since you already know all\nof the information you don't have to do any reading. So you can get a\nmaximum speed up of N-1.\n\nIf you are doing infrequent smallish writes, it can be buffered by the\nwrite cache, and isn't disk limited at all. And the controller can write\nit out when it feels like it. So it should be able to do more buffered\nall-at-once writes.\n\nIf you are writing a little bit more often (such that the cache fills\nup), depending on your write pattern, it is possible that all of the\nstripes are already in the cache, so again there is little penalty for\nthe parity stripe.\n\nI suppose the worst case is if you were writing lots of very small\nchunks, all over the disk in random order. In which case each write\nencounters a 2x read penalty for a smart controller, or a Nx read\npenalty if you are going for more safety than speed. (You can read the\noriginal value, and the parity, and re-compute the parity with the new\nvalue (2x read penalty), but if there is corruption it would not be\ndetected, so you might want to read all of the stripes in the block, and\nrecompute the parity with the new data (Nx read penalty)).\n\nI think the issue for Postgres is that it writes 8k pages, which is\nquite small relative to the stripe size. So you don't tend to build up\nbig buffers to write out the entire stripe at once.\n\nSo if you aren't filling up your write buffer, RAID5 can do quite well\nwith bulk loads.\nI also don't know about the penalties for a read followed immediately by\na write. Since you will be writing to the same location, you know that\nyou have to wait for the disk to spin back to the same location. At 10k\nrpm that is a 6ms wait time. For 7200rpm disks, it is 8.3ms.\n\nJust to say that there are some specific extra penalties when you are\nreading the location that you are going to write right away. Now a\nreally smart controller with lots of data to write could read the whole\ncircle on the disk, and then start writing out the entire circle, and\nnot have any spin delay. 
But you would have to know the size of the\ncircle, and that depends on what block you are on, and the heads\narrangement and everything else.\nThough since hard-drives also have small caches in them, you could hide\nsome of the spin delay, but not a lot, since you have to leave the head\nthere until you are done writing, so while the current command would\nfinish quickly, the next command couldn't start until the first actually\nfinished.\n\nWriting large buffers hides all of these seek/spin based latencies, so\nyou can get really good throughput. But a lot of DB action is small\nbuffers randomly distributed, so you really do need low seek time, of\nwhich RAID10 is probably better than RAID5.\n\nJohn\n=:->", "msg_date": "Tue, 16 Aug 2005 14:04:05 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "\nOn Aug 16, 2005, at 2:37 PM, Anjan Dave wrote:\n\n> Yes, that's true, though, I am a bit confused because the Clariion \n> array\n> document I am reading talks about how the write cache can eliminate \n> the\n> RAID5 Write Penalty for sequential and large IOs...resulting in better\n> sequential write performance than RAID10.\n>\n\nwell, then run your own tests and find out :-)\n\nif I were using LSI MegaRAID controllers, I'd probalby go RAID10, but \nI don't see why you need 6 disks for this... perhaps just 4 would be \nenough? Or are your logs really that big?\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n", "msg_date": "Tue, 16 Aug 2005 15:04:23 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "Anjan Dave wrote:\n> Yes, that's true, though, I am a bit confused because the Clariion array\n> document I am reading talks about how the write cache can eliminate the\n> RAID5 Write Penalty for sequential and large IOs...resulting in better\n> sequential write performance than RAID10.\n>\n> anjan\n>\n\nTo give a shorter statement after my long one...\nIf you have enough cache that the controller can write out big chunks to\nthe disk at a time, you can get very good sequential RAID5 performance,\nbecause the stripe size is large (so it can do a parallel write to all\ndisks).\n\nBut for small chunk writes, you suffer the penalty of the read before\nwrite, and possible multi-disk read (depends on what is in cache).\n\nRAID10 generally handles small writes better, and I would guess that\n4disks would perform almost identically to 6disks, since you aren't\nusually writing enough data to span multiple stripes.\n\nIf your battery-backed cache is big enough that you don't fill it, they\nprobably perform about the same (superfast) since the cache hides the\nlatency of the disks.\n\nIf you start filling up your cache, RAID5 probably can do better because\nof the parallelization.\n\nBut small writes followed by an fsync do favor RAID10 over RAID5.\n\nJohn\n=:->", "msg_date": "Tue, 16 Aug 2005 14:16:48 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "Theoretically RAID 5 can perform better than RAID 10 over the same\nnumber of drives (more members form the stripe in RAID 5 than in RAID\n10). All you have to do is calculate parity faster than the drives\ncan write. 
Doesn't seem like a hard task really, although most RAID\ncontrollers seem incapable of doing so, it is possible that Clariion\nmight be able to acheive it. The other factor is that for partial\nblock writes, the array has to first read the original block in order\nto recalculate the parity, so small random writes are very slow. If\nyou are writing chunks that are larger than your stripe size*(n-1),\nthen in theory the controller doesn't have to re-read a block, and can\njust overwrite the parity with the new info.\n\nConsider just four drives. in RAID 10, it is a stripe of two mirrors,\nforming two independant units to write to. in RAID 5, it is a 3 drive\nstripe with parity giving three independant units to write to. \nTheoretically the RAID 5 should be faster, but I've yet to benchmark a\ncontroler where this holds to be true.\n\nOf course if you ever do have a drive failure, your array grinds to a\nhalt because rebuilding a raid 5 requires reading (n-1) blocks to\nrebuild just one block where n is the number of drives in the array,\nwhereas a mirror only required to read from a single spindle of the\nRAID.\n\nI would suggest running some benchmarks at RAID 5 and RAID 10 to see\nwhat the _real_ performance actualy is, thats the only way to really\ntell.\n\nAlex Turner\nNetEconomist\n\nOn 8/16/05, Anjan Dave <[email protected]> wrote:\n> Yes, that's true, though, I am a bit confused because the Clariion array\n> document I am reading talks about how the write cache can eliminate the\n> RAID5 Write Penalty for sequential and large IOs...resulting in better\n> sequential write performance than RAID10.\n> \n> anjan\n> \n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Tuesday, August 16, 2005 2:00 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] choosing RAID level for xlogs\n> \n> Quoting Anjan Dave <[email protected]>:\n> \n> > Hi,\n> >\n> >\n> >\n> > One simple question. For 125 or more checkpoint segments\n> > (checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n> > 170MB) on a very busy database, what is more suitable, a separate 6\n> disk\n> > RAID5 volume, or a RAID10 volume? Databases will be on separate\n> > spindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\n> > paramount, but I don't want to use RAID0.\n> >\n> \n> RAID10 -- no question. xlog activity is overwhelmingly sequential 8KB\n> writes.\n> In order for RAID5 to perform a write, the host (or controller) needs to\n> perform\n> extra calculations for parity. This turns into latency. RAID10 does\n> not\n> perform those extra calculations.\n> \n> >\n> >\n> > PG7.4.7 on RHAS 4.0\n> >\n> >\n> >\n> > I can provide more info if needed.\n> >\n> >\n> >\n> > Appreciate some recommendations!\n> >\n> >\n> >\n> > Thanks,\n> >\n> > Anjan\n> >\n> >\n> >\n> >\n> > ---\n> > This email message and any included attachments constitute\n> confidential\n> > and privileged information intended exclusively for the listed\n> > addressee(s). If you are not the intended recipient, please notify\n> > Vantage by immediately telephoning 215-579-8390, extension 1158. In\n> > addition, please reply to this message confirming your receipt of the\n> > same in error. A copy of your email reply can also be sent to\n> > [email protected]. Please do not disclose, copy, distribute or take\n> > any action in reliance on the contents of this information. Kindly\n> > destroy all copies of this message and any attachments. Any other use\n> of\n> > this email is prohibited. 
Thank you for your cooperation. For more\n> > information about Vantage, please visit our website at\n> > http://www.vantage.com <http://www.vantage.com/> .\n> > ---\n> >\n> >\n> >\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n", "msg_date": "Tue, 16 Aug 2005 15:21:09 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "Don't forget that often controlers don't obey fsyncs like a plain\ndrive does. thats the point of having a BBU ;)\n\nAlex Turner\nNetEconomist\n\nOn 8/16/05, John A Meinel <[email protected]> wrote:\n> Anjan Dave wrote:\n> > Yes, that's true, though, I am a bit confused because the Clariion array\n> > document I am reading talks about how the write cache can eliminate the\n> > RAID5 Write Penalty for sequential and large IOs...resulting in better\n> > sequential write performance than RAID10.\n> >\n> > anjan\n> >\n> \n> To give a shorter statement after my long one...\n> If you have enough cache that the controller can write out big chunks to\n> the disk at a time, you can get very good sequential RAID5 performance,\n> because the stripe size is large (so it can do a parallel write to all\n> disks).\n> \n> But for small chunk writes, you suffer the penalty of the read before\n> write, and possible multi-disk read (depends on what is in cache).\n> \n> RAID10 generally handles small writes better, and I would guess that\n> 4disks would perform almost identically to 6disks, since you aren't\n> usually writing enough data to span multiple stripes.\n> \n> If your battery-backed cache is big enough that you don't fill it, they\n> probably perform about the same (superfast) since the cache hides the\n> latency of the disks.\n> \n> If you start filling up your cache, RAID5 probably can do better because\n> of the parallelization.\n> \n> But small writes followed by an fsync do favor RAID10 over RAID5.\n> \n> John\n> =:->\n> \n> \n>\n", "msg_date": "Tue, 16 Aug 2005 15:52:45 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" } ]
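To put rough numbers on John's full-stripe point for the xlog case (the disk count and stripe unit below are assumptions for illustration, not Anjan's actual Clariion settings): with N = 6 disks and a 64 KB stripe unit, a full RAID5 stripe is (N-1) * 64 KB = 320 KB. A single 8 KB xlog write covers only 1/40th of that, so unless the controller already holds the stripe in cache it must read the old data block and the old parity block, recompute parity, and write both back, roughly four physical I/Os for one logical write. The same 8 KB write on RAID10 is simply two writes, one to each side of a mirror pair. Only writes of about 320 KB and up (or a cache large enough to coalesce smaller writes into full stripes) let RAID5 skip the reads and write whole stripes at RAID0-like speed.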
[ { "msg_contents": "I would be very cautious about ever using RAID5, despite manufacturers' claims to the contrary. The link below is authored by a very knowledgable fellow whose posts I know (and trust) from Informix land.\n\n<http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt>\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Anjan Dave\nSent:\tMon 8/15/2005 1:35 PM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] choosing RAID level for xlogs\nHi,\n\n \n\nOne simple question. For 125 or more checkpoint segments\n(checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n170MB) on a very busy database, what is more suitable, a separate 6 disk\nRAID5 volume, or a RAID10 volume? Databases will be on separate\nspindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\nparamount, but I don't want to use RAID0.\n\n \n\nPG7.4.7 on RHAS 4.0\n\n \n\nI can provide more info if needed.\n\n \n\nAppreciate some recommendations!\n\n \n\nThanks,\n\nAnjan\n\n \n\n \n---\nThis email message and any included attachments constitute confidential\nand privileged information intended exclusively for the listed\naddressee(s). If you are not the intended recipient, please notify\nVantage by immediately telephoning 215-579-8390, extension 1158. In\naddition, please reply to this message confirming your receipt of the\nsame in error. A copy of your email reply can also be sent to\[email protected]. Please do not disclose, copy, distribute or take\nany action in reliance on the contents of this information. Kindly\ndestroy all copies of this message and any attachments. Any other use of\nthis email is prohibited. Thank you for your cooperation. For more\ninformation about Vantage, please visit our website at\nhttp://www.vantage.com <http://www.vantage.com/> .\n---\n\n \n\n\n\n!DSPAM:4300fd35105094125621296!\n\n\n\n", "msg_date": "Tue, 16 Aug 2005 15:22:55 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: choosing RAID level for xlogs" } ]
[ { "msg_contents": "Thanks, everyone. I got some excellent replies, including some long explanations. Appreciate the time you guys took out for the responses.\r\n \r\nThe gist of it i take, is to use RAID10. I have 400MB+ of write cache on the controller(s), that the RAID5 LUN(s) could benefit from by filling it up and writing out the complete stripe, but come to think of it, it's shared among the two Storage Processors, all the LUNs, not just the ones holding the pg_xlog directory. The other thing (with Clariion) is the write cache mirroring. Write isn't signalled complete to the host until the cache content is mirrored across the other SP (and vice-versa), which is a good thing, but this operation could potentially become a bottleneck with very high load on the SPs.\r\n \r\nAlso, one would have to fully trust the controller/manufacturer's claim on signalling the write completion. And, performance is a priority over the drive space lost in RAID10 for me.\r\n \r\nI can use 4 drives instead of 6.\r\n \r\nThanks,\r\nAnjan \r\n\r\n\tt-----Original Message----- \r\n\tFrom: Gregory S. Williamson [mailto:[email protected]] \r\n\tSent: Tue 8/16/2005 6:22 PM \r\n\tTo: Anjan Dave; [email protected] \r\n\tCc: \r\n\tSubject: RE: [PERFORM] choosing RAID level for xlogs\r\n\t\r\n\t\r\n\r\n\tI would be very cautious about ever using RAID5, despite manufacturers' claims to the contrary. The link below is authored by a very knowledgable fellow whose posts I know (and trust) from Informix land.\r\n\r\n\t<http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt> \r\n\r\n\tGreg Williamson \r\n\tDBA \r\n\tGlobeXplorer LLC \r\n\r\n\r\n\t-----Original Message----- \r\n\tFrom: [email protected] on behalf of Anjan Dave \r\n\tSent: Mon 8/15/2005 1:35 PM \r\n\tTo: [email protected] \r\n\tCc: \r\n\tSubject: [PERFORM] choosing RAID level for xlogs \r\n\tHi, \r\n\r\n\t\r\n\r\n\tOne simple question. For 125 or more checkpoint segments \r\n\t(checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or \r\n\t170MB) on a very busy database, what is more suitable, a separate 6 disk \r\n\tRAID5 volume, or a RAID10 volume? Databases will be on separate \r\n\tspindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is \r\n\tparamount, but I don't want to use RAID0. \r\n\r\n\t\r\n\r\n\tPG7.4.7 on RHAS 4.0 \r\n\r\n\t\r\n\r\n\tI can provide more info if needed. \r\n\r\n\t\r\n\r\n\tAppreciate some recommendations! \r\n\r\n\t\r\n\r\n\tThanks, \r\n\r\n\tAnjan \r\n\r\n\t\r\n\r\n\t\r\n\t--- \r\n\tThis email message and any included attachments constitute confidential \r\n\tand privileged information intended exclusively for the listed \r\n\taddressee(s). If you are not the intended recipient, please notify \r\n\tVantage by immediately telephoning 215-579-8390, extension 1158. In \r\n\taddition, please reply to this message confirming your receipt of the \r\n\tsame in error. A copy of your email reply can also be sent to \r\n\[email protected]. Please do not disclose, copy, distribute or take \r\n\tany action in reliance on the contents of this information. Kindly \r\n\tdestroy all copies of this message and any attachments. Any other use of \r\n\tthis email is prohibited. Thank you for your cooperation. For more \r\n\tinformation about Vantage, please visit our website at \r\n\thttp://www.vantage.com <http://www.vantage.com/> . \r\n\t--- \r\n\r\n\t\r\n\r\n\r\n\r\n\t!DSPAM:4300fd35105094125621296! 
\r\n\r\n\r\n\r\n", "msg_date": "Tue, 16 Aug 2005 21:12:14 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: choosing RAID level for xlogs" }, { "msg_contents": "The other point that is well made is that with enough drives you will\nmax out the PCI bus before you max out the drives. 64-bit 66Mhz can\ndo about 400MB/sec, which can be acheived by two 3 drive stripes (6\ndrive in RAID 10). A true PCI-X card can do better, but can your\ncontroller? Remember, U320 is only 320MB/channel...\n\nAlex Turner\nNetEconomist\n\nOn 8/16/05, Anjan Dave <[email protected]> wrote:\n> Thanks, everyone. I got some excellent replies, including some long explanations. Appreciate the time you guys took out for the responses.\n> \n> The gist of it i take, is to use RAID10. I have 400MB+ of write cache on the controller(s), that the RAID5 LUN(s) could benefit from by filling it up and writing out the complete stripe, but come to think of it, it's shared among the two Storage Processors, all the LUNs, not just the ones holding the pg_xlog directory. The other thing (with Clariion) is the write cache mirroring. Write isn't signalled complete to the host until the cache content is mirrored across the other SP (and vice-versa), which is a good thing, but this operation could potentially become a bottleneck with very high load on the SPs.\n> \n> Also, one would have to fully trust the controller/manufacturer's claim on signalling the write completion. And, performance is a priority over the drive space lost in RAID10 for me.\n> \n> I can use 4 drives instead of 6.\n> \n> Thanks,\n> Anjan\n> \n> t-----Original Message-----\n> From: Gregory S. Williamson [mailto:[email protected]]\n> Sent: Tue 8/16/2005 6:22 PM\n> To: Anjan Dave; [email protected]\n> Cc:\n> Subject: RE: [PERFORM] choosing RAID level for xlogs\n> \n> \n> \n> I would be very cautious about ever using RAID5, despite manufacturers' claims to the contrary. The link below is authored by a very knowledgable fellow whose posts I know (and trust) from Informix land.\n> \n> <http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt>\n> \n> Greg Williamson\n> DBA\n> GlobeXplorer LLC\n> \n> \n> -----Original Message-----\n> From: [email protected] on behalf of Anjan Dave\n> Sent: Mon 8/15/2005 1:35 PM\n> To: [email protected]\n> Cc:\n> Subject: [PERFORM] choosing RAID level for xlogs\n> Hi,\n> \n> \n> \n> One simple question. For 125 or more checkpoint segments\n> (checkpoint_timeout is 600 seconds, shared_buffers are at 21760 or\n> 170MB) on a very busy database, what is more suitable, a separate 6 disk\n> RAID5 volume, or a RAID10 volume? Databases will be on separate\n> spindles. Disks are 36GB 15KRPM, 2Gb Fiber Channel. Performance is\n> paramount, but I don't want to use RAID0.\n> \n> \n> \n> PG7.4.7 on RHAS 4.0\n> \n> \n> \n> I can provide more info if needed.\n> \n> \n> \n> Appreciate some recommendations!\n> \n> \n> \n> Thanks,\n> \n> Anjan\n> \n> \n> \n> \n> ---\n> This email message and any included attachments constitute confidential\n> and privileged information intended exclusively for the listed\n> addressee(s). If you are not the intended recipient, please notify\n> Vantage by immediately telephoning 215-579-8390, extension 1158. In\n> addition, please reply to this message confirming your receipt of the\n> same in error. A copy of your email reply can also be sent to\n> [email protected]. Please do not disclose, copy, distribute or take\n> any action in reliance on the contents of this information. 
Kindly\n> destroy all copies of this message and any attachments. Any other use of\n> this email is prohibited. Thank you for your cooperation. For more\n> information about Vantage, please visit our website at\n> http://www.vantage.com <http://www.vantage.com/> .\n> ---\n> \n> \n> \n> \n> \n> !DSPAM:4300fd35105094125621296!\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Tue, 16 Aug 2005 23:10:38 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing RAID level for xlogs" } ]
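Rough arithmetic behind Alex's bus-saturation point, with per-spindle throughput as an assumed ballpark figure rather than a measured one: a 64-bit, 66 MHz PCI slot moves 8 bytes * 66 MHz = 528 MB/sec in theory, call it roughly 400 MB/sec sustained. A 15K RPM drive of that generation streams very roughly 60-70 MB/sec, so six spindles reading in a RAID10 can already approach 400 MB/sec, while a single U320 SCSI channel caps out at 320 MB/sec. Past a handful of fast drives, the bus or the channel rather than the platters sets the ceiling for sequential work.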
[ { "msg_contents": "We just moved a large production instance of ours from Oracle to\nPostgres 8.0.3 on linux. When running on Oracle the machine hummed\nalong using about 5% of the CPU easily handling the fairly constant\nload, after moving the data to Postgres the machine was pretty much\nmaxed out on CPU and could no longer keep up with the transaction\nvolume. On a hunch I switched the jdbc driver to using the V2 protocol\nand the load on the machine dropped down to what it was when using\nOracle and everything was fine.\n\n \n\nNow obviously I have found a work around for the performance problem,\nbut I really don't want to rely on using the V2 protocol forever, and\ndon't want to have to recommend to our customers that they need to run\nwith the V2 protocol. So I would like to resolve the problem and be\nable to move back to a default configuration with the V3 protocol and\nthe benefits thereof.\n\n \n\nThe problem is that I don't really know where to begin to debug a\nproblem like this. In development environments and testing environments\nwe have not seen performance problems with the V3 protocol in the jdbc\ndriver. But they don't come close to approaching the transaction volume\nof this production instance.\n\n \n\nWhat I see when running the V3 protocol under 'top' is that the postgres\nprocesses are routinely using 15% or more of the CPU each, when running\nthe V2 protocol they use more like 0.3%.\n\n \n\nDoes anyone have any suggestions on an approach to debug a problem like\nthis?\n\n \n\nThanks,\n\n--Barry\n\n\n\n\n\n\n\n\n\n\nWe just moved a large production instance of ours from\nOracle to Postgres 8.0.3 on linux.  When running on Oracle the machine\nhummed along using about 5% of the CPU easily handling the fairly constant load,\nafter moving the data to Postgres the machine was pretty much maxed out on CPU\nand could no longer keep up with the transaction volume.  On a hunch I\nswitched the jdbc driver to using the V2 protocol and the load on the machine\ndropped down to what it was when using Oracle and everything was fine.\n \nNow obviously I have found a work around for the performance\nproblem, but I really don’t want to rely on using the V2 protocol\nforever, and don’t want to have to recommend to our customers that they\nneed to run with the V2 protocol.  So I would like to resolve the problem\nand be able to move back to a default configuration with the V3 protocol and\nthe benefits thereof.\n \nThe problem is that I don’t really know where to begin\nto debug a problem like this.  In development environments and testing\nenvironments we have not seen performance problems with the V3 protocol in the\njdbc driver.  But they don’t come close to approaching the\ntransaction volume of this production instance.\n \nWhat I see when running the V3 protocol under ‘top’\nis that the postgres processes are routinely using 15% or more of the CPU each,\nwhen running the V2 protocol they use more like 0.3%.\n \nDoes anyone have any suggestions on an approach to debug a\nproblem like this?\n \nThanks,\n--Barry", "msg_date": "Tue, 16 Aug 2005 21:42:29 -0700", "msg_from": "\"Barry Lind\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem using V3 protocol in jdbc driver" }, { "msg_contents": "\"Barry Lind\" <[email protected]> writes:\n> ... 
On a hunch I switched the jdbc driver to using the V2 protocol\n> and the load on the machine dropped down to what it was when using\n> Oracle and everything was fine.\n\nFirst knee-jerk reaction is that it's an optimization problem stemming\nfrom V3 protocol feeding parameterized queries to the backend where V2\ndid not, and the planner being unable to cope :-(\n\nCan you identify the specific queries causing the problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Aug 2005 01:01:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem using V3 protocol in jdbc driver " }, { "msg_contents": "Quoting Barry Lind <[email protected]>:\n\n<snip>\n> \n> \n> What I see when running the V3 protocol under 'top' is that the postgres\n> processes are routinely using 15% or more of the CPU each, when running\n> the V2 protocol they use more like 0.3%.\n> \n> \n> \n> Does anyone have any suggestions on an approach to debug a problem like\n> this?\n> \n> \n\nTracing system calls is a good starting point--truss on Solaris, strace on Linux\n(Redhat anyway), ktrace on BSD. The difference between 0.3% and 15% CPU\nutilization under similar load will very likely (though not with complete\ncertainty) be showing very noticeably different system call activity.\n\nIf you notice a difference in system call activity, then that would probably\nprovide a hint as to what's going on--where the inefficiency lies. It's\npossible to spin the CPU up without any system calls, but system call tracing\ncan be done pretty quickly and you should be able to see any interesting\npatterns emerge quite quickly.\n\n^\n|\n\nThis method is a good starting point for troubleshooting just about any funny\nprocess activity. And it comes with the added benefit of not having to know\nahead of time about the specific matter at hand (JDBC implementation, in this\ncase). :-) That's having your cake and eating it, too.\n\n> \n> Thanks,\n> \n> --Barry\n> \n> \n\n\n", "msg_date": "Wed, 17 Aug 2005 01:14:24 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance problem using V3 protocol in jdbc driver" }, { "msg_contents": "Barry,\n\nI have made a similar experience, moving a big Oracle data base to\nPostgres 8.03 on linux.\nThe first impact was similar, huge performance problems.\nThe main problem was bad planner choices. The cause in our case: bad\nparameter types in the jdbc set methods (I guess you use Java). For\noracle we used the NUMERIC type to set primary keys, but the postgres id\ntype used was BIGINT, and it just refused to use the index in this case.\nImagine that kicking in on a 100 million rows table... a sequential scan\nstarted a few times a second, now that made the DB unusable.\nSo we fixed the code that for oracle continues to use NUMERIC and for\npostgres it uses BIGINT, and that is very important on setNull calls\ntoo.\n\nOne very useful tool was the following query:\n\nprepare ps as\nSELECT procpid, substring(current_query for 97),\nto_char((now()-query_start), 'HH24:MI:SS') as t\nFROM pg_stat_activity\nwhere current_query not like '%<insufficient%'\n and current_query not like '%IDLE%' order by t desc;\n\nThen you just \"execute ps;\" in psql, and it will show you the queries\nwhich are already running for a while.\n\nOther problems were caused by complex queries, where more than 2 tables\nwere joined. 
For oracle we were giving \"hints\" in the form of special\ncomments, to point to the right index, right plan, but that's not an\noption for postgres (yet ?). So the fix in this case was to use explicit\njoins which do influence the postgres planner choices. This fixed\nanother class of issues for us...\n\nAnother problem: if you want to avoid worst-case plans, and do away with\na generic plan for all cases, then you might force the usage of server\nside prepare statements in all cases. I had to do that, a lot of queries\nwere performing very badly without this. Now maybe that could be solved\nby raising the statistics targets where needed, but in my case the\ngeneric plan was always good enough, by design. We rely on the DB\npicking a good generic plan in all cases. One typical example for us\nwould be: a limit query which select 20 rows out of 100 million, with a\nwhere clause which actually selects 1 row out of it for the last\nchunk... it was going for an index scan, but on the wrong index. The\nright index would have selected that exactly 1 row, the wrong one had to\ncruise through a few million rows... the limit fooled the planner that\nit will get 20 rows quickly. Now when I forced the usage of a prepared\nstatement, it went for the right index and all was good.\nI actually set this in our connection pool:\n ((PGConnection)connection).setPrepareThreshold(1);\nbut it is possible to set/reset it on a statement level, I just didn't\nfind any query I should to do it for yet... the DB is steady now.\n\nAnother issue was that we've had some functional indexes on oracle\nreturning null for uninteresting rows, to lower the index size. This is\neasier to implement on postgres using a partial index, which has a lot\nsimpler syntax than the oracle hack, and it is easier to handle. The\ncatch was that we needed to change the where clause compared to oracle\nso that postgres picks the partial index indeed. There are cases where\nthe planner can't figure out that it can use the index, especially if\nyou use prepared statements and one of the parameters is used in the\nindex condition. In this case it is needed to add the proper restriction\nto the where clause to point postgres to use the partial index. Using\npartial indexes speeds up the inserts and updates on those tables, and\ncould speed up some selects too.\n\nHmmm... that's about what I recall now... beside the postgres admin\nstuff, have you analyzed your data after import ? I forgot to do that at\nfirst, and almost reverted again back to oracle... and then after a few\ndays it was very clear that running the auto-vacuum daemon is also a\nmust :-)\nAnd: for big data sets is important to tweak all performance settings in\nthe config file, otherwise you get surprises. We've been running a\nsmaller instance of the same code on postgres for quite a while before\ndeciding to migrate a big one, and that was cruising along happily with\nthe default settings, so the first time we needed to do optimizations\nwas when using a data set with a lot of data in it...\n\nHTH,\nCsaba.\n\n\nOn Wed, 2005-08-17 at 06:42, Barry Lind wrote:\n> We just moved a large production instance of ours from Oracle to\n> Postgres 8.0.3 on linux. When running on Oracle the machine hummed\n> along using about 5% of the CPU easily handling the fairly constant\n> load, after moving the data to Postgres the machine was pretty much\n> maxed out on CPU and could no longer keep up with the transaction\n> volume. 
On a hunch I switched the jdbc driver to using the V2\n> protocol and the load on the machine dropped down to what it was when\n> using Oracle and everything was fine.\n> \n> \n> \n> Now obviously I have found a work around for the performance problem,\n> but I really don’t want to rely on using the V2 protocol forever, and\n> don’t want to have to recommend to our customers that they need to run\n> with the V2 protocol. So I would like to resolve the problem and be\n> able to move back to a default configuration with the V3 protocol and\n> the benefits thereof.\n> \n> \n> \n> The problem is that I don’t really know where to begin to debug a\n> problem like this. In development environments and testing\n> environments we have not seen performance problems with the V3\n> protocol in the jdbc driver. But they don’t come close to approaching\n> the transaction volume of this production instance.\n> \n> \n> \n> What I see when running the V3 protocol under ‘top’ is that the\n> postgres processes are routinely using 15% or more of the CPU each,\n> when running the V2 protocol they use more like 0.3%.\n> \n> \n> \n> Does anyone have any suggestions on an approach to debug a problem\n> like this?\n> \n> \n> \n> Thanks,\n> \n> --Barry\n> \n> \n\n", "msg_date": "Wed, 17 Aug 2005 11:30:51 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem using V3 protocol in jdbc driver" } ]
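Two quick psql experiments that make Tom's and Csaba's points visible, using a made-up table (orders, with a bigint primary key order_id) since Barry's schema isn't shown. The first pair shows the cross-type trap Csaba describes: on 7.4/8.0 a numeric parameter compared against a bigint column cannot use the primary key index. The second shows how to inspect the generic plan the backend builds for a parameterized statement of the kind the V3 protocol sends.

-- numeric vs. bigint: the first form seq-scans, the second uses the pkey index
EXPLAIN SELECT * FROM orders WHERE order_id = 42::numeric;
EXPLAIN SELECT * FROM orders WHERE order_id = 42::bigint;

-- what the planner does when it cannot see the parameter value
PREPARE find_order(bigint) AS
    SELECT * FROM orders WHERE order_id = $1;
EXPLAIN EXECUTE find_order(42);
DEALLOCATE find_order;

On the JDBC side the corresponding fixes are the ones Csaba lists: bind with setLong / setNull(..., Types.BIGINT) rather than a BigDecimal/NUMERIC, and use PGConnection.setPrepareThreshold() to control when the driver switches to server-side prepared statements.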
[ { "msg_contents": "I think I have a solution for you.\n\nYou have posted that you presently have these RAID volumes and behaviors:\n sda: data (10 spindles, raid10)\n sdb: xlog & clog (2 spindles, raid1)\n sdc: os and other stuff\n\nUsually iostat (2 second interval) says:\navg-cpu: %user %nice %sys %iowait %idle\n 32.38 0.00 12.88 11.62 43.12\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n sda 202.00 1720.00 0.00 3440 0\n sdb 152.50 4.00 2724.00 8 5448\n sdc 0.00 0.00 0.00 0 0\n\nAnd during checkpoint:\navg-cpu: %user %nice %sys %iowait %idle\n 31.25 0.00 14.75 54.00 0.00\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 3225.50 1562.00 35144.00 3124 70288\nsdb 104.50 10.00 2348.00 20 4696\nsdc 0.00 0.00 0.00 0 0\n\n\nDuring checkpoints sda is becoming saturated, essentially halting all \nother DB activity involving sda. A lesser version of the porblem is \nprobably occurring every time multiple entities on sda are being \naccessed simultaneously, particularly simultaneous writes.\n\nMy Proposed Solution:\nPut comment and its index on it's own dedicated RAID volume.\nPut comment_archive and its index on its own dedicated RAID volume.\nPut the rest of the tables currently part of \"data\" on their own \ndedicated RAID volume.\nPut the rest if the indexes to the tables currently part of \"data\" on \ntheir own dedicated RAID volume.\nPut xlog on its own dedicated RAID volume.\n\nThe general idea here is to put any tables or indexes that tend to \nrequire simultaneous access, particularly write access, on different \nspindles. Like all things, there's a point of diminishing returns \nthat is dependent on the HW used and the DB load.\n\nIf you must wring every last bit of IO out of the HD subsystem, a \nmore exact set of spindle assignments can be made by analyzing your \nqueries and then 1) make sure writes that tend to be simultaneous are \nto different spindles, then (if you still need better IO) 2) make \nsure reads that tend to be simultaneous are to different \nspindles. At some point, your controller will become the \nbottleneck. At some point beyond that, the IO channels on the \nmainboard will become the bottleneck.\n\nMy suggestion should get you to within 80-90% of optimal if I've \nunderstood the implications of your posts correctly.\n\nThe other suggestion I'd make is to bump your RAM from 16GB to 32GB \nas soon as you can afford it and then tune your PostgreSQL parameters \nto make best use of it. The more RAM resident your DB, the better.\n\nHope this helps,\nRon Peacetree\n\n\n===========Original Message Follows===========\nFrom: Kari Lavikka <tuner ( at ) bdb ( dot ) fi>\nTo: Merlin Moncure <merlin ( dot ) moncure ( at ) rcsonline ( dot ) com>\nSubject: Re: Finding bottleneck\nDate: Mon, 8 Aug 2005 19:19:09 +0300 (EETDST)\n\n----------\n\nActually I modified postgresql.conf a bit and there isn't commit \ndelay any more. That didn't make noticeable difference though..\n\nWorkload is generated by a website with about 1000 dynamic page views \na second. Finland's biggest site among youths btw.\n\n\nAnyway, there are about 70 tables and here's some of the most important:\n relname | reltuples\n----------------------------------+-------------\n comment | 1.00723e+08\n comment_archive | 9.12764e+07\n channel_comment | 6.93912e+06\n image | 5.80314e+06\n admin_event | 5.1936e+06\n user_channel | 3.36877e+06\n users | 325929\n channel | 252267\n\nQueries to \"comment\" table are mostly IO-bound but are performing \nquite well. 
Here's an example:\n(SELECT u.nick, c.comment, c.private, c.admin, c.visible, c.parsable, \nc.uid_sender, to_char(c.stamp, 'DD.MM.YY HH24:MI') AS stamp, \nc.comment_id FROM comment c INNER JOIN users u ON u.uid = \nc.uid_sender WHERE u.status = 'a' AND c.image_id = 15500900 AND \nc.uid_target = 780345 ORDER BY uid_target DESC, image_id DESC, \nc.comment_id DESC) LIMIT 36\n\n\nAnd explain analyze:\n Limit (cost=0.00..6.81 rows=1 width=103) (actual \ntime=0.263..17.522 rows=12 loops=1)\n -> Nested Loop (cost=0.00..6.81 rows=1 width=103) (actual \ntime=0.261..17.509 rows=12 loops=1)\n -> Index Scan Backward using \ncomment_uid_target_image_id_comment_id_20050527 on \"comment\" \nc (cost=0.00..3.39 rows=1 width=92) (actual time=0.129..16.213 \nrows=12 loops=1)\n Index Cond: ((uid_target = 780345) AND (image_id = 15500900))\n -> Index Scan using users_pkey on users \nu (cost=0.00..3.40 rows=1 width=15) (actual time=0.084..0.085 rows=1 loops=12)\n Index Cond: (u.uid = \"outer\".uid_sender)\n Filter: (status = 'a'::bpchar)\n Total runtime: 17.653 ms\n\n\nWe are having performance problems with some smaller tables and very \nsimple queries. For example:\nSELECT u.uid, u.nick, extract(epoch from uc.stamp) AS stamp FROM \nuser_channel uc INNER JOIN users u USING (uid) WHERE channel_id = \n281321 AND u.status = 'a' ORDER BY uc.channel_id, upper(uc.nick)\n\n\nAnd explain analyze:\n Nested Loop (cost=0.00..200.85 rows=35 width=48) (actual \ntime=0.414..38.128 rows=656 loops=1)\n -> Index Scan using user_channel_channel_id_nick on user_channel \nuc (cost=0.00..40.18 rows=47 width=27) (actual time=0.090..0.866 \nrows=667 loops=1)\n Index Cond: (channel_id = 281321)\n -> Index Scan using users_pkey on users u (cost=0.00..3.40 \nrows=1 width=25) (actual time=0.048..0.051 rows=1 loops=667)\n Index Cond: (\"outer\".uid = u.uid)\n Filter: (status = 'a'::bpchar)\n Total runtime: 38.753 ms\n\nUnder heavy load these queries tend to take several minutes to \nexecute although there's plenty of free cpu available. There aren't \nany blocking locks in pg_locks.\n\n\n |\\__/|\n ( oo ) Kari Lavikka - tuner ( at ) bdb ( dot ) fi - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\n\n\n", "msg_date": "Wed, 17 Aug 2005 00:48:26 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck" } ]
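If Kari's server is on 8.0 or later, Ron's one-relation-per-volume layout maps directly onto tablespaces. A rough sketch, with made-up mount points for the dedicated RAID volumes; the index name is the one that appears in the EXPLAIN output above, and moving a 100M-row table rewrites its files under an exclusive lock, so it needs a maintenance window:

CREATE TABLESPACE comment_data LOCATION '/dbdata_comment';
CREATE TABLESPACE comment_index LOCATION '/dbindex_comment';

ALTER TABLE "comment" SET TABLESPACE comment_data;
ALTER INDEX comment_uid_target_image_id_comment_id_20050527 SET TABLESPACE comment_index;

The same pattern repeats for comment_archive and the rest of the hot tables and indexes; the xlog already sits on its own mirrored pair (sdb), which covers Ron's last point.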
[ { "msg_contents": "That was my suspicion as well, which is why I tried the V2 protocol. \n\nI do not know of any specific queries that are causing the problem. As\nI monitor 'top' I see processes utilizing a significant amount of CPU\nrunning SELECT, UPDATE and DELETE, which would lead me to believe that\nit isn't any one specific query.\n\nHow does one identify on a live system specific queries that are running\nslow, especially with the V3 protocol and when the system is executing\nabout a 100 queries a second (which makes turning on any sort of logging\nvery very verbose)? (I just subscribed to the performance list, so this\nis probably something that has been answered many times before on this\nlist).\n\nI haven't tried to track down a performance problem like this before on\npostgres. Since most of our large customers run Oracle that is where I\nhave the knowledge to figure something like this out.\n\nThanks,\n--Barry\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, August 16, 2005 10:02 PM\nTo: Barry Lind\nCc: [email protected]; [email protected]\nSubject: Re: [JDBC] Performance problem using V3 protocol in jdbc driver\n\n\n\"Barry Lind\" <[email protected]> writes:\n> ... On a hunch I switched the jdbc driver to using the V2 protocol\n> and the load on the machine dropped down to what it was when using\n> Oracle and everything was fine.\n\nFirst knee-jerk reaction is that it's an optimization problem stemming\nfrom V3 protocol feeding parameterized queries to the backend where V2\ndid not, and the planner being unable to cope :-(\n\nCan you identify the specific queries causing the problem?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 16 Aug 2005 22:43:40 -0700", "msg_from": "\"Barry Lind\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem using V3 protocol in jdbc driver " }, { "msg_contents": "Barry,\n\n\nOne way to do this is to turn logging on for calls over a certain \nduration\n\n\nlog_duration in the config file. This will only log calls over n \nmilliseconds.\n\nThere's a tool called iron eye SQL that monitors JDBC calls.\n\nhttp://www.irongrid.com/\n\nunfortunately I am getting DNS errors from that site right now. I do \nhave a copy of their code if you need it.\n\nDave\n\nOn 17-Aug-05, at 1:43 AM, Barry Lind wrote:\n\n> That was my suspicion as well, which is why I tried the V2 protocol.\n>\n> I do not know of any specific queries that are causing the \n> problem. As\n> I monitor 'top' I see processes utilizing a significant amount of CPU\n> running SELECT, UPDATE and DELETE, which would lead me to believe that\n> it isn't any one specific query.\n>\n> How does one identify on a live system specific queries that are \n> running\n> slow, especially with the V3 protocol and when the system is executing\n> about a 100 queries a second (which makes turning on any sort of \n> logging\n> very very verbose)? (I just subscribed to the performance list, so \n> this\n> is probably something that has been answered many times before on this\n> list).\n>\n> I haven't tried to track down a performance problem like this \n> before on\n> postgres. 
Since most of our large customers run Oracle that is \n> where I\n> have the knowledge to figure something like this out.\n>\n> Thanks,\n> --Barry\n>\n>\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Tuesday, August 16, 2005 10:02 PM\n> To: Barry Lind\n> Cc: [email protected]; [email protected]\n> Subject: Re: [JDBC] Performance problem using V3 protocol in jdbc \n> driver\n>\n>\n> \"Barry Lind\" <[email protected]> writes:\n>\n>> ... On a hunch I switched the jdbc driver to using the V2 protocol\n>> and the load on the machine dropped down to what it was when using\n>> Oracle and everything was fine.\n>>\n>\n> First knee-jerk reaction is that it's an optimization problem stemming\n> from V3 protocol feeding parameterized queries to the backend where V2\n> did not, and the planner being unable to cope :-(\n>\n> Can you identify the specific queries causing the problem?\n>\n> regards, tom lane\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\n", "msg_date": "Wed, 17 Aug 2005 08:59:30 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Performance problem using V3 protocol in jdbc driver " } ]
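One note on the parameter names from the reply above: log_duration is a boolean that logs the duration of every completed statement, while log_min_duration_statement is the one that takes a millisecond threshold, which is what you want at ~100 queries/second. A minimal way to catch only the slow statements (200 ms is an arbitrary starting threshold):

-- in postgresql.conf, picked up by a reload (pg_ctl reload):
--   log_min_duration_statement = 200   # milliseconds; 0 logs everything, -1 disables
-- or, as a superuser, for the current session only:
SET log_min_duration_statement = 200;

Combined with stats_command_string = true, the pg_stat_activity view (for instance the 'prepare ps' query Csaba posted in the earlier thread) shows what the busy backends are executing right now.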
[ { "msg_contents": "Hi\n I am using Postgres version \n *PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5).* \n for an multy user desktop application using VB 6.0 as a front end toll.\n\n To connect To the PostgreSQL I am using *PostgreSQL Win32 ODBC and OLEDB client drivers 1.0.0.2* \n\n The files included are \n Version 1.0 of the PGW32CLI Installer will install the following file versions. Files are installed in a separate PGW32CLI directory so should not conflict with existing applications.\n libpq.dll 8.0.2.5098 (PostgreSQL library)\n libintl-2.dll 0.11.5.1189 (GNU Text Utils)\n libiconv-2.dll 1.8.1134.7927 (GNU Text Utils)\n psqlodbc.dll 8.0.0.4 (PG ODBC)\n pgoledb.dll 1.0.0.19 (PgOleDB)\n libeay32.dll 0.9.7.f (OpenSSL)\n ssleay32.dll 0.9.7.f (OpenSSL)\n \nI have server configuration as \n P4 3 GHz HT Tech\n 2 GB DDR RAM,\n Intel Original 875 Chipset Motherboard,\n 73 GB 10 K RPM SCSI HDD x 2 Nos.\n Adp SCSI Controller, (You can do software RAID on it)\n Server Class Cabinet\n \n Since in the database I have one Major table that Debtor table which is master table and having around 55 lac records. I have set debtorId as a primary key having index on it.I am developing a search screen to search a specific debtor info using this table. \n\nWhen I fire a query to search a debtor id, it took around 5 seconds to return an answer for a query whether entered debtor id is present in the database or not using ODBC. Where as when Explian the query on the database \n Index Scan using tbmstban_debtorid on tbmstbandetails (cost=0.00..6.01 rows=2 width=143)\n Index Cond: ((debtorid)::text = '234'::text)\n\nQuery for the search criteria is \n select * from tbmstdebtordetails where debtorid ='234' \n\n Where as when I am using a like query to search a record starting with debtor id having a characters then it took around 10-15 sec to return a record set having records.\nquery is \nselect * from tbmstdebtordetails where debtorid like '234%' \n\nExplain output on the database\n Index Scan using tbmstban_debtorid on tbmstbandetails (cost=0.00..6.01 rows=2 width=143)\n Index Cond: ((debtorid)::text = '234%'::text)\nThanks & regards,\nMahesh Shinde\n------------------------------------------------------------------\nCodec Communications (I) Pvt. Ltd.\nPUNE (INDIA)\nT # 91-20-24221460/70(Ext 43)\nDesk No. 25143\nEmail - [email protected]\n\n\n\n\n\n\n\nHi\n    I am using Postgres version \n\n        \n*PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 \n20030222 (Red Hat Linux 3.2.2-5).* \n        for an multy \nuser desktop application using VB 6.0 as a front end \ntoll.\n \n    To connect To the PostgreSQL I am using *PostgreSQL Win32 \nODBC and OLEDB client drivers 1.0.0.2* \n \n    The files included are \n\n        \nVersion 1.0 of the PGW32CLI Installer will install the following file versions. 
\nFiles are installed in a separate PGW32CLI directory so should not conflict with \nexisting             \napplications.\n        \n    libpq.dll 8.0.2.5098 (PostgreSQL \nlibrary)            libintl-2.dll \n0.11.5.1189 (GNU Text Utils)            libiconv-2.dll \n1.8.1134.7927 (GNU Text Utils)            psqlodbc.dll \n8.0.0.4 (PG ODBC)            pgoledb.dll \n1.0.0.19 (PgOleDB)            libeay32.dll \n0.9.7.f (OpenSSL)            ssleay32.dll \n0.9.7.f (OpenSSL)\n    \n\nI have server configuration as \n\n        \n    P4 3 GHz HT Tech\n    \n        2 GB DDR RAM,\n    \n        Intel Original 875 Chipset \nMotherboard,\n    \n        73 GB  10 K RPM SCSI HDD x  \n2 Nos.\n    \n        Adp SCSI Controller, (You can do software \nRAID on it)\n    \n        Server Class Cabinet\n    \n\n    Since in the \ndatabase I have one Major table that Debtor table which is master table and \nhaving around 55 lac records. I have set debtorId as a primary key having index \non it.I am developing a search screen to search a specific debtor \ninfo using this table. \n \nWhen I fire a query to search a \ndebtor id,  it took around 5 seconds to return an answer for a query \nwhether entered debtor id is present in the database or not using ODBC. Where as \nwhen Explian  the query on the database \n Index Scan using \ntbmstban_debtorid on tbmstbandetails  (cost=0.00..6.01 rows=2 \nwidth=143)   Index Cond: ((debtorid)::text = \n'234'::text)Query for the \nsearch criteria is \n\n select * from tbmstdebtordetails \nwhere debtorid ='234' \n \n Where as when I am using a like \nquery to search a record starting with debtor id having a characters then it \ntook around 10-15 sec to return a record set having records.\nquery is \nselect * from tbmstdebtordetails where \ndebtorid like '234%' \n \nExplain output on the database\n Index Scan \nusing tbmstban_debtorid on tbmstbandetails  (cost=0.00..6.01 rows=2 \nwidth=143)   Index Cond: ((debtorid)::text = \n'234%'::text)\nThanks & regards,Mahesh \nShinde------------------------------------------------------------------Codec \nCommunications (I) Pvt. Ltd.PUNE (INDIA)T # 91-20-24221460/70(Ext \n43)Desk No. 
25143Email – [email protected]", "msg_date": "Wed, 17 Aug 2005 02:50:45 -0700", "msg_from": "\"Mahesh Shinde\" <[email protected]>", "msg_from_op": true, "msg_subject": "Data Selection Slow From VB 6.0" }, { "msg_contents": "\n\n> When I fire a query to search a debtor id, it took around 5 seconds\n> to return an answer for a query [...]\n\nAre you sure that time is actually spent in the database engine?\nMaybe there are DNS resolving issues or something...\n\nDid you try to execute the queries directly on the server from\nthe psql shell?\n\nBye, Chris.\n\n\n> \n\n", "msg_date": "Wed, 17 Aug 2005 12:28:16 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data Selection Slow From VB 6.0" }, { "msg_contents": "Mahesh Shinde wrote:\n> Hi\n> I am using Postgres version\n> **PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc \n> (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5).* *\n> for an multy user desktop application using VB 6.0 as a front \n> end toll.\n> \n> To connect To the PostgreSQL I am using **PostgreSQL Win32 ODBC and \n> OLEDB client drivers 1.0.0.2**\n\npgsql-jdbc isn't relevant, then -- the JDBC driver is not involved.\n\n-O\n", "msg_date": "Wed, 17 Aug 2005 12:29:13 +0000", "msg_from": "Oliver Jowett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data Selection Slow From VB 6.0" }, { "msg_contents": "Mahesh Shinde wrote:\n> Hi\n...\n\n> To connect To the PostgreSQL I am using **PostgreSQL Win32 ODBC and\n> OLEDB client drivers 1.0.0.2**\n> \n\n...\n\n> Since in the database I have one Major table that Debtor table which\n> is master table and having around 55 lac records. I have set debtorId as\n> a primary key having index on it.I am developing a search screen to\n> search a specific debtor info using this table.\n> \n> When I fire a query to search a debtor id, it took around 5 seconds to\n> return an answer for a query whether entered debtor id is present in the\n> database or not using ODBC. Where as when Explian the query on the\n> database\n> Index Scan using tbmstban_debtorid on tbmstbandetails (cost=0.00..6.01\n> rows=2 width=143)\n> Index Cond: ((debtorid)::text = '234'::text)\n\nAre you checking this from the VB App? Or just going onto the server and\nrunning psql? (I'm guessing there is some way to run a flat query using\nVB. In which case you can just have the query run EXPLAIN ANALYZE, the\nreturn value is just the text, one line after another.)\n\nWhat I'm thinking is that it might be a locale/encoding issue.\nWhat is the encoding of your database? And what is the default locale\nand the locale that you are connecting as?\n\nCan you give us the \"EXPLAIN ANALYZE\" output so that we can see if the\nplanner is doing what it thinks it is?\n\nIt certainly sounds like either it is always doing a sequential scan, or\nsomething else is going on. 
5 sec is a really long time for the type of\nquery you are doing.\n\nOh, and can you run the win32 psql client to see if it might be ODBC\nwhich is causing the problem?\n\nJohn\n=:->\n\n\n> \n> Query for the search criteria is\n> *select * from tbmstdebtordetails where debtorid ='234'*\n> \n> Where as when I am using a like query to search a record starting with\n> debtor id having a characters then it took around 10-15 sec to return a\n> record set having records.\n> query is \n> *select * from tbmstdebtordetails where debtorid like '234%'*\n> \n> Explain output on the database\n> Index Scan using tbmstban_debtorid on tbmstbandetails (cost=0.00..6.01\n> rows=2 width=143)\n> Index Cond: ((debtorid)::text = '234%'::text)\n> Thanks & regards,\n> Mahesh Shinde\n> ------------------------------------------------------------------\n> Codec Communications (I) Pvt. Ltd.\n> PUNE (INDIA)\n> T # 91-20-24221460/70(Ext 43)\n> Desk No. 25143\n> Email – [email protected] <mailto:[email protected]>", "msg_date": "Wed, 17 Aug 2005 10:18:35 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Data Selection Slow From VB 6.0" } ]
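One cause consistent with John's locale suspicion above: when the cluster was initialized in a non-C locale, a plain btree index cannot be used to satisfy LIKE 'prefix%' searches, so the prefix query falls back to scanning. PostgreSQL 7.4 provides the *_pattern_ops operator classes for exactly this case. The sketch below assumes the column is a varchar and uses the table name from the query in the post (the post shows both tbmstdebtordetails and tbmstbandetails; the index name here is made up):

    CREATE INDEX tbmstdebtor_debtorid_pattern
        ON tbmstdebtordetails (debtorid varchar_pattern_ops);

    ANALYZE tbmstdebtordetails;

    EXPLAIN ANALYZE
    SELECT * FROM tbmstdebtordetails WHERE debtorid LIKE '234%';

Running EXPLAIN ANALYZE (rather than plain EXPLAIN) also answers Chris's question: it reports actual execution time on the server, which separates database time from ODBC, network, and client-side overhead.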
[ { "msg_contents": "I have some questions about tuning my effective_cache_size\n\nI have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and\nthey have HT on). The box has 8GB memory. In my postgresql.conf, I\nhave set the effective_cache_size = 530505 (~4GB).\n\nHowever, I am noticing on this machine, top is telling me that I have\n~3.5GB in the buff and only 3GB in cached.\n\nHere are the exact numbers:\nMem: 7720040K av, 7714364K used, 5676K free, 314816K shrd, 3737540K buff\nSwap: 2096440K av, 119448K used, 1976992K free 3188192K cached\n\n\n1. Is the configuration that linux is running in hurting PostgreSQL in any way?\n\n2. Is there a negative impact of the effective_cache_size being\nlarger than the actual cached memory?\n\n3. What effect postive or negative does the buff memory have on PostgreSQL.\\\n\n4. What exactly is this buff memory used for? We have had a time\ntrying to find a good explanation of what it means.\n\nOverall, this system appears to be running fine. However, I was taken\nback when I saw the current memory configuration. We have been\nwalking a fine line performance wise, and can quickly become i/o\nstarved. So I want to make sure that I am not pushing the db towards\nthe i/o starved side.\n\nThanks for any insight,\n\nChris\n", "msg_date": "Wed, 17 Aug 2005 13:14:29 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning Effective Cache Question" }, { "msg_contents": "Sorry, forgot to state that we are still on PG 7.3.4.\n\nOn 8/17/05, Chris Hoover <[email protected]> wrote:\n> I have some questions about tuning my effective_cache_size\n> \n> I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and\n> they have HT on). The box has 8GB memory. In my postgresql.conf, I\n> have set the effective_cache_size = 530505 (~4GB).\n> \n> However, I am noticing on this machine, top is telling me that I have\n> ~3.5GB in the buff and only 3GB in cached.\n> \n> Here are the exact numbers:\n> Mem: 7720040K av, 7714364K used, 5676K free, 314816K shrd, 3737540K buff\n> Swap: 2096440K av, 119448K used, 1976992K free 3188192K cached\n> \n> \n> 1. Is the configuration that linux is running in hurting PostgreSQL in any way?\n> \n> 2. Is there a negative impact of the effective_cache_size being\n> larger than the actual cached memory?\n> \n> 3. What effect postive or negative does the buff memory have on PostgreSQL.\\\n> \n> 4. What exactly is this buff memory used for? We have had a time\n> trying to find a good explanation of what it means.\n> \n> Overall, this system appears to be running fine. However, I was taken\n> back when I saw the current memory configuration. We have been\n> walking a fine line performance wise, and can quickly become i/o\n> starved. So I want to make sure that I am not pushing the db towards\n> the i/o starved side.\n> \n> Thanks for any insight,\n> \n> Chris\n>\n", "msg_date": "Wed, 17 Aug 2005 13:17:33 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning Effective Cache Question" }, { "msg_contents": "Chris,\n\n> I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and\n> they have HT on). The box has 8GB memory. 
In my postgresql.conf, I\n> have set the effective_cache_size = 530505 (~4GB).\n>\n> However, I am noticing on this machine, top is telling me that I have\n> ~3.5GB in the buff and only 3GB in cached.\n\neffective_cache_size is just information for the query planner; it does not \naffect the actual system file cache.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Aug 2005 13:31:38 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Effective Cache Question" } ]
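Since effective_cache_size is only a planner hint, the usual practice is to set it to roughly the amount of file cache the OS sustains, expressed in 8 kB pages at the default block size. A rough worked example from the top output quoted above, counting both "buff" and "cached" as reclaimable cache (an approximation, not an exact rule):

    (3737540 kB buff + 3188192 kB cached) / 8 kB per page ≈ 865,000 pages

which would suggest something in the region of

    effective_cache_size = 865000    # ~6.6 GB of expected OS caching, in 8 kB pages

Overstating it never allocates any memory; it only nudges the planner toward index scans on the assumption that more blocks will already be cached.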
[ { "msg_contents": "At 05:15 AM 8/17/2005, Ulrich Wisser wrote:\n>Hello,\n>\n>thanks for all your suggestions.\n>\n>I can see that the Linux system is 90% waiting for disc io.\n\nA clear indication that you need to improve your HD IO subsystem if possible.\n\n\n>At that time all my queries are *very* slow.\n\nTo be more precise, your server performance at that point is \nessentially equal to your HD IO subsystem performance.\n\n\n> My scsi raid controller and disc are already the fastest available.\n\nOh, REALLY? This is the description of the system you gave us:\n\n\"We have a box with\nLinux Fedora Core 3, Postgres 7.4.2\nIntel(R) Pentium(R) 4 CPU 2.40GHz\n2 scsi 76GB disks (15.000RPM, 2ms)\"\n\n\nThe is far, Far, FAR from the \"the fastest available\" in terms of SW, \nOS, CPU host, _or_ HD subsystem.\n\nThe \"fastest available\" means\n1= you should be running PostgreSQL 8.0.3\n2= you should be running the latest stable 2.6 based kernel\n3= you should be running an Opteron based server\n4= Fibre Channel HDs are slightly higher performance than SCSI ones.\n5= (and this is the big one) YOU NEED MORE SPINDLES AND A HIGHER END \nRAID CONTROLLER.\n\nYour description of you workload was:\n\"one of our services is click counting for on line advertising. We do \nthis by importing Apache log files every five minutes. This results \nin a lot of insert and delete statements. At the same time our \ncustomers shall be able to do on line reporting.\"\n\nThere are two issues here:\n1= your primary usage is OLTP-like, but you are also expecting to do \nreports against the same schema that is supporting your OLTP-like \nusage. Bad Idea. Schemas that are optimized for reporting and other \ndata mining like operation are pessimal for OLTP-like applications \nand vice versa. You need two schemas: one optimized for lots of \ninserts and deletes (OLTP-like), and one optimized for reporting \n(data-mining like).\n\n2= 2 spindles, even 15K rpm spindles, is minuscule. Real enterprise \nclass RAID subsystems have at least 10-20x that many spindles, \nusually split into 6-12 sets dedicated to different groups of tables \nin the DB. Putting xlog on its own dedicated spindles is just the \nfirst step.\n\nThe absolute \"top of the line\" for RAID controllers is something \nbased on Fibre Channel from Xyratex (who make the RAID engines for \nEMC and NetApps), Engino (the enterprise division of LSI Logic who \nsell mostly to IBM. Apple has a server based on an Engino card), or \ndot-hill (who bought Chaparral among others). I suspect you can't \nafford them even if they would do business with you. The ante for a \nFC-based RAID subsystem in this class is in the ~$32K to ~$128K \nrange, even if you buy direct from the actual RAID HW manufacturer \nrather than an OEM like EMC, IBM, or NetApp who will 2x or 4x the \nprice. OTOH, these subsystems will provide OLTP or OLTP-like DB apps \nwith performance that is head-and-shoulders better than anything else \nto be found. Numbers like 50K-200K IOPS. You get what you pay for.\n\nIn the retail commodity market where you are more realistically going \nto be buying, the current best RAID controllers are probably the \nAreca cards ( www.areca.us ). They come darn close to saturating the \nReal World Peak Bandwidth of a 64b 133MHz PCI-X bus and have better \nIOPS numbers than their commodity brethren. 
However, _none_ of the \ncommodity RAID cards have IOPS numbers anywhere near as high as those \nmentioned above.\n\n\n>To avoid aggregating to many rows, I already made some aggregation \n>tables which will be updated after the import from the Apache \n>logfiles. That did help, but only to a certain level.\n>\n>I believe the biggest problem is disc io. Reports for very recent \n>data are quite fast, these are used very often and therefor already \n>in the cache. But reports can contain (and regulary do) very old \n>data. In that case the whole system slows down. To me this sounds \n>like the recent data is flushed out of the cache and now all data \n>for all queries has to be fetched from disc.\n\nI completely agree. Hopefully my above suggestions make sense and \nare of use to you.\n\n\n>My machine has 2GB memory,\n\n...and while we are at it, OLTP like apps benefit less from RAM than \ndata mining ones, but still 2GB of RAM is just not that much for a \nreal DB server...\n\n\nRon Peacetree\n\n\n", "msg_date": "Wed, 17 Aug 2005 14:33:20 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed" }, { "msg_contents": "On 8/17/05, Ron <[email protected]> wrote:\n> At 05:15 AM 8/17/2005, Ulrich Wisser wrote:\n> >Hello,\n> >\n> >thanks for all your suggestions.\n> >\n> >I can see that the Linux system is 90% waiting for disc io.\n...\n> 1= your primary usage is OLTP-like, but you are also expecting to do\n> reports against the same schema that is supporting your OLTP-like\n> usage. Bad Idea. Schemas that are optimized for reporting and other\n> data mining like operation are pessimal for OLTP-like applications\n> and vice versa. You need two schemas: one optimized for lots of\n> inserts and deletes (OLTP-like), and one optimized for reporting\n> (data-mining like).\n\nUlrich,\n\nIf you meant that your disc/scsi system is already the fastest\navailable *with your current budget* then following Ron's advise I\nquoted above will be a good step.\n\nI have some systems very similar to yours. What I do is import in\nbatches and then immediately pre-process the batch data into tables\noptimized for quick queries. For example, if your reports frequenly\nneed to find the total number of views per hour for each customer,\ncreate a table whose data contains just the totals for each customer\nfor each hour of the day. This will make it a tiny fraction of the\nsize, allowing it to fit largely in RAM for the query and making the\nindexes more efficient.\n\nThis is a tricky job, but if you do it right, your company will be a\nbig success and buy you more hardware to work with. Of course, they'll\nalso ask you to create dozens of new reports, but that's par for the\ncourse.\n\nEven if you have the budget for more hardware, I feel that creating an\neffective db structure is a much more elegant solution than to throw\nmore hardware. (I admit, sometimes its cheaper to throw more hardware)\n\nIf you have particular queries that are too slow, posting the explain\nanalyze for each on the list should garner some help.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Wed, 17 Aug 2005 15:33:52 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed" }, { "msg_contents": "Hello,\n\nI realize I need to be much more specific. 
Here is a more detailed\ndescription of my hardware and system design.\n\n\nPentium 4 2.4GHz\nMemory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR\nMotherboard chipset 'I865G', two IDE channels on board\n2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100\n(software raid 1, system, swap, pg_xlog)\nADAPTEC SCSI RAID 2100S ULTRA160 32MB 1-CHANNEL\n2x SEAGATE CHEETAH 15K.3 73GB ULTRA320 68-PIN WIDE\n(raid 1, /var/lib/pgsql)\n\nDatabase size on disc is 22GB. (without pg_xlog)\n\nPlease find my postgresql.conf below.\n\nPutting pg_xlog on the IDE drives gave about 10% performance\nimprovement. Would faster disks give more performance?\n\nWhat my application does:\n\nEvery five minutes a new logfile will be imported. Depending on the\nsource of the request it will be imported in one of three \"raw click\"\ntables. (data from two months back, to be able to verify customer complains)\nFor reporting I have a set of tables. These contain data from the last\ntwo years. My app deletes all entries from today and reinserts updated\ndata calculated from the raw data tables.\n\nThe queries contain no joins only aggregates. I have several indexes to \nspeed different kinds of queries.\n\nMy problems occur when one users does a report that contains to much old\ndata. In that case all cache mechanisms will fail and disc io is the\nlimiting factor.\n\nIf one query contains so much data, that a full table scan is needed, I \ndo not care if it takes two minutes to answer. But all other queries \nwith less data (at the same time) still have to be fast.\n\nI can not stop users doing that kind of reporting. :(\n\nI need more speed in orders of magnitude. Will more disks / more memory\ndo that trick?\n\nMoney is of course a limiting factor but it doesn't have to be real cheap.\n\nUlrich\n\n\n\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\ntcpip_socket = true\nmax_connections = 100\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from\nshared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#port = 5432\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults\nto any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 20000 # min 16, at least max_connections*2, \n8KB each\nsort_mem = 4096 # min 64, size in KB\nvacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 200000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 10000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD 
LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = false # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 128 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 16 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\nsyslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = info # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = info # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log,\nfatal,\n # panic\n\nlog_error_verbosity = verbose # terse, default, or verbose messages\n\nlog_min_error_statement = info # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2,\ndebug1,\n # info, notice, warning, error,\npanic(off)\n\nlog_min_duration_statement = 1000 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. 
Zero prints all queries.\n # Minus-one disables.\n\nsilent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\nlog_connections = true\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'en_US' # locale for system error message strings\nlc_monetary = 'en_US' # locale for monetary formatting\nlc_numeric = 'en_US' # locale for number formatting\nlc_time = 'en_US' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n", "msg_date": "Thu, 25 Aug 2005 09:10:37 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": false, "msg_subject": "Need for speed 2" }, { "msg_contents": "On Thu, 25 Aug 2005 09:10:37 +0200\nUlrich Wisser <[email protected]> wrote:\n\n> Pentium 4 2.4GHz\n> Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR\n> Motherboard chipset 'I865G', two IDE channels on board\n> 2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100\n> (software raid 1, system, swap, pg_xlog)\n> ADAPTEC SCSI RAID 2100S ULTRA160 32MB 1-CHANNEL\n> 2x SEAGATE CHEETAH 15K.3 73GB ULTRA320 68-PIN WIDE\n> (raid 1, /var/lib/pgsql)\n> \n> Database size on disc is 22GB. 
(without pg_xlog)\n> \n> Please find my postgresql.conf below.\n> \n> Putting pg_xlog on the IDE drives gave about 10% performance\n> improvement. Would faster disks give more performance?\n\n Faster as in RPM on your pg_xlog partition probably won't make\n much of a difference. However, if you can get a drive with better\n overall write performance then it would be a benefit. \n\n Another thing to consider on this setup is whether or not you're\n hitting swap often and/or logging to that same IDE RAID set. For\n optimal insertion benefit you want the heads of your disks to \n essentially be only used for pg_xlog. If you're having to jump\n around the disk in the following manner: \n\n write to pg_xlog\n read from swap\n write syslog data\n write to pg_xlog \n ...\n ...\n\n You probably aren't getting anywhere near the benefit you could. One\n thing you could easily try is to break your IDE RAID set and put \n OS/swap on one disk and pg_xlog on the other. \n\n> If one query contains so much data, that a full table scan is needed,\n> I do not care if it takes two minutes to answer. But all other\n> queries with less data (at the same time) still have to be fast.\n> \n> I can not stop users doing that kind of reporting. :(\n> \n> I need more speed in orders of magnitude. Will more disks / more\n> memory do that trick?\n\n More disk and more memory always helps out. Since you say these\n queries are mostly on not-often-used data I would lean toward more\n disks in your SCSI RAID-1 setup than maxing out available RAM based\n on the size of your database. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 25 Aug 2005 08:47:11 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 2" }, { "msg_contents": "At 03:10 AM 8/25/2005, Ulrich Wisser wrote:\n\n>I realize I need to be much more specific. Here is a more detailed\n>description of my hardware and system design.\n>\n>\n>Pentium 4 2.4GHz\n>Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR\n>Motherboard chipset 'I865G', two IDE channels on board\n\nFirst suggestion: Get better server HW. AMD Opteron based dual \nprocessor board is the current best in terms of price/performance \nratio, _particularly_ for DB applications like the one you have \ndescribed. Such mainboards cost ~$400-$500. RAM will cost about \n$75-$150/GB. Opteron 2xx are ~$200-$700 apiece. So a 2P AMD system \ncan be had for as little as ~$850 + the cost of the RAM you need. In \nthe worst case where you need 24GB of RAM (~$3600), the total comes \nin at ~$4450. As you can see from the numbers, buying only what RAM \nyou actually need can save you a great deal on money.\n\nGiven what little you said about how much of your DB is frequently \naccessed, I'd suggest buying a server based around the 2P 16 DIMM \nslot IWill DK88 mainboard (Tyan has announced a 16 DIMM slot \nmainboard, but I do not think it is actually being sold yet.). Then \nfill it with the minimum amount of RAM that will allow the \"working \nset\" of the DB to be cached in RAM. In the worst case where DB \naccess is essentially uniform and essentially random, you will need \n24GB of RAM to hold the 22GB DB + OS + etc. That worst case is \n_rare_. Usually DB's have a working set that is smaller than the \nentire DB. You want to keep that working set in RAM. 
If you can't \nidentify the working set, buy enough RAM to hold the entire DB.\n\nIn particular, you want to make sure that any frequently accessed \nread only tables or indexes are kept in RAM. The \"read only\" part is \nvery important. Tables (and their indexes) that are frequently \nwritten to _have_ to access HD. Therefore you get much less out of \nhaving them in RAM. Read only tables and their indexes can be loaded \ninto tmpfs at boot time thereby keeping out of the way of the file \nsystem buffer cache. tmpfs does not save data if the host goes down \nso it is very important that you ONLY use this trick with read only \ntables. The other half of the trick is to make sure that the file \nsystem buffer cache does _not_ cache whatever you have loaded into tmpfs.\n\n\n>2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100\n>(software raid 1, system, swap, pg_xlog)\n>ADAPTEC SCSI RAID 2100S ULTRA160 32MB 1-CHANNEL\n>2x SEAGATE CHEETAH 15K.3 73GB ULTRA320 68-PIN WIDE\n>(raid 1, /var/lib/pgsql)\n\nSecond suggestion: you need a MUCH better IO subsystem. In fact, \ngiven that you have described this system as being primarily OLTP \nlike, this is more important that the above server HW. Best would be \nto upgrade everything, but if you are strapped for cash, upgrade the \nIO subsystem first.\n\nYou need many more spindles and a decent RAID card or cards. You \nwant 15Krpm (best) or 10Krpm HDs. As long as all of the HD's are at \nleast 10Krpm, more spindles is more important than faster \nspindles. If it's a choice between more 10Krpm discs or fewer 15Krpm \ndiscs, buy the 10Krpm discs. Get the spindle count as high as you \nRAID cards can handle.\n\nWhatever RAID cards you get should have as much battery backed write \nbuffer as possible. In the commodity market, presently the highest \nperformance RAID cards I know of, and the ones that support the \nlargest battery backed write buffer, are made by Areca.\n\n\n>Database size on disc is 22GB. (without pg_xlog)\n\nFind out what the working set, ie the most frequently accessed \nportion, of this 22GB is and you will know how much RAM is worth \nhaving. 4GB is definitely too little!\n\n\n>Please find my postgresql.conf below.\n\nThird suggestion: make sure you are running a 2.6 based kernel and \nat least PG 8.0.3. Helping beta test PG 8.1 might be an option for \nyou as well.\n\n\n>Putting pg_xlog on the IDE drives gave about 10% performance \n>improvement. Would faster disks give more performance?\n>\n>What my application does:\n>\n>Every five minutes a new logfile will be imported. Depending on the \n>source of the request it will be imported in one of three \"raw click\"\n>tables. (data from two months back, to be able to verify customer \n>complains) For reporting I have a set of tables. These contain data \n>from the last two years. My app deletes all entries from today and \n>reinserts updated data calculated from the raw data tables.\n\nThe raw data tables seem to be read only? If so, you should buy \nenough RAM to load them into tmpfs at boot time and have them be \ncompletely RAM resident in addition to having enough RAM for the OS \nto cache an appropriate amount of the rest of the DB.\n\n\n>The queries contain no joins only aggregates. I have several indexes \n>to speed different kinds of queries.\n>\n>My problems occur when one users does a report that contains too \n>much old data. 
In that case all cache mechanisms will fail and disc \n>io is the limiting factor.\n>\n>If one query contains so much data, that a full table scan is \n>needed, I do not care if it takes two minutes to answer. But all \n>other queries with less data (at the same time) still have to be fast.\n\nHDs can only do one thing at once. If they are in the middle of a \nfull table scan, everything else that requires HD access is going to \nwait until it is done.\n\nAt some point, looking at your DB schema and queries will be worth it \nfor optimization purposes. Right now, you HW is so underpowered \ncompared to the demands you are placing on it that there's little \npoint to SW tuning.\n\n>I can not stop users doing that kind of reporting. :(\n>\n>I need more speed in orders of magnitude. Will more disks / more \n>memory do that trick?\n\nIf you do the right things with them ;)\n\n>Money is of course a limiting factor but it doesn't have to be real cheap.\n>\n>Ulrich\n>\n>\n>\n>\n>\n># -----------------------------\n># PostgreSQL configuration file\n># -----------------------------\n>#---------------------------------------------------------------------------\n># CONNECTIONS AND AUTHENTICATION\n>#---------------------------------------------------------------------------\n>\n># - Connection Settings -\n>\n>tcpip_socket = true\n>max_connections = 100\n> # note: increasing max_connections costs about 500 bytes of shared\n> # memory per connection slot, in addition to costs from \n> shared_buffers\n> # and max_locks_per_transaction.\n>#superuser_reserved_connections = 2\n>#port = 5432\n>#unix_socket_directory = ''\n>#unix_socket_group = ''\n>#unix_socket_permissions = 0777 # octal\n>#virtual_host = '' # what interface to listen on; defaults to any\n>#rendezvous_name = '' # defaults to the computer name\n>\n># - Security & Authentication -\n>\n>#authentication_timeout = 60 # 1-600, in seconds\n>#ssl = false\n>#password_encryption = true\n>#krb_server_keyfile = ''\n>#db_user_namespace = false\n>\n>\n>#---------------------------------------------------------------------------\n># RESOURCE USAGE (except WAL)\n>#---------------------------------------------------------------------------\n>\n># - Memory -\n>\n>shared_buffers = 20000 # min 16, at least max_connections*2, 8KB each\n>sort_mem = 4096 # min 64, size in KB\n\n4MB seems small. 
Find out how much memory you usually need for a \nsort, and how many sorts you are usually doing at once to set this to \na sane size.\n\n\n>vacuum_mem = 8192 # min 1024, size in KB\n>\n># - Free Space Map -\n>\n>max_fsm_pages = 200000 # min max_fsm_relations*16, 6 bytes each\n>max_fsm_relations = 10000 # min 100, ~50 bytes each\n>\n># - Kernel Resource Usage -\n>\n>#max_files_per_process = 1000 # min 25\n>#preload_libraries = ''\n>\n>\n>#---------------------------------------------------------------------------\n># WRITE AHEAD LOG\n>#---------------------------------------------------------------------------\n>\n># - Settings -\n>\n>fsync = false # turns forced synchronization on or off\n>#wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n\nI hope you have a battery backed write buffer!\n\n>open_datasync\n>wal_buffers = 128 # min 4, 8KB each\n\nThere might be a better value for you to use.\n\nI'll hold off on looking at the rest of this...\n\n># - Checkpoints -\n>\n>checkpoint_segments = 16 # in logfile segments, min 1, 16MB each\n>#checkpoint_timeout = 300 # range 30-3600, in seconds\n>#checkpoint_warning = 30 # 0 is off, in seconds\n>#commit_delay = 0 # range 0-100000, in microseconds\n>#commit_siblings = 5 # range 1-1000\n>\n>\n>#---------------------------------------------------------------------------\n># QUERY TUNING\n>#---------------------------------------------------------------------------\n>\n># - Planner Method Enabling -\n>\n>#enable_hashagg = true\n>#enable_hashjoin = true\n>#enable_indexscan = true\n>#enable_mergejoin = true\n>#enable_nestloop = true\n>#enable_seqscan = true\n>#enable_sort = true\n>#enable_tidscan = true\n>\n># - Planner Cost Constants -\n>\n>#effective_cache_size = 1000 # typically 8KB each\n>#random_page_cost = 4 # units are one sequential page fetch cost\n>#cpu_tuple_cost = 0.01 # (same)\n>#cpu_index_tuple_cost = 0.001 # (same)\n>#cpu_operator_cost = 0.0025 # (same)\n>\n># - Genetic Query Optimizer -\n>\n>#geqo = true\n>#geqo_threshold = 11\n>#geqo_effort = 1\n>#geqo_generations = 0\n>#geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n>#geqo_selection_bias = 2.0 # range 1.5-2.0\n>\n># - Other Planner Options -\n>\n>#default_statistics_target = 10 # range 1-1000\n>#from_collapse_limit = 8\n>#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n>\n>\n>#---------------------------------------------------------------------------\n># ERROR REPORTING AND LOGGING\n>#---------------------------------------------------------------------------\n>\n># - Syslog -\n>\n>syslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog\n>syslog_facility = 'LOCAL0'\n>syslog_ident = 'postgres'\n>\n># - When to Log -\n>\n>client_min_messages = info # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # log, info, notice, warning, error\n>\n>log_min_messages = info # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # info, notice, warning, error, log,\n>fatal,\n> # panic\n>\n>log_error_verbosity = verbose # terse, default, or verbose messages\n>\n>log_min_error_statement = info # Values in order of increasing severity:\n> # debug5, debug4, debug3, debug2,\n>debug1,\n> # info, notice, warning, error,\n>panic(off)\n>\n>log_min_duration_statement = 1000 # Log all statements whose\n> # execution time exceeds the value, in\n> # milliseconds. 
Zero prints all queries.\n> # Minus-one disables.\n>\n>silent_mode = false # DO NOT USE without Syslog!\n>\n># - What to Log -\n>\n>#debug_print_parse = false\n>#debug_print_rewritten = false\n>#debug_print_plan = false\n>#debug_pretty_print = false\n>log_connections = true\n>#log_duration = false\n>#log_pid = false\n>#log_statement = false\n>#log_timestamp = false\n>#log_hostname = false\n>#log_source_port = false\n>\n>\n>#---------------------------------------------------------------------------\n># RUNTIME STATISTICS\n>#---------------------------------------------------------------------------\n>\n># - Statistics Monitoring -\n>\n>#log_parser_stats = false\n>#log_planner_stats = false\n>#log_executor_stats = false\n>#log_statement_stats = false\n>\n># - Query/Index Statistics Collector -\n>\n>#stats_start_collector = true\n>#stats_command_string = false\n>#stats_block_level = false\n>#stats_row_level = false\n>#stats_reset_on_server_start = true\n>\n>\n>#---------------------------------------------------------------------------\n># CLIENT CONNECTION DEFAULTS\n>#---------------------------------------------------------------------------\n>\n># - Statement Behavior -\n>\n>#search_path = '$user,public' # schema names\n>#check_function_bodies = true\n>#default_transaction_isolation = 'read committed'\n>#default_transaction_read_only = false\n>#statement_timeout = 0 # 0 is disabled, in milliseconds\n>\n># - Locale and Formatting -\n>\n>#datestyle = 'iso, mdy'\n>#timezone = unknown # actually, defaults to TZ environment\n>setting\n>#australian_timezones = false\n>#extra_float_digits = 0 # min -15, max 2\n>#client_encoding = sql_ascii # actually, defaults to database encoding\n>\n># These settings are initialized by initdb -- they may be changed\n>lc_messages = 'en_US' # locale for system error message strings\n>lc_monetary = 'en_US' # locale for monetary formatting\n>lc_numeric = 'en_US' # locale for number formatting\n>lc_time = 'en_US' # locale for time formatting\n>\n># - Other Defaults -\n>\n>#explain_pretty_print = true\n>#dynamic_library_path = '$libdir'\n>#max_expr_depth = 10000 # min 10\n>\n>\n>#---------------------------------------------------------------------------\n># LOCK MANAGEMENT\n>#---------------------------------------------------------------------------\n>\n>#deadlock_timeout = 1000 # in milliseconds\n>#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n>\n>\n>#---------------------------------------------------------------------------\n># VERSION/PLATFORM COMPATIBILITY\n>#---------------------------------------------------------------------------\n>\n># - Previous Postgres Versions -\n>\n>#add_missing_from = true\n>#regex_flavor = advanced # advanced, extended, or basic\n>#sql_inheritance = true\n>\n># - Other Platforms & Clients -\n>\n>#transform_null_equals = false\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n\n\n\n", "msg_date": "Thu, 25 Aug 2005 11:16:33 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed 2" }, { "msg_contents": "On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:\n> ># - Settings -\n> >\n> >fsync = false # turns forced synchronization on or off\n> >#wal_sync_method = fsync # the default varies across platforms:\n> > # fsync, fdatasync, open_sync, or\n> \n> I hope you have a battery backed write buffer!\n\nBattery backed write buffer will do nothing here, because the OS is\ntaking it's sweet time 
flushing to the controller's battery backed write\nbuffer!\n\nIsn't the reason for batter backed controller cache to make fsync()s\nfast?\n\n-K\n", "msg_date": "Thu, 25 Aug 2005 14:00:22 -0500", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 2" }, { "msg_contents": "I have found that while the OS may flush to the controller fast with \nfsync=true, the controller does as it pleases (it has BBU, so I'm not too \nworried), so you get great performance because your controller is determine \nread/write sequence outside of what is being demanded by an fsync.\n\nAlex Turner\nNetEconomist\n\nOn 8/25/05, Kelly Burkhart <[email protected]> wrote:\n> \n> On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:\n> > ># - Settings -\n> > >\n> > >fsync = false # turns forced synchronization on or off\n> > >#wal_sync_method = fsync # the default varies across platforms:\n> > > # fsync, fdatasync, open_sync, or\n> >\n> > I hope you have a battery backed write buffer!\n> \n> Battery backed write buffer will do nothing here, because the OS is\n> taking it's sweet time flushing to the controller's battery backed write\n> buffer!\n> \n> Isn't the reason for batter backed controller cache to make fsync()s\n> fast?\n> \n> -K\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nI have found that while the OS may flush to the controller fast with\nfsync=true, the controller does as it pleases (it has BBU, so I'm not\ntoo worried), so you get great performance because your controller is\ndetermine read/write sequence outside of what is being demanded by an\nfsync.\n\nAlex Turner\nNetEconomistOn 8/25/05, Kelly Burkhart <[email protected]> wrote:\nOn Thu, 2005-08-25 at 11:16 -0400, Ron wrote:> ># - Settings -> >>\n>fsync =\nfalse                  \n# turns forced synchronization on or off> >#wal_sync_method = fsync        # the default varies across platforms:>\n>                                \n# fsync, fdatasync, open_sync, or>> I hope you have a battery backed write buffer!Battery backed write buffer will do nothing here, because the OS istaking it's sweet time flushing to the controller's battery backed write\nbuffer!Isn't the reason for batter backed controller cache to make fsync()sfast?-K---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to\n       choose an index scan if your joining column's datatypes do not       match", "msg_date": "Tue, 20 Sep 2005 11:35:58 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 2" } ]
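Matthew's summary-table suggestion from earlier in this thread can be sketched in plain SQL. All table and column names below are hypothetical; the point is that the five-minute import rolls raw click rows up into one row per customer per hour, so routine reports never have to touch the large raw tables:

    -- hypothetical reporting table
    CREATE TABLE clicks_hourly (
        customer_id integer   NOT NULL,
        hour        timestamp NOT NULL,
        views       bigint    NOT NULL,
        PRIMARY KEY (customer_id, hour)
    );

    -- after each import: delete-and-reinsert the current day's rows,
    -- matching the scheme Ulrich already uses for his reporting tables
    BEGIN;
    DELETE FROM clicks_hourly WHERE hour >= current_date;
    INSERT INTO clicks_hourly (customer_id, hour, views)
    SELECT customer_id, date_trunc('hour', click_time), count(*)
      FROM raw_clicks
     WHERE click_time >= current_date
     GROUP BY customer_id, date_trunc('hour', click_time);
    COMMIT;

A report that needs per-customer hourly totals then reads clicks_hourly, which is a small fraction of the raw data and far more likely to stay in cache, while the occasional deep-history query can still fall through to the raw tables.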
[ { "msg_contents": "Hello,\nDoing some testing on upcoming 8.1 devel and am having serious issues\nwith new bitmap index scan feature. It is easy to work around (just\ndisable it) but IMO the planner is using it when a regular index scan\nshould be strongly favored. The performance of the bitmapscan in my\nusage is actually quite a bit worse than a full sequential scan.\n\nhere is a query which does this:\nexplain analyze execute\ndata1_read_next_product_structure_file_0('012241', '', '', '002', 1);\n\nHere is the 8.0/bitmap off plan:\nLimit (cost=0.00..45805.23 rows=5722 width=288) (actual\ntime=0.070..0.072 rows=1 loops=1)\n -> Index Scan using product_structure_file_pkey on\nproduct_structure_file (cost=0.00..45805.23 rows=5722 width=288)\n(actual time=0.063..0.063 row\ns=1 loops=1)\n Index Cond: ((ps_parent_code)::text >= ($1)::text)\n Filter: ((((ps_parent_code)::text > ($1)::text) OR\n(ps_group_code >= $2)) AND (((ps_parent_code)::text > ($1)::text) OR\n(ps_group_code > $2)\nOR ((ps_section_code)::text >= ($3)::text)) AND (((ps_parent_code)::text\n> ($1)::text) OR (ps_group_code > $2) OR ((ps_section_code)::text >\n($3)::tex\nt) OR ((ps_seq_no)::smallint > $4)))\n Total runtime: 0.185 ms\n\nHere is the 8.1 with bitamp on:\nLimit (cost=3768.32..3782.63 rows=5722 width=288) (actual\ntime=2287.488..2287.490 rows=1 loops=1)\n -> Sort (cost=3768.32..3782.63 rows=5722 width=288) (actual\ntime=2287.480..2287.480 rows=1 loops=1)\n Sort Key: ps_parent_code, ps_group_code, ps_section_code,\nps_seq_no\n -> Bitmap Heap Scan on product_structure_file\n(cost=187.84..3411.20 rows=5722 width=288) (actual time=19.977..514.532\nrows=47355 loops=1)\n Recheck Cond: ((ps_parent_code)::text >= ($1)::text)\n Filter: ((((ps_parent_code)::text > ($1)::text) OR\n(ps_group_code >= $2)) AND (((ps_parent_code)::text > ($1)::text) OR\n(ps_group_code\n> $2) OR ((ps_section_code)::text >= ($3)::text)) AND\n(((ps_parent_code)::text > ($1)::text) OR (ps_group_code > $2) OR\n((ps_section_code)::text > ($3\n)::text) OR ((ps_seq_no)::smallint > $4)))\n -> Bitmap Index Scan on product_structure_file_pkey\n(cost=0.00..187.84 rows=18239 width=0) (actual time=19.059..19.059\nrows=47356 loo\nps=1)\n Index Cond: ((ps_parent_code)::text >= ($1)::text)\n Total runtime: 2664.034 ms\n\n\nHere is the prepared statement definition:\nprepare data1_read_next_product_structure_file_0 (character varying,\ncharacter, character varying, int4, int4)\n\tas select 1::int4, * from data1.product_structure_file\n\twhere ps_parent_code >= $1 and \n\t\t(ps_parent_code > $1 or ps_group_code >= $2) and \n\t\t(ps_parent_code > $1 or ps_group_code > $2 or\nps_section_code >= $3) and \n\t\t(ps_parent_code > $1 or ps_group_code > $2 or\nps_section_code > $3 or ps_seq_no > $4) \n\torder by ps_parent_code, ps_group_code, ps_section_code,\nps_seq_no\n\tlimit $5\n\nAside: this is the long way of writing\nselect 1::int4, * from data1.product_structure_file where\n(ps_parent_code, ps_group_code, ps_section_code, ps_seq_no) > ($1, $2,\n$3, $4) limit %5\n\nwhich is allowed in pg but returns the wrong answer.\n\nMerlin\n", "msg_date": "Wed, 17 Aug 2005 16:40:34 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "bitmap scan issues 8.1 devel" }, { "msg_contents": "Merlin,\n\n>    ->  Index Scan using product_structure_file_pkey on\n> product_structure_file  (cost=0.00..45805.23 rows=5722 width=288)\n> (actual time=0.063..0.063 row\n> s=1 loops=1)\n\nIt appears that your DB is estimating the number of rows returned 
much too \nhigh (5722 instead of 1). Please raise the statistics on all columns to \nabout 500, analyze, and try your test again.\n\nThanks!\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Aug 2005 14:33:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bitmap scan issues 8.1 devel" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Doing some testing on upcoming 8.1 devel and am having serious issues\n> with new bitmap index scan feature. It is easy to work around (just\n> disable it) but IMO the planner is using it when a regular index scan\n> should be strongly favored.\n\nI think blaming the bitmap code is the wrong response. What I see in\nyour example is that the planner doesn't know what the LIMIT value is,\nand accordingly is favoring a plan that isn't going to get blown out of\nthe water if the LIMIT is large. I'd suggest not parameterizing the\nLIMIT.\n\n(But hmm ... I wonder if we could use estimate_expression_value for\nLIMIT items, instead of handling only simple Consts as the code does\nnow?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Aug 2005 17:54:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bitmap scan issues 8.1 devel " } ]
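A sketch of Tom's suggestion: keep the key columns as parameters but put the LIMIT into the statement text, so its value is known when the plan is built. Identifiers are taken from the post; the literal 1 stands in for the old $5 and is an assumption:

    prepare data1_read_next_product_structure_file_1
            (character varying, character, character varying, int4) as
      select 1::int4, * from data1.product_structure_file
       where ps_parent_code >= $1
         and (ps_parent_code > $1 or ps_group_code >= $2)
         and (ps_parent_code > $1 or ps_group_code > $2 or ps_section_code >= $3)
         and (ps_parent_code > $1 or ps_group_code > $2 or ps_section_code > $3
              or ps_seq_no > $4)
       order by ps_parent_code, ps_group_code, ps_section_code, ps_seq_no
       limit 1;

With the limit visible, the planner can again prefer the ordered index scan that stops after a row or two instead of building and sorting a large bitmap heap result.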
[ { "msg_contents": "Hello all,\n\nis there a simple way to limit the number of concurrent callers to a \nstored proc?\n\nThe problem we have is about 50 clients come and perform the same \noperation at nearly the same time. Typically, this query takes a few \nseconds to run, but in the case of this thundering herd the query time \ndrops to 70 seconds or much more. The query can return up to 15MB of data.\n\nThe machine is a dual opteron, 8 GB memory, lots of fiber channel disk, \nLinux 2.6, etc.\n\nSo, I'm thinking that a semaphore than will block more than N clients \nfrom being in the core of the function at one time would be a good thing. \n\nThanks!\n\n-- Alan\n", "msg_date": "Wed, 17 Aug 2005 21:40:20 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "limit number of concurrent callers to a stored proc?" }, { "msg_contents": "Hi Alan,\n\nOn Wed, 17 Aug 2005, Alan Stange wrote:\n\n> Hello all,\n>\n> is there a simple way to limit the number of concurrent callers to a\n> stored proc?\n>\n> The problem we have is about 50 clients come and perform the same\n> operation at nearly the same time. Typically, this query takes a few\n> seconds to run, but in the case of this thundering herd the query time\n> drops to 70 seconds or much more. The query can return up to 15MB of data.\n>\n> The machine is a dual opteron, 8 GB memory, lots of fiber channel disk,\n> Linux 2.6, etc.\n>\n> So, I'm thinking that a semaphore than will block more than N clients\n> from being in the core of the function at one time would be a good thing.\n\nThere is no PostgreSQL feature which will do this for you. It should be\npossible to implement this yourself, without too much pain. If you're\nusing PL/PgSQL, write another function in C or one of the other more\nsophisticated PLs to implement the logic for you. At the beginning of the\nfunction, execute the function to increment the count; at the end, execute\na function to decrement it.\n\nIf you're writing the function in C or one of those more sophisticated\nPLs, it's even easier.\n\nAs an aside, using semaphores might be a little painful. I'd just grab\nsome shared memory and keep a counter in it. If the counter is greater\nthan your desired number of concurrent executions, you sleep and try again\nsoon.\n\nThat being said, did you want to give us a look at your function and data\nand see if we can improve the performance at all?\n\nThanks,\n\nGavin\n", "msg_date": "Thu, 18 Aug 2005 12:30:24 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit number of concurrent callers to a stored proc?" }, { "msg_contents": "You could use a 1 column/1 row table perhaps. Use some sort of locking \nmechanism.\n\nAlso, check out contrib/userlock\n\nChris\n\nAlan Stange wrote:\n> Hello all,\n> \n> is there a simple way to limit the number of concurrent callers to a \n> stored proc?\n> \n> The problem we have is about 50 clients come and perform the same \n> operation at nearly the same time. Typically, this query takes a few \n> seconds to run, but in the case of this thundering herd the query time \n> drops to 70 seconds or much more. 
The query can return up to 15MB of data.\n> \n> The machine is a dual opteron, 8 GB memory, lots of fiber channel disk, \n> Linux 2.6, etc.\n> \n> So, I'm thinking that a semaphore than will block more than N clients \n> from being in the core of the function at one time would be a good thing.\n> Thanks!\n> \n> -- Alan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Thu, 18 Aug 2005 10:36:07 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit number of concurrent callers to a stored proc?" }, { "msg_contents": "At 09:40 PM 8/17/2005, Alan Stange wrote:\n\n>is there a simple way to limit the number of concurrent callers to a \n>stored proc?\n>\n>The problem we have is about 50 clients come and perform the same \n>operation at nearly the same time. Typically, this query takes a \n>few seconds to run, but in the case of this thundering herd the \n>query time drops to 70 seconds or much more. The query can return \n>up to 15MB of data.\n\nI'm assuming there is some significant write activity going on at \nsome point as a result of the query, since MVCC should not care about \nconcurrent read activity?\n\nIs that \"a few seconds each query\" or \"a few seconds total if we run \n50 queries sequentially but 70+ seconds per query if we try to run 50 \nqueries concurrently\"?\n\nA) If the former, \"a few seconds\" * 50 can easily be 70+ seconds, and \nthings are what you should expect. Getting higher performance in \nthat situation means reducing per query times, which may or may not \nbe easy. Looking at the stored procedure code with an eye towards \noptimization would be a good place to start.\n\nB) If the later, then table access contention is driving performance \ninto the ground, and there are a few things you can try:\n1= lock the table(s) under these circumstances so only one query of \nthe 50 can be acting on it at a time. If the table(s) is/are small \nenough to be made RAM resident, this may be a particularly low-cost, \nlow-effort, reasonable solution.\n\n2= put a queue into place and only let some small number n of queries \nrun against the table(s) concurrently. Adjust n until you get best \nperformance. There are a few ways this could be done.\n\n3= Buy a SSD and put the table(s) in question on it. IIRC, 3.5\" \nformat SSDs that can \"drop in\" replace HDs are available in up to \n147GB capacities.\n\n\n>The machine is a dual opteron, 8 GB memory, lots of fiber channel \n>disk, Linux 2.6, etc.\n>\n>So, I'm thinking that a semaphore than will block more than N \n>clients from being in the core of the function at one time would be \n>a good thing.\n\nThis will only help in case \"B\" above. If you go the \"hard\" route of \nusing systems programming, you will have a lot of details that must \nbe paid attention to correctly or Bad Things (tm) will \nhappen. Putting the semaphore in place is the tip of the iceberg.\n\n\nHope this helps,\nRon Peacetree\n\n\n\n", "msg_date": "Wed, 17 Aug 2005 23:19:04 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit number of concurrent callers to a stored" } ]
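A minimal sketch of the counter-based gate discussed above (Gavin's counter idea and Christopher's single-row-table idea). All object names are hypothetical, the limit of 8 concurrent callers is arbitrary, and the application must call the release function even when the big query fails:

    create table proc_gate (active integer not null);
    insert into proc_gate values (0);

    create function gate_acquire(integer) returns boolean as '
    begin
        -- the row lock taken by UPDATE briefly serialises concurrent callers;
        -- FOUND comes back false when the gate is already full
        update proc_gate set active = active + 1 where active < $1;
        return found;
    end;
    ' language plpgsql;

    create function gate_release() returns boolean as '
    begin
        update proc_gate set active = active - 1;
        return found;
    end;
    ' language plpgsql;

Client side: loop on select gate_acquire(8) until it returns true (sleeping briefly between attempts), run the expensive procedure, then always run select gate_release(). The acquire call should commit in its own short transaction; holding it open inside the long-running transaction would keep the row lock and serialise everyone behind a single caller.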
[ { "msg_contents": "I just put together a system with 6GB of ram on a 14 disk raid 10 array.\nWhen I run my usual big painful queries, I get very little to know\nmemory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\nmost of the time. the new devel box sits at around 250MB. \n\nI've switched to an 8.0 system on the new devel box, but the .conf\nreally didn't change. Index usage is the same. Something seems wrong and\nI'm not sure why. \n\n\nany thoughts,\n-jj-\n\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\nwork_mem = 2097151 # min 64, size in KB\nmaintenance_work_mem = 819200 # min 1024, size in KB\nmax_fsm_pages = 80000 # min max_fsm_relations*16, 6 bytes each\ncheckpoint_segments = 30 # in logfile segments, min 1, 16MB each\neffective_cache_size = 3600000 <-----this is a little out of control, but would it have any real effect?\nrandom_page_cost = 2 # units are one sequential page fetch cost\nlog_min_duration_statement = 10000 # -1 is disabled, in milliseconds.\nlc_messages = 'C' # locale for system error message strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n\n\n-- \n\"Now this is a totally brain damaged algorithm. Gag me with a\nsmurfette.\"\n -- P. Buhr, Computer Science 354\n\n", "msg_date": "Wed, 17 Aug 2005 21:11:48 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> I just put together a system with 6GB of ram on a 14 disk raid 10 array.\n> When I run my usual big painful queries, I get very little to know\n> memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\n> most of the time. the new devel box sits at around 250MB.\n>\n> I've switched to an 8.0 system on the new devel box, but the .conf\n> really didn't change. Index usage is the same. Something seems wrong and\n> I'm not sure why.\n>\n\nHow big is your actual database on disk? And how much of it is actually\ntouched by your queries?\n\nIt seems that your tough queries might only be exercising a portion of\nthe database. If you really want to make memory usage increase try\nsomething like:\nfind . -type f -print0 | xargs -0 cat >/dev/null\nWhich should read all the files. After doing that, does the memory usage\nincrease?\n\n>\n> any thoughts,\n> -jj-\n>\n>\n> shared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\n> work_mem = 2097151 # min 64, size in KB\n\nThis seems awfully high. 2GB Per sort? This might actually be flushing\nsome of your ram, since it would get allocated and filled, and then\nfreed when finished. 
Remember, depending on what you are doing, this\namount can get allocated more than once per query.\n\n> maintenance_work_mem = 819200 # min 1024, size in KB\n> max_fsm_pages = 80000 # min max_fsm_relations*16, 6 bytes each\n> checkpoint_segments = 30 # in logfile segments, min 1, 16MB each\n> effective_cache_size = 3600000 <-----this is a little out of control, but would it have any real effect?\n\nIt should just tell the planner that it is more likely to have buffers\nin cache, so index scans are slightly cheaper than they would otherwise be.\n\n> random_page_cost = 2 # units are one sequential page fetch cost\n> log_min_duration_statement = 10000 # -1 is disabled, in milliseconds.\n> lc_messages = 'C' # locale for system error message strings\n> lc_monetary = 'C' # locale for monetary formatting\n> lc_numeric = 'C' # locale for number formatting\n> lc_time = 'C' # locale for time formatting\n>\n\nJohn\n=:->", "msg_date": "Wed, 17 Aug 2005 21:21:04 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "\nOn Aug 17, 2005, at 10:11 PM, Jeremiah Jahn wrote:\n\n> I just put together a system with 6GB of ram on a 14 disk raid 10 \n> array.\n> When I run my usual big painful queries, I get very little to know\n> memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\n> most of the time. the new devel box sits at around 250MB.\n>\n\nIs the system performing fine? Are you touching as much data as the \nproduction box?\n\nIf the system is performing fine don't worry about it.\n\n> work_mem = 2097151 # min 64, size in KB\n\nThis is EXTREMELY high. You realize this is the amount of memory \nthat can be used per-sort and per-hash build in a query? You can end \nup with multiples of this on a single query. If you have some big \nqueries that are run infrequently have them set it manually.\n\n> effective_cache_size = 3600000 <-----this is a little out of \n> control, but would it have any real effect?\n\nThis doesn't allocate anything - it is a hint to the planner about \nhow much data it can assume is cached.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Thu, 18 Aug 2005 09:00:31 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Wed, 2005-08-17 at 21:21 -0500, John A Meinel wrote:\n> Jeremiah Jahn wrote:\n> > I just put together a system with 6GB of ram on a 14 disk raid 10 array.\n> > When I run my usual big painful queries, I get very little to know\n> > memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\n> > most of the time. the new devel box sits at around 250MB.\n> >\n> > I've switched to an 8.0 system on the new devel box, but the .conf\n> > really didn't change. Index usage is the same. Something seems wrong and\n> > I'm not sure why.\n> >\n> \n> How big is your actual database on disk? And how much of it is actually\n> touched by your queries?\nThe DB is about 60GB. About 10GB is actually used in real queries,\nversus get me this single record with this ID. I have a large query that\nfinds court cases based on certain criteria that is name based. I get a\nfull seq scan on the name table in about 7 seconds, This table has about\n6 million names (most being 'smith, something'). The index scan takes\nmuch less time of course, once it's been cached (somewhere but not\napparently memory). 
The really query can take 60 seconds on a first run.\nAnd 1.3 seconds on a second run. I'm very happy with the cached results,\njust not really sure where that caching is happening since it doesn't\nshow up as memory usage. I do know that the caching that happens seems\nto be independent of the DB. I can restart the DB and my speeds are\nstill the same as the cached second query. Is there some way to\npre-cache some of the tables/files on the file system? If I switch my\nquery to search for 'jones%' instead of 'smith%', I take a hit. But if I\nthen rerun the smith search, I still get cached speed. I only have two\ntables essentially names and events that have to do any real work ie.\nnot very atomic data. I'd love to be able to force these two tables into\na cache somewhere. This is a linux system (RHEL ES4) by the way. \n> \n> It seems that your tough queries might only be exercising a portion of\n> the database. If you really want to make memory usage increase try\n> something like:\n> find . -type f -print0 | xargs -0 cat >/dev/null\n> Which should read all the files. After doing that, does the memory usage\n> increase?\n> \n> >\n> > any thoughts,\n> > -jj-\n> >\n> >\n> > shared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\n> > work_mem = 2097151 # min 64, size in KB\n> \n> This seems awfully high. 2GB Per sort? This might actually be flushing\n> some of your ram, since it would get allocated and filled, and then\n> freed when finished. Remember, depending on what you are doing, this\n> amount can get allocated more than once per query.\nWhat's a good way to determine the optimal size?\n\n> \n> > maintenance_work_mem = 819200 # min 1024, size in KB\n> > max_fsm_pages = 80000 # min max_fsm_relations*16, 6 bytes each\n> > checkpoint_segments = 30 # in logfile segments, min 1, 16MB each\n> > effective_cache_size = 3600000 <-----this is a little out of control, but would it have any real effect?\n> \n> It should just tell the planner that it is more likely to have buffers\n> in cache, so index scans are slightly cheaper than they would otherwise be.\n> \n> > random_page_cost = 2 # units are one sequential page fetch cost\n> > log_min_duration_statement = 10000 # -1 is disabled, in milliseconds.\n> > lc_messages = 'C' # locale for system error message strings\n> > lc_monetary = 'C' # locale for monetary formatting\n> > lc_numeric = 'C' # locale for number formatting\n> > lc_time = 'C' # locale for time formatting\n> >\n> \n> John\n> =:->\n-- \n\"Now this is a totally brain damaged algorithm. Gag me with a\nsmurfette.\"\n -- P. Buhr, Computer Science 354\n\n", "msg_date": "Thu, 18 Aug 2005 11:35:11 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n\n>On Wed, 2005-08-17 at 21:21 -0500, John A Meinel wrote:\n> \n>\n>>Jeremiah Jahn wrote:\n>> \n>>\n>>>I just put together a system with 6GB of ram on a 14 disk raid 10 array.\n>>>When I run my usual big painful queries, I get very little to know\n>>>memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\n>>>most of the time. the new devel box sits at around 250MB.\n>>>\n>>>I've switched to an 8.0 system on the new devel box, but the .conf\n>>>really didn't change. Index usage is the same. Something seems wrong and\n>>>I'm not sure why.\n>>>\n>>> \n>>>\n>>How big is your actual database on disk? And how much of it is actually\n>>touched by your queries?\n>> \n>>\n>The DB is about 60GB. 
About 10GB is actually used in real queries,\n>versus get me this single record with this ID. I have a large query that\n>finds court cases based on certain criteria that is name based. I get a\n>full seq scan on the name table in about 7 seconds, This table has about\n>6 million names (most being 'smith, something'). The index scan takes\n>much less time of course, once it's been cached (somewhere but not\n>apparently memory). The really query can take 60 seconds on a first run.\n>And 1.3 seconds on a second run. I'm very happy with the cached results,\n>just not really sure where that caching is happening since it doesn't\n>show up as memory usage. I do know that the caching that happens seems\n>to be independent of the DB. I can restart the DB and my speeds are\n>still the same as the cached second query. Is there some way to\n>pre-cache some of the tables/files on the file system? If I switch my\n>query to search for 'jones%' instead of 'smith%', I take a hit. But if I\n>then rerun the smith search, I still get cached speed. I only have two\n>tables essentially names and events that have to do any real work ie.\n>not very atomic data. I'd love to be able to force these two tables into\n>a cache somewhere. This is a linux system (RHEL ES4) by the way. \n> \n>\nI think what is happening is that *some* of the index pages are being\ncached, just not all of them. Most indexes (if you didn't specify\nanything special) are btree, so that you load the root page, and then\ndetermine what pages need to be loaded from there. So the \"jones%\" pages\naren't anywhere near the \"smith%\" pages. And don't need to be loaded if\nyou aren't accessing them.\n\nSo the required memory usage might be smaller than you think. At least\nuntil all of the index pages have been accessed.\n\nThe reason it is DB independent is because the OS is caching a file\naccess (you read a file, it keeps the old pages in RAM in case you ask\nfor it again).\n\nPart of the trick, is that as you use the database, it will cache what\nhas been used. So you may not need to do anything. It should sort itself\nout with time.\nHowever, if you have to have cached performance as soon as your machine\nreboots, you could figure out what files on disk represent your indexes\nand tables, and then just \"cat $files >/dev/null\"\nThat should cause a read on those files, which should pull them into the\nmemory cache. *However* this will fail if the size of those files is\ngreater than available memory, so you may want to be a little bit stingy\nabout what you preload.\nAlternatively, you could just write an SQL script which runs a bunch of\nindexed queries to make sure all the pages get loaded.\n\nSomething like:\nFOR curname IN SELECT DISTINCT name FROM users LOOP\n SELECT name FROM users WHERE name=curname;\nEND LOOP;\n\nThat should make the database go through the entire table, and load the\nindex for every user. This is overkill, and will probably take a long\ntime to execute. But you could do it if you wanted.\n\n>>It seems that your tough queries might only be exercising a portion of\n>>the database. If you really want to make memory usage increase try\n>>something like:\n>>find . -type f -print0 | xargs -0 cat >/dev/null\n>>Which should read all the files. After doing that, does the memory usage\n>>increase?\n>>\n>> \n>>\n>>>any thoughts,\n>>>-jj-\n>>>\n>>>\n>>>shared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\n>>>work_mem = 2097151 # min 64, size in KB\n>>> \n>>>\n>>This seems awfully high. 2GB Per sort? 
This might actually be flushing\n>>some of your ram, since it would get allocated and filled, and then\n>>freed when finished. Remember, depending on what you are doing, this\n>>amount can get allocated more than once per query.\n>> \n>>\n>What's a good way to determine the optimal size?\n> \n>\n\nPractice. :) A few questions I guess...\nHow many concurrent connections are you expecting? How many joins does a\nstandard query have? How big are the joins?\n\nIn general, I would tend to make this a smaller number, so that the os\nhas more room to cache tables, rather than having big buffers for joins.\nIf someone is requesting a join that requires a lot of rows, I would\nrather *that* query be slower, than impacting everyone else.\nI would put it more with a maximum in the 20-100MB range.\n\nJohn\n=:->\n\n> \n>\n>>>maintenance_work_mem = 819200 # min 1024, size in KB\n>>>max_fsm_pages = 80000 # min max_fsm_relations*16, 6 bytes each\n>>>checkpoint_segments = 30 # in logfile segments, min 1, 16MB each\n>>>effective_cache_size = 3600000 <-----this is a little out of control, but would it have any real effect?\n>>> \n>>>\n>>It should just tell the planner that it is more likely to have buffers\n>>in cache, so index scans are slightly cheaper than they would otherwise be.\n>>\n>> \n>>\n>>>random_page_cost = 2 # units are one sequential page fetch cost\n>>>log_min_duration_statement = 10000 # -1 is disabled, in milliseconds.\n>>>lc_messages = 'C' # locale for system error message strings\n>>>lc_monetary = 'C' # locale for monetary formatting\n>>>lc_numeric = 'C' # locale for number formatting\n>>>lc_time = 'C' # locale for time formatting\n>>>\n>>> \n>>>\n>>John\n>>=:->\n>> \n>>", "msg_date": "Thu, 18 Aug 2005 12:14:01 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "here's an example standard query. Ireally have to make the first hit go\nfaster. The table is clustered as well on full_name as well. 'Smith%'\ntook 87 seconds on the first hit. I wonder if I set up may array wrong.\nI remeber see something about DMA access versus something else, and\nchoose DMA access. 
LVM maybe?\n\nexplain analyze select distinct case_category,identity_id,court.name,litigant_details.case_id,case_year,date_of_birth,assigned_case_role,litigant_details.court_ori,full_name,litigant_details.actor_id,case_data.type_code,case_data.subtype_code,litigant_details.impound_litigant_data, to_number(trim(leading case_data.type_code from trim(leading case_data.case_year from case_data.case_id)),'999999') as seq from identity,court,litigant_details,case_data where identity.court_ori = litigant_details.court_ori and identity.case_id = litigant_details.case_id and identity.actor_id = litigant_details.actor_id and court.id = identity.court_ori and identity.court_ori = case_data.court_ori and case_data.case_id = identity.case_id and identity.court_ori = 'IL081025J' and full_name like 'MILLER%' order by full_name;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=20411.84..20411.91 rows=2 width=173) (actual time=38340.231..38355.120 rows=4906 loops=1)\n -> Sort (cost=20411.84..20411.84 rows=2 width=173) (actual time=38340.227..38343.667 rows=4906 loops=1)\n Sort Key: identity.full_name, case_data.case_category, identity.identity_id, court.name, litigant_details.case_id, case_data.case_year, identity.date_of_birth, litigant_details.assigned_case_role, litigant_details.court_ori, litigant_details.actor_id, case_data.type_code, case_data.subtype_code, litigant_details.impound_litigant_data, to_number(ltrim(ltrim((case_data.case_id)::text, (case_data.case_year)::text), (case_data.type_code)::text), '999999'::text)\n -> Nested Loop (cost=0.00..20411.83 rows=2 width=173) (actual time=12.891..38317.017 rows=4906 loops=1)\n -> Nested Loop (cost=0.00..20406.48 rows=1 width=159) (actual time=12.826..23232.106 rows=4906 loops=1)\n -> Nested Loop (cost=0.00..20403.18 rows=1 width=138) (actual time=12.751..22885.439 rows=4906 loops=1)\n Join Filter: ((\"outer\".case_id)::text = (\"inner\".case_id)::text)\n -> Index Scan using name_speed on identity (cost=0.00..1042.34 rows=4868 width=82) (actual time=0.142..52.538 rows=4915 loops=1)\n Index Cond: (((full_name)::text >= 'MILLER'::character varying) AND ((full_name)::text < 'MILLES'::character varying))\n Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'MILLER%'::text))\n -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..3.96 rows=1 width=81) (actual time=4.631..4.635 rows=1 loops=4915)\n Index Cond: ((\"outer\".actor_id)::text = (litigant_details.actor_id)::text)\n Filter: ('IL081025J'::text = (court_ori)::text)\n -> Seq Scan on court (cost=0.00..3.29 rows=1 width=33) (actual time=0.053..0.062 rows=1 loops=4906)\n Filter: ('IL081025J'::text = (id)::text)\n -> Index Scan using case_speed on case_data (cost=0.00..5.29 rows=3 width=53) (actual time=3.049..3.058 rows=1 loops=4906)\n Index Cond: (('IL081025J'::text = (case_data.court_ori)::text) AND ((case_data.case_id)::text = (\"outer\".case_id)::text))\n Total runtime: 38359.722 ms\n(18 rows)\n\ncopa=> explain analyze select distinct 
case_category,identity_id,court.name,litigant_details.case_id,case_year,date_of_birth,assigned_case_role,litigant_details.court_ori,full_name,litigant_details.actor_id,case_data.type_code,case_data.subtype_code,litigant_details.impound_litigant_data, to_number(trim(leading case_data.type_code from trim(leading case_data.case_year from case_data.case_id)),'999999') as seq from identity,court,litigant_details,case_data where identity.court_ori = litigant_details.court_ori and identity.case_id = litigant_details.case_id and identity.actor_id = litigant_details.actor_id and court.id = identity.court_ori and identity.court_ori = case_data.court_ori and case_data.case_id = identity.case_id and identity.court_ori = 'IL081025J' and full_name like 'MILLER%' order by full_name;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=20411.84..20411.91 rows=2 width=173) (actual time=666.832..688.081 rows=4906 loops=1)\n -> Sort (cost=20411.84..20411.84 rows=2 width=173) (actual time=666.825..671.833 rows=4906 loops=1)\n Sort Key: identity.full_name, case_data.case_category, identity.identity_id, court.name, litigant_details.case_id, case_data.case_year, identity.date_of_birth, litigant_details.assigned_case_role, litigant_details.court_ori, litigant_details.actor_id, case_data.type_code, case_data.subtype_code, litigant_details.impound_litigant_data, to_number(ltrim(ltrim((case_data.case_id)::text, (case_data.case_year)::text), (case_data.type_code)::text), '999999'::text)\n -> Nested Loop (cost=0.00..20411.83 rows=2 width=173) (actual time=0.216..641.366 rows=4906 loops=1)\n -> Nested Loop (cost=0.00..20406.48 rows=1 width=159) (actual time=0.149..477.063 rows=4906 loops=1)\n -> Nested Loop (cost=0.00..20403.18 rows=1 width=138) (actual time=0.084..161.045 rows=4906 loops=1)\n Join Filter: ((\"outer\".case_id)::text = (\"inner\".case_id)::text)\n -> Index Scan using name_speed on identity (cost=0.00..1042.34 rows=4868 width=82) (actual time=0.047..37.898 rows=4915 loops=1)\n Index Cond: (((full_name)::text >= 'MILLER'::character varying) AND ((full_name)::text < 'MILLES'::character varying))\n Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'MILLER%'::text))\n -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..3.96 rows=1 width=81) (actual time=0.015..0.017 rows=1 loops=4915)\n Index Cond: ((\"outer\".actor_id)::text = (litigant_details.actor_id)::text)\n Filter: ('IL081025J'::text = (court_ori)::text)\n -> Seq Scan on court (cost=0.00..3.29 rows=1 width=33) (actual time=0.049..0.056 rows=1 loops=4906)\n Filter: ('IL081025J'::text = (id)::text)\n -> Index Scan using case_speed on case_data (cost=0.00..5.29 rows=3 width=53) (actual time=0.017..0.020 rows=1 loops=4906)\n Index Cond: (('IL081025J'::text = (case_data.court_ori)::text) AND ((case_data.case_id)::text = (\"outer\".case_id)::text))\n Total runtime: 694.639 ms\n(18 rows)\n\n\n\nOn Thu, 2005-08-18 at 09:00 -0400, Jeff Trout wrote:\n> On Aug 17, 2005, at 10:11 PM, Jeremiah Jahn wrote:\n> \n> > I just put together a system with 6GB of ram on a 14 disk raid 10 \n> 
> array.\n> > When I run my usual big painful queries, I get very little to know\n> > memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used\n> > most of the time. the new devel box sits at around 250MB.\n> >\n> \n> Is the system performing fine? Are you touching as much data as the \n> production box?\n> \n> If the system is performing fine don't worry about it.\n> \n> > work_mem = 2097151 # min 64, size in KB\n> \n> This is EXTREMELY high. You realize this is the amount of memory \n> that can be used per-sort and per-hash build in a query? You can end \n> up with multiples of this on a single query. If you have some big \n> queries that are run infrequently have them set it manually.\n> \n> > effective_cache_size = 3600000 <-----this is a little out of \n> > control, but would it have any real effect?\n> \n> This doesn't allocate anything - it is a hint to the planner about \n> how much data it can assume is cached.\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n-- \n\"Now this is a totally brain damaged algorithm. Gag me with a\nsmurfette.\"\n -- P. Buhr, Computer Science 354\n\n", "msg_date": "Thu, 18 Aug 2005 12:39:21 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n\n>here's an example standard query. Ireally have to make the first hit go\n>faster. The table is clustered as well on full_name as well. 'Smith%'\n>took 87 seconds on the first hit. I wonder if I set up may array wrong.\n>I remeber see something about DMA access versus something else, and\n>choose DMA access. LVM maybe?\n> \n>\nIt would be nice if you would format your queries to be a little bit\neasier to read before posting them.\nHowever, I believe I am reading it correctly, to say that the index scan\non identity is not your slow point. 
In fact, as near as I can tell, it\nonly takes 52ms to complete.\n\nThe expensive parts are the 4915 lookups into the litigant_details (each\none takes approx 4ms for a total of ~20s).\nAnd then you do it again on case_data (average 3ms each * 4906 loops =\n~15s).\n\nSo there is no need for preloading your indexes on the identity table.\nIt is definitely not the bottleneck.\n\nSo a few design bits, which may help your database.\nWhy is \"actor_id\" a text field instead of a number?\nYou could try creating an index on \"litigant_details (actor_id,\ncount_ori)\" so that it can do just an index lookup, rather than an index\n+ filter.\n\nMore importantly, though, the planner seems to think the join of\nidentity to litigant_details will only return 1 row, not 5000.\nDo you regularly vacuum analyze your tables?\nJust as a test, try running:\nset enable_nested_loop to off;\nAnd then run EXPLAIN ANALYZE again, just to see if it is faster.\n\nYou probably need to increase some statistics targets, so that the\nplanner can design better plans.\n\n> -> Nested Loop (cost=0.00..20411.83 rows=2 width=173)\n> (actual time=12.891..38317.017 rows=4906 loops=1)\n> -> Nested Loop (cost=0.00..20406.48 rows=1 width=159)\n> (actual time=12.826..23232.106 rows=4906 loops=1)\n> -> Nested Loop (cost=0.00..20403.18 rows=1\n> width=138) (actual time=12.751..22885.439 rows=4906 loops=1)\n> Join Filter: ((\"outer\".case_id)::text =\n> (\"inner\".case_id)::text)\n> -> Index Scan using name_speed on\n> identity (cost=0.00..1042.34 rows=4868 width=82) (actual\n> time=0.142..52.538 rows=4915 loops=1)\n> Index Cond: (((full_name)::text >=\n> 'MILLER'::character varying) AND ((full_name)::text <\n> 'MILLES'::character varying))\n> Filter: (((court_ori)::text =\n> 'IL081025J'::text) AND ((full_name)::text ~~ 'MILLER%'::text))\n> -> Index Scan using lit_actor_speed on\n> litigant_details (cost=0.00..3.96 rows=1 width=81) (actual\n> time=4.631..4.635 rows=1 loops=4915)\n> Index Cond: ((\"outer\".actor_id)::text\n> = (litigant_details.actor_id)::text)\n> Filter: ('IL081025J'::text =\n> (court_ori)::text)\n> -> Seq Scan on court (cost=0.00..3.29 rows=1\n> width=33) (actual time=0.053..0.062 rows=1 loops=4906)\n> Filter: ('IL081025J'::text = (id)::text)\n> -> Index Scan using case_speed on case_data \n> (cost=0.00..5.29 rows=3 width=53) (actual time=3.049..3.058 rows=1\n> loops=4906)\n> Index Cond: (('IL081025J'::text =\n> (case_data.court_ori)::text) AND ((case_data.case_id)::text =\n> (\"outer\".case_id)::text))\n\n\nJohn\n=:->", "msg_date": "Thu, 18 Aug 2005 12:55:03 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "At 01:55 PM 8/18/2005, John Arbash Meinel wrote:\n>Jeremiah Jahn wrote:\n>\n> >here's an example standard query. Ireally have to make the first hit go\n> >faster. The table is clustered as well on full_name as well. 'Smith%'\n> >took 87 seconds on the first hit. I wonder if I set up may array wrong.\n> >I remeber see something about DMA access versus something else, and\n> >choose DMA access. LVM maybe?\n> >\n> >\n>It would be nice if you would format your queries to be a little bit\n>easier to read before posting them.\n>However, I believe I am reading it correctly, to say that the index scan\n>on identity is not your slow point. 
In fact, as near as I can tell, it\n>only takes 52ms to complete.\n>\n>The expensive parts are the 4915 lookups into the litigant_details (each\n>one takes approx 4ms for a total of ~20s).\n>And then you do it again on case_data (average 3ms each * 4906 loops =\n>~15s).\n\nHow big are litigant_details and case_data? If they can fit in RAM, \npreload them using methods like the \"cat to /dev/null\" trick and \nthose table lookups will be ~100-1000x faster. If they won't fit \ninto RAM but the machine can be expanded to hold enough RAM to fit \nthe tables, it's well worth the ~$75-$150/GB to upgrade the server so \nthat the tables will fit into RAM.\n\nIf they can't be made to fit into RAM as atomic entities, you have a \nfew choices:\nA= Put the data tables and indexes on separate dedicated spindles and \nput litigant_details and case_data each on their own dedicated \nspindles. This will lower seek conflicts. Again it this requires \nbuying some more HDs, it's well worth it.\n\nB= Break litigant_details and case_data into a set of smaller tables \n(based on something sane like the first n characters of the primary key)\nsuch that the smaller tables easily fit into RAM. Given that you've \nsaid only 10GB/60GB is \"hot\", this could work very well. Combine it \nwith \"A\" above (put all the litigant_details sub tables on one \ndedicated spindle set and all the case_data sub tables on another \nspindle set) for added oomph.\n\nC= Buy a SSD big enough to hold litigant_details and case_data and \nput them there. Again, this can be combined with \"A\" and \"B\" above \nto lessen the size of the SSD needed.\n\n\n>So there is no need for preloading your indexes on the identity \n>table. It is definitely not the bottleneck.\n>\n>So a few design bits, which may help your database. 
Why is \n>\"actor_id\" a text field instead of a number?\n>You could try creating an index on \"litigant_details (actor_id, \n>count_ori)\" so that it can do just an index lookup, rather than an \n>index+ filter.\n\nYes, that certainly sounds like it would be more efficient.\n\n\n>More importantly, though, the planner seems to think the join of \n>identity to litigant_details will only return 1 row, not 5000.\n>Do you regularly vacuum analyze your tables?\n>Just as a test, try running:\n>set enable_nested_loop to off;\n>And then run EXPLAIN ANALYZE again, just to see if it is faster.\n>\n>You probably need to increase some statistics targets, so that the\n>planner can design better plans.\n>\n> > -> Nested Loop (cost=0.00..20411.83 rows=2 width=173) (actual \n> time=12.891..38317.017 rows=4906 loops=1)\n> > -> Nested Loop (cost=0.00..20406.48 rows=1 width=159)(actual \n> time=12.826..23232.106 rows=4906 loops=1)\n> > -> Nested Loop (cost=0.00..20403.18 rows=1 width=138) \n> (actual time=12.751..22885.439 rows=4906 loops=1)\n> > Join Filter: ((\"outer\".case_id)::text = \n> (\"inner\".case_id)::text)\n> > -> Index Scan using name_speed on \n> identity (cost=0.00..1042.34 rows=4868 width=82) (actual time=0.142..52.538\n> > rows=4915 loops=1)\n> > Index Cond: (((full_name)::text >= \n> 'MILLER'::character varying) AND ((full_name)::text < \n> 'MILLES'::character varying))\n> > Filter: (((court_ori)::text = \n> 'IL081025J'::text) AND ((full_name)::text ~~ 'MILLER%'::text))\n> > -> Index Scan using lit_actor_speed on \n> litigant_details (cost=0.00..3.96 rows=1 width=81) (actual\n> > time=4.631..4.635 rows=1 loops=4915)\n> > Index Cond: ((\"outer\".actor_id)::text = \n> (litigant_details.actor_id)::text)\n> > Filter: ('IL081025J'::text = (court_ori)::text)\n> > -> Seq Scan on court (cost=0.00..3.29 \n> rows=1 width=33) (actual time=0.053..0.062 rows=1 loops=4906)\n> > Filter: ('IL081025J'::text = (id)::text)\n> > -> Index Scan using case_speed on \n> case_data (cost=0.00..5.29 rows=3 width=53) (actual time=3.049..3.058\n> > rows=1 loops=4906)\n> > Index Cond: (('IL081025J'::text \n> = (case_data.court_ori)::text) AND ((case_data.case_id)::text =\n> > (\"outer\".case_id)::text))\n\n\n\n", "msg_date": "Thu, 18 Aug 2005 15:56:53 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Sorry about the formatting. \n\nOn Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> Jeremiah Jahn wrote:\n> \n> >here's an example standard query. Ireally have to make the first hit go\n> >faster. The table is clustered as well on full_name as well. 'Smith%'\n> >took 87 seconds on the first hit. I wonder if I set up may array wrong.\n> >I remeber see something about DMA access versus something else, and\n> >choose DMA access. LVM maybe?\n> > \n> >\n> It would be nice if you would format your queries to be a little bit\n> easier to read before posting them.\n> However, I believe I am reading it correctly, to say that the index scan\n> on identity is not your slow point. In fact, as near as I can tell, it\n> only takes 52ms to complete.\n> \n> The expensive parts are the 4915 lookups into the litigant_details (each\n> one takes approx 4ms for a total of ~20s).\n> And then you do it again on case_data (average 3ms each * 4906 loops =\n> ~15s).\nIs there some way to avoid this? 
\n\n\n> \n> So there is no need for preloading your indexes on the identity table.\n> It is definitely not the bottleneck.\n> \n> So a few design bits, which may help your database.\n> Why is \"actor_id\" a text field instead of a number?\nThis is simply due to the nature of the data.\n\n> You could try creating an index on \"litigant_details (actor_id,\n> count_ori)\" so that it can do just an index lookup, rather than an index\n> + filter.\nI have one, but it doesn't seem to like to use it. Don't really need it\nthough, I can just drop the court_id out of the query. It's redundant,\nsince each actor_id is also unique in litigant details. I had run vac\nfull and analyze but I ran them again anyway and the planning improved.\nHowever, my 14 disk raid 10 array is still slower than my 3 disk raid 5\non my production box. 46sec vs 30sec (with live traffic on the\nproduction) One of the strange things is that when I run the cat command\non my index and tables that are \"HOT\" it has no effect on memory usage.\nRight now I'm running ext3 on LVM. I'm still in a position to redo the\nfile system and everything. Is this a good way to do it or should I\nswitch to something else? What about stripe and extent sizes...? kernel\nparameters to change? \n\n\n\n---------------devel box:-----------------------\n\ncopa=# EXPLAIN ANALYZE select full_name,identity_id,identity.case_id,court.id,date_of_birth,assigned_case_role,litigant_details.impound_litigant_data\ncopa-# from identity\ncopa-# join litigant_details on identity.actor_id = litigant_details.actor_id\ncopa-# join case_data on litigant_details.case_id = case_data.case_id and litigant_details.court_ori = case_data.court_ori\ncopa-# join court on identity.court_ori = court.id\ncopa-# where identity.court_ori = 'IL081025J' and full_name like 'JONES%' order by full_name;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=3.29..29482.22 rows=3930 width=86) (actual time=114.060..46001.480 rows=5052 loops=1)\n -> Nested Loop (cost=3.29..16193.27 rows=3820 width=112) (actual time=93.038..24584.275 rows=5052 loops=1)\n -> Nested Loop (cost=0.00..16113.58 rows=3820 width=113) (actual time=85.778..24536.489 rows=5052 loops=1)\n -> Index Scan using name_speed on identity (cost=0.00..824.72 rows=3849 width=82) (actual time=50.284..150.133 rows=5057 loops=1)\n Index Cond: (((full_name)::text >= 'JONES'::character varying) AND ((full_name)::text < 'JONET'::character varying))\n Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'JONES%'::text))\n -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..3.96 rows=1 width=81) (actual time=4.788..4.812 rows=1 loops=5057)\n Index Cond: ((\"outer\".actor_id)::text = (litigant_details.actor_id)::text)\n -> Materialize (cost=3.29..3.30 rows=1 width=12) (actual time=0.002..0.003 rows=1 loops=5052)\n -> Seq Scan on court (cost=0.00..3.29 rows=1 width=12) (actual time=7.248..7.257 rows=1 loops=1)\n Filter: ('IL081025J'::text = (id)::text)\n -> Index Scan using case_speed on case_data (cost=0.00..3.46 rows=1 width=26) (actual time=4.222..4.230 rows=1 loops=5052)\n Index Cond: (((\"outer\".court_ori)::text = (case_data.court_ori)::text) AND ((\"outer\".case_id)::text = (case_data.case_id)::text))\n Total runtime: 46005.994 ms\n\n\n\n> \n> More importantly, though, the planner seems to think the join of\n> identity to litigant_details will only return 1 row, 
not 5000.\n> Do you regularly vacuum analyze your tables?\n> Just as a test, try running:\n> set enable_nested_loop to off;\nnot quite acceptable\nTotal runtime: 221486.149 ms\n\n> And then run EXPLAIN ANALYZE again, just to see if it is faster.\n> \n> You probably need to increase some statistics targets, so that the\n> planner can design better plans.\n\n---------------------this is the output from the production box------------------\nLOG: duration: 27213.068 ms statement: EXPLAIN ANALYZE select full_name,identity_id,identity.case_id,court.id,date_of_birth,assigned_case_role,litigant_details.impound_litigant_data\n from identity\n join litigant_details on identity.actor_id = litigant_details.actor_id\n join case_data on litigant_details.case_id = case_data.case_id and litigant_details.court_ori = case_data.court_ori\n join court on identity.court_ori = court.id\n where identity.court_ori = 'IL081025J' and full_name like 'JONES%' order by full_name;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=3.29..43498.76 rows=2648 width=86) (actual time=17.106..27192.000 rows=5052 loops=1)\n -> Nested Loop (cost=0.00..43442.53 rows=2647 width=87) (actual time=16.947..27120.619 rows=5052 loops=1)\n -> Nested Loop (cost=0.00..23061.79 rows=3827 width=113) (actual time=16.801..17390.682 rows=5052 loops=1)\n -> Index Scan using name_speed on identity (cost=0.00..1277.39 rows=3858 width=82) (actual time=9.842..213.424 rows=5057 loops=1)\n Index Cond: (((full_name)::text >= 'JONES'::character varying) AND ((full_name)::text < 'JONET'::character varying))\n Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'JONES%'::text))\n -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..5.63 rows=1 width=81) (actual time=3.355..3.364 rows=1 loops=5057)\n Index Cond: ((\"outer\".actor_id)::text = (litigant_details.actor_id)::text)\n -> Index Scan using case_data_pkey on case_data (cost=0.00..5.31 rows=1 width=26) (actual time=1.897..1.904 rows=1 loops=5052)\n Index Cond: (((\"outer\".court_ori)::text = (case_data.court_ori)::text) AND ((\"outer\".case_id)::text = (case_data.case_id)::text))\n -> Materialize (cost=3.29..3.30 rows=1 width=12) (actual time=0.002..0.003 rows=1 loops=5052)\n -> Seq Scan on court (cost=0.00..3.29 rows=1 width=12) (actual time=0.142..0.165 rows=1 loops=1)\n Filter: ('IL081025J'::text = (id)::text)\n Total runtime: 27205.060 ms\n\n> \n> \n> John\n> =:->\n> \n-- \n\"I didn't know it was impossible when I did it.\"\n\n", "msg_date": "Fri, 19 Aug 2005 11:48:29 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> Sorry about the formatting.\n>\n> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n>\n>>Jeremiah Jahn wrote:\n>>\n>>\n\n...\n\n>>The expensive parts are the 4915 lookups into the litigant_details (each\n>>one takes approx 4ms for a total of ~20s).\n>>And then you do it again on case_data (average 3ms each * 4906 loops =\n>>~15s).\n>\n> Is there some way to avoid this?\n>\n\nWell, in general, 3ms for a single lookup seems really long. Maybe your\nindex is bloated by not vacuuming often enough. Do you tend to get a lot\nof updates to litigant_details?\n\nThere are a couple possibilities at this point. First, you can REINDEX\nthe appropriate index, and see if that helps. 
However, if this is a test\nbox, it sounds like you just did a dump and reload, which wouldn't have\nbloat in an index.\n\nAnother possibility. Is this the column that you usually use when\npulling information out of litigant_details? If so, you can CLUSTER\nlitigant_details on the appropriate index. This will help things be\nclose together that should be, which decreases the index lookup costs.\n\nHowever, if this is not the common column, then you probably will slow\ndown whatever other accesses you may have on this table.\n\nAfter CLUSTER, the current data will stay clustered, but new data will\nnot, so you have to continually CLUSTER, the same way that you might\nVACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\nexpensive as a VACUUM FULL. Be aware of this, but it might vastly\nimprove your performance, so it would be worth it.\n\n>\n>\n>>So there is no need for preloading your indexes on the identity table.\n>>It is definitely not the bottleneck.\n>>\n>>So a few design bits, which may help your database.\n>>Why is \"actor_id\" a text field instead of a number?\n>\n> This is simply due to the nature of the data.\n>\n\nI'm just wondering if changing into a number, and using a number->name\nlookup would be faster for you. It may not be. In general, I prefer to\nuse numbers for references. I may be over paranoid, but I know that some\nlocales are bad with string -> string comparisons. And since the data in\nyour database is stored as UNICODE, I'm not sure if it has to do any\ntranslating or not. Again, something to consider, it may not make any\ndifference.\n\n\n>\n>>You could try creating an index on \"litigant_details (actor_id,\n>>count_ori)\" so that it can do just an index lookup, rather than an index\n>>+ filter.\n>\n> I have one, but it doesn't seem to like to use it. Don't really need it\n> though, I can just drop the court_id out of the query. It's redundant,\n> since each actor_id is also unique in litigant details. I had run vac\n> full and analyze but I ran them again anyway and the planning improved.\n> However, my 14 disk raid 10 array is still slower than my 3 disk raid 5\n> on my production box. 46sec vs 30sec (with live traffic on the\n> production) One of the strange things is that when I run the cat command\n> on my index and tables that are \"HOT\" it has no effect on memory usage.\n> Right now I'm running ext3 on LVM. I'm still in a position to redo the\n> file system and everything. Is this a good way to do it or should I\n> switch to something else? What about stripe and extent sizes...? kernel\n> parameters to change?\n\nWell, the plans are virtually identical. There is one small difference\nas to whether it joins against case_data or court first. 
But 'court' is\nvery tiny (small enough to use a seqscan instead of index scan) I'm a\nlittle surprised with court being this small that it doesn't do\nsomething like a hash aggregation, but court takes no time anyway.\n\nThe real problem is that your nested loop index time is *much* slower.\n\nDevel:\n-> Index Scan using lit_actor_speed on litigant_details\n\t(cost=0.00..3.96 rows=1 width=81)\n\t(actual time=4.788..4.812 rows=1 loops=5057)\n\nProduction:\n-> Index Scan using lit_actor_speed on litigant_details\n\t(cost=0.00..5.63 rows=1 width=81)\n\t(actual time=3.355..3.364 rows=1 loops=5057)\n\nDevel:\n-> Index Scan using case_speed on case_data\n\t(cost=0.00..3.46 rows=1 width=26)\n\t(actual time=4.222..4.230 rows=1 loops=5052)\n\nProduction:\n-> Index Scan using case_data_pkey on case_data\n\t(cost=0.00..5.31 rows=1 width=26)\n\t(actual time=1.897..1.904 rows=1 loops=5052)\n\nNotice that the actual per-row cost is as much as 1/2 less than on your\ndevel box.\n\nAs a test, can you do \"time cat $index_file >/dev/null\" a couple of\ntimes. And then determine the MB/s.\nAlternatively run vmstat in another shell. If the read/s doesn't change,\nthen you know the \"cat\" is being served from RAM, and thus it really is\ncached.\n\nI can point you to REINDEX and CLUSTER, but if it is caching in ram, I\nhonestly can't say why the per loop would be that much slower.\nAre both systems running the same postgres version? It sounds like it is\ndifferent (since you say something about switching to 8.0).\nI doubt it, but you might try an 8.1devel version.\n\n...\n\n>>Do you regularly vacuum analyze your tables?\n>>Just as a test, try running:\n>>set enable_nested_loop to off;\n>\n> not quite acceptable\n> Total runtime: 221486.149 ms\n>\n\nWell, the estimates are now at least closer (3k vs 5k instead of 1), and\nit is still choosing nested loops. So they probably are faster.\nI would still be interested in the actual EXPLAIN ANALYZE with nested\nloops disabled. It is possible that *some* of the nested loops are\nperforming worse than they have to.\nBut really, you have worse index speed, and that needs to be figured out.\n\nJohn\n=:->", "msg_date": "Fri, 19 Aug 2005 12:18:42 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "At 01:18 PM 8/19/2005, John A Meinel wrote:\n>Jeremiah Jahn wrote:\n> > Sorry about the formatting.\n> >\n> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> >\n> >>Jeremiah Jahn wrote:\n> >>\n> >>\n>\n>...\n>\n> >>The expensive parts are the 4915 lookups into the litigant_details (each\n> >>one takes approx 4ms for a total of ~20s).\n> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> >>~15s).\n> >\n> > Is there some way to avoid this?\n> >\n>\n>Well, in general, 3ms for a single lookup seems really long. Maybe your\n>index is bloated by not vacuuming often enough. Do you tend to get a lot\n>of updates to litigant_details?\n\nGiven that the average access time for a 15Krpm HD is in the 5.5-6ms \nrange (7.5-8ms for a 10Krpm HD), having an average of 3ms for a \nsingle lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm \ncase) table accesses is requiring a seek.\n\nThis implies a poor match between physical layout and access pattern.\n\nIf I understand correctly, the table should not be very fragmented \ngiven that this is a reasonably freshly loaded DB? 
That implies that \nthe fields being looked up are not well sorted in the table compared \nto the query pattern.\n\nIf the entire table could fit in RAM, this would be far less of a \nconsideration. Failing that, the physical HD layout has to be \nimproved or the query pattern has to be changed to reduce seeks.\n\n\n>There are a couple possibilities at this point. First, you can REINDEX\n>the appropriate index, and see if that helps. However, if this is a test\n>box, it sounds like you just did a dump and reload, which wouldn't have\n>bloat in an index.\n>\n>Another possibility. Is this the column that you usually use when\n>pulling information out of litigant_details? If so, you can CLUSTER\n>litigant_details on the appropriate index. This will help things be\n>close together that should be, which decreases the index lookup costs.\n>\n>However, if this is not the common column, then you probably will slow\n>down whatever other accesses you may have on this table.\n>\n>After CLUSTER, the current data will stay clustered, but new data will\n>not, so you have to continually CLUSTER, the same way that you might\n>VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n>expensive as a VACUUM FULL. Be aware of this, but it might vastly\n>improve your performance, so it would be worth it.\n\nCLUSTER can be a very large maintenance overhead/problem if the \ntable(s) in question actually need to be \"continually\" re CLUSTER ed.\n\nIf there is no better solution available, then you do what you have \nto, but it feels like there should be a better answer here.\n\nPerhaps the DB schema needs examining to see if it matches up well \nwith its real usage?\n\nRon Peacetree\n\n\n", "msg_date": "Fri, 19 Aug 2005 14:42:43 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Ron wrote:\n> At 01:18 PM 8/19/2005, John A Meinel wrote:\n>\n>> Jeremiah Jahn wrote:\n>> > Sorry about the formatting.\n>> >\n>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n>> >\n>> >>Jeremiah Jahn wrote:\n>> >>\n>> >>\n>>\n>> ...\n>>\n>> >>The expensive parts are the 4915 lookups into the litigant_details\n>> (each\n>> >>one takes approx 4ms for a total of ~20s).\n>> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n>> >>~15s).\n>> >\n>> > Is there some way to avoid this?\n>> >\n>>\n>> Well, in general, 3ms for a single lookup seems really long. Maybe your\n>> index is bloated by not vacuuming often enough. Do you tend to get a lot\n>> of updates to litigant_details?\n>\n>\n> Given that the average access time for a 15Krpm HD is in the 5.5-6ms\n> range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single\n> lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm case)\n> table accesses is requiring a seek.\n>\n\n\nWell, from what he has said, the total indexes are < 1GB and he has 6GB\nof ram. So everything should fit. Not to mention he is only accessing\n5000/several million rows.\n\n\n> This implies a poor match between physical layout and access pattern.\n\nThis seems to be the case. But since this is not the only query, it may\nbe that other access patterns are more important to optimize for.\n\n>\n> If I understand correctly, the table should not be very fragmented given\n> that this is a reasonably freshly loaded DB? 
That implies that the\n> fields being looked up are not well sorted in the table compared to the\n> query pattern.\n>\n> If the entire table could fit in RAM, this would be far less of a\n> consideration. Failing that, the physical HD layout has to be improved\n> or the query pattern has to be changed to reduce seeks.\n>\n>\n\n...\n\n>> After CLUSTER, the current data will stay clustered, but new data will\n>> not, so you have to continually CLUSTER, the same way that you might\n>> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n>> expensive as a VACUUM FULL. Be aware of this, but it might vastly\n>> improve your performance, so it would be worth it.\n>\n>\n> CLUSTER can be a very large maintenance overhead/problem if the table(s)\n> in question actually need to be \"continually\" re CLUSTER ed.\n>\n> If there is no better solution available, then you do what you have to,\n> but it feels like there should be a better answer here.\n>\n> Perhaps the DB schema needs examining to see if it matches up well with\n> its real usage?\n>\n> Ron Peacetree\n>\n\nI certainly agree that CLUSTER is expensive, and is an on-going\nmaintenance issue. If it is the normal access pattern, though, it may be\nworth it.\n\nI also wonder, though, if his table is properly normalized. Which, as\nyou mentioned, might lead to improved access patterns.\n\nJohn\n=:->", "msg_date": "Fri, 19 Aug 2005 14:23:02 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n> Jeremiah Jahn wrote:\n> > Sorry about the formatting.\n> >\n> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> >\n> >>Jeremiah Jahn wrote:\n> >>\n> >>\n> \n> ...\n> \n> >>The expensive parts are the 4915 lookups into the litigant_details (each\n> >>one takes approx 4ms for a total of ~20s).\n> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> >>~15s).\n> >\n> > Is there some way to avoid this?\n> >\n> \n> Well, in general, 3ms for a single lookup seems really long. Maybe your\n> index is bloated by not vacuuming often enough. Do you tend to get a lot\n> of updates to litigant_details?\nI have vacuumed this already. I get lots of updates, but this data is\nmostly unchanging.\n\n> \n> There are a couple possibilities at this point. First, you can REINDEX\n> the appropriate index, and see if that helps. However, if this is a test\n> box, it sounds like you just did a dump and reload, which wouldn't have\n> bloat in an index.\n\nI loaded it using slony\n\n> \n> Another possibility. Is this the column that you usually use when\n> pulling information out of litigant_details? If so, you can CLUSTER\n> litigant_details on the appropriate index. This will help things be\n> close together that should be, which decreases the index lookup costs.\nclustering on this right now. Most of the other things are already\nclustered. name and case_data\n\n> \n> However, if this is not the common column, then you probably will slow\n> down whatever other accesses you may have on this table.\n> \n> After CLUSTER, the current data will stay clustered, but new data will\n> not, so you have to continually CLUSTER, the same way that you might\n> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> expensive as a VACUUM FULL. 
Be aware of this, but it might vastly\n> improve your performance, so it would be worth it.\nI generally re-cluster once a week.\n> \n> >\n> >\n> >>So there is no need for preloading your indexes on the identity table.\n> >>It is definitely not the bottleneck.\n> >>\n> >>So a few design bits, which may help your database.\n> >>Why is \"actor_id\" a text field instead of a number?\n> >\n> > This is simply due to the nature of the data.\n> >\n> \n> I'm just wondering if changing into a number, and using a number->name\n> lookup would be faster for you. It may not be. In general, I prefer to\n> use numbers for references. I may be over paranoid, but I know that some\n> locales are bad with string -> string comparisons. And since the data in\n> your database is stored as UNICODE, I'm not sure if it has to do any\n> translating or not. Again, something to consider, it may not make any\n> difference.\nI don't believe so. I initialze the DB as 'lang=C'. I used to have the\nproblem where things were being inited as en_US. this would prevent any\ntext based index from working. This doesn't seem to be the case here, so\nI'm not worried about it.\n\n\n> \n> \n> >\n> >>You could try creating an index on \"litigant_details (actor_id,\n> >>count_ori)\" so that it can do just an index lookup, rather than an index\n> >>+ filter.\n> >\n> > I have one, but it doesn't seem to like to use it. Don't really need it\n> > though, I can just drop the court_id out of the query. It's redundant,\n> > since each actor_id is also unique in litigant details. I had run vac\n> > full and analyze but I ran them again anyway and the planning improved.\n> > However, my 14 disk raid 10 array is still slower than my 3 disk raid 5\n> > on my production box. 46sec vs 30sec (with live traffic on the\n> > production) One of the strange things is that when I run the cat command\n> > on my index and tables that are \"HOT\" it has no effect on memory usage.\n> > Right now I'm running ext3 on LVM. I'm still in a position to redo the\n> > file system and everything. Is this a good way to do it or should I\n> > switch to something else? What about stripe and extent sizes...? kernel\n> > parameters to change?\n> \n> Well, the plans are virtually identical. There is one small difference\n> as to whether it joins against case_data or court first. But 'court' is\n> very tiny (small enough to use a seqscan instead of index scan) I'm a\n> little surprised with court being this small that it doesn't do\n> something like a hash aggregation, but court takes no time anyway.\n> \n> The real problem is that your nested loop index time is *much* slower.\n> \n> Devel:\n> -> Index Scan using lit_actor_speed on litigant_details\n> \t(cost=0.00..3.96 rows=1 width=81)\n> \t(actual time=4.788..4.812 rows=1 loops=5057)\n> \n> Production:\n> -> Index Scan using lit_actor_speed on litigant_details\n> \t(cost=0.00..5.63 rows=1 width=81)\n> \t(actual time=3.355..3.364 rows=1 loops=5057)\n> \n> Devel:\n> -> Index Scan using case_speed on case_data\n> \t(cost=0.00..3.46 rows=1 width=26)\n> \t(actual time=4.222..4.230 rows=1 loops=5052)\n> \n> Production:\n> -> Index Scan using case_data_pkey on case_data\n> \t(cost=0.00..5.31 rows=1 width=26)\n> \t(actual time=1.897..1.904 rows=1 loops=5052)\n> \n> Notice that the actual per-row cost is as much as 1/2 less than on your\n> devel box.\n> \n> As a test, can you do \"time cat $index_file >/dev/null\" a couple of\n> times. And then determine the MB/s.\n> Alternatively run vmstat in another shell. 
If the read/s doesn't change,\n> then you know the \"cat\" is being served from RAM, and thus it really is\n> cached.\nit's cached alright. I'm getting a read rate of about 150MB/sec. I would\nhave thought is would be faster with my raid setup. I think I'm going to\nscrap the whole thing and get rid of LVM. I'll just do a straight ext3\nsystem. Maybe that will help. Still trying to get suggestions for a\nstripe size. \n\n> \n> I can point you to REINDEX and CLUSTER, but if it is caching in ram, I\n> honestly can't say why the per loop would be that much slower.\n> Are both systems running the same postgres version? It sounds like it is\n> different (since you say something about switching to 8.0).\nThese had little or no effect. \nThe production machine is running 7.4 while the devel machine is running\n8.0\n\n> I doubt it, but you might try an 8.1devel version.\n> \n> ...\n> \n> >>Do you regularly vacuum analyze your tables?\n> >>Just as a test, try running:\n> >>set enable_nested_loop to off;\n> >\n> > not quite acceptable\n> > Total runtime: 221486.149 ms\n> >\n> \n> Well, the estimates are now at least closer (3k vs 5k instead of 1), and\n> it is still choosing nested loops. So they probably are faster.\n> I would still be interested in the actual EXPLAIN ANALYZE with nested\n> loops disabled. It is possible that *some* of the nested loops are\n> performing worse than they have to.\n\nthis is a cached version.\n\n> copa=> explain analyze select full_name,identity_id,identity.case_id,court.id,date_of_birth,assigned_case_role,litigant_details.impound_litigant_data\n> copa-> from identity\n> copa-> join litigant_details on identity.actor_id = litigant_details.actor_id\n> copa-> join case_data on litigant_details.case_id = case_data.case_id and litigant_details.court_ori = case_data.court_ori\n> copa-> join court on identity.court_ori = court.id\n> copa-> where identity.court_ori = 'IL081025J' and full_name like 'SMITH%' order by full_name;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=100502560.72..100502583.47 rows=9099 width=86) (actual time=17843.876..17849.401 rows=8094 loops=1)\n> Sort Key: identity.full_name\n> -> Merge Join (cost=100311378.72..100501962.40 rows=9099 width=86) (actual time=15195.816..17817.847 rows=8094 loops=1)\n> Merge Cond: (((\"outer\".court_ori)::text = \"inner\".\"?column10?\") AND ((\"outer\".case_id)::text = \"inner\".\"?column11?\"))\n> -> Index Scan using case_speed on case_data (cost=0.00..170424.73 rows=3999943 width=26) (actual time=0.015..4540.525 rows=3018284 loops=1)\n> -> Sort (cost=100311378.72..100311400.82 rows=8839 width=112) (actual time=9594.985..9601.174 rows=8094 loops=1)\n> Sort Key: (litigant_details.court_ori)::text, (litigant_details.case_id)::text\n> -> Nested Loop (cost=100002491.43..100310799.34 rows=8839 width=112) (actual time=6892.755..9555.828 rows=8094 loops=1)\n> -> Seq Scan on court (cost=0.00..3.29 rows=1 width=12) (actual time=0.085..0.096 rows=1 loops=1)\n> Filter: ('IL081025J'::text = (id)::text)\n> -> Merge Join (cost=2491.43..310707.66 rows=8839 width=113) (actual time=6892.656..9519.680 rows=8094 loops=1)\n> Merge Cond: ((\"outer\".actor_id)::text = \"inner\".\"?column7?\")\n> -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..295722.00 rows=4956820 width=81) (actual time=0.027..5613.814 rows=3736703 loops=1)\n> -> Sort 
(cost=2491.43..2513.71 rows=8913 width=82) (actual time=116.071..122.272 rows=8100 loops=1)\n> Sort Key: (identity.actor_id)::text\n> -> Index Scan using name_speed on identity (cost=0.00..1906.66 rows=8913 width=82) (actual time=0.133..81.104 rows=8100 loops=1)\n> Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n> Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'SMITH%'::text))\n> Total runtime: 17859.917 ms\n\n> But really, you have worse index speed, and that needs to be figured out.\n> \n> John\n> =:->\n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Fri, 19 Aug 2005 14:56:26 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:\n> Ron wrote:\n> > At 01:18 PM 8/19/2005, John A Meinel wrote:\n> >\n> >> Jeremiah Jahn wrote:\n> >> > Sorry about the formatting.\n> >> >\n> >> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> >> >\n> >> >>Jeremiah Jahn wrote:\n> >> >>\n> >> >>\n> >>\n> >> ...\n> >>\n> >> >>The expensive parts are the 4915 lookups into the litigant_details\n> >> (each\n> >> >>one takes approx 4ms for a total of ~20s).\n> >> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> >> >>~15s).\n> >> >\n> >> > Is there some way to avoid this?\n> >> >\n> >>\n> >> Well, in general, 3ms for a single lookup seems really long. Maybe your\n> >> index is bloated by not vacuuming often enough. Do you tend to get a lot\n> >> of updates to litigant_details?\n> >\n> >\n> > Given that the average access time for a 15Krpm HD is in the 5.5-6ms\n> > range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single\n> > lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm case)\n> > table accesses is requiring a seek.\n> >\nI think LVM may be a problem, since it also seems to break things up on\nthe file system. My access time on the seek should be around 1/7th the\n15Krpm I believe since it's a 14 disk raid 10 array. And no other\ntraffic at the moment. \n\n\n\n> \n> \n> Well, from what he has said, the total indexes are < 1GB and he has 6GB\n> of ram. So everything should fit. Not to mention he is only accessing\n> 5000/several million rows.\nI table spaced some of the indexes and they are around 211066880 bytes\nfor the name_speed index and 149825330 for the lit_actor_speed index\ntables seem to be about a gig. \n\n> \n> \n> > This implies a poor match between physical layout and access pattern.\n> \n> This seems to be the case. But since this is not the only query, it may\n> be that other access patterns are more important to optimize for.\n> \n> >\n> > If I understand correctly, the table should not be very fragmented given\n> > that this is a reasonably freshly loaded DB? That implies that the\n> > fields being looked up are not well sorted in the table compared to the\n> > query pattern.\n> >\n> > If the entire table could fit in RAM, this would be far less of a\n> > consideration. Failing that, the physical HD layout has to be improved\n> > or the query pattern has to be changed to reduce seeks.\n> >\n> >\n> \n> ...\n> \n> >> After CLUSTER, the current data will stay clustered, but new data will\n> >> not, so you have to continually CLUSTER, the same way that you might\n> >> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> >> expensive as a VACUUM FULL. 
Be aware of this, but it might vastly\n> >> improve your performance, so it would be worth it.\n> >\n> >\n> > CLUSTER can be a very large maintenance overhead/problem if the table(s)\n> > in question actually need to be \"continually\" re CLUSTER ed.\n> >\n> > If there is no better solution available, then you do what you have to,\n> > but it feels like there should be a better answer here.\n> >\n> > Perhaps the DB schema needs examining to see if it matches up well with\n> > its real usage?\n> >\n> > Ron Peacetree\n> >\n> \n> I certainly agree that CLUSTER is expensive, and is an on-going\n> maintenance issue. If it is the normal access pattern, though, it may be\n> worth it.\n\nThe query I've sent you is one of the most common I get just change the\nname. I handle about 180K of them a day mostly between 8 and 5. The\nclustering has never really been a problem. Like I said before I do it\nabout once a week. I handle about 3000 update an hour consisting of\nabout 1000-3000 statement per update. ie about 2.5 million updates per\nhour. In the last few months or so I've filtered these down to about\n400K update/delete/insert statements per hour. \n\n> \n> I also wonder, though, if his table is properly normalized. Which, as\n> you mentioned, might lead to improved access patterns.\nThe system is about as normalized as I can get it. In general the layout\nis the following:\ncourts have cases, cases have litigant_details. Actors have identities\nand litigant_details. \n\n> \n> John\n> =:->\n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Fri, 19 Aug 2005 15:11:28 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Rebuild in progress with just ext3 on the raid array...will see if this\nhelps the access times. If it doesn't I'll mess with the stripe size. I\nhave REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index\n> /dev/null' none of this seems to have helped, or even increased my\nmemory usage. argh! The only thing about this new system that I'm\nunfamiliar with is the array setup and LVM, which is why I think that's\nwhere the issue is. clustering and indexing as well as vacuum etc are\nthings that I do and have been aware of for sometime. Perhaps slony is a\nfactor, but I really don't see it causing problems on index read speed\nesp. when it's not running. \n\nthanx for your help, I really appreciate it. \n-jj-\n\n\n\n\nOn Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:\n> Ron wrote:\n> > At 01:18 PM 8/19/2005, John A Meinel wrote:\n> >\n> >> Jeremiah Jahn wrote:\n> >> > Sorry about the formatting.\n> >> >\n> >> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> >> >\n> >> >>Jeremiah Jahn wrote:\n> >> >>\n> >> >>\n> >>\n> >> ...\n> >>\n> >> >>The expensive parts are the 4915 lookups into the litigant_details\n> >> (each\n> >> >>one takes approx 4ms for a total of ~20s).\n> >> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> >> >>~15s).\n> >> >\n> >> > Is there some way to avoid this?\n> >> >\n> >>\n> >> Well, in general, 3ms for a single lookup seems really long. Maybe your\n> >> index is bloated by not vacuuming often enough. 
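A quick way to put a number on possible bloat -- a rough check only, since relpages/reltuples are only as fresh as the last VACUUM or ANALYZE:

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('litigant_details', 'lit_actor_speed');
    -- relpages * 8 kB approximates on-disk size at the default block size;
    -- if REINDEX INDEX lit_actor_speed shrinks relpages a lot, the index was bloated.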
Do you tend to get a lot\n> >> of updates to litigant_details?\n> >\n> >\n> > Given that the average access time for a 15Krpm HD is in the 5.5-6ms\n> > range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single\n> > lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm case)\n> > table accesses is requiring a seek.\n> >\n> \n> \n> Well, from what he has said, the total indexes are < 1GB and he has 6GB\n> of ram. So everything should fit. Not to mention he is only accessing\n> 5000/several million rows.\n> \n> \n> > This implies a poor match between physical layout and access pattern.\n> \n> This seems to be the case. But since this is not the only query, it may\n> be that other access patterns are more important to optimize for.\n> \n> >\n> > If I understand correctly, the table should not be very fragmented given\n> > that this is a reasonably freshly loaded DB? That implies that the\n> > fields being looked up are not well sorted in the table compared to the\n> > query pattern.\n> >\n> > If the entire table could fit in RAM, this would be far less of a\n> > consideration. Failing that, the physical HD layout has to be improved\n> > or the query pattern has to be changed to reduce seeks.\n> >\n> >\n> \n> ...\n> \n> >> After CLUSTER, the current data will stay clustered, but new data will\n> >> not, so you have to continually CLUSTER, the same way that you might\n> >> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> >> expensive as a VACUUM FULL. Be aware of this, but it might vastly\n> >> improve your performance, so it would be worth it.\n> >\n> >\n> > CLUSTER can be a very large maintenance overhead/problem if the table(s)\n> > in question actually need to be \"continually\" re CLUSTER ed.\n> >\n> > If there is no better solution available, then you do what you have to,\n> > but it feels like there should be a better answer here.\n> >\n> > Perhaps the DB schema needs examining to see if it matches up well with\n> > its real usage?\n> >\n> > Ron Peacetree\n> >\n> \n> I certainly agree that CLUSTER is expensive, and is an on-going\n> maintenance issue. If it is the normal access pattern, though, it may be\n> worth it.\n> \n> I also wonder, though, if his table is properly normalized. Which, as\n> you mentioned, might lead to improved access patterns.\n> \n> John\n> =:->\n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Fri, 19 Aug 2005 16:01:39 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n>\n>>Jeremiah Jahn wrote:\n>>\n\n\n...\n\n>>\n>>Well, in general, 3ms for a single lookup seems really long. Maybe your\n>>index is bloated by not vacuuming often enough. Do you tend to get a lot\n>>of updates to litigant_details?\n>\n> I have vacuumed this already. I get lots of updates, but this data is\n> mostly unchanging.\n>\n>\n>>There are a couple possibilities at this point. First, you can REINDEX\n>>the appropriate index, and see if that helps. However, if this is a test\n>>box, it sounds like you just did a dump and reload, which wouldn't have\n>>bloat in an index.\n>\n>\n> I loaded it using slony\n\nI don't know that slony versus pg_dump/pg_restore really matters. The\nbig thing is that Updates wouldn't be trashing your index.\nBut if you are saying that you cluster once/wk your index can't be that\nmessed up anyway. 
(Unless CLUSTER messes up the non-clustered indexes,\nbut that would make cluster much less useful, so I would have guessed\nthis was not the case)\n\n>\n>\n>>Another possibility. Is this the column that you usually use when\n>>pulling information out of litigant_details? If so, you can CLUSTER\n>>litigant_details on the appropriate index. This will help things be\n>>close together that should be, which decreases the index lookup costs.\n>\n> clustering on this right now. Most of the other things are already\n> clustered. name and case_data\n\nJust as a reality check, they are clustered on the columns in question,\nright? (I don't know if this column is a primary key or not, but any\nindex can be used for clustering).\n\n>\n>\n>>However, if this is not the common column, then you probably will slow\n>>down whatever other accesses you may have on this table.\n>>\n>>After CLUSTER, the current data will stay clustered, but new data will\n>>not, so you have to continually CLUSTER, the same way that you might\n>>VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n>>expensive as a VACUUM FULL. Be aware of this, but it might vastly\n>>improve your performance, so it would be worth it.\n>\n> I generally re-cluster once a week.\n>\n>>>\n>>>>So there is no need for preloading your indexes on the identity table.\n>>>>It is definitely not the bottleneck.\n>>>>\n>>>>So a few design bits, which may help your database.\n>>>>Why is \"actor_id\" a text field instead of a number?\n>>>\n>>>This is simply due to the nature of the data.\n>>>\n>>\n>>I'm just wondering if changing into a number, and using a number->name\n>>lookup would be faster for you. It may not be. In general, I prefer to\n>>use numbers for references. I may be over paranoid, but I know that some\n>>locales are bad with string -> string comparisons. And since the data in\n>>your database is stored as UNICODE, I'm not sure if it has to do any\n>>translating or not. Again, something to consider, it may not make any\n>>difference.\n>\n> I don't believe so. I initialze the DB as 'lang=C'. I used to have the\n> problem where things were being inited as en_US. this would prevent any\n> text based index from working. This doesn't seem to be the case here, so\n> I'm not worried about it.\n>\n\nSorry, I think I was confusing you with someone else who posted SHOW ALL.\n\n>\n>\n>>\n\n...\n\n> it's cached alright. I'm getting a read rate of about 150MB/sec. I would\n> have thought is would be faster with my raid setup. I think I'm going to\n> scrap the whole thing and get rid of LVM. I'll just do a straight ext3\n> system. Maybe that will help. Still trying to get suggestions for a\n> stripe size.\n>\n\nI don't think 150MB/s is out of the realm for a 14 drive array.\nHow fast is\ntime dd if=/dev/zero of=testfile bs=8192 count=1000000\n(That should create a 8GB file, which is too big to cache everything)\nAnd then how fast is:\ntime dd if=testfile of=/dev/null bs=8192 count=1000000\n\nThat should give you a semi-decent way of measuring how fast the RAID\nsystem is, since it should be too big to cache in ram.\n\n>\n>>I can point you to REINDEX and CLUSTER, but if it is caching in ram, I\n>>honestly can't say why the per loop would be that much slower.\n>>Are both systems running the same postgres version? 
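While comparing the two boxes, it may also be worth putting the cache-related settings side by side, since they steer how the planner prices those index probes (just a quick check, not a diagnosis -- the values come from each box's postgresql.conf):

    SHOW shared_buffers;        -- postgres' own buffer cache
    SHOW effective_cache_size;  -- how much OS cache the planner assumes it can use
    SHOW random_page_cost;      -- relative cost charged for a nonsequential fetch

If effective_cache_size was left at its default, the planner prices index scans as if almost none of that 6GB were available.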
It sounds like it is\n>>different (since you say something about switching to 8.0).\n>\n> These had little or no effect.\n> The production machine is running 7.4 while the devel machine is running\n> 8.0\n>\n\nWell, my concern is that maybe some portion of the 8.0 code actually\nslowed things down for you. You could try reverting to 7.4 on the devel\nbox, though I think playing with upgrading to 8.1 might be more worthwhile.\n\n...\n\n>\n> this is a cached version.\n>\n\nI assume that you mean this is the second run of the query. I can't\ncompare it too much, since this is \"smith\" rather than \"jones\". But this\none is 17s rather than the other one being 46s.\n\nAnd that includes having 8k rows instead of having 5k rows.\n\nHave you tried other values with disabled nested loops? Because this\nquery (at least in cached form) seems to be *way* faster than with\nnested loops.\nI know that you somehow managed to get 200s in your testing, but it\nmight just be that whatever needed to be loaded is now loaded, and you\nwould get better performance.\nIf this is true, it means you might need to tweak some settings, and\nmake sure your statistics are decent, so that postgres can actually pick\nthe optimal plan.\n\n>\n>>copa=> explain analyze select full_name,identity_id,identity.case_id,court.id,date_of_birth,assigned_case_role,litigant_details.impound_litigant_data\n>>copa-> from identity\n>>copa-> join litigant_details on identity.actor_id = litigant_details.actor_id\n>>copa-> join case_data on litigant_details.case_id = case_data.case_id and litigant_details.court_ori = case_data.court_ori\n>>copa-> join court on identity.court_ori = court.id\n>>copa-> where identity.court_ori = 'IL081025J' and full_name like 'SMITH%' order by full_name;\n>> QUERY PLAN\n>>-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=100502560.72..100502583.47 rows=9099 width=86) (actual time=17843.876..17849.401 rows=8094 loops=1)\n>> Sort Key: identity.full_name\n>> -> Merge Join (cost=100311378.72..100501962.40 rows=9099 width=86) (actual time=15195.816..17817.847 rows=8094 loops=1)\n>> Merge Cond: (((\"outer\".court_ori)::text = \"inner\".\"?column10?\") AND ((\"outer\".case_id)::text = \"inner\".\"?column11?\"))\n>> -> Index Scan using case_speed on case_data (cost=0.00..170424.73 rows=3999943 width=26) (actual time=0.015..4540.525 rows=3018284 loops=1)\n>> -> Sort (cost=100311378.72..100311400.82 rows=8839 width=112) (actual time=9594.985..9601.174 rows=8094 loops=1)\n>> Sort Key: (litigant_details.court_ori)::text, (litigant_details.case_id)::text\n>> -> Nested Loop (cost=100002491.43..100310799.34 rows=8839 width=112) (actual time=6892.755..9555.828 rows=8094 loops=1)\n>> -> Seq Scan on court (cost=0.00..3.29 rows=1 width=12) (actual time=0.085..0.096 rows=1 loops=1)\n>> Filter: ('IL081025J'::text = (id)::text)\n\nWhat I don't really understand is the next part. It seems to be doing an\nindex scan on 3.7M rows, and getting very decent performance (5s), and\nthen merging against a table which returns only 8k rows.\nWhy is it having to look through all of those rows?\nI may be missing something, but this says it is able to do 600 index\nlookups / millisecond. Which seems superfast. 
(Compared to your earlier\n4ms / lookup)\n\nSomething fishy is going on here.\n\n\n>> -> Merge Join (cost=2491.43..310707.66 rows=8839 width=113) (actual time=6892.656..9519.680 rows=8094 loops=1)\n>> Merge Cond: ((\"outer\".actor_id)::text = \"inner\".\"?column7?\")\n>> -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..295722.00 rows=4956820 width=81) (actual time=0.027..5613.814 rows=3736703 loops=1)\n>> -> Sort (cost=2491.43..2513.71 rows=8913 width=82) (actual time=116.071..122.272 rows=8100 loops=1)\n>> Sort Key: (identity.actor_id)::text\n>> -> Index Scan using name_speed on identity (cost=0.00..1906.66 rows=8913 width=82) (actual time=0.133..81.104 rows=8100 loops=1)\n>> Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n>> Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'SMITH%'::text))\n>> Total runtime: 17859.917 ms\n>\n>\n>>But really, you have worse index speed, and that needs to be figured out.\n>>\n>>John\n>>=:->\n\nI'm assuming your data is private (since it looks like legal stuff).\nUnless maybe that makes it part of the public record.\nAnyway, I'm not able to, but sometimes someone like Tom can profile\nstuff to see what is going on.\n\nI might just be messing up my ability to read the explain output. But\nsomehow things don't seem to be lining up with the cost of a single\nindex lookup.\nOn my crappy Celeron 450 box, an index lookup is 0.06ms once things are\ncached in ram.\n\nJohn\n=:->", "msg_date": "Fri, 19 Aug 2005 16:03:04 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> Rebuild in progress with just ext3 on the raid array...will see if this\n> helps the access times. If it doesn't I'll mess with the stripe size. I\n> have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index\n>\n>>/dev/null' none of this seems to have helped, or even increased my\n>\n> memory usage. argh! The only thing about this new system that I'm\n> unfamiliar with is the array setup and LVM, which is why I think that's\n> where the issue is. clustering and indexing as well as vacuum etc are\n> things that I do and have been aware of for sometime. Perhaps slony is a\n> factor, but I really don't see it causing problems on index read speed\n> esp. when it's not running.\n>\n> thanx for your help, I really appreciate it.\n> -jj-\n>\n\nBy the way, how are you measuring memory usage? Can you give the output\nof that command, just to make sure you are reading it correctly.\n\nJohn\n=:->", "msg_date": "Fri, 19 Aug 2005 16:07:18 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "\nOn Aug 19, 2005, at 3:01 PM, Jeremiah Jahn wrote:\n\n> Rebuild in progress with just ext3 on the raid array...will see if \n> this\n> helps the access times.\n\n From my recent experiences, I can say ext3 is probably not a great \nchoice for Pg databases. If you check the archives you'll see \nthere's a lot of discussion about various journalling filesystems and \next3 usually(always?) comes up on the bottom as far as performance \ngoes. 
If you insist on using it, I would at least recommend the \nnoatime option in fstab and using data=writeback to get the faster of \nthe journal modes.\n\nXFS seems to be a trusted choice, followed by Reiser and JFS both \nwith the occasional controversy when the comparisons pop up.\n\n-Dan\n\n", "msg_date": "Sat, 20 Aug 2005 01:12:15 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Dan Harris wrote:\n\n> From my recent experiences, I can say ext3 is probably not a great \n> choice for Pg databases. If you check the archives you'll see \n> there's a lot of discussion about various journalling filesystems and \n> ext3 usually(always?) comes up on the bottom as far as performance \n> goes. If you insist on using it, I would at least recommend the \n> noatime option in fstab and using data=writeback to get the faster \n\n\nBased on my knoledge, Ext3 is good with keeping filesystem integrity AND\ndata integrity while\npressing the reset button.\nHowever, by selecting data=writeback, you gain more speed, but you risk the\ndata integrity during a crash: Ext3 garantees only filesystem integrity.\n\nThis means with database transaction logs: The last transactions are not\nguaranteed to be written into the hard drives during a hardware reset,\nmeaning of a loss of some committed transactions.\n\nReiserfs is known to do things this false way also.\nIs there a way with a Reiserfs filesystem to fulfill both\nfilesystem AND data integrity requirements nowadays?\n\nSee for example \"man mount\" to see the effects of data=journal,\ndata=ordered(default) and\ndata=writeback for Ext3. Only the writeback risks data integrity.\n\nExt3 is the only journaled filesystem, that I know that fulfills\nthese fundamental data integrity guarantees. Personally I like about\nsuch filesystems, even though it means less speed.\n\nMarko Ristola\n\n\n\n", "msg_date": "Sat, 20 Aug 2005 14:17:54 +0300", "msg_from": "Marko Ristola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Sat, Aug 20, 2005 at 01:12:15AM -0600, Dan Harris wrote:\n>XFS seems to be a trusted choice, followed by Reiser and JFS both \n>with the occasional controversy when the comparisons pop up.\n\nAnd don't put the xlog on a journaled filesystem. There is no advantage\nto doing so, and it will slow things down. (Assuming a sane seperate xlog\npartition configuration, sized reasonably.)\n\nMike Stone\n", "msg_date": "Sat, 20 Aug 2005 08:18:33 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Sat, Aug 20, 2005 at 02:17:54PM +0300, Marko Ristola wrote:\n>Based on my knoledge, Ext3 is good with keeping filesystem integrity\n>AND data integrity while pressing the reset button. However, by\n>selecting data=writeback, you gain more speed, but you risk the data\n>integrity during a crash: Ext3 garantees only filesystem integrity.\n\nThat's why postgres keeps its own transaction log. Any of these\nfilesystems guarantee data integrity for data that's been synced to\ndisk, and postgres keeps track of what data has and has not been\ncommitted so it can recover gracefully from a failure. That's why most\nfilesystems are designed the way they are; the application can determine\nwhat things need better data integrity and which need better performance\non a case-by-case basis. 
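The server-side half of that guarantee is easy to confirm alongside the filesystem choice (a side note, not a recommendation of particular values):

    SHOW fsync;            -- has to stay on for the WAL to protect committed data
    SHOW wal_sync_method;  -- how the WAL flush is issued (fsync, fdatasync, open_sync, ...)

With fsync off, no journaling mode will save committed transactions across a crash.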
\n\nMike Stone\n", "msg_date": "Sat, 20 Aug 2005 08:23:05 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Michael Stone <[email protected]> writes:\n> On Sat, Aug 20, 2005 at 02:17:54PM +0300, Marko Ristola wrote:\n>> Based on my knoledge, Ext3 is good with keeping filesystem integrity\n>> AND data integrity while pressing the reset button. However, by\n>> selecting data=writeback, you gain more speed, but you risk the data\n>> integrity during a crash: Ext3 garantees only filesystem integrity.\n\n> That's why postgres keeps its own transaction log. Any of these\n> filesystems guarantee data integrity for data that's been synced to\n> disk, and postgres keeps track of what data has and has not been\n> committed so it can recover gracefully from a failure.\n\nRight. I think the optimal setting for a Postgres data directory is\njournaled metadata, non-journaled file content. Postgres can take care\nof the data integrity for itself, but it does assume that the filesystem\nstays structurally sane (eg, data blocks don't get reassigned to the\nwrong file), so you need a filesystem guarantee about the metadata.\n\nWAL files are handled in a much more conservative way (created, filled\nwith zeroes, and fsync'd before we ever put any valuable data in 'em).\nIf you have WAL on its own drive then I think Mike's recommendation of\nno filesystem journalling at all for that drive is probably OK. Or\nyou can do same as above (journal metadata only) if you want a little\nextra protection.\n\nAnd of course all this reasoning depends on the assumption that the\ndrive tells the truth about write-completion. If the drive does write\ncaching it had better be able to complete all its accepted writes before\ndying in a power failure. (Hence, battery-backed write cache is OK, any\nother kind is evil.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Aug 2005 10:40:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage " }, { "msg_contents": "At 04:11 PM 8/19/2005, Jeremiah Jahn wrote:\n>On Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:\n> > Ron wrote:\n> > > At 01:18 PM 8/19/2005, John A Meinel wrote:\n> > >\n> > >> Jeremiah Jahn wrote:\n> > >> > Sorry about the formatting.\n> > >> >\n> > >> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> > >> >\n> > >> >>Jeremiah Jahn wrote:\n> > >> >>\n> > >> >>\n> > >>\n> > >> ...\n> > >>\n> > >> >>The expensive parts are the 4915 lookups into the litigant_details\n> > >> (each\n> > >> >>one takes approx 4ms for a total of ~20s).\n> > >> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> > >> >>~15s).\n> > >> >\n> > >> > Is there some way to avoid this?\n> > >> >\n> > >>\n> > >> Well, in general, 3ms for a single lookup seems really long. Maybe your\n> > >> index is bloated by not vacuuming often enough. Do you tend to get a lot\n> > >> of updates to litigant_details?\n> > >\n> > >\n> > > Given that the average access time for a 15Krpm HD is in the 5.5-6ms\n> > > range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single\n> > > lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm case)\n> > > table accesses is requiring a seek.\n> > >\n>I think LVM may be a problem, since it also seems to break things up on\n>the file system. My access time on the seek should be around 1/7th the\n>15Krpm I believe since it's a 14 disk raid 10 array. 
And no other\n>traffic at the moment.\n\nOops. There's a misconception here. RAID arrays increase \n_throughput_ AKA _bandwidth_ through parallel access to HDs. OTOH, \naccess time is _latency_, and that is not changed. Access time for a \nRAID set is equal to that of the slowest access time, AKA highest \nlatency, HD in the RAID set.\n\n> > Well, from what he has said, the total indexes are < 1GB and he has 6GB\n> > of ram. So everything should fit. Not to mention he is only accessing\n> > 5000/several million rows.\n>I table spaced some of the indexes and they are around 211066880 bytes\n>for the name_speed index and 149825330 for the lit_actor_speed index\n>tables seem to be about a gig.\n\nHmm. And you think you are only using 250MB out of your 6GB of \nRAM? Something doesn't seem to add up here. From what's been \nposted, I'd expect much more RAM to be in use.\n\n\n> > > This implies a poor match between physical layout and access pattern.\n> >\n> > This seems to be the case. But since this is not the only query, it may\n> > be that other access patterns are more important to optimize for.\n> >\n> > >\n> > > If I understand correctly, the table should not be very fragmented given\n> > > that this is a reasonably freshly loaded DB? That implies that the\n> > > fields being looked up are not well sorted in the table compared to the\n> > > query pattern.\n> > >\n> > > If the entire table could fit in RAM, this would be far less of a\n> > > consideration. Failing that, the physical HD layout has to be improved\n> > > or the query pattern has to be changed to reduce seeks.\n> > >\n> > >\n> >\n> > ...\n> >\n> > >> After CLUSTER, the current data will stay clustered, but new data will\n> > >> not, so you have to continually CLUSTER, the same way that you might\n> > >> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> > >> expensive as a VACUUM FULL. Be aware of this, but it might vastly\n> > >> improve your performance, so it would be worth it.\n> > >\n> > >\n> > > CLUSTER can be a very large maintenance overhead/problem if the table(s)\n> > > in question actually need to be \"continually\" re CLUSTER ed.\n> > >\n> > > If there is no better solution available, then you do what you have to,\n> > > but it feels like there should be a better answer here.\n> > >\n> > > Perhaps the DB schema needs examining to see if it matches up well with\n> > > its real usage?\n> > >\n> > > Ron Peacetree\n> > >\n> >\n> > I certainly agree that CLUSTER is expensive, and is an on-going\n> > maintenance issue. If it is the normal access pattern, though, it may be\n> > worth it.\n>\n>The query I've sent you is one of the most common I get just change the\n>name. I handle about 180K of them a day mostly between 8 and 5. The\n>clustering has never really been a problem. Like I said before I do it\n>about once a week. I handle about 3000 update an hour consisting of\n>about 1000-3000 statement per update. ie about 2.5 million updates per\n>hour. In the last few months or so I've filtered these down to about\n>400K update/delete/insert statements per hour.\n\n2.5M updates per hour = ~695 updates per second. 400K per hour = \n~112 updates per sec. These should be well within the capabilities \nof a RAID 10 subsystem based on 14 15Krpm HDs assuming a decent RAID \ncard. What is the exact HW of the RAID subsystem involved and how is \nit configured? You shouldn't be having a performance problem AFAICT...\n\n> > I also wonder, though, if his table is properly normalized. 
\n> Which, as you mentioned, might lead to improved access patterns.\n>The system is about as normalized as I can get it. In general the \n>layout is the following:\n>courts have cases, cases have litigant_details. Actors have \n>identities and litigant_details.\n\nHmmm. Can you tell us more about the actual schema, I may have an idea...\n\n> >\n> > John\n> > =:->\n>--\n>Speak softly and carry a +6 two-handed sword.\n\nNah. A wand of 25th level automatic Magic Missile Fire ;-)\n\nRon Peacetree\n\n\n", "msg_date": "Sat, 20 Aug 2005 11:59:32 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "I'm just watching gnome-system-monoitor. Which after careful\nconsideration.....and looking at dstat means I'm on CRACK....GSM isn't\nshowing cached memory usage....I asume that the cache memory usage is\nwhere data off of the disks would be cached...?\n\n\n\nmemory output from dstat is this for a few seconds:\n\n---procs--- ------memory-usage----- ---paging-- --disk/sda----disk/sdb- ----swap--- ----total-cpu-usage----\nrun blk new|_used _buff _cach _free|__in_ _out_|_read write:_read write|_used _free|usr sys idl wai hiq siq\n 0 0 0|1336M 10M 4603M 17M| 490B 833B|3823B 3503k:1607k 4285k| 160k 2048M| 4 1 89 7 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 464k| 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 48k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 132k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 36k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 12k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n 2 0 0|1353M 10M 4585M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 1 75 0 0 0\n 1 0 0|1321M 10M 4616M 19M| 0 0 | 0 0 : 0 0 | 160k 2048M| 18 8 74 0 0 0\n 1 0 0|1326M 10M 4614M 17M| 0 0 | 0 0 :4096B 0 | 160k 2048M| 16 10 74 1 0 0\n 1 0 0|1330M 10M 4609M 17M| 0 0 | 0 12k:4096B 0 | 160k 2048M| 17 9 74 0 0 0\n 0 1 0|1343M 10M 4596M 17M| 0 0 | 0 0 : 0 316M| 160k 2048M| 5 10 74 11 0 1\n 0 1 0|1339M 10M 4596M 21M| 0 0 | 0 0 : 0 0 | 160k 2048M| 0 0 74 25 0 1\n 0 2 0|1334M 10M 4596M 25M| 0 0 | 0 4096B: 0 0 | 160k 2048M| 0 0 54 44 0 1\n 1 0 0|1326M 10M 4596M 34M| 0 0 | 0 0 : 0 364k| 160k 2048M| 4 1 60 34 0 1\n 1 0 0|1290M 10M 4596M 70M| 0 0 | 0 12k: 0 0 | 160k 2048M| 24 1 75 0 0 0\n 1 0 0|1301M 10M 4596M 59M| 0 0 | 0 20k: 0 0 | 160k 2048M| 21 4 75 0 0 0\n 1 0 0|1312M 10M 4596M 48M| 0 0 | 0 0 : 0 0 | 160k 2048M| 22 4 75 0 0 0\n 1 0 0|1323M 10M 4596M 37M| 0 0 | 0 0 : 0 24k| 160k 2048M| 21 4 75 0 0 0\n 1 0 0|1334M 10M 4596M 25M| 0 0 | 0 0 : 0 56k| 160k 2048M| 21 4 75 0 0 0\n\n\n\nOn Fri, 2005-08-19 at 16:07 -0500, John A Meinel wrote:\n> Jeremiah Jahn wrote:\n> > Rebuild in progress with just ext3 on the raid array...will see if this\n> > helps the access times. If it doesn't I'll mess with the stripe size. I\n> > have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index\n> >\n> >>/dev/null' none of this seems to have helped, or even increased my\n> >\n> > memory usage. argh! The only thing about this new system that I'm\n> > unfamiliar with is the array setup and LVM, which is why I think that's\n> > where the issue is. 
clustering and indexing as well as vacuum etc are\n> > things that I do and have been aware of for sometime. Perhaps slony is a\n> > factor, but I really don't see it causing problems on index read speed\n> > esp. when it's not running.\n> >\n> > thanx for your help, I really appreciate it.\n> > -jj-\n> >\n> \n> By the way, how are you measuring memory usage? Can you give the output\n> of that command, just to make sure you are reading it correctly.\n> \n> John\n> =:->\n> \n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Sat, 20 Aug 2005 13:16:09 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Sat, 2005-08-20 at 11:59 -0400, Ron wrote:\n> At 04:11 PM 8/19/2005, Jeremiah Jahn wrote:\n> >On Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:\n> > > Ron wrote:\n> > > > At 01:18 PM 8/19/2005, John A Meinel wrote:\n> > > >\n> > > >> Jeremiah Jahn wrote:\n> > > >> > Sorry about the formatting.\n> > > >> >\n> > > >> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:\n> > > >> >\n> > > >> >>Jeremiah Jahn wrote:\n> > > >> >>\n> > > >> >>\n> > > >>\n> > > >> ...\n> > > >>\n> > > >> >>The expensive parts are the 4915 lookups into the litigant_details\n> > > >> (each\n> > > >> >>one takes approx 4ms for a total of ~20s).\n> > > >> >>And then you do it again on case_data (average 3ms each * 4906 loops =\n> > > >> >>~15s).\n> > > >> >\n> > > >> > Is there some way to avoid this?\n> > > >> >\n> > > >>\n> > > >> Well, in general, 3ms for a single lookup seems really long. Maybe your\n> > > >> index is bloated by not vacuuming often enough. Do you tend to get a lot\n> > > >> of updates to litigant_details?\n> > > >\n> > > >\n> > > > Given that the average access time for a 15Krpm HD is in the 5.5-6ms\n> > > > range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single\n> > > > lookup implies that ~1/2 (the 15Krpm case) or ~1/3 (the 10Krpm case)\n> > > > table accesses is requiring a seek.\n> > > >\n> >I think LVM may be a problem, since it also seems to break things up on\n> >the file system. My access time on the seek should be around 1/7th the\n> >15Krpm I believe since it's a 14 disk raid 10 array. And no other\n> >traffic at the moment.\n> \n> Oops. There's a misconception here. RAID arrays increase \n> _throughput_ AKA _bandwidth_ through parallel access to HDs. OTOH, \n> access time is _latency_, and that is not changed. Access time for a \n> RAID set is equal to that of the slowest access time, AKA highest \n> latency, HD in the RAID set.\n\nso I will max out at the 5.5-6ms rang for access time?\n\n\n> \n> > > Well, from what he has said, the total indexes are < 1GB and he has 6GB\n> > > of ram. So everything should fit. Not to mention he is only accessing\n> > > 5000/several million rows.\n> >I table spaced some of the indexes and they are around 211066880 bytes\n> >for the name_speed index and 149825330 for the lit_actor_speed index\n> >tables seem to be about a gig.\n> \n> Hmm. And you think you are only using 250MB out of your 6GB of \n> RAM? Something doesn't seem to add up here. From what's been \n> posted, I'd expect much more RAM to be in use.\n\nthe cached memory usage is complete using up the rest of the memory. \n\n> \n> \n> > > > This implies a poor match between physical layout and access pattern.\n> > >\n> > > This seems to be the case. 
But since this is not the only query, it may\n> > > be that other access patterns are more important to optimize for.\n> > >\n> > > >\n> > > > If I understand correctly, the table should not be very fragmented given\n> > > > that this is a reasonably freshly loaded DB? That implies that the\n> > > > fields being looked up are not well sorted in the table compared to the\n> > > > query pattern.\n> > > >\n> > > > If the entire table could fit in RAM, this would be far less of a\n> > > > consideration. Failing that, the physical HD layout has to be improved\n> > > > or the query pattern has to be changed to reduce seeks.\n> > > >\n> > > >\n> > >\n> > > ...\n> > >\n> > > >> After CLUSTER, the current data will stay clustered, but new data will\n> > > >> not, so you have to continually CLUSTER, the same way that you might\n> > > >> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> > > >> expensive as a VACUUM FULL. Be aware of this, but it might vastly\n> > > >> improve your performance, so it would be worth it.\n> > > >\n> > > >\n> > > > CLUSTER can be a very large maintenance overhead/problem if the table(s)\n> > > > in question actually need to be \"continually\" re CLUSTER ed.\n> > > >\n> > > > If there is no better solution available, then you do what you have to,\n> > > > but it feels like there should be a better answer here.\n> > > >\n> > > > Perhaps the DB schema needs examining to see if it matches up well with\n> > > > its real usage?\n> > > >\n> > > > Ron Peacetree\n> > > >\n> > >\n> > > I certainly agree that CLUSTER is expensive, and is an on-going\n> > > maintenance issue. If it is the normal access pattern, though, it may be\n> > > worth it.\n> >\n> >The query I've sent you is one of the most common I get just change the\n> >name. I handle about 180K of them a day mostly between 8 and 5. The\n> >clustering has never really been a problem. Like I said before I do it\n> >about once a week. I handle about 3000 update an hour consisting of\n> >about 1000-3000 statement per update. ie about 2.5 million updates per\n> >hour. In the last few months or so I've filtered these down to about\n> >400K update/delete/insert statements per hour.\n> \n> 2.5M updates per hour = ~695 updates per second. 400K per hour = \n> ~112 updates per sec. These should be well within the capabilities \n> of a RAID 10 subsystem based on 14 15Krpm HDs assuming a decent RAID \n> card. What is the exact HW of the RAID subsystem involved and how is \n> it configured? You shouldn't be having a performance problem AFAICT...\n\ndell perc4 with 14 drives and the each pair is raid 1 with spanning\nenabled across all of the pairs. It doesn't say raid 10...But it seem to\nbe it. What else would you like to know?\n\n> \n> > > I also wonder, though, if his table is properly normalized. \n> > Which, as you mentioned, might lead to improved access patterns.\n> >The system is about as normalized as I can get it. In general the \n> >layout is the following:\n> >courts have cases, cases have litigant_details. Actors have \n> >identities and litigant_details.\n> \n> Hmmm. Can you tell us more about the actual schema, I may have an idea...\nIn what format would you like it. What kind of things would you like to\nknow..? I've probably missed a few things, but this is what running on\nthe production box. There are no foreign keys. Cascading delete were far\ntoo slow. And having to determine the order of deletes was a pain in the\nbut. 
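For readers of the schema below: the kind of constraint that was dropped for speed would have looked roughly like this -- shown only to illustrate what the cascading deletes refer to, it is not part of the live schema:

    ALTER TABLE litigant_details
      ADD CONSTRAINT litigant_details_case_fk
      FOREIGN KEY (court_ori, case_id)
      REFERENCES case_data (court_ori, case_id)
      ON DELETE CASCADE;

Each cascaded delete is carried out by per-row triggers, which (especially without an index on the referencing columns) is typically why bulk cascades crawl.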
\n\n\n\nCREATE TABLE actor (\n actor_id character varying(50) NOT NULL,\n case_id character varying(50) DEFAULT '0'::character varying NOT NULL,\n court_ori character varying(18) NOT NULL,\n role_class_code character varying(50) NOT NULL\n);\n\n\n\nCREATE TABLE identity (\n identity_id character varying(50) NOT NULL,\n actor_id character varying(50) NOT NULL,\n case_id character varying(50) DEFAULT '0'::character varying NOT NULL,\n court_ori character varying(18) NOT NULL,\n identity_type character varying(10) NOT NULL,\n entity_type character varying(50),\n full_name character varying(60) NOT NULL,\n entity_acronym character varying(50),\n name_prefix character varying(50),\n first_name character varying(50),\n middle_name character varying(50),\n last_name character varying(50),\n name_suffix character varying(50),\n gender_code character varying(50),\n date_of_birth date,\n place_of_birth character varying(50),\n height character varying(50),\n height_unit character varying(50),\n weight character varying(50),\n weight_unit character varying(50),\n religion character varying(50),\n ethnicity character varying(50),\n citizenship_country character varying(50),\n hair_color character varying(50),\n eye_color character varying(50),\n scars_marks_tatto character varying(255),\n marital_status character varying(50)\n);\nALTER TABLE ONLY identity ALTER COLUMN full_name SET STATISTICS 1000;\n\n\n\nCREATE TABLE case_data (\n case_id character varying(50) NOT NULL,\n court_ori character varying(18) NOT NULL,\n type_code character varying(50),\n subtype_code character varying(50),\n case_category character varying(50),\n case_title character varying(100),\n type_subtype_text character varying(255),\n case_year integer,\n extraction_datetime character varying(15) NOT NULL,\n update_date date NOT NULL,\n case_dom oid,\n data bytea\n);\n\n\n\nCREATE TABLE litigant_details (\n actor_id character varying(50) NOT NULL,\n case_id character varying(50) NOT NULL,\n court_ori character varying(18) NOT NULL,\n assigned_case_role character varying(50) NOT NULL,\n initial_file_date date,\n initial_close_date date,\n reopen_date date,\n reclose_date date,\n physical_file_location character varying(50),\n impound_litigant_data character varying(50),\n impound_litigant_minutes character varying(50),\n actor_type character varying(50) NOT NULL,\n conviction character varying(3)\n);\n\n\n\nCREATE TABLE actor_identifier (\n identity_id character varying(50) NOT NULL,\n actor_id character varying(50) NOT NULL,\n case_id character varying(50) DEFAULT '0'::character varying NOT NULL,\n court_ori character varying(18) NOT NULL,\n actor_identifier_type_code character varying(50) NOT NULL,\n actor_identifier_id character varying(50) NOT NULL\n);\n\n\n\nCREATE TABLE actor_relationship (\n litigant_actor_id character varying(50) NOT NULL,\n related_actor_id character varying(50) NOT NULL,\n case_id character varying(50) NOT NULL,\n court_ori character varying(18) NOT NULL,\n relationship_type character varying(50) NOT NULL\n);\n\nCREATE INDEX lit_actor_speed ON litigant_details USING btree (actor_id);\n\nCREATE INDEX name_speed ON identity USING btree (full_name);\nALTER TABLE identity CLUSTER ON name_speed;\n\nCREATE INDEX case_speed ON case_data USING btree (court_ori, case_id);\nALTER TABLE case_data CLUSTER ON case_speed;\n\n\nALTER TABLE ONLY actor\n ADD CONSTRAINT actor_pkey PRIMARY KEY (court_ori, case_id, actor_id);\nALTER TABLE ONLY identity\n ADD CONSTRAINT identity_pkey PRIMARY KEY (court_ori, case_id, 
identity_id, actor_id);\nALTER TABLE ONLY case_data\n ADD CONSTRAINT case_data_pkey PRIMARY KEY (court_ori, case_id);\nALTER TABLE ONLY litigant_details\n ADD CONSTRAINT litigant_details_pkey PRIMARY KEY (actor_id, case_id, court_ori);\n\n\n\n> \n> > >\n> > > John\n> > > =:->\n> >--\n> >Speak softly and carry a +6 two-handed sword.\n> \n> Nah. A wand of 25th level automatic Magic Missile Fire ;-)\n> \n> Ron Peacetree\n> \n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Sat, 20 Aug 2005 13:31:15 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:\n> Jeremiah Jahn wrote:\n> > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n> >\n> >>Jeremiah Jahn wrote:\n> >>\n> \n> \n> ...\n> \n> >>\n> >>Well, in general, 3ms for a single lookup seems really long. Maybe your\n> >>index is bloated by not vacuuming often enough. Do you tend to get a lot\n> >>of updates to litigant_details?\n> >\n> > I have vacuumed this already. I get lots of updates, but this data is\n> > mostly unchanging.\n> >\n> >\n> >>There are a couple possibilities at this point. First, you can REINDEX\n> >>the appropriate index, and see if that helps. However, if this is a test\n> >>box, it sounds like you just did a dump and reload, which wouldn't have\n> >>bloat in an index.\n> >\n> >\n> > I loaded it using slony\n> \n> I don't know that slony versus pg_dump/pg_restore really matters. The\n> big thing is that Updates wouldn't be trashing your index.\n> But if you are saying that you cluster once/wk your index can't be that\n> messed up anyway. (Unless CLUSTER messes up the non-clustered indexes,\n> but that would make cluster much less useful, so I would have guessed\n> this was not the case)\n> \n> >\n> >\n> >>Another possibility. Is this the column that you usually use when\n> >>pulling information out of litigant_details? If so, you can CLUSTER\n> >>litigant_details on the appropriate index. This will help things be\n> >>close together that should be, which decreases the index lookup costs.\n> >\n> > clustering on this right now. Most of the other things are already\n> > clustered. name and case_data\n> \n> Just as a reality check, they are clustered on the columns in question,\n> right? (I don't know if this column is a primary key or not, but any\n> index can be used for clustering).\n> \n> >\n> >\n> >>However, if this is not the common column, then you probably will slow\n> >>down whatever other accesses you may have on this table.\n> >>\n> >>After CLUSTER, the current data will stay clustered, but new data will\n> >>not, so you have to continually CLUSTER, the same way that you might\n> >>VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as\n> >>expensive as a VACUUM FULL. Be aware of this, but it might vastly\n> >>improve your performance, so it would be worth it.\n> >\n> > I generally re-cluster once a week.\n> >\n> >>>\n> >>>>So there is no need for preloading your indexes on the identity table.\n> >>>>It is definitely not the bottleneck.\n> >>>>\n> >>>>So a few design bits, which may help your database.\n> >>>>Why is \"actor_id\" a text field instead of a number?\n> >>>\n> >>>This is simply due to the nature of the data.\n> >>>\n> >>\n> >>I'm just wondering if changing into a number, and using a number->name\n> >>lookup would be faster for you. It may not be. In general, I prefer to\n> >>use numbers for references. 
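As a concrete illustration of the number-vs-text point -- purely hypothetical, nothing like this exists in the schema above -- a numeric surrogate for the actor lookup might look like:

    CREATE TABLE actor_map (
        actor_key  serial PRIMARY KEY,                    -- small integer used for joins
        actor_id   character varying(50) NOT NULL UNIQUE  -- the existing text identifier
    );

Joins on actor_key then compare 4-byte integers instead of varchar values.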
I may be over paranoid, but I know that some\n> >>locales are bad with string -> string comparisons. And since the data in\n> >>your database is stored as UNICODE, I'm not sure if it has to do any\n> >>translating or not. Again, something to consider, it may not make any\n> >>difference.\n> >\n> > I don't believe so. I initialze the DB as 'lang=C'. I used to have the\n> > problem where things were being inited as en_US. this would prevent any\n> > text based index from working. This doesn't seem to be the case here, so\n> > I'm not worried about it.\n> >\n> \n> Sorry, I think I was confusing you with someone else who posted SHOW ALL.\n> \n> >\n> >\n> >>\n> \n> ...\n> \n> > it's cached alright. I'm getting a read rate of about 150MB/sec. I would\n> > have thought is would be faster with my raid setup. I think I'm going to\n> > scrap the whole thing and get rid of LVM. I'll just do a straight ext3\n> > system. Maybe that will help. Still trying to get suggestions for a\n> > stripe size.\n> >\n> \n> I don't think 150MB/s is out of the realm for a 14 drive array.\n> How fast is\n> time dd if=/dev/zero of=testfile bs=8192 count=1000000\ntime dd if=/dev/zero of=testfile bs=8192 count=1000000\n1000000+0 records in\n1000000+0 records out\n\nreal 1m24.248s\nuser 0m0.381s\nsys 0m33.028s\n\n\n> (That should create a 8GB file, which is too big to cache everything)\n> And then how fast is:\n> time dd if=testfile of=/dev/null bs=8192 count=1000000\n\ntime dd if=testfile of=/dev/null bs=8192 count=1000000\n1000000+0 records in\n1000000+0 records out\n\nreal 0m54.139s\nuser 0m0.326s\nsys 0m8.916s\n\n\nand on a second run:\n\nreal 0m55.667s\nuser 0m0.341s\nsys 0m9.013s\n\n\n> \n> That should give you a semi-decent way of measuring how fast the RAID\n> system is, since it should be too big to cache in ram.\n\nabout 150MB/Sec. Is there no better way to make this go faster...? \n\n> \n> >\n> >>I can point you to REINDEX and CLUSTER, but if it is caching in ram, I\n> >>honestly can't say why the per loop would be that much slower.\n> >>Are both systems running the same postgres version? It sounds like it is\n> >>different (since you say something about switching to 8.0).\n> >\n> > These had little or no effect.\n> > The production machine is running 7.4 while the devel machine is running\n> > 8.0\n> >\n> \n> Well, my concern is that maybe some portion of the 8.0 code actually\n> slowed things down for you. You could try reverting to 7.4 on the devel\n> box, though I think playing with upgrading to 8.1 might be more worthwhile.\nAnd the level of stability for 8.1? I started with 7.4 and it didn't\nreally feel as fast as it should either.\n\n> \n> ...\n> \n> >\n> > this is a cached version.\n> >\n> \n> I assume that you mean this is the second run of the query. I can't\n> compare it too much, since this is \"smith\" rather than \"jones\". But this\n> one is 17s rather than the other one being 46s.\n> \n> And that includes having 8k rows instead of having 5k rows.\n> \n> Have you tried other values with disabled nested loops? 
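A compact way to run that comparison, and to hand the planner better numbers at the same time -- sketch only, the 250 target is an arbitrary example, and note the parameter is spelled enable_nestloop:

    SET enable_nestloop TO off;
    -- run EXPLAIN ANALYZE on the full_name query from above here
    RESET enable_nestloop;

    -- raise the stats targets on the join columns, then re-analyze
    ALTER TABLE litigant_details ALTER COLUMN actor_id SET STATISTICS 250;
    ALTER TABLE litigant_details ALTER COLUMN case_id  SET STATISTICS 250;
    ANALYZE litigant_details;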
Because this\n> query (at least in cached form) seems to be *way* faster than with\n> nested loops.\n> I know that you somehow managed to get 200s in your testing, but it\n> might just be that whatever needed to be loaded is now loaded, and you\n> would get better performance.\n> If this is true, it means you might need to tweak some settings, and\n> make sure your statistics are decent, so that postgres can actually pick\n> the optimal plan.\n> \n> >\n> >>copa=> explain analyze select full_name,identity_id,identity.case_id,court.id,date_of_birth,assigned_case_role,litigant_details.impound_litigant_data\n> >>copa-> from identity\n> >>copa-> join litigant_details on identity.actor_id = litigant_details.actor_id\n> >>copa-> join case_data on litigant_details.case_id = case_data.case_id and litigant_details.court_ori = case_data.court_ori\n> >>copa-> join court on identity.court_ori = court.id\n> >>copa-> where identity.court_ori = 'IL081025J' and full_name like 'SMITH%' order by full_name;\n> >> QUERY PLAN\n> >>-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> >> Sort (cost=100502560.72..100502583.47 rows=9099 width=86) (actual time=17843.876..17849.401 rows=8094 loops=1)\n> >> Sort Key: identity.full_name\n> >> -> Merge Join (cost=100311378.72..100501962.40 rows=9099 width=86) (actual time=15195.816..17817.847 rows=8094 loops=1)\n> >> Merge Cond: (((\"outer\".court_ori)::text = \"inner\".\"?column10?\") AND ((\"outer\".case_id)::text = \"inner\".\"?column11?\"))\n> >> -> Index Scan using case_speed on case_data (cost=0.00..170424.73 rows=3999943 width=26) (actual time=0.015..4540.525 rows=3018284 loops=1)\n> >> -> Sort (cost=100311378.72..100311400.82 rows=8839 width=112) (actual time=9594.985..9601.174 rows=8094 loops=1)\n> >> Sort Key: (litigant_details.court_ori)::text, (litigant_details.case_id)::text\n> >> -> Nested Loop (cost=100002491.43..100310799.34 rows=8839 width=112) (actual time=6892.755..9555.828 rows=8094 loops=1)\n> >> -> Seq Scan on court (cost=0.00..3.29 rows=1 width=12) (actual time=0.085..0.096 rows=1 loops=1)\n> >> Filter: ('IL081025J'::text = (id)::text)\n> \n> What I don't really understand is the next part. It seems to be doing an\n> index scan on 3.7M rows, and getting very decent performance (5s), and\n> then merging against a table which returns only 8k rows.\n> Why is it having to look through all of those rows?\n> I may be missing something, but this says it is able to do 600 index\n> lookups / millisecond. Which seems superfast. 
(Compared to your earlier\n> 4ms / lookup)\n> \nMakes me a little confused myself...\n\n> Something fishy is going on here.\n> \n> \n> >> -> Merge Join (cost=2491.43..310707.66 rows=8839 width=113) (actual time=6892.656..9519.680 rows=8094 loops=1)\n> >> Merge Cond: ((\"outer\".actor_id)::text = \"inner\".\"?column7?\")\n> >> -> Index Scan using lit_actor_speed on litigant_details (cost=0.00..295722.00 rows=4956820 width=81) (actual time=0.027..5613.814 rows=3736703 loops=1)\n> >> -> Sort (cost=2491.43..2513.71 rows=8913 width=82) (actual time=116.071..122.272 rows=8100 loops=1)\n> >> Sort Key: (identity.actor_id)::text\n> >> -> Index Scan using name_speed on identity (cost=0.00..1906.66 rows=8913 width=82) (actual time=0.133..81.104 rows=8100 loops=1)\n> >> Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n> >> Filter: (((court_ori)::text = 'IL081025J'::text) AND ((full_name)::text ~~ 'SMITH%'::text))\n> >> Total runtime: 17859.917 ms\n> >\n> >\n> >>But really, you have worse index speed, and that needs to be figured out.\n> >>\n> >>John\n> >>=:->\n> \n> I'm assuming your data is private (since it looks like legal stuff).\n> Unless maybe that makes it part of the public record.\n> Anyway, I'm not able to, but sometimes someone like Tom can profile\n> stuff to see what is going on.\nI've had tom on here before..:) not my devel box, but my production box\na couple of years ago. \n\n\n> \n> I might just be messing up my ability to read the explain output. But\n> somehow things don't seem to be lining up with the cost of a single\n> index lookup.\n> On my crappy Celeron 450 box, an index lookup is 0.06ms once things are\n> cached in ram.\n> \n> John\n> =:->\n> \n> \n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Sat, 20 Aug 2005 13:53:23 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:\n>On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:\n> > Jeremiah Jahn wrote:\n> > > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n> > >\n><snip>\n> >\n> > > it's cached alright. I'm getting a read rate of about 150MB/sec. I would\n> > > have thought is would be faster with my raid setup. I think I'm going to\n> > > scrap the whole thing and get rid of LVM. I'll just do a straight ext3\n> > > system. Maybe that will help. Still trying to get suggestions for a\n> > > stripe size.\n> > >\n> >\n> > I don't think 150MB/s is out of the realm for a 14 drive array.\n> > How fast is time dd if=/dev/zero of=testfile bs=8192 count=1000000\n> >\n>time dd if=/dev/zero of=testfile bs=8192 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 1m24.248s\n>user 0m0.381s\n>sys 0m33.028s\n>\n>\n> > (That should create a 8GB file, which is too big to cache everything)\n> > And then how fast is:\n> > time dd if=testfile of=/dev/null bs=8192 count=1000000\n>\n>time dd if=testfile of=/dev/null bs=8192 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m54.139s\n>user 0m0.326s\n>sys 0m8.916s\n>\n>\n>and on a second run:\n>\n>real 0m55.667s\n>user 0m0.341s\n>sys 0m9.013s\n>\n>\n> >\n> > That should give you a semi-decent way of measuring how fast the RAID\n> > system is, since it should be too big to cache in ram.\n>\n>about 150MB/Sec. 
Is there no better way to make this go faster...?\nAssuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of \nthem doing raw sequential IO like this should be capable of at\n ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s= \n553MB/s if using Fujitsu MAU's, and ~7*86MB/s= 602MB/s if using \nMaxtor Atlas 15K II's to devices external to the RAID array.\n\n_IF_ the controller setup is high powered enough to keep that kind of \nIO rate up. This will require a controller or controllers providing \ndual channel U320 bandwidth externally and quad channel U320 \nbandwidth internally. IOW, it needs a controller or controllers \ntalking 64b 133MHz PCI-X, reasonably fast DSP/CPU units, and probably \na decent sized IO buffer as well.\n\nAFAICT, the Dell PERC4 controllers use various flavors of the LSI \nLogic MegaRAID controllers. What I don't know is which exact one \nyours is, nor do I know if it (or any of the MegaRAID controllers) \nare high powered enough.\n\nTalk to your HW supplier to make sure you have controllers adequate \nto your HD's.\n\n...and yes, your average access time will be in the 5.5ms - 6ms range \nwhen doing a physical seek.\nEven with RAID, you want to minimize seeks and maximize sequential IO \nwhen accessing them.\nBest to not go to HD at all ;-)\n\nHope this helps,\nRon Peacetree\n\n\n\n\n", "msg_date": "Sat, 20 Aug 2005 17:01:53 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "At 02:16 PM 8/20/2005, Jeremiah Jahn wrote:\n>I'm just watching gnome-system-monoitor. Which after careful \n>consideration.....and looking at dstat means I'm on CRACK....GSM isn't\n>showing cached memory usage....I asume that the cache memory usage \n>is where data off of the disks would be cached...?\n>\n>memory output from dstat is this for a few seconds:\n>\n>---procs--- ------memory-usage----- ---paging-- \n>--disk/sda----disk/sdb- ----swap--- ----total-cpu-usage----\n>run blk new|_used _buff _cach _free|__in_ _out_|_read write:_read \n>write|_used _free|usr sys idl wai hiq siq\n> 0 0 0|1336M 10M 4603M 17M| 490B 833B|3823B 3503k:1607k \n> 4285k| 160k 2048M| 4 1 89 7 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 \n> : 0 464k| 160k 2048M| 25 0 75 0 0 0\n><snip>\n> 1 0 0|1334M 10M 4596M 25M| 0 0 | 0 0 \n> : 0 56k| 160k 2048M| 21 4 75 0 0 0\n\nThen the \"low memory usage\" was a chimera. Excellent!\n\nGiven the evidence in this thread, IMO you should upgrade your box to \n16GB of RAM ASAP. That should be enough to cache most, if not all, \nof the 10GB of the \"hot\" part of your DB; thereby dedicating your HD \nsubsystem as much as possible to writes (which is unavoidable HD \nIO). As I've posted before, at $75-$150/GB, it's well worth the \ninvestment whenever you can prove it will help as we have here.\n\nHope this helps,\nRon Peacetree\n\n\n\n", "msg_date": "Sat, 20 Aug 2005 17:23:25 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> I'm just watching gnome-system-monoitor. Which after careful\n> consideration.....and looking at dstat means I'm on CRACK....GSM isn't\n> showing cached memory usage....I asume that the cache memory usage is\n> where data off of the disks would be cached...?\n> \n\nWell a simple \"free\" also tells you how much has been cached.\nI believe by reading the _cach line, it looks like you have 4.6G cached. 
\nSo you are indeed using memory.\n\nI'm still concerned why it seems to be taking 3-4ms per index lookup, \nwhen things should already be cached in RAM.\nNow, I may be wrong about whether the indexes are cached, but I sure \nwould expect them to be.\nWhat is the time for a cached query on your system (with normal nested \nloops)? (give the EXPLAIN ANALYZE for the *second* run, or maybe the \nfourth).\n\nI'm glad that we aren't seeing something weird with your kernel, at least.\n\nJohn\n=:->\n\n\n> \n> \n> memory output from dstat is this for a few seconds:\n> \n> ---procs--- ------memory-usage----- ---paging-- --disk/sda----disk/sdb- ----swap--- ----total-cpu-usage----\n> run blk new|_used _buff _cach _free|__in_ _out_|_read write:_read write|_used _free|usr sys idl wai hiq siq\n> 0 0 0|1336M 10M 4603M 17M| 490B 833B|3823B 3503k:1607k 4285k| 160k 2048M| 4 1 89 7 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 464k| 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 48k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 132k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 36k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 12k: 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 1 0 0|1337M 10M 4600M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 0 75 0 0 0\n> 2 0 0|1353M 10M 4585M 18M| 0 0 | 0 0 : 0 0 | 160k 2048M| 25 1 75 0 0 0\n> 1 0 0|1321M 10M 4616M 19M| 0 0 | 0 0 : 0 0 | 160k 2048M| 18 8 74 0 0 0\n> 1 0 0|1326M 10M 4614M 17M| 0 0 | 0 0 :4096B 0 | 160k 2048M| 16 10 74 1 0 0\n> 1 0 0|1330M 10M 4609M 17M| 0 0 | 0 12k:4096B 0 | 160k 2048M| 17 9 74 0 0 0\n> 0 1 0|1343M 10M 4596M 17M| 0 0 | 0 0 : 0 316M| 160k 2048M| 5 10 74 11 0 1\n> 0 1 0|1339M 10M 4596M 21M| 0 0 | 0 0 : 0 0 | 160k 2048M| 0 0 74 25 0 1\n> 0 2 0|1334M 10M 4596M 25M| 0 0 | 0 4096B: 0 0 | 160k 2048M| 0 0 54 44 0 1\n> 1 0 0|1326M 10M 4596M 34M| 0 0 | 0 0 : 0 364k| 160k 2048M| 4 1 60 34 0 1\n> 1 0 0|1290M 10M 4596M 70M| 0 0 | 0 12k: 0 0 | 160k 2048M| 24 1 75 0 0 0\n> 1 0 0|1301M 10M 4596M 59M| 0 0 | 0 20k: 0 0 | 160k 2048M| 21 4 75 0 0 0\n> 1 0 0|1312M 10M 4596M 48M| 0 0 | 0 0 : 0 0 | 160k 2048M| 22 4 75 0 0 0\n> 1 0 0|1323M 10M 4596M 37M| 0 0 | 0 0 : 0 24k| 160k 2048M| 21 4 75 0 0 0\n> 1 0 0|1334M 10M 4596M 25M| 0 0 | 0 0 : 0 56k| 160k 2048M| 21 4 75 0 0 0\n> \n> \n> \n> On Fri, 2005-08-19 at 16:07 -0500, John A Meinel wrote:\n> \n>>Jeremiah Jahn wrote:\n>>\n>>>Rebuild in progress with just ext3 on the raid array...will see if this\n>>>helps the access times. If it doesn't I'll mess with the stripe size. I\n>>>have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index\n>>>\n>>>\n>>>>/dev/null' none of this seems to have helped, or even increased my\n>>>\n>>>memory usage. argh! The only thing about this new system that I'm\n>>>unfamiliar with is the array setup and LVM, which is why I think that's\n>>>where the issue is. clustering and indexing as well as vacuum etc are\n>>>things that I do and have been aware of for sometime. Perhaps slony is a\n>>>factor, but I really don't see it causing problems on index read speed\n>>>esp. when it's not running.\n>>>\n>>>thanx for your help, I really appreciate it.\n>>>-jj-\n>>>\n>>\n>>By the way, how are you measuring memory usage? 
Can you give the output\n>>of that command, just to make sure you are reading it correctly.\n>>\n>>John\n>>=:->\n>>", "msg_date": "Sat, 20 Aug 2005 21:26:04 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Ron wrote:\n> At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:\n> \n>> On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:\n>> > Jeremiah Jahn wrote:\n>> > > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n>> > >\n>> <snip>\n>> >\n>> > > it's cached alright. I'm getting a read rate of about 150MB/sec. I \n>> would\n>> > > have thought is would be faster with my raid setup. I think I'm \n>> going to\n>> > > scrap the whole thing and get rid of LVM. I'll just do a straight \n>> ext3\n>> > > system. Maybe that will help. Still trying to get suggestions for a\n>> > > stripe size.\n>> > >\n\nWell, since you can get a read of the RAID at 150MB/s, that means that \nit is actual I/O speed. It may not be cached in RAM. Perhaps you could \ntry the same test, only using say 1G, which should be cached.\n\n>> >\n>> > I don't think 150MB/s is out of the realm for a 14 drive array.\n>> > How fast is time dd if=/dev/zero of=testfile bs=8192 count=1000000\n>> >\n>> time dd if=/dev/zero of=testfile bs=8192 count=1000000\n>> 1000000+0 records in\n>> 1000000+0 records out\n>>\n>> real 1m24.248s\n>> user 0m0.381s\n>> sys 0m33.028s\n>>\n>>\n>> > (That should create a 8GB file, which is too big to cache everything)\n>> > And then how fast is:\n>> > time dd if=testfile of=/dev/null bs=8192 count=1000000\n>>\n>> time dd if=testfile of=/dev/null bs=8192 count=1000000\n>> 1000000+0 records in\n>> 1000000+0 records out\n>>\n>> real 0m54.139s\n>> user 0m0.326s\n>> sys 0m8.916s\n>>\n>>\n>> and on a second run:\n>>\n>> real 0m55.667s\n>> user 0m0.341s\n>> sys 0m9.013s\n>>\n>>\n>> >\n>> > That should give you a semi-decent way of measuring how fast the RAID\n>> > system is, since it should be too big to cache in ram.\n>>\n>> about 150MB/Sec. Is there no better way to make this go faster...?\n\nI'm actually curious about PCI bus saturation at this point. Old 32-bit \n33MHz pci could only push 1Gbit = 100MB/s. Now, I'm guessing that this \nis a higher performance system. But I'm really surprised that your write \nspeed is that close to your read speed. (100MB/s write, 150MB/s read).\n\n> \n> Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them \n> doing raw sequential IO like this should be capable of at\n> ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s= 553MB/s \n> if using Fujitsu MAU's, and ~7*86MB/s= 602MB/s if using Maxtor Atlas 15K \n> II's to devices external to the RAID array.\n\nI know I thought these were SATA drives, over 2 controllers. I could be \ncompletely wrong, though.\n\n> \n> _IF_ the controller setup is high powered enough to keep that kind of IO \n> rate up. This will require a controller or controllers providing dual \n> channel U320 bandwidth externally and quad channel U320 bandwidth \n> internally. IOW, it needs a controller or controllers talking 64b \n> 133MHz PCI-X, reasonably fast DSP/CPU units, and probably a decent sized \n> IO buffer as well.\n> \n> AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic \n> MegaRAID controllers. 
What I don't know is which exact one yours is, \n> nor do I know if it (or any of the MegaRAID controllers) are high \n> powered enough.\n> \n> Talk to your HW supplier to make sure you have controllers adequate to \n> your HD's.\n> \n> ...and yes, your average access time will be in the 5.5ms - 6ms range \n> when doing a physical seek.\n> Even with RAID, you want to minimize seeks and maximize sequential IO \n> when accessing them.\n> Best to not go to HD at all ;-)\n\nWell, certainly, if you can get more into RAM, you're always better off. \nFor writing, a battery-backed write cache, and for reading lots of \nsystem RAM.\n\n> \n> Hope this helps,\n> Ron Peacetree\n> \n\nJohn\n=:->", "msg_date": "Sat, 20 Aug 2005 21:32:04 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "\nI'm Sorry,\nthat I wrote that the option would risk the LOG\npersistency with PostgreSQL.\n\nI should have asked instead, that how you have taken this into account.\n\nTom Lane's email below convinces me, that you have taken the metadata\nonly journalling into account and still fulfill the persistency\nof committed transactions.\n\nThis means, that Ext3 with data=writeback is safe with PostgreSQL even with\na hardware reset button. Metadata only journalling is faster, when it\ncan be used.\n\nI didn't know, that any database can keep the database guarantees\nwith the metadata only journalling option.\n\nI looked at your problem.\nOne of the problems is that you need to keep the certain data\ncached in memory all the time.\n\nThat could be solved by doing\nSELECT COUNT(*) from to_be_cached;\nas a cron job. It loads the whole table into the Linux Kernel memory cache.\n\n\nMarko Ristola\n\nTom Lane wrote:\n\n>Right. I think the optimal setting for a Postgres data directory is\n>journaled metadata, non-journaled file content. Postgres can take care\n>of the data integrity for itself, but it does assume that the filesystem\n>stays structurally sane (eg, data blocks don't get reassigned to the\n>wrong file), so you need a filesystem guarantee about the metadata.\n>\n>WAL files are handled in a much more conservative way (created, filled\n>with zeroes, and fsync'd before we ever put any valuable data in 'em).\n>If you have WAL on its own drive then I think Mike's recommendation of\n>no filesystem journalling at all for that drive is probably OK. Or\n>you can do same as above (journal metadata only) if you want a little\n>extra protection.\n>\n>And of course all this reasoning depends on the assumption that the\n>drive tells the truth about write-completion. If the drive does write\n>caching it had better be able to complete all its accepted writes before\n>dying in a power failure. (Hence, battery-backed write cache is OK, any\n>other kind is evil.)\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n> \n>\n\n", "msg_date": "Sun, 21 Aug 2005 12:52:04 +0300", "msg_from": "Marko Ristola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:\n> Ron wrote:\n> > At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:\n> > \n> >> On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:\n> >> > Jeremiah Jahn wrote:\n> >> > > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n> >> > >\n> >> <snip>\n> >> >\n> >> > > it's cached alright. 
I'm getting a read rate of about 150MB/sec. I \n> >> would\n> >> > > have thought is would be faster with my raid setup. I think I'm \n> >> going to\n> >> > > scrap the whole thing and get rid of LVM. I'll just do a straight \n> >> ext3\n> >> > > system. Maybe that will help. Still trying to get suggestions for a\n> >> > > stripe size.\n> >> > >\n> \n> Well, since you can get a read of the RAID at 150MB/s, that means that \n> it is actual I/O speed. It may not be cached in RAM. Perhaps you could \n> try the same test, only using say 1G, which should be cached.\n\n[root@io pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000\n1000000+0 records in\n1000000+0 records out\n\nreal 0m8.885s\nuser 0m0.299s\nsys 0m6.998s\n[root@io pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000\n1000000+0 records in\n1000000+0 records out\n\nreal 0m1.654s\nuser 0m0.232s\nsys 0m1.415s\n\n\n> \n> >> >\n> >> > I don't think 150MB/s is out of the realm for a 14 drive array.\n> >> > How fast is time dd if=/dev/zero of=testfile bs=8192 count=1000000\n> >> >\n> >> time dd if=/dev/zero of=testfile bs=8192 count=1000000\n> >> 1000000+0 records in\n> >> 1000000+0 records out\n> >>\n> >> real 1m24.248s\n> >> user 0m0.381s\n> >> sys 0m33.028s\n> >>\n> >>\n> >> > (That should create a 8GB file, which is too big to cache everything)\n> >> > And then how fast is:\n> >> > time dd if=testfile of=/dev/null bs=8192 count=1000000\n> >>\n> >> time dd if=testfile of=/dev/null bs=8192 count=1000000\n> >> 1000000+0 records in\n> >> 1000000+0 records out\n> >>\n> >> real 0m54.139s\n> >> user 0m0.326s\n> >> sys 0m8.916s\n> >>\n> >>\n> >> and on a second run:\n> >>\n> >> real 0m55.667s\n> >> user 0m0.341s\n> >> sys 0m9.013s\n> >>\n> >>\n> >> >\n> >> > That should give you a semi-decent way of measuring how fast the RAID\n> >> > system is, since it should be too big to cache in ram.\n> >>\n> >> about 150MB/Sec. Is there no better way to make this go faster...?\n> \n> I'm actually curious about PCI bus saturation at this point. Old 32-bit \n> 33MHz pci could only push 1Gbit = 100MB/s. Now, I'm guessing that this \n> is a higher performance system. But I'm really surprised that your write \n> speed is that close to your read speed. (100MB/s write, 150MB/s read).\n\nThe raid array I have is currently set up to use a single channel. But I\nhave dual controllers In the array. And dual external slots on the card.\nThe machine is brand new and has pci-e backplane. \n\n\n\n> \n> > \n> > Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them \n> > doing raw sequential IO like this should be capable of at\n> > ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s= 553MB/s \nBTW I'm using Seagate Cheetah 15K.4's\n\n> > if using Fujitsu MAU's, and ~7*86MB/s= 602MB/s if using Maxtor Atlas 15K \n> > II's to devices external to the RAID array.\n> \n> I know I thought these were SATA drives, over 2 controllers. I could be \n> completely wrong, though.\n> \n> > \n> > _IF_ the controller setup is high powered enough to keep that kind of IO \n> > rate up. This will require a controller or controllers providing dual \n> > channel U320 bandwidth externally and quad channel U320 bandwidth \n> > internally. IOW, it needs a controller or controllers talking 64b \n> > 133MHz PCI-X, reasonably fast DSP/CPU units, and probably a decent sized \n> > IO buffer as well.\n> > \n> > AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic \n> > MegaRAID controllers. 
What I don't know is which exact one yours is, \n> > nor do I know if it (or any of the MegaRAID controllers) are high \n> > powered enough.\n\nPERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n\n> > \n> > Talk to your HW supplier to make sure you have controllers adequate to \n> > your HD's.\n> > \n> > ...and yes, your average access time will be in the 5.5ms - 6ms range \n> > when doing a physical seek.\n> > Even with RAID, you want to minimize seeks and maximize sequential IO \n> > when accessing them.\n> > Best to not go to HD at all ;-)\n> \n> Well, certainly, if you can get more into RAM, you're always better off. \n> For writing, a battery-backed write cache, and for reading lots of \n> system RAM.\n\nI'm not really worried about the writing, it's the reading the reading\nthat needs to be faster. \n\n> \n> > \n> > Hope this helps,\n> > Ron Peacetree\n> > \n> \n> John\n> =:->\n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Sun, 21 Aug 2005 09:54:26 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:\n> \n>>Ron wrote:\n>>\n>>>At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:\n>>\n>>Well, since you can get a read of the RAID at 150MB/s, that means that \n>>it is actual I/O speed. It may not be cached in RAM. Perhaps you could \n>>try the same test, only using say 1G, which should be cached.\n> \n> \n> [root@io pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000\n> 1000000+0 records in\n> 1000000+0 records out\n> \n> real 0m8.885s\n> user 0m0.299s\n> sys 0m6.998s\n> [root@io pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000\n> 1000000+0 records in\n> 1000000+0 records out\n> \n> real 0m1.654s\n> user 0m0.232s\n> sys 0m1.415s\n> \n\nThe write time seems about the same (but you only have 128MB of write \ncache), but your read jumped up to 620MB/s. So you drives do seem to be \ngiving you 150MB/s.\n\n> \n\n...\n\n>>I'm actually curious about PCI bus saturation at this point. Old 32-bit \n>>33MHz pci could only push 1Gbit = 100MB/s. Now, I'm guessing that this \n>>is a higher performance system. But I'm really surprised that your write \n>>speed is that close to your read speed. (100MB/s write, 150MB/s read).\n> \n> \n> The raid array I have is currently set up to use a single channel. But I\n> have dual controllers In the array. And dual external slots on the card.\n> The machine is brand new and has pci-e backplane. \n> \n> \n> \n> \n>>>Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them \n>>>doing raw sequential IO like this should be capable of at\n>>> ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s= 553MB/s \n> \n> BTW I'm using Seagate Cheetah 15K.4's\n> \n\nNow, are the numbers that Ron is quoting in megabytes or megabits? I'm \nguessing he knows what he is talking about, and is doing megabytes. \n80MB/s sustained seems rather high for a hard-disk.\n\nThough this page:\nhttp://www.storagereview.com/articles/200411/20041116ST3146754LW_2.html\n\nDoes seem to agree with that statement. (Between 56 and 93MB/s)\n\nAnd since U320 is a 320MB/s bus, it doesn't seem like anything there \nshould be saturating. So why the low performance????\n\n>>\n>>>_IF_ the controller setup is high powered enough to keep that kind of IO \n>>>rate up. 
This will require a controller or controllers providing dual \n>>>channel U320 bandwidth externally and quad channel U320 bandwidth \n>>>internally. IOW, it needs a controller or controllers talking 64b \n>>>133MHz PCI-X, reasonably fast DSP/CPU units, and probably a decent sized \n>>>IO buffer as well.\n>>>\n>>>AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic \n>>>MegaRAID controllers. What I don't know is which exact one yours is, \n>>>nor do I know if it (or any of the MegaRAID controllers) are high \n>>>powered enough.\n> \n> \n> PERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n\nDo you know which card it is? Does it look like this one:\nhttp://www.lsilogic.com/products/megaraid/megaraid_320_2e.html\n\nJudging by the 320 speed, and 2 external controllers, that is my guess.\nThey at least claim a theoretical max of 2GB/s.\n\nWhich makes you wonder why reading from RAM is only able to get \nthroughput of 600MB/s. Did you run it multiple times? On my windows \nsystem, I get just under 550MB/s for what should be cached, copying from \n/dev/zero to /dev/null I get 2.4GB/s (though that might be a no-op).\n\nOn a similar linux machine, I'm able to get 1200MB/s for a cached file. \n(And 3GB/s for a zero=>null copy).\n\nJohn\n=:->\n\n> \n> \n>>>Talk to your HW supplier to make sure you have controllers adequate to \n>>>your HD's.\n>>>\n>>>...and yes, your average access time will be in the 5.5ms - 6ms range \n>>>when doing a physical seek.\n>>>Even with RAID, you want to minimize seeks and maximize sequential IO \n>>>when accessing them.\n>>>Best to not go to HD at all ;-)\n>>\n>>Well, certainly, if you can get more into RAM, you're always better off. \n>>For writing, a battery-backed write cache, and for reading lots of \n>>system RAM.\n> \n> \n> I'm not really worried about the writing, it's the reading the reading\n> that needs to be faster. \n> \n> \n>>>Hope this helps,\n>>>Ron Peacetree\n>>>\n>>\n>>John\n>>=:->", "msg_date": "Sun, 21 Aug 2005 10:51:52 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:\n>On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:\n> > Ron wrote:\n> >\n> > Well, since you can get a read of the RAID at 150MB/s, that means that\n> > it is actual I/O speed. It may not be cached in RAM. Perhaps you could\n> > try the same test, only using say 1G, which should be cached.\n>\n>[root@io pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m8.885s\n>user 0m0.299s\n>sys 0m6.998s\n\nThis is abysmally slow.\n\n\n>[root@io pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m1.654s\n>user 0m0.232s\n>sys 0m1.415s\n\nThis transfer rate is the only one out of the 4 you have posted that \nis in the vicinity of where it should be.\n\n\n>The raid array I have is currently set up to use a single channel. But I\n>have dual controllers in the array. And dual external slots on the card.\n>The machine is brand new and has pci-e backplane.\n>\nSo you have 2 controllers each with 2 external slots? 
But you are \ncurrently only using 1 controller and only one external slot on that \ncontroller?\n\n\n> > > Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them\n> > > doing raw sequential IO like this should be capable of at\n> > > ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's\n>BTW I'm using Seagate Cheetah 15K.4's\n\nOK, now we have that nailed down.\n\n\n> > > AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic\n> > > MegaRAID controllers. What I don't know is which exact one yours is,\n> > > nor do I know if it (or any of the MegaRAID controllers) are high\n> > > powered enough.\n>\n>PERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n\nLooks like they are using the LSI Logic MegaRAID SCSI 320-2E \ncontroller. IIUC, you have 2 of these, each with 2 external channels?\n\nThe specs on these appear a bit strange. They are listed as being a \nPCI-Ex8 card, which means they should have a max bandwidth of 20Gb/s= \n2GB/s, yet they are also listed as only supporting dual channel U320= \n640MB/s when they could easily support quad channel U320= \n1.28GB/s. Why bother building a PCI-Ex8 card when only a PCI-Ex4 \ncard (which is a more standard physical format) would've been \nenough? Or if you are going to build a PCI-Ex8 card, why not support \nquad channel U320? This smells like there's a problem with LSI's design.\n\nThe 128MB buffer also looks suspiciously small, and I do not see any \nupgrade path for it on LSI Logic's site. \"Serious\" RAID controllers \nfrom companies like Xyratex, Engino, and Dot-hill can have up to \n1-2GB of buffer, and there's sound technical reasons for it. See if \nthere's a buffer upgrade available or if you can get controllers that \nhave larger buffer capabilities.\n\nRegardless of the above, each of these controllers should still be \ngood for about 80-85% of 640MB/s, or ~510-540 MB/s apiece when doing \nraw sequential IO if you plug 3-4 fast enough HD's into each SCSI \nchannel. Cheetah 15K.4's certainly are fast enough. Optimal setup \nis probably to split each RAID 1 pair so that one HD is on each of \nthe SCSI channels, and then RAID 0 those pairs. That will also \nprotect you from losing the entire disk subsystem if one of the SCSI \nchannels dies.\n\nThat 128MB of buffer cache may very well be too small to keep the IO \nrate up, and/or there may be a more subtle problem with the LSI card, \nand/or you may have a configuration problem, but _something(s)_ need \nfixing since you are only getting raw sequential IO of ~100-150MB/s \nwhen it should be above 500MB/s.\n\nThis will make the most difference for initial reads (first time you \nload a table, first time you make a given query, etc) and for any writes.\n\nYour HW provider should be able to help you, even if some of the HW \nin question needs to be changed. You paid for a solution. As long \nas this stuff is performing at so much less then what it is supposed \nto, you have not received the solution you paid for.\n\nBTW, on the subject of RAID stripes IME the sweet spot tends to be in \nthe 64KB to 256KB range (very large, very read heavy data mines can \nwant larger RAID stripes.). Only experimentation will tell you what \nresults in the best performance for your application.\n\n\n>I'm not really worried about the writing, it's the reading the reading\n>that needs to be faster.\n\nInitial reads are only going to be as fast as your HD subsystem, so \nthere's a reason for making the HD subsystem faster even if all you \ncare about is reads. 
In addition, I'll repeat my previous advice \nthat upgrading to 16GB of RAM would be well worth it for you.\n\nHope this helps,\nRon Peacetree\n\n\n", "msg_date": "Sun, 21 Aug 2005 16:13:17 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "On Sun, 2005-08-21 at 16:13 -0400, Ron wrote:\n> At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:\n> >On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:\n> > > Ron wrote:\n> > >\n> > > Well, since you can get a read of the RAID at 150MB/s, that means that\n> > > it is actual I/O speed. It may not be cached in RAM. Perhaps you could\n> > > try the same test, only using say 1G, which should be cached.\n> >\n> >[root@io pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000\n> >1000000+0 records in\n> >1000000+0 records out\n> >\n> >real 0m8.885s\n> >user 0m0.299s\n> >sys 0m6.998s\n> \n> This is abysmally slow.\n> \n> \n> >[root@io pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000\n> >1000000+0 records in\n> >1000000+0 records out\n> >\n> >real 0m1.654s\n> >user 0m0.232s\n> >sys 0m1.415s\n> \n> This transfer rate is the only one out of the 4 you have posted that \n> is in the vicinity of where it should be.\n> \n> \n> >The raid array I have is currently set up to use a single channel. But I\n> >have dual controllers in the array. And dual external slots on the card.\n> >The machine is brand new and has pci-e backplane.\n> >\n> So you have 2 controllers each with 2 external slots? But you are \n> currently only using 1 controller and only one external slot on that \n> controller?\n\nSorry, no. I have one dual channel card in the system and two\ncontrollers on the array. Dell PowerVault 220S w/ PERC4eDC-PCI Express\n\n> \n> \n> > > > Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them\n> > > > doing raw sequential IO like this should be capable of at\n> > > > ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's\n> >BTW I'm using Seagate Cheetah 15K.4's\n> \n> OK, now we have that nailed down.\n> \n> \n> > > > AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic\n> > > > MegaRAID controllers. What I don't know is which exact one yours is,\n> > > > nor do I know if it (or any of the MegaRAID controllers) are high\n> > > > powered enough.\n> >\n> >PERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n> \n> Looks like they are using the LSI Logic MegaRAID SCSI 320-2E \n> controller. IIUC, you have 2 of these, each with 2 external channels?\n> \n> The specs on these appear a bit strange. They are listed as being a \n> PCI-Ex8 card, which means they should have a max bandwidth of 20Gb/s= \n> 2GB/s, yet they are also listed as only supporting dual channel U320= \n> 640MB/s when they could easily support quad channel U320= \n> 1.28GB/s. Why bother building a PCI-Ex8 card when only a PCI-Ex4 \n> card (which is a more standard physical format) would've been \n> enough? Or if you are going to build a PCI-Ex8 card, why not support \n> quad channel U320? This smells like there's a problem with LSI's design.\n> \n> The 128MB buffer also looks suspiciously small, and I do not see any \n> upgrade path for it on LSI Logic's site. \"Serious\" RAID controllers \n> from companies like Xyratex, Engino, and Dot-hill can have up to \n> 1-2GB of buffer, and there's sound technical reasons for it. 
See if \n> there's a buffer upgrade available or if you can get controllers that \n> have larger buffer capabilities.\n> \n> Regardless of the above, each of these controllers should still be \n> good for about 80-85% of 640MB/s, or ~510-540 MB/s apiece when doing \n> raw sequential IO if you plug 3-4 fast enough HD's into each SCSI \n> channel. Cheetah 15K.4's certainly are fast enough. Optimal setup \n> is probably to split each RAID 1 pair so that one HD is on each of \n> the SCSI channels, and then RAID 0 those pairs. That will also \n> protect you from losing the entire disk subsystem if one of the SCSI \n> channels dies.\nI like this idea, but how exactly does one bond the two channels\ntogether? Won't this cause me to have both an /dev/sdb and an /dev/sdc? \n\n\n> \n> That 128MB of buffer cache may very well be too small to keep the IO \n> rate up, and/or there may be a more subtle problem with the LSI card, \n> and/or you may have a configuration problem, but _something(s)_ need \n> fixing since you are only getting raw sequential IO of ~100-150MB/s \n> when it should be above 500MB/s.\n\nIt looks like there's a way to add more memory to it.\n\n> \n> This will make the most difference for initial reads (first time you \n> load a table, first time you make a given query, etc) and for any writes.\n> \n> Your HW provider should be able to help you, even if some of the HW \n> in question needs to be changed. You paid for a solution. As long \n> as this stuff is performing at so much less then what it is supposed \n> to, you have not received the solution you paid for.\n> \n> BTW, on the subject of RAID stripes IME the sweet spot tends to be in \n> the 64KB to 256KB range (very large, very read heavy data mines can \n> want larger RAID stripes.). Only experimentation will tell you what \n> results in the best performance for your application.\nI think I have them very small at the moment. \n\n> \n> \n> >I'm not really worried about the writing, it's the reading the reading\n> >that needs to be faster.\n> \n> Initial reads are only going to be as fast as your HD subsystem, so \n> there's a reason for making the HD subsystem faster even if all you \n> care about is reads. In addition, I'll repeat my previous advice \n> that upgrading to 16GB of RAM would be well worth it for you.\n\n12GB is my max. I may run with it for a while and see. \n\n> \n> Hope this helps,\n> Ron Peacetree\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n-- \nSpeak softly and carry a +6 two-handed sword.\n\n", "msg_date": "Mon, 22 Aug 2005 09:42:43 -0500", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Ron wrote:\n>> PERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n> \n> Looks like they are using the LSI Logic MegaRAID SCSI 320-2E \n> controller. IIUC, you have 2 of these, each with 2 external channels?\n\nA lot of people have mentioned Dell's versions of the LSI cards can be \nWAY slower than the ones you buy from LSI. Why this is the case? Nobody \nknows for sure.\n\nHere's a guess on my part. A while back, I was doing some googling. And \ninstead of typing \"LSI MegaRAID xxx\", I just typed \"MegaRAID xxx\". 
Going \nbeyond the initial pages, I saw Tekram -- a company that supposedly \nproduces their own controllers -- listing products with the exact model \nnumbers and photos as cards from LSI and Areca. Seemed puzzling until I \nread a review about SATA RAID cards where it mentioned Tekram produces \nthe Areca cards under their own name but using slower components to \navoid competing at the highend with them.\n\nSo what may be happening is that the logic circuitry on the Dell PERCs \nare the same as the source LSI cards, the speed of the RAID \nprocessor/RAM/internal buffers/etc is not as fast so Dell can shave off \na few bucks for every server. That would mean while a true LSI card has \nthe processing power to do the RAID calculates for X drives, the Dell \nversion probably can only do X*0.6 drives or so.\n\n\n> The 128MB buffer also looks suspiciously small, and I do not see any \n> upgrade path for it on LSI Logic's site. \"Serious\" RAID controllers \n> from companies like Xyratex, Engino, and Dot-hill can have up to 1-2GB \n\nThe card is upgradable. If you look at the pic of the card, it shows a \nSDRAM DIMM versus integrated RAM chips. I've also read reviews a while \nback comparing benchmarks of the 320-2 w/ 128K versus 512K onboard RAM. \nTheir product literature is just nebulous on the RAM upgrade part. I'm \nsure if you opened up the PDF manuals, you could find the exact info\n\n\n> That 128MB of buffer cache may very well be too small to keep the IO \n> rate up, and/or there may be a more subtle problem with the LSI card, \n> and/or you may have a configuration problem, but _something(s)_ need \n> fixing since you are only getting raw sequential IO of ~100-150MB/s when \n> it should be above 500MB/s.\n\nI think it just might be the Dell hardware or the lack of 64-bit IOMMU \non Xeon's. Here's my numbers on 320-1 w/ 128K paired up with Opterons \ncompared to Jeremiah's.\n\n >> # time dd if=/dev/zero of=testfile bs=1024 count=1000000\n >> 1000000+0 records in\n >> 1000000+0 records out\n >>\n >> real 0m8.885s\n >> user 0m0.299s\n >> sys 0m6.998s\n\n2x15K RAID1\nreal 0m14.493s\nuser 0m0.255s\nsys 0m11.712s\n\n6x15K RAID10 (2x 320-1)\nreal 0m9.986s\nuser 0m0.200s\nsys 0m8.634s\n\n\n >> # time dd of=/dev/null if=testfile bs=1024 count=1000000\n >> 1000000+0 records in\n >> 1000000+0 records out\n >>\n >> real 0m1.654s\n >> user 0m0.232s\n >> sys 0m1.415s\n\n2x15K RAID1\nreal 0m3.383s\nuser 0m0.176s\nsys 0m3.207s\n\n6x15K RAID10 (2x 320-1)\nreal 0m2.427s\nuser 0m0.178s\nsys 0m2.250s\n\nIf all 14 HDs are arranged in a RAID10 array, I'd say there's definitely \nsomething wrong with Jeremiah's hardware.\n\n\n", "msg_date": "Mon, 22 Aug 2005 08:07:36 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" }, { "msg_contents": "Jeremiah Jahn wrote:\n> On Sun, 2005-08-21 at 16:13 -0400, Ron wrote:\n>\n>>At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:\n>>\n\n...\n\n>>So you have 2 controllers each with 2 external slots? But you are\n>>currently only using 1 controller and only one external slot on that\n>>controller?\n>\n>\n> Sorry, no. I have one dual channel card in the system and two\n> controllers on the array. Dell PowerVault 220S w/ PERC4eDC-PCI Express\n>\n>\n\n...\n\n>>Regardless of the above, each of these controllers should still be\n>>good for about 80-85% of 640MB/s, or ~510-540 MB/s apiece when doing\n>>raw sequential IO if you plug 3-4 fast enough HD's into each SCSI\n>>channel. 
Cheetah 15K.4's certainly are fast enough. Optimal setup\n>>is probably to split each RAID 1 pair so that one HD is on each of\n>>the SCSI channels, and then RAID 0 those pairs. That will also\n>>protect you from losing the entire disk subsystem if one of the SCSI\n>>channels dies.\n>\n> I like this idea, but how exactly does one bond the two channels\n> together? Won't this cause me to have both an /dev/sdb and an /dev/sdc?\n>\n\nWell, even if you did, you could always either use software raid, or lvm\nto turn it into a single volume.\n\nIt also depends what the controller card bios would let you get away\nwith. Some cards would let you setup 4 RAID1's (one drive from each\nchannel), and then create a RAID0 of those pairs. Software raid should\ndo this without any problem. And can even be done such that it can be\ngrown in the future, as well as work across multiple cards (though the\nlatter is supported by some cards as well).\n\n>\n>\n>>That 128MB of buffer cache may very well be too small to keep the IO\n>>rate up, and/or there may be a more subtle problem with the LSI card,\n>>and/or you may have a configuration problem, but _something(s)_ need\n>>fixing since you are only getting raw sequential IO of ~100-150MB/s\n>>when it should be above 500MB/s.\n>\n>\n> It looks like there's a way to add more memory to it.\n\nThis memory probably helps more in writing than reading. If you are\nreading the same area over and over, it might end up being a little bit\nof extra cache for that (but it should already be cached in system RAM,\nso you don't really get anything).\n\n...\n\n>>Initial reads are only going to be as fast as your HD subsystem, so\n>>there's a reason for making the HD subsystem faster even if all you\n>>care about is reads. In addition, I'll repeat my previous advice\n>>that upgrading to 16GB of RAM would be well worth it for you.\n>\n>\n> 12GB is my max. I may run with it for a while and see.\n\nIf your working set truly is 10GB, then you can get a massive\nperformance increase even at 12GB. If your working set is 10GB and you\nhave 6GB of RAM, it probably is always swapping out what it just read\nfor the new stuff, even though you will read that same thing again in a\nfew seconds. So rather than just paying for the 4GB that can't be\ncached, you pay for the whole 10.\n\nJohn\n=:->\n\n>\n>\n>>Hope this helps,\n>>Ron Peacetree\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match", "msg_date": "Tue, 23 Aug 2005 04:48:20 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremly low memory usage" } ]
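For reference, the layout suggested in the thread above (split each RAID 1 pair across the two SCSI channels, then RAID 0 the pairs) can also be built with Linux software RAID if the controller simply exposes the individual disks. This is only a sketch: the device names, number of pairs and chunk size below are placeholder assumptions, not the poster's actual configuration.

# one mirror per pair, one member taken from each SCSI channel
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
# stripe the mirrors; 64KB-256KB chunks are the range suggested in the thread
mdadm --create /dev/md3 --level=0 --chunk=64 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2

Losing a whole SCSI channel then costs one member of each mirror rather than the entire stripe, which is the failure mode described above.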
[ { "msg_contents": "Hello:\n\nWe are having serious performance problems using JBOSS and PGSQL.\n\nI'm sure the problem has to do with the application itself (and neither\nwith JBOSS nor PGSQL) but the fact is that we are using desktop\nequipment to run both Jboss and Postgresql (An Athlon 2600, 1 Gb Ram,\nIDE HDD with 60 Mb/sec Transfer Rate), and the answers arise:\n\nIf we upgrade our hardware to a Dual Processor would the transactions\nper second increase significantly? Would Postgresql take advantage from\nSMP? Presumably yes, but can we do a forecast about the number of tps?\nWhat we need is a paper with some figures showing the expected\nperformance in different environments. Some study about the \"degree of\ncorrelation\" between TPS and Number of Processors, Cache, Frequency,\nWord Size, Architecture, etc. \n\nIt exists something like this? Does anybody has experience about this\nsubject?\n\nThanks in advance and best regards.\n\nP.S. I've been looking at www.tpc.org but I could't find anything\nvaluable.", "msg_date": "Thu, 18 Aug 2005 12:01:24 -0300", "msg_from": "\"Sebastian Lallana\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Tx forecast improving harware capabilities." }, { "msg_contents": "Sebastian,\n\n> We are having serious performance problems using JBOSS and PGSQL.\n\nHow about some information about your application? Performance tuning \napproaches vary widely according to what you're doing with the database.\n\nAlso, read this:\nhttp://www.powerpostgresql.com/PerfList\n\n> I'm sure the problem has to do with the application itself (and neither\n> with JBOSS nor PGSQL) but the fact is that we are using desktop\n> equipment to run both Jboss and Postgresql (An Athlon 2600, 1 Gb Ram,\n> IDE HDD with 60 Mb/sec Transfer Rate), and the answers arise:\n\nWell, first off, the IDE HDD is probably killing performance unless your \napplication is 95% read or greater.\n\n> If we upgrade our hardware to a Dual Processor would the transactions\n> per second increase significantly? Would Postgresql take advantage from\n> SMP? Presumably yes, but can we do a forecast about the number of tps?\n\nIf this is an OLTP application, chances are that nothing is going to improve \nperformance until you get decent disk support.\n\n> What we need is a paper with some figures showing the expected\n> performance in different environments.
Some study about the \"degree of\n> correlation\" between TPS and Number of Processors, Cache, Frequency,\n> Word Size, Architecture, etc.\n\nI don't think such a thing exists even for Oracle. Hardware configuration \nfor maximum performance is almost entirely dependant on your application.\n\nIf it helps, running DBT2 (an OLTP test devised by OSDL after TPC-C), I can \neasily get 1700 new orders per minute (NOTPM) (about 3000 total \nmultiple-write transactions per minute) on a quad-pentium-III with 4GB RAM \nand 14 drives, and 6500 notpm on a dual-Itanium machine. \n\n> P.S. I've been looking at www.tpc.org but I could't find anything\n> valuable.\n\nNor would you for any real-world situation even if we had a TPC benchmark \n(which are involved and expensive, give us a couple of years). The TPC \nbenchmarks are more of a litmus test that your database system & platform are \n\"competitive\"; they don't really relate to real-world performance (unless you \nhave budget for an 112-disk system!)\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 18 Aug 2005 09:30:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Tx forecast improving harware capabilities." }, { "msg_contents": "\nOn 18 Aug 2005, at 16:01, Sebastian Lallana wrote:\n\n\n> It exists something like this? Does anybody has experience about \n> this subject?\n\nI've just been through this with a client with both a badly tuned Pg and\nan application being less than optimal.\n\nFirst, find a benchmark. Just something you can hold on to. For us, it\nwas the generation time of the site's home page. In this case, 7 \nseconds.\nWe looked hard at postgresql.conf, planned the memory usage, sort_memory\nand all that. That was a boost. Then we looked at the queries that were\nbeing thrown at the database. Over 200 to build one page! So, a layer\nof caching was built into the web server layer. Finally, some frequently\noccurring combinations of queries were pushed down into stored procs.\nWe got the page gen time down to 1.5 seconds AND the server being stable\nunder extreme stress. So, a fair win.\n\nThanks to cms for several clues.\n\nSo, without understanding your application and were it's taking the \ntime,\nyou can't begin to estimate hardware usage.\n\n\n", "msg_date": "Thu, 18 Aug 2005 23:08:29 +0100", "msg_from": "David Hodgkinson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Tx forecast improving harware capabilities." } ]
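A quick way to attach a number to "transactions per second on this hardware" is contrib/pgbench, which ships with PostgreSQL. It runs a crude TPC-B-style mix, not a JBoss workload, so treat the result only as a relative baseline for comparing the desktop box against a proposed dual-processor box; the database name and scale factor here are arbitrary.

createdb tpsbench
pgbench -i -s 10 tpsbench       # initialize: scale 10 gives 1,000,000 rows in the accounts table
pgbench -c 10 -t 1000 tpsbench  # 10 clients x 1000 transactions each; reports tps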
[ { "msg_contents": "Christopher\n> You could use a 1 column/1 row table perhaps. Use some sort of\n> locking mechanism.\n> \n> Also, check out contrib/userlock\n\nuserlock is definitely the way to go for this type of problem. \n\nThey are really the only way to provide locking facilities that live\noutside transactions.\n\nYou are provided with 48 bits of lock space in the form of offset/block\nin a 32 bit field and a 16 bit field. The 16 bit field could be the pid\nof the locker and the 32 bit field the oid of the function.\n\nUnfortunately, userlocks are not really easy to query via the pg_locks()\nview. However this has been addressed for 8.1. In 8.1, it will be\ntrivial to create a function which checks the number of lockers on the\nfunction oid and acquires a lock if there are fewer than a certain number.\n\nMerlin\n", "msg_date": "Thu, 18 Aug 2005 15:22:46 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit number of concurrent callers to a stored proc?" } ]
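For reference, the 1 column/1 row gate table mentioned above can be sketched in plain SQL (all names here are invented). Because ordinary row locks and updates only take effect at commit, the slot has to be taken and released in separate short transactions around the expensive call, and a client that dies in between leaks a slot; that limitation is exactly what makes userlocks, which live outside transactions, attractive for this job.

-- shared gate row; allow at most 5 concurrent callers
CREATE TABLE proc_gate (active int NOT NULL);
INSERT INTO proc_gate VALUES (0);

-- transaction 1: try to take a slot ("UPDATE 0" means the gate is full, so back off and retry)
BEGIN;
UPDATE proc_gate SET active = active + 1 WHERE active < 5;
COMMIT;

-- ...call the expensive function in its own transaction here...

-- transaction 2: give the slot back
BEGIN;
UPDATE proc_gate SET active = active - 1;
COMMIT;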
[ { "msg_contents": "Jeffrey W. Baker wrote:\n> On Tue, 2005-08-16 at 10:46 -0700, Roger Hand wrote:\n>> The disks are ext3 with journalling type of ordered, but this was later changed to writeback with no apparent change in speed.\n>> \n>> They're on a Dell poweredge 6650 with LSI raid card, setup as follows:\n>> 4 disks raid 10 for indexes (145GB) - sdc1\n>> 6 disks raid 10 for data (220GB) - sdd1\n>> 2 mirrored disks for logs - sdb1\n>> \n>> stripe size is 32k\n>> cache policy: cached io (am told the controller has bbu)\n>> write policy: write-back\n>> read policy: readahead\n> \n> I assume you are using Linux 2.6. \n\nOops, sorry I left that out. Nope, we're on 2.4:\n\n[root@rage-db2 ~]$ uname -a\nLinux xxx.xxx.xxx 2.4.21-27.0.2.ELsmp #1 SMP Wed Jan 12 23:35:44 EST 2005 i686 i686 i386 GNU/Linux\n\nIt's RedHat Enterprise AS3.0 Fri Nov 5 17:55:14 PST 2004\n\n> Have you considered booting your\n> machine with elevator=deadline? \n\nI just did a little Googling and see that the 2.4 kernel didn't have a decent elevator tuning system, and that was fixed in 2.6. Hmmm ....\n\nThanks for the ideas ...\n\n-Roger\n", "msg_date": "Fri, 19 Aug 2005 00:35:23 -0700", "msg_from": "\"Roger Hand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan looks OK, but slow I/O - settings advice?" } ]
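For anyone hitting the same thing on a 2.6 kernel: the elevator can be selected globally with the elevator=deadline boot parameter, and later 2.6 kernels also allow switching it per device at runtime. The device name below is just the data volume from this thread; on the 2.4 RHEL3 kernel there is no equivalent knob, so this only applies after a kernel upgrade.

cat /sys/block/sdd/queue/scheduler             # shows e.g. noop anticipatory deadline [cfq]
echo deadline > /sys/block/sdd/queue/scheduler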
[ { "msg_contents": "Hi all,\nI bet you get tired of the same ole questions over and\nover. \n\nI'm currently working on an application that will poll\nthousands of cable modems per minute and I would like\nto use PostgreSQL to maintain state between polls of\neach device. This requires a very heavy amount of\nupdates in place on a reasonably large table(100k-500k\nrows, ~7 columns mostly integers/bigint). Each row\nwill be refreshed every 15 minutes, or at least that's\nhow fast I can poll via SNMP. I hope I can tune the\nDB to keep up.\n\nThe app is threaded and will likely have well over 100\nconcurrent db connections. Temp tables for storage\naren't a preferred option since this is designed to be\na shared nothing approach and I will likely have\nseveral polling processes.\n\nHere are some of my assumptions so far . . . \n\nHUGE WAL\nVacuum hourly if not more often\n\nI'm getting 1700tx/sec from MySQL and I would REALLY\nprefer to use PG. I don't need to match the number,\njust get close.\n\nIs there a global temp table option? In memory tables\nwould be very beneficial in this case. I could just\nflush it to disk occasionally with an insert into blah\nselect from memory table.\n\nAny help or creative alternatives would be greatly\nappreciated. :)\n\n'njoy,\nMark\n\n\n-- \nWriting software requires an intelligent person,\ncreating functional art requires an artist.\n-- Unknown\n\n", "msg_date": "Fri, 19 Aug 2005 01:24:04 -0700 (PDT)", "msg_from": "Mark Cotner <[email protected]>", "msg_from_op": true, "msg_subject": "sustained update load of 1-2k/sec" }, { "msg_contents": "\nOn Aug 18, 2005, at 10:24 PM, Mark Cotner wrote:\n\n> I'm currently working on an application that will poll\n> thousands of cable modems per minute and I would like\n> to use PostgreSQL to maintain state between polls of\n> each device. This requires a very heavy amount of\n> updates in place on a reasonably large table(100k-500k\n> rows, ~7 columns mostly integers/bigint). Each row\n> will be refreshed every 15 minutes, or at least that's\n> how fast I can poll via SNMP. I hope I can tune the\n> DB to keep up.\n>\n> The app is threaded and will likely have well over 100\n> concurrent db connections. Temp tables for storage\n> aren't a preferred option since this is designed to be\n> a shared nothing approach and I will likely have\n> several polling processes.\n\nSomewhat OT, but..\n\nThe easiest way to speed that up is to use less threads. You're \nadding a whole TON of overhead with that many threads that you just \ndon't want or need. You should probably be using something event- \ndriven to solve this problem, with just a few database threads to \nstore all that state. Less is definitely more in this case. See \n<http://www.kegel.com/c10k.html> (and there's plenty of other \nliterature out there saying that event driven is an extremely good \nway to do this sort of thing).\n\nHere are some frameworks to look at for this kind of network code:\n(Python) Twisted - <http://twistedmatrix.com/>\n(Perl) POE - <http://poe.perl.org/>\n(Java) java.nio (not familiar enough with the Java thing to know \nwhether or not there's a high-level wrapper)\n(C++) ACE - <http://www.cs.wustl.edu/~schmidt/ACE.html>\n(Ruby) IO::Reactor - <http://www.deveiate.org/code/IO-Reactor.html>\n(C) libevent - <http://monkey.org/~provos/libevent/>\n\n.. 
and of course, you have select/poll/kqueue/WaitNextEvent/whatever \nthat you could use directly, if you wanted to roll your own solution, \nbut don't do that.\n\nIf you don't want to optimize the whole application, I'd at least \njust push the DB operations down to a very small number of \nconnections (*one* might even be optimal!), waiting on some kind of \nthread-safe queue for updates from the rest of the system. This way \nyou can easily batch those updates into transactions and you won't be \nputting so much unnecessary synchronization overhead into your \napplication and the database.\n\nGenerally, once you have more worker threads (or processes) than \nCPUs, you're going to get diminishing returns in a bad way, assuming \nthose threads are making good use of their time.\n\n-bob\n\n", "msg_date": "Thu, 18 Aug 2005 23:09:27 -1000", "msg_from": "Bob Ippolito <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "Excellent feedback. Thank you. Please do keep in mind I'm storing the\nresults of SNMP queries. The majority of the time each thread is in a wait\nstate, listening on a UDP port for return packet. The number of threads is\nhigh because in order to sustain poll speed I need to minimize the impact of\ntimeouts and all this waiting for return packets.\n\nI had intended to have a fallback plan which would build a thread safe queue\nfor db stuffs, but the application isn't currently architected that way.\nIt's not completely built yet so now is the time for change. I hadn't\nthought of building up a batch of queries and creating a transaction from\nthem.\n\nI've been looking into memcached as a persistent object store as well and\nhadn't seen the reactor pattern yet. Still trying to get my puny brain\naround that one.\n\nAgain, thanks for the help.\n\n'njoy,\nMark\n\n\nOn 8/19/05 5:09 AM, \"Bob Ippolito\" <[email protected]> wrote:\n\n> \n> On Aug 18, 2005, at 10:24 PM, Mark Cotner wrote:\n> \n>> I'm currently working on an application that will poll\n>> thousands of cable modems per minute and I would like\n>> to use PostgreSQL to maintain state between polls of\n>> each device. This requires a very heavy amount of\n>> updates in place on a reasonably large table(100k-500k\n>> rows, ~7 columns mostly integers/bigint). Each row\n>> will be refreshed every 15 minutes, or at least that's\n>> how fast I can poll via SNMP. I hope I can tune the\n>> DB to keep up.\n>> \n>> The app is threaded and will likely have well over 100\n>> concurrent db connections. Temp tables for storage\n>> aren't a preferred option since this is designed to be\n>> a shared nothing approach and I will likely have\n>> several polling processes.\n> \n> Somewhat OT, but..\n> \n> The easiest way to speed that up is to use less threads. You're\n> adding a whole TON of overhead with that many threads that you just\n> don't want or need. You should probably be using something event-\n> driven to solve this problem, with just a few database threads to\n> store all that state. Less is definitely more in this case. 
See\n> <http://www.kegel.com/c10k.html> (and there's plenty of other\n> literature out there saying that event driven is an extremely good\n> way to do this sort of thing).\n> \n> Here are some frameworks to look at for this kind of network code:\n> (Python) Twisted - <http://twistedmatrix.com/>\n> (Perl) POE - <http://poe.perl.org/>\n> (Java) java.nio (not familiar enough with the Java thing to know\n> whether or not there's a high-level wrapper)\n> (C++) ACE - <http://www.cs.wustl.edu/~schmidt/ACE.html>\n> (Ruby) IO::Reactor - <http://www.deveiate.org/code/IO-Reactor.html>\n> (C) libevent - <http://monkey.org/~provos/libevent/>\n> \n> .. and of course, you have select/poll/kqueue/WaitNextEvent/whatever\n> that you could use directly, if you wanted to roll your own solution,\n> but don't do that.\n> \n> If you don't want to optimize the whole application, I'd at least\n> just push the DB operations down to a very small number of\n> connections (*one* might even be optimal!), waiting on some kind of\n> thread-safe queue for updates from the rest of the system. This way\n> you can easily batch those updates into transactions and you won't be\n> putting so much unnecessary synchronization overhead into your\n> application and the database.\n> \n> Generally, once you have more worker threads (or processes) than\n> CPUs, you're going to get diminishing returns in a bad way, assuming\n> those threads are making good use of their time.\n> \n> -bob\n> \n\n\n", "msg_date": "Fri, 19 Aug 2005 06:14:54 -0400", "msg_from": "Mark Cotner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "\nOn Aug 19, 2005, at 12:14 AM, Mark Cotner wrote:\n\n> Excellent feedback. Thank you. Please do keep in mind I'm storing \n> the\n> results of SNMP queries. The majority of the time each thread is \n> in a wait\n> state, listening on a UDP port for return packet. The number of \n> threads is\n> high because in order to sustain poll speed I need to minimize the \n> impact of\n> timeouts and all this waiting for return packets.\n\nAsynchronous IO via select/poll/etc. basically says: \"given these 100 \nsockets, wake me up when any of them has something to tell me, or \nwake me up anyway in N milliseconds\". From one thread, you can \nusually deal with thousands of connections without breaking a sweat, \nwhere with thread-per-connection you have so much overhead just for \nthe threads that you probably run out of RAM before your network is \nthrottled. The reactor pattern basically just abstracts this a bit \nso that you worry about what do to when the sockets have something to \nsay, and also allow you to schedule timed events, rather than having \nto worry about how to implement that correctly *and* write your \napplication.\n\nWith 100 threads you are basically invoking a special-case of the \nsame mechanism that only looks at one socket, but this makes for 100 \ndifferent data structures that end up in both userspace and kernel \nspace, plus the thread stacks (which can easily be a few megs each) \nand context switching when any of them wakes up.. You're throwing a \nlot of RAM and CPU cycles out the window by using this design.\n\nAlso, preemptive threads are hard.\n\n> I had intended to have a fallback plan which would build a thread \n> safe queue\n> for db stuffs, but the application isn't currently architected that \n> way.\n> It's not completely built yet so now is the time for change. 
I hadn't\n> thought of building up a batch of queries and creating a \n> transaction from\n> them.\n\nIt should be *really* easy to just swap out the implementation of \nyour \"change this record\" function with one that simply puts its \narguments on a queue, with another thread that gets them from the \nqueue and actually does the work.\n\n> I've been looking into memcached as a persistent object store as \n> well and\n> hadn't seen the reactor pattern yet. Still trying to get my puny \n> brain\n> around that one.\n\nmemcached is RAM based, it's not persistent at all... unless you are \nsure all of your nodes will be up at all times and will never go \ndown. IIRC, it also just starts throwing away data once you hit its \nsize limit. If course, this isn't really any different than MySQL's \nMyISAM tables if you hit the row limit, but I think that memcached \nmight not even give you an error when this happens. Also, memcached \nis just key/value pairs over a network, not much of a database going \non there.\n\nIf you can fit all this data in RAM and you don't care so much about \nthe integrity, you might not benefit much from a RDBMS at all. \nHowever, I don't really know what you're doing with the data once you \nhave it so I might be very wrong here...\n\n-bob\n\n>\n> Again, thanks for the help.\n>\n> 'njoy,\n> Mark\n>\n>\n> On 8/19/05 5:09 AM, \"Bob Ippolito\" <[email protected]> wrote:\n>\n>\n>>\n>> On Aug 18, 2005, at 10:24 PM, Mark Cotner wrote:\n>>\n>>\n>>> I'm currently working on an application that will poll\n>>> thousands of cable modems per minute and I would like\n>>> to use PostgreSQL to maintain state between polls of\n>>> each device. This requires a very heavy amount of\n>>> updates in place on a reasonably large table(100k-500k\n>>> rows, ~7 columns mostly integers/bigint). Each row\n>>> will be refreshed every 15 minutes, or at least that's\n>>> how fast I can poll via SNMP. I hope I can tune the\n>>> DB to keep up.\n>>>\n>>> The app is threaded and will likely have well over 100\n>>> concurrent db connections. Temp tables for storage\n>>> aren't a preferred option since this is designed to be\n>>> a shared nothing approach and I will likely have\n>>> several polling processes.\n>>>\n>>\n>> Somewhat OT, but..\n>>\n>> The easiest way to speed that up is to use less threads. You're\n>> adding a whole TON of overhead with that many threads that you just\n>> don't want or need. You should probably be using something event-\n>> driven to solve this problem, with just a few database threads to\n>> store all that state. Less is definitely more in this case. See\n>> <http://www.kegel.com/c10k.html> (and there's plenty of other\n>> literature out there saying that event driven is an extremely good\n>> way to do this sort of thing).\n>>\n>> Here are some frameworks to look at for this kind of network code:\n>> (Python) Twisted - <http://twistedmatrix.com/>\n>> (Perl) POE - <http://poe.perl.org/>\n>> (Java) java.nio (not familiar enough with the Java thing to know\n>> whether or not there's a high-level wrapper)\n>> (C++) ACE - <http://www.cs.wustl.edu/~schmidt/ACE.html>\n>> (Ruby) IO::Reactor - <http://www.deveiate.org/code/IO-Reactor.html>\n>> (C) libevent - <http://monkey.org/~provos/libevent/>\n>>\n>> .. 
and of course, you have select/poll/kqueue/WaitNextEvent/whatever\n>> that you could use directly, if you wanted to roll your own solution,\n>> but don't do that.\n>>\n>> If you don't want to optimize the whole application, I'd at least\n>> just push the DB operations down to a very small number of\n>> connections (*one* might even be optimal!), waiting on some kind of\n>> thread-safe queue for updates from the rest of the system. This way\n>> you can easily batch those updates into transactions and you won't be\n>> putting so much unnecessary synchronization overhead into your\n>> application and the database.\n>>\n>> Generally, once you have more worker threads (or processes) than\n>> CPUs, you're going to get diminishing returns in a bad way, assuming\n>> those threads are making good use of their time.\n>>\n>> -bob\n>>\n>>\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n", "msg_date": "Fri, 19 Aug 2005 01:28:07 -1000", "msg_from": "Bob Ippolito <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "I have managed tx speeds that high from postgresql going even as high\nas 2500/sec for small tables, but it does require a good RAID\ncontroler card (yes I'm even running with fsync on). I'm using 3ware\n9500S-8MI with Raptor drives in multiple RAID 10s. The box wasn't too\n$$$ at just around $7k. I have two independant controlers on two\nindependant PCI buses to give max throughput. on with a 6 drive RAID\n10 and the other with two 4 drive RAID 10s.\n\nAlex Turner\nNetEconomist\n\nOn 8/19/05, Mark Cotner <[email protected]> wrote:\n> Hi all,\n> I bet you get tired of the same ole questions over and\n> over.\n> \n> I'm currently working on an application that will poll\n> thousands of cable modems per minute and I would like\n> to use PostgreSQL to maintain state between polls of\n> each device. This requires a very heavy amount of\n> updates in place on a reasonably large table(100k-500k\n> rows, ~7 columns mostly integers/bigint). Each row\n> will be refreshed every 15 minutes, or at least that's\n> how fast I can poll via SNMP. I hope I can tune the\n> DB to keep up.\n> \n> The app is threaded and will likely have well over 100\n> concurrent db connections. Temp tables for storage\n> aren't a preferred option since this is designed to be\n> a shared nothing approach and I will likely have\n> several polling processes.\n> \n> Here are some of my assumptions so far . . .\n> \n> HUGE WAL\n> Vacuum hourly if not more often\n> \n> I'm getting 1700tx/sec from MySQL and I would REALLY\n> prefer to use PG. I don't need to match the number,\n> just get close.\n> \n> Is there a global temp table option? In memory tables\n> would be very beneficial in this case. I could just\n> flush it to disk occasionally with an insert into blah\n> select from memory table.\n> \n> Any help or creative alternatives would be greatly\n> appreciated. 
:)\n> \n> 'njoy,\n> Mark\n> \n> \n> --\n> Writing software requires an intelligent person,\n> creating functional art requires an artist.\n> -- Unknown\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Fri, 19 Aug 2005 08:40:39 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "Bob Ippolito <[email protected]> writes:\n> If you don't want to optimize the whole application, I'd at least \n> just push the DB operations down to a very small number of \n> connections (*one* might even be optimal!), waiting on some kind of \n> thread-safe queue for updates from the rest of the system.\n\nWhile I agree that hundreds of threads seems like overkill, I think the\nabove advice might be going too far in the other direction. The problem\nwith single-threaded operation is that any delay affects the whole\nsystem --- eg, if you're blocked waiting for disk I/O, the CPU doesn't\nget anything done either. You want enough DB connections doing things\nin parallel to make sure that there's always something else useful to do\nfor each major component. This is particularly important for Postgres,\nwhich doesn't do any internal query parallelization (not that it would\nhelp much anyway for the sorts of trivial queries you are worried about).\nIf you have, say, a 4-way CPU you want at least 4 active connections to\nmake good use of the CPUs.\n\nI'd suggest trying to build the system so that it uses a dozen or two\nactive database connections. If that doesn't match up to the number of\npolling activities you want to have in flight at any instant, then you\ncan do something like what Bob suggested on the client side to bridge\nthe gap.\n\nAs far as the question \"can PG do 1-2k xact/sec\", the answer is \"yes\nif you throw enough hardware at it\". Spending enough money on the\ndisk subsystem is the key ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Aug 2005 09:35:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec " }, { "msg_contents": "Tom Lane wrote:\n\n>Bob Ippolito <[email protected]> writes:\n> \n>\n>>If you don't want to optimize the whole application, I'd at least \n>>just push the DB operations down to a very small number of \n>>connections (*one* might even be optimal!), waiting on some kind of \n>>thread-safe queue for updates from the rest of the system.\n>> \n>>\n>\n>While I agree that hundreds of threads seems like overkill, I think the\n>above advice might be going too far in the other direction. The problem\n>with single-threaded operation is that any delay affects the whole\n>system --- eg, if you're blocked waiting for disk I/O, the CPU doesn't\n>get anything done either. You want enough DB connections doing things\n>in parallel to make sure that there's always something else useful to do\n>for each major component. This is particularly important for Postgres,\n>which doesn't do any internal query parallelization (not that it would\n>help much anyway for the sorts of trivial queries you are worried about).\n>If you have, say, a 4-way CPU you want at least 4 active connections to\n>make good use of the CPUs.\n>\n>I'd suggest trying to build the system so that it uses a dozen or two\n>active database connections. 
If that doesn't match up to the number of\n>polling activities you want to have in flight at any instant, then you\n>can do something like what Bob suggested on the client side to bridge\n>the gap.\n>\n>As far as the question \"can PG do 1-2k xact/sec\", the answer is \"yes\n>if you throw enough hardware at it\". Spending enough money on the\n>disk subsystem is the key ...\n> \n>\nThe 1-2k xact/sec for MySQL seems suspicious, sounds very much like \nwrite-back cached, not write-through, esp. considering that heavy \nconcurrent write access isn't said to be MySQLs strength...\n\nI wonder if preserving the database after a fatal crash is really \nnecessary, since the data stored sounds quite volatile; in this case, \nfsync=false might be sufficient.\n\nRegards,\nAndreas\n\n", "msg_date": "Fri, 19 Aug 2005 15:58:22 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "Andreas Pflug <[email protected]> writes:\n> Tom Lane wrote:\n>> As far as the question \"can PG do 1-2k xact/sec\", the answer is \"yes\n>> if you throw enough hardware at it\". Spending enough money on the\n>> disk subsystem is the key ...\n>> \n> The 1-2k xact/sec for MySQL seems suspicious, sounds very much like \n> write-back cached, not write-through, esp. considering that heavy \n> concurrent write access isn't said to be MySQLs strength...\n\n> I wonder if preserving the database after a fatal crash is really \n> necessary, since the data stored sounds quite volatile; in this case, \n> fsync=false might be sufficient.\n\nYeah, that's something to think about. If you do need full transaction\nsafety, then you *must* have a decent battery-backed-write-cache setup,\nelse your transaction commit rate will be limited by disk rotation\nspeed --- for instance, a single connection can commit at most 250 xacts\nper second if the WAL log is on a 15000RPM drive. (You can improve this\nto the extent that you can spread activity across multiple connections,\nbut I'm not sure you can expect to reliably have 8 or more connections\nready to commit each time the disk goes 'round.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Aug 2005 10:09:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec " }, { "msg_contents": "Alex mentions a nice setup, but I'm pretty sure I know how to beat \nthat IO subsystems HW's performance by at least 1.5x or 2x. Possibly \nmore. (No, I do NOT work for any vendor I'm about to discuss.)\n\nStart by replacing the WD Raptors with Maxtor Atlas 15K II's.\nAt 5.5ms average access, 97.4MB/s outer track throughput, 85.9MB/s \naverage, and 74.4 MB/s inner track throughput, they have the best \nperformance characteristics of any tested shipping HDs I know \nof. (Supposedly the new SAS versions will _sustain_ ~98MB/s, but \nI'll believe that only if I see it under independent testing).\nIn comparison, the numbers on the WD740GD are 8.1ms average access, \n71.8, 62.9, and 53.9 MB/s outer, average and inner track throughputs \nrespectively.\n\nBe prepared to use as many of them as possible (read: as many you can \nafford) if you want to maximize transaction rates, particularly for \nsmall transactions like this application seems to be mentioning.\n\nNext, use a better RAID card. 
The TOL enterprise stuff (Xyratex, \nEngino, Dot-hill) is probably too expensive, but in the commodity \nmarket benchmarks indicate that that Areca's 1GB buffer RAID cards \ncurrently outperform all the other commodity RAID stuff.\n\n9 Atlas II's per card in a RAID 5 set, or 16 per card in a RAID 10 \nset, should max the RAID card's throughput and come very close to, if \nnot attaining, the real world peak bandwidth of the 64b 133MHz PCI-X \nbus they are plugged into. Say somewhere in the 700-800MB/s range.\n\nRepeat the above for as many independent PCI-X buses as you have for \na very fast commodity RAID IO subsystem.\n\nTwo such configured cards used in the dame manner as mentioned by \nAlex should easily attain 1.5x - 2x the transaction numbers mentioned \nby Alex unless there's a bottleneck somewhere else in the system design.\n\nHope this helps,\nRon Peacetree\n\nAt 08:40 AM 8/19/2005, Alex Turner wrote:\n>I have managed tx speeds that high from postgresql going even as high\n>as 2500/sec for small tables, but it does require a good RAID\n>controler card (yes I'm even running with fsync on). I'm using 3ware\n>9500S-8MI with Raptor drives in multiple RAID 10s. The box wasn't too\n>$$$ at just around $7k. I have two independant controlers on two\n>independant PCI buses to give max throughput. on with a 6 drive RAID\n>10 and the other with two 4 drive RAID 10s.\n>\n>Alex Turner\n>NetEconomist\n>\n>On 8/19/05, Mark Cotner <[email protected]> wrote:\n> > Hi all,\n> > I bet you get tired of the same ole questions over and\n> > over.\n> >\n> > I'm currently working on an application that will poll\n> > thousands of cable modems per minute and I would like\n> > to use PostgreSQL to maintain state between polls of\n> > each device. This requires a very heavy amount of\n> > updates in place on a reasonably large table(100k-500k\n> > rows, ~7 columns mostly integers/bigint). Each row\n> > will be refreshed every 15 minutes, or at least that's\n> > how fast I can poll via SNMP. I hope I can tune the\n> > DB to keep up.\n> >\n> > The app is threaded and will likely have well over 100\n> > concurrent db connections. Temp tables for storage\n> > aren't a preferred option since this is designed to be\n> > a shared nothing approach and I will likely have\n> > several polling processes.\n> >\n> > Here are some of my assumptions so far . . .\n> >\n> > HUGE WAL\n> > Vacuum hourly if not more often\n> >\n> > I'm getting 1700tx/sec from MySQL and I would REALLY\n> > prefer to use PG. I don't need to match the number,\n> > just get close.\n> >\n> > Is there a global temp table option? In memory tables\n> > would be very beneficial in this case. I could just\n> > flush it to disk occasionally with an insert into blah\n> > select from memory table.\n> >\n> > Any help or creative alternatives would be greatly\n> > appreciated. 
:)\n> >\n> > 'njoy,\n> > Mark\n> >\n> >\n> > --\n> > Writing software requires an intelligent person,\n> > creating functional art requires an artist.\n> > -- Unknown\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n\n\n\n", "msg_date": "Fri, 19 Aug 2005 10:54:57 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "At 09:58 AM 8/19/2005, Andreas Pflug wrote:\n\n>The 1-2k xact/sec for MySQL seems suspicious, sounds very much like \n>write-back cached, not write-through, esp. considering that heavy \n>concurrent write access isn't said to be MySQLs strength...\n\nDon't be suspicious.\n\nI haven't seen the code under discussion, but I have seen mySQL \neasily achieve these kinds of numbers using the myISAM storage engine \nin write-through cache\nmode.\n\nmyISAM can be =FAST=. Particularly when decent HW is thrown at it.\n\nRon\n\n\n", "msg_date": "Fri, 19 Aug 2005 11:03:54 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "On Fri, 2005-08-19 at 10:54 -0400, Ron wrote:\n> Maxtor Atlas 15K II's.\n\n> Areca's 1GB buffer RAID cards \n\nThe former are SCSI disks and the latter is an SATA controller. The\ncombination would have a transaction rate of approximately 0.\n\nI can vouch for the Areca controllers, however. You can certainly\nachieve pgbench transaction rates in the hundreds per second even with\nonly 5 7200RPM disks and 128MB cache.\n\nDon't forget to buy the battery.\n\n-jwb\n", "msg_date": "Fri, 19 Aug 2005 09:34:44 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "On 8/19/05 1:24 AM, \"Mark Cotner\" <[email protected]> wrote:\n> I'm currently working on an application that will poll\n> thousands of cable modems per minute and I would like\n> to use PostgreSQL to maintain state between polls of\n> each device. This requires a very heavy amount of\n> updates in place on a reasonably large table(100k-500k\n> rows, ~7 columns mostly integers/bigint). Each row\n> will be refreshed every 15 minutes, or at least that's\n> how fast I can poll via SNMP. I hope I can tune the\n> DB to keep up.\n> \n> The app is threaded and will likely have well over 100\n> concurrent db connections. Temp tables for storage\n> aren't a preferred option since this is designed to be\n> a shared nothing approach and I will likely have\n> several polling processes.\n\n\nMark,\n\nWe have PostgreSQL databases on modest hardware doing exactly what you are\nattempting to (massive scalable SNMP monitoring system). The monitoring\nvolume for a single database server appears to exceed what you are trying to\ndo by a few orders of magnitude with no scaling or performance issues, so I\ncan state without reservation that PostgreSQL can easily handle your\napplication in theory.\n\nHowever, that is predicated on having a well-architected system that\nminimizes resource contention and unnecessary blocking, and based on your\ndescription you may be going about it a bit wrong.\n\nThe biggest obvious bottleneck is the use of threads and massive\nprocess-level parallelization. 
As others have pointed out, async queues are\nyour friends, as is partitioning the workload horizontally rather than\nvertically through the app stack. A very scalable high-throughput engine\nfor SNMP polling only requires two or three threads handling different parts\nof the workload to saturate the network, and by choosing what each thread\ndoes carefully you can all but eliminate blocking when there is work to be\ndone.\n\nWe only use a single database connection to insert all the data into\nPostgreSQL, and that process/thread receives its data from a work queue.\nDepending on how you design your system, you can batch many records in your\nqueue as a single transaction. In our case, we also use very few updates,\nmostly just inserts, which is probably advantageous in terms of throughput\nif you have the disk for it. The insert I/O load is easily handled, and our\ndisk array is a modest 10k SCSI rig. The only thing that really hammers the\nserver is when multiple reporting processes are running, which frequently\ntouch several million rows each (the database is much larger than the system\nmemory), and even this is manageable with clever database design.\n\n\nIn short, what you are trying to do is easily doable on PostgreSQL in\ntheory. However, restrictions on design choices may pose significant\nhurdles. We did not start out with an ideal system either; it took a fair\namount of re-engineering to solve all the bottlenecks and problems that pop\nup.\n\nGood luck,\n\nJ. Andrew Rogers\[email protected]\n\n\n", "msg_date": "Fri, 19 Aug 2005 10:12:42 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "At 12:34 PM 8/19/2005, Jeffrey W. Baker wrote:\n>On Fri, 2005-08-19 at 10:54 -0400, Ron wrote:\n> > Maxtor Atlas 15K II's.\n>\n> > Areca's 1GB buffer RAID cards\n>\n>The former are SCSI disks and the latter is an SATA controller. The\n>combination would have a transaction rate of approximately 0.\n\nYou are evidently thinking of the Areca ARC-11xx controllers (and you \nare certainly right for that HW combination ;-) ). Those are not the \nonly product Areca makes that can be upgraded to a 1GB cache.\n\nUntil SAS infrastructure is good enough, U320 SCSI and FC HD's remain \nthe top performing HD's realistically available. At the most \nfundamental, your DBMS is only as good as your HD IO subsystem, and \nyour HD IO subsystem is only as good as your HDs. As others have \nsaid here, skimping on your HDs is _not_ a good design choice where \nDBMSs are concerned.\n\nAs an aside, the Atlas 15K II's are now available in SAS:\nhttp://www.maxtor.com/portal/site/Maxtor/menuitem.ba88f6d7cf664718376049b291346068/?channelpath=/en_us/Products/SCSI%20Hard%20Drives/Atlas%2015K%20Family/Atlas%2015K%20II%20SAS\n\nI haven't seen independent benches on them, so I explicitly \nreferenced the U320 Atlas 15K II's known performance numbers \ninstead. As I said, Maxtor is claiming even better for the SAS \nversion of the Atlas 15K II.\n\nNone of the SAS <-> PCI-X or PCI-E RAID cards I know of are ready for \nmass market yet, although a few are in beta..\n\n\n>I can vouch for the Areca controllers, however. 
You can certainly\n>achieve pgbench transaction rates in the hundreds per second even with\n>only 5 7200RPM disks and 128MB cache.\n>\n>Don't forget to buy the battery.\n\nAgreed.\n\nHope this is helpful,\nRon Peacetree\n\n\n", "msg_date": "Fri, 19 Aug 2005 13:57:46 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "\n> While I agree that hundreds of threads seems like overkill, I think the\n> above advice might be going too far in the other direction. The problem\n> with single-threaded operation is that any delay affects the whole\n> system --- eg, if you're blocked waiting for disk I/O, the CPU doesn't\n\n\tYou use UDP which is a connectionless protocol... then why use threads ?\n\n\tI'd advise this :\n\n\tUse asynchronous network code (one thread) to do your network stuff. This \nwill lower the CPU used by this code immensely.\n\tEvery minute, dump a file contianing everything to insert into the table.\n\tUse another thread to COPY it into the DB, in a temporary table if you \nwish, and then INSERT INTO ... SELECT.\n\tThis should be well adapted to your requirements.\n", "msg_date": "Fri, 19 Aug 2005 20:11:54 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec " }, { "msg_contents": "Don't forget that Ultra 320 is the speed of the bus, not each drive. \nNo matter how many honking 15k disks you put on a 320MB bus, you can\nonly get 320MB/sec! and have so many outstanding IO/s on the bus.\n\nNot so with SATA! Each drive is on it's own bus, and you are only\nlimited by the speed of your PCI-X Bus, which can be as high as\n800MB/sec at 133Mhz/64bit.\n\nIt's cheap and it's fast - all you have to do is pay for the\nenclosure, which can be a bit pricey, but there are some nice 24bay\nand even 40bay enclosures out there for SATA.\n\nYes a 15k RPM drive will give you better seek time and better peak\nthrough put, but put them all on a single U320 bus and you won't see\nmuch return past a stripe size of 3 or 4.\n\nIf it's raw transactions per second data warehouse style, it's all\nabout the xlog baby which is sequential writes, and all about large\nblock reads, which is sequential reads.\n\nAlex Turner\nNetEconomist\nP.S. Sorry if i'm a bit punchy, I've been up since yestarday with\nserver upgrade nightmares that continue ;)\n\nOn 8/19/05, Ron <[email protected]> wrote:\n> Alex mentions a nice setup, but I'm pretty sure I know how to beat\n> that IO subsystems HW's performance by at least 1.5x or 2x. Possibly\n> more. (No, I do NOT work for any vendor I'm about to discuss.)\n> \n> Start by replacing the WD Raptors with Maxtor Atlas 15K II's.\n> At 5.5ms average access, 97.4MB/s outer track throughput, 85.9MB/s\n> average, and 74.4 MB/s inner track throughput, they have the best\n> performance characteristics of any tested shipping HDs I know\n> of. (Supposedly the new SAS versions will _sustain_ ~98MB/s, but\n> I'll believe that only if I see it under independent testing).\n> In comparison, the numbers on the WD740GD are 8.1ms average access,\n> 71.8, 62.9, and 53.9 MB/s outer, average and inner track throughputs\n> respectively.\n> \n> Be prepared to use as many of them as possible (read: as many you can\n> afford) if you want to maximize transaction rates, particularly for\n> small transactions like this application seems to be mentioning.\n> \n> Next, use a better RAID card. 
The TOL enterprise stuff (Xyratex,\n> Engino, Dot-hill) is probably too expensive, but in the commodity\n> market benchmarks indicate that that Areca's 1GB buffer RAID cards\n> currently outperform all the other commodity RAID stuff.\n> \n> 9 Atlas II's per card in a RAID 5 set, or 16 per card in a RAID 10\n> set, should max the RAID card's throughput and come very close to, if\n> not attaining, the real world peak bandwidth of the 64b 133MHz PCI-X\n> bus they are plugged into. Say somewhere in the 700-800MB/s range.\n> \n> Repeat the above for as many independent PCI-X buses as you have for\n> a very fast commodity RAID IO subsystem.\n> \n> Two such configured cards used in the dame manner as mentioned by\n> Alex should easily attain 1.5x - 2x the transaction numbers mentioned\n> by Alex unless there's a bottleneck somewhere else in the system design.\n> \n> Hope this helps,\n> Ron Peacetree\n> \n> At 08:40 AM 8/19/2005, Alex Turner wrote:\n> >I have managed tx speeds that high from postgresql going even as high\n> >as 2500/sec for small tables, but it does require a good RAID\n> >controler card (yes I'm even running with fsync on). I'm using 3ware\n> >9500S-8MI with Raptor drives in multiple RAID 10s. The box wasn't too\n> >$$$ at just around $7k. I have two independant controlers on two\n> >independant PCI buses to give max throughput. on with a 6 drive RAID\n> >10 and the other with two 4 drive RAID 10s.\n> >\n> >Alex Turner\n> >NetEconomist\n> >\n> >On 8/19/05, Mark Cotner <[email protected]> wrote:\n> > > Hi all,\n> > > I bet you get tired of the same ole questions over and\n> > > over.\n> > >\n> > > I'm currently working on an application that will poll\n> > > thousands of cable modems per minute and I would like\n> > > to use PostgreSQL to maintain state between polls of\n> > > each device. This requires a very heavy amount of\n> > > updates in place on a reasonably large table(100k-500k\n> > > rows, ~7 columns mostly integers/bigint). Each row\n> > > will be refreshed every 15 minutes, or at least that's\n> > > how fast I can poll via SNMP. I hope I can tune the\n> > > DB to keep up.\n> > >\n> > > The app is threaded and will likely have well over 100\n> > > concurrent db connections. Temp tables for storage\n> > > aren't a preferred option since this is designed to be\n> > > a shared nothing approach and I will likely have\n> > > several polling processes.\n> > >\n> > > Here are some of my assumptions so far . . .\n> > >\n> > > HUGE WAL\n> > > Vacuum hourly if not more often\n> > >\n> > > I'm getting 1700tx/sec from MySQL and I would REALLY\n> > > prefer to use PG. I don't need to match the number,\n> > > just get close.\n> > >\n> > > Is there a global temp table option? In memory tables\n> > > would be very beneficial in this case. I could just\n> > > flush it to disk occasionally with an insert into blah\n> > > select from memory table.\n> > >\n> > > Any help or creative alternatives would be greatly\n> > > appreciated. 
:)\n> > >\n> > > 'njoy,\n> > > Mark\n> > >\n> > >\n> > > --\n> > > Writing software requires an intelligent person,\n> > > creating functional art requires an artist.\n> > > -- Unknown\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/docs/faq\n> > >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: Don't 'kill -9' the postmaster\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Fri, 19 Aug 2005 15:31:58 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": "At 03:31 PM 8/19/2005, Alex Turner wrote:\n>Don't forget that Ultra 320 is the speed of the bus, not each drive.\n>No matter how many honking 15k disks you put on a 320MB bus, you can\n>only get 320MB/sec! and have so many outstanding IO/s on the bus.\n\nOf course. This is exactly why multi-channel SCSI and multichannel \nFibre Channel cards exist; and why external RAID enclosures usually \nhave multiple such cards in them...\n\nEven moderately acceptable U320 SCSI cards are dual channel at this \npoint (think Adaptec dual channel AHAxxxx's), and Quad channel ones \nare just as common. The Quads will, of course, saturate a 64b 133MHz \nPCI-X bus. _IF_ the chipset on them can keep up.\n\nThe current kings of RAID card performance are all Fibre Channel \nbased, and all the ones I know of are theoretically capable of \nsaturating a 64b 133MHz PCI-X bus. Again, _IF_ the chipset on them \ncan keep up.\n\nMost commodity RAID card have neither adequate CPU nor enough \nbuffer. Regardless of the peripheral IO technology they use.\n\n\n>Not so with SATA! Each drive is on it's own bus, and you are only\n>limited by the speed of your PCI-X Bus, which can be as high as\n>800MB/sec at 133Mhz/64bit.\n\nThat's the Theory anyway, and latency should be lower as well. OTOH, \nas my wife likes to say \"In theory, Theory and Practice are the \nsame. In practice, they almost never are.\"\n\nYou are only getting the performance you mention as long as your card \ncan keep up with multiplexing N IO streams, crunching RAID 5 XORs \n(assuming you are using RAID 5), etc, etc. As I'm sure you know, \n\"The chain is only as strong as its weakest link.\".\n\nMost commodity SATA RAID cards brag about being able to pump 300MB/s \n(they were all over LW SF bragging about this!?), which in this \ncontext is woefully unimpressive. Sigh.\n\nI'm impressed with the Areca cards because they usually have CPUs \nthat actually can come close to pushing the theoretical IO limit of \nthe bus they are plugged into; and they can be upgraded to (barely) \nacceptable buffer amounts <rant>(come on, manufacturers! 4GB of DDR \nPC3200 is only -2- DIMMs, and shortly that will be enough to hold 8GB \nof DDR PC3200. Give us more buffer!)</rant>.\n\n\n>It's cheap and it's fast - all you have to do is pay for the \n>enclosure, which can be a bit pricey, but there are some nice 24bay \n>and even 40bay enclosures out there for SATA.\n\nI've even seen 48 bay ones. However, good enclosures, particularly \nfor larger numbers of HDs, are examples of non-trivial engineering \nand priced accordingly. 
Too many times I see people buy \"bargain\" \nenclosures and set themselves and their organizations up for some \n_very_ unpleasant times that could easily have been avoided by being \ncareful to buy quality products. \"Pay when you buy or pay much more later.\"\n\n\n>Yes a 15k RPM drive will give you better seek time and better peak\n>through put, but put them all on a single U320 bus and you won't see\n>much return past a stripe size of 3 or 4\n\nAgreed. Same holds for 2Gbps FC. Haven't tested 4Gbps FC personally \nyet, but I'm told the limit is higher in the manner you'd expect.\n\n\n>If it's raw transactions per second data warehouse style, it's all\n>about the xlog baby which is sequential writes, and all about large\n>block reads, which is sequential reads.\n>\n>Alex Turner\n>NetEconomist\n>P.S. Sorry if i'm a bit punchy, I've been up since yestarday with\n>server upgrade nightmares that continue ;)\n\nMy condolences and sympathies. I've definitely been there and done that.\n\nRon Peacetree\n\n\n", "msg_date": "Fri, 19 Aug 2005 17:19:00 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sustained update load of 1-2k/sec" }, { "msg_contents": ":) Most of the ppl on this list are systems programmers, however I am not.\nThe tool of choice for this app is Ruby and the libraries don't support\nasync SNMP at the moment.\n\nI've done a good deal of async snmp and the libraries that actually pull it\noff generally aren't that good(Net-SNMP and Perl's Net::SNMP). Granted, UDP\nis connectionless to an extent, but you still have to send the PDU, and bind\nto the return socket and wait. If you batch the outgoing PDUs then you can\nget away with sending them out synchronously and listening on the returning\nsocket synchronously, but it would require that your libraries support this.\nI understand the concepts well enough, maybe I'll put together a patch. It\nwould be much lower overhead than managing all those threads. Looks like\nit's gonna be a fun weekend.\n\nThanks again for all the great feedback.\n\n'njoy,\nMark\n\n\nOn 8/19/05 2:11 PM, \"PFC\" <[email protected]> wrote:\n\n> \n>> While I agree that hundreds of threads seems like overkill, I think the\n>> above advice might be going too far in the other direction. The problem\n>> with single-threaded operation is that any delay affects the whole\n>> system --- eg, if you're blocked waiting for disk I/O, the CPU doesn't\n> \n> You use UDP which is a connectionless protocol... then why use threads ?\n> \n> I'd advise this :\n> \n> Use asynchronous network code (one thread) to do your network stuff. This\n> will lower the CPU used by this code immensely.\n> Every minute, dump a file contianing everything to insert into the table.\n> Use another thread to COPY it into the DB, in a temporary table if you\n> wish, and then INSERT INTO ... 
SELECT.\n> This should be well adapted to your requirements.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Fri, 19 Aug 2005 19:02:35 -0400", "msg_from": "Mark Cotner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sustained update load of 1-2k/sec " }, { "msg_contents": "Thanks again everyone for the excellent suggestions.\n\nI looked into IO::Reactor, but after a few hours of fiddling decided I was\ngetting the kind of performance I wanted from using a slightly more than\nmodest number of threads and decided(due to dev timelines) to come back to\npatching the SNMP libraries for Ruby to do async using Reactor later.\n\nI am unfortunately stuck with updates, but I think(with you're suggestions)\nI've made it work for me.\n\nMySQL = 1500 updates/sec\nPostgreSQL w/10k tx per commit using single thread = 1400 updates/sec\nGiven the update heavy nature of this table I felt it was necessary to test\nduring a vacuum. Turns out the hit wasn't that bad . . .\nPostgreSQL w/10k tx per commit using a single thread during a vacuum = 1300\nupdates/sec\n\n100-200 updates/sec is a small price to pay for mature stored procedures,\nmore stored procedure language options, acid compliance, mvcc, very few if\nany corrupt tables(get about 2 a week from MySQL on the 40 DBs I manage),\nmore crash resistant db(crash about once a month on one of my 40 MySQL dbs),\nand replication that actually works for more than a day before quitting for\nno apparent reason ;) [/flame off]\n\nFor those of you with Cox Communications cable modems look forward to better\ncustomer service and cable plant management. :)\n\nAnd if anyone's curious here's the app I'm rebuilding/updating\nhttp://www.mysql.com/customers/customer.php?id=16\nWe won runner up behind Saabre airline reservation system for MySQL app of\nthe year. Needless to say they weren't too happy when they heard we might\nbe switching DBs. \n\n'njoy,\nMark\n\nOn 8/19/05 1:12 PM, \"J. Andrew Rogers\" <[email protected]> wrote:\n\n> On 8/19/05 1:24 AM, \"Mark Cotner\" <[email protected]> wrote:\n>> I'm currently working on an application that will poll\n>> thousands of cable modems per minute and I would like\n>> to use PostgreSQL to maintain state between polls of\n>> each device. This requires a very heavy amount of\n>> updates in place on a reasonably large table(100k-500k\n>> rows, ~7 columns mostly integers/bigint). Each row\n>> will be refreshed every 15 minutes, or at least that's\n>> how fast I can poll via SNMP. I hope I can tune the\n>> DB to keep up.\n>> \n>> The app is threaded and will likely have well over 100\n>> concurrent db connections. Temp tables for storage\n>> aren't a preferred option since this is designed to be\n>> a shared nothing approach and I will likely have\n>> several polling processes.\n> \n> \n> Mark,\n> \n> We have PostgreSQL databases on modest hardware doing exactly what you are\n> attempting to (massive scalable SNMP monitoring system). 
The monitoring\n> volume for a single database server appears to exceed what you are trying to\n> do by a few orders of magnitude with no scaling or performance issues, so I\n> can state without reservation that PostgreSQL can easily handle your\n> application in theory.\n> \n> However, that is predicated on having a well-architected system that\n> minimizes resource contention and unnecessary blocking, and based on your\n> description you may be going about it a bit wrong.\n> \n> The biggest obvious bottleneck is the use of threads and massive\n> process-level parallelization. As others have pointed out, async queues are\n> your friends, as is partitioning the workload horizontally rather than\n> vertically through the app stack. A very scalable high-throughput engine\n> for SNMP polling only requires two or three threads handling different parts\n> of the workload to saturate the network, and by choosing what each thread\n> does carefully you can all but eliminate blocking when there is work to be\n> done.\n> \n> We only use a single database connection to insert all the data into\n> PostgreSQL, and that process/thread receives its data from a work queue.\n> Depending on how you design your system, you can batch many records in your\n> queue as a single transaction. In our case, we also use very few updates,\n> mostly just inserts, which is probably advantageous in terms of throughput\n> if you have the disk for it. The insert I/O load is easily handled, and our\n> disk array is a modest 10k SCSI rig. The only thing that really hammers the\n> server is when multiple reporting processes are running, which frequently\n> touch several million rows each (the database is much larger than the system\n> memory), and even this is manageable with clever database design.\n> \n> \n> In short, what you are trying to do is easily doable on PostgreSQL in\n> theory. However, restrictions on design choices may pose significant\n> hurdles. We did not start out with an ideal system either; it took a fair\n> amount of re-engineering to solve all the bottlenecks and problems that pop\n> up.\n> \n> Good luck,\n> \n> J. Andrew Rogers\n> [email protected]\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n\n", "msg_date": "Mon, 22 Aug 2005 05:29:12 -0400", "msg_from": "Mark Cotner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sustained update load of 1-2k/sec" } ]
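A minimal SQL sketch of the batching pattern recommended in this thread (PFC's suggestion of COPY into a temporary table followed by one set-oriented statement, all inside a single transaction). Table, column and file names are purely illustrative, and since this workload is mostly updates the final step uses an UPDATE ... FROM variant of PFC's INSERT INTO ... SELECT:

BEGIN;

CREATE TEMP TABLE poll_staging (
    modem_id   bigint,
    polled_at  timestamptz,
    in_octets  bigint,
    out_octets bigint
) ON COMMIT DROP;

-- COPY from a server-side file needs superuser rights; a client process
-- would normally stream the batch with COPY ... FROM STDIN instead
COPY poll_staging FROM '/tmp/poll_batch.csv' WITH CSV;

-- apply the whole batch in one statement instead of thousands of
-- single-row UPDATEs, so the commit cost is paid once per batch
UPDATE modem_state m
   SET polled_at  = s.polled_at,
       in_octets  = s.in_octets,
       out_octets = s.out_octets
  FROM poll_staging s
 WHERE m.modem_id = s.modem_id;

COMMIT;

Grouping many updates into each commit, as in the 1400 updates/sec figure reported above, avoids paying a WAL flush per row.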
[ { "msg_contents": "> Kari Lavikka <[email protected]> writes:\n> > However, those configuration changes didn't have significant effect\nto\n> > oprofile results. AtEOXact_CatCache consumes even more cycles.\n> \n> I believe I've fixed that for 8.1.\n\nRelative to 8.0, I am seeing a dramatic, almost miraculous reduction in\nCPU load times in 8.1devel. This is for ISAM style access patterns over\nthe parse/bind interface. (IOW one record at a time, 90% read, 10%\nwrite).\n\nRelative to commercial dedicated ISAM storage engines, pg holds up very\nwell except in cpu load, but 8.1 is a huge step towards addressing that.\n\nSo far, except for one minor (and completely understandable) issue with\nbitmap issues, 8.1 has been a stellar performer. Also great is the\nexpansion of pg_locks view (which I didn't see mentioned in Bruce's TODO\nlist, just FYI).\n\nMerlin\n", "msg_date": "Fri, 19 Aug 2005 09:37:19 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in\n> CPU load times in 8.1devel. This is for ISAM style access patterns over\n> the parse/bind interface. (IOW one record at a time, 90% read, 10%\n> write).\n\n> Relative to commercial dedicated ISAM storage engines, pg holds up very\n> well except in cpu load, but 8.1 is a huge step towards addressing that.\n\nCool --- we've done a fair amount of work on squeezing out internal\ninefficiencies during this devel cycle, but it's always hard to predict\njust how much anyone will notice in the real world.\n\nCare to do some oprofile or gprof profiles to see where it's still bad?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Aug 2005 10:03:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " } ]
[ { "msg_contents": "Hi list,\n\nI´m using Pg 8.0.3 on Linux FC2.\n\nThis question may have a very simple answer (I hope), but I´m having lots of trouble solving it, and I counldn´t find any other post about it or anything in the pg docs.\n\nI have some very complex select statements on 4 million rows tables. When using LEFT JOIN ON, some select statements takes about 2 minutes. When I write exactly the same statement but with LEFT JOIN USING, it takes only 1 minute. Comparing to Oracle, the same statement takes 1 minute also, but with LEFT JOIN ON.\n\nSometimes tables have the same column names and I can use LEFT JOIN USING, but in some other cases I MUST use LEFT JOIN ON, because the tables have different column names.\n\nSo my question is: is there a way to make LEFT JOIN ON uses the same plan of LEFT JOIN USING?\n\nThanks,\n\nDiego de Lima\n\n\n\n\n\n\n\n\nHi list,\n \nI´m using Pg 8.0.3 on Linux FC2.\n \nThis question may have a very simple answer (I \nhope), but I´m having lots of trouble solving it, and I counldn´t find any other \npost about it or anything in the pg docs.\n \nI have some very complex select statements on 4 \nmillion rows tables. When using LEFT JOIN ON, some select statements \ntakes about 2 minutes. When I write exactly the same statement but with LEFT \nJOIN USING, it takes only 1 minute. Comparing to Oracle, the same statement \ntakes 1 minute also, but with LEFT JOIN ON.\n \nSometimes tables have the same column \nnames and I can use LEFT JOIN USING, but in some other cases I MUST use LEFT \nJOIN ON, because the tables have different column names.\n \nSo my question is: is there a way to make LEFT JOIN \nON uses the same plan of LEFT JOIN USING?\n \nThanks,\n \nDiego de Lima", "msg_date": "Fri, 19 Aug 2005 12:22:35 -0300", "msg_from": "\"Diego de Lima\" <[email protected]>", "msg_from_op": true, "msg_subject": "LEFT JOIN ON vs. LEFT JOIN USING performance" }, { "msg_contents": "\"Diego de Lima\" <[email protected]> writes:\n> I have some very complex select statements on 4 million rows tables. =\n> When using LEFT JOIN ON, some select statements takes about 2 minutes. =\n> When I write exactly the same statement but with LEFT JOIN USING, it =\n> takes only 1 minute.\n\nCould we see details please? Like the table schemas, the query itself,\nand EXPLAIN ANALYZE results for both cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Aug 2005 11:40:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN ON vs. LEFT JOIN USING performance " }, { "msg_contents": "Diego de Lima wrote:\n> Hi list,\n> \n> I´m using Pg 8.0.3 on Linux FC2.\n> \n> This question may have a very simple answer (I hope), but I´m having\n> lots of trouble solving it, and I counldn´t find any other post about it\n> or anything in the pg docs.\n> \n> I have some very complex select statements on 4 million rows\n> tables. When using LEFT JOIN ON, some select statements takes about 2\n> minutes. When I write exactly the same statement but with LEFT JOIN\n> USING, it takes only 1 minute. 
Comparing to Oracle, the same statement\n> takes 1 minute also, but with LEFT JOIN ON.\n> \n> Sometimes tables have the same column names and I can use LEFT JOIN\n> USING, but in some other cases I MUST use LEFT JOIN ON, because the\n> tables have different column names.\n> \n> So my question is: is there a way to make LEFT JOIN ON uses the same\n> plan of LEFT JOIN USING?\n> \n> Thanks,\n> \n> Diego de Lima\n> \n> \n\nI'm guessing that ON/USING isn't the specific problem. It's probably\nmore an issue of how the planner is deciding to do the joins (merge\njoin, hash join, nested loop, etc.)\n\nCan you send the results of EXPLAIN ANALYZE <your query>?\n\nAlso, any sort of join where you have to join against millions of rows\nis going to be slow. I don't know your specific design, but likely you\ncould change the design to be more selective at an earlier level, which\nmeans that you can cut the size of the join by a lot. If you post you\nquery, a lot of times people here can help optimize your query. (But\nmake sure to explain what you are trying to do, so the optimizations\nmake sense.)\n\nJohn\n=:->", "msg_date": "Fri, 19 Aug 2005 10:40:59 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN ON vs. LEFT JOIN USING performance" } ]
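For reference, the two spellings compared in this thread join the same rows when the join columns share a name; a minimal sketch with hypothetical tables:

SELECT o.id, c.company
  FROM orders o
  LEFT JOIN contacts c ON (o.contact_id = c.contact_id);

SELECT o.id, c.company
  FROM orders o
  LEFT JOIN contacts c USING (contact_id);

As John suggests, the thing to compare is the EXPLAIN ANALYZE output of the two forms: a runtime difference like the one described probably comes from the planner choosing a different join strategy (merge, hash or nested loop) rather than from ON versus USING as such.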
[ { "msg_contents": "> Cool --- we've done a fair amount of work on squeezing out internal\n> inefficiencies during this devel cycle, but it's always hard to\npredict\n> just how much anyone will notice in the real world.\n> \n> Care to do some oprofile or gprof profiles to see where it's still\nbad?\n> \n\nSince release of 8.0, we are a strictly windows shop :). I tried\nbuilding pg with -pg flag and got errors in some of the satellite\nlibraries. I think this is solvable though at some point I'll spend\nmore time on it. Anyways, just so you know the #s that I'm seein, I've\nrun several benchmarks of various programs that access pg via our ISAM\nbridge. The results are as consistent as they are good. These tests\nare on the same box using the same .conf on the same freshly loaded\ndata. The disk doesn't play a major role in these tests. All data\naccess is through ExecPrepared libpq C interface. Benchmark is run from\na separate box on a LAN.\n\nBill of Materials Traversal ( ~ 62k records).\n\n ISAM* pg 8.0 pg 8.1 devel delta 8.0->8.1\nrunning time 63 sec 90 secs 71 secs 21%\ncpu load 17% 45% 32% 29%\t \nloadsecs** 10.71 40.5 22.72 44%\nrecs/sec 984 688 873\nrecs/loadsec 5882 1530 2728\n\n*ISAM is an anonymous commercial ISAM library in an optimized server\narchitecture (pg smokes the non-optimized flat file version).\n**Loadsecs being seconds of CPU at 100% load. \n\n\nIOW cpu load drop is around 44%. Amazing!\n\nMerlin\n\n\n", "msg_date": "Fri, 19 Aug 2005 13:20:49 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck " } ]
[ { "msg_contents": "Hi,\n\nWhile testing 8.1dev I came to this:\n\nCREATE TABLE t (\na int,\nb int\nPRIMARY KEY (a,b));\n\nIn that case, the index is as big as the table.\n\nMy question is is it worthwhile to have such index peformance wise.\nI understand I'd loose uniqness buthas such an index any chance to be used\nagainst seq scan.\n\nIs there any chance we have a \"btree table\" in the future for that case?\n\nRegards,\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n15, Chemin des Monges +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n", "msg_date": "Sat, 20 Aug 2005 14:18:29 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "index as large as table" }, { "msg_contents": "On Sat, 20 Aug 2005 [email protected] wrote:\n\n> Hi,\n>\n> While testing 8.1dev I came to this:\n>\n> CREATE TABLE t (\n> a int,\n> b int\n> PRIMARY KEY (a,b));\n>\n> In that case, the index is as big as the table.\n\nRight. Think about it: the index must store a, b, a reference to the data\nin the table itself and index meta data. If an index is defined across all\ncolumns of the table, it must be bigger than the table itself. (In\nPostgreSQL, when the table is small, the index will be smaller still. This\nis because of each entry in the table itself has meta data. But the amount\nof data per row of a table remains constant, whereas, the amount of\nmetadata in an index grows.)\n\n> My question is is it worthwhile to have such index peformance wise.\n> I understand I'd loose uniqness buthas such an index any chance to be used\n> against seq scan.\n\nOf course. The idea is that, generally speaking, you're only interested in\na small portion of the data stored in the table. Indexes store extra data\nso that they can locate the portion you're interested in faster.\n\nGavin\n", "msg_date": "Sat, 20 Aug 2005 23:08:13 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index as large as table" }, { "msg_contents": "On Sat, Aug 20, 2005 at 11:08:13PM +1000, Gavin Sherry wrote:\n> Of course. The idea is that, generally speaking, you're only interested in\n> a small portion of the data stored in the table. Indexes store extra data\n> so that they can locate the portion you're interested in faster.\n\nI think his question was more why you needed the data in itself, when you had\neverything you needed in the index anyway. (Actually, you don't -- indexes\ndon't carry MVCC information, but I guess that's a bit beside the point.)\n\nThere has been discussion on \"heap tables\" or whatever you'd want to call\nthem (ie. tables that are organized as a B+-tree on some index) here before;\nI guess the archives would be a reasonable place to start looking.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 20 Aug 2005 17:10:25 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index as large as table" } ]
[ { "msg_contents": "I'm reposting this because my mailer hiccuped when I sent it the \nfirst time. If this results in a double post, I apologize.\n\nAt 02:53 PM 8/20/2005, Jeremiah Jahn wrote:\n>On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:\n> > Jeremiah Jahn wrote:\n> > > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:\n> > >\n><snip>\n> >\n> > > it's cached alright. I'm getting a read rate of about 150MB/sec. I would\n> > > have thought is would be faster with my raid setup. I think I'm going to\n> > > scrap the whole thing and get rid of LVM. I'll just do a straight ext3\n> > > system. Maybe that will help. Still trying to get suggestions for a\n> > > stripe size.\n> > >\n> >\n> > I don't think 150MB/s is out of the realm for a 14 drive array.\n> > How fast is time dd if=/dev/zero of=testfile bs=8192 count=1000000\n> >\n>time dd if=/dev/zero of=testfile bs=8192 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 1m24.248s\n>user 0m0.381s\n>sys 0m33.028s\n>\n>\n> > (That should create a 8GB file, which is too big to cache everything)\n> > And then how fast is:\n> > time dd if=testfile of=/dev/null bs=8192 count=1000000\n>\n>time dd if=testfile of=/dev/null bs=8192 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m54.139s\n>user 0m0.326s\n>sys 0m8.916s\n>\n>\n>and on a second run:\n>\n>real 0m55.667s\n>user 0m0.341s\n>sys 0m9.013s\n>\n>\n> >\n> > That should give you a semi-decent way of measuring how fast the RAID\n> > system is, since it should be too big to cache in ram.\n>\n>about 150MB/Sec. Is there no better way to make this go faster...?\nAssuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of \nthem doing raw sequential IO like this should be capable of at\n ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s= \n553MB/s if using Fujitsu MAU's, and ~7*86MB/s= 602MB/s if using \nMaxtor Atlas 15K II's to devices external to the RAID array.\n\n_IF_ the controller setup is high powered enough to keep that kind of \nIO rate up. This will require a controller or controllers providing \ndual channel U320 bandwidth externally and quad channel U320 \nbandwidth internally. IOW, it needs a controller or controllers \ntalking 64b 133MHz PCI-X, reasonably fast DSP/CPU units, and probably \na decent sized IO buffer as well.\n\nAFAICT, the Dell PERC4 controllers use various flavors of the LSI \nLogic MegaRAID controllers. What I don't know is which exact one \nyours is, nor do I know if it (or any of the MegaRAID controllers) \nare high powered enough.\n\nTalk to your HW supplier to make sure you have controllers adequate \nto your HD's.\n\n...and yes, your average access time will be in the 5.5ms - 6ms range \nwhen doing a physical seek.\nEven with RAID, you want to minimize seeks and maximize sequential IO \nwhen accessing them.\nBest to not go to HD at all ;-)\n\nHope this helps,\nRon Peacetree\n\n\n\n\n", "msg_date": "Sat, 20 Aug 2005 17:12:07 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" } ]
[ { "msg_contents": "I need to improve the performance for the following\nquery.\n\nSoon after I reboot my server, the following query takes\n20 seconds the first time I run it.\nWhen I run it after that, it takes approximately 2 seconds.\nI understand the caching taking place (at the os or db\nlevel, it doesn't matter here).\n\nHere are the results of the explain analyze run:\n\n-----\nLOG: duration: 6259.632 ms statement: explain analyze\nSELECT\nc.id AS contact_id,\nsr.id AS sales_rep_id,\nLTRIM(RTRIM(sr.firstname || ' ' || sr.lastname)) AS sales_rep_name,\np.id AS partner_id,\np.company AS partner_company,\ncoalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname)))\nAS contact_company,\nLTRIM(RTRIM(c.city || ' ' || c.state || ' ' || c.postalcode || ' ' || c.country))\nAS contact_location,\nc.phone AS contact_phone,\nc.email AS contact_email,\nco.name AS contact_country,\nTO_CHAR(c.request_status_last_modified, 'mm/dd/yy hh12:mi pm')\nAS request_status_last_modified,\nTO_CHAR(c.request_status_last_modified, 'yyyymmddhh24miss')\nAS rqst_stat_last_mdfd_sortable,\nc.token_id,\nc.master_key_token AS token\nFROM\nsales_reps sr\nJOIN partners p ON (sr.id = p.sales_rep_id)\nJOIN contacts c ON (p.id = c.partner_id)\nJOIN countries co ON (LOWER(c.country) = LOWER(co.code))\nJOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\nWHERE\nc.lead_deleted IS NULL\nAND EXISTS\n(\nSELECT\nlr.id\nFROM\nlead_requests lr,\nlead_request_status lrs\nWHERE\nc.id = lr.contact_id AND\nlr.status_id = lrs.id AND\nlrs.is_closed = 0\n)\nORDER BY\ncontact_company, contact_id;\n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------\n Sort (cost=39093.16..39102.80 rows=3856 width=238) (actual time=6220.481..6221.188 rows=1071 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Merge Join (cost=38580.89..38863.48 rows=3856 width=238) (actual time=6015.751..6184.199 rows=1071 loops=1)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column19?\")\n -> Sort (cost=14.00..14.61 rows=242 width=19) (actual time=9.250..9.500 rows=240 loops=1)\n Sort Key: lower((co.code)::text)\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.132..4.498 rows=242 loops=1)\n -> Sort (cost=38566.89..38574.86 rows=3186 width=225) (actual time=6005.644..6006.954 rows=1071 loops=1)\n Sort Key: lower((c.country)::text)\n -> Merge Join (cost=75.65..38381.50 rows=3186 width=225) (actual time=58.086..5979.287 rows=1071 loops=1)\n Merge Cond: (\"outer\".partner_id = \"inner\".id)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..160907.39 rows=20106 width=171) (actual time=2.569..5816.985 rows=1547 loops=1)\n Filter: ((lead_deleted IS NULL) AND (subplan))\n SubPlan\n -> Nested Loop (cost=1.16..6.56 rows=2 width=10) (actual time=0.119..0.119 rows=0 loops=40261)\n Join Filter: (\"outer\".status_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..4.86 rows=3 width=20) (actual time=0.079..0.083 rows=0 loops=40261)\n Index Cond: ($0 = contact_id)\n -> Materialize (cost=1.16..1.24 rows=8 width=10) (actual time=0.002..0.011 rows=6 loops=12592)\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.083..0.270 
rows=7 loops=1)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=75.65..76.37 rows=290 width=64) (actual time=55.073..56.990 rows=1334 loops=1)\n Sort Key: p.id\n -> Merge Join (cost=59.24..63.79 rows=290 width=64) (actual time=31.720..41.096 rows=395 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".sales_rep_id)\n -> Sort (cost=2.42..2.52 rows=39 width=31) (actual time=1.565..1.616 rows=39 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=31) (actual time=0.043..0.581 rows=39 loops=1)\n -> Sort (cost=56.82..57.55 rows=290 width=43) (actual time=29.921..30.310 rows=395 loops=1)\n Sort Key: p.sales_rep_id\n -> Nested Loop (cost=24.35..44.96 rows=290 width=43) (actual time=0.169..22.566 rows=395 loops=1)\n Join Filter: (\"inner\".classification_id = \"outer\".id)\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.059..0.102 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Materialize (cost=24.35..28.70 rows=435 width=53) (actual time=0.023..5.880 rows=435 loops=2)\n -> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=53) (actual time=0.034..8.937 rows=435 loops=1)\n Total runtime: 6225.791 ms\n(37 rows)\n\n-----\n\nMy first question is, what is the Materialize query plan element?\nIt happens twice, and usually when I see it, my query is slow.\n\nMy second and more important question is, does anyone have\nany ideas or suggestions as to how I can increase the speed\nfor this query?\n\nThings I have already done are, modify the joins and conditions\nso it starts with smaller tables, thus the join set is smaller,\nmodify the configuration of the server to ensure index scans\nare used as they should be, ran vacuumdb and analyze on the\ndatabase.\n\nThank you very much in advance for any pointers for additional\nplaces I can look.\n\nThanks.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Sat, 20 Aug 2005 20:48:41 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "complex query performance assistance request" }, { "msg_contents": "On Sat, 20 Aug 2005, John Mendenhall wrote:\n\n> I need to improve the performance for the following\n> query.\n\nI have run the same query in the same database under\ndifferent schemas. Each schema is pretty much the same\ntables and indices. 
One has an extra backup table and\nan extra index which are not used in either of the explain\nanalyze plans.\n\nThe first schema is a development schema, which I used\nto performance tune the server so everything was great.\n\nHere are the current results of the sql run in the development\nenvironment:\n\n-----\nLOG: duration: 852.275 ms statement: explain analyze\nSELECT\n c.id AS contact_id,\n sr.id AS sales_rep_id,\n p.id AS partner_id,\n coalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname))) AS contact_company,\n co.name AS contact_country,\n c.master_key_token\nFROM\n sales_reps sr\n JOIN partners p ON (sr.id = p.sales_rep_id)\n JOIN contacts c ON (p.id = c.partner_id)\n JOIN countries co ON (LOWER(c.country) = LOWER(co.code))\n JOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\nWHERE\n c.lead_deleted IS NULL\n AND EXISTS\n (\n SELECT\n lr.id\n FROM\n lead_requests lr,\n lead_request_status lrs\n WHERE\n c.id = lr.contact_id AND\n lr.status_id = lrs.id AND\n lrs.is_closed = 0\n )\nORDER BY\n contact_company, contact_id\n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------\n Sort (cost=18238.25..18238.27 rows=11 width=102) (actual time=823.721..823.915 rows=247 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Hash Join (cost=18230.34..18238.06 rows=11 width=102) (actual time=808.042..818.427 rows=247 loops=1)\n Hash Cond: (lower((\"outer\".code)::text) = lower((\"inner\".country)::text))\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.032..1.208 rows=242 loops=1)\n -> Hash (cost=18230.31..18230.31 rows=9 width=95) (actual time=807.554..807.554 rows=0 loops=1)\n -> Merge Join (cost=18229.98..18230.31 rows=9 width=95) (actual time=794.413..804.855 rows=247 loops=1)\n Merge Cond: (\"outer\".sales_rep_id = \"inner\".id)\n -> Sort (cost=18227.56..18227.59 rows=9 width=95) (actual time=793.132..793.502 rows=250 loops=1)\n Sort Key: p.sales_rep_id\n -> Merge Join (cost=18227.26..18227.42 rows=9 width=95) (actual time=782.832..789.205 rows=250 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".classification_id)\n -> Sort (cost=1.05..1.05 rows=2 width=10) (actual time=0.189..0.194 rows=2 loops=1)\n Sort Key: pc.id\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.089..0.127 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Sort (cost=18226.21..18226.24 rows=13 width=105) (actual time=782.525..782.818 rows=251 loops=1)\n Sort Key: p.classification_id\n -> Merge Join (cost=0.00..18225.97 rows=13 width=105) (actual time=54.135..776.299 rows=449 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".partner_id)\n -> Index Scan using partners_pkey on partners p (cost=0.00..30.80 rows=395 width=30) (actual time=0.073..6.873 rows=395 loops=1)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..130157.20 rows=93 width=85) (actual time=0.366..739.783 rows=453 loops=1)\n Filter: ((lead_deleted IS NULL) AND (subplan))\n SubPlan\n -> Nested Loop (cost=0.00..6.75 rows=2 width=10) (actual time=0.103..0.103 rows=0 loops=5576)\n Join Filter: (\"outer\".status_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr 
(cost=0.00..4.23 rows=2 width=20) (actual time=0.075..0.075 rows=0 loops=5576)\n Index Cond: ($0 = contact_id)\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.028..0.098 rows=4 loops=522)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=2.42..2.52 rows=39 width=10) (actual time=1.183..1.569 rows=268 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=10) (actual time=0.056..0.353 rows=39 loops=1)\n Total runtime: 826.425 ms\n(34 rows)\n-----\n\nHere is the current run in the production environment,\nwhich I need to figure out how to get to the performance\nlevel of the development environment:\n\n-----\nLOG: duration: 6447.934 ms statement: explain analyze\nSELECT\n c.id AS contact_id,\n sr.id AS sales_rep_id,\n p.id AS partner_id,\n coalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname))) AS contact_company,\n co.name AS contact_country,\n c.master_key_token\nFROM\n sales_reps sr\n JOIN partners p ON (sr.id = p.sales_rep_id)\n JOIN contacts c ON (p.id = c.partner_id)\n JOIN countries co ON (LOWER(c.country) = LOWER(co.code))\n JOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\nWHERE\n c.lead_deleted IS NULL\n AND EXISTS\n (\n SELECT\n lr.id\n FROM\n lead_requests lr,\n lead_request_status lrs\n WHERE\n c.id = lr.contact_id AND\n lr.status_id = lrs.id AND\n lrs.is_closed = 0\n )\nORDER BY\n contact_company, contact_id\n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------\n Sort (cost=40838.98..40849.08 rows=4042 width=102) (actual time=6418.732..6419.536 rows=1071 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Merge Join (cost=40442.25..40596.85 rows=4042 width=102) (actual time=6357.161..6389.616 rows=1071 loops=1)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column9?\")\n -> Sort (cost=14.00..14.61 rows=242 width=19) (actual time=9.753..10.018 rows=240 loops=1)\n Sort Key: lower((co.code)::text)\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.126..3.950 rows=242 loops=1)\n -> Sort (cost=40428.24..40436.59 rows=3340 width=95) (actual time=6347.154..6348.429 rows=1071 loops=1)\n Sort Key: lower((c.country)::text)\n -> Merge Join (cost=75.65..40232.76 rows=3340 width=95) (actual time=60.308..6331.266 rows=1071 loops=1)\n Merge Cond: (\"outer\".partner_id = \"inner\".id)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..161018.18 rows=20120 width=85) (actual time=2.769..6188.886 rows=1548 loops=1)\n Filter: ((lead_deleted IS NULL) AND (subplan))\n SubPlan\n -> Nested Loop (cost=1.16..6.57 rows=2 width=10) (actual time=0.129..0.129 rows=0 loops=40262)\n Join Filter: (\"outer\".status_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..4.86 rows=3 width=20) (actual time=0.086..0.092 rows=0 loops=40262)\n Index Cond: ($0 = contact_id)\n -> Materialize (cost=1.16..1.24 rows=8 width=10) (actual time=0.002..0.013 rows=6 loops=12593)\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.078..0.243 rows=7 loops=1)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=75.65..76.37 rows=290 width=20) (actual time=57.243..59.574 rows=1334 loops=1)\n Sort 
Key: p.id\n -> Merge Join (cost=59.24..63.79 rows=290 width=20) (actual time=33.975..42.215 rows=395 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".sales_rep_id)\n -> Sort (cost=2.42..2.52 rows=39 width=10) (actual time=1.206..1.285 rows=39 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=10) (actual time=0.028..0.365 rows=39 loops=1)\n -> Sort (cost=56.82..57.55 rows=290 width=20) (actual time=32.566..33.254 rows=395 loops=1)\n Sort Key: p.sales_rep_id\n -> Nested Loop (cost=24.35..44.96 rows=290 width=20) (actual time=0.158..25.227 rows=395 loops=1)\n Join Filter: (\"inner\".classification_id = \"outer\".id)\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.050..0.096 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Materialize (cost=24.35..28.70 rows=435 width=30) (actual time=0.028..6.617 rows=435 loops=2)\n -> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=30) (actual time=0.042..9.941 rows=435 loops=1)\n Total runtime: 6423.683 ms\n(37 rows)\n-----\n\nThe SQL is exactly the same.\n\nThe issue is the query plan is different, and thus,\nnot up to the performance we need.\n\nWe have 256meg in the machine. Would it help if\nwe threw some more memory in?\n\nPlease let me know if you have *any* pointers as to\nthe reason for the difference.\n\nThank you very much in advance for any pointers or\nsuggestions.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Mon, 22 Aug 2005 11:21:38 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: complex query performance assistance request" }, { "msg_contents": "John Mendenhall <[email protected]> writes:\n> The issue is the query plan is different, and thus,\n> not up to the performance we need.\n\nNo, the issue is that you've got eight times as much data in the\nproduction server; so it's hardly surprising that it takes about\neight times longer.\n\nThe production query is spending most of its time on the subplan\nattached to the contacts table:\n\n> -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..161018.18 rows=20120 width=85) (actual time=2.769..6188.886 rows=1548 loops=1)\n> Filter: ((lead_deleted IS NULL) AND (subplan))\n> SubPlan\n> -> Nested Loop (cost=1.16..6.57 rows=2 width=10) (actual time=0.129..0.129 rows=0 loops=40262)\n\n0.129 * 40262 = 5193.798, so about five seconds in the subplan and\nanother one second in the indexscan proper. The problem is that the\nsubplan (the EXISTS clause) is iterated for each of 40262 rows of\ncontacts --- basically, every contacts row that has null lead_deleted.\n\nOn the dev server the same scan shows these numbers:\n\n> -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..130157.20 rows=93 width=85) (actual time=0.366..739.783 rows=453 loops=1)\n> Filter: ((lead_deleted IS NULL) AND (subplan))\n> SubPlan\n> -> Nested Loop (cost=0.00..6.75 rows=2 width=10) (actual time=0.103..0.103 rows=0 loops=5576)\n\nHere the subplan is iterated only 5576 times for 574 total msec. 
It's\nstill the bulk of the runtime though; the fact that the upper levels\nof the plan are a bit different has got little to do with where the time\nis going.\n\nI'd suggest trying to get rid of the EXISTS clause --- can you refactor\nthat into something that joins at the top query level?\n\nOr, if this is 7.4 or later (and you should ALWAYS mention which version\nyou are using in a performance question, because it matters), try to\nconvert the EXISTS into an IN. \"x IN (subselect)\" is planned much better\nthan \"EXISTS(subselect-using-x)\" these days.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Aug 2005 16:15:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: complex query performance assistance request " }, { "msg_contents": "Tom,\n\n> No, the issue is that you've got eight times as much data in the\n> production server; so it's hardly surprising that it takes about\n> eight times longer.\n> \n> The production query is spending most of its time on the subplan\n> attached to the contacts table:\n> \n> > -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..161018.18 rows=20120 width=85) (actual time=2.769..6188.886 rows=1548 loops=1)\n> > Filter: ((lead_deleted IS NULL) AND (subplan))\n> > SubPlan\n> > -> Nested Loop (cost=1.16..6.57 rows=2 width=10) (actual time=0.129..0.129 rows=0 loops=40262)\n> \n> 0.129 * 40262 = 5193.798, so about five seconds in the subplan and\n> another one second in the indexscan proper. The problem is that the\n> subplan (the EXISTS clause) is iterated for each of 40262 rows of\n> contacts --- basically, every contacts row that has null lead_deleted.\n> \n> On the dev server the same scan shows these numbers:\n> \n> > -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..130157.20 rows=93 width=85) (actual time=0.366..739.783 rows=453 loops=1)\n> > Filter: ((lead_deleted IS NULL) AND (subplan))\n> > SubPlan\n> > -> Nested Loop (cost=0.00..6.75 rows=2 width=10) (actual time=0.103..0.103 rows=0 loops=5576)\n> \n> I'd suggest trying to get rid of the EXISTS clause --- can you refactor\n> that into something that joins at the top query level?\n> \n> Or, if this is 7.4 or later (and you should ALWAYS mention which version\n> you are using in a performance question, because it matters), try to\n> convert the EXISTS into an IN. \"x IN (subselect)\" is planned much better\n> than \"EXISTS(subselect-using-x)\" these days.\n\nWe are using version 7.4.6.\n\nThe number of contacts in the dev env is 37080.\nThe number of contacts in the production env is 40307.\nThe amount of data is statistically about the same.\n\nHowever, the number of lead_requests are much different.\nThe dev env has 1438 lead_requests, the production env\nhas 15554 lead_requests. Each contacts row can have\nmultiple lead_requests, each lead_requests entry can\nhave an open or closed status. We are trying to select\nthe contacts with an open lead_request.\n\nWould it be best to attempt to rewrite it for IN?\nOr, should we try to tie it in with a join? I would\nprobably need to GROUP so I can just get a count of those\ncontacts with open lead_requests. Unless you know of a\nbetter way?\n\nThanks for your assistance. 
This is helping a lot.\nBTW, what does the Materialize query plan element mean?\n\nThanks again.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Mon, 22 Aug 2005 14:07:51 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: complex query performance assistance request" }, { "msg_contents": "John Mendenhall <[email protected]> writes:\n> Would it be best to attempt to rewrite it for IN?\n> Or, should we try to tie it in with a join?\n\nCouldn't say without a deeper understanding of what you're trying to\naccomplish.\n\n> BTW, what does the Materialize query plan element mean?\n\nMeans \"run the contained subplan once, and save the results aside in a\nbuffer; on subsequent loops, just pass back the buffer contents instead\nof re-running the subplan\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Aug 2005 20:54:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: complex query performance assistance request " }, { "msg_contents": "Tom,\n\n> > Would it be best to attempt to rewrite it for IN?\n> > Or, should we try to tie it in with a join?\n> \n> Couldn't say without a deeper understanding of what you're trying to\n> accomplish.\n\nHere are the results of each SQL rewrite.\n\nThe first pass, I rewrote it as c.id IN ():\n-----\nLOG: duration: 2669.682 ms statement: explain analyze\nSELECT\n c.id AS contact_id,\n sr.id AS sales_rep_id,\n p.id AS partner_id,\n coalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname))) AS contact_company,\n co.name AS contact_country,\n c.master_key_token\nFROM\n sales_reps sr\n JOIN partners p ON (sr.id = p.sales_rep_id)\n JOIN contacts c ON (p.id = c.partner_id)\n JOIN countries co ON (LOWER(c.country) = LOWER(co.code))\n JOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\nWHERE\n c.lead_deleted IS NULL\n AND c.id IN\n (\n SELECT\n lr.contact_id\n FROM\n lead_requests lr,\n lead_request_status lrs\n WHERE\n lr.status_id = lrs.id AND\n lrs.is_closed = 0\n )\nORDER BY\n contact_company, contact_id\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4413.35..4416.16 rows=1123 width=102) (actual time=2617.069..2617.719 rows=1071 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Merge Join (cost=4311.31..4356.45 rows=1123 width=102) (actual time=2549.717..2589.398 rows=1071 loops=1)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column9?\")\n -> Sort (cost=14.00..14.61 rows=242 width=19) (actual time=9.765..9.966 rows=240 loops=1)\n Sort Key: lower((co.code)::text)\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.142..5.118 rows=242 loops=1)\n -> Sort (cost=4297.31..4299.63 rows=928 width=95) (actual time=2539.685..2540.913 rows=1071 loops=1)\n Sort Key: lower((c.country)::text)\n -> Merge IN Join (cost=4163.02..4251.57 rows=928 width=95) (actual time=2377.539..2524.844 rows=1071 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".contact_id)\n -> Sort (cost=1835.53..1851.27 rows=6296 width=95) (actual time=1843.866..1853.193 rows=6349 loops=1)\n Sort Key: c.id\n -> Merge Join (cost=75.65..1438.24 rows=6296 width=95) (actual time=51.713..1505.633 rows=6349 loops=1)\n Merge Cond: (\"outer\".partner_id = 
\"inner\".id)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..5303.84 rows=40243 width=85) (actual time=0.077..584.736 rows=40267 loops=1)\n Filter: (lead_deleted IS NULL)\n -> Sort (cost=75.65..76.37 rows=290 width=20) (actual time=51.508..62.288 rows=6462 loops=1)\n Sort Key: p.id\n -> Merge Join (cost=59.24..63.79 rows=290 width=20) (actual time=30.152..38.281 rows=395 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".sales_rep_id)\n -> Sort (cost=2.42..2.52 rows=39 width=10) (actual time=1.390..1.505 rows=39 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=10) (actual time=0.026..0.380 rows=39 loops=1)\n -> Sort (cost=56.82..57.55 rows=290 width=20) (actual time=28.558..29.120 rows=395 loops=1)\n Sort Key: p.sales_rep_id\n -> Nested Loop (cost=24.35..44.96 rows=290 width=20) (actual time=0.191..21.408 rows=395 loops=1)\n Join Filter: (\"inner\".classification_id = \"outer\".id)\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.068..0.121 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Materialize (cost=24.35..28.70 rows=435 width=30) (actual time=0.029..5.380 rows=435 loops=2)\n -> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=30) (actual time=0.038..8.161 rows=435 loops=1)\n -> Sort (cost=2327.50..2351.43 rows=9573 width=11) (actual time=533.508..535.629 rows=1742 loops=1)\n Sort Key: lr.contact_id\n -> Merge Join (cost=1520.94..1694.49 rows=9573 width=11) (actual time=302.932..461.644 rows=1745 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".status_id)\n -> Sort (cost=1.28..1.30 rows=8 width=10) (actual time=0.392..0.404 rows=7 loops=1)\n Sort Key: lrs.id\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.117..0.280 rows=7 loops=1)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=1519.66..1558.55 rows=15556 width=21) (actual time=302.423..321.939 rows=15387 loops=1)\n Sort Key: lr.status_id\n -> Seq Scan on lead_requests lr (cost=0.00..436.56 rows=15556 width=21) (actual time=0.029..164.708 rows=15559 loops=1)\n Total runtime: 2632.987 ms\n(44 rows)\n-----\n\nThe second pass, I rewrote it to tie in with a JOIN, adding\na DISTINCT at the top to get rid of the duplicates:\n-----\nLOG: duration: 3285.645 ms statement: explain analyze\nSELECT DISTINCT\n c.id AS contact_id,\n sr.id AS sales_rep_id,\n p.id AS partner_id,\n coalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname))) AS contact_company,\n co.name AS contact_country,\n c.master_key_token\nFROM\n sales_reps sr\n JOIN partners p ON (sr.id = p.sales_rep_id)\n JOIN contacts c ON (p.id = c.partner_id)\n JOIN countries co ON (LOWER(c.country) = LOWER(co.code))\n JOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\n JOIN lead_requests lr ON (c.id = lr.contact_id)\n JOIN lead_request_status lrs ON (lr.status_id = lrs.id AND lrs.is_closed = 0)\nWHERE\n c.lead_deleted IS NULL\nORDER BY\n contact_company, contact_id\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=3039.78..3071.46 rows=1810 width=102) (actual time=3219.707..3228.637 rows=1071 loops=1)\n -> Sort (cost=3039.78..3044.31 rows=1810 width=102) (actual time=3219.695..3220.560 rows=1118 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || 
(c.lastname)::text)))), c.id, sr.id, p.id, co.name, c.master_key_token\n -> Merge Join (cost=2870.92..2941.85 rows=1810 width=102) (actual time=3156.788..3188.338 rows=1118 loops=1)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column9?\")\n -> Sort (cost=14.00..14.61 rows=242 width=19) (actual time=9.196..9.445 rows=240 loops=1)\n Sort Key: lower((co.code)::text)\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.128..3.914 rows=242 loops=1)\n -> Sort (cost=2856.92..2860.66 rows=1496 width=95) (actual time=3147.340..3148.477 rows=1118 loops=1)\n Sort Key: lower((c.country)::text)\n -> Merge Join (cost=2750.88..2778.03 rows=1496 width=95) (actual time=3008.933..3132.122 rows=1118 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".status_id)\n -> Sort (cost=1.28..1.30 rows=8 width=10) (actual time=0.366..0.379 rows=7 loops=1)\n Sort Key: lrs.id\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.094..0.254 rows=7 loops=1)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=2749.60..2755.67 rows=2430 width=105) (actual time=3008.396..3023.502 rows=9992 loops=1)\n Sort Key: lr.status_id\n -> Merge Join (cost=1835.53..2612.95 rows=2430 width=105) (actual time=1975.714..2912.632 rows=10089 loops=1)\n Merge Cond: (\"outer\".contact_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..683.87 rows=15556 width=21) (actual time=0.073..247.148 rows=15556 loops=1)\n -> Sort (cost=1835.53..1851.27 rows=6296 width=95) (actual time=1975.273..1988.664 rows=10089 loops=1)\n Sort Key: c.id\n -> Merge Join (cost=75.65..1438.24 rows=6296 width=95) (actual time=56.107..1625.186 rows=6349 loops=1)\n Merge Cond: (\"outer\".partner_id = \"inner\".id)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..5303.84 rows=40243 width=85) (actual time=0.047..580.311 rows=40267 loops=1)\n Filter: (lead_deleted IS NULL)\n -> Sort (cost=75.65..76.37 rows=290 width=20) (actual time=55.935..65.502 rows=6462 loops=1)\n Sort Key: p.id\n -> Merge Join (cost=59.24..63.79 rows=290 width=20) (actual time=31.765..39.925 rows=395 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".sales_rep_id)\n -> Sort (cost=2.42..2.52 rows=39 width=10) (actual time=1.072..1.117 rows=39 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=10) (actual time=0.022..0.312 rows=39 loops=1)\n -> Sort (cost=56.82..57.55 rows=290 width=20) (actual time=30.489..30.893 rows=395 loops=1)\n Sort Key: p.sales_rep_id\n -> Nested Loop (cost=24.35..44.96 rows=290 width=20) (actual time=0.159..23.356 rows=395 loops=1)\n Join Filter: (\"inner\".classification_id = \"outer\".id)\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.047..0.086 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Materialize (cost=24.35..28.70 rows=435 width=30) (actual time=0.028..6.124 rows=435 loops=2)\n -> Seq Scan on partners p (cost=0.00..24.35 rows=435 width=30) (actual time=0.039..9.383 rows=435 loops=1)\n Total runtime: 3241.139 ms\n(43 rows)\n-----\n\nThe DISTINCT ON condition was about the same amount of time,\nstatistically. 
Removing the DISTINCT entirely only gave a\nvery slight improvement in performance.\n\nSo, the bottom line is, unless there are other ideas to\nimprove the performance, I will most likely rewrite our\napplication to use the c.id IN () option.\n\nThank you very much for your input and suggestions.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Tue, 23 Aug 2005 12:05:25 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: complex query performance assistance request" } ]
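A condensed sketch of the rewrite discussed in this thread, stripped of the surrounding joins and output columns so the two forms are easier to compare; the table and column names are taken from the thread itself.

-- correlated EXISTS: the subplan is re-executed for every candidate contact
SELECT c.id
FROM contacts c
WHERE c.lead_deleted IS NULL
  AND EXISTS (
        SELECT lr.id
        FROM lead_requests lr
        JOIN lead_request_status lrs ON lr.status_id = lrs.id
        WHERE lr.contact_id = c.id
          AND lrs.is_closed = 0
      );

-- uncorrelated IN (7.4 and later): planned as a join against the subselect
SELECT c.id
FROM contacts c
WHERE c.lead_deleted IS NULL
  AND c.id IN (
        SELECT lr.contact_id
        FROM lead_requests lr
        JOIN lead_request_status lrs ON lr.status_id = lrs.id
        WHERE lrs.is_closed = 0
      );

The timings posted above (roughly 6.4 seconds down to about 2.7 seconds on the production data) came from switching to the second form.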
[ { "msg_contents": "Hi,\n\nSay I have a table with column A, B, C, D\nA has a unique index on it (primary key)\nB and C have a normal index on it\nD has no index\n\nIf I perform a query like update tbl set D = 'whatever' ;\nthat should make no difference on the indexes on the other columns, \nright ?\n\nOr is there some kind of mechanism that does create a sort of new \nrecord, thus makes the indexes go wild.\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Sun, 21 Aug 2005 20:32:31 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "(Re)-indexing on updates" }, { "msg_contents": "On Sun, 2005-08-21 at 20:32 +0200, Yves Vindevogel wrote:\n> \n> \n> ______________________________________________________________________\n> \n> Hi,\n> \n> Say I have a table with column A, B, C, D\n> A has a unique index on it (primary key)\n> B and C have a normal index on it\n> D has no index\n> \n> If I perform a query like update tbl set D = 'whatever' ;\n> that should make no difference on the indexes on the other columns,\n> right ?\n\nWhat postgresql does on update is to make a new record, so there will be\ntwo records in your table and two records in your index. You would need\nto vacuum the table to mark the space for the old record free, and you\nwould need to reindex the table to shrink the index.\n\n> \n> Or is there some kind of mechanism that does create a sort of new\n> record, thus makes the indexes go wild.\n\nYes.\n\n-jwb\n\n", "msg_date": "Sun, 21 Aug 2005 12:06:05 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: (Re)-indexing on updates" }, { "msg_contents": "The option with\n\nT1: A B C and T2 A D (to avoid the updates)\nworks very well with a simple query\n\nInsert into T2 (A, D)\nselect A, functionToGetD from T1 left join T2 on T1.A = T2.A\nwhere T2.A is null\n\nThe above gives me the new records for those where D was not filled yet.\nSince they are all new records, I have no trouble with the MVCC\n\nOn 21 Aug 2005, at 21:06, Jeffrey W. Baker wrote:\n\n> On Sun, 2005-08-21 at 20:32 +0200, Yves Vindevogel wrote:\n>>\n>>\n>> ______________________________________________________________________\n>>\n>> Hi,\n>>\n>> Say I have a table with column A, B, C, D\n>> A has a unique index on it (primary key)\n>> B and C have a normal index on it\n>> D has no index\n>>\n>> If I perform a query like update tbl set D = 'whatever' ;\n>> that should make no difference on the indexes on the other columns,\n>> right ?\n>\n> What postgresql does on update is to make a new record, so there will \n> be\n> two records in your table and two records in your index. 
You would \n> need\n> to vacuum the table to mark the space for the old record free, and you\n> would need to reindex the table to shrink the index.\n>\n>>\n>> Or is there some kind of mechanism that does create a sort of new\n>> record, thus makes the indexes go wild.\n>\n> Yes.\n>\n> -jwb\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 22 Aug 2005 11:11:25 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: (Re)-indexing on updates" } ]
[ { "msg_contents": "I always forget that this goes to the writer itself and not to the \ngroup.\n>\n>\n> Ok, this is a major setback in some of my procedures.\n> From time to time, I must update one field in about 10% of the records.\n> So this will take time.\n>\n> How can I work around that ?\n>\n> Some personal opinions ...\n> 1) Drop indexes, run update, create indexes, vacuum\n> 2) Move the field to another table and use joins ? I could delete the \n> records when needed and add them again\n>\n>\n> This mechanism, of inserting a new record and marking the old one, is \n> that data kept somewhere where I can \"see\" it ?\n> I need for one app a trace of all my changes in the database. I have \n> a set of triggers to do that for the moment on each table.\n> Could I use that mechanism somehow to avoid my triggers ?\n> Any documentation on that mechanism (hacker stuff like what tables are \n> used) ?\n> Any good books on stuff like this ? I love to read and know how the \n> inside mechanics work.\n>\n> Tnx\n>\n>\n>\n> On 21 Aug 2005, at 21:06, Jeffrey W. Baker wrote:\n>\n>> On Sun, 2005-08-21 at 20:32 +0200, Yves Vindevogel wrote:\n>>>\n>>>\n>>> _____________________________________________________________________ \n>>> _\n>>>\n>>> Hi,\n>>>\n>>> Say I have a table with column A, B, C, D\n>>> A has a unique index on it (primary key)\n>>> B and C have a normal index on it\n>>> D has no index\n>>>\n>>> If I perform a query like update tbl set D = 'whatever' ;\n>>> that should make no difference on the indexes on the other columns,\n>>> right ?\n>>\n>> What postgresql does on update is to make a new record, so there will \n>> be\n>> two records in your table and two records in your index. You would \n>> need\n>> to vacuum the table to mark the space for the old record free, and you\n>> would need to reindex the table to shrink the index.\n>>\n>>>\n>>> Or is there some kind of mechanism that does create a sort of new\n>>> record, thus makes the indexes go wild.\n>>\n>> Yes.\n>>\n>> -jwb\n>>\n>>\n>>\n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Sun, 21 Aug 2005 21:36:03 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: (Re)-indexing on updates" } ]
[ { "msg_contents": "My DB is quite simple. It holds data about printjobs that come from \nthe windows eventlog.\nThe data is shown on a website. I have one main table: tblPrintjobs.\nWe add some extra data to it. Like the applicationtype, based on rules \nwe define in other tables.\n\nWhen a rule changes, the updates take place (and take so long).\nAlso, when new records are added, this takes place.\n\nFor instance, rule 1 and rule 2 are changing positions in importance. \n(1 was before 2, now 2 before 1)\nThe records that hold reference to rule 1 are reset to null (one field)\nRule 2 is assigned, then rule 1 is assigned.\n\nWhat I could do is also:\ndelete all from tblRefRules where rule is 1\ninsert all from tblPrintjobs that are not yet in RefRules for Rule2, \nthen insert all for rule2\n\nThat would be a workaround for the MVCC. Not ?\n\nBTW: The good rule is: drop index, update, vacuum, create index ?\nI think I mistook the purpose of vacuum.\nIf I index before the vacuum, my marked records will still be in the \nindex ? Even if all transactions are finished ?\n\n\nBegin forwarded message:\n\n> From: \"Jeffrey W. Baker\" <[email protected]>\n> Date: Sun 21 Aug 2005 21:36:16 CEST\n> To: Yves Vindevogel <[email protected]>\n> Subject: Re: [PERFORM] (Re)-indexing on updates\n>\n> On Sun, 2005-08-21 at 21:18 +0200, Yves Vindevogel wrote:\n>>\n>>\n>> ______________________________________________________________________\n>>\n>> Ok, this is a major setback in some of my procedures.\n>> From time to time, I must update one field in about 10% of the\n>> records.\n>> So this will take time.\n>>\n>> How can I work around that ?\n>>\n>> Some personal opinions ...\n>> 1) Drop indexes, run update, create indexes, vacuum\n>\n> Drop index, update, vacuum, create index\n>\n> -or-\n>\n> update, vacuum, reindex\n>\n>> 2) Move the field to another table and use joins ? I could delete the\n>> records when needed and add them again\n>\n> I'm not familiar with your application, but you could try it and tell \n> us\n> if this works :)\n>\n>>\n>> This mechanism, of inserting a new record and marking the old one, is\n>> that data kept somewhere where I can \"see\" it ?\n>\n> This is MVCC: multi-version cuncurrency. The old record is kept \n> because\n> there could be an old transaction that can still see it, and cannot yet\n> see the updated record. And no other transaction can see your record\n> until you commit. The old row isn't removed until you vacuum.\n>\n>> I need for one app a trace of all my changes in the database. I have\n>> a set of triggers to do that for the moment on each table.\n>> Could I use that mechanism somehow to avoid my triggers ?\n>> Any documentation on that mechanism (hacker stuff like what tables are\n>> used) ?\n>\n> You could search the postgresql documentation (or the web) for MVCC.\n>\n> Regards,\n> jwb\n>\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Sun, 21 Aug 2005 21:59:11 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: (Re)-indexing on updates" } ]
[ { "msg_contents": "I'm resending this as it appears not to have made it to the list.\n\nAt 10:54 AM 8/21/2005, Jeremiah Jahn wrote:\n>On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:\n> > Ron wrote:\n> >\n> > Well, since you can get a read of the RAID at 150MB/s, that means that\n> > it is actual I/O speed. It may not be cached in RAM. Perhaps you could\n> > try the same test, only using say 1G, which should be cached.\n>\n>[root@io pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m8.885s\n>user 0m0.299s\n>sys 0m6.998s\n\nThis is abysmally slow.\n\n\n>[root@io pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000\n>1000000+0 records in\n>1000000+0 records out\n>\n>real 0m1.654s\n>user 0m0.232s\n>sys 0m1.415s\n\nThis transfer rate is the only one out of the 4 you have posted that \nis in the vicinity of where it should be.\n\n\n>The raid array I have is currently set up to use a single channel. But I\n>have dual controllers in the array. And dual external slots on the card.\n>The machine is brand new and has pci-e backplane.\n>\nSo you have 2 controllers each with 2 external slots? But you are \ncurrently only using 1 controller and only one external slot on that \ncontroller?\n\n\n> > > Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them\n> > > doing raw sequential IO like this should be capable of at\n> > > ~7*75MB/s= 525MB/s using Seagate Cheetah 15K.4's\n>BTW I'm using Seagate Cheetah 15K.4's\n\nOK, now we have that nailed down.\n\n\n> > > AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic\n> > > MegaRAID controllers. What I don't know is which exact one yours is,\n> > > nor do I know if it (or any of the MegaRAID controllers) are high\n> > > powered enough.\n>\n>PERC4eDC-PCI Express, 128MB Cache, 2-External Channels\n\nLooks like they are using the LSI Logic MegaRAID SCSI 320-2E \ncontroller. IIUC, you have 2 of these, each with 2 external channels?\n\nThe specs on these appear a bit strange. They are listed as being a \nPCI-Ex8 card, which means they should have a max bandwidth of 20Gb/s= \n2GB/s, yet they are also listed as only supporting dual channel U320= \n640MB/s when they could easily support quad channel U320= \n1.28GB/s. Why bother building a PCI-Ex8 card when only a PCI-Ex4 \ncard (which is a more standard physical format) would've been \nenough? Or if you are going to build a PCI-Ex8 card, why not support \nquad channel U320? This smells like there's a problem with LSI's design.\n\nThe 128MB buffer also looks suspiciously small, and I do not see any \nupgrade path for it on LSI Logic's site. \"Serious\" RAID controllers \nfrom companies like Xyratex, Engino, and Dot-hill can have up to \n1-2GB of buffer, and there's sound technical reasons for it. See if \nthere's a buffer upgrade available or if you can get controllers that \nhave larger buffer capabilities.\n\nRegardless of the above, each of these controllers should still be \ngood for about 80-85% of 640MB/s, or ~510-540 MB/s apiece when doing \nraw sequential IO if you plug 3-4 fast enough HD's into each SCSI \nchannel. Cheetah 15K.4's certainly are fast enough. Optimal setup \nis probably to split each RAID 1 pair so that one HD is on each of \nthe SCSI channels, and then RAID 0 those pairs. 
That will also \nprotect you from losing the entire disk subsystem if one of the SCSI \nchannels dies.\n\nThat 128MB of buffer cache may very well be too small to keep the IO \nrate up, and/or there may be a more subtle problem with the LSI card, \nand/or you may have a configuration problem, but _something(s)_ need \nfixing since you are only getting raw sequential IO of ~100-150MB/s \nwhen it should be above 500MB/s.\n\nThis will make the most difference for initial reads (first time you \nload a table, first time you make a given query, etc) and for any writes.\n\nYour HW provider should be able to help you, even if some of the HW \nin question needs to be changed. You paid for a solution. As long \nas this stuff is performing at so much less then what it is supposed \nto, you have not received the solution you paid for.\n\nBTW, on the subject of RAID stripes IME the sweet spot tends to be in \nthe 64KB to 256KB range (very large, very read heavy data mines can \nwant larger RAID stripes.). Only experimentation will tell you what \nresults in the best performance for your application.\n\n\n>I'm not really worried about the writing, it's the reading the reading\n>that needs to be faster.\n\nInitial reads are only going to be as fast as your HD subsystem, so \nthere's a reason for making the HD subsystem faster even if all you \ncare about is reads. In addition, I'll repeat my previous advice \nthat upgrading to 16GB of RAM would be well worth it for you.\n\nHope this helps,\nRon Peacetree\n\n\n", "msg_date": "Sun, 21 Aug 2005 18:59:15 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: extremly low memory usage" } ]
[ { "msg_contents": "> Bill of Materials Traversal ( ~ 62k records).\n> \n> ISAM* pg 8.0 pg 8.1 devel delta 8.0->8.1\n> running time 63 sec 90 secs 71 secs 21%\n> cpu load 17% 45% 32% 29%\n> loadsecs** 10.71 40.5 22.72 44%\n> recs/sec 984 688 873\n> recs/loadsec 5882 1530 2728\n> \n> *ISAM is an anonymous commercial ISAM library in an optimized server\n> architecture (pg smokes the non-optimized flat file version).\n> **Loadsecs being seconds of CPU at 100% load.\n\nOne thing that might interest you is that the penalty in 8.1 for\nstats_command_string=true in this type of access pattern is very high: I\nwas experimenting to see if the new cpu efficiency gave me enough of a\nbudget to start using this. This more than doubled the cpu load to\naround 70% with a runtime of 82 seconds. This is actually worse than\n8.0 :(.\n\nThis *might* be a somewhat win32 specific issue. I've had issues with\nthe stats collector before. Anyways, the feature is a frill so it's not\na big deal.\n\nMerlin\n\n\n", "msg_date": "Mon, 22 Aug 2005 09:15:10 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> One thing that might interest you is that the penalty in 8.1 for\n> stats_command_string=true in this type of access pattern is very high: I\n> was experimenting to see if the new cpu efficiency gave me enough of a\n> budget to start using this. This more than doubled the cpu load to\n> around 70% with a runtime of 82 seconds. This is actually worse than\n> 8.0 :(.\n\nThat seems quite peculiar; AFAICS the pgstat code shouldn't be any\nslower than before. At first I thought it might be because we'd\nincreased PGSTAT_ACTIVITY_SIZE, but actually that happened before\n8.0 release, so it shouldn't be a factor in this comparison.\n\nCan anyone else confirm a larger penalty for stats_command_string in\nHEAD than in 8.0? A self-contained test case would be nice too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Aug 2005 10:17:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " } ]
[ { "msg_contents": "\nHello,\n\nI am running PostgreSQL 8.0.x on Solaris 10 AMD64. My Tablesize for this \ntest is about 80G. When I run a query on a column which is not indexed, I \nget a full table scan query and that's what I am testing right now. (I am \nartificially creating that scenario to improve that corner case). Aparently \nI find that the full query is running much slower compared to what hardware \ncan support and hence dug into DTrace to figure out where it is spending \nmost of its time.\n\nRunning a script (available on my blog) I find the following top 5 functions \nwhere it spends most time during a 10 second run of the script\n<PRE>\n Time in (millisec) \n Call Count\nMemoryContextSwitchTo 775 106564\nLockBuffer 707 \n 109367\nLWLockAcquire 440 \n 58888\nExecEvalConst 418 \n 53282\nResourceOwnerRememberBuffer 400 54684\nTransactionIdFollowsOrEquals 392 \n53281\n\n</PRE>\n\nWhile the times look pretty small (0.775 second out of 10seconds which is \nabout 7.75%), it still represents significant time since the table is pretty \nbig and the entire scan takes about 30 minute (about 80G big table). \nConsidering it is a single threaded single process scan all the hits of the \nfunction calls itself can delay the performance.\n\nMemoryContextSwitchTo and LockBuffer itself takes 15% of the total time of \nthe query. I was expecting \"read\" to be the slowest part (biggest component) \nbut it was way down in the 0.4% level.\n\nNow the question is why there are so many calls to MemoryContextSwitchTo in \na single SELECT query command? Can it be minimized?\n\nAlso is there any way to optimize LockBuffer?\n\nIs there anything else that can minimize the time spent in these calls \nitself? (Of course it is the first iteration but something else will be the \nbottleneck... but that's the goal).\n\nIf there are any hackers interested in tackling this problem let me know.\n\nThanks.\nRegards,\nJignesh\n\n\n", "msg_date": "Mon, 22 Aug 2005 09:47:38 -0400", "msg_from": "\"Jignesh Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "MemoryContextSwitchTo during table scan?" }, { "msg_contents": "\"Jignesh Shah\" <[email protected]> writes:\n> Running a script (available on my blog) I find the following top 5 functions \n> where it spends most time during a 10 second run of the script\n\nIt's pretty risky to draw conclusions from only 10 seconds' worth of\ngprof data --- that's only 1000 samples total at the common sampling\nrate of 100/sec. If there's one function eating 90% of the runtime,\nyou'll find out, but you don't have enough data to believe that you\nknow what is happening with resolution of a percent or so. I generally\ntry to accumulate several minutes worth of CPU time in a gprof run.\n\n> MemoryContextSwitchTo and LockBuffer itself takes 15% of the total time of \n> the query. I was expecting \"read\" to be the slowest part (biggest component) \n> but it was way down in the 0.4% level.\n\nYou do know that gprof counts only CPU time, and only user-space CPU\ntime at that? read() isn't going to show up at all. It's fairly likely\nthat your test case is I/O bound and that worrying about CPU efficiency\nfor it is a waste of time anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Aug 2005 11:41:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MemoryContextSwitchTo during table scan? 
" }, { "msg_contents": "Tom,\n\nOn 8/22/05 8:41 AM, \"Tom Lane\" <[email protected]> wrote:\n\n>> MemoryContextSwitchTo and LockBuffer itself takes 15% of the total time of\n>> the query. I was expecting \"read\" to be the slowest part (biggest component)\n>> but it was way down in the 0.4% level.\n> \n> You do know that gprof counts only CPU time, and only user-space CPU\n> time at that? read() isn't going to show up at all. It's fairly likely\n> that your test case is I/O bound and that worrying about CPU efficiency\n> for it is a waste of time anyway.\n\nHe's running DTRACE, a CPU profiler that uses hardware performance\nregisters, not gprof. BTW, if you statically link your app, you get\nprofiling information for system calls with gprof.\n\nJignesh has been analyzing PG for quite a while, there are definite issues\nwith CPU consuming functions in the data path IMO. This result he reported\nis one of them on Solaris 10, and possibly on other platforms.\n\nWe are limited to about 130MB/s of I/O throughput for sequential scan on\nplatforms that can do 240MB/s of sequential 8k block I/O. Though I haven't\nprofiled seqscan, I'd look at Jignesh's results carefully because they\ncorrelate with our experience.\n\n- Luke\n\n\n", "msg_date": "Mon, 22 Aug 2005 10:46:33 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MemoryContextSwitchTo during table scan?" }, { "msg_contents": "Jignesh,\n\n> Also is there any way to optimize LockBuffer?\n\nYes, test on 8.1. The buffer manager was re-written for 8.1. You should \nsee a decrease in both LockBuffer and context switch activity.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 22 Aug 2005 12:12:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MemoryContextSwitchTo during table scan?" }, { "msg_contents": "\nHi Tom,\n\nLike I mentioned I am using DTrace on Solaris 10 x64 and not gprof.\nDTrace is not based on sampling but actual entry/exit point. Ofcourse my 10 \nsecond profile is just a sample that I can assure you is representative of \nthe query since it is a very simple query that does simple table scan. (I am \ntaken profiles at different times of the queries all giving similar outputs)\n\nIn case of DTrace I am actually measuring \"wall clock\" for leaf functions.\n\nFor more information on DTrace please refer to:\nhttp://docs.sun.com/app/docs/doc/817-6223/6mlkidlf1?a=view\n\nRegards,\nJignesh\n\n\n----Original Message Follows----\nFrom: Tom Lane <[email protected]>\nTo: \"Jignesh Shah\" <[email protected]>\nCC: [email protected]\nSubject: Re: [PERFORM] MemoryContextSwitchTo during table scan?\nDate: Mon, 22 Aug 2005 11:41:40 -0400\n\n\"Jignesh Shah\" <[email protected]> writes:\n > Running a script (available on my blog) I find the following top 5 \nfunctions\n > where it spends most time during a 10 second run of the script\n\nIt's pretty risky to draw conclusions from only 10 seconds' worth of\ngprof data --- that's only 1000 samples total at the common sampling\nrate of 100/sec. If there's one function eating 90% of the runtime,\nyou'll find out, but you don't have enough data to believe that you\nknow what is happening with resolution of a percent or so. I generally\ntry to accumulate several minutes worth of CPU time in a gprof run.\n\n > MemoryContextSwitchTo and LockBuffer itself takes 15% of the total time \nof\n > the query. 
I was expecting \"read\" to be the slowest part (biggest \ncomponent)\n > but it was way down in the 0.4% level.\n\nYou do know that gprof counts only CPU time, and only user-space CPU\ntime at that? read() isn't going to show up at all. It's fairly likely\nthat your test case is I/O bound and that worrying about CPU efficiency\nfor it is a waste of time anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Aug 2005 18:37:21 -0400", "msg_from": "\"Jignesh Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MemoryContextSwitchTo during table scan?" }, { "msg_contents": "Jignesh Shah wrote:\n> Now the question is why there are so many calls to MemoryContextSwitchTo \n> in a single SELECT query command? Can it be minimized?\n\nI agree with Tom -- if profiling indicates that MemoryContextSwitchTo() \nis the bottleneck, I would be suspicious that your profiling setup is \nmisconfigured. MemoryContextSwitchTo() is essentially a function call, \ntwo pointer assignments, and a function return. Try rerunning the test \nwith current sources -- MemoryContextSwitchTo() is now inlined when \nusing GCC, which should just leave the assignments.\n\n-Neil\n", "msg_date": "Mon, 22 Aug 2005 18:48:18 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MemoryContextSwitchTo during table scan?" } ]
[ { "msg_contents": "> That seems quite peculiar; AFAICS the pgstat code shouldn't be any\n> slower than before. At first I thought it might be because we'd\n> increased PGSTAT_ACTIVITY_SIZE, but actually that happened before\n> 8.0 release, so it shouldn't be a factor in this comparison.\n\nJust FYI the last time I looked at stats was in the 8.0 beta period.\n \n> Can anyone else confirm a larger penalty for stats_command_string in\n> HEAD than in 8.0? A self-contained test case would be nice too.\n\nlooking into it.\n\nMerlin\n\n\n", "msg_date": "Mon, 22 Aug 2005 13:18:28 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck " } ]
[ { "msg_contents": "I am looking for the latest pgbench and documentation.\n\nIf someone know where I can locate them it would save a lot of search time.\n\nThanks\n\nPhilip Pinkerton\nTPC-C Benchmarks Sybase\nIndependant Consultant\nRio de Janeiro, RJ, Brazil 22031-010\n\n\n", "msg_date": "Mon, 22 Aug 2005 22:15:30 -0300", "msg_from": "Philip Pinkerton <[email protected]>", "msg_from_op": true, "msg_subject": "pgbench" }, { "msg_contents": "Phillip,\n\n> I am looking for the latest pgbench and documentation.\n\nCurrently they are packaged with the PostgreSQL source code.\n\nHowever, if you're looking for a serious benchmark, may I suggest OSDL's DBT2? \nIt's substantially similar to TPC-C.\nhttp://sourceforge.net/projects/osdldbt\n\nWhat's your interest in benchmarking PostgreSQL, BTW? \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 22 Aug 2005 22:01:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench" }, { "msg_contents": "pgbench is located in the contrib directory of any source tarball, \nalong with a README that serves as documentation.\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n\nOn Aug 22, 2005, at 8:15 PM, Philip Pinkerton wrote:\n\n> I am looking for the latest pgbench and documentation.\n>\n> If someone know where I can locate them it would save a lot of \n> search time.\n>\n> Thanks\n>\n> Philip Pinkerton\n> TPC-C Benchmarks Sybase\n> Independant Consultant\n> Rio de Janeiro, RJ, Brazil 22031-010\n", "msg_date": "Tue, 23 Aug 2005 08:52:46 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench" } ]
[ { "msg_contents": "Hello all,\n\nwhat are unused item pointers and how do I get rid of them?\n\nWe have a fairly large table which is vacuumed daily and reindexed every \nweekend.\n\nNFO: vacuuming \"public.tbltimeseries\"\nINFO: index \"idx_timeseries\" now contains 26165807 row versions in \n151713 pages\nDETAIL: 8610108 index row versions were removed.\n58576 index pages have been deleted, 36223 are currently reusable.\nCPU 6.36s/18.46u sec elapsed 263.75 sec.\nINFO: \"tbltimeseries\": removed 8610108 row versions in 500766 pages\nDETAIL: CPU 37.07s/29.76u sec elapsed 826.82 sec.\nINFO: \"tbltimeseries\": found 8610108 removable, 26165807 nonremovable \nrow versions in 5744789 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 235555635 unused item pointers.\n0 pages are entirely empty.\nCPU 119.13s/61.09u sec elapsed 2854.22 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2361976783\"\nINFO: index \"pg_toast_2361976783_index\" now contains 24749150 row \nversions in 108975 pages\nDETAIL: 5857243 index row versions were removed.\n33592 index pages have been deleted, 16007 are currently reusable.\nCPU 4.15s/13.53u sec elapsed 78.56 sec.\nINFO: \"pg_toast_2361976783\": removed 5857243 row versions in 1125801 pages\nDETAIL: CPU 82.62s/69.48u sec elapsed 1571.43 sec.\nINFO: \"pg_toast_2361976783\": found 5857243 removable, 24749150 \nnonremovable row versions in 10791766 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 33395357 unused item pointers.\n0 pages are entirely empty.\nCPU 235.46s/105.91u sec elapsed 4458.31 sec.\nINFO: \"pg_toast_2361976783\": truncated 10791766 to 10778290 pages\nDETAIL: CPU 0.21s/0.07u sec elapsed 7.09 sec.\nINFO: analyzing \"public.tbltimeseries\"\nINFO: \"tbltimeseries\": scanned 150000 of 5744789 pages, containing \n691250 live rows and 0 dead rows; 150000 rows in sample, 26473903 \nestimated total rows\n\nas you can see we have 235M unused item pointers in the main table and a \nfew 10's of millions more in other associated tables. \n\nPlease note that the advice \"vacuum more often\" is a non-starter as the \ntotal time here is already about 3 hours and this is just one table. \nThis is a fairly active table to which about 20M rows are added and \nremoved daily.\n\nThe free space map is set at 11M pages and just today we popped up over \nthat amount in the vacuum output. I don't think this is an issue here \nthough as the large number of unused item pointers has been present for \na while.\n\nThanks!\n\n-- Alan\n", "msg_date": "Mon, 22 Aug 2005 22:51:28 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "unused item pointers?" }, { "msg_contents": "Alan Stange <[email protected]> writes:\n> INFO: \"tbltimeseries\": found 8610108 removable, 26165807 nonremovable \n> row versions in 5744789 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 235555635 unused item pointers.\n> 0 pages are entirely empty.\n\nThe item pointers themselves are not very interesting --- at 4 bytes\napiece, they're only accounting for 2% of the table space. However\nthe fact that there are nearly 10x more unused than used ones suggests\nthat this table is suffering a pretty serious bloat problem. Assuming\nconstant-width rows in the table, that implies something like 90% of\nthe space in the table is unused. (contrib/pgstattuple might be useful\nto confirm this estimate.)\n\nVACUUM FULL, or perhaps better CLUSTER, would get you out of that. 
And\nyes, you will need to vacuum more often afterwards if you want to keep\nthe bloat under control.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Aug 2005 00:42:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unused item pointers? " }, { "msg_contents": "On Mon, 2005-08-22 at 22:51 -0400, Alan Stange wrote:\n> Hello all,\n> \n> what are unused item pointers and how do I get rid of them?\n> \n> We have a fairly large table which is vacuumed daily and reindexed every \n> weekend.\n\n> as you can see we have 235M unused item pointers in the main table and a \n> few 10's of millions more in other associated tables. \n> \n> Please note that the advice \"vacuum more often\" is a non-starter as the \n> total time here is already about 3 hours and this is just one table. \n> This is a fairly active table to which about 20M rows are added and \n> removed daily.\n\nThat may be so, but the answer is still to VACUUM more often. Try the\nautovacuum. If it takes 3 hours with 90% wasted records, it would only\ntake 20 minutes when running properly.\n\nYou might be able to change your application to avoid generating so many\ndead rows. For example, test before insert so you don't make a dead\ntuple on duplicate insert.\n\nTo repair this table, you can try VACUUM FULL but this is likely to take\nlonger than you find reasonable. I would recommend dump and reload.\n\n-jwb\n\n", "msg_date": "Mon, 22 Aug 2005 22:15:08 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unused item pointers?" } ]
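A minimal sketch of the repair path suggested above, using the table and index names from the quoted VACUUM output. pgstattuple lives in contrib and its exact output columns vary by version; CLUSTER takes an exclusive lock on the table and needs roughly the live-data size in free disk space:

    -- confirm how much of the table is dead or unused space
    SELECT * FROM pgstattuple('tbltimeseries');

    -- rewrite the table in index order, reclaiming the wasted space
    CLUSTER idx_timeseries ON tbltimeseries;

    -- refresh planner statistics afterwards
    ANALYZE tbltimeseries;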
[ { "msg_contents": "I've read that indexes aren't used for COUNT(*) and I've noticed (7.3.x)\nwith EXPLAIN that indexes never seem to be used on empty tables - is\nthere any reason to have indexes on empty tables, or will postgresql\nnever use them.\n \nThis is not as silly as it sounds - with table inheritance you might\nhave table children with the data and a parent that is empty. It'd be\nnice to make sure postgresql knows to never really look at the parent -\nespecially is you don't know the names of all the children ..\n \nThoughts ?\n \nthx,\n Rohan\n\n\n\n\n\nI've read that \nindexes aren't used for COUNT(*) and I've noticed (7.3.x) with EXPLAIN that \nindexes never seem to be used on empty tables - is there any reason to have \nindexes on empty tables, or will postgresql never use them.\n \nThis is not as silly \nas it sounds - with table inheritance you might have table children with the \ndata and a parent that is empty.  It'd be nice to make sure postgresql \nknows to never really look at the parent - especially is you don't know the \nnames of all the children ..\n \nThoughts \n?\n \nthx,\n  \nRohan", "msg_date": "Tue, 23 Aug 2005 13:41:32 +1000", "msg_from": "\"Lenard, Rohan (Rohan)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Need indexes on empty tables for good performance ?" }, { "msg_contents": "On Tue, Aug 23, 2005 at 13:41:32 +1000,\n \"Lenard, Rohan (Rohan)\" <[email protected]> wrote:\n> I've read that indexes aren't used for COUNT(*) and I've noticed (7.3.x)\n> with EXPLAIN that indexes never seem to be used on empty tables - is\n> there any reason to have indexes on empty tables, or will postgresql\n> never use them.\n\ncount will use indexes if appropiate. The counts themselves are NOT in the\nindexes, so counts of significant fractions of a table (in particular\nof the whole table) won't benefit from indexes.\n\nYou aren't going to get query speed ups by putting indexes on empty tables.\nHowever, they may be required if you have unique or primary keys declared\nin the table. You may want them to enforce some kinds of constraints.\n", "msg_date": "Sat, 27 Aug 2005 01:19:49 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need indexes on empty tables for good performance ?" }, { "msg_contents": "Lenard, Rohan (Rohan) wrote:\n\n> I've read that indexes aren't used for COUNT(*) and I've noticed \n> (7.3.x) with EXPLAIN that indexes never seem to be used on empty \n> tables - is there any reason to have indexes on empty tables, or will \n> postgresql never use them.\n\nYou could add a row, vacuum analyze, delete the row, etc.... Then you \nare fine until you vacuum analyze again ;-)\n\nThis is a feature designed to prevent really bad plans when you are \nloading tables with data. However, you are right. It can create bad \nplans sometimes.\n\nAny chance one can eventually come up with a way to tell the planner \nthat an empty table is expected not to grow? Otherwise, I can see \nnightmares in a data warehouse environment where you have an empty \nparent table...\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting", "msg_date": "Fri, 26 Aug 2005 23:34:23 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need indexes on empty tables for good performance ?" 
}, { "msg_contents": "Rohan,\n\nYou should note that in Postgres, indexes are not inherited by child \ntables.\n\nAlso, it seems difficult to select from a child table whose name you \ndon't know unless you access the parent. And if you are accessing the \ndata via the parent, I'm reasonably certain that you will find that \nindexes aren't used (even if they exist on the children) as a result \nof the way the children are accessed.\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n\nOn Aug 22, 2005, at 10:41 PM, Lenard, Rohan (Rohan) wrote:\n\n> I've read that indexes aren't used for COUNT(*) and I've noticed \n> (7.3.x) with EXPLAIN that indexes never seem to be used on empty \n> tables - is there any reason to have indexes on empty tables, or \n> will postgresql never use them.\n>\n> This is not as silly as it sounds - with table inheritance you \n> might have table children with the data and a parent that is \n> empty. It'd be nice to make sure postgresql knows to never really \n> look at the parent - especially is you don't know the names of all \n> the children ..\n>\n> Thoughts ?\n>\n> thx,\n> Rohan\n\n\nRohan,You should note that in Postgres, indexes are not inherited by child tables.Also, it seems difficult to select from a child table whose name you don't know unless you access the parent. And if you are accessing the data via the parent, I'm reasonably certain that you will find that indexes aren't used (even if they exist on the children) as a result of the way the children are accessed. --Thomas F. O'ConnellCo-Founder, Information ArchitectSitening, LLCStrategic Open Source: Open Your i™http://www.sitening.com/110 30th Avenue North, Suite 6Nashville, TN 37203-6320615-469-5150615-469-5151 (fax) On Aug 22, 2005, at 10:41 PM, Lenard, Rohan (Rohan) wrote: I've read that indexes aren't used for COUNT(*) and I've noticed (7.3.x) with EXPLAIN that indexes never seem to be used on empty tables - is there any reason to have indexes on empty tables, or will postgresql never use them.   This is not as silly as it sounds - with table inheritance you might have table children with the data and a parent that is empty.  It'd be nice to make sure postgresql knows to never really look at the parent - especially is you don't know the names of all the children ..   Thoughts ?   thx,   Rohan", "msg_date": "Mon, 29 Aug 2005 15:15:21 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need indexes on empty tables for good performance ?" } ]
[ { "msg_contents": "Hi.\n\nThe company that I'm working for are surveying the djungle of DBMS\nsince we are due to implement the next generation of our system.\n\nThe companys buissnes is utilizing the DBMS to store data that are\naccessed trough the web at daytime (only SELECTs, sometimes with joins,\netc). The data is a collection of bjects that are for sale. The data\nconsists of basic text information about theese togheter with some\ngroup information, etc.\n\nThe data is updated once every night.\n\nThere are about 4 M posts in the database (one table) and is expected\nto grow with atleast 50% during a reasonable long time.\n\nHow well would PostgreSQL fit our needs?\n\nWe are using Pervasive SQL today and suspect that it is much to small.\nWe have some problems with latency. Esp. when updating information,\ncomplicated conditions in selects and on concurrent usage.\n\n\nBest Regards\nRobert Bengtsson\nProject Manager\n\n", "msg_date": "23 Aug 2005 05:14:24 -0700", "msg_from": "\"tobbe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance for relative large DB" }, { "msg_contents": "\"tobbe\" <[email protected]> writes:\n> The company that I'm working for are surveying the djungle of DBMS\n> since we are due to implement the next generation of our system.\n>\n> The companys buissnes is utilizing the DBMS to store data that are\n> accessed trough the web at daytime (only SELECTs, sometimes with joins,\n> etc). The data is a collection of bjects that are for sale. The data\n> consists of basic text information about theese togheter with some\n> group information, etc.\n>\n> The data is updated once every night.\n\nHow much data is updated per night? The whole 4M \"posts\"? Or just\nsome subset?\n\n> There are about 4 M posts in the database (one table) and is\n> expected to grow with atleast 50% during a reasonable long time.\n\nSo you're expecting to have ~6M entries in the 'posts' table?\n\n> How well would PostgreSQL fit our needs?\n>\n> We are using Pervasive SQL today and suspect that it is much to small.\n> We have some problems with latency. Esp. when updating information,\n> complicated conditions in selects and on concurrent usage.\n\nIf you're truly updating all 4M/6M rows each night, *that* would turn\nout to be something of a bottleneck, as every time you update a tuple,\nthis creates a new copy, leaving the old one to be later cleaned away\nvia VACUUM.\n\nThat strikes me as unlikely: I expect instead that you update a few\nthousand or a few tens of thousands of entries per day, in which case\nthe \"vacuum pathology\" won't be a problem.\n\nI wouldn't expect PostgreSQL to be \"too small;\" it can and does cope\nwell with complex queries. \n\nAnd the use of MVCC allows there to be a relatively minimal amount of\nlocking done even though there may be a lot of concurrent users, the\nparticular merit there being that you can essentially eliminate most\nread locks. That is, you can get consistent reports without having to\nlock rows or tables.\n\nOne table with millions of rows isn't that complex a scenario :-).\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/spiritual.html\nAppendium to the Rules of the Evil Overlord #1: \"I will not build\nexcessively integrated security-and-HVAC systems. 
They may be Really\nCool, but are far too vulnerable to breakdowns.\"\n", "msg_date": "Tue, 23 Aug 2005 11:12:51 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for relative large DB" }, { "msg_contents": "Hi Chris.\n\nThanks for the answer.\nSorry that i was a bit unclear.\n\n1) We update around 20.000 posts per night.\n\n2) What i meant was that we suspect that the DBMS called PervasiveSQL\nthat we are using today is much to small. That's why we're looking for\nalternatives.\n\nToday we base our solution much on using querry-specific tables created\nat night, so instead of doing querrys direct on the \"post\" table (with\n4-6M rows) at daytime, we have the data pre-aligned in several much\nsmaller tables. This is just to make the current DBMS coop with our\namount of data.\n\nWhat I am particulary interested in is if we can expect to run all our\nselect querrys directly from the \"post\" table with PostgreSQL.\n\n3) How well does postgres work with load balancing environments. Is it\nbuilt-in?\n\nBest Regards\nRobert Bengtsson\nProject Manager\n\n", "msg_date": "23 Aug 2005 23:25:02 -0700", "msg_from": "\"tobbe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance for relative large DB" }, { "msg_contents": "\"tobbe\" <[email protected]> writes:\n> Hi Chris.\n>\n> Thanks for the answer.\n> Sorry that i was a bit unclear.\n>\n> 1) We update around 20.000 posts per night.\n\nNo surprise there; I would have been surprised to see 100/nite or\n6M/nite...\n\n> 2) What i meant was that we suspect that the DBMS called PervasiveSQL\n> that we are using today is much to small. That's why we're looking for\n> alternatives.\n>\n> Today we base our solution much on using querry-specific tables created\n> at night, so instead of doing querrys direct on the \"post\" table (with\n> 4-6M rows) at daytime, we have the data pre-aligned in several much\n> smaller tables. This is just to make the current DBMS coop with our\n> amount of data.\n>\n> What I am particulary interested in is if we can expect to run all our\n> select querrys directly from the \"post\" table with PostgreSQL.\n\nGiven a decent set of indices, I'd expect that to work OK... Whether\n4M or 6M rows, that's pretty moderate in size.\n\nIf there are specific states that rows are in which are \"of interest,\"\nthen you can get big wins out of having partial indices... 
Consider...\n\ncreate index partial_post_status on posts where status in ('Active', 'Pending', 'Locked');\n-- When processing of postings are completely finished, they wind up with 'Closed' status\n\nWe have some 'stateful' tables in our environment where the\ninteresting states are 'P' (where work is \"pending\") and 'C' (where\nall the work has been completed and the records are never of interest\nagain except as ancient history); the partial index \"where status =\n'P'\" winds up being incredibly helpful.\n\nIt's worth your while to dump data out from Pervasive and load it into\na PostgreSQL instance and to do some typical sorts of queries on the\nPostgreSQL side.\n\nDo \"EXPLAIN ANALYZE [some select statement];\" and you'll get a feel\nfor how PostgreSQL is running the queries.\n\nFiddling with indices to see how that affects things will also be a\nbig help.\n\nYou may find there are columns with large cardinalities (quite a lot\nof unique values) where you want to improve the stats analysis via...\n\n alter posts alter column [whatever] set statistics 100; \n -- Default is 10 bins\n analyze posts; \n -- then run ANALYZE to update statistics\n\n> 3) How well does postgres work with load balancing environments. Is\n> it built-in?\n\nLoad balancing means too many things. Can you be more specific about\nwhat you consider it to mean?\n\nFor Internet registry operations, we use replication (Slony-I) to\ncreate replicas used to take particular sorts of load off the \"master\"\nsystems.\n\nBut you might be referring to something else...\n\nFor instance, connection pools, whether implemented inside\napplications (everyone doing Java has one or more favorite Java\nconnection pool implementations) or in web servers (Apache has a DB\nconnection pool manager) or in an outside application (pgpool, a\nC-based connection pool manager) are also sometimes used for load\nbalancing.\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nIn case you weren't aware, \"ad homineum\" is not latin for \"the user of\nthis technique is a fine debater.\" -- Thomas F. Burdick\n", "msg_date": "Wed, 24 Aug 2005 12:34:51 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for relative large DB" }, { "msg_contents": "tobbe wrote:\n\n>Hi Chris.\n>\n>Thanks for the answer.\n>Sorry that i was a bit unclear.\n>\n>1) We update around 20.000 posts per night.\n>\n>2) What i meant was that we suspect that the DBMS called PervasiveSQL\n>that we are using today is much to small. That's why we're looking for\n>alternatives.\n>\n>Today we base our solution much on using querry-specific tables created\n>at night, so instead of doing querrys direct on the \"post\" table (with\n>4-6M rows) at daytime, we have the data pre-aligned in several much\n>smaller tables. This is just to make the current DBMS coop with our\n>amount of data.\n>\n>What I am particulary interested in is if we can expect to run all our\n>select querrys directly from the \"post\" table with PostgreSQL.\n> \n>\n20k transactions per day? Doesn't seem too bad. That amounts to how \nmany transactions per second during peak times? Personally I don't \nthink it will be a problem, but you might want to clarify what sort of \nload you are expecting during its peak time.\n\n>3) How well does postgres work with load balancing environments. 
Is it\n>built-in?\n> \n>\nThere is no load balancing \"built in.\" You would need to use Slony-I \nand possibly Pg-Pool for that. I don't know about Pg-Pool, but Slony-I \nwas written in large part by member(s?) of the core development team so \neven if it is not \"built in\" it is not as if it is a team of outsiders \nwho wrote it. \n\nIf you need something proprietary, there are similar solutions with \nreplication built in which are based on PostgreSQL and licensed under \nproprietary licenses.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Fri, 26 Aug 2005 23:30:08 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for relative large DB" }, { "msg_contents": "On Tue, Aug 23, 2005 at 11:25:02PM -0700, tobbe wrote:\n> Hi Chris.\n> \n> Thanks for the answer.\n> Sorry that i was a bit unclear.\n> \n> 1) We update around 20.000 posts per night.\nDoesn't seem like a lot at all.\n\n> 2) What i meant was that we suspect that the DBMS called PervasiveSQL\n> that we are using today is much to small. That's why we're looking for\n> alternatives.\n\nJust so no one gets confused, PervasiveSQL is our Btrieve-based\ndatabase; it has nothing to do with Pervasive Posgres or PosgreSQL.\nAlso, feel free to contact me off-list if you'd like our help with this.\n\n> Today we base our solution much on using querry-specific tables created\n> at night, so instead of doing querrys direct on the \"post\" table (with\n> 4-6M rows) at daytime, we have the data pre-aligned in several much\n> smaller tables. This is just to make the current DBMS coop with our\n> amount of data.\n> \n> What I am particulary interested in is if we can expect to run all our\n> select querrys directly from the \"post\" table with PostgreSQL.\n\nProbably, depending on what those queries are, what hardware you have\nand how the table is laid out. Unless you've got a really high query\nload I suspect you could handle this on some fairly mundane hardware...\n\n> 3) How well does postgres work with load balancing environments. Is it\n> built-in?\n\nAs Chris said, there is no built-in solution. PGCluster\n(http://pgfoundry.org/projects/pgcluster/) is a possible solution should\nyou need clustering/load balancing, but as I mentioned I suspect you\nshould be ok without it.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Mon, 29 Aug 2005 16:09:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for relative large DB" } ]
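Pulling Chris Browne's suggestions together into one runnable sketch: 'posts' and the status values come from his example, while the column name 'category_id' and the literal values are made up for illustration.

    -- partial index covering only the states the application still queries
    CREATE INDEX partial_post_status ON posts (status)
        WHERE status IN ('Active', 'Pending', 'Locked');

    -- widen the statistics target on a high-cardinality column, then re-analyze
    ALTER TABLE posts ALTER COLUMN category_id SET STATISTICS 100;
    ANALYZE posts;

    -- check which plan the optimizer actually chooses
    EXPLAIN ANALYZE
    SELECT * FROM posts WHERE status = 'Active' AND category_id = 42;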
[ { "msg_contents": "Dear Gurus,\n\nSystem: Debian \"Woody\" 2.4.28\nVersion: PostgreSQL 7.4.8\n\nI have a join which causes a better hash if I provide a \"trivial\" condition:\nWHERE m.nap > '1900-01-01'::date\nThis is a date field with a minimum of '2005-06-21'. However, if I omit this \ncondition from the WHERE clause, I get a far worse plan. There's also \nsomething not quite right in the cost tuning part of the config file, but I \n*think* it shouldn't cause such a bad plan.\n\nExplain analyze times:\nWith fake condition: 1104 msec\nWithout it: 11653 msec\nWithout, mergejoin disabled: 5776 msec\n\nFor full query and plans, see below. The operator \"!=@\" is the nonequity \noperator extended so that it treats NULL as a one-element equivalence class, \nthus never returning NULL. (NULL !=@ NULL is false, NULL !=@ \"anything else\" \nis true)\n\n1. What may be the cause that this \"obvious\" condition causes a far better \nhash plan than the one without it, even while mergejoin is disabled?\n\n2. What may be the cause that the planner favors mergejoin to hashjoin? \nusually a sign of too high/too low random page cost, for example? I'm \nwilling to provide config options if it helps.\n\nTIA,\n\n--\nG.\n\n-------------- the query with fake condition (m.nap>=...) --------------\nexplain analyze\nSELECT DISTINCT\n mv.az, mv.vonalkod, mv.idopont, mv.muszakhely as mvhely,\n mv.muszaknap as mvnap, mv.muszakkod as mvmkod,\n m.hely, m.nap, m.muszakkod as mkod, m.tol, m.ig\nFROM muvelet_vonalkod mv\n left join olvaso_hely oh on (oh.olvaso_nev = mv.olvaso_nev\n\tand oh.tol <= mv.idopont and mv.idopont < oh.ig)\n left join muszak m on (oh.hely = m.hely\n\tand m.tol <= mv.idopont and mv.idopont < m.ig)\n , muvelet_vonalkod_ny ny\nwhere mv.az = ny.muvelet_vonalkod\n and ny.idopont >= now()-1\n and m.nap >= '1900-01-01'::date\n and ([email protected] or [email protected]\n\tor [email protected]);\n\n\n-------------- best plan with fake condition --------------\n Unique (cost=6484.22..6826.73 rows=11417 width=75) (actual \ntime=1103.870..1103.872 rows=1 loops=1)\n -> Sort (cost=6484.22..6512.76 rows=11417 width=75) (actual \ntime=1103.867..1103.868 rows=1 loops=1)\n Sort Key: mv.az, mv.vonalkod, mv.idopont, mv.muszakhely, \nmv.muszaknap, mv.muszakkod, m.hely, m.nap, m.muszakkod, m.tol, m.ig\n -> Hash Join (cost=1169.78..5434.78 rows=11417 width=75) (actual \ntime=1075.836..1103.835 rows=1 loops=1)\n Hash Cond: (\"outer\".hely = \"inner\".hely)\n Join Filter: ((\"inner\".tol <= \"outer\".idopont) AND \n(\"outer\".idopont < \"inner\".ig) AND (CASE WHEN ((\"outer\".muszakhely IS NULL) \nAND (\"inner\".hely IS NULL)) THEN false ELSE CASE WHEN ((\"outer\".muszakhely \nIS NULL) AND (\"inner\".hely IS NOT NULL)) THEN true ELSE CASE WHEN \n((\"outer\".muszakhely IS NOT NULL) AND (\"inner\".hely IS NULL)) THEN true ELSE \n(\"outer\".muszakhely <> \"inner\".hely) END END END OR CASE WHEN \n((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NULL)) THEN false ELSE CASE \nWHEN ((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NOT NULL)) THEN true \nELSE CASE WHEN ((\"outer\".muszaknap IS NOT NULL) AND (\"inner\".nap IS NULL)) \nTHEN true ELSE (\"outer\".muszaknap <> \"inner\".nap) END END END OR CASE WHEN \n((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NULL)) THEN false \nELSE CASE WHEN ((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NOT \nNULL)) THEN true ELSE CASE WHEN ((\"outer\".muszakkod IS NOT NULL) AND \n(\"inner\".muszakkod IS NULL)) THEN true ELSE (\"outer\".muszakkod <> 
\n\"inner\".muszakkod) END END END))\n -> Hash Join (cost=1167.65..2860.48 rows=1370 width=51) \n(actual time=533.035..741.211 rows=3943 loops=1)\n Hash Cond: (\"outer\".muvelet_vonalkod = \"inner\".az)\n -> Index Scan using muvelet_vonalkod_ny_idopont on \nmuvelet_vonalkod_ny ny (cost=0.00..1351.88 rows=24649 width=4) (actual \ntime=0.161..10.735 rows=3943 loops=1)\n Index Cond: (idopont >= (now() - \n('00:00:00'::interval + ('1 days'::text)::interval)))\n -> Hash (cost=1124.61..1124.61 rows=3618 width=51) \n(actual time=532.703..532.703 rows=0 loops=1)\n -> Nested Loop (cost=0.00..1124.61 rows=3618 \nwidth=51) (actual time=0.209..443.765 rows=61418 loops=1)\n -> Seq Scan on olvaso_hely oh \n(cost=0.00..1.01 rows=1 width=28) (actual time=0.031..0.036 rows=1 loops=1)\n -> Index Scan using muvelet_vonalkod_pk2 \non muvelet_vonalkod mv (cost=0.00..1060.30 rows=3617 width=55) (actual \ntime=0.162..244.158 rows=61418 loops=1)\n Index Cond: \n(((\"outer\".olvaso_nev)::text = (mv.olvaso_nev)::text) AND (\"outer\".tol <= \nmv.idopont) AND (mv.idopont < \"outer\".ig))\n -> Hash (cost=1.94..1.94 rows=75 width=28) (actual \ntime=0.333..0.333 rows=0 loops=1)\n -> Seq Scan on muszak m (cost=0.00..1.94 rows=75 \nwidth=28) (actual time=0.070..0.230 rows=73 loops=1)\n Filter: (nap >= '2001-01-01'::date)\n Total runtime: 1104.244 ms\n(19 rows)\n\n\n-------------- mergejoin disabled, no fake condition --------------\n\n Unique (cost=256601.12..262763.39 rows=205409 width=75) (actual \ntime=5776.476..5776.479 rows=1 loops=1)\n -> Sort (cost=256601.12..257114.64 rows=205409 width=75) (actual \ntime=5776.472..5776.472 rows=1 loops=1)\n Sort Key: mv.az, mv.vonalkod, mv.idopont, mv.muszakhely, \nmv.muszaknap, mv.muszakkod, m.hely, m.nap, m.muszakkod, m.tol, m.ig\n -> Hash Join (cost=132547.25..228451.03 rows=205409 width=75) \n(actual time=5733.661..5776.428 rows=1 loops=1)\n Hash Cond: (\"outer\".muvelet_vonalkod = \"inner\".az)\n -> Index Scan using muvelet_vonalkod_ny_idopont on \nmuvelet_vonalkod_ny ny (cost=0.00..1351.88 rows=24649 width=4) (actual \ntime=0.179..8.578 rows=3940 loops=1)\n Index Cond: (idopont >= (now() - ('00:00:00'::interval \n+ ('1 days'::text)::interval)))\n -> Hash (cost=124566.75..124566.75 rows=542600 width=75) \n(actual time=5697.192..5697.192 rows=0 loops=1)\n -> Hash Left Join (cost=2.95..124566.75 rows=542600 \nwidth=75) (actual time=33.430..5689.636 rows=484 loops=1)\n Hash Cond: (\"outer\".hely = \"inner\".hely)\n Join Filter: ((\"inner\".tol <= \"outer\".idopont) \nAND (\"outer\".idopont < \"inner\".ig))\n Filter: (CASE WHEN ((\"outer\".muszakhely IS NULL) \nAND (\"inner\".hely IS NULL)) THEN false ELSE CASE WHEN ((\"outer\".muszakhely \nIS NULL) AND (\"inner\".hely IS NOT NULL)) THEN true ELSE CASE WHEN \n((\"outer\".muszakhely IS NOT NULL) AND (\"inner\".hely IS NULL)) THEN true ELSE \n(\"outer\".muszakhely <> \"inner\".hely) END END END OR CASE WHEN \n((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NULL)) THEN false ELSE CASE \nWHEN ((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NOT NULL)) THEN true \nELSE CASE WHEN ((\"outer\".muszaknap IS NOT NULL) AND (\"inner\".nap IS NULL)) \nTHEN true ELSE (\"outer\".muszaknap <> \"inner\".nap) END END END OR CASE WHEN \n((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NULL)) THEN false \nELSE CASE WHEN ((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NOT \nNULL)) THEN true ELSE CASE WHEN ((\"outer\".muszakkod IS NOT NULL) AND \n(\"inner\".muszakkod IS NULL)) THEN true ELSE (\"outer\".muszakkod <> 
\n\"inner\".muszakkod) END END END)\n -> Hash Left Join (cost=1.01..2317.03 \nrows=65112 width=51) (actual time=0.462..542.361 rows=61465 loops=1)\n Hash Cond: ((\"outer\".olvaso_nev)::text = \n(\"inner\".olvaso_nev)::text)\n Join Filter: ((\"inner\".tol <= \n\"outer\".idopont) AND (\"outer\".idopont < \"inner\".ig))\n -> Seq Scan on muvelet_vonalkod mv \n(cost=0.00..1502.12 rows=65112 width=55) (actual time=0.028..123.649 \nrows=61465 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 \nwidth=28) (actual time=0.045..0.045 rows=0 loops=1)\n -> Seq Scan on olvaso_hely oh \n(cost=0.00..1.01 rows=1 width=28) (actual time=0.031..0.033 rows=1 loops=1)\n -> Hash (cost=1.75..1.75 rows=75 width=28) \n(actual time=0.319..0.319 rows=0 loops=1)\n -> Seq Scan on muszak m (cost=0.00..1.75 \nrows=75 width=28) (actual time=0.067..0.215 rows=73 loops=1)\n Total runtime: 5776.778 ms\n(21 rows)\n\n\n-------------- mergejoin enabled, no fake condition --------------\n\n Unique (cost=210234.71..216396.98 rows=205409 width=75) (actual \ntime=11652.868..11652.870 rows=1 loops=1)\n -> Sort (cost=210234.71..210748.24 rows=205409 width=75) (actual \ntime=11652.865..11652.865 rows=1 loops=1)\n Sort Key: mv.az, mv.vonalkod, mv.idopont, mv.muszakhely, \nmv.muszaknap, mv.muszakkod, m.hely, m.nap, m.muszakkod, m.tol, m.ig\n -> Merge Join (cost=3152.69..182084.63 rows=205409 width=75) \n(actual time=11408.433..11652.836 rows=1 loops=1)\n Merge Cond: (\"outer\".az = \"inner\".muvelet_vonalkod)\n -> Nested Loop Left Join (cost=2.76..174499.23 rows=542600 \nwidth=75) (actual time=1.506..11632.727 rows=484 loops=1)\n Join Filter: ((\"outer\".hely = \"inner\".hely) AND \n(\"inner\".tol <= \"outer\".idopont) AND (\"outer\".idopont < \"inner\".ig))\n Filter: (CASE WHEN ((\"outer\".muszakhely IS NULL) AND \n(\"inner\".hely IS NULL)) THEN false ELSE CASE WHEN ((\"outer\".muszakhely IS \nNULL) AND (\"inner\".hely IS NOT NULL)) THEN true ELSE CASE WHEN \n((\"outer\".muszakhely IS NOT NULL) AND (\"inner\".hely IS NULL)) THEN true ELSE \n(\"outer\".muszakhely <> \"inner\".hely) END END END OR CASE WHEN \n((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NULL)) THEN false ELSE CASE \nWHEN ((\"outer\".muszaknap IS NULL) AND (\"inner\".nap IS NOT NULL)) THEN true \nELSE CASE WHEN ((\"outer\".muszaknap IS NOT NULL) AND (\"inner\".nap IS NULL)) \nTHEN true ELSE (\"outer\".muszaknap <> \"inner\".nap) END END END OR CASE WHEN \n((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NULL)) THEN false \nELSE CASE WHEN ((\"outer\".muszakkod IS NULL) AND (\"inner\".muszakkod IS NOT \nNULL)) THEN true ELSE CASE WHEN ((\"outer\".muszakkod IS NOT NULL) AND \n(\"inner\".muszakkod IS NULL)) THEN true ELSE (\"outer\".muszakkod <> \n\"inner\".muszakkod) END END END)\n -> Nested Loop Left Join (cost=1.01..3578.48 \nrows=65112 width=51) (actual time=0.140..757.392 rows=61461 loops=1)\n Join Filter: (((\"inner\".olvaso_nev)::text = \n(\"outer\".olvaso_nev)::text) AND (\"inner\".tol <= \"outer\".idopont) AND \n(\"outer\".idopont < \"inner\".ig))\n -> Index Scan using muvelet_vonalkod_pkey on \nmuvelet_vonalkod mv (cost=0.00..1786.89 rows=65112 width=55) (actual \ntime=0.103..144.516 rows=61461 loops=1)\n -> Materialize (cost=1.01..1.02 rows=1 \nwidth=28) (actual time=0.001..0.002 rows=1 loops=61461)\n -> Seq Scan on olvaso_hely oh \n(cost=0.00..1.01 rows=1 width=28) (actual time=0.005..0.007 rows=1 loops=1)\n -> Materialize (cost=1.75..2.50 rows=75 width=28) \n(actual time=0.001..0.054 rows=73 loops=61461)\n -> Seq Scan on muszak m (cost=0.00..1.75 
\nrows=75 width=28) (actual time=0.012..0.179 rows=73 loops=1)\n -> Sort (cost=3149.93..3211.55 rows=24649 width=4) (actual \ntime=15.420..17.108 rows=2356 loops=1)\n Sort Key: ny.muvelet_vonalkod\n -> Index Scan using muvelet_vonalkod_ny_idopont on \nmuvelet_vonalkod_ny ny (cost=0.00..1351.88 rows=24649 width=4) (actual \ntime=0.048..9.502 rows=3942 loops=1)\n Index Cond: (idopont >= (now() - \n('00:00:00'::interval + ('1 days'::text)::interval)))\n Total runtime: 11653.429 ms\n(20 rows)\n", "msg_date": "Tue, 23 Aug 2005 16:50:02 +0200", "msg_from": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]>", "msg_from_op": true, "msg_subject": "fake condition causes far better plan" }, { "msg_contents": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]> writes:\n> [ bad query plan ]\n\nMost of the problem is here:\n\n> -> Index Scan using muvelet_vonalkod_ny_idopont on \n> muvelet_vonalkod_ny ny (cost=0.00..1351.88 rows=24649 width=4) (actual \n> time=0.161..10.735 rows=3943 loops=1)\n> Index Cond: (idopont >= (now() - \n> ('00:00:00'::interval + ('1 days'::text)::interval)))\n\n(BTW, you lied about the query, because this index condition doesn't\nmatch anything in the given query text.)\n\nPre-8.0 releases aren't capable of making useful statistical estimates\nfor conditions involving nonconstant subexpressions, so you get a\nbadly-mistaken row count estimate that leads to a poor choice of plan.\n\nIf you can't update to 8.0, the best answer is to do the date arithmetic\non the client side. Another way is to create an allegedly-immutable\nfunction along the lines of \"ago(interval) returns timestamptz\" to hide\nthe now() call --- this is dangerous but sometimes it's the easiest answer.\nSee the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Aug 2005 11:14:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fake condition causes far better plan " } ]
[ { "msg_contents": "Hi all,\n\nI like to know the caching policies of Postgresql. \nWhat parameter in the postgresql.conf affects the\ncache size used by the Postgresql? As far as I have\nsearched my knowledge of the parameters are\n\n1. shared_buffers - Sets the limit on the amount of\nshared memory used. If I take this is as the cache\nsize then my performance should increase with the\nincrease in the size of shared_buffers. But it seems\nit is not the case and my performance actually\ndecreases with the increase in the shared_buffers. I\nhave a RAM size of 32 GB. The table which I use more\nfrequently has around 68 million rows. Can I cache\nthis entire table in RAM?\n\n2. work_mem - It is the amount of memory used by an\noperation. My guess is once the operation is complete\nthis is freed and hence has nothing to do with the\ncaching.\n\n3. effective_cache_size - The parameter used by the\nquery planner and has nothing to do with the actual\ncaching.\n\nSo kindly help me in pointing me to the correct\nparameter to set.\n\nIt will be great if you can point me to the docs that\nexplains the implementation of caching in Postgresql\nwhich will help me in understanding things much\nclearly.\n\nThanks in advance.\nGokul.\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Tue, 23 Aug 2005 10:10:45 -0700 (PDT)", "msg_from": "gokulnathbabu manoharan <[email protected]>", "msg_from_op": true, "msg_subject": "Caching by Postgres" }, { "msg_contents": "gokulnathbabu manoharan wrote:\n> Hi all,\n>\n> I like to know the caching policies of Postgresql.\n> What parameter in the postgresql.conf affects the\n> cache size used by the Postgresql? As far as I have\n> searched my knowledge of the parameters are\n\nIn general, you don't. The OS handles caching based on file usage.\nSo if you are using the files, the OS should cache them. Just like it\ndoes with any other program.\n\n>\n> 1. shared_buffers - Sets the limit on the amount of\n> shared memory used. If I take this is as the cache\n> size then my performance should increase with the\n> increase in the size of shared_buffers. But it seems\n> it is not the case and my performance actually\n> decreases with the increase in the shared_buffers. I\n> have a RAM size of 32 GB. The table which I use more\n> frequently has around 68 million rows. Can I cache\n> this entire table in RAM?\n\nThere is a portion of this which is used for caching. But I believe\nbefore 8.1 there was code that went linearly through all of the\nshared_buffers and checked for dirty/clean pages. So there was a\ntradeoff that the bigger you make it, the longer that search goes. So\nyou got diminishing returns, generally around 10k shared buffers.\nI think it is better in 8.1, but if the OS is going to cache it anyway\n(since it does), then having a Postgres cache is just wasting memory,\nand not letting cache as much.\n\nSo I'm guessing that with 8.1 there would be 2 sweet spots. Low\nshared_buffers (<= 10k), and really high shared buffers (like all of\navailable ram).\nBut because postgres has been tuned for the former I would stick with it\n(I don't think shared_buffers can go >2GB, but that might just be\nwork_mem/maintenance_work_mem).\n\n>\n> 2. work_mem - It is the amount of memory used by an\n> operation. My guess is once the operation is complete\n> this is freed and hence has nothing to do with the\n> caching.\n>\n> 3. 
effective_cache_size - The parameter used by the\n> query planner and has nothing to do with the actual\n> caching.\n\nThis is important from a planner issue. Because the planner can then\nexpect that the OS is doing its job and caching the tables, so index\nscans are cheaper than they would be otherwise.\n\nJohn\n=:->\n\n>\n> So kindly help me in pointing me to the correct\n> parameter to set.\n>\n> It will be great if you can point me to the docs that\n> explains the implementation of caching in Postgresql\n> which will help me in understanding things much\n> clearly.\n>\n> Thanks in advance.\n> Gokul.\n>", "msg_date": "Tue, 23 Aug 2005 12:25:59 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Tue, Aug 23, 2005 at 10:10:45 -0700,\n gokulnathbabu manoharan <[email protected]> wrote:\n> Hi all,\n> \n> I like to know the caching policies of Postgresql. \n> What parameter in the postgresql.conf affects the\n> cache size used by the Postgresql? As far as I have\n> searched my knowledge of the parameters are\n\nThe main policy is to let the OS do most of the caching.\n\n> 1. shared_buffers - Sets the limit on the amount of\n> shared memory used. If I take this is as the cache\n> size then my performance should increase with the\n> increase in the size of shared_buffers. But it seems\n> it is not the case and my performance actually\n> decreases with the increase in the shared_buffers. I\n> have a RAM size of 32 GB. The table which I use more\n> frequently has around 68 million rows. Can I cache\n> this entire table in RAM?\n\nUsing extermely large values for shared buffers is known to be a performance\nloss for Postgres. Some improvements were made for 8.0 and more for 8.1.\n\nThe OS will cache frequently used data from files for you. So if you are using\nthat table a lot and the rows aren't too wide, it should mostly be cached\nfor you by the OS.\n\n> 2. work_mem - It is the amount of memory used by an\n> operation. My guess is once the operation is complete\n> this is freed and hence has nothing to do with the\n> caching.\n\nThis is used for sorts and some other things.\n\n> 3. effective_cache_size - The parameter used by the\n> query planner and has nothing to do with the actual\n> caching.\n\nYou are supposed to use this to give the planner an idea about how much\nspace the OS will using for caching on behalf of Posgres.\n\n> So kindly help me in pointing me to the correct\n> parameter to set.\n> \n> It will be great if you can point me to the docs that\n> explains the implementation of caching in Postgresql\n> which will help me in understanding things much\n> clearly.\n\nYou probably want to read the following:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n", "msg_date": "Tue, 23 Aug 2005 12:41:08 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Tue, 23 Aug 2005 10:10:45 -0700 (PDT)\ngokulnathbabu manoharan <[email protected]> wrote:\n\n> Hi all,\n> \n> I like to know the caching policies of Postgresql. \n> What parameter in the postgresql.conf affects the\n> cache size used by the Postgresql? As far as I have\n> searched my knowledge of the parameters are\n> \n> 1. shared_buffers - Sets the limit on the amount of\n> shared memory used. If I take this is as the cache\n> size then my performance should increase with the\n> increase in the size of shared_buffers. 
But it seems\n> it is not the case and my performance actually\n> decreases with the increase in the shared_buffers. I\n> have a RAM size of 32 GB. The table which I use more\n> frequently has around 68 million rows. Can I cache\n> this entire table in RAM?\n\n increasing shared_buffers to a point helps, but after\n a certain threshold it can actually degree performance. \n \n> 2. work_mem - It is the amount of memory used by an\n> operation. My guess is once the operation is complete\n> this is freed and hence has nothing to do with the\n> caching.\n\n This is the amount of memory used for things like sorts and\n order bys on a per backend process basis. \n \n> 3. effective_cache_size - The parameter used by the\n> query planner and has nothing to do with the actual\n> caching.\n\n The instructs the query planner on how large the operating\n system's disk cache is. There isn't a built in cache, PostgreSQL\n relies on the operating system to cache the on disk information\n based on how often it is used. In most cases this is probably\n more accurate anyway. \n\n I wrote an article on PostgreSQL performance tuning that has\n links to several other related sites, you can find it here: \n\n http://www.revsys.com/writings/postgresql-performance.html\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Tue, 23 Aug 2005 12:43:23 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "John,\n\n> So I'm guessing that with 8.1 there would be 2 sweet spots. Low\n> shared_buffers (<= 10k), and really high shared buffers (like all of\n> available ram).\n> But because postgres has been tuned for the former I would stick with it\n> (I don't think shared_buffers can go >2GB, but that might just be\n> work_mem/maintenance_work_mem).\n\nI'll be testing this as soon as we get some issues with the 64bit \nshared_buffer patch worked out.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 23 Aug 2005 10:57:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "\nI mean well with this comment - \nThis whole issue of data caching is a troubling issue with postreSQL\nin that even if you ran postgreSQL on a 64 bit address space\nwith larger number of CPUs you won't see much of a scale up\nand possibly even a drop. I am not alone in having the *expectation*\nthat a database should have some cache size parameter and\nthe option to skip the file system. If I use oracle, sybase, mysql\nand maxdb they all have the ability to size a data cache and move\nto 64 bits.\n\nIs this a crazy idea - that a project be started to get this adopted? \nIs it\ntoo big and structural to contemplate? \n\n From one who likes postgreSQL\ndc\n\nFrank Wiles wrote:\n\n>On Tue, 23 Aug 2005 10:10:45 -0700 (PDT)\n>gokulnathbabu manoharan <[email protected]> wrote:\n>\n> \n>\n>>Hi all,\n>>\n>>I like to know the caching policies of Postgresql. \n>>What parameter in the postgresql.conf affects the\n>>cache size used by the Postgresql? As far as I have\n>>searched my knowledge of the parameters are\n>>\n>>1. shared_buffers - Sets the limit on the amount of\n>>shared memory used. If I take this is as the cache\n>>size then my performance should increase with the\n>>increase in the size of shared_buffers. 
But it seems\n>>it is not the case and my performance actually\n>>decreases with the increase in the shared_buffers. I\n>>have a RAM size of 32 GB. The table which I use more\n>>frequently has around 68 million rows. Can I cache\n>>this entire table in RAM?\n>> \n>>\n>\n> increasing shared_buffers to a point helps, but after\n> a certain threshold it can actually degree performance. \n> \n> \n>\n>>2. work_mem - It is the amount of memory used by an\n>>operation. My guess is once the operation is complete\n>>this is freed and hence has nothing to do with the\n>>caching.\n>> \n>>\n>\n> This is the amount of memory used for things like sorts and\n> order bys on a per backend process basis. \n> \n> \n>\n>>3. effective_cache_size - The parameter used by the\n>>query planner and has nothing to do with the actual\n>>caching.\n>> \n>>\n>\n> The instructs the query planner on how large the operating\n> system's disk cache is. There isn't a built in cache, PostgreSQL\n> relies on the operating system to cache the on disk information\n> based on how often it is used. In most cases this is probably\n> more accurate anyway. \n>\n> I wrote an article on PostgreSQL performance tuning that has\n> links to several other related sites, you can find it here: \n>\n> http://www.revsys.com/writings/postgresql-performance.html\n>\n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n>\n\n", "msg_date": "Tue, 23 Aug 2005 14:41:39 -0400", "msg_from": "Donald Courtney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "Donald Courtney <[email protected]> writes:\n> I am not alone in having the *expectation* that a database should have\n> some cache size parameter and the option to skip the file system. If\n> I use oracle, sybase, mysql and maxdb they all have the ability to\n> size a data cache and move to 64 bits.\n\nAnd you're not alone in holding that opinion despite having no shred\nof evidence that it's worthwhile expanding the cache that far.\n\nHowever, since we've gotten tired of hearing this FUD over and over,\n8.1 will have the ability to set shared_buffers as high as you want.\nI expect next we'll be hearing from people complaining that they\nset shared_buffers to use all of RAM and performance went into the\ntank ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Aug 2005 15:23:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres " }, { "msg_contents": "Donald,\n\n> This whole issue of data caching is a troubling issue with postreSQL\n> in that even if you ran postgreSQL on a 64 bit address space\n> with larger number of CPUs you won't see much of a scale up\n> and possibly even a drop. \n\nSince when? Barring the context switch bug, you're not going to get a \ndrop with more processors/more RAM.\n\nYou may fail to get any gain, though. If your database is only 100MB in \nsize, having 11G of cache space isn't going to help you much over having \nonly 1G. \n\n> I am not alone in having the *expectation* \n> that a database should have some cache size parameter and\n> the option to skip the file system. \n\nSure, because that's the conventional wisdom, as writ by Oracle. 
However, \nthis comes with substantial code maintenance costs and portability \nlimitations which have to be measured against any gain in performance. \n\n> If I use oracle, sybase, mysql \n> and maxdb they all have the ability to size a data cache and move\n> to 64 bits.\n\nAnd yet, we regularly outperform Sybase and MySQL on heavy OLTP loads on \ncommodity x86 hardware. So apparently DB caching isn't everything. ;-)\n\nI'm not saying that it's not worth testing larger database caches -- even \ntaking over most of RAM -- on high-speed systems. In fact, I'm working \non doing that kind of test now. However, barring test results, we can't \nassume that taking over RAM and the FS cache would have a substantial \nperformance benefit; that remains to be shown.\n\nThe other thing is that we've had, and continue to have, low-hanging fruit \nwhich have a clear and measurable effect on performance and are fixable \nwithout bloating the PG code. Some of these issues (COPY path, context \nswitching, locks, GiST concurrency, some aggregates) have been addressed \nin the 8.1 code; some remain to be addressed (sorts, disk spill, 64-bit \nsort mem, other aggregates, index-only access, etc.). Why tackle a huge, \n250-hour project which could fail when a 20-hour patch is more likely to \nprovide the same performance benefit? \n\nWe have the same discussion (annually) about mmap. Using mmap *might* \nprovide us with a huge performance boost. However, it would *definitely* \nrequire 300hours (or more) of programmer time to test properly, and might \nnot benefit us at all.\n\nOf course, if *you* want to work on large database cache improvements, be \nmy guest ... it's an open source project! Submit your patches! I'll be \nhappy to test them.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 23 Aug 2005 12:38:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Tue, Aug 23, 2005 at 02:41:39PM -0400, Donald Courtney wrote:\n> I mean well with this comment - \n> This whole issue of data caching is a troubling issue with postreSQL\n> in that even if you ran postgreSQL on a 64 bit address space\n> with larger number of CPUs you won't see much of a scale up\n> and possibly even a drop. I am not alone in having the *expectation*\n> that a database should have some cache size parameter and\n> the option to skip the file system. If I use oracle, sybase, mysql\n> and maxdb they all have the ability to size a data cache and move\n> to 64 bits.\n> Is this a crazy idea - that a project be started to get this adopted? \n> Is it\n> too big and structural to contemplate? \n> From one who likes postgreSQL\n\nHey Donald. :-)\n\nThis is an operating system issue, not a PostgreSQL issue. 
If you have\nmore physical memory than fits in 32-bit addresses, and your operating\nsystem isn't using this extra memory to cache files (or anything\nelse), than your OS is what I would consider to be broken (or at the\nvery least, not designed for a 64-bit host).\n\nThe only questions that can be asked here is - 1) can PostgreSQL do a\nbetter job than the OS at best utilizing system RAM, and 2) if so, is\nthe net gain worth the added complexity to PostgreSQL?\n\nI happen to think that yes, PostgreSQL can do a better job than most\nOS's, as it has better information to make decisions as to which pages\nare worth keeping, and which are not, but no, it isn't worth the\neffort until PostgreSQL developers start running out of things to do.\n\nBuy your 64-bit platforms - but if page caching is your concern, 1)\nensure that you really have more physical memory than can fit in 32\nbits, and 2) ensure that your operating system is comfortable caching\ndata pages from files above the 32-bit mark.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 23 Aug 2005 16:03:42 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:\n>which have a clear and measurable effect on performance and are fixable \n>without bloating the PG code. Some of these issues (COPY path, context \n>switching\n\nDoes that include increasing the size of read/write blocks? I've noticed\nthat with a large enough table it takes a while to do a sequential scan,\neven if it's cached; I wonder if the fact that it takes a million\nread(2) calls to get through an 8G table is part of that.\n\nMike Stone\n", "msg_date": "Tue, 23 Aug 2005 16:29:32 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "[email protected] (Donald Courtney) writes:\n> I mean well with this comment -\n> This whole issue of data caching is a troubling issue with postreSQL\n> in that even if you ran postgreSQL on a 64 bit address space\n> with larger number of CPUs you won't see much of a scale up\n> and possibly even a drop. I am not alone in having the *expectation*\n> that a database should have some cache size parameter and\n> the option to skip the file system. If I use oracle, sybase, mysql\n> and maxdb they all have the ability to size a data cache and move\n> to 64 bits.\n>\n> Is this a crazy idea - that a project be started to get this\n> adopted? Is it too big and structural to contemplate?\n\nThis project amounts to \"Implement Your Own Operating System,\" because\nit requires that the DBMS take over the things that operating systems\nnormally do, like:\n a) Managing access to filesystems and\n b) Managing memory\n\nThe world is already sufficiently filled up with numerous variations\nof Linux, BSD 4.4 Lite, and UNIX System V; I can't see justification for\nreinventing this wheel still again.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://cbbrowne.com/info/multiplexor.html\nRules of the Evil Overlord #196. 
\"I will hire an expert marksman to\nstand by the entrance to my fortress. His job will be to shoot anyone\nwho rides up to challenge me.\" <http://www.eviloverlord.com/>\n", "msg_date": "Tue, 23 Aug 2005 16:41:33 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "[email protected] (Michael Stone) writes:\n> On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:\n>> which have a clear and measurable effect on performance and are\n>> fixable without bloating the PG code. Some of these issues (COPY\n>> path, context switching\n>\n> Does that include increasing the size of read/write blocks? I've\n> noticed that with a large enough table it takes a while to do a\n> sequential scan, even if it's cached; I wonder if the fact that it\n> takes a million read(2) calls to get through an 8G table is part of\n> that.\n\nBut behind the scenes, the OS is still going to have to evaluate the\n\"is this in cache?\" question for each and every one of those pages.\n(Assuming the kernel's page size is 8K; if it's smaller, the number of\nevaluations will be even higher...)\n\nGrouping the read(2) calls together isn't going to have any impact on\n_that_ evaluation.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://www3.sympatico.ca/cbbrowne/finances.html\n\"People who don't use computers are more sociable, reasonable, and ...\nless twisted\" -- Arthur Norman\n", "msg_date": "Tue, 23 Aug 2005 17:17:42 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "Donald Courtney wrote:\n> in that even if you ran postgreSQL on a 64 bit address space\n> with larger number of CPUs you won't see much of a scale up\n> and possibly even a drop. I am not alone in having the *expectation*\n\nWhat's your basis for believing this is the case? Why would PostgreSQL's \ndependence on the OS's caching/filesystem limit scalability? I know when \nI went from 32bit to 64bit Linux, I got *HUGE* increases in performance \nusing the same amount of memory. And when I went from 2x1P to 2xDC, my \naverage cpu usage % dropped almost in half.\n\n> that a database should have some cache size parameter and\n> the option to skip the file system. If I use oracle, sybase, mysql\n> and maxdb they all have the ability to size a data cache and move\n> to 64 bits.\n\nJosh Berkus has already mentioned this as conventional wisdom as written \nby Oracle. This may also be legacy wisdom. Oracle/Sybase/etc has been \naround for a long time; it was probably a clear performance win way back \nwhen. Nowadays with how far open-source OS's have advanced, I'd take it \nwith a grain of salt and do my own performance analysis. I suspect the \nbig vendors wouldn't change their stance even if they knew it was no \nlonger true due to the support hassles.\n\nMy personal experience with PostgreSQL. Dropping shared buffers from 2GB \nto 750MB improved performance on my OLTP DB a good 25%. Going down from \n750MB to 150MB was another +10%.\n", "msg_date": "Tue, 23 Aug 2005 14:59:42 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "\n> Josh Berkus has already mentioned this as conventional wisdom as written \n> by Oracle. This may also be legacy wisdom. Oracle/Sybase/etc has been \n> around for a long time; it was probably a clear performance win way back \n> when. 
Nowadays with how far open-source OS's have advanced, I'd take it \n> with a grain of salt and do my own performance analysis. I suspect the \n> big vendors wouldn't change their stance even if they knew it was no \n> longer true due to the support hassles.\n\n\tReinvent a filesystem... that would be suicidal.\n\n\tNow, Hans Reiser has expressed interest on the ReiserFS list in tweaking \nhis Reiser4 especially for Postgres. In his own words, he wants a \"Killer \napp for reiser4\". Reiser4 will offser transactional semantics via a \nspecial reiser4 syscall, so it might be possible, with a minimum of \nchanges to postgres (ie maybe just another sync mode besides fsync, \nfdatasync et al) to use this. Other interesting details were exposed on \nthe reiser list, too (ie. a transactional filesystems can give ACID \nguarantees to postgres without the need for fsync()).\n\n\tVery interesting.\n", "msg_date": "Wed, 24 Aug 2005 01:29:42 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "PFC,\n\n>         Now, Hans Reiser has expressed interest on the ReiserFS list in\n> tweaking   his Reiser4 especially for Postgres. In his own words, he wants\n> a \"Killer app for reiser4\". Reiser4 will offser transactional semantics via\n> a special reiser4 syscall, so it might be possible, with a minimum of\n> changes to postgres (ie maybe just another sync mode besides fsync,\n> fdatasync et al) to use this. Other interesting details were exposed on the\n> reiser list, too (ie. a transactional filesystems can give ACID guarantees\n> to postgres without the need for fsync()).\n\nReally? Cool, I'd like to see that. Could you follow up with Hans? Or give \nme his e-mail?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 23 Aug 2005 19:11:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, 24 Aug 2005, PFC wrote:\n\n>\n> > Josh Berkus has already mentioned this as conventional wisdom as written\n> > by Oracle. This may also be legacy wisdom. Oracle/Sybase/etc has been\n> > around for a long time; it was probably a clear performance win way back\n> > when. Nowadays with how far open-source OS's have advanced, I'd take it\n> > with a grain of salt and do my own performance analysis. I suspect the\n> > big vendors wouldn't change their stance even if they knew it was no\n> > longer true due to the support hassles.\n>\n> \tReinvent a filesystem... that would be suicidal.\n>\n> \tNow, Hans Reiser has expressed interest on the ReiserFS list in tweaking\n> his Reiser4 especially for Postgres. In his own words, he wants a \"Killer\n> app for reiser4\". Reiser4 will offser transactional semantics via a\n> special reiser4 syscall, so it might be possible, with a minimum of\n> changes to postgres (ie maybe just another sync mode besides fsync,\n> fdatasync et al) to use this. Other interesting details were exposed on\n> the reiser list, too (ie. a transactional filesystems can give ACID\n> guarantees to postgres without the need for fsync()).\n>\n> \tVery interesting.\n\nUmmm... I don't see anything here which will be a win for Postgres. 
The\ntransactional semantics we're interested in are fairly complex:\n\n1) Modifications to multiple objects can become visible to the system\natomically\n2) On error, a series of modifications which had been grouped together\nwithin a transaction can be rolled back\n3) Using object version information, determine which version of which\nobject is visible to a given session\n4) Using version information and locking, detect and resolve read/write\nand write/write conflicts\n\nNow, I can see a file system offering (1) and (2). But a file system that\ncan allow people to do (3) and (4) would require that we make *major*\nmodifications to how postgresql is implemented. More over, it would be for\nno gain, since we've already written a system which can do it.\n\nA filesystem could, in theory, help us by providing an API which allows us\nto tell the file system either: the way we'd like it to read ahead, the\nfact that we don't want it to read ahead or the way we'd like it to cache\n(or not cache) data. The thing is, most OSes provide interfaces to do this\nalready and we make only little use of them (I'm think of\nmadv_sequential(), madv_random(), POSIX fadvise(), the various flags to\nopen() which AIX, HPUX, Solaris provide).\n\nGavin\n", "msg_date": "Wed, 24 Aug 2005 14:38:02 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "Gavin Sherry <[email protected]> writes:\n> A filesystem could, in theory, help us by providing an API which allows us\n> to tell the file system either: the way we'd like it to read ahead, the\n> fact that we don't want it to read ahead or the way we'd like it to cache\n> (or not cache) data. The thing is, most OSes provide interfaces to do this\n> already and we make only little use of them (I'm think of\n> madv_sequential(), madv_random(), POSIX fadvise(), the various flags to\n> open() which AIX, HPUX, Solaris provide).\n\nYeah ... the main reason we've not spent too much time on that sort of\nstuff is that *it's not portable*. And with all due respect to Hans,\nspecial tweaks for one filesystem are even less interesting than special\ntweaks for one OS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Aug 2005 01:07:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres " }, { "msg_contents": "On Wed, 24 Aug 2005, Tom Lane wrote:\n\n> Gavin Sherry <[email protected]> writes:\n> > A filesystem could, in theory, help us by providing an API which allows us\n> > to tell the file system either: the way we'd like it to read ahead, the\n> > fact that we don't want it to read ahead or the way we'd like it to cache\n> > (or not cache) data. The thing is, most OSes provide interfaces to do this\n> > already and we make only little use of them (I'm think of\n> > madv_sequential(), madv_random(), POSIX fadvise(), the various flags to\n> > open() which AIX, HPUX, Solaris provide).\n>\n> Yeah ... the main reason we've not spent too much time on that sort of\n> stuff is that *it's not portable*. 
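For readers who have not used these hints, here is roughly what they look like at the syscall level. This is a sketch only, assuming a POSIX platform; the #ifdef stands in for exactly the portability problem being described, and the file name below is a hypothetical placeholder, not PostgreSQL code.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void hint_access_pattern(int fd, int sequential)
{
#ifdef POSIX_FADV_SEQUENTIAL
    /* Ask the kernel to read ahead aggressively (sequential scan) or to
     * skip readahead (index scan).  It is only a hint; the kernel may
     * ignore it, and on platforms without posix_fadvise() this whole
     * block disappears, which is the portability wart in a nutshell. */
    (void) posix_fadvise(fd, 0, 0,
                         sequential ? POSIX_FADV_SEQUENTIAL
                                    : POSIX_FADV_RANDOM);
#else
    (void) fd;
    (void) sequential;
#endif
}

int main(void)
{
    int fd = open("some_table_file", O_RDONLY);   /* hypothetical path */

    if (fd < 0) { perror("open"); return 1; }
    hint_access_pattern(fd, 1);      /* about to scan it sequentially */
    close(fd);
    return 0;
}

madvise(MADV_SEQUENTIAL) and madvise(MADV_RANDOM) play the same role for mmap()ed regions.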
And with all due respect to Hans,\n> special tweaks for one filesystem are even less interesting than special\n> tweaks for one OS.\n\nRight.\n\nAs an aside, it seems to me that if there is merit in all this low level\ninteraction with the file system (not to mention the other platform\nspecific microoptimisations which come up regularly on the lists) then the\ncompanies currently producing niche commercial releases of PostgreSQL\nshould be taking advantage of them: if it increases performance, then\nthere's a reason to buy as opposed to just downloading the OSS version.\n\nGavin\n", "msg_date": "Wed, 24 Aug 2005 15:24:07 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres " }, { "msg_contents": "\n\nGreat discussion and illuminating for those of us who are still\nlearning the subtleties of postGres.\n\nWilliam\n\nTo be clear -\nI built postgreSQL 8.1 64K bit on solaris 10 a few months ago\nand side by side with the 32 bit postgreSQL build saw no improvement. \nIn fact the 64 bit result was slightly lower.\n\nI used the *same 64 bit S10 OS* for both versions. I think your\nexperience makes sense since your change was from 32 to 64 bit Linux.\n From my experiment I am surmising that there will not be any \nfile/os/buffer-cache\nscale up effect on the same OS with postgreSQL 64. \n\nI was testing on a 4 core system in both cases.\n\n\n\nWilliam Yu wrote:\n\n> Donald Courtney wrote:\n>\n>> in that even if you ran postgreSQL on a 64 bit address space\n>> with larger number of CPUs you won't see much of a scale up\n>> and possibly even a drop. I am not alone in having the *expectation*\n>\n>\n> What's your basis for believing this is the case? Why would \n> PostgreSQL's dependence on the OS's caching/filesystem limit \n> scalability? I know when I went from 32bit to 64bit Linux, I got \n> *HUGE* increases in performance using the same amount of memory. And \n> when I went from 2x1P to 2xDC, my average cpu usage % dropped almost \n> in half.\n>\n>> that a database should have some cache size parameter and\n>> the option to skip the file system. If I use oracle, sybase, mysql\n>> and maxdb they all have the ability to size a data cache and move\n>> to 64 bits.\n>\n>\n> Josh Berkus has already mentioned this as conventional wisdom as \n> written by Oracle. This may also be legacy wisdom. Oracle/Sybase/etc \n> has been around for a long time; it was probably a clear performance \n> win way back when. Nowadays with how far open-source OS's have \n> advanced, I'd take it with a grain of salt and do my own performance \n> analysis. I suspect the big vendors wouldn't change their stance even \n> if they knew it was no longer true due to the support hassles.\n>\n> My personal experience with PostgreSQL. Dropping shared buffers from \n> 2GB to 750MB improved performance on my OLTP DB a good 25%. 
Going down \n> from 750MB to 150MB was another +10%.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "Wed, 24 Aug 2005 09:21:12 -0400", "msg_from": "Donald Courtney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "* Donald Courtney ([email protected]) wrote:\n> To be clear -\n> I built postgreSQL 8.1 64K bit on solaris 10 a few months ago\n> and side by side with the 32 bit postgreSQL build saw no improvement. \n> In fact the 64 bit result was slightly lower.\n\nThat makes some sense actually. It really depends on what you're doing\nalot of the time. On a Sparc system you're not likely to get much of a\nspeed improvment by going to 64bit (unless, maybe, you're doing lots of\nintensive 64bit math ops). You'll have larger pointers and whatnot\nthough.\n\n> I used the *same 64 bit S10 OS* for both versions. I think your\n> experience makes sense since your change was from 32 to 64 bit Linux.\n\n32bit to 64bit Linux on a Sparc platform really shouldn't affect\nperformance all that much (I'd expect it to be similar to 32bit to 64bit\nunder Solaris actually, at least in terms of the performance\ndifference). 32bit to 64bit Linux on an amd64 platform is another\nmatter entirely though, but not because of the number of bits involved.\n\nUnder amd64, 32bit is limited to 32bit on i386 which has a limited\nnumber of registers and whatnot. Under amd64/64bit you get more\nregisters (and I think some other niceities) which will improve\nperformance. That's not a 32bit vs. 64bit thing, that's i386 vs. native\namd64. It's really mainly an oddity of the platform. On a mips system\nI'd expect the same kind of performance difference between 32bit and\n64bit as you'd see on a sparc platform.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Wed, 24 Aug 2005 10:55:33 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "\n> Really? Cool, I'd like to see that. Could you follow up with Hans? \n> Or give\n> me his e-mail?\n\n\tYou can subscribe to the Reiser mailinglist on namesys.com or :\n\[email protected]\n", "msg_date": "Wed, 24 Aug 2005 17:36:35 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, Aug 24, 2005 at 09:21:12AM -0400, Donald Courtney wrote:\n> I built postgreSQL 8.1 64K bit on solaris 10 a few months ago\n> and side by side with the 32 bit postgreSQL build saw no improvement. \n> In fact the 64 bit result was slightly lower.\n\nI've had this sort of argument with a friend of mine who works at a\nretail computer sales company who always tries to pitch 64-bit\nplatforms to me (I don't have one yet).\n\nThere are a few issues in here that are hard to properly detach to\nallow for a fair comparison.\n\nThe first, to always remember - is that the move from 64-bits to\n32-bits doesn't come for free. In a real 64-bit system with a\n64-bit operating system, and 64-bit applications, pointers are\nnow double their 32-bit size. This means more bytes to copy around\nmemory, and in an extreme case, has the potential to approach\nhalfing both the memory latency to access many such pointers from\nRAM, and half the effective amount of RAM. 
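A quick, hedged way to see this pointer growth on your own machine is to compile a pointer-heavy structure for both ABIs and compare sizes; the struct below is an arbitrary stand-in for illustration, not a PostgreSQL type. Build it once with gcc -m32 and once with -m64 on a platform that supports both.

#include <stdio.h>

struct list_node
{
    struct list_node *next;
    struct list_node *prev;
    void             *payload;
    int               key;
};

int main(void)
{
    printf("sizeof(void *)           = %zu\n", sizeof(void *));
    printf("sizeof(long)             = %zu\n", sizeof(long));
    printf("sizeof(struct list_node) = %zu\n", sizeof(struct list_node));
    return 0;
}

With a typical compiler the node goes from 16 bytes under ILP32 to 32 bytes under LP64 once padding is counted, which is the "more bytes to copy around memory" effect in miniature.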
In real world cases,\nnot everything is a pointer, so this sort of performance degradation\nis doubtful - but it is something to keep in mind.\n\nIn response to this, it appears that, at least on the Intel/AMD side\nof things, they've increased the bandwidth on the motherboard, and\nallowed for faster memory to be connected to the motherboard. They've\nincreased the complexity of the chip, to allow 64-bit register\noperations to be equivalent in speed to 32-bit register operations.\nI have no idea what else they've done... :-)\n\nSo, it may be difficult to properly compare a 32-bit system to a\n64-bit system. Even if the Ghz on the chip appears equal, it isn't\nthe same chip, and unless it is the exact same make, product and\nversion of the motherboard, it may not be a fair compairson. Turning\nsupport for 32-bit on or off, and using a kernel that is only 32-bit\nmay give good comparisons - but with the above explanation, I would\nexpect the 32-bit application + kernel to out-perform the 64-bit\napplication.\n\nSo then we move on to what 64-bit is really useful for. Obviously,\nthere is the arithmetic. If you were previously doing 64-bit\narithmetic through software, you will notice an immediate speed\nimprovement when doing it through hardware instead. If you have\na program that is scanning memory in any way, it may benefit from\n64-bit instructions (for example - copying data 64-bit words at\na time instead of 32-bit words at a time). PostgreSQL might benefit\nslightly from either of these, slightly balancing the performance\ndegradation of using more memory to store the pointers, and more\nmemory bandwidth the access the pointers.\n\nThe real benefit of 64-bit is address space. From the kernel\nperspective, it means that more programs, or bigger programs can run\nat once. From the application perspective, it means your application\ncan use more than 32-bits of address space. For programs that make\nextensive use of mmap(), this can be a necessity. They are mapping\nvery large files into their own address space. This isn't a\nperformance boost, as much as it is a 'you can't do it', if the\nfiles mmap()'ed at the same time, will not fit within 32-bits of\naddress space. This also becomes, potentially, a performance\ndegradation, as the system is now having to manage applications\nthat have very large page tables. Page faults may become\nexpensive.\n\nPostgreSQL uses read(), instead of mmap(), and uses <2 Gbyte files.\nPostgreSQL doesn't require the additional address space for normal\noperation.\n\nIf, however, you happen to have a very large amount of physical memory\n- more memory than is supported by a 32-bit system, but is supported\nby your 64-bit system, then the operating system should be able to use\nthis additional physical memory to cache file system data pages, which\nwill benefit PostgreSQL if used with tables that are larger than the\nmemory supported by your 32-bit system, and which have queries which\nrequire more pages than the memory supported by your 32-bit system to\nbe frequently accessed. If you have a huge database, with many clients\naccessing the data, this would be a definate yes. With anything less,\nit is a maybe, or a probably not.\n\nI've been looking at switching to 64-bit, mostly to benefit from the\nbetter motherboard bandwidth, and just to play around. I'm not\nexpecting to require the 64-bit instructions.\n\nHope this helps,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . 
. .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 24 Aug 2005 13:30:56 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "[email protected] wrote:\n> So then we move on to what 64-bit is really useful for. Obviously,\n> there is the arithmetic. If you were previously doing 64-bit\n> arithmetic through software, you will notice an immediate speed\n> improvement when doing it through hardware instead. If you have\n> a program that is scanning memory in any way, it may benefit from\n> 64-bit instructions (for example - copying data 64-bit words at\n> a time instead of 32-bit words at a time). PostgreSQL might benefit\n> slightly from either of these, slightly balancing the performance\n> degradation of using more memory to store the pointers, and more\n> memory bandwidth the access the pointers.\n> \nAt least on Sparc processors, v8 and newer, any double precision math \n(including longs) is performed with a single instruction, just like for \na 32 bit datum. Loads and stores of 8 byte datums are also handled via \na single instruction. The urban myth that 64bit math is \ndifferent/better on a 64 bit processor is just that; yes, some lower \nend processors would emulate/trap those instructions but that an \nimplementation detail, not architecture. I believe that this is all \ntrue for other RISC processors as well.\n\nThe 64bit API on UltraSparcs does bring along some extra FP registers IIRC.\n\n> If, however, you happen to have a very large amount of physical memory\n> - more memory than is supported by a 32-bit system, but is supported\n> by your 64-bit system, then the operating system should be able to use\n> this additional physical memory to cache file system data pages, which\n> will benefit PostgreSQL if used with tables that are larger than the\n> memory supported by your 32-bit system, and which have queries which\n> require more pages than the memory supported by your 32-bit system to\n> be frequently accessed. If you have a huge database, with many clients\n> accessing the data, this would be a definate yes. With anything less,\n> it is a maybe, or a probably not.\n> \nSolaris, at least, provided support for far more than 4GB of physical \nmemory on 32 bit kernels. A newer 64 bit kernel might be more \nefficient, but that's just because the time was taken to support large \npage sizes and more efficient data structures. It's nothing intrinsic \nto a 32 vs 64 bit kernel.\n\n-- Alan\n", "msg_date": "Wed, 24 Aug 2005 14:47:09 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "Donald Courtney wrote:\n> I built postgreSQL 8.1 64K bit on solaris 10 a few months ago\n> and side by side with the 32 bit postgreSQL build saw no improvement. In \n> fact the 64 bit result was slightly lower.\n\nI'm not surprised 32-bit binaries running on a 64-bit OS would be faster \nthan 64-bit/64-bit. 64-bit isn't some magical wand you wave and it's all \nok. 
Programs compiled as 64-bit will only run faster if (1) you need \n64-bit address space and you've been using ugly hacks like PAE to get \naccess to memory > 2GB or (2) you need native 64-bit data types and \nyou've been using ugly hacks to piece 32-bit ints together (example, \nencryption/compression). In most cases, 64-bit will run slightly slower \ndue to extra overhead of using larger datatypes.\n\nSince PostgreSQL hands off the majority of memory management/data \ncaching to the OS, only the OS needs to be 64-bit to reap the benefits \nof better memory management. Since Postgres *ALREADY* reaps the 64-bit \nbenefit, I'm not sure how the argument moving caching/mm/fs into \nPostgres would apply. Yes there's the point about possibly implementing \nbetter/smarter/more appropriate caching algorithms but that has nothing \nto do with 64-bit.\n", "msg_date": "Wed, 24 Aug 2005 11:54:42 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, Aug 24, 2005 at 02:47:09PM -0400, Alan Stange wrote:\n> At least on Sparc processors, v8 and newer, any double precision math \n> (including longs) is performed with a single instruction, just like for \n> a 32 bit datum. Loads and stores of 8 byte datums are also handled via \n> a single instruction. The urban myth that 64bit math is \n> different/better on a 64 bit processor is just that; yes, some lower \n> end processors would emulate/trap those instructions but that an \n> implementation detail, not architecture.\n\nIt isn't an urban myth that 64-bit math on a 64-bit processor is\nfaster, at least if done using registers. It definately is faster.\n\nIt may be an urban myth, though, that most applications perform\na sufficient amount of 64-bit arithmetic to warrant the upgrade.\nThe benefit may be lost in the noise for an application such as\nPostgreSQL. It takes, effectively, infinately longer to access\na disk page, than to increment a 64-bit integer in software.\n\nFor the lower end processors that emulate/trap these instructions,\nthey are being performed in software, along with the overhead of a\ntrap, and are therefore not a single instruction any more. We are\ncoming at this from different sides (which is good - perspective is\nalways good :-) ). From the Intel/AMD side of things, ALL non 64-bit\nplatforms are 'lower end processors', and don't emulate/trap the\ninstructions as they didn't exist (at least not yet - who knows what\nclever and sufficiently motivated people will do :-) ).\n\n> >If, however, you happen to have a very large amount of physical memory\n> >- more memory than is supported by a 32-bit system, but is supported\n> >by your 64-bit system, then the operating system should be able to use\n> >this additional physical memory to cache file system data pages, which\n> >will benefit PostgreSQL if used with tables that are larger than the\n> >memory supported by your 32-bit system, and which have queries which\n> >require more pages than the memory supported by your 32-bit system to\n> >be frequently accessed. If you have a huge database, with many clients\n> >accessing the data, this would be a definate yes. With anything less,\n> >it is a maybe, or a probably not.\n> Solaris, at least, provided support for far more than 4GB of physical \n> memory on 32 bit kernels. A newer 64 bit kernel might be more \n> efficient, but that's just because the time was taken to support large \n> page sizes and more efficient data structures. 
It's nothing intrinsic \n> to a 32 vs 64 bit kernel.\n\nHehe. That's why I was so careful to qualify my statements. :-)\n\nBut yeah, I agree. It's a lot of hype, for not much gain (and some\nloss, depending on what it is being used for). I only want one because\nthey're built better, and because I want to play around.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 24 Aug 2005 15:34:41 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "\n\n> At least on Sparc processors, v8 and newer, any double precision math \n> (including longs) is performed with a single instruction, just like for \n> a 32 bit datum. Loads and stores of 8 byte datums are also handled via \n> a single instruction. The urban myth that 64bit math is \n> different/better on a 64 bit processor is just that; yes, some lower \n> end processors would emulate/trap those instructions but that an \n> implementation detail, not architecture. I believe that this is all \n> true for other RISC processors as well.\n>\n> The 64bit API on UltraSparcs does bring along some extra FP registers \n> IIRC.\n\n\tIt's very different on x86.\n\t64-bit x86 like the Opteron has more registers, which are very scarce on \nthe base x86 (8 I think). This alone is very important. There are other \nfactors as well.\n\n> Solaris, at least, provided support for far more than 4GB of physical \n> memory on 32 bit kernels. A newer 64 bit kernel might be more \n> efficient, but that's just because the time was taken to support large \n> page sizes and more efficient data structures. It's nothing intrinsic \n> to a 32 vs 64 bit kernel.\n\n\tWell, on a large working set, a processor which can directly address more \nthan 4GB of memory will be a lot faster than one which can't, and has to \nplay with the MMU and paging units !\n", "msg_date": "Wed, 24 Aug 2005 22:03:43 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "[email protected] wrote:\n> On Wed, Aug 24, 2005 at 02:47:09PM -0400, Alan Stange wrote:\n> \n>> At least on Sparc processors, v8 and newer, any double precision math \n>> (including longs) is performed with a single instruction, just like for \n>> a 32 bit datum. Loads and stores of 8 byte datums are also handled via \n>> a single instruction. The urban myth that 64bit math is \n>> different/better on a 64 bit processor is just that; yes, some lower \n>> end processors would emulate/trap those instructions but that an \n>> implementation detail, not architecture.\n>> \n>\n> It isn't an urban myth that 64-bit math on a 64-bit processor is\n> faster, at least if done using registers. 
It definately is faster.\n> \nThe older 32bit RISC processors do have 64 bit registers, ALUs and \ndatapaths, and they are marketed toward high end scientific computing, \nand you're claiming that such a processor is slower than one which has \nthe addition of 64 bit pointers added to it?\n\nAs an example, an UltraSparc running a 32 bit kernel+application will \nhave the same double precision floating point performance as one \nrunning a 64bit kernel+application (except for the additional FP \nregisters in the 64bit API). For a function like daxpy, it's the exact \nsame hardware running the exact same instructions! So why do you think \nthe performance would be different?\n\nI believe the IBM Power processors also upped everything to double \nprecision internally because of some details of the \"multiply-add fused\" \ninstructions. It's been a few years since I taught H&P to CS \nundergrads, but I'm fairly sure the details are all the same for MIPS \nprocessors as well. \n\n-- Alan\n\n\n\n", "msg_date": "Wed, 24 Aug 2005 17:09:04 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, Aug 24, 2005 at 03:34:41PM -0400, [email protected] wrote:\n>It isn't an urban myth that 64-bit math on a 64-bit processor is\n>faster, at least if done using registers. It definately is faster.\n>It may be an urban myth, though, that most applications perform\n>a sufficient amount of 64-bit arithmetic to warrant the upgrade.\n\nThe mjor problem is that the definition of \"64bit processor\" is fuzzy.\nThe major slowdown of \"64bitness\" is the necessity of carting around \n64 bit pointers. It's not, however, necessary to do 64bit pointers to\nget 64bit registers & fast 64 bit ops. E.g., sgi has \"n32\" & \"n64\" abi's\nwhich can access exactly the same instruction set & registers, the\ndifference between them is the size of pointers and whether a \"long\" is\nthe same as a \"long long\". Any discussion of \"64 bit processors\" is\ndoomed from the start because people tend to start making implementation\nassumptions on top of an already vague concept. Current & future\ndiscussions are tinged by the fact that amd64 *doubles* the number\nof registers in 64 bit mode, potentially providing a major speedup--but\none that doesn't really have anything to do with being \"64bit\". \nPretty much any discussion of 64 bit mode really needs to be a\ndiscussion of a particular abi on a particular processor; talking about\n\"64 bit processors\" abstractly is a waste of time.\n\nMike Stone\n", "msg_date": "Wed, 24 Aug 2005 17:09:09 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, Aug 24, 2005 at 05:09:04PM -0400, Alan Stange wrote:\n> The older 32bit RISC processors do have 64 bit registers, ALUs and \n> datapaths, and they are marketed toward high end scientific computing, \n> and you're claiming that such a processor is slower than one which has \n> the addition of 64 bit pointers added to it?\n\nNo. I'm claiming that you are talking about a hybrid 64/32 processor,\nand that this isn't fair to declare that 64-bit arithmetic units don't\nprovide benefit for 64-bit math. :-)\n\n> As an example, an UltraSparc running a 32 bit kernel+application will \n> have the same double precision floating point performance as one \n> running a 64bit kernel+application (except for the additional FP \n> registers in the 64bit API). 
For a function like daxpy, it's the exact \n> same hardware running the exact same instructions! So why do you think \n> the performance would be different?\n\nDouble precision floating point isn't 64-bit integer arithmetic. I think\nthis is all a little besides the point. If you point is that the SPARC\nwas designed well - I agree with you.\n\nI won't agree that a SPARC with 64-bit registers should be considered\na 32-bit machine. The AMD 64-bit machines come in two forms as well -\nthe ones that allow you to use the 64-bit integer registers (not\nfloating point! those are already 80-bit!), and the ones that allow\nyou to address more memory. I wouldn't consider either to be a 32-bit\nCPU, although they will allow 32-bit applications to run fine.\n\n> I believe the IBM Power processors also upped everything to double \n> precision internally because of some details of the \"multiply-add fused\" \n> instructions. It's been a few years since I taught H&P to CS \n> undergrads, but I'm fairly sure the details are all the same for MIPS \n> processors as well. \n\nSmart design, that obscures the difference - but doesn't make the\ndifference a myth. If it's already there, then it's already there,\nand we can't talk as if it isn't.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 24 Aug 2005 20:13:20 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "On Wed, Aug 24, 2005 at 05:09:09PM -0400, Michael Stone wrote:\n> On Wed, Aug 24, 2005 at 03:34:41PM -0400, [email protected] wrote:\n> >It isn't an urban myth that 64-bit math on a 64-bit processor is\n> >faster, at least if done using registers. It definately is faster.\n> >It may be an urban myth, though, that most applications perform\n> >a sufficient amount of 64-bit arithmetic to warrant the upgrade.\n> The mjor problem is that the definition of \"64bit processor\" is fuzzy.\n> The major slowdown of \"64bitness\" is the necessity of carting around \n> 64 bit pointers. It's not, however, necessary to do 64bit pointers to\n> get 64bit registers & fast 64 bit ops. E.g., sgi has \"n32\" & \"n64\" abi's\n> which can access exactly the same instruction set & registers, the\n> difference between them is the size of pointers and whether a \"long\" is\n> the same as a \"long long\". Any discussion of \"64 bit processors\" is\n> doomed from the start because people tend to start making implementation\n> assumptions on top of an already vague concept. Current & future\n> discussions are tinged by the fact that amd64 *doubles* the number\n> of registers in 64 bit mode, potentially providing a major speedup--but\n> one that doesn't really have anything to do with being \"64bit\". \n> Pretty much any discussion of 64 bit mode really needs to be a\n> discussion of a particular abi on a particular processor; talking about\n> \"64 bit processors\" abstractly is a waste of time.\n\nAgree. :-)\n\nAs this very thread has shown! 
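One cheap way to ground this part of the argument: the loop below does nothing but 64-bit multiply-and-add. Compiling it with gcc -O2 -S once for a 32-bit x86 target and once for a 64-bit target, then diffing the assembly, typically shows multi-instruction register-pair juggling on one side and a single multiply and add per iteration on the other. Whether that matters for a database workload is, as noted, a separate question; the snippet is only an illustration.

#include <stdint.h>
#include <stdio.h>

uint64_t mix64(const uint64_t *v, int n)
{
    uint64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = acc * 6364136223846793005ULL + v[i];   /* 64-bit mul + add */
    return acc;
}

int main(void)
{
    uint64_t sample[4] = { 1, 2, 3, 4 };
    printf("%llu\n", (unsigned long long) mix64(sample, 4));
    return 0;
}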
Hehe...\n\nThere is no way the manufacturers would release two machines, side by\nside that could easily show that the 64-bit version is slower for\nregular application loads. They added these other things specifically\nto mask this... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 24 Aug 2005 20:21:05 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" }, { "msg_contents": "\n> The first, to always remember - is that the move from 64-bits to\n> 32-bits doesn't come for free. In a real 64-bit system with a\n> 64-bit operating system, and 64-bit applications, pointers are\n> now double their 32-bit size. This means more bytes to copy around\n> memory, and in an extreme case, has the potential to approach\n> halfing both the memory latency to access many such pointers from\n> RAM, and half the effective amount of RAM. In real world cases,\n> not everything is a pointer, so this sort of performance degradation\n> is doubtful - but it is something to keep in mind.\n> \nIn addition to the above it lessens the effects of the CPU cache, so be \nsure to take the larger cached versions if you have structures needing \nto fit into the cache...\n\nmy 0.02 EUR\n\nthomas\n", "msg_date": "Thu, 25 Aug 2005 14:23:49 +0100", "msg_from": "Thomas Ganss\n\t<tganss_at_t_dash_online_dot_de-remove-all-after-first-real-dash@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Caching by Postgres" } ]
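The cache-fit point just above is easy to check empirically. The sketch below prints the size of the same logical record laid out carelessly versus with the wide members grouped first; the struct is invented for illustration, and the exact numbers depend on your ABI, so trust the program's output rather than the comments.

#include <stdint.h>
#include <stdio.h>

struct careless
{
    char     flag1;
    void    *ptr1;
    char     flag2;
    void    *ptr2;
    char     flag3;
    int64_t  counter;
};                              /* padding after each char costs space */

struct packed_by_hand
{
    void    *ptr1;
    void    *ptr2;
    int64_t  counter;
    char     flag1;
    char     flag2;
    char     flag3;
};                              /* wide members first, narrow last */

int main(void)
{
    printf("careless:        %zu bytes\n", sizeof(struct careless));
    printf("packed_by_hand:  %zu bytes\n", sizeof(struct packed_by_hand));
    return 0;
}

On a typical LP64 compiler the first layout comes out around 48 bytes and the second around 32, so half again as many of the reordered records fit in the same amount of cache.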
[ { "msg_contents": "> Does that include increasing the size of read/write blocks? I've \n> noticedthat with a large enough table it takes a while to do a \n> sequential scan,\n> even if it's cached; I wonder if the fact that it takes a million\n> read(2) calls to get through an 8G table is part of that.\n> \n\n\nActually some of that readaheads,etc the OS does already if it does some sort of throttling/clubbing of reads/writes. But its not enough for such types of workloads.\n\nHere is what I think will help:\n\n* Support for different Blocksize TABLESPACE without recompiling the code.. (Atlease support for a different Blocksize for the whole database without recompiling the code)\n\n* Support for bigger sizes of WAL files instead of 16MB files WITHOUT recompiling the code.. Should be a tuneable if you ask me (with checkpoint_segments at 256.. you have too many 16MB files in the log directory) (This will help OLTP benchmarks more since now they don't spend time rotating log files)\n\n* Introduce a multiblock or extent tunable variable where you can define a multiple of 8K (or BlockSize tuneable) to read a bigger chunk and store it in the bufferpool.. (Maybe writes too) (Most devices now support upto 1MB chunks for reads and writes)\n\n*There should be a way to preallocate files for TABLES in TABLESPACES otherwise with multiple table writes in the same filesystem ends with fragmented files which causes poor \"READS\" from the files. \n\n* With 64bit 1GB file chunks is also moot.. Maybe it should be tuneable too like 100GB without recompiling the code.\n\n\nWhy recompiling is bad? Most companies that will support Postgres will support their own binaries and they won't prefer different versions of binaries for different blocksizes, different WAL file sizes, etc... and hence more function using the same set of binaries is more desirable in enterprise environments\n\n\nRegards,\nJignesh\n\n\n", "msg_date": "Tue, 23 Aug 2005 17:29:01 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read/Write block sizes (Was: Caching by Postgres)" }, { "msg_contents": "[email protected] (Jignesh Shah) writes:\n>> Does that include increasing the size of read/write blocks? I've\n>> noticedthat with a large enough table it takes a while to do a\n>> sequential scan, even if it's cached; I wonder if the fact that it\n>> takes a million read(2) calls to get through an 8G table is part of\n>> that.\n>\n> Actually some of that readaheads,etc the OS does already if it does\n> some sort of throttling/clubbing of reads/writes. But its not enough\n> for such types of workloads.\n>\n> Here is what I think will help:\n>\n> * Support for different Blocksize TABLESPACE without recompiling the\n> code.. (Atlease support for a different Blocksize for the whole\n> database without recompiling the code)\n>\n> * Support for bigger sizes of WAL files instead of 16MB files\n> WITHOUT recompiling the code.. Should be a tuneable if you ask me\n> (with checkpoint_segments at 256.. you have too many 16MB files in\n> the log directory) (This will help OLTP benchmarks more since now\n> they don't spend time rotating log files)\n>\n> * Introduce a multiblock or extent tunable variable where you can\n> define a multiple of 8K (or BlockSize tuneable) to read a bigger\n> chunk and store it in the bufferpool.. 
(Maybe writes too) (Most\n> devices now support upto 1MB chunks for reads and writes)\n>\n> *There should be a way to preallocate files for TABLES in\n> TABLESPACES otherwise with multiple table writes in the same\n> filesystem ends with fragmented files which causes poor \"READS\" from\n> the files.\n>\n> * With 64bit 1GB file chunks is also moot.. Maybe it should be\n> tuneable too like 100GB without recompiling the code.\n>\n> Why recompiling is bad? Most companies that will support Postgres\n> will support their own binaries and they won't prefer different\n> versions of binaries for different blocksizes, different WAL file\n> sizes, etc... and hence more function using the same set of binaries\n> is more desirable in enterprise environments\n\nEvery single one of these still begs the question of whether the\nchanges will have a *material* impact on performance.\n\nWhat we have been finding, as RAID controllers get smarter, is that it\nis getting increasingly futile to try to attach knobs to 'disk stuff;'\nit is *way* more effective to add a few more spindles to an array than\nit is to fiddle with which disks are to be allocated to what database\n'objects.'\n\nThe above suggested 'knobs' are all going to add to complexity and it\nis NOT evident that any of them will forcibly help.\n\nI could be wrong; code contributions combined with Actual Benchmarking\nwould be the actual proof of the merits of the ideas.\n\nBut it also suggests another question, namely...\n\n Will these represent more worthwhile improvements to speed than\n working on other optimizations that are on the TODO list?\n\nIf someone spends 100h working on one of these items, and gets a 2%\nperformance improvement, that's almost certain to be less desirable\nthan spending 50h on something else that gets a 4% improvement.\n\nAnd we might discover that memory management improvements in Linux\n2.6.16 or FreeBSD 5.5 allow some OS kernels to provide some such\nimprovements \"for free\" behind our backs without *any* need to write\ndatabase code. :-)\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nWiener's Law of Libraries:\n There are no answers, only cross references.\n", "msg_date": "Tue, 23 Aug 2005 18:09:09 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:\n>Actually some of that readaheads,etc the OS does already if it does\n>some sort of throttling/clubbing of reads/writes.\n\nNote that I specified the fully cached case--even with the workload in\nRAM the system still has to process a heck of a lot of read calls.\n\n>* Introduce a multiblock or extent tunable variable where you can\n>define a multiple of 8K (or BlockSize tuneable) to read a bigger chunk\n>and store it in the bufferpool.. (Maybe writes too) (Most devices now\n>support upto 1MB chunks for reads and writes)\n\nYeah. The problem with relying on OS readahead is that the OS doesn't\nknow whether you're doing a sequential scan or an index scan; if you\nhave the OS agressively readahead you'll kill your seek performance.\nOTOH, if you don't do readaheads you'll kill your sequential scan\nperformance. 
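A sketch of what "the app knows which makes sense" could look like for the index-scan case, using posix_fadvise(POSIX_FADV_WILLNEED) to tell the kernel about blocks the scan already knows it will touch. This is an illustration under the usual POSIX assumptions, not a description of anything PostgreSQL currently does; the block numbers and file name are invented.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLCKSZ 8192

static void prefetch_blocks(int fd, const long *blknos, int nblocks)
{
#ifdef POSIX_FADV_WILLNEED
    for (int i = 0; i < nblocks; i++)
        (void) posix_fadvise(fd,
                             (off_t) blknos[i] * BLCKSZ,
                             BLCKSZ,
                             POSIX_FADV_WILLNEED);   /* "I'll read this soon" */
#else
    (void) fd; (void) blknos; (void) nblocks;
#endif
}

int main(void)
{
    long upcoming[] = { 17, 4242, 9, 1031 };     /* made-up heap block numbers */
    int  fd = open("some_table_file", O_RDONLY); /* hypothetical path */

    if (fd < 0) { perror("open"); return 1; }
    prefetch_blocks(fd, upcoming, 4);
    /* ... process the current block while those reads are in flight ... */
    close(fd);
    return 0;
}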
At the app level you know which makes sense for each\noperation.\n\nMike Stone\n", "msg_date": "Tue, 23 Aug 2005 19:12:38 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes (Was: Caching by Postgres)" }, { "msg_contents": "On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n>What we have been finding, as RAID controllers get smarter, is that it\n>is getting increasingly futile to try to attach knobs to 'disk stuff;'\n>it is *way* more effective to add a few more spindles to an array than\n>it is to fiddle with which disks are to be allocated to what database\n>'objects.'\n\nThat statement doesn't say anything about trying to maximize performance\nto or from a disk array. Yes, controllers are getting smarter--but they\naren't omnicient. IME an I/O bound sequential table scan doesn't get\ndata moving off the disk nearly as fast as say, a dd with a big ibs.\nWhy? There's obviously a lot of factors at work, but one of those\nfactors is that the raid controller can optimize \"grab this meg\" a lot\nmore than it can optimize \"grab this 8k\". \n\nMike Stone\n", "msg_date": "Tue, 23 Aug 2005 19:24:24 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n> [email protected] (Jignesh Shah) writes:\n> >> Does that include increasing the size of read/write blocks? I've\n> >> noticedthat with a large enough table it takes a while to do a\n> >> sequential scan, even if it's cached; I wonder if the fact that it\n> >> takes a million read(2) calls to get through an 8G table is part of\n> >> that.\n> >\n> > Actually some of that readaheads,etc the OS does already if it does\n> > some sort of throttling/clubbing of reads/writes. But its not enough\n> > for such types of workloads.\n> >\n> > Here is what I think will help:\n> >\n> > * Support for different Blocksize TABLESPACE without recompiling the\n> > code.. (Atlease support for a different Blocksize for the whole\n> > database without recompiling the code)\n> >\n> > * Support for bigger sizes of WAL files instead of 16MB files\n> > WITHOUT recompiling the code.. Should be a tuneable if you ask me\n> > (with checkpoint_segments at 256.. you have too many 16MB files in\n> > the log directory) (This will help OLTP benchmarks more since now\n> > they don't spend time rotating log files)\n> >\n> > * Introduce a multiblock or extent tunable variable where you can\n> > define a multiple of 8K (or BlockSize tuneable) to read a bigger\n> > chunk and store it in the bufferpool.. (Maybe writes too) (Most\n> > devices now support upto 1MB chunks for reads and writes)\n> >\n> > *There should be a way to preallocate files for TABLES in\n> > TABLESPACES otherwise with multiple table writes in the same\n> > filesystem ends with fragmented files which causes poor \"READS\" from\n> > the files.\n> >\n> > * With 64bit 1GB file chunks is also moot.. Maybe it should be\n> > tuneable too like 100GB without recompiling the code.\n> >\n> > Why recompiling is bad? Most companies that will support Postgres\n> > will support their own binaries and they won't prefer different\n> > versions of binaries for different blocksizes, different WAL file\n> > sizes, etc... 
and hence more function using the same set of binaries\n> > is more desirable in enterprise environments\n> \n> Every single one of these still begs the question of whether the\n> changes will have a *material* impact on performance.\n\nHow many of these things are currently easy to change with a recompile?\nI should be able to start testing some of these ideas in the near\nfuture, if they only require minor code or configure changes.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Tue, 23 Aug 2005 18:36:08 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Tue, 2005-08-23 at 19:12 -0400, Michael Stone wrote:\n> On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:\n> >Actually some of that readaheads,etc the OS does already if it does\n> >some sort of throttling/clubbing of reads/writes.\n> \n> Note that I specified the fully cached case--even with the workload in\n> RAM the system still has to process a heck of a lot of read calls.\n> \n> >* Introduce a multiblock or extent tunable variable where you can\n> >define a multiple of 8K (or BlockSize tuneable) to read a bigger chunk\n> >and store it in the bufferpool.. (Maybe writes too) (Most devices now\n> >support upto 1MB chunks for reads and writes)\n> \n> Yeah. The problem with relying on OS readahead is that the OS doesn't\n> know whether you're doing a sequential scan or an index scan; if you\n> have the OS agressively readahead you'll kill your seek performance.\n> OTOH, if you don't do readaheads you'll kill your sequential scan\n> performance. At the app level you know which makes sense for each\n> operation.\n\nThis is why we have MADVISE_RANDOM and MADVISE_SEQUENTIAL.\n\n-jwb\n", "msg_date": "Tue, 23 Aug 2005 16:44:10 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes (Was: Caching by Postgres)" }, { "msg_contents": "Chris,\n\nUnless I am wrong, you're making the assumpting the amount of time spent\nand ROI is known. Maybe those who've been down this path know how to get\nthat additional 2-4% in 30 minutes or less? \n\nWhile each person and business' performance gains (or not) could vary,\nsomeone spending the 50-100h to gain 2-4% over a course of a month for a\n24x7 operation would seem worth the investment? \n\nI would assume that dbt2 with STP helps minimize the amount of hours\nsomeone has to invest to determine performance gains with configurable\noptions? \n\nSteve Poe\n\n> If someone spends 100h working on one of these items, and gets a 2%\n> performance improvement, that's almost certain to be less desirable\n> than spending 50h on something else that gets a 4% improvement.\n> \n> And we might discover that memory management improvements in Linux\n> 2.6.16 or FreeBSD 5.5 allow some OS kernels to provide some such\n> improvements \"for free\" behind our backs without *any* need to write\n> database code. 
:-)\n\n", "msg_date": "Wed, 24 Aug 2005 01:25:43 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "Hi Jim,\n\n| How many of these things are currently easy to change with a recompile?\n| I should be able to start testing some of these ideas in the near\n| future, if they only require minor code or configure changes.\n\n\nThe following\n* Data File Size 1GB\n* WAL File Size of 16MB\n* Block Size of 8K\n\nAre very easy to change with a recompile.. A Tunable will be greatly \nprefered as it will allow one binary for different tunings\n\n* MultiBlock read/write\n\nIs not available but will greatly help in reducing the number of system \ncalls which will only increase as the size of the database increases if \nsomething is not done about i.\n\n* Pregrown files... maybe not important at this point since TABLESPACE \ncan currently work around it a bit (Just need to create a different file \nsystem for each tablespace\n\nBut if you really think hardware & OS is the answer for all small \nthings...... I think we should now start to look on how to make Postgres \nMulti-threaded or multi-processed for each connection. With the influx \nof \"Dual-Core\" or \"Multi-Core\" being the fad.... Postgres can have the \ncutting edge if somehow exploiting cores is designed.\n\nSomebody mentioned that adding CPU to Postgres workload halved the \naverage CPU usage...\nYEAH... PostgreSQL uses only 1 CPU per connection (assuming 100% \nusage) so if you add another CPU it is idle anyway and the system will \nreport only 50% :-) BUT the importing to measure is.. whether the query \ntime was cut down or not? ( No flames I am sure you were talking about \nmulti-connection multi-user environment :-) ) But my point is then this \napproach is worth the ROI and the time and effort spent to solve this \nproblem.\n\nI actually vote for a multi-threaded solution for each connection while \nstill maintaining seperate process for each connections... This way the \nfundamental architecture of Postgres doesn't change, however a \nmulti-threaded connection can then start to exploit different cores.. \n(Maybe have tunables for number of threads to read data files who \nknows.. If somebody is interested in actually working a design .. \ncontact me and I will be interested in assisting this work.\n\nRegards,\nJignesh\n\n\nJim C. Nasby wrote:\n\n>On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n> \n>\n>>[email protected] (Jignesh Shah) writes:\n>> \n>>\n>>>>Does that include increasing the size of read/write blocks? I've\n>>>>noticedthat with a large enough table it takes a while to do a\n>>>>sequential scan, even if it's cached; I wonder if the fact that it\n>>>>takes a million read(2) calls to get through an 8G table is part of\n>>>>that.\n>>>> \n>>>>\n>>>Actually some of that readaheads,etc the OS does already if it does\n>>>some sort of throttling/clubbing of reads/writes. But its not enough\n>>>for such types of workloads.\n>>>\n>>>Here is what I think will help:\n>>>\n>>>* Support for different Blocksize TABLESPACE without recompiling the\n>>>code.. (Atlease support for a different Blocksize for the whole\n>>>database without recompiling the code)\n>>>\n>>>* Support for bigger sizes of WAL files instead of 16MB files\n>>>WITHOUT recompiling the code.. Should be a tuneable if you ask me\n>>>(with checkpoint_segments at 256.. 
you have too many 16MB files in\n>>>the log directory) (This will help OLTP benchmarks more since now\n>>>they don't spend time rotating log files)\n>>>\n>>>* Introduce a multiblock or extent tunable variable where you can\n>>>define a multiple of 8K (or BlockSize tuneable) to read a bigger\n>>>chunk and store it in the bufferpool.. (Maybe writes too) (Most\n>>>devices now support upto 1MB chunks for reads and writes)\n>>>\n>>>*There should be a way to preallocate files for TABLES in\n>>>TABLESPACES otherwise with multiple table writes in the same\n>>>filesystem ends with fragmented files which causes poor \"READS\" from\n>>>the files.\n>>>\n>>>* With 64bit 1GB file chunks is also moot.. Maybe it should be\n>>>tuneable too like 100GB without recompiling the code.\n>>>\n>>>Why recompiling is bad? Most companies that will support Postgres\n>>>will support their own binaries and they won't prefer different\n>>>versions of binaries for different blocksizes, different WAL file\n>>>sizes, etc... and hence more function using the same set of binaries\n>>>is more desirable in enterprise environments\n>>> \n>>>\n>>Every single one of these still begs the question of whether the\n>>changes will have a *material* impact on performance.\n>> \n>>\n\n", "msg_date": "Tue, 23 Aug 2005 22:22:04 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "Steve,\n\n> I would assume that dbt2 with STP helps minimize the amount of hours\n> someone has to invest to determine performance gains with configurable\n> options?\n\nActually, these I/O operation issues show up mainly with DW workloads, so the \nSTP isn't much use there. If I can ever get some of these machines back \nfrom the build people, I'd like to start testing some stuff.\n\nOne issue with testing this is that currently PostgreSQL doesn't support block \nsizes above 128K. We've already done testing on that (well, Mark has) and \nthe performance gains aren't even worth the hassle of remembering you're on a \ndifferent block size (like, +4%).\n\nWhat the Sun people have done with other DB systems is show that substantial \nperformance gains are possible on large databases (>100G) using block sizes \nof 1MB. I believe that's possible (and that it probably makes more of a \ndifference on Solaris than on BSD) but we can't test it without some hackery \nfirst.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 23 Aug 2005 19:31:29 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Tue, 2005-08-23 at 19:31 -0700, Josh Berkus wrote:\n> Steve,\n> \n> > I would assume that dbt2 with STP helps minimize the amount of hours\n> > someone has to invest to determine performance gains with configurable\n> > options?\n> \n> Actually, these I/O operation issues show up mainly with DW workloads, so the \n> STP isn't much use there. If I can ever get some of these machines back \n> from the build people, I'd like to start testing some stuff.\n> \n> One issue with testing this is that currently PostgreSQL doesn't support block \n> sizes above 128K. 
We've already done testing on that (well, Mark has) and \n> the performance gains aren't even worth the hassle of remembering you're on a \n> different block size (like, +4%).\n> \n> What the Sun people have done with other DB systems is show that substantial \n> performance gains are possible on large databases (>100G) using block sizes \n> of 1MB. I believe that's possible (and that it probably makes more of a \n> difference on Solaris than on BSD) but we can't test it without some hackery \n> first.\n\nTo get decent I/O you need 1MB fundamental units all the way down the\nstack. You need a filesystem that can take a 1MB write well, and you\nneed an I/O scheduler that will keep it together, and you need a storage\ncontroller that can eat a 1MB request at once. Ideally you'd like an\narchitecture with a 1MB page (Itanium has this, and AMD64 Linux will\nsoon have this.) The Lustre people have done some work in this area,\nopening up the datapaths in the kernel so they can keep the hardware\nreally working. They even modified the QLogic SCSI/FC driver so it\nsupports such large transfers. Their work has shown that you can get\nsignificant perf boost on Linux just by thinking in terms of larger\ntransfers.\n\nUnfortunately I'm really afraid that this conversation is about trees\nwhen the forest is the problem. PostgreSQL doesn't even have an async\nreader, which is the sort of thing that could double or triple its\nperformance. You're talking about block sizes and such, but the kinds\nof improvements you can get there are in the tens of percents at most.\n\n-jwb\n\n", "msg_date": "Tue, 23 Aug 2005 20:07:34 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "Josh Berkus wrote:\n\n>Steve,\n>\n> \n>\n>>I would assume that dbt2 with STP helps minimize the amount of hours\n>>someone has to invest to determine performance gains with configurable\n>>options?\n>> \n>>\n>\n>Actually, these I/O operation issues show up mainly with DW workloads, so the \n>STP isn't much use there. If I can ever get some of these machines back \n>from the build people, I'd like to start testing some stuff.\n>\n>One issue with testing this is that currently PostgreSQL doesn't support block \n>sizes above 128K. We've already done testing on that (well, Mark has) and \n>the performance gains aren't even worth the hassle of remembering you're on a \n>different block size (like, +4%).\n> \n>\nWhat size database was this on?\n\n>What the Sun people have done with other DB systems is show that substantial \n>performance gains are possible on large databases (>100G) using block sizes \n>of 1MB. I believe that's possible (and that it probably makes more of a \n>difference on Solaris than on BSD) but we can't test it without some hackery \n>first.\n>\nWe're running on a 100+GB database, with long streams of 8KB reads with \nthe occasional _llseek(). I've been thinking about running with a \nlarger blocksize with the expectation that we'd see fewer system calls \nand a bit more throughput.\n\nread() calls are a very expensive way to get 8KB of memory (that we know \nis already resident) during scans. One has to trap into the kernel, do \nthe usual process state accounting, find the block, copy the memory to \nuserspace, return back from the kernel to user space reversing all the \nprocess accounting, pick out the bytes one needs, and repeat all over \nagain. That's quite a few sacrificial cache lines for 8KB. 
Yeah, \nsure, Linux syscalls are fast, but they aren't that fast, and other \noperating systems (windows and solaris) have a bit more overhead on \nsyscalls.\n\nRegarding large blocks sizes on Solaris: the Solaris folks can also use \nlarge memory pages and avoid a lot of the TLB overhead from the VM \nsystem. The various trapstat and cpustat commands can be quite \ninteresting to look at when running any large application on a Solaris \nsystem. \n\nIt should be noted that having a large shared memory segment can be a \nperformance looser just from the standpoint of TLB thrashing. O(GB) \nmemory access patterns can take a huge performance hit in user space \nwith 4K pages compared to the kernel which would be mapping the \"segmap\" \n(in Solaris parlance) with 4MB pages.\n\nAnyway, I guess my point is that the balance between kernel managed vs. \npostgresql managed buffer isn't obvious at all.\n\n-- Alan\n", "msg_date": "Wed, 24 Aug 2005 00:00:26 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "\"Jeffrey W. Baker\" <[email protected]> writes:\n> To get decent I/O you need 1MB fundamental units all the way down the\n> stack.\n\nIt would also be a good idea to have an application that isn't likely\nto change a single bit in a 1MB range and then expect you to record\nthat change. This pretty much lets Postgres out of the picture.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Aug 2005 01:10:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes " }, { "msg_contents": "> Unfortunately I'm really afraid that this conversation is about trees\n> when the forest is the problem. PostgreSQL doesn't even have an async\n> reader, which is the sort of thing that could double or triple its\n> performance. You're talking about block sizes and such, but the kinds\n> of improvements you can get there are in the tens of percents at most.\n\nNot 100% sure, but I'm fairly cirtain we were seeing significant performance\ndegradation by too much _scheduled_ I/O activity\n\nie: too much work being submitted to the kernel, due to excessive\nparallelism already!!\n\nThe classic example of this is a seqscan being interleved by a index scan,\nand the disks end up doing nothing but seek activity\n\nOut of all the stuff talked about on this thread so far, only tweaking the\nblock size (and the madvise() stuff) makes any real-world sense, as its the\nonly thing talked about that increases the _work_per_seek_.\n\nAs for the async IO, sure you might think 'oh async IO would be so cool!!'\nand I did, once, too. 
But then I sat down and _thought_ about it, and\ndecided well, no, actually, theres _very_ few areas it could actually help,\nand in most cases it just make it easier to drive your box into lseek()\ninduced IO collapse.\n\nDont forget that already in postgres, you have a process per connection, and\nall the processes take care of their own I/O.\n\nSomebody mentioned having threaded backends too, but the only benefit would\nbe reduced memory footprint (a backend consumes 1-2MB of RAM, which is\nalmost enough to be a concern for largish systems with a lot of backends)\nbut personally I _know_ the complixities introduced through threading are\nusually not worth it.\n\n\nIMVVHO (naive experience) what is needed is a complete architecture change\n(probably infeasible and only useful as a thought experiment), where:\n\n* a network I/O process deals with client connections\n* a limited pool of worker processes deal with statements (perhaps related\n to number of spindles somehow)\n\nso when a client issues a statement, the net-IO process simply forwards the\nconnection state to a worker process and says 'deal with this'.\n(Clearly the state object needs to contain all user and transaction state\nthe connection is involved in).\n\n- Guy Thornley\n", "msg_date": "Wed, 24 Aug 2005 17:20:23 +1200", "msg_from": "Guy Thornley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:\n> As for the async IO, sure you might think 'oh async IO would be so cool!!'\n> and I did, once, too. But then I sat down and _thought_ about it, and\n> decided well, no, actually, theres _very_ few areas it could actually help,\n> and in most cases it just make it easier to drive your box into lseek()\n> induced IO collapse.\n> \n> Dont forget that already in postgres, you have a process per connection, and\n> all the processes take care of their own I/O.\n\nThat's the problem. Instead you want 1 or 4 or 10 i/o slaves\ncoordinating the I/O of all the backends optimally. For instance, with\nsynchronous scanning.\n\n-jwb\n\n", "msg_date": "Tue, 23 Aug 2005 22:25:21 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "\"Jeffrey W. Baker\" <[email protected]> writes:\n> On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:\n>> Dont forget that already in postgres, you have a process per connection, and\n>> all the processes take care of their own I/O.\n\n> That's the problem. Instead you want 1 or 4 or 10 i/o slaves\n> coordinating the I/O of all the backends optimally. For instance, with\n> synchronous scanning.\n\nAnd why exactly are we going to do a better job of I/O scheduling than\nthe OS itself can do?\n\nThere's a fairly basic disconnect in viewpoint involved here. The\nold-school viewpoint (as embodied in Oracle and a few other DBMSes)\nis that the OS is too stupid to be worth anything, and the DB should\nbypass the OS to the greatest extent possible, doing its own caching,\ndisk space layout, I/O scheduling, yadda yadda. That might have been\ndefensible twenty-odd years ago when Oracle was designed. Postgres\nprefers to lay off to the OS anything that the OS can do well --- and\nthat definitely includes caching and I/O scheduling. There are a whole\nlot of smart people working on those problems at the OS level. 
Maybe we\ncould make marginal improvements on their results after spending a lot\nof effort reinventing the wheel ... but our time will be repaid much\nmore if we work at levels that the OS cannot have knowledge of, such as\njoin planning and data statistics.\n\nThere are some things we could do to reduce the impedance between us and\nthe OS --- for instance, the upthread criticism that a seqscan asks the\nOS for only 8K at a time is fair enough. But that doesn't translate\nto a conclusion that we should schedule the I/O instead of the OS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Aug 2005 01:56:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes " }, { "msg_contents": "On Wed, 2005-08-24 at 01:56 -0400, Tom Lane wrote:\n> \"Jeffrey W. Baker\" <[email protected]> writes:\n> > On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:\n> >> Dont forget that already in postgres, you have a process per connection, and\n> >> all the processes take care of their own I/O.\n> \n> > That's the problem. Instead you want 1 or 4 or 10 i/o slaves\n> > coordinating the I/O of all the backends optimally. For instance, with\n> > synchronous scanning.\n> \n> And why exactly are we going to do a better job of I/O scheduling than\n> the OS itself can do?\n...\n> There are some things we could do to reduce the impedance between us and\n> the OS --- for instance, the upthread criticism that a seqscan asks the\n> OS for only 8K at a time is fair enough. But that doesn't translate\n> to a conclusion that we should schedule the I/O instead of the OS.\n\nSynchronous scanning is a fairly huge and obvious win. If you have two\nprocesses 180 degrees out-of-phase in a linear read, neither process is\ngoing to get anywhere near the throughput they would get from a single\nscan.\n\nI think you're being deliberately obtuse with regards to file I/O and\nthe operating system. The OS isn't magical. It has to strike a balance\nbetween a reasonable read latency and a reasonable throughput. As far\nas the kernel is concerned, a busy postgresql server is\nindistinguishable from 100 unrelated activities. All backends will be\nserved equally, even if in this case \"equally\" means \"quite badly all\naround.\"\n\nAn I/O slave process could be a big win in Postgres for many kinds of\nreads. Instead of opening and reading files the backends would connect\nto the I/O slave and request the file be read. If a scan of that file\nwere already underway, the new backends would be attached. Otherwise a\nnew scan would commence. In either case, the slave process can issue\n(sometimes non-dependant) reads well ahead of the needs of the backend.\nYou may think the OS can do this for you but it can't. On postgres\nknows that it needs the whole file from beginning to end. The OS can\nonly guess.\n\nAsk me sometime about my replacement for GNU sort. It uses the same\nsorting algorithm, but it's an order of magnitude faster due to better\nI/O strategy. Someday, in my infinite spare time, I hope to demonstrate\nthat kind of improvement with a patch to pg.\n\n-jwb\n\n", "msg_date": "Tue, 23 Aug 2005 23:22:52 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "\n> of effort reinventing the wheel ... 
but our time will be repaid much\n> more if we work at levels that the OS cannot have knowledge of, such as\n> join planning and data statistics.\n\n\tConsidering a global budget of man-hours which is the best ?\n\n1- Spend it on reimplementing half of VFS in postgres, half of Windows in \npostgres, half of FreeBSD in postgres, half of Solaris in Postgres, only \nto discover you gain a meagre speed increase and a million and a half bugs,\n\n2- Spending 5% of that time lowering the impedance between the OS and \nPostgres, and another 5% annoying Kernel people and helping them tweaking \nstuff for database use, and the rest on useful features that give useful \nspeedups, like bitmap indexes, skip scans, and other features that enhance \npower and usability ?\n\nIf you're Oracle and have almost unlimited resources, maybe. But even \nMicrosoft opted for option 2 : they implemented ReadFileGather and \nWriteFileScatter to lower the syscall overhead and that's it.\n\nAnd point 2 will benefit to many other apps, wether 1 would benefit only \npostgres, and then only in certain cases.\n\nI do believe there is something ineresting to uncover with reiser4 though \n(it definitely fits point 2).\n\nI'm happy that the pg team chose point 2 and that new versions keep coming \nwith new features at an unbelievable rate these times. Do you guys sleep ?\n", "msg_date": "Wed, 24 Aug 2005 12:35:08 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes " }, { "msg_contents": "\nThis thread covers several performance ideas. First is the idea that\nmore parameters should be configurable. While this seems like a noble\ngoal, we try to make parameters auto-tuning, or if users have to\nconfigure it, the parameter should be useful for a significant number of\nusers.\n\nIn the commercial software world, if you can convince your boss that a\nfeature/knob is useful, it usually gets into the product. \nUnfortunately, this leads to the golden doorknob on a shack, where some\nfeatures are out of sync with the rest of the product in terms of\nusefulness and utility. With open source, if a feature can not be\nauto-tuned, or has significant overhead, the features has to be\nimplemented and then proven to be a benefit.\n\nIn terms of adding async I/O, threading, and other things, it might make\nsense to explore how these could be implemented in a way that fits the\nabove criteria.\n\n---------------------------------------------------------------------------\n\nJignesh K. Shah wrote:\n> Hi Jim,\n> \n> | How many of these things are currently easy to change with a recompile?\n> | I should be able to start testing some of these ideas in the near\n> | future, if they only require minor code or configure changes.\n> \n> \n> The following\n> * Data File Size 1GB\n> * WAL File Size of 16MB\n> * Block Size of 8K\n> \n> Are very easy to change with a recompile.. A Tunable will be greatly \n> prefered as it will allow one binary for different tunings\n> \n> * MultiBlock read/write\n> \n> Is not available but will greatly help in reducing the number of system \n> calls which will only increase as the size of the database increases if \n> something is not done about i.\n> \n> * Pregrown files... maybe not important at this point since TABLESPACE \n> can currently work around it a bit (Just need to create a different file \n> system for each tablespace\n> \n> But if you really think hardware & OS is the answer for all small \n> things...... 
I think we should now start to look on how to make Postgres \n> Multi-threaded or multi-processed for each connection. With the influx \n> of \"Dual-Core\" or \"Multi-Core\" being the fad.... Postgres can have the \n> cutting edge if somehow exploiting cores is designed.\n> \n> Somebody mentioned that adding CPU to Postgres workload halved the \n> average CPU usage...\n> YEAH... PostgreSQL uses only 1 CPU per connection (assuming 100% \n> usage) so if you add another CPU it is idle anyway and the system will \n> report only 50% :-) BUT the importing to measure is.. whether the query \n> time was cut down or not? ( No flames I am sure you were talking about \n> multi-connection multi-user environment :-) ) But my point is then this \n> approach is worth the ROI and the time and effort spent to solve this \n> problem.\n> \n> I actually vote for a multi-threaded solution for each connection while \n> still maintaining seperate process for each connections... This way the \n> fundamental architecture of Postgres doesn't change, however a \n> multi-threaded connection can then start to exploit different cores.. \n> (Maybe have tunables for number of threads to read data files who \n> knows.. If somebody is interested in actually working a design .. \n> contact me and I will be interested in assisting this work.\n> \n> Regards,\n> Jignesh\n> \n> \n> Jim C. Nasby wrote:\n> \n> >On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n> > \n> >\n> >>[email protected] (Jignesh Shah) writes:\n> >> \n> >>\n> >>>>Does that include increasing the size of read/write blocks? I've\n> >>>>noticedthat with a large enough table it takes a while to do a\n> >>>>sequential scan, even if it's cached; I wonder if the fact that it\n> >>>>takes a million read(2) calls to get through an 8G table is part of\n> >>>>that.\n> >>>> \n> >>>>\n> >>>Actually some of that readaheads,etc the OS does already if it does\n> >>>some sort of throttling/clubbing of reads/writes. But its not enough\n> >>>for such types of workloads.\n> >>>\n> >>>Here is what I think will help:\n> >>>\n> >>>* Support for different Blocksize TABLESPACE without recompiling the\n> >>>code.. (Atlease support for a different Blocksize for the whole\n> >>>database without recompiling the code)\n> >>>\n> >>>* Support for bigger sizes of WAL files instead of 16MB files\n> >>>WITHOUT recompiling the code.. Should be a tuneable if you ask me\n> >>>(with checkpoint_segments at 256.. you have too many 16MB files in\n> >>>the log directory) (This will help OLTP benchmarks more since now\n> >>>they don't spend time rotating log files)\n> >>>\n> >>>* Introduce a multiblock or extent tunable variable where you can\n> >>>define a multiple of 8K (or BlockSize tuneable) to read a bigger\n> >>>chunk and store it in the bufferpool.. (Maybe writes too) (Most\n> >>>devices now support upto 1MB chunks for reads and writes)\n> >>>\n> >>>*There should be a way to preallocate files for TABLES in\n> >>>TABLESPACES otherwise with multiple table writes in the same\n> >>>filesystem ends with fragmented files which causes poor \"READS\" from\n> >>>the files.\n> >>>\n> >>>* With 64bit 1GB file chunks is also moot.. Maybe it should be\n> >>>tuneable too like 100GB without recompiling the code.\n> >>>\n> >>>Why recompiling is bad? Most companies that will support Postgres\n> >>>will support their own binaries and they won't prefer different\n> >>>versions of binaries for different blocksizes, different WAL file\n> >>>sizes, etc... 
and hence more function using the same set of binaries\n> >>>is more desirable in enterprise environments\n> >>> \n> >>>\n> >>Every single one of these still begs the question of whether the\n> >>changes will have a *material* impact on performance.\n> >> \n> >>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 24 Aug 2005 09:52:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "[email protected] (Steve Poe) writes:\n> Chris,\n>\n> Unless I am wrong, you're making the assumpting the amount of time spent\n> and ROI is known. Maybe those who've been down this path know how to get\n> that additional 2-4% in 30 minutes or less? \n>\n> While each person and business' performance gains (or not) could vary,\n> someone spending the 50-100h to gain 2-4% over a course of a month for a\n> 24x7 operation would seem worth the investment? \n\nWhat we *do* know is that adding these \"knobs\" would involve a\nsignificant amount of effort, as the values are widely used throughout\nthe database engine. Making them dynamic (e.g. - so they could be\ntuned on a tablespace-by-tablespace basis) would undoubtedly require\nrather a lot of development effort. They are definitely NOT 30 minute\nchanges.\n\nMoreover, knowing how to adjust them is almost certainly also NOT a 30\nminute configuration change; significant benchmarking effort for the\nindividual application is almost sure to be needed.\n\nIt's not much different from the reason why PostgreSQL doesn't use\nthreading...\n\nThe problem with using threading is that introducing it to the code\nbase would require a pretty enormous amount of effort (I'll bet\nmultiple person-years), and it wouldn't provide *any* benefit until\nyou get rather a long ways down the road.\n\nEveryone involved in development seems to me to have a reasonably keen\nunderstanding as to what the potential benefits of threading are; the\nvalue is that there fall out plenty of opportunities to parallelize\nthe evaluation of portions of queries. Alas, it wouldn't be until\n*after* all the effort goes in that we would get any idea as to what\nkinds of speedups this would provide.\n\nIn effect, there has to be a year invested in *breaking* PostgreSQL\n(because this would initially break a lot, since thread programming is\na really tough skill) where you don't actually see any benefits.\n\n> I would assume that dbt2 with STP helps minimize the amount of hours\n> someone has to invest to determine performance gains with\n> configurable options?\n\nThat's going to help in constructing a \"default\" knob value. 
And if\nwe find an \"optimal default,\" that encourages sticking with the\ncurrent approach, of using #define to apply that value...\n\n>> If someone spends 100h working on one of these items, and gets a 2%\n>> performance improvement, that's almost certain to be less desirable\n>> than spending 50h on something else that gets a 4% improvement.\n>> \n>> And we might discover that memory management improvements in Linux\n>> 2.6.16 or FreeBSD 5.5 allow some OS kernels to provide some such\n>> improvements \"for free\" behind our backs without *any* need to write\n>> database code. :-)\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/lisp.html\n\"For those of you who are into writing programs that are as obscure\nand complicated as possible, there are opportunities for... real fun\nhere\" -- Arthur Norman\n", "msg_date": "Wed, 24 Aug 2005 12:12:22 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "Tom, Gavin,\n\n\n> > To get decent I/O you need 1MB fundamental units all the way down the\n> > stack.\n>\n> It would also be a good idea to have an application that isn't likely\n> to change a single bit in a 1MB range and then expect you to record\n> that change. This pretty much lets Postgres out of the picture.\n\nWe're looking at this pretty much just for data warehousing, where you \nconstantly have gigabytes of data which don't change from month to month or \neven year to year. I agree that it would *not* be an optimization for OLTP \nsystems. Which is why a build-time option would be fine.\n\n> Ummm... I don't see anything here which will be a win for Postgres. The\n> transactional semantics we're interested in are fairly complex:\n>\n> 1) Modifications to multiple objects can become visible to the system\n> atomically\n> 2) On error, a series of modifications which had been grouped together\n> within a transaction can be rolled back\n> 3) Using object version information, determine which version of which\n> object is visible to a given session\n> 4) Using version information and locking, detect and resolve read/write\n> and write/write conflicts\n\nI wasn't thinking of database transactions. I was thinking specifically of \nusing Reiser4 transactions (and other transactional filesytems) to do things \nlike eliminate the need for full page writes in the WAL. Filesystems are \nlow-level things which should take care of low-level needs, like making sure \nan 8K page got written to disk even in the event of a system failure.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 24 Aug 2005 09:26:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "On Wed, Aug 24, 2005 at 12:12:22PM -0400, Chris Browne wrote:\n> Everyone involved in development seems to me to have a reasonably keen\n> understanding as to what the potential benefits of threading are; the\n> value is that there fall out plenty of opportunities to parallelize\n> the evaluation of portions of queries. 
Alas, it wouldn't be until\n> *after* all the effort goes in that we would get any idea as to what\n> kinds of speedups this would provide.\n\nMy understanding is that the original suggestion was to use threads\nwithin individual backends to allow for parallel query execution, not\nswiching to a completely thread-based model.\n\nIn any case, there are other ways to enable parallelism without using\nthreads, such as handing actual query execution off to a set of\nprocesses.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Wed, 24 Aug 2005 15:57:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "Jeff,\n\n> Ask me sometime about my replacement for GNU sort.  It uses the same\n> sorting algorithm, but it's an order of magnitude faster due to better\n> I/O strategy.  Someday, in my infinite spare time, I hope to demonstrate\n> that kind of improvement with a patch to pg.\n\nSince we desperately need some improvements in sort performance, I do hope \nyou follow up on this.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 25 Aug 2005 12:45:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "At 03:45 PM 8/25/2005, Josh Berkus wrote:\n>Jeff,\n>\n> > Ask me sometime about my replacement for GNU sort.  It uses the same\n> > sorting algorithm, but it's an order of magnitude faster due to better\n> > I/O strategy.  Someday, in my infinite spare time, I hope to demonstrate\n> > that kind of improvement with a patch to pg.\n>\n>Since we desperately need some improvements in sort performance, I do hope\n>you follow up on this.\n>\n>--\n>--Josh\n\nI'll generalize that. IMO we desperately need \nany and all improvements in IO performance. Even \nmore so than we need improvements in sorting or sorting IO performance.\n\nRon\n\n\n", "msg_date": "Thu, 25 Aug 2005 16:26:34 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "[email protected] (Ron) writes:\n> At 03:45 PM 8/25/2005, Josh Berkus wrote:\n>> > Ask me sometime about my replacement for GNU sort. � It uses the\n>> > same sorting algorithm, but it's an order of magnitude faster due\n>> > to better I/O strategy. � Someday, in my infinite spare time, I\n>> > hope to demonstrate that kind of improvement with a patch to pg.\n>>\n>>Since we desperately need some improvements in sort performance, I\n>>do hope you follow up on this.\n>\n> I'll generalize that. IMO we desperately need any and all\n> improvements in IO performance. Even more so than we need\n> improvements in sorting or sorting IO performance.\n\nThat's frankly a step backwards.\n\nFeel free to \"specialise\" that instead. 
\n\nA patch that improves some specific aspect of performance is a\nthousand times better than any sort of \"desperate desire for any and\nall improvements in I/O performance.\"\n\nThe latter is unlikely to provide any usable result.\n\nThe \"specialized patch\" is also pointedly better in that a\n*confidently submitted* patch is likely to be way better than any sort\nof \"desperate clutching at whatever may come to hand.\"\n\nFar too often, I see people trying to address performance problems via\nthe \"desperate clutching at whatever seems near to hand,\" and that\ngenerally turns out very badly as a particular result of the whole\n\"desperate clutching\" part.\n\nIf you can get a sort improvement submitted, that's a concrete\nimprovement...\n-- \nselect 'cbbrowne' || '@' || 'ntlug.org';\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\nAppendium to the Rules of the Evil Overlord #1: \"I will not build\nexcessively integrated security-and-HVAC systems. They may be Really\nCool, but are far too vulnerable to breakdowns.\"\n", "msg_date": "Thu, 25 Aug 2005 16:49:26 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" }, { "msg_contents": "At 04:49 PM 8/25/2005, Chris Browne wrote:\n>[email protected] (Ron) writes:\n> > At 03:45 PM 8/25/2005, Josh Berkus wrote:\n> >> > Ask me sometime about my replacement for GNU sort.  It uses the\n> >> > same sorting algorithm, but it's an order of magnitude faster due\n> >> > to better I/O strategy.  Someday, in my infinite spare time, I\n> >> > hope to demonstrate that kind of improvement with a patch to pg.\n> >>\n> >>Since we desperately need some improvements in sort performance, I\n> >>do hope you follow up on this.\n> >\n> > I'll generalize that. IMO we desperately need any and all\n> > improvements in IO performance. Even more so than we need\n> > improvements in sorting or sorting IO performance.\n>\n>That's frankly a step backwards. Feel free to \"specialise\" that instead.\n\nWe can agree to disagree, I'm cool with that.\n\nI'm well aware that a Systems Approach to SW \nArchitecture is not always popular in the Open \nSource world. Nonetheless, my POV is that if we \nwant to be taken seriously and beat \"the big \nboys\", we have to do everything smarter and \nfaster, as well as cheaper, than they do. You \nare not likely to be able to do that consistently \nwithout using some of the \"icky\" stuff one is \nrequired to study as part of formal training in \nthe Comp Sci and SW Engineering fields.\n\n\n>A patch that improves some specific aspect of \n>performance is a thousand times better than any \n>sort of \"desperate desire for any and\n>all improvements in I/O performance.\"\n\nminor twisting of my words: substituting \"desire\" \nfor \"need\". The need is provable. Just put \"the \nbig 5\" (SQL Server, Oracle, DB2, mySQL, and \nPostgreSQL) into some realistic benches to see that.\n\nMajor twisting of my words: the apparent \nimplication by you that I don't appreciate \nimprovements in the IO behavior of specific \nthings like sorting as much as I'd appreciate \nmore \"general\" IO performance \nimprovements. Performance optimization is best \ndone as an iterative improvement process that \nstarts with measuring where the need is greatest, \nthen improving that greatest need by the most you \ncan, then repeating the whole cycle. 
_Every_ \nimprovement in such a process is a specific \nimprovement, even if the improvement is a \ndecision to re-architect the entire product to \nsolve the current biggest issue. Improving \nsorting IO is cool. OTOH, if pg's biggest IO \nproblems are elsewhere, then the amount of \noverall benefit we will get from improving \nsorting IO is going to be minimized until we \nimprove the bigger problem(s). Amdahl's Law.\n\n\n>The \"specialized patch\" is also pointedly better \n>in that a *confidently submitted* patch is \n>likely to be way better than any sort of \n>\"desperate clutching at whatever may come to hand.\"\n\nAnother distortion of my statement and POV. I \nnever suggested nor implied any sort of \n\"desperate clutching...\". We have _measurable_ \nIO issues that need to be addressed in order for \npg to be a better competitor in the \nmarketplace. Just as we do with sorting performance.\n\n\n>Far too often, I see people trying to address \n>performance problems via the \"desperate \n>clutching at whatever seems near to hand,\" and that\n>generally turns out very badly as a particular \n>result of the whole \"desperate clutching\" part.\n>\n>If you can get a sort improvement submitted, that's a concrete improvement...\n\nAs I said, I'm all in favor of concrete, \nmeasurable improvement. I do not think I ever \nstated I was in favor of anything else.\n\nYou evidently are mildly ranting because you've \nseen some examples of poor SW Engineering \nDiscipline/Practice by people with perhaps \ninadequate skills for the issues they were trying \nto address. We all have. \"90% of everything is \nJreck (eg of too low a quality).\"\n\nOTOH, I do not think I've given you any reason to \nthink I lack such Clue, nor do I think my post was advocating such thrashing.\n\nMy post was intended to say that we need an \nOverall Systems Approach to pg optimization \nrather than just applying what compiler writer's \ncall \"peephole optimizations\" to pg. No more, no less.\n\nI apologize if I somehow misled you,\nRon Peacetree\n\n\n", "msg_date": "Thu, 25 Aug 2005 19:46:51 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read/Write block sizes" } ]
[ { "msg_contents": "Hello,\ni have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron 2600+, \n1Gb RAM on IDE HD ( which could be called a \"heavy desktop\" ), measuring \nthis performance with pgbench ( found on /contrib ) it gave me an \naverage ( after several runs ) of 170 transactions per second;\n\nfor the sake of experimentation ( actually, i'm scared this IDE drive \ncould fail at any time, hence i'm looking for an alternative, more \n\"robust\", machine ), i've installed on an aging Compaq Proliant server ( \nfreshly compiled SMP kernel 2.6.12.5 with preemption ), dual Pentium \nIII Xeon 500Mhz, 512Mb RAM, (older) SCSI-2 80pin drives, and re-tested, \nwhen the database was on a single SCSI drive, pgbench gave me an average \nof 90 transactions per second, but, and that scared me most, when the \ndatabase was on a RAID-5 array ( four 9Gb disks, using linux software \nRAID mdadm and LVM2, with the default filesystem cluster size of 32Kb ), \nthe performance dropped to about 55 transactions per second.\n\nDespite the amount of RAM difference, none machine seems to be swapping.\nAll filesystems ( on both machines ) are Reiserfs.\nBoth pg-8.0.3 were compiled with CFLAGS -O3 and -mtune for their \nrespective architectures... and \"gmake -j2\" on the server.\nBoth machines have an original ( except by the pg and the kernel ) \nMandrake 10.1 install.\n\nI've googled a little, and maybe the cluster size might be one problem, \nbut despite that, the performance dropping when running on \n\"server-class\" hardware with RAID-5 SCSI-2 drives was way above my most \ndelirious expectations... i need some help to figure out what is **so** \nwrong...\n\ni wouldn't be so stunned if the newer machine was ( say ) twice faster \nthan the older server, but over three times faster is disturbing.\n\nthe postgresql.conf of both machines is here:\n\nmax_connections = 50\nshared_buffers = 1000 # min 16, at least max_connections*2, \n8KB each\ndebug_print_parse = false\ndebug_print_rewritten = false\ndebug_print_plan = false\ndebug_pretty_print = false\nlog_statement = 'all'\nlog_parser_stats = false\nlog_planner_stats = false\nlog_executor_stats = false\nlog_statement_stats = false\nlc_messages = 'en_US' # locale for system error message strings\nlc_monetary = 'en_US' # locale for monetary formatting\nlc_numeric = 'en_US' # locale for number formatting\nlc_time = 'en_US' # locale for time formatting\n\nmany thanks in advance !\n\n", "msg_date": "Wed, 24 Aug 2005 11:43:05 -0300", "msg_from": "Alexandre Barros <[email protected]>", "msg_from_op": true, "msg_subject": "performance drop on RAID5" }, { "msg_contents": "On Wed, 24 Aug 2005 11:43:05 -0300\nAlexandre Barros <[email protected]> wrote:\n\n> I've googled a little, and maybe the cluster size might be one\n> problem, but despite that, the performance dropping when running on \n> \"server-class\" hardware with RAID-5 SCSI-2 drives was way above my\n> most delirious expectations... i need some help to figure out what is\n> **so** wrong...\n\n RAID-5 isn't great for databases in general. What would be better\n would be to mirror the disks to redundancy or do RAID 1+0. \n\n You could probably also increase your shared_buffers some, but \n that alone most likely won't make up your speed difference. 
\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Wed, 24 Aug 2005 10:52:43 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance drop on RAID5" }, { "msg_contents": "Alexandre Barros wrote:\n\n> Hello,\n> i have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron \n> 2600+, 1Gb RAM on IDE HD ( which could be called a \"heavy desktop\" ), \n> measuring this performance with pgbench ( found on /contrib ) it gave \n> me an average ( after several runs ) of 170 transactions per second;\n\nThat is going to be because IDE drives LIE about write times because of \nthe large cache.\n\n> for the sake of experimentation ( actually, i'm scared this IDE drive \n> could fail at any time, hence i'm looking for an alternative, more \n> \"robust\", machine ), i've installed on an aging Compaq Proliant server \n> ( freshly compiled SMP kernel 2.6.12.5 with preemption ), dual \n> Pentium III Xeon 500Mhz, 512Mb RAM, (older) SCSI-2 80pin drives, and \n> re-tested, when the database was on a single SCSI drive, pgbench gave \n> me an average of 90 transactions per second, but, and that scared me \n> most, when the database was on a RAID-5 array ( four 9Gb disks, using \n> linux software RAID mdadm and LVM2, with the default filesystem \n> cluster size of 32Kb ), the performance dropped to about 55 \n> transactions per second.\n\n\nThat seems more reasonable and probably truthful. I would be curious \nwhat type of performance you would get with the exact same\nsetup EXCEPT remove LVM2. Just have the software RAID. In fact, since \nyou have 4 drives you could do RAID 10.\n\n>\n> i wouldn't be so stunned if the newer machine was ( say ) twice faster \n> than the older server, but over three times faster is disturbing.\n>\n> the postgresql.conf of both machines is here:\n>\n> max_connections = 50\n> shared_buffers = 1000 # min 16, at least max_connections*2, \n> 8KB each\n\nYou should look at the annotated conf:\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> debug_print_parse = false\n> debug_print_rewritten = false\n> debug_print_plan = false\n> debug_pretty_print = false\n> log_statement = 'all'\n> log_parser_stats = false\n> log_planner_stats = false\n> log_executor_stats = false\n> log_statement_stats = false\n> lc_messages = 'en_US' # locale for system error message strings\n> lc_monetary = 'en_US' # locale for monetary formatting\n> lc_numeric = 'en_US' # locale for number formatting\n> lc_time = 'en_US' # locale for time formatting\n>\n> many thanks in advance !\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n", "msg_date": "Wed, 24 Aug 2005 09:17:42 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance drop on RAID5" }, { "msg_contents": "On 24-8-2005 16:43, Alexandre Barros wrote:\n> Hello,\n> i have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron 2600+, \n> 1Gb RAM on IDE HD ( which could be called a \"heavy desktop\" ), measuring \n> this performance with pgbench ( found on /contrib ) it gave me an \n> average ( after several runs ) of 170 transactions per second;\n\nNowadays you can call that a \"light desktop\", although the amount of RAM \nis a bit more than normal. 
;)\n\n> for the sake of experimentation ( actually, i'm scared this IDE drive \n> could fail at any time, hence i'm looking for an alternative, more \n> \"robust\", machine ), i've installed on an aging Compaq Proliant server ( \n> freshly compiled SMP kernel 2.6.12.5 with preemption ), dual Pentium \n\nPreemption is afaik counter-productive for a server.\n\n> III Xeon 500Mhz, 512Mb RAM, (older) SCSI-2 80pin drives, and re-tested, \n> when the database was on a single SCSI drive, pgbench gave me an average \n> of 90 transactions per second, but, and that scared me most, when the \n> database was on a RAID-5 array ( four 9Gb disks, using linux software \n> RAID mdadm and LVM2, with the default filesystem cluster size of 32Kb ), \n> the performance dropped to about 55 transactions per second.\n\nThe default disk io scheduler of the 2.6-series is designed for disks or \ncontrollers that have no command queueing (like most standaard \nIDE-disks). Try changing your default \"anticipatory\" scheduler on the \ntest-device to \"deadline\" or \"cfq\" (see the two *-iosched.txt files in \n/usr/src/linux/Documentation/block/ for more information).\nChanging is simple with a 2.6.11+ kernel, just do \"echo 'deadline' > \n/sys/block/*devicename*/queue/scheduler\" at runtime.\n\n> Despite the amount of RAM difference, none machine seems to be swapping.\n\nBut there is a 512MB extra amount of file-cache. Which can make a \nsignificant difference.\n\n> All filesystems ( on both machines ) are Reiserfs.\n> Both pg-8.0.3 were compiled with CFLAGS -O3 and -mtune for their \n> respective architectures... and \"gmake -j2\" on the server.\n> Both machines have an original ( except by the pg and the kernel ) \n> Mandrake 10.1 install.\n> \n> I've googled a little, and maybe the cluster size might be one problem, \n> but despite that, the performance dropping when running on \n> \"server-class\" hardware with RAID-5 SCSI-2 drives was way above my most \n> delirious expectations... i need some help to figure out what is **so** \n> wrong...\n\nDid you consider you're overestimating the raid's performance and usage? \nIf the benchmark was mostly run from the memory, you're not going to see \nmuch gain in performance from a faster disk.\nBut even worse is that for sequential reads and writes, the performance \nof current (large) IDE drives is very good. It may actually outperform \nyour RAID on that one.\nRandom access will probably still be slower, but may not be that much \nslower. And if the database resides in memory, that doesn't matter much \nanyway.\n\n> i wouldn't be so stunned if the newer machine was ( say ) twice faster \n> than the older server, but over three times faster is disturbing.\n\nI'm actually not surprised. Old scsi disks are not faster than new ones \nanymore, although they still may be a bit faster on random access issues \nor under (very) high load.\n\nEspecially if:\n- you only ran it with 1 client\n- the database mostly or entirely fits in the desktop's memory\n- the database did not fit entirely in the server's memory.\n\nEven worse would be if the database does fit entirely in the desktop's \nmemory, but not in the server's!\n\nPlease don't forget your server probably has much slower memory-access, \nit will likely have 133Mhz SDR Ram instead of your current DDR2700 orso. \nThe latter is much faster (in theory more than twice).\nYour desktop cpu will very likely, even when multiple processes exist, \nbe faster especially with the faster memory accesses. 
The Xeon's \nprobably only beat it on the amount of cache.\n\nSo please check if pgbench actually makes much use of the disk, if it \ndoes check how large the test databases will be, etc, etc.\n\nBtw, if you'd prefer to use your desktop, but are afraid of the \nIDE-drive dying on you, buy a \"server class\" SATA disk. Most \nmanufacturers have those, Western Digital even has \"scsi like\" sata \ndisks (the Raptor drives), they generally have 3 to 5 years warranty and \nhigher class components.\n\nBest regards,\n\nArjen\n", "msg_date": "Wed, 24 Aug 2005 18:32:21 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance drop on RAID5" }, { "msg_contents": "On 8/24/05, Alexandre Barros <[email protected]> wrote:\n\n> i wouldn't be so stunned if the newer machine was ( say ) twice faster\n> than the older server, but over three times faster is disturbing.\n\nRAID5 on so few spindles is a known losing case for PostgreSQL. You'd\nbe far, far better off doing a pair of RAID1 sets or a single RAID10\nset.\n\n/rls\n\n-- \n:wq\n", "msg_date": "Wed, 24 Aug 2005 11:35:01 -0500", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance drop on RAID5" } ]
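One thing worth separating from the RAID question in the thread above is the configuration itself: shared_buffers = 1000 is only about 8 MB with 8 KB blocks, and both Frank and Joshua point toward raising it and reading the annotated guide. Purely as an illustrative starting point for the 1 GB desktop in the first message — these numbers are not from the thread and should be benchmarked, not copied — an 8.0-era postgresql.conf might look like:

shared_buffers = 12500           # 8KB pages, roughly 100MB
effective_cache_size = 75000     # 8KB pages: tell the planner ~600MB of OS cache exists
work_mem = 16384                 # KB per sort/hash step
maintenance_work_mem = 65536     # KB, used by VACUUM and index builds

Roughly halve the first two for the 512 MB Proliant. None of this changes the fsync-bound nature of pgbench's small writes, so, as Frank says, it will not close the IDE-versus-SCSI gap by itself; the write-caching and RAID-level points below still stand.
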
[ { "msg_contents": "> Hello,\n> i have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron\n2600+,\n> 1Gb RAM on IDE HD ( which could be called a \"heavy desktop\" ),\nmeasuring\n> this performance with pgbench ( found on /contrib ) it gave me an\n> average ( after several runs ) of 170 transactions per second;\n\n170 tps is not plausible no a single platter IDE disk without using\nwrite caching of some kind. For a 7200 rpm drive any result much over\n100 tps is a little suspicious. (my 10k sata raptor can do about 120).\n \n> for the sake of experimentation ( actually, i'm scared this IDE drive\n> could fail at any time, hence i'm looking for an alternative, more\n> \"robust\", machine ), i've installed on an aging Compaq Proliant server\n(\n> freshly compiled SMP kernel 2.6.12.5 with preemption ), dual Pentium\n> III Xeon 500Mhz, 512Mb RAM, (older) SCSI-2 80pin drives, and\nre-tested,\n> when the database was on a single SCSI drive, pgbench gave me an\naverage\n> of 90 transactions per second, but, and that scared me most, when the\n> database was on a RAID-5 array ( four 9Gb disks, using linux software\n> RAID mdadm and LVM2, with the default filesystem cluster size of 32Kb\n),\n> the performance dropped to about 55 transactions per second.\n\nIs natural to see a slight to moderate drop in write performance moving\nto RAID 5. The only raid levels that are faster than single disk levels\nfor writing are the ones with '0' in it or caching raid controllers.\nEven for 0+1, expect modest gains in tps vs. single disk if not using\nwrite caching.\n\nMerlin\n", "msg_date": "Wed, 24 Aug 2005 12:02:51 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance drop on RAID5" } ]
[ { "msg_contents": "Ok, there is always a lot of talk about tuning PostgreSQL on linux and\nhow PostgreSQL uses the linux kernel cache to cache the tables and\nindexes.\n\nMy question is, is there anyway to see what files linux is caching at\nthis moment?\n\nMy reasoning behind this question is:\n\nI have several database systems each with 1 PostgreSQL cluster. \nHowever, each cluster has a large number of identical databases on\nit. Since there can be a great amount of size disparity between the\ndatabases, I am wondering if some of slowness we see might be caused\nby the kernel cache having to reload a lot of data that keeps getting\nswapped out. (most systems have at least 100GB of data/indexes on them\nwith 8 or 12GB ram).\n\nIf this is the case, what sort of symptoms would you expect?\n\nTo help mitigate this potential, I have been researching the\nfollowing, and am thinking of proposing it to management. Any\ncomments would be appreciated.\n\n1. Implement a partition type layout using views and rules - This\nwill allow me to have one table in each view with the \"active\" data,\nand the inactive data stored by year in other tables.\n\nSo I would have the following (for each major table):\n\nTable View as\nselect * from active_table\nunion all \nselect * from table_2005\nunion all\nselect * from table_2004\netc.\n\nEach table would have identical indexes, however only the\n\"active_table\" would have data that is actively being worked. The\nrules and a nightly job can keep the data correctly stored.\n\nI am thinking that with this setup, the active table indexes should\nalmost always be in memory. And, if they do happen to be pushed out,\nthey are much smaller than the indexes I have today (where all data is\nin one table), so they should load faster with less i/o pressure.\n\n From the testing I have done so far, I believe I can implement this\nsystem with out having to ask for developer time. This is a \"Good\nThing\".\n\nAlso, the database is not normalized and is very ugly, by using the\nview to partition and abstract the actual data, I will be in a better\nposition to start normalizing some of the tables w/o developer time\n(once again, a \"Good Thing\")\n\n\n2. I am also thinking of recommending we collapse all databases in a\ncluster into one \"mega\" database. I can then use schema's and views\nto control database access and ensure that no customer can see another\ncustomers data.\n\nThis would mean that there are only one set of indexes being loaded\ninto the cache. While they would be larger, I think in combination\nwith the partition from idea 1, we would be ahead of the ball game. \nSince there would only be one set of indexes, everyone would be\nsharing them so they should always be in memory.\n\nI don't have real numbers to give you, but we know that our systems\nare hurting i/o wise and we are growing by about 2GB+ per week (net). \nWe actually grow by about 5GB/week/server. However, when I run my\nweekly maintenance of vacuum full, reindex, and the vacuum analyze, we\nend up getting about 3GB back. Unfortunately, I do not have the i/o\nbandwidth to vacuum during the day as it causes major slowdowns on our\nsystem. Each night, I do run a vacuum analyze across all db's to try\nand help. I also have my fsm parameters set high (8000000 fsm pages,\nand 5000 fsm relations) to try and compensate.\n\nI believe this is only hurting us as any queries that choose to\ntablescan are only getting slower and slower. Also, obviously, our\nindexes are continually growing. 
The partitioning should help as the\nactual number of records being worked on each table is a very small\npercentage ( a site may have 1 million records, but only load and work\na few thousand each day). The archive tables would be doing the most\ngrowing while the active tables should stay small. Most of the\nqueries that are tablescanning can not be fixed as the database\nclusters have been initialized with a non-C locale and won't use\nindexes on our queries that are using like with a wild card.\n\n\nRight now, we are still on 7.3.4. However, these ideas would be\nimplemented as part of an upgrade to 8.x (plus, we'll initialize the\nnew clusters with a C locale).\n\nAnyway, I hope this makes since, and any comments, ideas, and/or\nsuggestions would be appreciated.\n\nThanks,\n\nChris\n", "msg_date": "Wed, 24 Aug 2005 12:56:54 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Some ideas for comment" }, { "msg_contents": "On Wed, Aug 24, 2005 at 12:56:54PM -0400, Chris Hoover wrote:\n\n> I don't have real numbers to give you, but we know that our systems\n> are hurting i/o wise and we are growing by about 2GB+ per week (net). \n> We actually grow by about 5GB/week/server. However, when I run my\n> weekly maintenance of vacuum full, reindex, and the vacuum analyze, we\n> end up getting about 3GB back. Unfortunately, I do not have the i/o\n> bandwidth to vacuum during the day as it causes major slowdowns on our\n> system. Each night, I do run a vacuum analyze across all db's to try\n> and help. I also have my fsm parameters set high (8000000 fsm pages,\n> and 5000 fsm relations) to try and compensate.\n\n[...]\n\n> Right now, we are still on 7.3.4. However, these ideas would be\n> implemented as part of an upgrade to 8.x (plus, we'll initialize the\n> new clusters with a C locale).\n\nIf you were on a newer version, I'd suggest that you use the cost-based\nvacuum delay, and vacuum at least some of the tables more often. This\nway you can reduce the continual growth of the data files without\naffecting day-to-day performance, because you allow the VACUUM-inflicted\nI/O to be interleaved by normal query execution.\n\nSadly (for you), I think the cost-based vacuum delay feature was only\nintroduced in 8.0.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n", "msg_date": "Wed, 24 Aug 2005 15:51:43 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some ideas for comment" } ]
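Idea 1 from the thread above is straightforward to prototype on 8.0. The SQL below is a minimal sketch with made-up names (orders, order_date and the 30-day cutoff are placeholders, not the real schema): the active and per-year archive tables share one definition, a UNION ALL view glues them together under the old table name, and a rule sends INSERTs on the view to the active table. Constraint-exclusion partitioning only arrived in 8.1, so on 7.3/8.0 the nightly job that migrates settled rows is still written by hand, as Chris describes.

-- Active table plus per-year archives, all with the same shape.
CREATE TABLE orders_active (
    order_id    integer NOT NULL,
    customer_id integer NOT NULL,
    order_date  date    NOT NULL,
    amount      numeric(12,2)
);
CREATE TABLE orders_2004 (LIKE orders_active);
CREATE TABLE orders_2005 (LIKE orders_active);

-- Identical indexes everywhere; only the small active-table index
-- needs to stay hot in the kernel cache.
CREATE INDEX orders_active_date_idx ON orders_active (order_date);
CREATE INDEX orders_2004_date_idx   ON orders_2004 (order_date);
CREATE INDEX orders_2005_date_idx   ON orders_2005 (order_date);

-- The name the application already queries becomes a view.
CREATE VIEW orders AS
    SELECT * FROM orders_active
    UNION ALL
    SELECT * FROM orders_2005
    UNION ALL
    SELECT * FROM orders_2004;

-- New rows always land in the active table.
CREATE RULE orders_insert AS ON INSERT TO orders DO INSTEAD
    INSERT INTO orders_active
    VALUES (NEW.order_id, NEW.customer_id, NEW.order_date, NEW.amount);

-- Nightly job: move rows that are no longer "active" into this year's
-- archive (the 30-day cutoff is purely illustrative).
BEGIN;
INSERT INTO orders_2005
    SELECT * FROM orders_active WHERE order_date < CURRENT_DATE - 30;
DELETE FROM orders_active WHERE order_date < CURRENT_DATE - 30;
COMMIT;

UPDATE and DELETE through the view need their own DO INSTEAD rules (or can keep hitting the physical tables directly). A side benefit is that plain VACUUM on the small active table stays cheap enough to run during the day, which dovetails with Alvaro's cost-based vacuum delay suggestion once the cluster is on 8.0.
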
[ { "msg_contents": "> Ok, there is always a lot of talk about tuning PostgreSQL on linux and\n> how PostgreSQL uses the linux kernel cache to cache the tables and\n> indexes.\n[...]\n> \n> 1. Implement a partition type layout using views and rules - This\n> will allow me to have one table in each view with the \"active\" data,\n> and the inactive data stored by year in other tables.\n> \n> So I would have the following (for each major table):\n> \n> Table View as\n> select * from active_table\n> union all\n> select * from table_2005\n> union all\n> select * from table_2004\n> etc.\n\nLinux does a pretty good job of deciding what to cache. I don't think\nthis will help much. You can always look at partial indexes too.\n\n> 2. I am also thinking of recommending we collapse all databases in a\n> cluster into one \"mega\" database. I can then use schema's and views\n> to control database access and ensure that no customer can see another\n> customers data.\n\nhm. keep in mind views are tightly bound to the tables they are created\nwith (views can't 'float' over tables in different schemas). pl/pgsql\nfunctions can, though. This is a more efficient use of server\nresources, IMO, but not a windfall.\n \n> This would mean that there are only one set of indexes being loaded\n> into the cache. While they would be larger, I think in combination\n> with the partition from idea 1, we would be ahead of the ball game.\n> Since there would only be one set of indexes, everyone would be\n> sharing them so they should always be in memory.\n\nI would strongly consider adding more memory :).\n \n> I don't have real numbers to give you, but we know that our systems\n> are hurting i/o wise and we are growing by about 2GB+ per week (net).\n> We actually grow by about 5GB/week/server. However, when I run my\n> weekly maintenance of vacuum full, reindex, and the vacuum analyze, we\n> end up getting about 3GB back. Unfortunately, I do not have the i/o\n> bandwidth to vacuum during the day as it causes major slowdowns on our\n> system. Each night, I do run a vacuum analyze across all db's to try\n> and help. I also have my fsm parameters set high (8000000 fsm pages,\n> and 5000 fsm relations) to try and compensate.\n\nGenerally, you can reduce data turnover for the same workload by\nnormalizing your database. IOW, try and make your database more\nefficient in the way it stores data.\n\n> Right now, we are still on 7.3.4. However, these ideas would be\n> implemented as part of an upgrade to 8.x (plus, we'll initialize the\n> new clusters with a C locale).\n\nyes, do this!\n\nMerlin\n", "msg_date": "Wed, 24 Aug 2005 14:59:12 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some ideas for comment" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> Right now, we are still on 7.3.4. 
However, these ideas would be\n>> implemented as part of an upgrade to 8.x (plus, we'll initialize the\n>> new clusters with a C locale).\n\n> yes, do this!\n\nMoving from 7.3 to 8.0 is alone likely to give you a noticeable\nperformance boost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Aug 2005 16:00:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some ideas for comment " }, { "msg_contents": "On 8/24/05, Merlin Moncure <[email protected]> wrote:\n> > Ok, there is always a lot of talk about tuning PostgreSQL on linux and\n> > how PostgreSQL uses the linux kernel cache to cache the tables and\n> > indexes.\n> [...]\n> >\n> > 1. Implement a partition type layout using views and rules - This\n> > will allow me to have one table in each view with the \"active\" data,\n> > and the inactive data stored by year in other tables.\n> >\n> > So I would have the following (for each major table):\n> >\n> > Table View as\n> > select * from active_table\n> > union all\n> > select * from table_2005\n> > union all\n> > select * from table_2004\n> > etc.\n> \n> Linux does a pretty good job of deciding what to cache. I don't think\n> this will help much. You can always look at partial indexes too.\n> \nYes, but won't this help create the need to store less? If I have\n1,000.000 rows in a table, but only 4,000 are active, if I move those\n4 to another table and link the tables via a view, should that not\nhelp keep the 9,996,000 rows out of the kernel cache (the majority of\nthe time at least)?\n\nThis would mean I have more room for other objects and hopefully less\nturn over in the cache, and less disk i/o.\n\nYes?\n[...]\n> I would strongly consider adding more memory :).\nUnfortunately, it looks like 12GB is all our Dell servers can handle. :(\n\n> \n> > I don't have real numbers to give you, but we know that our systems\n> > are hurting i/o wise and we are growing by about 2GB+ per week (net).\n> > We actually grow by about 5GB/week/server. However, when I run my\n> > weekly maintenance of vacuum full, reindex, and the vacuum analyze, we\n> > end up getting about 3GB back. Unfortunately, I do not have the i/o\n> > bandwidth to vacuum during the day as it causes major slowdowns on our\n> > system. Each night, I do run a vacuum analyze across all db's to try\n> > and help. I also have my fsm parameters set high (8000000 fsm pages,\n> > and 5000 fsm relations) to try and compensate.\n> \n> Generally, you can reduce data turnover for the same workload by\n> normalizing your database. IOW, try and make your database more\n> efficient in the way it stores data.\n> \nThat's the ultimate goal, but this database structure was developed\nand released into production before I started work here. I'm trying\nto slowly change it into a better db, but it is a slow process. \nNormalization does not make it at the top of the priority list,\nunfortunately.\n\n> > Right now, we are still on 7.3.4. However, these ideas would be\n> > implemented as part of an upgrade to 8.x (plus, we'll initialize the\n> > new clusters with a C locale).\n> > > 2. I am also thinking of recommending we collapse all databases in a\n> > cluster into one \"mega\" database. I can then use schema's and views\n> > to control database access and ensure that no customer can see another\n> > customers data.\n> \n> hm. keep in mind views are tightly bound to the tables they are created\n> with (views can't 'float' over tables in different schemas). pl/pgsql\n> functions can, though. 
This is a more efficient use of server\n> resources, IMO, but not a windfall.\n\nThis I know. Each schema would have to have a \"custom\" set of views\nreplacing the tables with the view programmed to only return that\ncustomers data.\n\nI was thinking all of the tables in schema my_tables and the views all\nquerying the tables stored in the my_tables schema. I would add an\nidentifying column to each table so that I can differentiate the data.\n\nChris\n", "msg_date": "Wed, 24 Aug 2005 16:26:40 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some ideas for comment" }, { "msg_contents": "--On Mittwoch, August 24, 2005 16:26:40 -0400 Chris Hoover \n<[email protected]> wrote:\n\n> On 8/24/05, Merlin Moncure <[email protected]> wrote:\n>> Linux does a pretty good job of deciding what to cache. I don't think\n>> this will help much. You can always look at partial indexes too.\n>>\n> Yes, but won't this help create the need to store less? If I have\n> 1,000.000 rows in a table, but only 4,000 are active, if I move those\n> 4 to another table and link the tables via a view, should that not\n> help keep the 9,996,000 rows out of the kernel cache (the majority of\n> the time at least)?\nThe kernel caches per page, not per file. It is likely linux only caches \nthose pages which contain active rows, as long as no statement does a \nseq-scan on that table.\n\nTo optimize the thing, you could consider to cluster by some index which \nsorts by the \"activity\" of the rows first. That way pages with active rows \nare likely to contain more than only 1 active row and so the cache is \nutilized better.\n\nCluster is rather slow however and tables need to be reclustered from time \nto time.\n\n\nMit freundlichem Gruß\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n", "msg_date": "Thu, 25 Aug 2005 09:18:22 +0200", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some ideas for comment" } ]
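To make the view-over-tables layout Chris describes concrete, a minimal sketch might look like the following (table and column names are invented, and the rules needed to redirect INSERT/UPDATE/DELETE against the view are left out):

    -- small "active" table plus per-year archive tables, all with identical columns
    CREATE TABLE loads_active (id integer PRIMARY KEY, site_id integer, load_date date, payload text);
    CREATE TABLE loads_2005   (id integer PRIMARY KEY, site_id integer, load_date date, payload text);
    CREATE TABLE loads_2004   (id integer PRIMARY KEY, site_id integer, load_date date, payload text);

    CREATE VIEW loads AS
        SELECT * FROM loads_active
        UNION ALL
        SELECT * FROM loads_2005
        UNION ALL
        SELECT * FROM loads_2004;

Queries that only touch recent data then read (and cache) pages from the small active table, which is the effect being aimed for; whether the planner can avoid scanning the archive tables still depends on the predicates in each query.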
[ { "msg_contents": "Since Bruce referred to the \"corporate software world\" I'll chime in...\n\nIt has been a while since adding knobs and dials has been considered a good idea. Customers are almost always bad at tuning their systems, which decreases customer satisfaction. While many people assume the corporate types don't care, that is actually far from the truth. Well run commercial software companies regularly commission (expensive) customer satisfaction surveys. These numbers are the second most important numbers in all of the enterprise, trailing only revenue in importance. Results are sliced and diced in every way imaginable.\n\nThe commercial world is trying to auto-tune their systems just as much. Examples are the work that many of the big boys are doing towards \"autonomic\" computing. While it is driven by naked self interest of wanting to sell version upgrades, those efforts increase customer satisfaction and decrease support costs. Works well for everyone...\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Wednesday, August 24, 2005 8:52 AM\nTo: Jignesh K. Shah\nCc: Jim Nasby; Chris Browne; [email protected]\nSubject: Re: Read/Write block sizes\n\n\n\nThis thread covers several performance ideas. First is the idea that\nmore parameters should be configurable. While this seems like a noble\ngoal, we try to make parameters auto-tuning, or if users have to\nconfigure it, the parameter should be useful for a significant number of\nusers.\n\nIn the commercial software world, if you can convince your boss that a\nfeature/knob is useful, it usually gets into the product. \nUnfortunately, this leads to the golden doorknob on a shack, where some\nfeatures are out of sync with the rest of the product in terms of\nusefulness and utility. With open source, if a feature can not be\nauto-tuned, or has significant overhead, the features has to be\nimplemented and then proven to be a benefit.\n\nIn terms of adding async I/O, threading, and other things, it might make\nsense to explore how these could be implemented in a way that fits the\nabove criteria.\n\n---------------------------------------------------------------------------\n\nJignesh K. Shah wrote:\n> Hi Jim,\n> \n> | How many of these things are currently easy to change with a recompile?\n> | I should be able to start testing some of these ideas in the near\n> | future, if they only require minor code or configure changes.\n> \n> \n> The following\n> * Data File Size 1GB\n> * WAL File Size of 16MB\n> * Block Size of 8K\n> \n> Are very easy to change with a recompile.. A Tunable will be greatly \n> prefered as it will allow one binary for different tunings\n> \n> * MultiBlock read/write\n> \n> Is not available but will greatly help in reducing the number of system \n> calls which will only increase as the size of the database increases if \n> something is not done about i.\n> \n> * Pregrown files... maybe not important at this point since TABLESPACE \n> can currently work around it a bit (Just need to create a different file \n> system for each tablespace\n> \n> But if you really think hardware & OS is the answer for all small \n> things...... I think we should now start to look on how to make Postgres \n> Multi-threaded or multi-processed for each connection. With the influx \n> of \"Dual-Core\" or \"Multi-Core\" being the fad.... 
Postgres can have the \n> cutting edge if somehow exploiting cores is designed.\n> \n> Somebody mentioned that adding CPU to Postgres workload halved the \n> average CPU usage...\n> YEAH... PostgreSQL uses only 1 CPU per connection (assuming 100% \n> usage) so if you add another CPU it is idle anyway and the system will \n> report only 50% :-) BUT the importing to measure is.. whether the query \n> time was cut down or not? ( No flames I am sure you were talking about \n> multi-connection multi-user environment :-) ) But my point is then this \n> approach is worth the ROI and the time and effort spent to solve this \n> problem.\n> \n> I actually vote for a multi-threaded solution for each connection while \n> still maintaining seperate process for each connections... This way the \n> fundamental architecture of Postgres doesn't change, however a \n> multi-threaded connection can then start to exploit different cores.. \n> (Maybe have tunables for number of threads to read data files who \n> knows.. If somebody is interested in actually working a design .. \n> contact me and I will be interested in assisting this work.\n> \n> Regards,\n> Jignesh\n> \n> \n> Jim C. Nasby wrote:\n> \n> >On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n> > \n> >\n> >>[email protected] (Jignesh Shah) writes:\n> >> \n> >>\n> >>>>Does that include increasing the size of read/write blocks? I've\n> >>>>noticedthat with a large enough table it takes a while to do a\n> >>>>sequential scan, even if it's cached; I wonder if the fact that it\n> >>>>takes a million read(2) calls to get through an 8G table is part of\n> >>>>that.\n> >>>> \n> >>>>\n> >>>Actually some of that readaheads,etc the OS does already if it does\n> >>>some sort of throttling/clubbing of reads/writes. But its not enough\n> >>>for such types of workloads.\n> >>>\n> >>>Here is what I think will help:\n> >>>\n> >>>* Support for different Blocksize TABLESPACE without recompiling the\n> >>>code.. (Atlease support for a different Blocksize for the whole\n> >>>database without recompiling the code)\n> >>>\n> >>>* Support for bigger sizes of WAL files instead of 16MB files\n> >>>WITHOUT recompiling the code.. Should be a tuneable if you ask me\n> >>>(with checkpoint_segments at 256.. you have too many 16MB files in\n> >>>the log directory) (This will help OLTP benchmarks more since now\n> >>>they don't spend time rotating log files)\n> >>>\n> >>>* Introduce a multiblock or extent tunable variable where you can\n> >>>define a multiple of 8K (or BlockSize tuneable) to read a bigger\n> >>>chunk and store it in the bufferpool.. (Maybe writes too) (Most\n> >>>devices now support upto 1MB chunks for reads and writes)\n> >>>\n> >>>*There should be a way to preallocate files for TABLES in\n> >>>TABLESPACES otherwise with multiple table writes in the same\n> >>>filesystem ends with fragmented files which causes poor \"READS\" from\n> >>>the files.\n> >>>\n> >>>* With 64bit 1GB file chunks is also moot.. Maybe it should be\n> >>>tuneable too like 100GB without recompiling the code.\n> >>>\n> >>>Why recompiling is bad? Most companies that will support Postgres\n> >>>will support their own binaries and they won't prefer different\n> >>>versions of binaries for different blocksizes, different WAL file\n> >>>sizes, etc... 
and hence more function using the same set of binaries\n> >>>is more desirable in enterprise environments\n> >>> \n> >>>\n> >>Every single one of these still begs the question of whether the\n> >>changes will have a *material* impact on performance.\n> >> \n> >>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 24 Aug 2005 15:10:37 -0500", "msg_from": "\"Lance Obermeyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read/Write block sizes" } ]
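To put a number on the "too many 16MB files" complaint quoted above: the 7.4/8.0 documentation estimates that pg_xlog will normally hold no more than about 2 * checkpoint_segments + 1 segment files, so the footprint of a given setting is easy to work out. A back-of-the-envelope check (figures are approximate):

    -- pg_xlog size with checkpoint_segments = 256 and the stock 16MB segment size
    SELECT (2 * 256 + 1) * 16 AS approx_pg_xlog_mb;   -- roughly 8 GB of log files

That is the sense in which a larger compile-time segment size would mainly reduce file count and rotation overhead rather than total log volume.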
[ { "msg_contents": "Agreed!!!\n\nBut the knowledge to Auto-tune your application comes from years of understanding of how users are using the so-called \"knobs\".. But if the \"knobs\" are not there in the first place.. how do you know what people are using?\n\nThe \"so-called\" big boys are also using their knowledge base of what works for the customer in their autonomic self healers and its based on the experience of all the settings possible and based on service requests on what had failed that they get the knowledge about avoiding what fails and tuning what works. \n\nRemember \"recompiling\" is a risk with upteem number of variables which not every production release engineer is happy about.\n\nIts easy to change back the knob to the previous value rather than trying to figure out how do I get my old binaries back.\n\n\n-Jignesh\n\n\n----- Original Message -----\nFrom: Lance Obermeyer <[email protected]>\nDate: Wednesday, August 24, 2005 4:10 pm\nSubject: RE: Read/Write block sizes\n\n> Since Bruce referred to the \"corporate software world\" I'll chime \n> in...\n> It has been a while since adding knobs and dials has been \n> considered a good idea. Customers are almost always bad at tuning \n> their systems, which decreases customer satisfaction. While many \n> people assume the corporate types don't care, that is actually far \n> from the truth. Well run commercial software companies regularly \n> commission (expensive) customer satisfaction surveys. These \n> numbers are the second most important numbers in all of the \n> enterprise, trailing only revenue in importance. Results are \n> sliced and diced in every way imaginable.\n> \n> The commercial world is trying to auto-tune their systems just as \n> much. Examples are the work that many of the big boys are doing \n> towards \"autonomic\" computing. While it is driven by naked self \n> interest of wanting to sell version upgrades, those efforts \n> increase customer satisfaction and decrease support costs. Works \n> well for everyone...\n> \n> \n> \n> -----Original Message-----\n> From: Bruce Momjian [[email protected]]\n> Sent: Wednesday, August 24, 2005 8:52 AM\n> To: Jignesh K. Shah\n> Cc: Jim Nasby; Chris Browne; [email protected]\n> Subject: Re: Read/Write block sizes\n> \n> \n> \n> This thread covers several performance ideas. First is the idea that\n> more parameters should be configurable. While this seems like a \n> noblegoal, we try to make parameters auto-tuning, or if users have to\n> configure it, the parameter should be useful for a significant \n> number of\n> users.\n> \n> In the commercial software world, if you can convince your boss \n> that a\n> feature/knob is useful, it usually gets into the product. \n> Unfortunately, this leads to the golden doorknob on a shack, where \n> somefeatures are out of sync with the rest of the product in terms of\n> usefulness and utility. With open source, if a feature can not be\n> auto-tuned, or has significant overhead, the features has to be\n> implemented and then proven to be a benefit.\n> \n> In terms of adding async I/O, threading, and other things, it might \n> makesense to explore how these could be implemented in a way that \n> fits the\n> above criteria.\n> \n> --------------------------------------------------------------------\n> -------\n> \n> Jignesh K. 
Shah wrote:\n> > Hi Jim,\n> > \n> > | How many of these things are currently easy to change with a \n> recompile?> | I should be able to start testing some of these ideas \n> in the near\n> > | future, if they only require minor code or configure changes.\n> > \n> > \n> > The following\n> > * Data File Size 1GB\n> > * WAL File Size of 16MB\n> > * Block Size of 8K\n> > \n> > Are very easy to change with a recompile.. A Tunable will be \n> greatly \n> > prefered as it will allow one binary for different tunings\n> > \n> > * MultiBlock read/write\n> > \n> > Is not available but will greatly help in reducing the number of \n> system \n> > calls which will only increase as the size of the database \n> increases if \n> > something is not done about i.\n> > \n> > * Pregrown files... maybe not important at this point since \n> TABLESPACE \n> > can currently work around it a bit (Just need to create a \n> different file \n> > system for each tablespace\n> > \n> > But if you really think hardware & OS is the answer for all \n> small \n> > things...... I think we should now start to look on how to make \n> Postgres \n> > Multi-threaded or multi-processed for each connection. With the \n> influx \n> > of \"Dual-Core\" or \"Multi-Core\" being the fad.... Postgres can \n> have the \n> > cutting edge if somehow exploiting cores is designed.\n> > \n> > Somebody mentioned that adding CPU to Postgres workload halved \n> the \n> > average CPU usage...\n> > YEAH... PostgreSQL uses only 1 CPU per connection (assuming 100% \n> > usage) so if you add another CPU it is idle anyway and the \n> system will \n> > report only 50% :-) BUT the importing to measure is.. whether \n> the query \n> > time was cut down or not? ( No flames I am sure you were talking \n> about \n> > multi-connection multi-user environment :-) ) But my point is \n> then this \n> > approach is worth the ROI and the time and effort spent to solve \n> this \n> > problem.\n> > \n> > I actually vote for a multi-threaded solution for each connection \n> while \n> > still maintaining seperate process for each connections... This \n> way the \n> > fundamental architecture of Postgres doesn't change, however a \n> > multi-threaded connection can then start to exploit different \n> cores.. \n> > (Maybe have tunables for number of threads to read data files who \n> > knows.. If somebody is interested in actually working a design .. \n> > contact me and I will be interested in assisting this work.\n> > \n> > Regards,\n> > Jignesh\n> > \n> > \n> > Jim C. Nasby wrote:\n> > \n> > >On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:\n> > > \n> > >\n> > >>[email protected] (Jignesh Shah) writes:\n> > >> \n> > >>\n> > >>>>Does that include increasing the size of read/write blocks? I've\n> > >>>>noticedthat with a large enough table it takes a while to do a\n> > >>>>sequential scan, even if it's cached; I wonder if the fact \n> that it\n> > >>>>takes a million read(2) calls to get through an 8G table is \n> part of\n> > >>>>that.\n> > >>>> \n> > >>>>\n> > >>>Actually some of that readaheads,etc the OS does already if it \n> does> >>>some sort of throttling/clubbing of reads/writes. But its \n> not enough\n> > >>>for such types of workloads.\n> > >>>\n> > >>>Here is what I think will help:\n> > >>>\n> > >>>* Support for different Blocksize TABLESPACE without \n> recompiling the\n> > >>>code.. 
(Atlease support for a different Blocksize for the whole\n> > >>>database without recompiling the code)\n> > >>>\n> > >>>* Support for bigger sizes of WAL files instead of 16MB files\n> > >>>WITHOUT recompiling the code.. Should be a tuneable if you ask me\n> > >>>(with checkpoint_segments at 256.. you have too many 16MB \n> files in\n> > >>>the log directory) (This will help OLTP benchmarks more since now\n> > >>>they don't spend time rotating log files)\n> > >>>\n> > >>>* Introduce a multiblock or extent tunable variable where you can\n> > >>>define a multiple of 8K (or BlockSize tuneable) to read a bigger\n> > >>>chunk and store it in the bufferpool.. (Maybe writes too) (Most\n> > >>>devices now support upto 1MB chunks for reads and writes)\n> > >>>\n> > >>>*There should be a way to preallocate files for TABLES in\n> > >>>TABLESPACES otherwise with multiple table writes in the same\n> > >>>filesystem ends with fragmented files which causes poor \n> \"READS\" from\n> > >>>the files.\n> > >>>\n> > >>>* With 64bit 1GB file chunks is also moot.. Maybe it should be\n> > >>>tuneable too like 100GB without recompiling the code.\n> > >>>\n> > >>>Why recompiling is bad? Most companies that will support Postgres\n> > >>>will support their own binaries and they won't prefer different\n> > >>>versions of binaries for different blocksizes, different WAL file\n> > >>>sizes, etc... and hence more function using the same set of \n> binaries> >>>is more desirable in enterprise environments\n> > >>> \n> > >>>\n> > >>Every single one of these still begs the question of whether the\n> > >>changes will have a *material* impact on performance.\n> > >> \n> > >>\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------\n> ------\n> > TIP 3: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faq\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, \n> Pennsylvania 19073\n> \n\n", "msg_date": "Wed, 24 Aug 2005 16:20:34 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read/Write block sizes" } ]
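For reference, the TABLESPACE workaround mentioned in the quoted list (one filesystem per tablespace, so large tables get their own, less fragmented area) is plain SQL in 8.0. The path and names below are hypothetical, and the directory must already exist and be writable by the postgres user:

    CREATE TABLESPACE fast_array LOCATION '/mnt/fast_array/pgdata';
    CREATE TABLE big_fact_table (id integer, payload text) TABLESPACE fast_array;

It does not change block or WAL segment sizes -- those remain compile-time constants in this era -- but it does give per-object control over where the files land.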
[ { "msg_contents": "I have a table called 'jobs' with several million rows, and the only\ncolumns that are important to this discussion are 'start_time' and\n'completion_time'.\n\nThe sort of queries I want to execute (among others) are like:\n\nSELECT * FROM jobs\nWHERE completion_time > SOMEDATE AND start_time < SOMEDATE;\n\nIn plain english: All the jobs that were running at SOMEDATE. The\nresult of the query is on the order of 500 rows.\n\nI've got seperate indexes on 'start_time' and 'completion_time'.\n\nNow, if SOMEDATE is such that the number of rows with completion_time\n> SOMEDATE is small (say 10s of thousands), the query uses index scans\nand executes quickly. If not, the query uses sequential scans and is\nunacceptably slow (a couple of minutes). I've used EXPLAIN and\nEXPLAIN ANALYZE to confirm this. This makes perfect sense to me.\n\nI've played with some of the memory settings for PostgreSQL, but none\nhas had a significant impact.\n\nAny ideas on how to structure the query or add/change indexes in such\na way to improve its performance? In desperation, I tried using a\nsubquery, but unsurprisingly it made no (positive) difference. I feel\nlike there might be a way of using an index on both 'completion_time'\nand 'start_time', but can't put a temporal lobe on the details.\n\n\nMark\n", "msg_date": "Wed, 24 Aug 2005 14:43:51 -0600", "msg_from": "Mark Fox <[email protected]>", "msg_from_op": true, "msg_subject": "Performance indexing of a simple query" }, { "msg_contents": "Try\n\nCREATE INDEX start_complete ON jobs( start_time, completion_time );\n\nTry also completion_time, start_time. One might work better than the\nother. Or, depending on your data, you might want to keep both.\n\nIn 8.1 you'll be able to do bitmap-based index combination, which might\nallow making use of the seperate indexes.\n\nOn Wed, Aug 24, 2005 at 02:43:51PM -0600, Mark Fox wrote:\n> I have a table called 'jobs' with several million rows, and the only\n> columns that are important to this discussion are 'start_time' and\n> 'completion_time'.\n> \n> The sort of queries I want to execute (among others) are like:\n> \n> SELECT * FROM jobs\n> WHERE completion_time > SOMEDATE AND start_time < SOMEDATE;\n> \n> In plain english: All the jobs that were running at SOMEDATE. The\n> result of the query is on the order of 500 rows.\n> \n> I've got seperate indexes on 'start_time' and 'completion_time'.\n> \n> Now, if SOMEDATE is such that the number of rows with completion_time\n> > SOMEDATE is small (say 10s of thousands), the query uses index scans\n> and executes quickly. If not, the query uses sequential scans and is\n> unacceptably slow (a couple of minutes). I've used EXPLAIN and\n> EXPLAIN ANALYZE to confirm this. This makes perfect sense to me.\n> \n> I've played with some of the memory settings for PostgreSQL, but none\n> has had a significant impact.\n> \n> Any ideas on how to structure the query or add/change indexes in such\n> a way to improve its performance? In desperation, I tried using a\n> subquery, but unsurprisingly it made no (positive) difference. I feel\n> like there might be a way of using an index on both 'completion_time'\n> and 'start_time', but can't put a temporal lobe on the details.\n> \n> \n> Mark\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Wed, 24 Aug 2005 16:22:34 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance indexing of a simple query" }, { "msg_contents": "Mark Fox <[email protected]> writes:\n> The sort of queries I want to execute (among others) are like:\n> SELECT * FROM jobs\n> WHERE completion_time > SOMEDATE AND start_time < SOMEDATE;\n> In plain english: All the jobs that were running at SOMEDATE.\n\nAFAIK there is no good way to do this with btree indexes; the problem\nis that it's fundamentally a 2-dimensional query and btrees are\n1-dimensional. There are various hacks you can try if you're willing\nto constrain the problem (eg, if you can assume some not-very-large\nmaximum on the running time of jobs) but in full generality btrees are\njust the Wrong Thing.\n\nSo what you want to look at is a non-btree index, ie, rtree or gist.\nFor example, the contrib/seg data type could pretty directly be adapted\nto solve this problem, since it can index searches for overlapping\nline segments.\n\nThe main drawback of these index types in existing releases is that they\nare bad on concurrent updates and don't have WAL support. Both those\nthings are (allegedly) fixed for GIST in 8.1 ... are you interested in\ntrying out 8.1beta?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Aug 2005 19:42:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance indexing of a simple query " }, { "msg_contents": "On Wed, Aug 24, 2005 at 07:42:00PM -0400, Tom Lane wrote:\n> Mark Fox <[email protected]> writes:\n> > The sort of queries I want to execute (among others) are like:\n> > SELECT * FROM jobs\n> > WHERE completion_time > SOMEDATE AND start_time < SOMEDATE;\n> > In plain english: All the jobs that were running at SOMEDATE.\n\nUh, the plain english and the SQL don't match. That query will find\nevery job that was NOT running at the time you said.\n\n> AFAIK there is no good way to do this with btree indexes; the problem\n> is that it's fundamentally a 2-dimensional query and btrees are\n> 1-dimensional. There are various hacks you can try if you're willing\n> to constrain the problem (eg, if you can assume some not-very-large\n> maximum on the running time of jobs) but in full generality btrees are\n> just the Wrong Thing.\n\nIgnoring the SQL and doing what the author actually wanted, wouldn't a\nbitmap combination of indexes work here?\n\nOr with an index on (start_time, completion_time), start an index scan\nat start_time = SOMEDATE and only include rows where completion_time <\nSOMEDATE. Of course if SOMEDATE is near the beginning of the table that\nwouldn't help.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Fri, 26 Aug 2005 11:28:02 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance indexing of a simple query" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Uh, the plain english and the SQL don't match. That query will find\n> every job that was NOT running at the time you said.\n\nNo, I think it was right. 
But anyway it was just an example.\n\n> On Wed, Aug 24, 2005 at 07:42:00PM -0400, Tom Lane wrote:\n>> AFAIK there is no good way to do this with btree indexes; the problem\n>> is that it's fundamentally a 2-dimensional query and btrees are\n>> 1-dimensional. There are various hacks you can try if you're willing\n>> to constrain the problem (eg, if you can assume some not-very-large\n>> maximum on the running time of jobs) but in full generality btrees are\n>> just the Wrong Thing.\n\n> Ignoring the SQL and doing what the author actually wanted, wouldn't a\n> bitmap combination of indexes work here?\n\n> Or with an index on (start_time, completion_time), start an index scan\n> at start_time = SOMEDATE and only include rows where completion_time <\n> SOMEDATE. Of course if SOMEDATE is near the beginning of the table that\n> wouldn't help.\n\nThe trouble with either of those is that you have to scan very large\nfractions of the index (if not indeed *all* of it) in order to get your\nanswer; certainly you hit much more of the index than just the region\ncontaining matching rows. Btree just doesn't have a good way to answer\nthis type of query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 12:42:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance indexing of a simple query " } ]
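For completeness, the "constrain the problem" btree workaround Tom mentions can be written directly in SQL when there is a known upper bound on job duration -- here assumed, purely for illustration, to be 7 days -- so that an index on start_time only has to scan a bounded slice instead of everything before SOMEDATE:

    -- jobs running at 2005-08-24 12:00, assuming no job ever runs longer than 7 days
    SELECT *
    FROM jobs
    WHERE start_time <= '2005-08-24 12:00'
      AND start_time >  timestamp '2005-08-24 12:00' - interval '7 days'
      AND completion_time > '2005-08-24 12:00';

If the duration bound does not really hold, rows are silently missed, which is why Tom calls it a hack; the seg/GiST route is the general answer.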
[ { "msg_contents": "I'm looking for an external RAID array (with external controller);\neither ~8 15kRPM SCSI drives or something with more SATA drives. This\nwill be used in a test environment and could get moved between machines,\nso I'd like something with it's own RAID controller. Support for a broad\nrange of OSes is important.\n\nCan anyone recommend hardware as well as vendors? Feel free to reply\noff-list.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com 512-569-9461\n", "msg_date": "Wed, 24 Aug 2005 19:17:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "RAID arrays (and vendors)" } ]
[ { "msg_contents": "Andrew,\n\nOn Thu, 2005-08-25 at 12:24 -0700, Andrew Lazarus wrote:\n> Should I temporarily increase sort_mem, vacuum_mem, neither, or both \n> when doing a CLUSTER on a large (100 million row) table where as many as \n> half of the tuples are deadwood from UPDATEs or DELETEs? I have large \n> batch (10 million row) inserts, updates, and deletes so I'm not sure \n> frequent vacuuming would help.\n\nYou may need to experiment with both. What version of Postgres? What is\nthe size of your database? How many concurrent users? If you're seeing\nhalf of the tuples are dead, I look at checking your max_fsm_pages and\nmax_fsm_relations after a full vacuum analyze before doing too much with\nsort mem.\n\nYour mileage may vary.\n\nBest of luck.\n\nSteve Poe\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Thu, 25 Aug 2005 13:56:11 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What *_mem to increase when running CLUSTER" }, { "msg_contents": "> Putting pg_xlog on the IDE drives gave about 10% performance\n> improvement. Would faster disks give more performance?\n> \n> What my application does:\n> \n> Every five minutes a new logfile will be imported. Depending on the\n> source of the request it will be imported in one of three \"raw click\"\n> tables. (data from two months back, to be able to verify customer\n> complains)\n> For reporting I have a set of tables. These contain data from the last\n> two years. My app deletes all entries from today and reinserts updated\n> data calculated from the raw data tables.\n> \n> The queries contain no joins only aggregates. I have several indexes\nto\n> speed different kinds of queries.\n> \n> My problems occur when one users does a report that contains to much\nold\n> data. In that case all cache mechanisms will fail and disc io is the\n> limiting factor.\n\nIt seems like you are pushing limit of what server can handle. This\nmeans: 1. expensive server upgrade. or \n2. make software more efficient.\n\nSince you sound I/O bound, you can tackle 1. by a. adding more memory or\nb. increasing i/o throughput. \n\nUnfortunately, you already have a pretty decent server (for x86) so 1.\nmeans 64 bit platform and 2. means more expensive hard drives. The\narchives is full of information about this...\n\nIs your data well normalized? You can do tricks like:\nif table has fields a,b,c,d,e,f with a is primary key, and d,e,f not\nfrequently queried or missing, move d,e,f to seprate table.\n\nwell normalized structures are always more cache efficient. Do you have\nlots of repeating and/or empty data values in your tables?\n\nMake your indexes and data as small as possible to reduce pressure on\nthe cache, here are just a few tricks:\n1. use int2/int4 instead of numeric\n2. know when to use char and varchar \n3. use functional indexes to reduce index expression complexity. This\ncan give extreme benefits if you can, for example, reduce double field\nindex to Boolean.\n\nMerlin\n", "msg_date": "Thu, 25 Aug 2005 13:30:48 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 2" }, { "msg_contents": "Should I temporarily increase sort_mem, vacuum_mem, neither, or both \nwhen doing a CLUSTER on a large (100 million row) table where as many as \nhalf of the tuples are deadwood from UPDATEs or DELETEs? 
I have large \nbatch (10 million row) inserts, updates, and deletes so I'm not sure \nfrequent vacuuming would help.", "msg_date": "Thu, 25 Aug 2005 12:24:38 -0700", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "What *_mem to increase when running CLUSTER" }, { "msg_contents": "Andrew Lazarus <[email protected]> writes:\n> Should I temporarily increase sort_mem, vacuum_mem, neither, or both \n> when doing a CLUSTER on a large (100 million row) table\n\nThe only part of that job that can use lots of memory is the index\nrebuilds. In recent PG versions maintenance_work_mem is the thing\nto increase for an index build; previously sort_mem controlled it.\nI forget when the changeover was; maybe 8.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Aug 2005 16:19:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What *_mem to increase when running CLUSTER " } ]
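Putting Tom's answer in concrete terms: raise the maintenance memory just for the session doing the CLUSTER, remembering that on these releases the value is an integer number of kilobytes. The figures and object names below are only examples:

    -- 8.0+: the index-rebuild phase of CLUSTER is governed by maintenance_work_mem
    SET maintenance_work_mem = 524288;   -- 512MB, expressed in KB
    -- on 7.4 the equivalent knob is sort_mem:
    -- SET sort_mem = 524288;
    CLUSTER big_table_pkey ON big_table; -- 7.4/8.0 syntax: CLUSTER indexname ON tablename

Since the setting only lasts for the session, there is no need to touch postgresql.conf for a one-off maintenance run.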
[ { "msg_contents": "Consider this setup - which is a gross simplification of parts of our\nproduction system ;-)\n\n create table c (id integer primary key);\n create table b (id integer primary key, c_id integer);\n create index b_on_c on b(c_id)\n\n insert into c (select ... lots of IDs ...);\n insert into b (select id, id from c); /* keep it simple :-) */\n \nNow, I'm just interessted in some few rows. \n\nAll those gives good plans:\n\nexplain select c.id from c order by c.id limit 1;\nexplain select c.id from c group by c.id order by c.id limit 1;\nexplain select c.id from c join b on c_id=c.id order by c.id limit 1;\n\n... BUT ... combining join, group and limit makes havoc:\n\nexplain select c.id from c join b on c_id=c.id group by c.id order by c.id\ndesc limit 5;\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Limit (cost=3809.65..3809.67 rows=5 width=4)\n -> Group (cost=3809.65..3940.59 rows=26187 width=4)\n -> Sort (cost=3809.65..3875.12 rows=26188 width=4)\n Sort Key: c.id\n -> Hash Join (cost=559.34..1887.89 rows=26188 width=4)\n Hash Cond: (\"outer\".id = \"inner\".c_id)\n -> Seq Scan on c (cost=0.00..403.87 rows=26187 width=4)\n -> Hash (cost=403.87..403.87 rows=26187 width=4)\n -> Seq Scan on b (cost=0.00..403.87 rows=26187 width=4)\n(9 rows)\n\nI get the same behaviour on pg 7.4.7 and pg 8.0.2. Of course, I can\nprobably use subqueries instead of join - though, I would have wished the\nplanner could do better ;-)\n\n-- \nNotice of Confidentiality: This information may be confidential, and\nblah-blah-blah - so please keep your eyes closed. Please delete and destroy\nthis email. Failure to comply will cause my lawyer to yawn.\n", "msg_date": "Fri, 26 Aug 2005 02:27:09 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Limit + group + join" }, { "msg_contents": "On Fri, 2005-08-26 at 02:27 +0200, Tobias Brox wrote:\n> Consider this setup - which is a gross simplification of parts of our\n> production system ;-)\n> \n> create table c (id integer primary key);\n> create table b (id integer primary key, c_id integer);\n> create index b_on_c on b(c_id)\n> \n> insert into c (select ... lots of IDs ...);\n> insert into b (select id, id from c); /* keep it simple :-) */\n> \n> Now, I'm just interessted in some few rows. \n> \n> All those gives good plans:\n> \n> explain select c.id from c order by c.id limit 1;\n> explain select c.id from c group by c.id order by c.id limit 1;\n> explain select c.id from c join b on c_id=c.id order by c.id limit 1;\n> \n> ... BUT ... combining join, group and limit makes havoc:\n> \n> explain select c.id from c join b on c_id=c.id group by c.id order by c.id\n> desc limit 5;\n\nWhere's b in this join clause? It looks like a cartesian product to me.\n\n-jwb\n\n", "msg_date": "Thu, 25 Aug 2005 18:56:59 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "[Jeffrey W. Baker - Thu at 06:56:59PM -0700]\n> > explain select c.id from c join b on c_id=c.id group by c.id order by c.id\n> > desc limit 5;\n> \n> Where's b in this join clause?\n\n\"join b on c_id=c.id\"\n\nIt just a funny way of writing:\n\nselect c.id from c,b where c_id=c.id group by c.id order by c.id desc limit 5;\n\n> It looks like a cartesian product to me.\n\nNo. 
The query will return exactly the same as the simplest query:\n\n select c.id from c order by c.id desc limit 5; \n\nAs said, this is a gross oversimplification of the production envorinment.\nIn the production environment, I really need to use both join, group and\nlimit. I tested a bit with subqueries, it was not a good solution\n(selecting really a lot of rows and aggregates from many of the tables).\n\nThe next idea is to hack it up by manually finding out where the \"limit\"\nwill cut, and place a restriction in the where-part of the query.\n\n-- \nNotice of Confidentiality: This information may be confidential, and\nblah-blah-blah - so please keep your eyes closed. Please delete and destroy\nthis email. Failure to comply will cause my lawyer to yawn.\n", "msg_date": "Fri, 26 Aug 2005 04:06:35 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "On Thu, 2005-08-25 at 18:56 -0700, Jeffrey W. Baker wrote:\n> On Fri, 2005-08-26 at 02:27 +0200, Tobias Brox wrote:\n> > Consider this setup - which is a gross simplification of parts of our\n> > production system ;-)\n> > \n> > create table c (id integer primary key);\n> > create table b (id integer primary key, c_id integer);\n> > create index b_on_c on b(c_id)\n> > \n> > insert into c (select ... lots of IDs ...);\n> > insert into b (select id, id from c); /* keep it simple :-) */\n> > \n> > Now, I'm just interessted in some few rows. \n> > \n> > All those gives good plans:\n> > \n> > explain select c.id from c order by c.id limit 1;\n> > explain select c.id from c group by c.id order by c.id limit 1;\n> > explain select c.id from c join b on c_id=c.id order by c.id limit 1;\n> > \n> > ... BUT ... combining join, group and limit makes havoc:\n> > \n> > explain select c.id from c join b on c_id=c.id group by c.id order by c.id\n> > desc limit 5;\n> \n> Where's b in this join clause? It looks like a cartesian product to me.\n\nNevermind. I read c_id as c.id.\n\n-jwb\n\n", "msg_date": "Thu, 25 Aug 2005 19:31:20 -0700", "msg_from": "\"Jeffrey W. 
Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "Tobias,\nInteresting example:\n\nThe 'desc' seems to be the guy triggering the sort, e.g:\n\nexplain select c.id from c join b on c_id=c.id group by c.id order by \nc.id limit 5;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------\n Limit (cost=0.00..0.28 rows=5 width=4)\n -> Group (cost=0.00..4476.00 rows=80000 width=4)\n -> Merge Join (cost=0.00..4276.00 rows=80000 width=4)\n Merge Cond: (\"outer\".id = \"inner\".c_id)\n -> Index Scan using c_pkey on c (cost=0.00..1518.00 \nrows=80000 width=4)\n -> Index Scan using b_on_c on b (cost=0.00..1558.00 \nrows=80000 width=4)\n(6 rows)\n\nWhereas with it back in again:\n\nexplain select c.id from c join b on c_id=c.id group by c.id order by \nc.id desc limit 5;\n QUERY PLAN \n\n--------------------------------------------------------------------------------------\n Limit (cost=10741.08..10741.11 rows=5 width=4)\n -> Group (cost=10741.08..11141.08 rows=80000 width=4)\n -> Sort (cost=10741.08..10941.08 rows=80000 width=4)\n Sort Key: c.id\n -> Hash Join (cost=1393.00..4226.00 rows=80000 width=4)\n Hash Cond: (\"outer\".c_id = \"inner\".id)\n -> Seq Scan on b (cost=0.00..1233.00 rows=80000 \nwidth=4)\n -> Hash (cost=1193.00..1193.00 rows=80000 width=4)\n -> Seq Scan on c (cost=0.00..1193.00 \nrows=80000 width=4)\n(9 rows)\n\n\nHowever being a bit brutal:\n\nset enable_mergejoin=false;\nset enable_hashjoin=false;\n\nexplain select c.id from c join b on c_id=c.id group by c.id order by \nc.id desc limit 5;\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------\n Limit (cost=0.00..15.24 rows=5 width=4)\n -> Group (cost=0.00..243798.00 rows=80000 width=4)\n -> Nested Loop (cost=0.00..243598.00 rows=80000 width=4)\n -> Index Scan Backward using c_pkey on c \n(cost=0.00..1518.00 rows=80000 width=4)\n -> Index Scan using b_on_c on b (cost=0.00..3.01 \nrows=1 width=4)\n Index Cond: (b.c_id = \"outer\".id)\n(6 rows)\n\nWhat is interesting is why this plan is being rejected...\n\nCheers\n\nMark\n\nTobias Brox wrote:\n> Consider this setup - which is a gross simplification of parts of our\n> production system ;-)\n> \n> create table c (id integer primary key);\n> create table b (id integer primary key, c_id integer);\n> create index b_on_c on b(c_id)\n> \n> insert into c (select ... lots of IDs ...);\n> insert into b (select id, id from c); /* keep it simple :-) */\n> \n> Now, I'm just interessted in some few rows. \n> \n> All those gives good plans:\n> \n> explain select c.id from c order by c.id limit 1;\n> explain select c.id from c group by c.id order by c.id limit 1;\n> explain select c.id from c join b on c_id=c.id order by c.id limit 1;\n> \n> ... BUT ... 
combining join, group and limit makes havoc:\n> \n> explain select c.id from c join b on c_id=c.id group by c.id order by c.id\n> desc limit 5;\n>\n", "msg_date": "Fri, 26 Aug 2005 15:01:01 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "[Mark Kirkwood - Fri at 03:01:01PM +1200]\n> Tobias,\n> Interesting example:\n> \n> The 'desc' seems to be the guy triggering the sort, e.g:\n\nOh; really an accident that I didn't notice myself, I was actually going to\nremove all instances of \"desc\" in my simplification, but seems like I forgot.\n\n> However being a bit brutal:\n> \n> set enable_mergejoin=false;\n> set enable_hashjoin=false;\n\n:-) maybe I can use that in production. I'll check.\n\n-- \nNotice of Confidentiality: This information may be confidential, and\nblah-blah-blah - so please keep your eyes closed. Please delete and destroy\nthis email. Failure to comply will cause my lawyer to yawn.\n", "msg_date": "Fri, 26 Aug 2005 08:20:51 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "On Fri, 26 Aug 2005, Mark Kirkwood wrote:\n\n> However being a bit brutal:\n>\n> set enable_mergejoin=false;\n> set enable_hashjoin=false;\n>\n> explain select c.id from c join b on c_id=c.id group by c.id order by\n> c.id desc limit 5;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..15.24 rows=5 width=4)\n> -> Group (cost=0.00..243798.00 rows=80000 width=4)\n> -> Nested Loop (cost=0.00..243598.00 rows=80000 width=4)\n> -> Index Scan Backward using c_pkey on c\n> (cost=0.00..1518.00 rows=80000 width=4)\n> -> Index Scan using b_on_c on b (cost=0.00..3.01\n> rows=1 width=4)\n> Index Cond: (b.c_id = \"outer\".id)\n> (6 rows)\n>\n> What is interesting is why this plan is being rejected...\n\nWell, it expects 80000 probles into b_on_c to be more expensive than the\nhash join and sort. I wonder what explain analyze shows for the original\nand the version with the enables changed.\n", "msg_date": "Fri, 26 Aug 2005 07:18:30 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> What is interesting is why this plan is being rejected...\n\nWhich PG version are you using exactly? That mistake looks like an\nartifact of the 8.0 \"fuzzy plan cost\" patch, which we fixed recently:\nhttp://archives.postgresql.org/pgsql-committers/2005-07/msg00474.php\n\nBut Tobias wasn't happy with 7.4 either, so I'm not sure that the fuzzy\ncost issue explains his results.\n\nAs far as the \"desc\" point goes, the problem is that mergejoins aren't\ncapable of dealing with backward sort order, so a merge plan isn't\nconsidered for that case (or at least, it would have to have a sort\nafter it, which pretty much defeats the point for a fast-start plan).\nI have some ideas about fixing this but it won't happen before 8.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 11:46:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join " }, { "msg_contents": "Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n> \n>>What is interesting is why this plan is being rejected...\n> \n> \n> Which PG version are you using exactly? 
That mistake looks like an\n> artifact of the 8.0 \"fuzzy plan cost\" patch, which we fixed recently:\n> http://archives.postgresql.org/pgsql-committers/2005-07/msg00474.php\n>\n\nRight on - 8.0.3 (I might look at how CVS tip handles this, could be \ninteresting).\n\n> But Tobias wasn't happy with 7.4 either, so I'm not sure that the fuzzy\n> cost issue explains his results.\n> \n> As far as the \"desc\" point goes, the problem is that mergejoins aren't\n> capable of dealing with backward sort order, so a merge plan isn't\n> considered for that case (or at least, it would have to have a sort\n> after it, which pretty much defeats the point for a fast-start plan).\n> I have some ideas about fixing this but it won't happen before 8.2.\n\nThat doesn't explain why the nested loop is being kicked tho', or have I \nmissed something obvious? - it's been known to happen :-)...\n\nCheers\n\nMark\n\n", "msg_date": "Sat, 27 Aug 2005 11:29:01 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> That doesn't explain why the nested loop is being kicked tho',\n\nNo, but I think the fuzzy-cost bug does. There are two different issues\nhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 19:48:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join " }, { "msg_contents": "Interestingly enough, 7.4.8 and 8.1devel-2005-08-23 all behave the same \nas 8.0.3 for me (tables freshly ANALYZEd):\n\njoinlimit=# SELECT version();\n version \n\n-------------------------------------------------------------------------------------------------\n PostgreSQL 7.4.8 on i386-unknown-freebsd5.4, compiled by GCC gcc (GCC) \n3.4.2 [FreeBSD] 20040728\n(1 row)\n\njoinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \nc.id ORDER BY c.id DESC LIMIT 5;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------\n Limit (cost=10591.36..10591.39 rows=5 width=4)\n -> Group (cost=10591.36..10992.02 rows=80131 width=4)\n -> Sort (cost=10591.36..10791.69 rows=80131 width=4)\n Sort Key: c.id\n -> Merge Join (cost=0.00..4064.66 rows=80131 width=4)\n Merge Cond: (\"outer\".id = \"inner\".c_id)\n -> Index Scan using c_pkey on c \n(cost=0.00..1411.31 rows=80131 width=4)\n -> Index Scan using b_on_c on b \n(cost=0.00..1451.72 rows=80172 width=4)\n(8 rows)\n\njoinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \nc.id ORDER BY c.id LIMIT 5;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------\n Limit (cost=0.00..0.27 rows=5 width=4)\n -> Group (cost=0.00..4264.99 rows=80131 width=4)\n -> Merge Join (cost=0.00..4064.66 rows=80131 width=4)\n Merge Cond: (\"outer\".id = \"inner\".c_id)\n -> Index Scan using c_pkey on c (cost=0.00..1411.31 \nrows=80131 width=4)\n -> Index Scan using b_on_c on b (cost=0.00..1451.72 \nrows=80172 width=4)\n(6 rows)\n\n\njoinlimit=# SELECT version();\n version \n\n----------------------------------------------------------------------------------------------------\n PostgreSQL 8.1devel on i386-unknown-freebsd5.4, compiled by GCC gcc \n(GCC) 3.4.2 [FreeBSD] 20040728\n(1 row)\n\njoinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \nc.id ORDER BY c.id DESC LIMIT 5;\n QUERY PLAN 
\n\n-----------------------------------------------------------------------------------------------\n Limit (cost=10654.53..10654.55 rows=5 width=4)\n -> Group (cost=10654.53..11054.53 rows=80000 width=4)\n -> Sort (cost=10654.53..10854.53 rows=80000 width=4)\n Sort Key: c.id\n -> Merge Join (cost=0.00..4139.44 rows=80000 width=4)\n Merge Cond: (\"outer\".id = \"inner\".c_id)\n -> Index Scan using c_pkey on c \n(cost=0.00..1450.00 rows=80000 width=4)\n -> Index Scan using b_on_c on b \n(cost=0.00..1490.00 rows=80000 width=4)\n(8 rows)\n\njoinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \nc.id ORDER BY c.id LIMIT 5;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------\n Limit (cost=0.00..0.27 rows=5 width=4)\n -> Group (cost=0.00..4339.44 rows=80000 width=4)\n -> Merge Join (cost=0.00..4139.44 rows=80000 width=4)\n Merge Cond: (\"outer\".id = \"inner\".c_id)\n -> Index Scan using c_pkey on c (cost=0.00..1450.00 \nrows=80000 width=4)\n -> Index Scan using b_on_c on b (cost=0.00..1490.00 \nrows=80000 width=4)\n(6 rows)\n\nThe non default server params of relevance are:\n\nshared_buffers = 12000\neffective_cache_size = 100000\nwork_mem/sort_mem = 20480\n\nI did wonder if the highish sort_mem might be a factor, but no, with it \n set to 1024 I get the same behaviour (just higher sort cost estimates).\n\nCheers\n\nMark\n\nTom Lane wrote:\n>\n> \n> Which PG version are you using exactly? That mistake looks like an\n> artifact of the 8.0 \"fuzzy plan cost\" patch, which we fixed recently:\n> http://archives.postgresql.org/pgsql-committers/2005-07/msg00474.php\n> \n> But Tobias wasn't happy with 7.4 either, so I'm not sure that the fuzzy\n> cost issue explains his results.\n> \n> As far as the \"desc\" point goes, the problem is that mergejoins aren't\n> capable of dealing with backward sort order, so a merge plan isn't\n> considered for that case (or at least, it would have to have a sort\n> after it, which pretty much defeats the point for a fast-start plan).\n> I have some ideas about fixing this but it won't happen before 8.2.\n> \n", "msg_date": "Sat, 27 Aug 2005 14:55:22 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> As far as the \"desc\" point goes, the problem is that mergejoins aren't\n> capable of dealing with backward sort order, so a merge plan isn't\n> considered for that case (or at least, it would have to have a sort\n> after it, which pretty much defeats the point for a fast-start plan).\n> I have some ideas about fixing this but it won't happen before 8.2.\n\nOf course in this case assuming \"id\" is an integer column you can just sort by\n-id instead.\n\n-- \ngreg\n\n", "msg_date": "26 Aug 2005 23:03:30 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> joinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \n> c.id ORDER BY c.id DESC LIMIT 5;\n> [ fails to pick an available index-scan-backward plan ]\n\nI looked into this and found that indeed the desirable join plan was\ngetting generated, but it wasn't picked because query_planner didn't\nhave an accurate idea of how much of the join needed to be scanned to\nsatisfy the GROUP BY step. I've committed some changes that hopefully\nwill let 8.1 be smarter about GROUP BY ... 
LIMIT queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Aug 2005 18:17:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join " }, { "msg_contents": "Tom Lane wrote:\n> \n> I looked into this and found that indeed the desirable join plan was\n> getting generated, but it wasn't picked because query_planner didn't\n> have an accurate idea of how much of the join needed to be scanned to\n> satisfy the GROUP BY step. I've committed some changes that hopefully\n> will let 8.1 be smarter about GROUP BY ... LIMIT queries.\n> \n\nVery nice :-)\n\njoinlimit=# EXPLAIN SELECT c.id FROM c JOIN b ON c_id=c.id GROUP BY \nc.id ORDER BY c.id DESC LIMIT 5;\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------\n Limit (cost=0.00..15.23 rows=5 width=4)\n -> Group (cost=0.00..243730.00 rows=80000 width=4)\n -> Nested Loop (cost=0.00..243530.00 rows=80000 width=4)\n -> Index Scan Backward using c_pkey on c \n(cost=0.00..1450.00 rows=80000 width=4)\n -> Index Scan using b_on_c on b (cost=0.00..3.01 \nrows=1 width=4)\n Index Cond: (b.c_id = \"outer\".id)\n(6 rows)\n\nThis is 8.1devel from today.\n\nregards\n\nMark\n", "msg_date": "Sun, 28 Aug 2005 12:48:59 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit + group + join" }, { "msg_contents": "[Tom Lane]\n> I looked into this and (...) I've committed some changes that hopefully will\n> let 8.1 be smarter about GROUP BY ... LIMIT queries.\n\n[Mark Kirkwood]\n> Very nice :-)\n(...)\n> This is 8.1devel from today.\n\nSplendid :-) Unfortunately we will not be upgrading for some monthes still,\nbut anyway I'm happy. This provides yet another good argument for upgrading\nsooner. I'm also happy to see such a perfect match:\n\n - A problem that can be reduced from beeing complex and\n production-specific, to simple and easily reproducible.\n \n - Enthusiastic people testing it and pinpointing even more precisely what\n conditions will cause the condition\n \n - Programmers actually fixing the issue\n \n - Testers verifying that it was fixed\n \nLong live postgresql! :-) \n\n-- \nNotice of Confidentiality: This email is sent unencrypted over the network,\nand may be stored on several email servers; it can be read by third parties\nas easy as a postcard. Do not rely on email for confidential information.\n", "msg_date": "Sun, 28 Aug 2005 03:42:40 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit + group + join" } ]
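For anyone stuck on 7.4/8.0 until that fix ships, one way to express the same result without the GROUP BY is an EXISTS test, which at least gives the planner the option of walking c_pkey backwards and stopping after five rows; whether it actually wins should be verified with EXPLAIN ANALYZE on the real data:

    SELECT c.id
    FROM c
    WHERE EXISTS (SELECT 1 FROM b WHERE b.c_id = c.id)
    ORDER BY c.id DESC
    LIMIT 5;

This returns the same set as the join-plus-GROUP BY form (distinct c.id values that have at least one matching b row), just phrased in a way that avoids the aggregation step that was confusing the planner.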
[ { "msg_contents": ">I have a pl/pgsql function that using temp table to perform searching logic,\n>we have one server running on 512MB, Red Hat 9.0, postgresql-7.4.5.\n>the problem is the pl/pgsql function that i created will increase postmaster memory when calling to function\n>become more frequent, i did a test by getting out all the logic inside the function and what left only\n>create temporary table and drop the temporary table statement (at the end if this function), i monitor the %mem for postmaster\n>using linux command, ps -eo pid,comm,user,%mem | grep postmaster.\n>when i start the postmaster, the %mem show only 2.0 something, but after i run the function for more that 1000 time, then\n>the %mem will go up until 10.0 something.\n>my question is,it is postmaster have memory leaking problem?\n>hope someone can give me some help and best is how to identify the problem it is come from postgresql?\n>\n>thanks\n>regards\n>ivan", "msg_date": "Fri, 26 Aug 2005 14:08:51 +0800", "msg_from": "\"Chun Yit(Chronos)\" <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster memory keep going up????" }, { "msg_contents": "Chun Yit(Chronos) wrote:\n>>I have a pl/pgsql function that using temp table to perform searching logic,\n\n>>my question is,it is postmaster have memory leaking problem?\n\nFirst step - upgrade to the latest 7.4.x release.\n\nSecond step - read the \"release notes\" section of the manuals for 7.4.x\nand 8.0.x and see what it says about memory and plpgsql.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 26 Aug 2005 09:34:31 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster memory keep going up????" } ]
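A minimal skeleton of the pattern being reported, for anyone who wants to reproduce it; only the temporary-table create/drop pair comes from the report, while the names, the EXECUTE usage, and the 7.4-style quoting are filled in as assumptions:

    CREATE OR REPLACE FUNCTION search_tmp_test() RETURNS integer AS '
    BEGIN
        -- EXECUTE keeps plpgsql from caching plans against the temp table
        EXECUTE ''CREATE TEMP TABLE tmp_search (id integer)'';
        -- the searching logic would go here
        EXECUTE ''DROP TABLE tmp_search'';
        RETURN 0;
    END;
    ' LANGUAGE plpgsql;

Calling SELECT search_tmp_test(); a thousand or so times from a client while watching the ps output mirrors the test described above.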
[ { "msg_contents": "Hi list,\n\nI'm writing an application that will aggregate records with a few \nmillion records into averages/sums/minimums etc grouped per day.\n\nClients can add filters and do lots of customization on what they want \nto see. And I've to translate that to one or more queries. Basically, I \nappend each filter as either an extra and-in-the-where or joined with \nthe clauses as ON-clause. The application now uses 8.1devel but I think \nthe basic plans are similar to 8.0. At least for this first query.\n\nI noticed a query taking over 25 seconds to execute:\n\nSELECT \"make a timestamp\" grouper, chart_2.Prijs as field_2_0\nFROM\n pwprijs as chart_2\n JOIN pwprodukten t_0 ON chart_2.ProduktID = t_0.ID AND t_0.Cat2 IN\n (SELECT 545 UNION SELECT ID FROM cat WHERE ParentID = 545)\n JOIN pwprijs t_1 ON chart_2.ProduktID = t_1.ProduktID\n AND t_1.LeverancierID = 938 AND t_1.recordtimestamp >= \"last \ntimestamp\"\nWHERE\n chart_2.Prijs > 0\n\nIt yields quite a long plan, so I've send that as an attachment along.\nBasically it combines two tables against an original to fetch \"all \nprices (of all suppliers) for products of a certain category that are \nsold by a certain supplier\".\n\nI was wondering how rewriting it to subselects would improve \nperformance, but that wasn't a very clear winner. It shaved of about 5 \nseconds. So I took the subselects and used INTERSECT to unite them and \nhave only one IN-clause in the query. That made it go down to around 13 \nseconds.\n\nI noticed it was doing a seq scan on the largest table to get the \"Prijs \n > 0\"-condition. But since there are only 947 of the 7692207 with prijs \n= 0 and none with < 0, it shouldn't be the main thing to look for.\nDropping the clause made a minor improvement in performance for the queries.\n\nBut disabling sequential scans allowed an improvement to only 660 ms \ncompared to the 13 seconds earlier! Row-estimates seem to be quite a bit \noff, so I already set the statistics target to 1000 and re-analyzed.\nBtw, adding the prijs-clause again would make it choose another index \nand thus resulted in much longer operation.\n\nThe final query, only taking 650ms, would be:\n\nSELECT\n \"make a timestamp\" as grouper,\n chart_2.Prijs as field_2_0\nFROM\n pwprijs as chart_2\nWHERE\n chart_2.ProduktID IN (SELECT ID FROM pwprodukten WHERE Cat2 IN \n(SELECT 545 UNION SELECT ID FROM cat WHERE ParentID = 545)\n INTERSECT\n SELECT produktid FROM pwprijs WHERE LeverancierID = 938 \nAND recordtimestamp >= \"last timestamp\")\n\nSo I'm wondering: how can I make postgres decide to use the (correct) \nindex without having to disable seq scans and how can I still add the \nprijs-clause without dropping the index for it (since it should be used \nfor other filters). And for ease of use in my application I'd prefer to \nuse the first query or the version with two seperate IN-clauses.\n\nIs that possible?\n\nI left all the configuration-stuff to the defaults since changing values \ndidn't seem to impact much. Apart from the buffers and effective cache, \nincreasing those made the performance worse.\n\nBest regards,\n\nArjen", "msg_date": "Fri, 26 Aug 2005 13:08:52 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient queryplan for query with intersectable subselects/joins" }, { "msg_contents": "Arjen van der Meijden wrote:\n> \n> I left all the configuration-stuff to the defaults since changing values \n> didn't seem to impact much. 
Apart from the buffers and effective cache, \n> increasing those made the performance worse.\n\nI've not looked at the rest of your problem in detail, but using the \ndefault configuration values is certainly not going to help things. In \nparticular effective_cache is supposed to tell PG how much memory your \nOS is using to cache data.\n\nRead this through and make sure your configuration settings are sane, \nthen it might be worthwhile looking in detail at this particular query.\n http://www.powerpostgresql.com/PerfList\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 26 Aug 2005 14:05:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient queryplan for query with intersectable" }, { "msg_contents": "On 26-8-2005 15:05, Richard Huxton wrote:\n> Arjen van der Meijden wrote:\n> \n>>\n>> I left all the configuration-stuff to the defaults since changing \n>> values didn't seem to impact much. Apart from the buffers and \n>> effective cache, increasing those made the performance worse.\n> \n> \n> I've not looked at the rest of your problem in detail, but using the \n> default configuration values is certainly not going to help things. In \n> particular effective_cache is supposed to tell PG how much memory your \n> OS is using to cache data.\n> \n> Read this through and make sure your configuration settings are sane, \n> then it might be worthwhile looking in detail at this particular query.\n> http://www.powerpostgresql.com/PerfList\n\nThanks for the advice. But as said, I tried such things. Adjusting \nshared buffers to 5000, 10000 or 15000 made minor improvements.\nBut adjusting the effective_cache was indeed very dramatic, to make \nmatters worse!\nChanging the random_page_cost to 2.0 also made it choose another plan, \nbut still not the fast plan.\n\nThe machine has 1GB of memory, so I tested for effective cache size \n10000, 20000, 40000, 60000 and 80000. The \"600ms\"-plan I'm talking about \nwill not come up with an effective cache set to 60000 or above and for \nthe lower values there was no improvement in performance over that \nalready very fast plan.\nAs said, it chooses sequential scans or \"the wrong index plans\" over a \nperfectly good plan that is just not selected when the parameters are \n\"too well tuned\" or sequential scanning of the table is allowed.\n\nSo I'm still looking for a way to get it to use the fast plan, instead \nof the slower ones that appear when I tend to tune the database...\n\n", "msg_date": "Fri, 26 Aug 2005 23:16:09 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient queryplan for query with intersectable" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> As said, it chooses sequential scans or \"the wrong index plans\" over a \n> perfectly good plan that is just not selected when the parameters are \n> \"too well tuned\" or sequential scanning of the table is allowed.\n\nI think some part of the problem comes from using inconsistent\ndatatypes. For instance, it seems very odd that the thing is not\nusing a hash or something to handle\n\n t_0.Cat2 IN (SELECT 545 UNION SELECT ID FROM cat WHERE ParentID = 545)\n\nseeing that it correctly guesses there are only going to be about 8 rows\nin the union. 
Part of the reason is that cat2 is smallint, whereas the\noutput of the union must be at least int, maybe wider depending on the\ndatatype of cat.id (which you did not show us); so the comparison isn't\nhashable. Even a smallint vs int comparison would be mergejoinable,\nthough, so I'm really wondering what cat.id is.\n\nAnother big part of the problem comes from poor result size estimation.\nI'm not sure you can eliminate that entirely given the multiple\nconditions on different columns (which'd require cross-column statistics\nto really do well, which we do not have). But you could avoid\nconstructs like\n\n WHERE ... t_1.recordtimestamp >=\n (SELECT max_date - 60 FROM last_dates WHERE table_name = 'pricetracker')\n\nThe planner is basically going to throw up its hands and make a default\nguess on the selectivity of this; it's not smart enough to decide that\nthe sub-select probably represents a constant. What I'd do with this\nis to define a function marked STABLE for the sub-select result, perhaps\nsomething like\n\ncreate function get_last_date(tabname text, offsetdays int)\nreturns timestamp as $$\nSELECT max_date - $2 FROM last_dates WHERE table_name = $1\n$$ language sql strict stable;\n\n(I'm guessing as to datatypes and the amount of parameterization you\nneed.) Then write the query like\n\n WHERE ... t_1.recordtimestamp >= get_last_date('pricetracker', 60)\n\nIn this formulation the planner will be able to make a reasonable guess\nabout how many rows will match ... at least if your statistics are up\nto date ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 18:56:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient queryplan for query with intersectable " }, { "msg_contents": "\n\nOn 27-8-2005 0:56, Tom Lane wrote:\n> Arjen van der Meijden <[email protected]> writes:\n> \n>>As said, it chooses sequential scans or \"the wrong index plans\" over a \n>>perfectly good plan that is just not selected when the parameters are \n>>\"too well tuned\" or sequential scanning of the table is allowed.\n> \n> \n> I think some part of the problem comes from using inconsistent\n> datatypes. For instance, it seems very odd that the thing is not\n> using a hash or something to handle\n> \n> t_0.Cat2 IN (SELECT 545 UNION SELECT ID FROM cat WHERE ParentID = 545)\n> \n> seeing that it correctly guesses there are only going to be about 8 rows\n> in the union. Part of the reason is that cat2 is smallint, whereas the\n> output of the union must be at least int, maybe wider depending on the\n> datatype of cat.id (which you did not show us); so the comparison isn't\n> hashable. Even a smallint vs int comparison would be mergejoinable,\n> though, so I'm really wondering what cat.id is.\n\ncat.id is a smallint. I replaced that subquery with these two:\nt_0.Cat2 IN (SELECT '545'::smallint UNION SELECT ID FROM cat WHERE \nParentID = '545'::smallint)\n\nt_0.Cat2 IN (SELECT '545' UNION SELECT ID FROM cat WHERE ParentID = '545')\n\nBut appareantly there is a bug in the explain mechanism of the 8.1devel \nI'm using (I downloaded a nightly 25 august somewhere in the morning \n(CEST)), since it returned:\nERROR: bogus varno: 9\n\nSo I can't see whether the plan changed, execution times didn't change \nmuch. I also replaced the subselect with the result of that query (like \n('545', '546', ...) ) but that didn't seem to make much difference in \nthe execution time as well. 
The plan did change of course, it used a \nBitmapOr of 8 Bitmap Index Scans over the pwprodukten.\n\nBy the way, as far as I know, this is the only datatype mismatch in the \nquery.\n\n> Another big part of the problem comes from poor result size estimation.\n> I'm not sure you can eliminate that entirely given the multiple\n> conditions on different columns (which'd require cross-column statistics\n> to really do well, which we do not have). But you could avoid\n> constructs like\n> \n> WHERE ... t_1.recordtimestamp >=\n> (SELECT max_date - 60 FROM last_dates WHERE table_name = 'pricetracker')\n> \n> The planner is basically going to throw up its hands and make a default\n> guess on the selectivity of this; it's not smart enough to decide that\n> the sub-select probably represents a constant. What I'd do with this\n> is to define a function marked STABLE for the sub-select result, perhaps\n> something like\n[...]\n> need.) Then write the query like\n> \n> WHERE ... t_1.recordtimestamp >= get_last_date('pricetracker', 60)\n> \n> In this formulation the planner will be able to make a reasonable guess\n> about how many rows will match ... at least if your statistics are up\n> to date ...\n\nI tried such a function and also tried replacing it with the fixed \noutcome of that suquery itself. Although it has a considerable more \naccurate estimate of the rows returned, it doesn't seem to impact the \nbasic plan much. It does make the sub-query itself use another index \n(the one on the recordtimestamp alone, rather than the combined index on \nleverancierid and recordtimestamp).\nWith that changed subquery it estimates about 4173 rows over 4405 real rows.\n\nActually with the adjusted or original query, it seems to favor the hash \njoin over a nested loop, but the rest of the plan (for the subqueries) \nseems to be exactly the same.\n\nHere is the first part of the explain analyze when it can do any trick \nit wants:\n Hash Join (cost=7367.43..186630.19 rows=132426 width=12) (actual \ntime=191.726..11072.025 rows=58065 loops=1)\n Hash Cond: (\"outer\".produktid = \"inner\".id)\n -> Seq Scan on pwprijs chart_2 (cost=0.00..137491.07 rows=7692207 \nwidth=16) (actual time=0.018..6267.744 rows=7692207 loops=1)\n -> Hash (cost=7366.02..7366.02 rows=565 width=4) (actual \ntime=123.265..123.265 rows=103 loops=1)\n -> SetOp Intersect (cost=7332.10..7360.37 rows=565 width=4) \n(actual time=115.760..123.192 rows=103 loops=1)\n[snip]\t\n\nAnd here is the first (and last) part when I disable hash joins or seq \nscans:\n Nested Loop (cost=7334.92..517159.39 rows=132426 width=12) (actual \ntime=111.905..512.575 rows=58065 loops=1)\n -> SetOp Intersect (cost=7332.10..7360.37 rows=565 width=4) \n(actual time=111.588..120.035 rows=103 loops=1)\n[snip]\n -> Bitmap Heap Scan on pwprijs chart_2 (cost=2.82..895.85 rows=234 \nwidth=16) (actual time=0.344..2.149 rows=564 loops=103)\n Recheck Cond: (chart_2.produktid = \"outer\".id)\n -> Bitmap Index Scan on pwprijs_produktid_idx \n(cost=0.00..2.82 rows=234 width=0) (actual time=0.189..0.189 rows=564 \nloops=103)\n Index Cond: (chart_2.produktid = \"outer\".id)\n\nIs a nested loop normally so much (3x) more costly than a hash join? 
Or \nis it just this query that gets estimated wronly?\n\nBest regards,\n\nArjen\n", "msg_date": "Sat, 27 Aug 2005 12:50:23 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient queryplan for query with intersectable" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> But appareantly there is a bug in the explain mechanism of the 8.1devel \n> I'm using (I downloaded a nightly 25 august somewhere in the morning \n> (CEST)), since it returned:\n> ERROR: bogus varno: 9\n\nYeah, someone else sent in a test case for this failure (or at least one\nwith a similar symptom) yesterday. I'll try to fix it today.\n\n> Is a nested loop normally so much (3x) more costly than a hash join? Or \n> is it just this query that gets estimated wronly?\n\nThere's been some discussion that we are overestimating the cost of\nnestloops in general, because we don't take into account that successive\nscans of the inner relation are likely to find many pages already in\ncache from the earlier scans. So far no one's come up with a good cost\nmodel to use for this, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Aug 2005 10:27:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient queryplan for query with intersectable " }, { "msg_contents": "At 10:27 AM 8/27/2005, Tom Lane wrote:\n>Arjen van der Meijden <[email protected]> writes:\n> > But appareantly there is a bug in the explain mechanism of the 8.1devel\n> > I'm using (I downloaded a nightly 25 august somewhere in the morning\n> > (CEST)), since it returned:\n> > ERROR: bogus varno: 9\n>\n>Yeah, someone else sent in a test case for this failure (or at least one\n>with a similar symptom) yesterday. I'll try to fix it today.\n>\n> > Is a nested loop normally so much (3x) more costly than a hash join? Or\n> > is it just this query that gets estimated wronly?\n>\n>There's been some discussion that we are overestimating the cost of\n>nestloops in general, because we don't take into account that successive\n>scans of the inner relation are likely to find many pages already in\n>cache from the earlier scans. So far no one's come up with a good cost\n>model to use for this, though.\n>\n> regards, tom lane\nIt certainly seems common in the EXPLAIN ANALYZE output I see that \nthe (estimated) cost of Nested Loop is far higher than the actual \ntime measured.\n\nWhat happened when someone tried the naive approach of telling the \nplanner to estimate the cost of a nested loop based on fitting \nwhatever entities are involved in the nested loop in RAM as much as \npossible? 
When there are multiple such mappings, use whichever one \nresults in the lowest cost for the NL in question.\n\nClearly, this should lead to an underestimate of the cost of the \nconstant of operation involved, but since nested loops have the only \npolynomial growth function of the planner's choices, NL's should \nstill have a decent chance of being more expensive than other choices \nunder most circumstances.\n\nIn addition, if those costs are based on actual measurements of how \nlong it takes to do such scans then the estimated cost has a decent \nchance of being fairly accurate under such circumstances.\n\nIt might not work well, but it seems like a reasonable first attempt \nat a solution?\nRon Peacetree\n\n\n", "msg_date": "Sat, 27 Aug 2005 11:07:37 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient queryplan for query with" }, { "msg_contents": "On 27-8-2005 16:27, Tom Lane wrote:\n> Arjen van der Meijden <[email protected]> writes:\n> \n>>Is a nested loop normally so much (3x) more costly than a hash join? Or \n>>is it just this query that gets estimated wronly?\n> \n> There's been some discussion that we are overestimating the cost of\n> nestloops in general, because we don't take into account that successive\n> scans of the inner relation are likely to find many pages already in\n> cache from the earlier scans. So far no one's come up with a good cost\n> model to use for this, though.\n\nAh, that explains. I take it, there already is an estimation for the \ncost of \"the amount of pages that will be loaded for this operation\". \nFor indexed lookups this will probably be something like \"the amount of \nexpected pages to fetch * the random page cost\"?\n\nAnd appareantly for the nested loop its something like \"iterations * \namount of pages per iteration * random page cost\" ?\nThe naive approach seems to me, is to just calculate the probable amount \nof pages to fetch from disk rather than from cache.\n\nIn this case there are 7692207 rows in 60569 pages and on average 234 \nrows per product (per nested loop) in the estimation. It estimates that \nit'll have to do 565 iterations.\nIn worst case for the first 234 rows, no pages are already cached and \nthe rows are all in a seperate page. So thats 234 pages to fetch.\nIn the second iteration, you know already 234 pages are fetched and \nthat's about 0.386% of the total pages. So the expected amount of pages \nfor the next 234 pages expected to be in cache is 234 * 0.00386 = 1. \nAfter that you'll have 234 + 233 pages in cache, etc, etc.\nFollowing that approach, the 565th iteration only has to pull in about \n27 new pages in the worst case of all records being perfectly scattered \nover the pages, not 234.\n\nOf course this has to be adjusted for the amount of available buffers \nand cache and the expected amount of pages to fetch for the iterations, \nwhich may be less than 234.\n\nWhen a fetch of a random page costs 4 and one from cache 0.01, there is \nquite a large difference: 565 * (234 * 4) = 530535 vs 215864,93\n\nActually the likeliness of a page being in cache is a bit higher, since \nthe expectation increases for each newly fetched page, not for batches \nof 234. I didn't use that in my calculation here.\n\nAnyway, this is probably been thought over already and there may be many \nflaws in it. 
If not, please think it over.\n\nBest regards,\n\nArjen\n", "msg_date": "Sun, 28 Aug 2005 16:44:37 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient queryplan for query with intersectable" } ]
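The comparison discussed above can be reproduced in a single psql session along these lines; this is only a sketch (the select list is simplified to the price column) and it assumes Tom's get_last_date() helper from earlier in the thread has been created:

    EXPLAIN ANALYZE
    SELECT chart_2.prijs
    FROM pwprijs AS chart_2
    WHERE chart_2.produktid IN (
        SELECT id FROM pwprodukten
        WHERE cat2 IN (SELECT 545 UNION SELECT id FROM cat WHERE parentid = 545)
        INTERSECT
        SELECT produktid FROM pwprijs
        WHERE leverancierid = 938
          AND recordtimestamp >= get_last_date('pricetracker', 60));

    SET enable_seqscan = off;
    SET enable_hashjoin = off;
    -- re-run the same EXPLAIN ANALYZE here to get the nested-loop/index plan
    RESET enable_seqscan;
    RESET enable_hashjoin;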
[ { "msg_contents": "Hello all,\n\nI was hoping someone could explain the plan for a statement. \n\nWe have a table with a column of longs being used as an index. The \nquery plan in 8.0 was like this:\n\n# explain select distinct timeseriesid from tbltimeseries where \ntimeseriesid > 0 order by timeseriesid;\nSET\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..15065908.60 rows=10854026 width=8)\n -> Index Scan using idx_timeseris on tbltimeseries \n(cost=0.00..15038773.53 rows=10854026 width=8)\n Index Cond: (timeseriesid > 0)\n(3 rows)\n\n\n\nIn 8.1, (using the same database after a dump+restore+vacuum+analyze) I \nget the following:\n# explain select distinct timeseriesid from tbltimeseries where \ntimeseriesid > 0 order by timeseriesid;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Unique (cost=2717137.08..2771407.21 rows=10854026 width=8)\n -> Sort (cost=2717137.08..2744272.14 rows=10854026 width=8)\n Sort Key: timeseriesid\n -> Bitmap Heap Scan on tbltimeseries \n(cost=48714.09..1331000.42 rows=10854026 width=8)\n Recheck Cond: (timeseriesid > 0)\n -> Bitmap Index Scan on idx_timeseris \n(cost=0.00..48714.09 rows=10854026 width=0)\n Index Cond: (timeseriesid > 0)\n(7 rows)\n\n\nI'm hoping someone can explain the new query plan (as I'm not sure I \nunderstand what it is doing).\n\nThanks!\n\n-- Alan\n", "msg_date": "Fri, 26 Aug 2005 10:45:07 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "difference in plan between 8.0 and 8.1?" }, { "msg_contents": "On Fri, Aug 26, 2005 at 10:45:07AM -0400, Alan Stange wrote:\n> -> Bitmap Heap Scan on tbltimeseries (cost=48714.09..1331000.42 rows=10854026 width=8)\n> Recheck Cond: (timeseriesid > 0)\n> -> Bitmap Index Scan on idx_timeseris (cost=0.00..48714.09 rows=10854026 width=0)\n> Index Cond: (timeseriesid > 0)\n> \n> I'm hoping someone can explain the new query plan (as I'm not sure I \n> understand what it is doing).\n\nSearch for \"bitmap\" in the 8.1 Release Notes:\n\nhttp://developer.postgresql.org/docs/postgres/release.html#RELEASE-8-1\n\nYou could probably find more detailed discussion in the pgsql-hackers\narchives.\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 26 Aug 2005 09:16:08 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difference in plan between 8.0 and 8.1?" }, { "msg_contents": "Alan Stange <[email protected]> writes:\n> Unique (cost=2717137.08..2771407.21 rows=10854026 width=8)\n> -> Sort (cost=2717137.08..2744272.14 rows=10854026 width=8)\n> Sort Key: timeseriesid\n> -> Bitmap Heap Scan on tbltimeseries \n> (cost=48714.09..1331000.42 rows=10854026 width=8)\n> Recheck Cond: (timeseriesid > 0)\n> -> Bitmap Index Scan on idx_timeseris \n> (cost=0.00..48714.09 rows=10854026 width=0)\n> Index Cond: (timeseriesid > 0)\n> (7 rows)\n\n> I'm hoping someone can explain the new query plan (as I'm not sure I \n> understand what it is doing).\n\nThe index scan is reading the index to find out which heap tuple IDs\n(TIDs) the index says meet the condition. It returns a bitmap of the\ntuple locations (actually, an array of per-page bitmaps). The heap\nscan goes and fetches the tuples from the table, working in TID order\nto avoid re-reading the same page many times, as can happen for ordinary\nindex scans. 
Since the result isn't sorted, we have to do a sort to get\nit into the correct order for the Unique step.\n\nBecause it avoids random access to the heap, this plan can be a lot\nfaster than a regular index scan. I'm not sure at all that 8.1 is\ndoing good relative cost estimation yet, though. It would be\ninteresting to see EXPLAIN ANALYZE results for both ways. (You can\nuse enable_bitmapscan and enable_indexscan to force the planner to pick\nthe plan it thinks is slower.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 11:16:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difference in plan between 8.0 and 8.1? " }, { "msg_contents": "Tom Lane wrote:\n> Alan Stange <[email protected]> writes:\n> \n>> Unique (cost=2717137.08..2771407.21 rows=10854026 width=8)\n>> -> Sort (cost=2717137.08..2744272.14 rows=10854026 width=8)\n>> Sort Key: timeseriesid\n>> -> Bitmap Heap Scan on tbltimeseries \n>> (cost=48714.09..1331000.42 rows=10854026 width=8)\n>> Recheck Cond: (timeseriesid > 0)\n>> -> Bitmap Index Scan on idx_timeseris \n>> (cost=0.00..48714.09 rows=10854026 width=0)\n>> Index Cond: (timeseriesid > 0)\n>> (7 rows)\n>> \n>\n> \n>> I'm hoping someone can explain the new query plan (as I'm not sure I \n>> understand what it is doing).\n>> \n>\n> The index scan is reading the index to find out which heap tuple IDs\n> (TIDs) the index says meet the condition. It returns a bitmap of the\n> tuple locations (actually, an array of per-page bitmaps). The heap\n> scan goes and fetches the tuples from the table, working in TID order\n> to avoid re-reading the same page many times, as can happen for ordinary\n> index scans. Since the result isn't sorted, we have to do a sort to get\n> it into the correct order for the Unique step.\n>\n> Because it avoids random access to the heap, this plan can be a lot\n> faster than a regular index scan. I'm not sure at all that 8.1 is\n> doing good relative cost estimation yet, though. It would be\n> interesting to see EXPLAIN ANALYZE results for both ways. (You can\n> use enable_bitmapscan and enable_indexscan to force the planner to pick\n> the plan it thinks is slower.)\nJust to be clear. The index is on the timeseriesid column. 
Also, We \nusually have the where clause with some non-zero number.\n\nAnyway, here's the basic query, with variations added on belowe:\n\nfiasco=# explain analyze select timeseriesid from tbltimeseries where \ntimeseriesid > 0;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbltimeseries (cost=48906.82..1332935.19 \nrows=10905949 width=8) (actual time=16476.337..787480.979 rows=10907853 \nloops=1)\n Recheck Cond: (timeseriesid > 0)\n -> Bitmap Index Scan on idx_timeseris (cost=0.00..48906.82 \nrows=10905949 width=0) (actual time=16443.585..16443.585 rows=10907853 \nloops=1)\n Index Cond: (timeseriesid > 0)\n Total runtime: 791340.341 ms\n(5 rows)\n\n\n\nNow add the order:\n\nfiasco=# explain analyze select timeseriesid from tbltimeseries where \ntimeseriesid > 0 order by timeseriesid;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2726087.93..2753352.81 rows=10905949 width=8) (actual \ntime=821090.666..826353.054 rows=10913868 loops=1)\n Sort Key: timeseriesid\n -> Bitmap Heap Scan on tbltimeseries (cost=48912.82..1332941.19 \nrows=10905949 width=8) (actual time=16353.921..757075.349 rows=10913868 \nloops=1)\n Recheck Cond: (timeseriesid > 0)\n -> Bitmap Index Scan on idx_timeseris (cost=0.00..48912.82 \nrows=10905949 width=0) (actual time=16335.239..16335.239 rows=10913868 \nloops=1)\n Index Cond: (timeseriesid > 0)\n Total runtime: 830829.145 ms\n(7 rows)\n\n\n\n\nand the distinct:\n\nfiasco=# explain analyze select distinct timeseriesid from tbltimeseries \nwhere timeseriesid > 0 order by timeseriesid;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2726087.93..2780617.68 rows=10905949 width=8) (actual \ntime=816938.970..831119.423 rows=10913868 loops=1)\n -> Sort (cost=2726087.93..2753352.81 rows=10905949 width=8) (actual \ntime=816938.967..822298.802 rows=10913868 loops=1)\n Sort Key: timeseriesid\n -> Bitmap Heap Scan on tbltimeseries \n(cost=48912.82..1332941.19 rows=10905949 width=8) (actual \ntime=15866.736..752851.006 rows=10913868 loops=1)\n Recheck Cond: (timeseriesid > 0)\n -> Bitmap Index Scan on idx_timeseris \n(cost=0.00..48912.82 rows=10905949 width=0) (actual \ntime=15852.652..15852.652 rows=10913868 loops=1)\n Index Cond: (timeseriesid > 0)\n Total runtime: 835558.312 ms\n(8 rows)\n\n\n\n\nNow the usual query from 8.0:\n\nfiasco=# set enable_bitmapscan=false; explain analyze select distinct \ntimeseriesid from tbltimeseries where timeseriesid > 0 order by \ntimeseriesid;\nSET\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..14971276.10 rows=10905949 width=8) (actual \ntime=24.930..999645.638 rows=10913868 loops=1)\n -> Index Scan using idx_timeseris on tbltimeseries \n(cost=0.00..14944011.22 rows=10905949 width=8) (actual \ntime=24.926..989117.882 rows=10913868 loops=1)\n Index Cond: (timeseriesid > 0)\n Total runtime: 1003549.067 ms\n(4 rows)\n\n\n\n\nAnd now a sequential scan of the table itself:\n\nfiasco=# set enable_bitmapscan=false; set enable_indexscan=false; \nexplain analyze select distinct timeseriesid from 
tbltimeseries where \ntimeseriesid > 0 order by timeseriesid;\nSET\nSET\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=102677188.75..102731718.49 rows=10905949 width=8) (actual \ntime=956783.989..971036.657 rows=10919883 loops=1)\n -> Sort (cost=102677188.75..102704453.62 rows=10905949 width=8) \n(actual time=956783.985..962115.616 rows=10919883 loops=1)\n Sort Key: timeseriesid\n -> Seq Scan on tbltimeseries (cost=100000000.00..101284042.00 \nrows=10905949 width=8) (actual time=7.314..893267.030 rows=10919883 loops=1)\n Filter: (timeseriesid > 0)\n Total runtime: 975393.678 ms\n(6 rows)\n\n\nFor us, the query is best served by the index scan as the ordering comes \nfor free and results can be streamed to a client immediately. So, while \nthe whole query is a bit slower, the client can begin processing the \nresults immediately. The client has three threads which stream in two \nsets of id's and emit delete statements in smaller batches. It can be \ndone as one statement, but on our production system that statement can \nrun for 10 hours and delete 20M rows...which conflicts with the vacuum \nprocess. This version can be throttled, stopped and restarted at any \ntime and no work is lost compared to a single long running query.\n", "msg_date": "Fri, 26 Aug 2005 15:01:55 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difference in plan between 8.0 and 8.1?" } ]
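A rough sketch of the throttled, batched delete described in the last paragraph; the id list in the DELETE is a placeholder for whatever batch the client-side threads have assembled, so treat this only as an illustration of the pattern:

    -- client side: stream the ids in index order so processing can start immediately
    SELECT DISTINCT timeseriesid
    FROM tbltimeseries
    WHERE timeseriesid > 0
    ORDER BY timeseriesid;

    -- and, from another connection, emit small deletes as batches are decided
    DELETE FROM tbltimeseries
    WHERE timeseriesid IN (1000001, 1000007, 1000012, 1000038);

Each batch is short enough to be stopped, restarted, or throttled without losing work and without holding a single delete open for hours against vacuum.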
[ { "msg_contents": "Mark Kirkwood\n> > The 'desc' seems to be the guy triggering the sort, e.g:\n> \n> Oh; really an accident that I didn't notice myself, I was actually\ngoing\n> to\n> remove all instances of \"desc\" in my simplification, but seems like I\n> forgot.\n\nIf desc is the problem you can push the query into a subquery without\nsorting and sort the result. This is called an inline view. Sometimes\nyou can pull a couple of tricks to force the view to materialize before\nit is sorted.\n\naka \nselect q.*\nfrom\n(\n some_complex_query\t\n) q order by ...;\n\nMerlin\n", "msg_date": "Fri, 26 Aug 2005 10:49:23 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit + group + join" } ]
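Applied to the b/c test tables from earlier in the thread, Merlin's skeleton would look something like this; the OFFSET 0 fence is an extra assumption (a common way to discourage the subquery from being flattened), not something spelled out above:

    SELECT q.id
    FROM (
        SELECT c.id
        FROM c JOIN b ON c_id = c.id
        GROUP BY c.id
        OFFSET 0
    ) AS q
    ORDER BY q.id DESC
    LIMIT 5;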
[ { "msg_contents": "> Hello all,\n> \n> I was hoping someone could explain the plan for a statement.\n> \n> We have a table with a column of longs being used as an index. The\n> query plan in 8.0 was like this:\n> \n> # explain select distinct timeseriesid from tbltimeseries where\n> timeseriesid > 0 order by timeseriesid;\n\nI had the same problem. You probably already have seq scan turned off,\nor the server would be using that. You may have to turn bitmap off or\nrework your query such that the server will use the index. (between?).\n\nAnyways, distinct is code word for 'bad performance' :). Consider\nlaying out tables such that it is not necessary, for example set up a table\nwith an RI link. Then you can do this in zero time.\n\nGood luck!\n\nMerlin\n", "msg_date": "Fri, 26 Aug 2005 11:12:48 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difference in plan between 8.0 and 8.1?" } ]
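One way to read the "RI link" suggestion, sketched with invented names: keep the distinct ids in a small parent table and point the big table at it with a foreign key, so the DISTINCT collapses into a plain ordered scan (the parent table has to be populated before the constraint can be added):

    CREATE TABLE timeseries (
        timeseriesid bigint PRIMARY KEY
    );

    ALTER TABLE tbltimeseries
        ADD CONSTRAINT tbltimeseries_timeseriesid_fkey
        FOREIGN KEY (timeseriesid) REFERENCES timeseries (timeseriesid);

    -- the expensive SELECT DISTINCT then becomes:
    SELECT timeseriesid
    FROM timeseries
    WHERE timeseriesid > 0
    ORDER BY timeseriesid;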
[ { "msg_contents": "\n I would like to know if the following kind of database client exists: I need a 'select' query to be sent to say 10 db servers simultaneously in parallel (using threading), the results should be re-sorted and returned. For example I have a query: 'select * from table where parent_clname = 'parent' order by name limit 10'. Now this query has to be sent to 10 servers, and the maximum number of results would be 100. Now this 100 result set has to be re-sorted, out of which 90 has to be discarded, and the 10 has to be returned.\n\n Does such a solution exist now. To me this appears to be in entirety of what should constitute a database cluster. Only the search needs to be done on all the servers simultaneously at the low level. Once you get the results, the writing can be determined by the upper level logic (which can even be in a scripting language). But the search across many servers has to be done using proper threading, and the re-sorting also needs to be done fast. \n\n Thanks a lot in advance.\n\n\n--\n:: Ligesh :: http://ligesh.com \n\n\n", "msg_date": "Fri, 26 Aug 2005 20:54:09 +0530", "msg_from": "Ligesh <[email protected]>", "msg_from_op": true, "msg_subject": "Sending a select to multiple servers." }, { "msg_contents": "On Fri, 26 Aug 2005 20:54:09 +0530\nLigesh <[email protected]> wrote:\n\n> \n> I would like to know if the following kind of database client exists:\n> I need a 'select' query to be sent to say 10 db servers\n> simultaneously in parallel (using threading), the results should be\n> re-sorted and returned. For example I have a query: 'select * from\n> table where parent_clname = 'parent' order by name limit 10'. Now\n> this query has to be sent to 10 servers, and the maximum number of\n> results would be 100. Now this 100 result set has to be re-sorted,\n> out of which 90 has to be discarded, and the 10 has to be returned.\n> \n> Does such a solution exist now. To me this appears to be in entirety\n> of what should constitute a database cluster. Only the search needs\n> to be done on all the servers simultaneously at the low level. Once\n> you get the results, the writing can be determined by the upper level\n> logic (which can even be in a scripting language). But the search\n> across many servers has to be done using proper threading, and the\n> re-sorting also needs to be done fast. \n\n This is typically handled by the application layer, not a standard\n client. Mostly because every situation is different, you may have\n 10 servers and need 10 rows of results, others may need something\n entirely different. \n\n This isn't really a \"cluster\" either. In a clustered environment\n you would send the one query to any of the 10 servers and it would\n return the proper results. \n\n But like I said this type of application is fairly trivial to write\n in most scripting or higher level languages. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 26 Aug 2005 11:04:59 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sending a select to multiple servers." }, { "msg_contents": "On Fri, Aug 26, 2005 at 11:04:59AM -0500, Frank Wiles wrote:\n> On Fri, 26 Aug 2005 20:54:09 +0530\n> This is typically handled by the application layer, not a standard\n> client. 
Mostly because every situation is different, you may have\n> 10 servers and need 10 rows of results, others may need something\n> entirely different. \n> \n> This isn't really a \"cluster\" either. In a clustered environment\n> you would send the one query to any of the 10 servers and it would\n> return the proper results. \n> \n> But like I said this type of application is fairly trivial to write\n> in most scripting or higher level languages. \n> \n\n The cluster logic is sort of implemented by this client library. If you write this at higher level the scalability becomes an issue. For 10 servers it is alright. If you want to retrieve 100 rows, and there are 100 servers, you will have a total result set of 10,000, which should be re-sorted and the 9900 of them should be droped, and only the 100 should be returned.\n\n Anyway, what I want to know is, if there is such a functionality offered by some C library.\n\n Thanks.\n\n--\n:: Ligesh :: http://ligesh.com \n", "msg_date": "Sat, 27 Aug 2005 04:15:40 +0530", "msg_from": "Ligesh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sending a select to multiple servers." }, { "msg_contents": "On Fri, Aug 26, 2005 at 11:04:59AM -0500, Frank Wiles wrote:\n> On Fri, 26 Aug 2005 20:54:09 +0530\n> Ligesh <[email protected]> wrote:\n\n\n> Mostly because every situation is different, you may have\n> 10 servers and need 10 rows of results, others may need something\n> entirely different. \n> \n\n No. I have say 'm' number of servers, and I need 'n' rows. To get the results, you need to run the query against all the 'm' servers, which will return 'm x n' results, then you have to re-sort it and drop the 'm x n - n' rows and return only the 'n'. So this is like retrieving the 'n' rows amongst ALL the servers, that satisfy your search criteria. \n\n Once you retrieve the data, you will know which server each row belongs to, and you can do the writes yourself at the higher level.\n\n Thanks.\n\n--\n:: Ligesh :: http://ligesh.com \n\n\n", "msg_date": "Sat, 27 Aug 2005 04:21:23 +0530", "msg_from": "Ligesh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sending a select to multiple servers." } ]
[ { "msg_contents": "> Does such a solution exist now. To me this appears to be in entirety\nof\n> what should constitute a database cluster. Only the search needs to be\n> done on all the servers simultaneously at the low level. Once you get\nthe\n> results, the writing can be determined by the upper level logic (which\ncan\n> even be in a scripting language). But the search across many servers\nhas\n> to be done using proper threading, and the re-sorting also needs to be\n> done fast.\n\nWell, the fastest way would be to write a libpq wrapper; personally I\nwould choose C++ for extreme performance. STL brings super fast sorting\nto the table and will make dealing with ExecParams/ExecPrepared a little\nbit easier. To make it available from scripting languages you need to write\nC wrappers for the interface functions and build them into a shared library.\n\nYou could use any of a number of high level scripting languages but\nperformance will not be as good. YMMV.\n\nAnother interesting take on this problem would be to use the dblink\ncontrib module. Check that out and see if it can meet your needs.\n\nMerlin\n", "msg_date": "Fri, 26 Aug 2005 12:11:18 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sending a select to multiple servers." } ]
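A minimal dblink sketch of that idea; the connection strings, table and column names are invented, and note that the calls inside a single statement run one after another, so true parallelism would still need one connection per server driven from the client side:

    SELECT *
    FROM (
        SELECT * FROM dblink('host=db1 dbname=app',
            'SELECT id, name FROM t WHERE parent_clname = ''parent'' ORDER BY name LIMIT 10')
            AS t1(id integer, name text)
        UNION ALL
        SELECT * FROM dblink('host=db2 dbname=app',
            'SELECT id, name FROM t WHERE parent_clname = ''parent'' ORDER BY name LIMIT 10')
            AS t2(id integer, name text)
    ) AS merged
    ORDER BY name
    LIMIT 10;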
[ { "msg_contents": "Well folks, I've been trying to track down why this Athlon 2800 \n(2.1ghz) has been handing my 2.5ghz G5 its cake. I have a query that \n(makes no io - the dataset can live in ram easily) takes about 700ms \non the athlon and about 10 seconds on the G5.\n\nTracking ti down a bit timestamp_cmp_internal (The btree was made of \na timestamp & and int) was taking a large amount of time - \nspecifically all the calls it makes to isnan(x). 14.1% in __isnand \n(which is the libSystem function & guts, which according to the \ndarwin source copies the double to memory and accesses it as 2 ints \nlooking for a specific pattern). (For reference, the other top \nfunctions are _bt_checkkeys at 30%, FunctionCall2 at 15.8% , _bt_step \nat 9% and _bt_first at 7%) .\n\nTalking to some of the mac super guru's on irc they said the problem \nis how the Mach-O ABI works, basically you get kicked in the nuts for \naccessing global or static data (like those constants __isnand \nuses). (You can read http://www.unsanity.org/archives/000044.php for \na touch of info on it).\n\nI think given the function-call-rich arch of PG may make its \nperformance on OSX always lower than other counterparts. Especially \nthings like that __isnand.\n\nI'm going to be doing a couple experiments: 1. making an inline \nversion of isnan to see how that improves performance 2. Trying it \nout on linux ppc to see how it runs. It may be worth noting these \nin the docs or faq somewhere.\n\nAlso, two things to note, one of which is quite important: On tiger \n(10.4) PG compiles with NO OPTIMIZATION. Probably a template file \nneeds to be updated.\nPanther seems to compile with -O2 though.\n\nIf you want to profile PG on Tiger do not use gprof - it seems to be \nbroken. I get func call #s, but no timing data. Instead you can do \nsomething even better - compile PG normally and attach to it with \nShark (Comes with the CHUD tools) and check out its profile. Quite \nslick actually :)\n\nI'll keep people updated on my progress, but I just wanted to get \nthese issues out in the air.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 26 Aug 2005 14:58:17 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": true, "msg_subject": "OSX & Performance" }, { "msg_contents": "Jeff Trout <[email protected]> writes:\n> Tracking ti down a bit timestamp_cmp_internal (The btree was made of \n> a timestamp & and int) was taking a large amount of time - \n> specifically all the calls it makes to isnan(x). 14.1% in __isnand \n\nHmm, can you provide a test case for other people to poke at?\n\n> Also, two things to note, one of which is quite important: On tiger \n> (10.4) PG compiles with NO OPTIMIZATION. Probably a template file \n> needs to be updated.\n> Panther seems to compile with -O2 though.\n\nI see -O2 when building PG (CVS tip) on a fully up-to-date 10.4.2\nmachine. Maybe something odd in your environment, like a preset\nCFLAGS setting?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Aug 2005 16:00:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSX & Performance " }, { "msg_contents": "\nOn Aug 28, 2005, at 4:00 PM, Tom Lane wrote:\n\n>\n> Hmm, can you provide a test case for other people to poke at?\n>\n\nI'l try to put one together as small as I can make it.\nThe table in question is roughly 22M rows. 
There are about 8k rows \nper timestamp (day granularity).\n\n> I see -O2 when building PG (CVS tip) on a fully up-to-date 10.4.2\n> machine. Maybe something odd in your environment, like a preset\n> CFLAGS setting?\n>\n\n8.0.3 doesn't have any optimization flags\n8.1beta1 doesn't have any optimization\nie: gcc -no-cpp-precomp -Wall -Wmissing-prototypes -Wpointer-arith - \nWdeclaration-after-statement -Wold-style-definition -Wendif-labels - \nfno-strict-aliasing -I../../src/port -I../../src/include -c \nthread.c -o thread_srv.o\n\nI'm on 10.4.2, xcode 2.1\nUsing built-in specs.\nTarget: powerpc-apple-darwin8\nConfigured with: /private/var/tmp/gcc/gcc-5026.obj~19/src/configure -- \ndisable-checking --prefix=/usr --mandir=/share/man --enable- \nlanguages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^+.-]*$/ \ns/$/-4.0/ --with-gxx-include-dir=/include/gcc/darwin/4.0/c++ -- \nbuild=powerpc-apple-darwin8 --host=powerpc-apple-darwin8 -- \ntarget=powerpc-apple-darwin8\nThread model: posix\ngcc version 4.0.0 (Apple Computer, Inc. build 5026)\n\nThe snapshot on ftp.psotgresql.org (dated 8/29) also runs with no \noptimization.\n\nNo cflags are set.\n\nneed to see anything from config.log?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Mon, 29 Aug 2005 13:27:25 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OSX & Performance " }, { "msg_contents": "Jeff Trout <[email protected]> writes:\n> On Aug 28, 2005, at 4:00 PM, Tom Lane wrote:\n>> I see -O2 when building PG (CVS tip) on a fully up-to-date 10.4.2\n>> machine. Maybe something odd in your environment, like a preset\n>> CFLAGS setting?\n\n> 8.0.3 doesn't have any optimization flags\n> 8.1beta1 doesn't have any optimization\n> ie: gcc -no-cpp-precomp -Wall -Wmissing-prototypes -Wpointer-arith - \n> Wdeclaration-after-statement -Wold-style-definition -Wendif-labels - \n> fno-strict-aliasing -I../../src/port -I../../src/include -c \n> thread.c -o thread_srv.o\n\nYou must have CFLAGS set to empty in your build environment, because\nconfigure will certainly default to -O2 if not overridden. It works\nfine for me on OS X. Maybe you want to trace through the configure\nscript and see why it's doing something else?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Aug 2005 13:57:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSX & Performance " }, { "msg_contents": "\nOn Aug 29, 2005, at 1:57 PM, Tom Lane wrote:\n>\n> You must have CFLAGS set to empty in your build environment, because\n> configure will certainly default to -O2 if not overridden. It works\n> fine for me on OS X. Maybe you want to trace through the configure\n> script and see why it's doing something else?\n>\n\n/me hangs head in shame.\n\nYes. I'd been futzing with various settings and had CFLAGS set to \nempty instead of cleared out. 8.0.3 and -snapshot (8/29) both seem \nto now compile with -O2\n\nAnyway, I tried putting together a nice self-data-producing test case \nbut that didn't cause the bug. 
So I'm trying to get this dump as \nsmall as possible (I'll email you a url later).\n\nTo tide things over, here's the gprof (and shark) output for my query \nof doom.\n\nlinux box:\n\n 6.36 0.41 0.41 240694 0.00 0.00 _bt_compare\n 5.97 0.79 0.38 907242 0.00 0.00 AllocSetAlloc\n 4.55 1.07 0.29 135008 0.00 0.00 hash_any\n 4.16 1.34 0.27 185684 0.00 0.00 \nMemoryContextAllocZeroAlig\nned\n 3.30 1.55 0.21 39152 0.01 0.01 localsub\n 2.98 1.74 0.19 1213172 0.00 0.00 AllocSetFreeIndex\n 2.83 1.92 0.18 52695 0.00 0.00 nocachegetattr\n 2.75 2.10 0.17 134775 0.00 0.00 hash_search\n 2.51 2.25 0.16 47646 0.00 0.01 \nStrategyBufferLookup\n 2.28 2.40 0.14 71990 0.00 0.00 fmgr_isbuiltin\n 2.20 2.54 0.14 33209 0.00 0.00 _bt_moveright\n 1.88 2.66 0.12 78864 0.00 0.00 comparetup_heap\n 1.57 2.76 0.10 63485 0.00 0.00 SearchCatCache\n 1.41 2.85 0.09 39152 0.00 0.00 timesub\n 1.26 2.93 0.08 325246 0.00 0.00 tas\n 1.26 3.01 0.08 305883 0.00 0.00 AllocSetFree\n 1.26 3.09 0.08 162622 0.00 0.00 LWLockAcquire\n\nand on osx: (self, total, library, func)\n\n 29.0% 29.0% postmaster _bt_checkkeys\n 15.6% 15.6% postmaster FunctionCall2\n 10.4% 10.4% libSystem.B.dylib __isnand\n 9.5% 9.5% postmaster timestamp_cmp_internal\n 9.3% 9.3% postmaster _bt_step\n 5.3% 5.3% postmaster timestamp_le\n 4.9% 4.9% postmaster _bt_next\n 3.6% 3.6% postmaster dyld_stub___isnand\n 3.1% 3.1% postmaster timestamp_gt\n 1.9% 1.9% postmaster int4eq\n 1.3% 1.3% postmaster BufferGetBlockNumber\n 0.6% 0.6% postmaster LWLockAcquire\n 0.5% 0.5% postmaster LWLockRelease\n 0.4% 0.4% postmaster hash_search\n\nOn my failed simulated attempt here's what things looked liek (the \ndata should have been relatively similar).\n\nlinux:\n\n 5.39 0.28 0.28 852086 0.00 0.00 AllocSetAlloc\n 4.90 0.53 0.25 130165 0.00 0.00 hash_any\n 4.12 0.73 0.21 214061 0.00 0.00 _bt_compare\n 4.12 0.94 0.21 39152 0.01 0.01 localsub\n 4.02 1.15 0.20 160487 0.00 0.00 \nMemoryContextAllocZeroAlig\nned\n 3.24 1.31 0.17 1157316 0.00 0.00 AllocSetFreeIndex\n 3.14 1.48 0.16 64375 0.00 0.00 fmgr_isbuiltin\n 2.55 1.60 0.13 56142 0.00 0.00 SearchCatCache\n 2.35 1.73 0.12 130076 0.00 0.00 hash_search\n 1.76 1.81 0.09 39152 0.00 0.00 timesub\n 1.67 1.90 0.09 221469 0.00 0.00 \ntimestamp_cmp_internal\n 1.67 1.99 0.09 56069 0.00 0.00 \nMemoryContextCreate\n 1.57 2.06 0.08 145787 0.00 0.00 LWLockRelease\n 1.37 2.13 0.07 289119 0.00 0.00 pfree\n 1.37 2.21 0.07 8002 0.01 0.02 \nExecMakeFunctionResult\n 1.37 2.27 0.07 8000 0.01 0.22 ExecInitIndexScan\n 1.18 2.33 0.06 291574 0.00 0.00 tas\n\nand on osx: (which runs very fast, usually a couple hundred ms faster \nthan the linux box)\n\n 5.9% 5.9% postmaster LWLockAcquire\n 5.2% 5.2% postmaster AllocSetAlloc\n 4.9% 4.9% postmaster LWLockRelease\n 3.9% 3.9% postmaster hash_any\n 3.6% 3.6% postmaster _bt_compare\n 2.9% 2.9% postmaster hash_search\n 2.6% 2.6% postmaster MemoryContextAllocZeroAligned\n 2.6% 2.6% postmaster ExecInitExpr\n 2.0% 2.0% mach_kernel ml_set_interrupts_enabled\n 2.0% 2.0% postmaster fmgr_info_cxt_security\n 2.0% 2.0% postmaster AllocSetFree\n 1.6% 1.6% postmaster MemoryContextAlloc\n 1.6% 1.6% postmaster FunctionCall2\n 1.6% 1.6% postmaster AllocSetDelete\n 1.6% 1.6% libSystem.B.dylib __isnand\n\nwhich to me anyway, looks like basically the same profile.\nSo there must be something about the exact nature of hte data that is \nkicking it in the nuts.\n\nI tried making a copy of hte table using select into, I get the same \nperformace. Clustered on the index.. 
same thing.\n\nThe table is a timestamp (no tz), 2 ints and 4 doubles. The index is \non (timestamp, int1)\n\nAs I said before, I'll send a url along to the dump once it has \ndumped and I get it somewhere good (unless I get my test data \ngenerator to invoke this problem). I could also get you access to \nthis machine, but be warned gprof on tiger is pretty useless from \nwhat I've seen.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 30 Aug 2005 10:04:44 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OSX & Performance " } ]
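For reference, a rough generator for a table shaped like the one described (timestamp, two ints, four doubles, composite index); every name here is invented and the row counts are scaled down, so treat it only as a starting point for a test case:

    CREATE TABLE ts_test (
        ts  timestamp without time zone,
        a   integer,
        b   integer,
        v1  double precision,
        v2  double precision,
        v3  double precision,
        v4  double precision
    );

    -- roughly 8k rows per day over 250 days
    INSERT INTO ts_test
    SELECT timestamp '2005-01-01' + d * interval '1 day',
           i,
           i % 100,
           random(), random(), random(), random()
    FROM generate_series(0, 249) AS d,
         generate_series(1, 8000) AS i;

    CREATE INDEX ts_test_ts_a_idx ON ts_test (ts, a);
    ANALYZE ts_test;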
[ { "msg_contents": "Hopefully a quick question.\n\nIn 7.3.4, how does the planner execute a query with union alls in it?\n\nDoes it execute the unions serially, or does it launch a \"thread\" for\neach union (or maybe something else entirely).\n\nThanks,\n\nChris\n\nHere is an explain from the view I'm thinking about, how does postgres\nrun this query?\nhmd=# explain select count(1) from clmhdr where hdr_user_id = 'user_id';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=42.48..42.48 rows=1 width=924)\n -> Subquery Scan clmhdr (cost=0.00..42.41 rows=30 width=924)\n -> Append (cost=0.00..42.41 rows=30 width=924)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_live_hdr_user_id_hdr_clm_status_idx on clmhdr_live \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_2003_hdr_user_id_hdr_clm_status_idx on clmhdr_2003 \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_2004_hdr_user_id_hdr_clm_status_idx on clmhdr_2004 \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_2005_hdr_user_id_hdr_clm_status_idx on clmhdr_2005 \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n -> Subquery Scan \"*SELECT* 5\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_2006_hdr_user_id_hdr_clm_status_idx on clmhdr_2006 \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n -> Subquery Scan \"*SELECT* 6\" (cost=0.00..7.07 rows=5\nwidth=924)\n -> Index Scan using\nclmhdr_2007_hdr_user_id_hdr_clm_status_idx on clmhdr_2007 \n(cost=0.00..7.07 rows=5 width=924)\n Index Cond: (hdr_user_id =\n'user_id'::character varying)\n(21 rows)\n\nhmd=#\n", "msg_date": "Fri, 26 Aug 2005 16:14:18 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "How does the planner execute unions?" }, { "msg_contents": "On Fri, Aug 26, 2005 at 16:14:18 -0400,\n Chris Hoover <[email protected]> wrote:\n> Hopefully a quick question.\n> \n> In 7.3.4, how does the planner execute a query with union alls in it?\n> \n> Does it execute the unions serially, or does it launch a \"thread\" for\n> each union (or maybe something else entirely).\n\nPostgres doesn't have parallel execution of parts of queries. So it is\ngoing to do one part followed by the other part.\n", "msg_date": "Fri, 26 Aug 2005 15:45:54 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the planner execute unions?" } ]
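For context, the plan above implies a view of roughly this shape (the real column lists are abbreviated to * here); as noted in the reply, the branches under the Append node are executed one after another rather than in parallel:

    CREATE VIEW clmhdr AS
        SELECT * FROM clmhdr_live
        UNION ALL
        SELECT * FROM clmhdr_2003
        UNION ALL
        SELECT * FROM clmhdr_2004
        UNION ALL
        SELECT * FROM clmhdr_2005
        UNION ALL
        SELECT * FROM clmhdr_2006
        UNION ALL
        SELECT * FROM clmhdr_2007;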
[ { "msg_contents": "Hello,\n\nWe are using PostgreSQL for our business application. Recently, during\ntesting of our application with large volumes of data, we faced a weird\nproblem. Our query performance dropped *dramatically* after \"VACUUM FULL\nANALYZE\" command. We have encountered a similar problem listed on\nmailing list archives, but the submitter solved his problem by rewriting\nhis query, which is unfortunatelly very hard for us.\n\nI am attaching two EXPLAIN ANALYZE outputs, first one is just before the\nVACUUM FULL ANALYZE command and the other is the one after. Also\nattached is the SQL query, which is simplified to clearify the problem.\nIn the example query time increases from 1.8 second to > 4.0 secons. The\ndifference for the complete query is much bigger, query time increases\nfrom 7.8 seconds to > 110 seconds.\n\nAny help is appreciated, we were unable to identify what causes the\nquery planner to choose a different/poor performing plan.\n\nNotes:\nOur production platform is Ubuntu Linux Hoary on i386, PostgreSQL 8.0.3,\ncompiled from sources. Same tests were carried on Windows XP\nProfessional and PostgreSQL 8.0.1 with similar results. The queries use\nlittle IO, high CPU. The largest table involved in the sample query has\nabout 10000 rows. Indexes are used intensively, some tables use > 4\nindexes.\n\nBest regards,\nUmit Oztosun", "msg_date": "Sat, 27 Aug 2005 01:04:19 +0300", "msg_from": "=?ISO-8859-1?Q?=DCmit_=D6ztosun?= <[email protected]>", "msg_from_op": true, "msg_subject": "Weird performance drop after VACUUM" }, { "msg_contents": "Hi,\nI have the same issue. After doing \"VACCUME ANALYZE\"\nperformance of the query dropped. \n\nHere is the query \nexplain select * from conversion_table c where \nc.conversion_date BETWEEN '2005-06-07' and\n'2005-08-17' \n\t\nBefore \"VACCUME ANALYZE\"\n\n\"Index Scan using conversion_table_pk on\nkeyword_conversion_table c (cost=0.00..18599.25\nrows=4986 width=95)\"\n\" Index Cond: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\n\nAfter \"VACCUME ANALYZE\"\n\n\n\"Seq Scan on conversion_table c (cost=0.00..29990.83\nrows=1094820 width=66)\"\n\" Filter: ((conversion_date >= '2005-06-07'::date)\nAND (conversion_date <= '2005-08-17'::date))\"\n\n\nI dont know why system is doing \"Seq scan\" now.\n\nThanks\n\nasif ali\n\n\n\n\n\n\n\n--- �mit �ztosun <[email protected]> wrote:\n\n> Hello,\n> \n> We are using PostgreSQL for our business\n> application. Recently, during\n> testing of our application with large volumes of\n> data, we faced a weird\n> problem. Our query performance dropped\n> *dramatically* after \"VACUUM FULL\n> ANALYZE\" command. We have encountered a similar\n> problem listed on\n> mailing list archives, but the submitter solved his\n> problem by rewriting\n> his query, which is unfortunatelly very hard for us.\n> \n> I am attaching two EXPLAIN ANALYZE outputs, first\n> one is just before the\n> VACUUM FULL ANALYZE command and the other is the one\n> after. Also\n> attached is the SQL query, which is simplified to\n> clearify the problem.\n> In the example query time increases from 1.8 second\n> to > 4.0 secons. 
The\n> difference for the complete query is much bigger,\n> query time increases\n> from 7.8 seconds to > 110 seconds.\n> \n> Any help is appreciated, we were unable to identify\n> what causes the\n> query planner to choose a different/poor performing\n> plan.\n> \n> Notes:\n> Our production platform is Ubuntu Linux Hoary on\n> i386, PostgreSQL 8.0.3,\n> compiled from sources. Same tests were carried on\n> Windows XP\n> Professional and PostgreSQL 8.0.1 with similar\n> results. The queries use\n> little IO, high CPU. The largest table involved in\n> the sample query has\n> about 10000 rows. Indexes are used intensively, some\n> tables use > 4\n> indexes.\n> \n> Best regards,\n> Umit Oztosun\n> \n> > SELECT * FROM (\n> SELECT \n> COALESCE (\n> (SELECT COALESCE (sum(irskal.anamiktar),\n> 0) \n> * (SELECT \n> birim.fiyat2 * (SELECT kur1 \n> FROM\n> sis_doviz_kuru kur \n> WHERE\n> birim._key_sis_doviz2 = kur._key_sis_doviz \n> ORDER BY tarih\n> desc \n> LIMIT 1)\n> FROM scf_stokkart_birimleri\n> birim\n> WHERE _key_scf_stokkart =\n> stok._key\n> AND anabirim = '1'\n> )\n> FROM scf_irsaliye irs,\n> scf_irsaliye_kalemi irskal\n> WHERE irskal._key_kalemturu =\n> stok._key\n> AND irskal._key_scf_irsaliye =\n> irs._key\n> AND irs.karsifirma = 'KENDI'\n> AND (irs.turu='MAI' OR\n> irs.turu='KGI' OR irs.turu='PS' OR irs.turu='TS' OR\n> irs.turu='KC' OR irs.turu='KCO')\n> AND ( irs._key_sis_depo_dest =\n> '$$$$0000003l$1$$' OR irs._key_sis_depo_dest =\n> '$$$$00000048$1$$' OR irs._key_sis_depo_dest =\n> '$$$$0000004b$1$$' OR irs._key_sis_depo_dest =\n> '$$$$0000004d$1$$' )\n> AND ((irskal._key LIKE '0000%' OR\n> irskal._key LIKE '0101%' OR irskal._key LIKE '$$%'))\n> AND irs.tarih <= '2005-08-26'\n> ), 0\n> ) as arti_fiili_irs_karsifirma,\n> stok.*\n> FROM scf_stokkart stok\n> ) AS _SWT WHERE (_key LIKE '00%' OR _key LIKE '01%'\n> OR _key LIKE '$$%') ORDER BY _key desc\n> > Before VACUUM FULL ANALYZE - Short Query\n> ---------------------------------------\n> Sort (cost=9094.31..9094.40 rows=37 width=817)\n> (actual time=1852.799..1877.738 rows=10000 loops=1)\n> Sort Key: stok._key\n> -> Seq Scan on scf_stokkart stok \n> (cost=0.00..9093.34 rows=37 width=817) (actual\n> time=8.670..1575.586 rows=10000 loops=1)\n> Filter: (((_key)::text ~~ '00%'::text) OR\n> ((_key)::text ~~ '01%'::text) OR ((_key)::text ~~\n> '$$%'::text))\n> SubPlan\n> -> Aggregate (cost=237.29..237.29 rows=1\n> width=16) (actual time=0.136..0.138 rows=1\n> loops=10000)\n> InitPlan\n> -> Index Scan using\n> scf_stokkart_birimleri_key_scf_stokkart_idx on\n> scf_stokkart_birimleri birim (cost=0.00..209.59\n> rows=1 width=58) (actual time=0.088..0.093 rows=1\n> loops=10000)\n> Index Cond:\n> ((_key_scf_stokkart)::text = ($1)::text)\n> Filter: (anabirim =\n> '1'::bpchar)\n> SubPlan\n> -> Limit \n> (cost=9.31..9.31 rows=1 width=17) (actual\n> time=0.046..0.048 rows=1 loops=10000)\n> -> Sort \n> (cost=9.31..9.31 rows=2 width=17) (actual\n> time=0.041..0.041 rows=1 loops=10000)\n> Sort Key:\n> tarih\n> -> Index Scan\n> using sis_doviz_kuru_key_sis_doviz_idx on\n> sis_doviz_kuru kur (cost=0.00..9.30 rows=2\n> width=17) (actual time=0.018..0.029 rows=2\n> loops=10000)\n> Index\n> Cond: (($0)::text = (_key_sis_doviz)::text)\n> -> Nested Loop (cost=0.00..27.69\n> rows=1 width=16) (actual time=0.033..0.033 rows=0\n> loops=10000)\n> -> Index Scan using\n> scf_irsaliye_kalemi_key_kalemturu_idx on\n> scf_irsaliye_kalemi irskal (cost=0.00..21.75 rows=1\n> width=58) (actual time=0.017..0.020 rows=0\n> loops=10000)\n> Index Cond:\n> 
((_key_kalemturu)::text = ($1)::text)\n> Filter: (((_key)::text\n> ~~ '0000%'::text) OR ((_key)::text ~~ '0101%'::text)\n> OR ((_key)::text ~~ '$$%'::text))\n> -> Index Scan using\n> scf_irsaliye_pkey on scf_irsaliye irs \n> (cost=0.00..5.94 rows=1 width=42) (actual\n> time=0.021..0.021 rows=0 loops=3000)\n> Index Cond:\n> ((\"outer\"._key_scf_irsaliye)::text =\n> (irs._key)::text)\n> Filter:\n> (((karsifirma)::text = 'KENDI'::text) AND\n> (((turu)::text = 'MAI'::text) OR ((turu)::text =\n> 'KGI'::text) OR ((turu)::text = 'PS'::text) OR\n> ((turu)::text = 'TS'::text) OR ((turu)::text =\n> 'KC'::text) OR ((turu)::text = 'KCO'::text)) AND\n> (((_key_sis_depo_dest)::text =\n> '$$$$0000003l$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$00000048$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$0000004b$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$0000004d$1$$'::text)) AND (tarih <=\n> '2005-08-26'::date))\n> Total runtime: 1899.533 ms\n> > After VACUUM FULL ANALYZE - Short Query\n> ---------------------------------------\n> Index Scan Backward using scf_stokkart_pkey on\n> scf_stokkart stok (cost=0.00..392045.63 rows=9998\n> width=166) (actual time=0.661..4431.568 rows=10000\n> loops=1)\n> Filter: (((_key)::text ~~ '00%'::text) OR\n> ((_key)::text ~~ '01%'::text) OR ((_key)::text ~~\n> '$$%'::text))\n> SubPlan\n> -> Aggregate (cost=39.16..39.16 rows=1\n> width=10) (actual time=0.416..0.418 rows=1\n> loops=10000)\n> InitPlan\n> -> Index Scan using\n> scf_stokkart_birimleri_key_scf_stokkart_idx on\n> scf_stokkart_birimleri birim (cost=0.00..5.25\n> rows=2 width=28) (actual time=0.101..0.105 rows=1\n> loops=10000)\n> Index Cond:\n> ((_key_scf_stokkart)::text = ($1)::text)\n> Filter: (anabirim = '1'::bpchar)\n> SubPlan\n> -> Limit (cost=1.08..1.09\n> rows=1 width=15) (actual time=0.048..0.050 rows=1\n> loops=10000)\n> -> Sort (cost=1.08..1.09\n> rows=2 width=15) (actual time=0.043..0.043 rows=1\n> loops=10000)\n> Sort Key: tarih\n> -> Seq Scan on\n> sis_doviz_kuru kur (cost=0.00..1.07 rows=2\n> width=15) (actual time=0.009..0.026 rows=2\n> loops=10000)\n> Filter:\n> (($0)::text = (_key_sis_doviz)::text)\n> -> Nested Loop (cost=0.00..33.90 rows=1\n> width=10) (actual time=0.295..0.295 rows=0\n> loops=10000)\n> -> Seq Scan on scf_irsaliye irs \n> (cost=0.00..30.00 rows=1 width=20) (actual\n> time=0.290..0.290 rows=0 loops=10000)\n> Filter: (((karsifirma)::text =\n> 'KENDI'::text) AND (((turu)::text = 'MAI'::text) OR\n> ((turu)::text = 'KGI'::text) OR ((turu)::text =\n> 'PS'::text) OR ((turu)::text = 'TS'::text) OR\n> ((turu)::text = 'KC'::text) OR ((turu)::text =\n> 'KCO'::text)) AND (((_key_sis_depo_dest)::text =\n> '$$$$0000003l$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$00000048$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$0000004b$1$$'::text) OR\n> ((_key_sis_depo_dest)::text =\n> '$$$$0000004d$1$$'::text)) AND (tarih <=\n> '2005-08-26'::date))\n> -> Index Scan using\n> scf_irsaliye_kalemi_key_scf_irsaliye_idx on\n> scf_irsaliye_kalemi irskal (cost=0.00..3.89 rows=1\n> width=30) (never executed)\n> Index Cond:\n> ((irskal._key_scf_irsaliye)::text =\n> (\"outer\"._key)::text)\n> Filter:\n> (((_key_kalemturu)::text = ($1)::text) AND\n> (((_key)::text ~~ '0000%'::text) OR ((_key)::text ~~\n> '0101%'::text) OR ((_key)::text ~~ '$$%'::text)))\n> Total runtime: 4456.895 ms\n> > \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please\n> send an appropriate\n> subscribe-nomail 
command to\n> [email protected] so that your\n> message can get through to the mailing list\n> cleanly\n> \n\n\n\n\t\t\n__________________________________ \nYahoo! Mail for Mobile \nTake Yahoo! Mail with you! Check email on your mobile phone. \nhttp://mobile.yahoo.com/learn/mail \n", "msg_date": "Fri, 26 Aug 2005 15:52:24 -0700 (PDT)", "msg_from": "asif ali <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "> Hi,\n> I have the same issue. After doing \"VACCUME ANALYZE\"\n> performance of the query dropped.\n>\n> Here is the query\n> explain select * from conversion_table c where\n> c.conversion_date BETWEEN '2005-06-07' and\n> '2005-08-17'\n>\n> Before \"VACCUME ANALYZE\"\n>\n> \"Index Scan using conversion_table_pk on\n> keyword_conversion_table c (cost=0.00..18599.25\n> rows=4986 width=95)\"\n> \" Index Cond: ((conversion_date >=\n> '2005-06-07'::date) AND (conversion_date <=\n> '2005-08-17'::date))\"\n>\n>\n> After \"VACCUME ANALYZE\"\n>\n>\n> \"Seq Scan on conversion_table c (cost=0.00..29990.83\n> rows=1094820 width=66)\"\n> \" Filter: ((conversion_date >= '2005-06-07'::date)\n> AND (conversion_date <= '2005-08-17'::date))\"\n>\n>\n> I dont know why system is doing \"Seq scan\" now.\n\nI could be wrong as I'm definitely no expert on reading the output of \nEXPLAIN, but it seems to say that prior to VACUUM it was expecting to \nretrieve 4986 rows and afterwards expecting to retrieve 1094820 rows.\n\nWhich is a pretty big difference.\n\nSo maybe the statistics were just really really off prior to vacuuming and \nonce it did vacuum it realized there would be a lot more matches and since \nthere were a lot more matches the planner decided to do a seq scan since \nit would be quicker overall...\n\nMaybe? Seems I've heard Tom Lane say something to that affect, although \nmuch more eloquently :-)\n\n-philip\n", "msg_date": "Fri, 26 Aug 2005 16:13:56 -0700 (PDT)", "msg_from": "Philip Hallstrom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "On Fri, Aug 26, 2005 at 03:52:24PM -0700, asif ali wrote:\n> I have the same issue. After doing \"VACCUME ANALYZE\"\n> performance of the query dropped. \n\nYour EXPLAIN output doesn't show the actual query times -- could\nyou post the EXPLAIN ANALYZE output? That'll also show how accurate\nthe planner's row count estimates are.\n\n> Before \"VACCUME ANALYZE\"\n> \n> \"Index Scan using conversion_table_pk on\n> keyword_conversion_table c (cost=0.00..18599.25\n> rows=4986 width=95)\"\n> \" Index Cond: ((conversion_date >=\n> '2005-06-07'::date) AND (conversion_date <=\n> '2005-08-17'::date))\"\n> \n> After \"VACCUME ANALYZE\"\n> \n> \"Seq Scan on conversion_table c (cost=0.00..29990.83\n> rows=1094820 width=66)\"\n> \" Filter: ((conversion_date >= '2005-06-07'::date)\n> AND (conversion_date <= '2005-08-17'::date))\"\n> \n> I dont know why system is doing \"Seq scan\" now.\n\nNotice the row count estimates: 4986 in the \"before\" query and\n1094820 in the \"after\" query. In the latter, the planner thinks\nit has to fetch so much of the table that a sequential scan would\nbe faster than an index scan. You can see whether that guess is\ncorrect by disabling enable_seqscan to force an index scan. 
It\nmight be useful to see the output of the following:\n\nSET enable_seqscan TO on;\nSET enable_indexscan TO off;\nEXPLAIN ANALYZE SELECT ...;\n\nSET enable_seqscan TO off;\nSET enable_indexscan TO on;\nEXPLAIN ANALYZE SELECT ...;\n\nYou might also experiment with planner variables like effective_cache_size\nand random_page_cost to see how changing them affects the query\nplan. However, be careful of tuning the system based on one query:\nmake sure adjustments result in reasonable plans for many different\nqueries.\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 26 Aug 2005 17:26:41 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "=?ISO-8859-1?Q?=DCmit_=D6ztosun?= <[email protected]> writes:\n> We are using PostgreSQL for our business application. Recently, during\n> testing of our application with large volumes of data, we faced a weird\n> problem. Our query performance dropped *dramatically* after \"VACUUM FULL\n> ANALYZE\" command.\n\nI think the problem is that the planner is underestimating the cost of\nevaluating this complicated filter condition:\n\n> -> Seq Scan on scf_irsaliye irs (cost=0.00..30.00 rows=1 width=20) (actual time=0.290..0.290 rows=0 loops=10000)\n> Filter: (((karsifirma)::text = 'KENDI'::text) AND (((turu)::text = 'MAI'::text) OR ((turu)::text = 'KGI'::text) OR ((turu)::text = 'PS'::text) OR ((turu)::text = 'TS'::text) OR ((turu)::text = 'KC'::text) OR ((turu)::text = 'KCO'::text)) AND (((_key_sis_depo_dest)::text = '$$$$0000003l$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$00000048$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$0000004b$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$0000004d$1$$'::text)) AND (tarih <= '2005-08-26'::date))\n\nWhile you could attack that by raising the cpu_operator_cost parameter,\nit would also be worth inquiring *why* the condition is so expensive to\nevaluate. I am suspicious that you are running the database in a locale\nin which strcoll() is really slow. Can you run it in C locale instead,\nor do you really need locale-aware behavior? Can you switch to a\ndifferent database encoding? (A single-byte encoding such as Latin1\nmight be faster than UTF8, for example.)\n\nAnother possibility is to take a hard look at whether you can't simplify\nthe filter condition, but that'd require more knowledge of your\napplication than I have.\n\nOr you could just play with the order of the filter conditions ... 
for\nexample, the date condition at the end is probably far cheaper to test\nthan the text comparisons, so if that's fairly selective it'd be worth\nputting it first.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Aug 2005 19:31:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM " }, { "msg_contents": "Thanks Michael For your reply.\n\nHere is performance on the database on which i did \nVACUUM ANALYZE\n\nexplain analyze\nselect keyword_id\n\t,sum(daily_impressions) as daily_impressions \n\t,sum(daily_actions)as daily_actions \n from conversion_table c where c.conversion_date\nBETWEEN '2005-06-07' and '2005-08-17' \n\tgroup by keyword_Id \n\n\"GroupAggregate (cost=195623.66..206672.52 rows=20132\nwidth=16) (actual time=8205.283..10139.369 rows=55291\nloops=1)\"\n\" -> Sort (cost=195623.66..198360.71 rows=1094820\nwidth=16) (actual time=8205.114..9029.501 rows=863883\nloops=1)\"\n\" Sort Key: keyword_id\"\n\" -> Seq Scan on keyword_conversion_table c \n(cost=0.00..29990.83 rows=1094820 width=16) (actual\ntime=0.057..1422.319 rows=863883 loops=1)\"\n\" Filter: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 14683.617 ms\"\n\n\nNow see if am changing the query and commenting one\ncolumn.\n\nexplain analyze\nselect keyword_id\n\t,sum(daily_impressions) as daily_impressions \n--\t,sum(daily_actions)as daily_actions \n from conversion_table c where c.conversion_date\nBETWEEN '2005-06-07' and '2005-08-17' \n\tgroup by keyword_Id \n\n\n\"HashAggregate (cost=27373.51..27373.52 rows=2\nwidth=16) (actual time=3030.386..3127.073 rows=55717\nloops=1)\"\n\" -> Seq Scan on conversion_table c \n(cost=0.00..27336.12 rows=4986 width=16) (actual\ntime=0.050..1357.164 rows=885493 loops=1)\"\n\" Filter: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 3159.162 ms\"\n\n\nI noticed \"GroupAggregate\" changes to \"HashAggregate\"\nand performance from 14 sec to 3 sec.\n\n\nOn the other hand I have another database which I did\nnot do \"VACUUM ANALYZE\" working fine.\n\n\nexplain analyze\nselect keyword_id\n\t,sum(daily_impressions) as daily_impressions \n\t,sum(daily_actions)as daily_actions \n from conversion_table c where c.conversion_date\nBETWEEN '2005-06-07' and '2005-08-17' \n\tgroup by keyword_Id \n\n\n\"HashAggregate (cost=27373.51..27373.52 rows=2\nwidth=16) (actual time=3024.289..3120.324 rows=55717\nloops=1)\"\n\" -> Seq Scan on conversion_table c \n(cost=0.00..27336.12 rows=4986 width=16) (actual\ntime=0.047..1352.212 rows=885493 loops=1)\"\n\" Filter: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 3152.437 ms\"\n\n\nI am new to postgres. Thanks in advance.\n\n\nasif ali\n\n\n\n\n\n\n--- Michael Fuhr <[email protected]> wrote:\n\n> On Fri, Aug 26, 2005 at 03:52:24PM -0700, asif ali\n> wrote:\n> > I have the same issue. After doing \"VACCUME\n> ANALYZE\"\n> > performance of the query dropped. \n> \n> Your EXPLAIN output doesn't show the actual query\n> times -- could\n> you post the EXPLAIN ANALYZE output? 
That'll also\n> show how accurate\n> the planner's row count estimates are.\n> \n> > Before \"VACCUME ANALYZE\"\n> > \n> > \"Index Scan using conversion_table_pk on\n> > keyword_conversion_table c (cost=0.00..18599.25\n> > rows=4986 width=95)\"\n> > \" Index Cond: ((conversion_date >=\n> > '2005-06-07'::date) AND (conversion_date <=\n> > '2005-08-17'::date))\"\n> > \n> > After \"VACCUME ANALYZE\"\n> > \n> > \"Seq Scan on conversion_table c \n> (cost=0.00..29990.83\n> > rows=1094820 width=66)\"\n> > \" Filter: ((conversion_date >=\n> '2005-06-07'::date)\n> > AND (conversion_date <= '2005-08-17'::date))\"\n> > \n> > I dont know why system is doing \"Seq scan\" now.\n> \n> Notice the row count estimates: 4986 in the \"before\"\n> query and\n> 1094820 in the \"after\" query. In the latter, the\n> planner thinks\n> it has to fetch so much of the table that a\n> sequential scan would\n> be faster than an index scan. You can see whether\n> that guess is\n> correct by disabling enable_seqscan to force an\n> index scan. It\n> might be useful to see the output of the following:\n> \n> SET enable_seqscan TO on;\n> SET enable_indexscan TO off;\n> EXPLAIN ANALYZE SELECT ...;\n> \n> SET enable_seqscan TO off;\n> SET enable_indexscan TO on;\n> EXPLAIN ANALYZE SELECT ...;\n> \n> You might also experiment with planner variables\n> like effective_cache_size\n> and random_page_cost to see how changing them\n> affects the query\n> plan. However, be careful of tuning the system\n> based on one query:\n> make sure adjustments result in reasonable plans for\n> many different\n> queries.\n> \n> -- \n> Michael Fuhr\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n\n\t\t\n____________________________________________________\nStart your day with Yahoo! - make it your home page \nhttp://www.yahoo.com/r/hs \n \n", "msg_date": "Fri, 26 Aug 2005 17:10:49 -0700 (PDT)", "msg_from": "asif ali <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "On Fri, Aug 26, 2005 at 05:10:49PM -0700, asif ali wrote:\n> \"GroupAggregate (cost=195623.66..206672.52 rows=20132\n> width=16) (actual time=8205.283..10139.369 rows=55291\n> loops=1)\"\n> \" -> Sort (cost=195623.66..198360.71 rows=1094820\n> width=16) (actual time=8205.114..9029.501 rows=863883\n> loops=1)\"\n> \" Sort Key: keyword_id\"\n> \" -> Seq Scan on keyword_conversion_table c \n> (cost=0.00..29990.83 rows=1094820 width=16) (actual\n> time=0.057..1422.319 rows=863883 loops=1)\"\n> \" Filter: ((conversion_date >=\n> '2005-06-07'::date) AND (conversion_date <=\n> '2005-08-17'::date))\"\n> \"Total runtime: 14683.617 ms\"\n\nWhat are your effective_cache_size and work_mem (8.x) or sort_mem (7.x)\nsettings? How much RAM does the machine have? If you have enough\nmemory then raising those variables should result in better plans;\nyou might also want to experiment with random_page_cost. Be careful\nnot to set work_mem/sort_mem too high, though. 
See \"Run-time\nConfiguration\" in the \"Server Run-time Environment\" chapter of the\ndocumentation for more information about these variables.\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 26 Aug 2005 19:41:26 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "On Cum, 2005-08-26 at 19:31 -0400, Tom Lane wrote:\n> I think the problem is that the planner is underestimating the cost of\n> evaluating this complicated filter condition:\n> \n> > -> Seq Scan on scf_irsaliye irs (cost=0.00..30.00 rows=1 width=20) (actual time=0.290..0.290 rows=0 loops=10000)\n> > Filter: (((karsifirma)::text = 'KENDI'::text) AND (((turu)::text = 'MAI'::text) OR ((turu)::text = 'KGI'::text) OR ((turu)::text = 'PS'::text) OR ((turu)::text = 'TS'::text) OR ((turu)::text = 'KC'::text) OR ((turu)::text = 'KCO'::text)) AND (((_key_sis_depo_dest)::text = '$$$$0000003l$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$00000048$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$0000004b$1$$'::text) OR ((_key_sis_depo_dest)::text = '$$$$0000004d$1$$'::text)) AND (tarih <= '2005-08-26'::date))\n> \n> While you could attack that by raising the cpu_operator_cost parameter,\n> it would also be worth inquiring *why* the condition is so expensive to\n> evaluate. I am suspicious that you are running the database in a locale\n> in which strcoll() is really slow. Can you run it in C locale instead,\n> or do you really need locale-aware behavior? Can you switch to a\n> different database encoding? (A single-byte encoding such as Latin1\n> might be faster than UTF8, for example.)\n\nYes, you are perfectly right. We are using UTF8 and tr_TR.UTF8 locale.\nHowever, I tried the same tests with latin1 and C locale, it is surely\nfaster, but not dramatically. i.e.:\n\n Before Vacuum After Vacuum\nUTF8 and tr_TR.UTF8: ~8 s ~110 s\nlatin1 and C: ~7 s ~65 s\n\nI also played with cpu_operator_cost parameter and it dramatically\nreduced query times, but not to the level before vacuum:\n\n Before Vacuum After Vacuum\nUTF8 and tr_TR.UTF8: ~8 s ~11 s\nlatin1 and C: ~7 s ~9 s\n\nThese values are much better but I really wonder if I can reach the\nperformance levels before vacuum. I am also worried about the\nside-effects that may be caused by the non-default cpu_operator_cost\nparameter.\n\n> Another possibility is to take a hard look at whether you can't simplify\n> the filter condition, but that'd require more knowledge of your\n> application than I have.\n\nYes that is another option, we are even considering schema changes to\nuse less character types, but these are really costly and error-prone\noperations at the moment.\n\n> Or you could just play with the order of the filter conditions ... for\n> example, the date condition at the end is probably far cheaper to test\n> than the text comparisons, so if that's fairly selective it'd be worth\n> putting it first.\n\nWe are experimenting on this.\n\nThanks your help!\n\nBest Regards,\nUmit Oztosun\n\n", "msg_date": "Sat, 27 Aug 2005 12:31:13 +0300", "msg_from": "Umit Oztosun <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "On Fri, Aug 26, 2005 at 07:31:51PM -0400, Tom Lane wrote:\n> Or you could just play with the order of the filter conditions ... 
for\n> example, the date condition at the end is probably far cheaper to test\n> than the text comparisons, so if that's fairly selective it'd be worth\n> putting it first.\n\nThat's an interesting approach -- could the planner do such things itself?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 27 Aug 2005 12:19:45 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Fri, Aug 26, 2005 at 07:31:51PM -0400, Tom Lane wrote:\n>> Or you could just play with the order of the filter conditions ... for\n>> example, the date condition at the end is probably far cheaper to test\n>> than the text comparisons, so if that's fairly selective it'd be worth\n>> putting it first.\n\n> That's an interesting approach -- could the planner do such things itself?\n\nIt could, but it doesn't really have enough information. We don't\ncurrently have any model that some operators are more expensive than\nothers. IIRC the only sort of reordering the current code will do\nin a filter condition list is to push clauses involving sub-SELECTs\nto the end.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Aug 2005 11:05:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM " }, { "msg_contents": "On Sat, Aug 27, 2005 at 11:05:01AM -0400, Tom Lane wrote:\n> It could, but it doesn't really have enough information. We don't\n> currently have any model that some operators are more expensive than\n> others. IIRC the only sort of reordering the current code will do\n> in a filter condition list is to push clauses involving sub-SELECTs\n> to the end.\n\nI was more thinking along the lines of reordering \"a AND/OR b\" to \"b AND/OR\na\" if b has lower selectivity than a.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 27 Aug 2005 17:26:03 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Sat, Aug 27, 2005 at 11:05:01AM -0400, Tom Lane wrote:\n>> It could, but it doesn't really have enough information. We don't\n>> currently have any model that some operators are more expensive than\n>> others. IIRC the only sort of reordering the current code will do\n>> in a filter condition list is to push clauses involving sub-SELECTs\n>> to the end.\n\n> I was more thinking along the lines of reordering \"a AND/OR b\" to \"b AND/OR\n> a\" if b has lower selectivity than a.\n\nYeah, but if b is considerably more expensive to evaluate than a, that\ncould still be a net loss. 
To do it correctly you really need to trade\noff cost of evaluation against selectivity, and the planner currently\nonly knows something about the latter (and all too often, not enough :-().\n\nI'd like to do this someday, but until we get some cost info in there\nI think it'd be a mistake to do much re-ordering of conditions.\nCurrently the SQL programmer can determine what happens by writing his\nquery carefully --- if we reorder based on selectivity only, we could\nmake things worse, and there'd be no way to override it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Aug 2005 11:40:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM " }, { "msg_contents": "Michael\nThe database is on the same system.\nWhat I am doing is only \"VACUUM analyze \nconversion_table\"\n\nI did the the same thing on a newly created database.\nAnd got the same result. So after \"VACUUM analyze\"\nperformance dropped.\nPlease see this. Runtime changes from \"7755.115\" to\n\"14859.291\" ms\n\n\nexplain analyze\nselect keyword_id,sum(daily_impressions) as\ndaily_impressions ,\n\t sum(daily_clicks) as daily_clicks, \nCOALESCE(sum(daily_cpc::double precision),0) as\ndaily_cpc, sum(daily_revenues)as daily_revenues,\nsum(daily_actions)as daily_actions \n\t ,count(daily_cpc) as count from conversion_table c\nwhere c.conversion_date BETWEEN '2005-06-07' and\n'2005-08-17' \n\tgroup by keyword_Id \n\n\"HashAggregate (cost=18686.51..18686.54 rows=2\nwidth=52) (actual time=7585.827..7720.370 rows=55717\nloops=1)\"\n\" -> Index Scan using conversion_table_pk on\nconversion_table c (cost=0.00..18599.25 rows=4986\nwidth=52) (actual time=0.129..2882.066 rows=885493\nloops=1)\"\n\" Index Cond: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 7755.115 ms\"\n\n\nVACUUM analyze conversion_table\n\n\nexplain analyze\n\nselect keyword_id,sum(daily_impressions) as\ndaily_impressions ,\n\t sum(daily_clicks) as daily_clicks, \nCOALESCE(sum(daily_cpc::double precision),0) as\ndaily_cpc, sum(daily_revenues)as daily_revenues,\nsum(daily_actions)as daily_actions \n\t ,count(daily_cpc) as count from conversion_table c\nwhere c.conversion_date BETWEEN '2005-06-07' and\n'2005-08-17' \n\tgroup by keyword_Id \n\n\n\"GroupAggregate (cost=182521.76..200287.99 rows=20093\nwidth=37) (actual time=8475.580..12618.793 rows=55717\nloops=1)\"\n\" -> Sort (cost=182521.76..184698.58 rows=870730\nwidth=37) (actual time=8475.246..9418.068 rows=885493\nloops=1)\"\n\" Sort Key: keyword_id\"\n\" -> Seq Scan on conversion_table c \n(cost=0.00..27336.12 rows=870730 width=37) (actual\ntime=0.007..1520.788 rows=885493 loops=1)\"\n\" Filter: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 14859.291 ms\"\n\n\n\n\n\n\n \n\n\n--- Michael Fuhr <[email protected]> wrote:\n\n> On Fri, Aug 26, 2005 at 05:10:49PM -0700, asif ali\n> wrote:\n> > \"GroupAggregate (cost=195623.66..206672.52\n> rows=20132\n> > width=16) (actual time=8205.283..10139.369\n> rows=55291\n> > loops=1)\"\n> > \" -> Sort (cost=195623.66..198360.71\n> rows=1094820\n> > width=16) (actual time=8205.114..9029.501\n> rows=863883\n> > loops=1)\"\n> > \" Sort Key: keyword_id\"\n> > \" -> Seq Scan on keyword_conversion_table\n> c \n> > (cost=0.00..29990.83 rows=1094820 width=16)\n> (actual\n> > time=0.057..1422.319 rows=863883 loops=1)\"\n> > \" Filter: ((conversion_date >=\n> > '2005-06-07'::date) AND 
(conversion_date <=\n> > '2005-08-17'::date))\"\n> > \"Total runtime: 14683.617 ms\"\n> \n> What are your effective_cache_size and work_mem\n> (8.x) or sort_mem (7.x)\n> settings? How much RAM does the machine have? If\n> you have enough\n> memory then raising those variables should result in\n> better plans;\n> you might also want to experiment with\n> random_page_cost. Be careful\n> not to set work_mem/sort_mem too high, though. See\n> \"Run-time\n> Configuration\" in the \"Server Run-time Environment\"\n> chapter of the\n> documentation for more information about these\n> variables.\n> \n> -- \n> Michael Fuhr\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n\n\t\t\n____________________________________________________\nStart your day with Yahoo! - make it your home page \nhttp://www.yahoo.com/r/hs \n \n", "msg_date": "Mon, 29 Aug 2005 11:07:17 -0700 (PDT)", "msg_from": "asif ali <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "On Mon, Aug 29, 2005 at 11:07:17AM -0700, asif ali wrote:\n> The database is on the same system.\n> What I am doing is only \"VACUUM analyze \n> conversion_table\"\n> \n> I did the the same thing on a newly created database.\n> And got the same result. So after \"VACUUM analyze\"\n> performance dropped.\n> Please see this. Runtime changes from \"7755.115\" to\n> \"14859.291\" ms\n\nAs has been pointed out a couple of times, you're getting a different\nplan after VACUUM ANALYZE because the row count estimates are more\naccurate. Unfortunately the more accurate estimates result in a\nquery plan that's slower than the plan for the less accurate\nestimates. PostgreSQL *thinks* the plan will be faster but your\nresults show that it isn't, so you might need to adjust some of the\nplanner's cost constants.\n\nA asked some questions that you didn't answer, so I'll ask them again:\n\nWhat's your effective_cache_size setting?\nWhat's your work_mem (8.x) or sort_mem (7.x) setting?\nWhat's your random_page_cost setting?\nHow much available RAM does the machine have?\nWhat version of PostgreSQL are you running?\n\nVarious tuning guides give advice on how to set the above and other\nconfiguration variables. Here's one such guide:\n\nhttp://www.powerpostgresql.com/PerfList/\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 29 Aug 2005 14:28:56 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" }, { "msg_contents": "Michael,\nThe \neffective_cache_size, random_page_cost, work_mem\nwere set to default. 
(commented).\nI have changed the setting of these and now the\nperformance is better see below.\n\n\"HashAggregate (cost=42573.89..42925.52 rows=20093\nwidth=37) (actual time=5273.984..5430.733 rows=55717\nloops=1)\"\n\" -> Seq Scan on keyword_conversion_table c \n(cost=0.00..27336.12 rows=870730 width=37) (actual\ntime=0.052..1405.576 rows=885493 loops=1)\"\n\" Filter: ((conversion_date >=\n'2005-06-07'::date) AND (conversion_date <=\n'2005-08-17'::date))\"\n\"Total runtime: 5463.764 ms\"\n\n\n\nThanks a lot\n\n\n\n--- Michael Fuhr <[email protected]> wrote:\n\n> On Mon, Aug 29, 2005 at 11:07:17AM -0700, asif ali\n> wrote:\n> > The database is on the same system.\n> > What I am doing is only \"VACUUM analyze \n> > conversion_table\"\n> > \n> > I did the the same thing on a newly created\n> database.\n> > And got the same result. So after \"VACUUM analyze\"\n> > performance dropped.\n> > Please see this. Runtime changes from \"7755.115\"\n> to\n> > \"14859.291\" ms\n> \n> As has been pointed out a couple of times, you're\n> getting a different\n> plan after VACUUM ANALYZE because the row count\n> estimates are more\n> accurate. Unfortunately the more accurate estimates\n> result in a\n> query plan that's slower than the plan for the less\n> accurate\n> estimates. PostgreSQL *thinks* the plan will be\n> faster but your\n> results show that it isn't, so you might need to\n> adjust some of the\n> planner's cost constants.\n> \n> A asked some questions that you didn't answer, so\n> I'll ask them again:\n> \n> What's your effective_cache_size setting?\n> What's your work_mem (8.x) or sort_mem (7.x)\n> setting?\n> What's your random_page_cost setting?\n> How much available RAM does the machine have?\n> What version of PostgreSQL are you running?\n> \n> Various tuning guides give advice on how to set the\n> above and other\n> configuration variables. Here's one such guide:\n> \n> http://www.powerpostgresql.com/PerfList/\n> \n> -- \n> Michael Fuhr\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n\n\t\t\n____________________________________________________\nStart your day with Yahoo! - make it your home page \nhttp://www.yahoo.com/r/hs \n \n", "msg_date": "Mon, 29 Aug 2005 15:59:12 -0700 (PDT)", "msg_from": "asif ali <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop after VACUUM" } ]
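For readers wanting to reproduce the fix described above: the settings can be tried per session before touching postgresql.conf. The values below are illustrative guesses, not the ones actually used in the thread, and the units are the 8.0 ones (disk pages for effective_cache_size, kilobytes for work_mem).

-- Illustrative session-level experiment; pick values to match the machine's RAM.
SET effective_cache_size = 65536;   -- ~512 MB expressed in 8 kB pages
SET work_mem = 32768;               -- 32 MB; use sort_mem on 7.x
SET random_page_cost = 3;           -- default is 4
EXPLAIN ANALYZE
SELECT keyword_id, sum(daily_impressions) AS daily_impressions,
       sum(daily_actions) AS daily_actions
  FROM conversion_table c
 WHERE c.conversion_date BETWEEN '2005-06-07' AND '2005-08-17'
 GROUP BY keyword_id;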
[ { "msg_contents": "Hello Friends,\nWe were having a database in pgsql7.4. The database was responding very\nslowly even after full vacuum (select\ncount(*) from some_table_having_18000_records was taking 18 Sec).\n\nWe took a backup of that db and restored it back. Now the same db on\nsame PC is responding fast (same query is taking 18 ms).\n\nBut we can't do the same as a solution of slow response. Do anybody has\nfaced similar problem? Is this due to any internal problem of pgsql? Is\nthere any clue to fasteen the database?\n\nRegards,\n\nakshay\n\n\n\n", "msg_date": "Sat, 27 Aug 2005 21:28:57 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Observation about db response time" }, { "msg_contents": " On Sat, 27 Aug 2005 21:28:57 +0530 (IST)\n<[email protected]> threw this fish to the penguins:\n\n> Hello Friends,\n> We were having a database in pgsql7.4. The database was responding very\n> slowly even after full vacuum (select\n> count(*) from some_table_having_18000_records was taking 18 Sec).\n\nOne comment here: \"select count(*)\" may seem like a good benchmark, but \nit's not generally. If your application really depends on this number, fine.\nOtherwise, you should measure performance with a real query from your\napplication. The \"select count(*)\" can be very slow because it does\nnot use indexes.\n\n> We took a backup of that db and restored it back. Now the same db on\n> same PC is responding fast (same query is taking 18 ms).\n\nThis sounds like some index is getting gooped up. If you do a lot of\ndeleting from tables, your indexes can collect dead space that vacuum\ncan not reclaim. Try in sql \"reindex table my_slow_table\" for a\nsuspect table. In the contrib directory of the postgresql\ndistribution there is a script called \"reindexdb\". You can run this\nto reindex your whole database.\n\nI also wonder about file system slowdowns. What hardware/OS/filesystem\nare you using?\n\n\n-- George\n\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n", "msg_date": "Tue, 30 Aug 2005 14:36:18 -0400", "msg_from": "george young <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Observation about db response time" } ]
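A rough sketch of George's REINDEX suggestion, using the poster's placeholder table name; the pg_class size check before and after is my own addition, just to confirm whether index bloat was really the culprit.

SELECT c.relname, c.relpages
  FROM pg_class c
 WHERE c.relname LIKE 'some_table_having_18000_records%';  -- note table/index sizes before

REINDEX TABLE some_table_having_18000_records;
VACUUM ANALYZE some_table_having_18000_records;

SELECT c.relname, c.relpages
  FROM pg_class c
 WHERE c.relname LIKE 'some_table_having_18000_records%';  -- compare after (relpages is in 8 kB blocks)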
[ { "msg_contents": "I tried to use intarray on 8.1 . It seems to give the same estimates for\nanything I ask:\n\n \n\nexplain analyze select * from objects_hier where tg && array[10001] \n\nexplain analyze select * from objects_hier where tg && array[0] \n\nexplain analyze select * from objects_hier where tg @ array[10001] \n\nexplain analyze select * from objects_hier where tg ~ array[0] \n\n \n\nSome of queries cover whole table, some cover none, but all give same\nestimated number of rows:\n\n \n\nBitmap Heap Scan on objects_hier (cost=2.10..102.75 rows=30 width=337)\n(actual time=0.028..0.028 rows=0 loops=1)\n\n Recheck Cond: (tg && '{0}'::integer[])\n\n -> Bitmap Index Scan on gistbla2 (cost=0.00..2.10 rows= !! 30 !!\nwidth=0) (actual time=0.024..0.024 rows=0 loops=1)\n\n Index Cond: (tg && '{0}'::integer[])\n\n \n\nSee the number of estimated rows is 30 in all cases.\n\nBut actually it varies from whole table (30000 rows) to 0.\n\n \n\nLooks like GIST indexes for intarray give no statistics at all.\n\n \n\nIt makes them much less useful than they could be.. Because the planner\ncan't plan them well and makes horrible mistakes.\n\n \n\nFor example, it puts nested loops in an order where for each of 30k rows it makes\nan index scan within 5 rows => that leads to 30k nested scans, while it\nshould for each of 5 rows perform a single index scan among those 30k.\n\n \n\nYes, I have all necessary indexes on tables.\n\nAnd yes, I ran VACUUM FULL ANALYZE just before the tests.\n\n \n\nThe lack of estimation is not documented anywhere so I just hope this is a\nbug and can be fixed fast :-)\n", "msg_date": "Mon, 29 Aug 2005 00:41:18 +0400", "msg_from": "\"Ilia Kantor\" <[email protected]>", "msg_from_op": true, "msg_subject": "intarray is broken ? 
(8.1b1)" }, { "msg_contents": "\"Ilia Kantor\" <[email protected]> writes:\n> Looks like GIST indexes for intarray give no statistic at all.\n\nFeel free to contribute some stats routines that aren't stubs ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Aug 2005 17:24:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: intarray is broken ? (8.1b1) " } ]
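To see the flat estimate being discussed, it is enough to put the planner's guess next to the real row count; the table, column and operator below are the ones from Ilia's post.

EXPLAIN ANALYZE SELECT * FROM objects_hier WHERE tg && array[10001];
-- The bitmap index scan reports the same rows= figure whatever the constant is,
-- because the intarray GiST operator classes only have stub selectivity routines,
-- as Tom notes above. Compare against the true count:
SELECT count(*) FROM objects_hier WHERE tg && array[10001];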
[ { "msg_contents": "explain analyze select * from objects_hier where tg && array[0] and id\n<10000;\n\n \n\nBitmap Heap Scan on objects_hier (cost=4.79..8.80 rows=1 width=337) (actual\ntime=0.110..0.110 rows=0 loops=1)\n\n Recheck Cond: ((tg && '{0}'::integer[]) AND (id < 10000))\n\n -> BitmapAnd (cost=4.79..4.79 rows=1 width=0) (actual time=0.106..0.106\nrows=0 loops=1)\n\n -> Bitmap Index Scan on gistbla2 (cost=0.00..2.10 rows=30\nwidth=0) (actual time=0.042..0.042 rows=0 loops=1)\n\n Index Cond: (tg && '{0}'::integer[])\n\n -> Bitmap Index Scan on ohid (cost=0.00..2.44 rows=1240 width=0)\n(actual time=0.058..0.058 rows=1255 loops=1)\n\n Index Cond: (id < 10000)\n\n \n\nI see, Bitmap is going to AND my indexes.. It read one with less number of\nrows estimated the first (right)..\n\nIt found 0 records at gistbla2 index.\n\n \n\nThen why is it reading ohid ? \n\n \n\nMaybe a quickfix is possible for cases when 0 records is found to stop\nreading other AND elements..\n", "msg_date": "Mon, 29 Aug 2005 00:48:40 +0400", "msg_from": "\"Ilia Kantor\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap scan when it is not needed" }, { "msg_contents": "\"Ilia Kantor\" <[email protected]> writes:\n> Maybe a quickfix is possible for cases when 0 records is found to stop\n> reading other AND elements..\n\nNot sure how useful this will be in practice (since the planner tends\nnot to bother ANDing unselective indexes at all), but it's easy enough\nto do ... so I did it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Aug 2005 18:49:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan when it is not needed " } ]
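A rough way to check whether the short-circuit Tom added actually helps here, assuming the same objects_hier table; the comparison against a non-bitmap plan is an assumption of mine, not something from the thread.

EXPLAIN ANALYZE SELECT * FROM objects_hier WHERE tg && array[0] AND id < 10000;
-- With the fix, once the gistbla2 bitmap comes back empty the second child of the
-- BitmapAnd should report essentially no work. For comparison, the same query with
-- bitmap scans disabled:
SET enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT * FROM objects_hier WHERE tg && array[0] AND id < 10000;
SET enable_bitmapscan = on;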
[ { "msg_contents": "I have a query:\n\n \n\nSELECT oh.id\nFROM objects_hier oh\nwhere\noh.id < 2000 (!)\nand\noh.id in (\n SELECT id as id FROM objects_access oa\n WHERE\n oa.master IN (1,2,10001)\n AND\n oa.id < 2000 (!)\n\n)\n\n \n\nThe sense of the query is simple: I choose ids from objects_hier where\naccess has necessary masters.\n\n \n\nThe problem is: I have duplicate conditions for id here. They are marked\nwith '!'.\n\n \n\nI just can't remove any of them, because planner needs to estimate both\nouter and inner selects to calculate the order\n\nOf nested loop or choose a join. If I remove one of duplicate conditions -\nplanner can't estimate well.\n\n \n\nIt's obvious that condition on oh.id can be put inside or outside \"oh.id in\n( .. )\" statement with same result.\n\n \n\nSo I just suggest that the planner should take this into account and\n\"propagate\" the condition outside or inside for planning if needed.\n\n \n\nP.S\n\nIs there a way to fix this particular query? Usually oh.id condition is not\nlike <2000, but an inner join.\n", "msg_date": "Mon, 29 Aug 2005 00:59:33 +0400", "msg_from": "\"Ilia Kantor\" <[email protected]>", "msg_from_op": true, "msg_subject": "Planner improvement suggestion" }, { "msg_contents": "\"Ilia Kantor\" <[email protected]> writes:\n> So I just suggest that the planner should take this into account and\n> \"propagate\" the condition outside or inside for planning if needed.\n\nI believe it does this already for equality conditions, but not for\ninequalities.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Aug 2005 17:30:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner improvement suggestion " } ]
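On the closing question -- how to avoid writing the oh.id condition twice -- one workable rewrite is to turn the IN into an explicit join so the predicate appears only once; this sketch assumes id is unique in objects_access, which the thread does not state.

SELECT oh.id
  FROM objects_hier oh
  JOIN objects_access oa ON oa.id = oh.id
 WHERE oh.id < 2000
   AND oa.master IN (1, 2, 10001);
-- If objects_access can hold several of the listed masters for the same id,
-- add DISTINCT (or keep the IN form) to preserve the original semantics.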
[ { "msg_contents": "We have been trying to pinpoint what originally seem to be a I/O \nbottleneck but which now seems to be an issue with either Postgresql or \nRHES 3.\n\nWe have the following test environment on which we can reproduce the \nproblem:\n\n1) Test System A\nDell 6650 Quad Xeon Pentium 4\n8 Gig of RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS270 connected using an FC card using 10 disks\n\n2) Test System B\nDell Dual Xeon Pentium III\n2 Gig o RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS920 connected using an FC card using 28 disks\n\nOur Database size is around 30G.\n\nThe behavior we see is that when running queries that do random reads \non disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a \nthroughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 kB/s \non sequential read operations on the netapps)\n\nThe stats of the NetApp do confirm that it is sitting idle. Doing an \nstrace on the Postgresql process shows that is it doing seeks and \nreads.\n\nSo my question is where is this iowait time spent ?\nIs there a way to pinpoint the problem in more details ?\nWe are able to reproduce this behavior with Postgresql 7.4.8 and 8.0.3\n\nI have included the output of top,vmstat,strace and systat from the \nNetapp from System B while running a single query that generates this \nbehavior.\n\nRémy\n\ntop output:\n 06:27:28 up 5 days, 16:59, 6 users, load average: 1.04, 1.30, 1.01\n72 processes: 71 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 2.7% 0.0% 1.0% 0.1% 0.2% 46.0% 49.5%\n cpu00 0.2% 0.0% 0.2% 0.0% 0.2% 2.2% 97.2%\n cpu01 5.3% 0.0% 1.9% 0.3% 0.3% 89.8% 1.9%\nMem: 2061696k av, 2043936k used, 17760k free, 0k shrd, \n3916k buff\n 1566332k actv, 296648k in_d, 30504k in_c\nSwap: 16771584k av, 21552k used, 16750032k free \n1933772k cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU \nCOMMAND\n30960 postgres 15 0 13424 10M 9908 D 2.7 0.5 2:00 1 \npostmaster\n30538 root 15 0 1080 764 524 S 0.7 0.0 0:43 0 sshd\n 1 root 15 0 496 456 436 S 0.0 0.0 0:08 0 init\n 2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 \nmigration/0\n 3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 \nmigration/1\n 4 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 \nkeventd\n 5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 \nksoftirqd/0\n 6 root 34 19 0 0 0 SWN 0.0 0.0 0:00 1 \nksoftirqd/1\n 9 root 15 0 0 0 0 SW 0.0 0.0 0:24 1 \nbdflush\n 7 root 15 0 0 0 0 SW 0.0 0.0 6:53 1 kswapd\n 8 root 15 0 0 0 0 SW 0.0 0.0 8:44 1 kscand\n 10 root 15 0 0 0 0 SW 0.0 0.0 0:13 0 \nkupdated\n 11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 \nmdrecoveryd\n 17 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 \nahc_dv_0\n\n\nvmstat output\nprocs memory swap io system \n cpu\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 0 1 21552 17796 4872 1931928 2 3 3 1 27 6 2 \n1 7 3\n 0 1 21552 18044 4880 1931652 0 0 1652 0 397 512 1 \n2 50 47\n 0 1 21552 17976 4896 1931664 0 0 2468 0 407 552 2 \n2 50 47\n 1 0 21552 17984 4896 1931608 0 0 2124 0 418 538 3 \n3 48 46\n 0 1 21552 18028 4900 1931536 0 0 1592 0 385 509 1 \n3 50 46\n 0 1 21552 18040 4916 1931488 0 0 1620 820 419 581 2 \n2 50 46\n 0 1 21552 17968 4916 1931536 0 4 1708 4 402 554 3 \n1 50 46\n 1 1 21552 18052 4916 1931388 0 0 1772 0 409 531 3 \n1 49 47\n 0 1 21552 17912 4924 1931492 0 0 1772 0 408 565 3 \n1 48 48\n 0 1 21552 17932 4932 1931440 0 4 1356 4 391 545 5 \n0 49 46\n 0 1 21552 18320 4944 1931016 0 4 1500 840 414 571 1 \n1 48 50\n 0 1 21552 17872 4944 1931440 0 0 2116 0 392 496 1 \n5 46 48\n 0 1 21552 18060 4944 1931232 0 0 2232 0 423 
597 1 \n2 48 49\n 1 1 21552 17684 4944 1931584 0 0 1752 0 395 537 1 \n1 50 48\n 0 1 21552 18000 4944 1931240 0 0 1576 0 401 549 0 \n1 50 49\n\n\nNetApp stats:\n CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s \nCache Cache CP CP Disk DAFS FCP iSCSI FCP kB/s\n in out read write read write \nage hit time ty util in out\n 2% 0 0 0 139 0 0 2788 0 0 0 \n 3 96% 0% - 15% 0 139 0 3 2277\n 2% 0 0 0 144 0 0 2504 0 0 0 \n 3 96% 0% - 18% 0 144 0 3 2150\n 2% 0 0 0 130 0 0 2212 0 0 0 \n 3 96% 0% - 13% 0 130 0 3 1879\n 3% 0 0 0 169 0 0 2937 80 0 0 \n 3 96% 0% - 13% 0 169 0 4 2718\n 2% 0 0 0 139 0 0 2448 0 0 0 \n 3 96% 0% - 12% 0 139 0 3 2096\n 2% 0 0 0 137 0 0 2116 0 0 0 \n 3 96% 0% - 10% 0 137 0 3 1892\n 3% 0 0 0 107 0 0 2660 812 0 0 \n 3 96% 24% T 20% 0 107 0 3 1739\n 2% 0 0 0 118 0 0 1788 0 0 0 \n 3 96% 0% - 13% 0 118 0 3 1608\n 2% 0 0 0 136 0 0 2228 0 0 0 \n 3 96% 0% - 11% 0 136 0 3 2018\n 2% 0 0 0 119 0 0 1940 0 0 0 \n 3 96% 0% - 13% 0 119 0 3 1998\n 2% 0 0 0 136 0 0 2175 0 0 0 \n 3 96% 0% - 14% 0 136 0 3 1929\n 2% 0 0 0 133 0 0 1924 0 0 0 \n 3 96% 0% - 19% 0 133 0 3 2292\n 2% 0 0 0 115 0 0 2044 0 0 0 \n 3 96% 0% - 11% 0 115 0 3 1682\n 2% 0 0 0 134 0 0 2256 0 0 0 \n 3 96% 0% - 12% 0 134 0 3 2096\n 2% 0 0 0 112 0 0 2184 0 0 0 \n 3 96% 0% - 12% 0 112 0 3 1633\n 2% 0 0 0 163 0 0 2348 0 0 0 \n 3 96% 0% - 13% 0 163 0 4 2421\n 2% 0 0 0 120 0 0 2056 184 0 0 \n 3 96% 8% T 14% 0 120 0 3 1703\n\nstrace output:\nread(55, \"\\4\\0\\0\\0\\10fm}\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\230\\236\\320\\0020\"..., \n8192) = 8192\n_llseek(55, 857997312, [857997312], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\\\\\315\\321|\\1\\0\\0\\0p\\0\\354\\0\\0 \\2 \\250\\236\\260\"..., \n8192) = 8192\n_llseek(55, 883220480, [883220480], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0T\\17a~\\1\\0\\0\\0p\\0\\20\\1\\0 \\2 \\270\\236\\220\\2D\\235\"..., \n8192) = 8192\n_llseek(55, 858005504, [858005504], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\300\\356\\321|\\1\\0\\0\\0p\\0\\330\\0\\0 \\2 \\260\\236\\240\"..., \n8192) = 8192\n_llseek(55, 857964544, [857964544], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0lH\\321|\\1\\0\\0\\0p\\0<\\1\\0 \\2 \\300\\236\\200\\2p\\235\"..., \n8192) = 8192\n_llseek(55, 857956352, [857956352], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0l\\'\\321|\\1\\0\\0\\0p\\0\\320\\0\\0 \\2 \\260\\236\\240\\2\\\\\"..., \n8192) = 8192\n_llseek(55, 910802944, [910802944], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\10}\\25\\200\\1\\0\\0\\0l\\0\\274\\1\\0 \\2 \\250\\236\\260\"..., \n8192) = 8192\n_llseek(55, 857948160, [857948160], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\5\\321|\\1\\0\\0\\0p\\0\\350\\0\\0 \\2 \\230\\236\\320\"..., \n8192) = 8192\n_llseek(56, 80371712, [80371712], SEEK_SET) = 0\nread(56, \"\\4\\0\\0\\0Lf \\217\\1\\0\\0\\0p\\0\\f\\1\\0 \\2 \\250\\236\\260\\2T\\235\"..., \n8192) = 8192\nread(102, \"\\2\\0\\34\\0001\\236\\0\\0\\1\\0\\0\\0\\t\\0\\0\\00020670\\0\\0\\0B\\6\\0\"..., \n8192) = 8192\n_llseek(55, 857939968, [857939968], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\244\\344\\320|\\1\\0\\0\\0l\\0\\230\\1\\0 \\2 \\244\\236\\270\"..., \n8192) = 8192\n_llseek(55, 857923584, [857923584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\224\\242\\320|\\1\\0\\0\\0p\\0|\\0\\0 \\2 \n\\234\\236\\310\\002\"..., 8192) = 8192\n_llseek(55, 57270272, [57270272], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\3204FK\\1\\0\\0\\0t\\0\\340\\0\\0 \\2 \n\\310\\236j\\2\\214\\235\"..., 8192) = 8192\n_llseek(55, 870727680, [870727680], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x>\\233}\\1\\0\\0\\0p\\0@\\1\\0 \\2 \\250\\236\\260\\2X\\235\"..., \n8192) = 8192\n_llseek(55, 
1014734848, [1014734848], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\34\\354\\201\\206\\1\\0\\0\\0p\\0p\\0\\0 \\2 \\264\\236\\230\"..., \n8192) = 8192\n_llseek(55, 857874432, [857874432], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\214\\331\\317|\\1\\0\\0\\0l\\0\\324\\1\\0 \\2 \\224\\236\\330\"..., \n8192) = 8192\n_llseek(55, 760872960, [760872960], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\30\\257\\321v\\1\\0\\0\\0p\\0\\230\\0\\0 \\2 \\234\\236\\310\"..., \n8192) = 8192\n_llseek(55, 891715584, [891715584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\220\\347~\\1\\0\\0\\0p\\0P\\1\\0 \\2 \\230\\236\\320\\2\"..., \n8192) = 8192\n_llseek(55, 857858048, [857858048], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\250\\227\\317|\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\254\\236\\250\"..., \n8192) = 8192\n_llseek(55, 666910720, [666910720], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x\\206\\3q\\1\\0\\0\\0p\\0004\\1\\0 \\2 \n\\254\\236\\242\\2P\\235\"..., 8192) = 8192\n_llseek(55, 857841664, [857841664], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0dT\\317|\\1\\0\\0\\0p\\0\\224\\0\\0 \\2 \\214\\236\\350\\2\\30\"..., \n8192) = 8192", "msg_date": "Mon, 29 Aug 2005 09:42:46 -0400", "msg_from": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]>", "msg_from_op": true, "msg_subject": "High load and iowait but no disk access" }, { "msg_contents": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]> writes:\n> The stats of the NetApp do confirm that it is sitting idle.\n\nReally?\n\n> CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s \n> Cache Cache CP CP Disk DAFS FCP iSCSI FCP kB/s\n> in out read write read write \n> age hit time ty util in out\n> 2% 0 0 0 139 0 0 2788 0 0 0 \n> 3 96% 0% - 15% 0 139 0 3 2277\n> 2% 0 0 0 144 0 0 2504 0 0 0 \n> 3 96% 0% - 18% 0 144 0 3 2150\n> 2% 0 0 0 130 0 0 2212 0 0 0 \n> 3 96% 0% - 13% 0 130 0 3 1879\n> 3% 0 0 0 169 0 0 2937 80 0 0 \n> 3 96% 0% - 13% 0 169 0 4 2718\n> 2% 0 0 0 139 0 0 2448 0 0 0 \n> 3 96% 0% - 12% 0 139 0 3 2096\n\nI know zip about NetApps, but doesn't the 8th column indicate pretty\nsteady disk reads?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Aug 2005 12:15:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access " }, { "msg_contents": "\nOn 30-Aug-05, at 12:15, Tom Lane wrote:\n\n> =?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]> writes:\n>> The stats of the NetApp do confirm that it is sitting idle.\n>\n> Really?\n\n\n>\n>> CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s\n>> Cache Cache CP CP Disk DAFS FCP iSCSI FCP kB/s\n>> in out read write read write\n>> age hit time ty util in out\n>> 2% 0 0 0 139 0 0 2788 0 0 0\n>> 3 96% 0% - 15% 0 139 0 3 2277\n>> 2% 0 0 0 144 0 0 2504 0 0 0\n>> 3 96% 0% - 18% 0 144 0 3 2150\n>> 2% 0 0 0 130 0 0 2212 0 0 0\n>> 3 96% 0% - 13% 0 130 0 3 1879\n>> 3% 0 0 0 169 0 0 2937 80 0 0\n>> 3 96% 0% - 13% 0 169 0 4 2718\n>> 2% 0 0 0 139 0 0 2448 0 0 0\n>> 3 96% 0% - 12% 0 139 0 3 2096\n>\n> I know zip about NetApps, but doesn't the 8th column indicate pretty\n> steady disk reads?\nYes, but they are very low.\nAt 15% usage, it's pretty much sitting idle if you consider that the OS \nreports that one of the processor is spending more then 80% of it's \ntime in IOwait.\n\nRémy\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 30 Aug 2005 12:19:30 -0400", "msg_from": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load and iowait but no disk access " }, { "msg_contents": "On Mon, Aug 29, 2005 at 
09:42:46AM -0400, R�my Beaumont wrote:\n>We have been trying to pinpoint what originally seem to be a I/O \n>bottleneck but which now seems to be an issue with either Postgresql or \n>RHES 3. \n\nNope, it's an IO bottleneck.\n\n>The behavior we see is that when running queries that do random reads \n>on disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a \n>throughput bellow 3000kB/s \n\nThat's the sign of an IO bottleneck.\n\n>The stats of the NetApp do confirm that it is sitting idle. Doing an \n>strace on the Postgresql process shows that is it doing seeks and \n>reads. \n>\n>So my question is where is this iowait time spent ? \n\nWaiting for the seeks. postgres doesn't do async io, so it requests a\nblock, waits for it to come in, then requests another block, etc. The\nutilization on the netapp isn't going to be high because it doesn't have\na queue of requests and can't do readahead because the IO is random. The\nonly way to improve the situation would be to reduce the latency of the\nseeks. If I read the numbers right you're only getting about 130\nseeks/s, which ain't great. I don't know how much latency the netapp\nadds in the this configuration; have you tried benchmarking\ndirect-attach disks?\n\nMike Stone\n", "msg_date": "Tue, 30 Aug 2005 12:25:25 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access" }, { "msg_contents": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]> writes:\n> On 30-Aug-05, at 12:15, Tom Lane wrote:\n>> I know zip about NetApps, but doesn't the 8th column indicate pretty\n>> steady disk reads?\n\n> Yes, but they are very low.\n\nSure, but that's more or less what you'd expect if the thing is randomly\nseeking all over the disk :-(. Just because it's a NetApp doesn't mean\nit's got zero seek time.\n\nYou did not say what sort of query this is, but I gather that it's doing\nan indexscan on a table that is not at all in index order. Possible\nsolutions involve reverting to a seqscan (have you forced the planner to\nchoose an indexscan here, either directly or by lowering random_page_cost?)\nor CLUSTERing the table by the index (which would need to be repeated\nperiodically, so it's not a great answer).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Aug 2005 12:29:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access " }, { "msg_contents": "\nOn 30-Aug-05, at 12:29, Tom Lane wrote:\n\n> =?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]> writes:\n>> On 30-Aug-05, at 12:15, Tom Lane wrote:\n>>> I know zip about NetApps, but doesn't the 8th column indicate pretty\n>>> steady disk reads?\n>\n>> Yes, but they are very low.\n>\n> Sure, but that's more or less what you'd expect if the thing is \n> randomly\n> seeking all over the disk :-(. Just because it's a NetApp doesn't mean\n> it's got zero seek time.\nPer NetApp, the disk utilization percentage they report does include \nseek time, not just read/write operations.\nNetApp has been involved in trying to figure out what is going on and \ntheir claim is that the NetApp filer is not IO bound.\n\n>\n> You did not say what sort of query this is, but I gather that it's \n> doing\n> an indexscan on a table that is not at all in index order.\nYes, most of those queries are doing an indexscan. 
It's a fresh \nrestore of our production database that we have vacuumed/analyzed.\n\n> Possible\n> solutions involve reverting to a seqscan (have you forced the planner \n> to\n> choose an indexscan here, either directly or by lowering \n> random_page_cost?)\nNo.\n> or CLUSTERing the table by the index (which would need to be repeated\n> periodically, so it's not a great answer).\nWill try to cluster the tables and see if it changes anything. Still \ndoesn't explain what is going on with those seeks.\n\nThanks,\n\nRémy\n\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 30 Aug 2005 12:42:13 -0400", "msg_from": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load and iowait but no disk access " }, { "msg_contents": "Have you tried a different kernel? We run with a netapp over NFS without\nany issues, but we have seen high IO-wait on other Dell boxes (running and\nnot running postgres) and RHES 3. We have replaced a Dell PowerEdge 350\nrunning RH 7.3 with a PE750 with more memory running RHES3 and it be bogged\ndown with IO waits due to syslog messages writing to the disk, the old\nslower server could handle it fine. I don't know if it is a Dell thing or a\nRH kernel, but we try different kernels on our boxes to try to find one that\nworks better. We have not found one that stands out over another\nconsistently but we have been moving away from Update 2 kernel\n(2.4.21-15.ELsmp) due to server lockup issues. Unfortunately we get the\nbest disk throughput on our few remaining 7.3 boxes.\n \nWoody\n \nIGLASS Networks\nwww.iglass.net\n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rémy Beaumont\nSent: Monday, August 29, 2005 9:43 AM\nTo: [email protected]\nSubject: [PERFORM] High load and iowait but no disk access\n\n\nWe have been trying to pinpoint what originally seem to be a I/O bottleneck\nbut which now seems to be an issue with either Postgresql or RHES 3.\n\nWe have the following test environment on which we can reproduce the\nproblem:\n\n1) Test System A\nDell 6650 Quad Xeon Pentium 4\n8 Gig of RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS270 connected using an FC card using 10 disks\n\n2) Test System B\nDell Dual Xeon Pentium III\n2 Gig o RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS920 connected using an FC card using 28 disks\n\nOur Database size is around 30G. \n\nThe behavior we see is that when running queries that do random reads on\ndisk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a\nthroughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 kB/s on\nsequential read operations on the netapps)\n\nThe stats of the NetApp do confirm that it is sitting idle. 
Doing an strace\non the Postgresql process shows that is it doing seeks and reads.\n\nSo my question is where is this iowait time spent ?\nIs there a way to pinpoint the problem in more details ?\nWe are able to reproduce this behavior with Postgresql 7.4.8 and 8.0.3\n\nI have included the output of top,vmstat,strace and systat from the Netapp\nfrom System B while running a single query that generates this behavior.\n\nRémy\n\ntop output:\n06:27:28 up 5 days, 16:59, 6 users, load average: 1.04, 1.30, 1.01\n72 processes: 71 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\ntotal 2.7% 0.0% 1.0% 0.1% 0.2% 46.0% 49.5%\ncpu00 0.2% 0.0% 0.2% 0.0% 0.2% 2.2% 97.2%\ncpu01 5.3% 0.0% 1.9% 0.3% 0.3% 89.8% 1.9%\nMem: 2061696k av, 2043936k used, 17760k free, 0k shrd, 3916k buff\n1566332k actv, 296648k in_d, 30504k in_c\nSwap: 16771584k av, 21552k used, 16750032k free 1933772k cached\n\nPID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n30960 postgres 15 0 13424 10M 9908 D 2.7 0.5 2:00 1 postmaster\n30538 root 15 0 1080 764 524 S 0.7 0.0 0:43 0 sshd\n1 root 15 0 496 456 436 S 0.0 0.0 0:08 0 init\n2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0\n3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1\n4 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 keventd\n5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0\n6 root 34 19 0 0 0 SWN 0.0 0.0 0:00 1 ksoftirqd/1\n9 root 15 0 0 0 0 SW 0.0 0.0 0:24 1 bdflush\n7 root 15 0 0 0 0 SW 0.0 0.0 6:53 1 kswapd\n8 root 15 0 0 0 0 SW 0.0 0.0 8:44 1 kscand\n10 root 15 0 0 0 0 SW 0.0 0.0 0:13 0 kupdated\n11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd\n17 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 ahc_dv_0\n\n\nvmstat output \nprocs memory swap io system cpu\nr b swpd free buff cache si so bi bo in cs us sy id wa\n0 1 21552 17796 4872 1931928 2 3 3 1 27 6 2 1 7 3\n0 1 21552 18044 4880 1931652 0 0 1652 0 397 512 1 2 50 47\n0 1 21552 17976 4896 1931664 0 0 2468 0 407 552 2 2 50 47\n1 0 21552 17984 4896 1931608 0 0 2124 0 418 538 3 3 48 46\n0 1 21552 18028 4900 1931536 0 0 1592 0 385 509 1 3 50 46\n0 1 21552 18040 4916 1931488 0 0 1620 820 419 581 2 2 50 46\n0 1 21552 17968 4916 1931536 0 4 1708 4 402 554 3 1 50 46\n1 1 21552 18052 4916 1931388 0 0 1772 0 409 531 3 1 49 47\n0 1 21552 17912 4924 1931492 0 0 1772 0 408 565 3 1 48 48\n0 1 21552 17932 4932 1931440 0 4 1356 4 391 545 5 0 49 46\n0 1 21552 18320 4944 1931016 0 4 1500 840 414 571 1 1 48 50\n0 1 21552 17872 4944 1931440 0 0 2116 0 392 496 1 5 46 48\n0 1 21552 18060 4944 1931232 0 0 2232 0 423 597 1 2 48 49\n1 1 21552 17684 4944 1931584 0 0 1752 0 395 537 1 1 50 48\n0 1 21552 18000 4944 1931240 0 0 1576 0 401 549 0 1 50 49\n\n\nNetApp stats:\nCPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk\nDAFS FCP iSCSI FCP kB/s\nin out read write read write age hit time ty util in out\n2% 0 0 0 139 0 0 2788 0 0 0 3 96% 0% - 15% 0 139 0 3 2277\n2% 0 0 0 144 0 0 2504 0 0 0 3 96% 0% - 18% 0 144 0 3 2150\n2% 0 0 0 130 0 0 2212 0 0 0 3 96% 0% - 13% 0 130 0 3 1879\n3% 0 0 0 169 0 0 2937 80 0 0 3 96% 0% - 13% 0 169 0 4 2718\n2% 0 0 0 139 0 0 2448 0 0 0 3 96% 0% - 12% 0 139 0 3 2096\n2% 0 0 0 137 0 0 2116 0 0 0 3 96% 0% - 10% 0 137 0 3 1892\n3% 0 0 0 107 0 0 2660 812 0 0 3 96% 24% T 20% 0 107 0 3 1739\n2% 0 0 0 118 0 0 1788 0 0 0 3 96% 0% - 13% 0 118 0 3 1608\n2% 0 0 0 136 0 0 2228 0 0 0 3 96% 0% - 11% 0 136 0 3 2018\n2% 0 0 0 119 0 0 1940 0 0 0 3 96% 0% - 13% 0 119 0 3 1998\n2% 0 0 0 136 0 0 2175 0 0 0 3 96% 0% - 14% 0 136 0 3 1929\n2% 0 0 0 133 0 0 1924 0 0 0 3 96% 0% - 19% 0 133 
0 3 2292\n2% 0 0 0 115 0 0 2044 0 0 0 3 96% 0% - 11% 0 115 0 3 1682\n2% 0 0 0 134 0 0 2256 0 0 0 3 96% 0% - 12% 0 134 0 3 2096\n2% 0 0 0 112 0 0 2184 0 0 0 3 96% 0% - 12% 0 112 0 3 1633\n2% 0 0 0 163 0 0 2348 0 0 0 3 96% 0% - 13% 0 163 0 4 2421\n2% 0 0 0 120 0 0 2056 184 0 0 3 96% 8% T 14% 0 120 0 3 1703\n\nstrace output:\nread(55, \"\\4\\0\\0\\0\\10fm}\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\230\\236\\320\\0020\"..., 8192)\n= 8192\n_llseek(55, 857997312, [857997312], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\\\\\315\\321|\\1\\0\\0\\0p\\0\\354\\0\\0 \\2 \\250\\236\\260\"..., 8192)\n= 8192\n_llseek(55, 883220480, [883220480], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0T\\17a~\\1\\0\\0\\0p\\0\\20\\1\\0 \\2 \\270\\236\\220\\2D\\235\"..., 8192)\n= 8192\n_llseek(55, 858005504, [858005504], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\300\\356\\321|\\1\\0\\0\\0p\\0\\330\\0\\0 \\2 \\260\\236\\240\"...,\n8192) = 8192\n_llseek(55, 857964544, [857964544], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0lH\\321|\\1\\0\\0\\0p\\0<\\1\\0 \\2 \\300\\236\\200\\2p\\235\"..., 8192)\n= 8192\n_llseek(55, 857956352, [857956352], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0l\\'\\321|\\1\\0\\0\\0p\\0\\320\\0\\0 \\2 \\260\\236\\240\\2\\\\\"..., 8192)\n= 8192\n_llseek(55, 910802944, [910802944], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\10}\\25\\200\\1\\0\\0\\0l\\0\\274\\1\\0 \\2 \\250\\236\\260\"..., 8192)\n= 8192\n_llseek(55, 857948160, [857948160], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\5\\321|\\1\\0\\0\\0p\\0\\350\\0\\0 \\2 \\230\\236\\320\"..., 8192)\n= 8192\n_llseek(56, 80371712, [80371712], SEEK_SET) = 0\nread(56, \"\\4\\0\\0\\0Lf \\217\\1\\0\\0\\0p\\0\\f\\1\\0 \\2 \\250\\236\\260\\2T\\235\"..., 8192)\n= 8192\nread(102, \"\\2\\0\\34\\0001\\236\\0\\0\\1\\0\\0\\0\\t\\0\\0\\00020670\\0\\0\\0B\\6\\0\"..., 8192)\n= 8192\n_llseek(55, 857939968, [857939968], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\244\\344\\320|\\1\\0\\0\\0l\\0\\230\\1\\0 \\2 \\244\\236\\270\"...,\n8192) = 8192\n_llseek(55, 857923584, [857923584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\224\\242\\320|\\1\\0\\0\\0p\\0|\\0\\0 \\2 \\234\\236\\310\\002\"...,\n8192) = 8192\n_llseek(55, 57270272, [57270272], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\3204FK\\1\\0\\0\\0t\\0\\340\\0\\0 \\2 \\310\\236j\\2\\214\\235\"...,\n8192) = 8192\n_llseek(55, 870727680, [870727680], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x>\\233}\\1\\0\\0\\0p\\0@\\1\\0 \\2 \\250\\236\\260\\2X\\235\"..., 8192)\n= 8192\n_llseek(55, 1014734848, [1014734848], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\34\\354\\201\\206\\1\\0\\0\\0p\\0p\\0\\0 \\2 \\264\\236\\230\"..., 8192)\n= 8192\n_llseek(55, 857874432, [857874432], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\214\\331\\317|\\1\\0\\0\\0l\\0\\324\\1\\0 \\2 \\224\\236\\330\"...,\n8192) = 8192\n_llseek(55, 760872960, [760872960], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\30\\257\\321v\\1\\0\\0\\0p\\0\\230\\0\\0 \\2 \\234\\236\\310\"..., 8192)\n= 8192\n_llseek(55, 891715584, [891715584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\220\\347~\\1\\0\\0\\0p\\0P\\1\\0 \\2 \\230\\236\\320\\2\"..., 8192)\n= 8192\n_llseek(55, 857858048, [857858048], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\250\\227\\317|\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\254\\236\\250\"...,\n8192) = 8192\n_llseek(55, 666910720, [666910720], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x\\206\\3q\\1\\0\\0\\0p\\0004\\1\\0 \\2 \\254\\236\\242\\2P\\235\"...,\n8192) = 8192\n_llseek(55, 857841664, [857841664], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0dT\\317|\\1\\0\\0\\0p\\0\\224\\0\\0 \\2 \\214\\236\\350\\2\\30\"..., 8192)\n= 
8192", "msg_date": "Tue, 30 Aug 2005 14:30:07 -0400", "msg_from": "\"Woody 
Woodring\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access" }, { "msg_contents": "Remy,\n\n> The behavior we see is that when running queries that do random reads\n> on disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a\n> throughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 kB/s\n> on sequential read operations on the netapps)\n\nThis seems pretty low for a NetApp -- you should be able to manage up to \n180mb/s, if not higher. Are you sure it's configured correctly?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Aug 2005 11:32:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access" }, { "msg_contents": "\nOn 30-Aug-05, at 14:32, Josh Berkus wrote:\n\n> Remy,\n>\n>> The behavior we see is that when running queries that do random reads\n>> on disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a\n>> throughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 \n>> kB/s\n>> on sequential read operations on the netapps)\n>\n> This seems pretty low for a NetApp -- you should be able to manage up \n> to\n> 180mb/s, if not higher. Are you sure it's configured correctly?\nHi Josh,\n\nThe config has been reviewed by NetApp. We do get rates higher then \n80mb/s, but on average, that's what we get.\n\nDo you have NetApp filers deployed ?\nHow many spindles do you have in your volume ?\nOn which OS are you running Postgres ?\n\nThanks,\n\nRémy\n\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Tue, 30 Aug 2005 14:42:38 -0400", "msg_from": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load and iowait but no disk access" } ]
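For anyone wanting to try the two remedies suggested in the thread above (lowering random_page_cost, or CLUSTERing the table on the index it is scanned by), here is a minimal SQL sketch. The table and index names (orders, orders_cust_idx) and the cost value are illustrative assumptions only, not taken from Rémy's schema:

    -- See what the planner currently chooses and how long it takes.
    EXPLAIN ANALYZE SELECT * FROM orders WHERE cust_id = 42;

    -- Make random page reads look cheaper relative to sequential ones
    -- (4.0 is the default; cached or fast storage often justifies less).
    SET random_page_cost = 2.5;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE cust_id = 42;

    -- Or physically reorder the heap to match the index, so the index
    -- scan touches far fewer pages; this must be repeated as rows churn.
    CLUSTER orders_cust_idx ON orders;
    ANALYZE orders;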
[ { "msg_contents": "Hi,\n \nI've configured postgresql to use 1GB of shared buffers but meminfo and \"top\" are indicating 0 shared buffers page. Why?\n \nIt's a Linux Redhat 9 box with 4GB RAM and postgresql 7.3.\n \nThanks in advance!\n \nReimer\n", "msg_date": "Mon, 29 Aug 2005 16:23:20 +0000 (GMT)", "msg_from": "Carlos Henrique Reimer <[email protected]>", "msg_from_op": true, "msg_subject": "shared buffers" }, { "msg_contents": "> I've configured postgresql to use 1GB of shared buffers but meminfo and \n> \"top\" are indicating 0 shared buffers page. Why?\n\n1GB shared buffers is far too much. Set it back to like 30000 buffers \nmax...\n\nChris\n\n", "msg_date": "Tue, 30 Aug 2005 09:27:15 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers" }, { "msg_contents": "I forgot to say that it's a 12GB database...\n \nOk, I'll set shared buffers to 30.000 pages but even so \"meminfo\" and \"top\" shouldn't show some shared pages? \n \nI heard something about that Redhat 9 can't handle very well RAM higher than 2GB. Is it right?\n\nThanks in advance!\n \nReimer\n\nChristopher Kings-Lynne <[email protected]> wrote:\n> I've configured postgresql to use 1GB of shared buffers but meminfo and \n> \"top\" are indicating 0 shared buffers page. Why?\n\n1GB shared buffers is far too much. Set it back to like 30000 buffers \nmax...\n\nChris\n", "msg_date": "Mon, 29 Aug 2005 22:54:54 -0300 (ART)", "msg_from": "Carlos Henrique Reimer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared buffers" }, { "msg_contents": "> I forgot to say that it's a 12GB database...\n\nThat's actually not that large.\n\n> Ok, I'll set shared buffers to 30.000 pages but even so \"meminfo\" and \n> \"top\" shouldn't show some shared pages?\n\nYeah. The reason for not setting buffers so high is because PostgreSQL \ncannot efficiently manage huge shared buffers, so you're better off \ngiving the RAM to Linux's disk cache.\n\nChris\n\n", "msg_date": "Tue, 30 Aug 2005 10:08:21 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers" }, { "msg_contents": "Carlos Henrique Reimer <[email protected]> writes:\n> I heard something about that Redhat 9 can't handle very well RAM higher than 2GB. 
Is it right?\n\nRHL 9 is certainly pretty long in the tooth. Why aren't you using a\nmore recent distro?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Aug 2005 22:16:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers " }, { "msg_contents": "Carlos Henrique Reimer wrote:\n> I forgot to say that it�s a 12GB database...\n> \n> Ok, I�ll set shared buffers to 30.000 pages but even so \"meminfo\" and \n> \"top\" shouldn�t show some shared pages?\n> \n> I heard something about that Redhat 9 can�t handle very well RAM higher \n> than 2GB. Is it right?\n> Thanks in advance!\n\nRH9, like any 32-bit OS, is limited to 2GB address space w/o special \ntricks. However, it can access > 2GB for the OS disk cache using PAE if \nyou are running the bigmem kernel.\n", "msg_date": "Tue, 30 Aug 2005 00:05:15 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers" }, { "msg_contents": "Chris,\nWould you say that 30000 pages is a good maximum for a Postgres install?\nWe're running 8.0.3 on 64-bit SUSE on a dual Opteron box with 4G and have\nshared_buffers set at 120000. I've moved it up and down (it was 160000\nwhen I got here) without any measurable performance difference.\n\nThe reason I ask is because I occasionally see large-ish queries take\nforever (like cancel-after-12-hours forever) and wondered if this could\nresult from shared_buffers being too large.\n\nThanks for your (and anyone else's) help!\nMartin Nickel\n\nOn Tue, 30 Aug 2005 10:08:21 +0800, Christopher Kings-Lynne wrote:\n\n>> I forgot to say that it�s a 12GB database...\n> \n> That's actually not that large.\n> \n>> Ok, I�ll set shared buffers to 30.000 pages but even so \"meminfo\" and \n>> \"top\" shouldn�t show some shared pages?\n> \n> Yeah. The reason for not setting buffers so high is because PostgreSQL \n> cannot efficiently manage huge shared buffers, so you're better off \n> giving the RAM to Linux's disk cache.\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Sun, 04 Sep 2005 21:56:01 -0500", "msg_from": "Martin Nickel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers" }, { "msg_contents": "Martin Nickel wrote:\n> Chris,\n> Would you say that 30000 pages is a good maximum for a Postgres install?\n> We're running 8.0.3 on 64-bit SUSE on a dual Opteron box with 4G and have\n> shared_buffers set at 120000. I've moved it up and down (it was 160000\n> when I got here) without any measurable performance difference.\n\nWhat I've read on the mailing list, is that usually the sweet spot is\nactually around 10k pages. 120k seems far too high.\n\nI believe that the major fixes to the buffer manager are more in 8.1\nrather than 8.0, so you probably are hitting some problems. 
(The biggest\nproblem was that there were places that require doing a complete scan\nthrough shared memory looking for dirty pages, or some such).\n\n>\n> The reason I ask is because I occasionally see large-ish queries take\n> forever (like cancel-after-12-hours forever) and wondered if this could\n> result from shared_buffers being too large.\n\nThere are lots of possibilities for why these take so long, perhaps you\nwould want to post them, and we can try to help.\nFor instance, if you have a foreign key reference from one table to\nanother, and don't have indexes on both sides, then deleting from the\nreferenced table, will cause a sequential scan on the referring table\nfor *each* deleted row. (IIRC).\n\nJohn\n=:->\n\n>\n> Thanks for your (and anyone else's) help!\n> Martin Nickel", "msg_date": "Sun, 11 Sep 2005 08:35:53 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared buffers" } ]
[ { "msg_contents": "Tobias wrote:\n> Splendid :-) Unfortunately we will not be upgrading for some monthes\n> still,\n> but anyway I'm happy. This provides yet another good argument for\n> upgrading\n> sooner. I'm also happy to see such a perfect match:\n> \n> - A problem that can be reduced from beeing complex and\n> production-specific, to simple and easily reproducible.\n> \n> - Enthusiastic people testing it and pinpointing even more precisely\nwhat\n> conditions will cause the condition\n> \n> - Programmers actually fixing the issue\n> \n> - Testers verifying that it was fixed\n> \n> Long live postgresql! :-)\n\nIn the last three or so years since I've been really active with\npostgresql, I've found two or three issues/bugs which I was able to\nreproduce and reduce to a test case. In all instances the fix was in\ncvs literally within minutes.\n\nMerlin\n", "msg_date": "Mon, 29 Aug 2005 14:41:46 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit + group + join" } ]
[ { "msg_contents": "\nTry as I might, I can't seem to get it to work ... table has >9million \nrows in it, I've created an index \"using btree ( priority ) where priority \n< 0;\", where the table distribution looks like:\n\n priority | count\n----------+---------\n -2 | 138435\n -1 | 943250\n 1 | 3416\n 9 | 1134171\n | 7276960\n(5 rows)\n\nAnd it still won't use the index:\n\n# explain update table set priority = -3 where priority = -1;\n QUERY PLAN \n------------------------------------------------------------------\n Seq Scan on table (cost=0.00..400735.90 rows=993939 width=278)\n Filter: (priority = -1)\n(2 rows)\n\nBut, ti will if I try 'priority = -2' ... what is teh threshhold for using \nthe index? obviously 10% of the records is too high ...\n\nthanks ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n", "msg_date": "Mon, 29 Aug 2005 16:56:15 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "getting an index to work with partial indices ..." }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> But, ti will if I try 'priority = -2' ... what is teh threshhold for using \n> the index? obviously 10% of the records is too high ...\n\nDepends on a lot of factors, but usually somewhere between 1% and 10%.\n(The new bitmap index scan code in 8.1 should be workable for higher\npercentages.) If this doesn't seem to square with reality for you,\nyou might try reducing random_page_cost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Aug 2005 16:19:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting an index to work with partial indices ... " } ]
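As a self-contained illustration of the behaviour discussed above, here is a sketch using a hypothetical table named queue with roughly the same value distribution; whether the partial index is chosen still depends on the planner's selectivity estimate and on random_page_cost:

    -- Partial index covering only the rare negative priorities.
    CREATE INDEX queue_neg_priority_idx ON queue (priority)
        WHERE priority < 0;
    ANALYZE queue;

    -- Roughly 1.5% of rows match -2, selective enough for an index scan;
    -- roughly 10% match -1, which is usually planned as a seq scan.
    EXPLAIN UPDATE queue SET priority = -3 WHERE priority = -2;
    EXPLAIN UPDATE queue SET priority = -3 WHERE priority = -1;

    -- Lowering random_page_cost moves the crossover point toward the index.
    SET random_page_cost = 2;
    EXPLAIN UPDATE queue SET priority = -3 WHERE priority = -1;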
[ { "msg_contents": "Actually the indexes on the child table do seem to get used - I just\nwanted to make sure there was no penalty not having indexes on the empty\nparent tables.\n \nYou are right - the parent is the best way to get at the unknown\nchildren ... \n\n\n _____ \n\n\tFrom: Thomas F. O'Connell [mailto:[email protected]] \n\tSent: Tuesday, August 30, 2005 6:15 AM\n\tTo: Lenard, Rohan (Rohan)\n\tCc: [email protected]\n\tSubject: Re: [PERFORM] Need indexes on empty tables for good\nperformance ?\n\t\n\t\n\tRohan, \n\n\tYou should note that in Postgres, indexes are not inherited by\nchild tables.\n\n\tAlso, it seems difficult to select from a child table whose name\nyou don't know unless you access the parent. And if you are accessing\nthe data via the parent, I'm reasonably certain that you will find that\nindexes aren't used (even if they exist on the children) as a result of\nthe way the children are accessed.\n\n\t\n\t--\n\tThomas F. O'Connell\n\tCo-Founder, Information Architect\n\tSitening, LLC\n\n\tStrategic Open Source: Open Your i(tm)\n\n\thttp://www.sitening.com/\n\t110 30th Avenue North, Suite 6\n\tNashville, TN 37203-6320\n\t615-469-5150\n\t615-469-5151 (fax)\n\t\n\n\tOn Aug 22, 2005, at 10:41 PM, Lenard, Rohan (Rohan) wrote:\n\n\n\t\tI've read that indexes aren't used for COUNT(*) and I've\nnoticed (7.3.x) with EXPLAIN that indexes never seem to be used on empty\ntables - is there any reason to have indexes on empty tables, or will\npostgresql never use them.\n\t\t \n\t\tThis is not as silly as it sounds - with table\ninheritance you might have table children with the data and a parent\nthat is empty. It'd be nice to make sure postgresql knows to never\nreally look at the parent - especially is you don't know the names of\nall the children ..\n\t\t \n\t\tThoughts ?\n\t\t \n\t\tthx,\n\t\t Rohan\n", "msg_date": "Tue, 30 Aug 2005 08:13:37 +1000", "msg_from": "\"Lenard, Rohan (Rohan)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need indexes on empty tables for good performance ?" }, { "msg_contents": "Lenard, Rohan (Rohan) wrote:\n\n> Actually the indexes on the child table do seem to get used - I just \n> wanted to make sure there was no penalty not having indexes on the \n> empty parent tables.\n> \n> You are right - the parent is the best way to get at the unknown \n> children ...\n\nIndexes are created in the inheritance process, iirc. However, index \nentries are not inherited, which means that index-based unique \nconstraints don't properly get inherited.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Tue, 30 Aug 2005 10:43:05 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need indexes on empty tables for good performance ?" } ]
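Since the thread above turns on indexes not being inherited, here is a small sketch of the usual arrangement, with invented table names; each child gets its own index, and queries through the parent still reach every child:

    CREATE TABLE measurements (
        logdate  date NOT NULL,
        reading  numeric
    );

    -- Children hold the data; the parent typically stays empty.
    CREATE TABLE measurements_2005_08 () INHERITS (measurements);
    CREATE TABLE measurements_2005_09 () INHERITS (measurements);

    -- Indexes (and unique constraints) must be declared per child.
    CREATE INDEX measurements_2005_08_logdate_idx
        ON measurements_2005_08 (logdate);
    CREATE INDEX measurements_2005_09_logdate_idx
        ON measurements_2005_09 (logdate);

    -- Selecting from the parent scans the parent plus all children.
    EXPLAIN SELECT * FROM measurements WHERE logdate = '2005-08-30';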
[ { "msg_contents": "\nHi All,\n          I am running an application which connects to the postgres\ndatabase at initialization time and performs database operations like\nSelect/Update.\nDatabase queries are very simple.\nOn analyzing my application through Quantifier (a performance analyzing\ntool), I found that most of the time my application is waiting on recv,\nwaiting for the database response.\nWhen I run VACUUM on particular tables, I observed that performance\nimproves drastically.\n\nSo please tell me how I can improve database performance through\nconfiguration parameters. I had tried to change parameters in the\npostgresql.conf file but to no avail.\nNow I am trying to run Auto Vacuum, but don't know how to run it.\n\nPlease help me to solve these problems.\n\nThanks in advance.\nHemant\n", "msg_date": "Tue, 30 Aug 2005 10:05:02 +0530", "msg_from": "Hemant Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "How to improve Postgres performance" }, { "msg_contents": "On Tue, 30 Aug 2005, Hemant Pandey wrote:\n\n> So please tell me how I can improve database performance through\n> configuration parameters. I had tried to change parameters in the\n> postgresql.conf file but to no avail.\n> Now I am trying to run Auto Vacuum, but don't know how to run it.\n\nThe most important part is that you need to run VACUUM ANALYZE regularly. \nVacuum can be started each night in a cron job, started from pg_autovacuum\nwhen it thinks it's needed, or started in some other way. In any case, it\nhas to be run whenever the data in the database have changed enough.\n\nThe parameters in the config that are most important in my experience are \neffective_cache_size and shared_buffers.\n\nThis is a text I like (it's for pg 7.4 but still useful):\n\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Tue, 30 Aug 2005 08:04:35 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve Postgres performance" } ]
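To make the maintenance advice above concrete, one illustrative way to schedule it follows. The database name and schedule are assumptions, and the contrib pg_autovacuum daemon's options vary between 7.4 and 8.0 releases, so treat this strictly as a sketch and check the README that ships with your version:

    -- Simplest approach: run VACUUM ANALYZE on a schedule, e.g. nightly
    -- from cron with vacuumdb (database name "mydb" is made up):
    --   15 3 * * *  postgres  vacuumdb --analyze --quiet mydb
    -- or interactively from psql:
    VACUUM ANALYZE;

    -- Alternatively, the contrib pg_autovacuum daemon can watch table
    -- activity and launch vacuums for you between the scheduled runs.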
[ { "msg_contents": "Hello,\n\nWe are about to install a new PostgreSQL server, and despite of being a \nvery humble configuration compared to the ones we see in the list, it's \nthe biggest one we've got till now.\n\nThe server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our main \ndoubt is what is the best configuration for the disks. We are thinking \nabout use them in a RAID-0 array. Is this the best option? What do you \nsuggest on partitioning? Separate partitions for the OS, data and pg_xlog?\n\nWe'll have some time to work on performance tests, and if someone is \ninterested we can provide our results.\n\nThanks in advance,\nAlvaro\n", "msg_date": "Tue, 30 Aug 2005 09:37:17 -0300", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "RAID Configuration Sugestion" }, { "msg_contents": "On Tue, Aug 30, 2005 at 09:37:17 -0300,\n Alvaro Nunes Melo <[email protected]> wrote:\n> \n> The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our main \n> doubt is what is the best configuration for the disks. We are thinking \n> about use them in a RAID-0 array. Is this the best option? What do you \n> suggest on partitioning? Separate partitions for the OS, data and pg_xlog?\n\nYou don't have a lot of options with just two disks. What are you trying\nto accomplish with raid?\n\nRaid 0 will possibly give you some speed up, while raid 1 will give you some\nfault tolerance, some speed of of reads, but cost you half your disk space.\n", "msg_date": "Tue, 30 Aug 2005 07:58:08 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "Please keep replies copied to the list so that others may contribute to\nand learn from the discussion.\n\nOn Tue, Aug 30, 2005 at 10:15:13 -0300,\n Alvaro Nunes Melo <[email protected]> wrote:\n> Hello Bruno,\n> \n> Bruno Wolff III wrote:\n> \n> >On Tue, Aug 30, 2005 at 09:37:17 -0300,\n> > Alvaro Nunes Melo <[email protected]> wrote:\n> > \n> >\n> >>The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our main \n> >>doubt is what is the best configuration for the disks. We are thinking \n> >>about use them in a RAID-0 array. Is this the best option? What do you \n> >>suggest on partitioning? Separate partitions for the OS, data and pg_xlog?\n> >\n> Our main goal is performance speedup. Disk space might not be a problem. \n> I've read a lot here about movig pg_xlog to different partitions, and \n> we'll surely make tests to see what configuration might be better.\n\nThis isn't a very good mix of hardware for running postgres. Xeons have\nsome context switching issues for which you will probably see some\nspeed up in 8.1. (So if you aren't going into production for sevral\nmonths you might want to be using 8.1beta.) Having only two disk drives\nis also not a good idea.\n\nWith what you have you either want to use raid 0 and not worry too much\nabout how the disks are partitioned or use one disk for wal logging\nand the other for other stuff. There are other people on the list who\ncan probably give you a better idea of which of these options is likely\nto be better in your case. However, they may need to know more about\nyour raid controller. 
In particular how much battery backed memory does\nit have and its model.\n", "msg_date": "Tue, 30 Aug 2005 08:50:18 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "At 08:37 AM 8/30/2005, Alvaro Nunes Melo wrote:\n>Hello,\n>\n>We are about to install a new PostgreSQL server, and despite of \n>being a very humble configuration compared to the ones we see in the \n>list, it's the biggest one we've got till now.\n>\n>The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our \n>main doubt is what is the best configuration for the disks. We are \n>thinking about use them in a RAID-0 array. Is this the best option? \n>What do you suggest on partitioning? Separate partitions for the OS, \n>data and pg_xlog?\n\nThis is _very_ modest HW. Unless your DB and/or DB load is similarly \nmodest, you are not going to be happy with the performance of your DBMS.\n\nAt a minimum, for safety reasons you want 4 HDs: 2 for a RAID 1 set \nfor the DB, and 2 for a RAID 1 set for the OS + pg_xlog.\n2 extra HDs, even SCSI HDs, is cheap. Especially when compared to \nthe cost of corrupted or lost data.\n\nHD's and RAM are cheap enough that you should be able to upgrade in \nmore ways, but do at least that \"upgrade\"!\n\nBeyond that, the best ways to spend you limited $ are highly \ndependent on your exact DB and its usage pattern.\n\nRon Peacetree\n\n\n", "msg_date": "Tue, 30 Aug 2005 10:45:15 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "> > >On Tue, Aug 30, 2005 at 09:37:17 -0300,\n> > > Alvaro Nunes Melo <[email protected]> wrote:\n> > >>The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our main\n> > >>doubt is what is the best configuration for the disks. We are thinking\n> > >>about use them in a RAID-0 array. Is this the best option? What do you\n> > >>suggest on partitioning? Separate partitions for the OS, data and pg_xlog?\n> > >\n> > Our main goal is performance speedup. Disk space might not be a problem.\n> > I've read a lot here about movig pg_xlog to different partitions, and\n> > we'll surely make tests to see what configuration might be better.\n> \n\nI've set up several servers with a config like this. Its not ideal,\nbut there's no reason you can't enjoy the benefits of a snappy\napplication.\n\nThe best results I've had involve dedicating one drive to OS, swap,\nlogs, tmp and everything and dedicate one drive to postgres. If you\nuse *nix you can mount the second drive as /var/lib/pgsql (or where\never postgres lives on your server) with noatime as a mount option.\n\nIn retrospect, you might have saved the money on the second CPU and\ngotten two more hard drives, but if you're running a dual task server\n(i.e. LAMP) you may appreciate the second CPU.\n\nThe beauty of a server like this is that it puts more of the wizardry\nof creating a fast application into the hands of the app developer,\nwhich results in a better db schema, optimized queries and generally\n*thinking* about the performance of the code. I personally feel that\nto be a very rewarding aspect of my job. 
(As a hobby I program\nmicrontrollers that run at 4MHz and have only 256 bytes of RAM, so\nthat could just be me.;-)\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 30 Aug 2005 11:10:58 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "Ron wrote:\n\n> At 08:37 AM 8/30/2005, Alvaro Nunes Melo wrote:\n>\n>> Hello,\n>>\n>> We are about to install a new PostgreSQL server, and despite of being \n>> a very humble configuration compared to the ones we see in the list, \n>> it's the biggest one we've got till now.\n>>\n>> The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. Our \n>> main doubt is what is the best configuration for the disks. We are \n>> thinking about use them in a RAID-0 array. Is this the best option? \n>> What do you suggest on partitioning? Separate partitions for the OS, \n>> data and pg_xlog?\n>\n>\n> This is _very_ modest HW. Unless your DB and/or DB load is similarly \n> modest, you are not going to be happy with the performance of your DBMS.\n\nWell that is a pretty blanket statement. I have many customers who \nhappily run in less hardware that what is mentioned above.\nIt all depends on the application itself and how the database is utilized.\n\n> At a minimum, for safety reasons you want 4 HDs: 2 for a RAID 1 set \n> for the DB, and 2 for a RAID 1 set for the OS + pg_xlog.\n> 2 extra HDs, even SCSI HDs, is cheap. Especially when compared to the \n> cost of corrupted or lost data.\n\nYour real test is going to be prototyping the performance you need. A \nsingle RAID 1 mirror (don't use RAID 0) may be more\nthan enough. However based on the fact that you speced Xeons my guess is \nyou spent money on CPUs when you should have\nspent money on hard drives.\n\nIf you still have the budget, I would suggest considering either what \nRon suggested or possibly using a 4 drive RAID 10 instead.\n\nIf you can't afford to put a couple more SCSI disks it may be worth \nwhile to put a software RAID 1 with ATA disks for the OS and\nswap and then use straight SCSI hardware RAID 1 for the DB. That will \nallow you to push any swap operations off to the OS disks\nwithout sacrificing the performance and reliability of the database itself.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> HD's and RAM are cheap enough that you should be able to upgrade in \n> more ways, but do at least that \"upgrade\"!\n>\n> Beyond that, the best ways to spend you limited $ are highly dependent \n> on your exact DB and its usage pattern.\n>\n> Ron Peacetree\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "Tue, 30 Aug 2005 09:56:36 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "At 12:56 PM 8/30/2005, Joshua D. Drake wrote:\n>Ron wrote:\n>\n>>At 08:37 AM 8/30/2005, Alvaro Nunes Melo wrote:\n>>\n>>>Hello,\n>>>\n>>>We are about to install a new PostgreSQL server, and despite of \n>>>being a very humble configuration compared to the ones we see in \n>>>the list, it's the biggest one we've got till now.\n>>>\n>>>The server is a Dual Xeon 3.0 with 2 GB RAM and two SCSI disks. \n>>>Our main doubt is what is the best configuration for the disks. 
We \n>>>are thinking about use them in a RAID-0 array. Is this the best \n>>>option? What do you suggest on partitioning? Separate partitions \n>>>for the OS, data and pg_xlog?\n>>\n>>\n>>This is _very_ modest HW. Unless your DB and/or DB load is \n>>similarly modest, you are not going to be happy with the \n>>performance of your DBMS.\n>\n>Well that is a pretty blanket statement. I have many customers who \n>happily run in less hardware that what is mentioned above.\n>It all depends on the application itself and how the database is utilized.\n\nIf your customers \"run happily\" on 2 HD's, then IME they have very \nmodest DB storage and/or DB performance needs. For safety reasons, \nthe best thing to do if you only have 2 HD's is to run them as a RAID \n1 with everything on them. The slightly better performing but \nconsiderably less safe alternative is to put the OS + logs on 1 HD \nand the DB on the other. Any resemblance to a semi-serious OLTP load \nwill reduce either such system to an HD IO bound one with poor IO rates.\n\nIf, as above, your DBMS is bounded by the performance of one HD, then \nyou are AT BEST getting the raw IO rate of such a device: say \n~70-80MB/s in average sustained raw sequential IO. Files system \noverhead and any seeking behavior will rapidly reduce that number to \nconsiderably less. Consider that the CPU <-> memory IO subsystem is \neasily capable of ~3.2GBps. So you are talking about slowing the DB \nserver to at most ~1/40, maybe even as little as ~1/200, its \npotential under such circumstances.\n\nIf your DB can fit completely in RAM and/or does light duty write IO, \nthis may not be a serious issue. OTOH, once you start using those \nHD's to any reasonable extent, most of the rest of the investment \nyou've made in server HW is wasted.\n\nAs I keep saying, the highest priority in purchasing a DBMS is to \nmake sure you have enough HD IO bandwidth. RAM comes second, and CPU \nis a distant third.\n\n\n>>At a minimum, for safety reasons you want 4 HDs: 2 for a RAID 1 set \n>>for the DB, and 2 for a RAID 1 set for the OS + pg_xlog.\n>>2 extra HDs, even SCSI HDs, is cheap. Especially when compared to \n>>the cost of corrupted or lost data.\n>\n>Your real test is going to be prototyping the performance you need. \n>A single RAID 1 mirror (don't use RAID 0) may be more\n>than enough. However based on the fact that you speced Xeons my \n>guess is you spent money on CPUs when you should have\n>spent money on hard drives.\n\nI agree with Josh on both points. Don't use RAID 0 for persistent \ndata unless you like losing data. Spend more on HDs and RAM and less \non CPU's (fast FSB is far more important than high clock rate. In \ngeneral buy the highest FSB with the slowest clock rate.). If fact, \nif you are that strapped for cash, exchange those 2 SCSI HD's for \ntheir $ equivalent in SATA HD's. The extra spindles will be well worth it.\n\n\n>If you still have the budget, I would suggest considering either \n>what Ron suggested or possibly using a 4 drive RAID 10 instead.\n\nIME, with only 4 HDs, it's usually better to split them them into two \nRAID 1's (one for the db, one for everything else including the logs) \nthan it is to put everything on one RAID 10. 
YMMV.\n\n\nRon Peacetree\n\n\n", "msg_date": "Tue, 30 Aug 2005 14:16:23 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "On 8/30/05, Ron <[email protected]> wrote:\n> >If you still have the budget, I would suggest considering either\n> >what Ron suggested or possibly using a 4 drive RAID 10 instead.\n> \n> IME, with only 4 HDs, it's usually better to split them them into two\n> RAID 1's (one for the db, one for everything else including the logs)\n> than it is to put everything on one RAID 10. YMMV.\n\nThis coresponds to what I have observed as well. Of course, we all\nknow that work loads varry.\n\nJust a note for the OP who has only two drives, there are tools for a\nvariety of OSs that monitor the S.M.A.R.T. features of the drive and\ngive an early warning in case it senses impending failure. I've caught\ntwo drives before failure with these types of tools.\n\nAlso note that when reading discussions of this nature you must take\ninto consideration the value of your data. For some people, restoring\nfrom a nightly backup is inconvienent, but not life-or-death. Some\npeople even do twice-daily backups so that in case of a failure they\ncan recover with little loss of data. This might be a good way to\nmitigate the cost of expensive server hardware. If you cannot afford\nto lose any data then you need to consider it imperitive to use some\ntype of RAID setup (not RAID 0) and to achieve great performance\nyou'll want more than 2 drives.\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 30 Aug 2005 13:50:16 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "\n>\n>> If you still have the budget, I would suggest considering either what \n>> Ron suggested or possibly using a 4 drive RAID 10 instead.\n>\n>\n> IME, with only 4 HDs, it's usually better to split them them into two \n> RAID 1's (one for the db, one for everything else including the logs) \n> than it is to put everything on one RAID 10. YMMV.\n\nReally? That's interesting. My experience is different, I assume SCSI? \nSoftware/Hardware Raid?\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n>\n> Ron Peacetree\n>\n\n", "msg_date": "Tue, 30 Aug 2005 12:27:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "At 03:27 PM 8/30/2005, Joshua D. Drake wrote:\n\n\n>>>If you still have the budget, I would suggest considering either \n>>>what Ron suggested or possibly using a 4 drive RAID 10 instead.\n>>\n>>\n>>IME, with only 4 HDs, it's usually better to split them them into \n>>two RAID 1's (one for the db, one for everything else including the \n>>logs) than it is to put everything on one RAID 10. YMMV.\n>\n>Really? That's interesting. My experience is different, I assume \n>SCSI? Software/Hardware Raid?\n\nThe issue exists regardless of technologies used, although the \ntechnology used does affect when things become an irritation or \nserious problem.\n\nThe issue with \"everything on the same HD set\" seems to be that under \nlight loads anything works reasonably well, but as load increases \ncontention between DB table access, OS access, and xlog writes can \ncause performance problems.\n\nIn particular, _everything_ else hangs while logs are being written \nwith \"everything on the same HD set\". 
Thus leaving you with the \nnasty choices of small log writes that cause more seeking behavior, \nand the resultant poor overall HD IO performance, or large log writes \nthat basically freeze the server until they are done.\n\nHaving the logs on a different HD, and if possible different IO bus, \nreduces this effect to a minimum and seems to be a better choice than \nthe \"shared everything\" approach.\n\nAlthough this effect seems largest when there are fewest HDs, the \ngeneral pattern is that one should use as many spindles as one can \nmake use of and that they should be as dedicated as possible in their \npurpose(s). That's why the TPC bench marked systems tend to have \nliterally 100's of HD's and they tend to be split into very focused purposes.\n\nRon Peacetree\n\n\n", "msg_date": "Tue, 30 Aug 2005 19:02:28 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "On Tue, Aug 30, 2005 at 07:02:28PM -0400, Ron wrote:\n>purpose(s). That's why the TPC bench marked systems tend to have \n>literally 100's of HD's and they tend to be split into very focused \n>purposes.\n\nOf course, TPC benchmark systems are constructed such that cost and\nstorage capacity are irrelevant--in the real world things tend to be\nmore complicated.\n\nMike Stone\n", "msg_date": "Tue, 30 Aug 2005 20:04:43 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "At 08:04 PM 8/30/2005, Michael Stone wrote:\n>On Tue, Aug 30, 2005 at 07:02:28PM -0400, Ron wrote:\n>>purpose(s). That's why the TPC bench marked systems tend to have \n>>literally 100's of HD's and they tend to be split into very focused purposes.\n>\n>Of course, TPC benchmark systems are constructed such that cost and \n>storage capacity are irrelevant--in the real world things tend to be\n>more complicated.\n\nThe scary thing is that I've worked on RW production systems that \nbore a striking resemblance to a TPC benchmark system. As you can \nimagine, they uniformly belonged to BIG organizations (read: lot's 'o \n$$$) who were using the systems for mission critical stuff where \neither it was company existence threatening for the system to be \ndone, or they would lose much $$$ per min of down time, or both.\n\nFinancial institutions, insurance companies, central data mines for \nFortune 2000 companies, etc _all_ build systems that push the state \nof the art in how much storage can be managed and how many HDs, CPUs, \nRAM DIMMs, etc are usable.\n\nHistorically, this has been the sole province of Oracle and DB2 on \nthe SW side and equally outrageously priced custom HW. Clearly, I'd \nlike to see PostgreSQL change that ;-)\n\nRon Peacetree\n\n\n", "msg_date": "Tue, 30 Aug 2005 20:41:40 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "On Tue, Aug 30, 2005 at 08:41:40PM -0400, Ron wrote:\n>The scary thing is that I've worked on RW production systems that \n>bore a striking resemblance to a TPC benchmark system. As you can \n>imagine, they uniformly belonged to BIG organizations (read: lot's 'o \n>$$$) who were using the systems for mission critical stuff where \n>either it was company existence threatening for the system to be \n>done, or they would lose much $$$ per min of down time, or both.\n\nYeah, and that market is relevant to someone with one dell server and 2\nhard disks how? 
\n\nMike Stone\n", "msg_date": "Tue, 30 Aug 2005 20:43:32 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" }, { "msg_contents": "At 08:43 PM 8/30/2005, Michael Stone wrote:\n>On Tue, Aug 30, 2005 at 08:41:40PM -0400, Ron wrote:\n>>The scary thing is that I've worked on RW production systems that \n>>bore a striking resemblance to a TPC benchmark system. As you can \n>>imagine, they uniformly belonged to BIG organizations (read: lot's \n>>'o $$$) who were using the systems for mission critical stuff where \n>>either it was company existence threatening for the system to be \n>>done, or they would lose much $$$ per min of down time, or both.\n>\n>Yeah, and that market is relevant to someone with one dell server \n>and 2 hard disks how?\nBecause successful small companies that _start_ with one small server \nand 2 HDs grow to _become_ companies that need far more HW; ...and in \nthe perfect world their SW scales to their increased needs...\n\n_Without_ exponentially increasing their costs or overhead (as Oracle \nand DB2 currently do)\n\nTHIS is the real long term promise of OS DBMS.\n\nRon Peacetree\n\n\n", "msg_date": "Tue, 30 Aug 2005 22:50:57 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Configuration Sugestion" } ]
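To make the drive-splitting advice in the thread above concrete: once the OS and WAL sit on one mirror and the data on the other, PostgreSQL 8.0's tablespaces give a SQL-level way to point specific tables (or whole databases) at the dedicated data array. This is only an illustrative sketch; the mount point /mnt/dbarray, the tablespace name, and the table definition are placeholders, not anything taken from the thread, and it assumes 8.0 or later.

-- run as a superuser; the directory must already exist and be owned by the postgres user
CREATE TABLESPACE dbarray LOCATION '/mnt/dbarray/pgdata';

-- put a heavily hit table on the dedicated spindles
CREATE TABLE orders (
    id      integer PRIMARY KEY,
    payload text
) TABLESPACE dbarray;

On 7.4, which has no tablespaces, the usual way to get the same split is to initialize the cluster on the data array and, with the server stopped, move pg_xlog to a directory on the other mirror and leave a symlink in its place.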
[ { "msg_contents": "Hello Friends,\n \nWe were having a database in pgsql7.4.2 The database was responding very\nslowly even after full vacuum analyze (select count(*) from\nsome_table_having_18000_records was taking 18 Sec).\n \nWe took a backup of that db and restored it back. Now the same db on\nsame PC is responding fast (same query is taking 18 ms).\n \nBut we can't do the same as a solution of slow response. Do anybody has\nfaced similar problem? Is this due to any internal problem of pgsql? Is\nthere any clue to fasten the database?\n \nRegards,\n \nakshay\n \n \n---------------------------------------\nAkshay Mathur\nSMTS, Product Verification\nAirTight Networks, Inc. ( <http://www.airtightnetworks.net/>\nwww.airtightnetworks.net)\nO: +91 20 2588 1555 ext 205\nF: +91 20 2588 1445\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHello Friends,\n \nWe were having a database in pgsql7.4.2 The database was\nresponding very slowly even after full vacuum analyze (select count(*) from some_table_having_18000_records was taking 18\nSec).\n \nWe took a backup of that db and restored it back. Now the\nsame db on same PC is responding fast (same query is taking 18 ms).\n \nBut we can't do the same as a solution of slow response. Do\nanybody has faced similar problem? Is this due to any internal problem of\npgsql? Is there any clue to fasten the database?\n \nRegards,\n \nakshay\n \n \n---------------------------------------\nAkshay Mathur\nSMTS, Product Verification\nAirTight Networks, Inc.\n(www.airtightnetworks.net)\nO: +91 20 2588 1555 ext 205\nF: +91 20 2588 1445", "msg_date": "Tue, 30 Aug 2005 18:35:30 +0530", "msg_from": "\"Akshay Mathur\" <[email protected]>", "msg_from_op": true, "msg_subject": "Observation about db response time" }, { "msg_contents": "On Tue, 30 Aug 2005 18:35:30 +0530\n\"Akshay Mathur\" <[email protected]> wrote:\n\n> Hello Friends,\n> \n> We were having a database in pgsql7.4.2 The database was responding\n> very slowly even after full vacuum analyze (select count(*) from\n> some_table_having_18000_records was taking 18 Sec).\n> \n> We took a backup of that db and restored it back. Now the same db on\n> same PC is responding fast (same query is taking 18 ms).\n> \n> But we can't do the same as a solution of slow response. Do anybody\n> has faced similar problem? Is this due to any internal problem of\n> pgsql? Is there any clue to fasten the database?\n\n This could be because you don't have max_fsm_pages and\n max_fsm_relations setup correctly or are not doing full vacuums \n often enough. \n\n If your database deletes a ton of data as a matter of course then\n sometimes a full vacuum will not clear up as much space as it could.\n\n Try increasing those configuration values and doing vacuums more\n often. \n\n If you should also explore upgrading to the latest 8.0 as you will\n no doubt see noticeable speed improvements. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Tue, 30 Aug 2005 08:13:22 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Observation about db response time" }, { "msg_contents": "On Aug 30, 2005, at 9:05 AM, Akshay Mathur wrote:\n\n> We were having a database in pgsql7.4.2 The database was responding \n> very slowly even after full vacuum analyze (select count(*) from \n> some_table_having_18000_records was taking 18 Sec).\nOn a 7.4.2 db, there should probably be no index bloat, but there \ncould be. 
Does REINDEX on your tables help? If not, then VACUUM \nFULL followed by REINDEX may help. The latter should result in \nnearly the same as your dump+restore. And you need to run vacuum \noften enough to keep your tables from bloating. How often that is \ndepends on your update/delete rate.\n\nAlso, updating to 8.0 may help.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n\nOn Aug 30, 2005, at 9:05 AM, Akshay Mathur wrote:We were having a database in pgsql7.4.2 The database was responding very slowly even after full vacuum analyze (select count(*) from some_table_having_18000_records was taking 18 Sec).On a 7.4.2 db, there should probably be no index bloat, but there could be.  Does REINDEX on your tables help?  If not, then VACUUM FULL followed by REINDEX may help.  The latter should result in nearly the same as your dump+restore.  And you need to run vacuum often enough to keep your tables from bloating.  How often that is depends on your update/delete rate.Also, updating to 8.0 may help. Vivek Khera, Ph.D. +1-301-869-4449 x806", "msg_date": "Tue, 30 Aug 2005 10:53:48 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Observation about db response time" }, { "msg_contents": "On Tue, 2005-08-30 at 08:13 -0500, Frank Wiles wrote:\n> On Tue, 30 Aug 2005 18:35:30 +0530\n> \"Akshay Mathur\" <[email protected]> wrote:\n> \n> > Hello Friends,\n> > \n> > We were having a database in pgsql7.4.2 The database was responding\n> > very slowly even after full vacuum analyze (select count(*) from\n> > some_table_having_18000_records was taking 18 Sec).\n> > \n> > We took a backup of that db and restored it back. Now the same db on\n> > same PC is responding fast (same query is taking 18 ms).\n> > \n> > But we can't do the same as a solution of slow response. Do anybody\n> > has faced similar problem? Is this due to any internal problem of\n> > pgsql? Is there any clue to fasten the database?\n> \n> This could be because you don't have max_fsm_pages and\n> max_fsm_relations setup correctly or are not doing full vacuums \n> often enough. \n> \n> If your database deletes a ton of data as a matter of course then\n> sometimes a full vacuum will not clear up as much space as it could.\n> \n> Try increasing those configuration values and doing vacuums more\n> often. \n> \n> If you should also explore upgrading to the latest 8.0 as you will\n> no doubt see noticeable speed improvements. \n\nThis can also be caused by index bloat. VACUUM does not clear out the\nindex. You must use REINDEX for that.\n\n-jwb\n", "msg_date": "Tue, 30 Aug 2005 09:39:17 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Observation about db response time" } ]
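A minimal SQL sketch of the recovery and maintenance steps suggested in the thread above, reusing the thread's own placeholder table name; adjust names and the vacuum frequency to the real schema and update rate:

-- one-time recovery of an already-bloated table (both commands take exclusive locks)
VACUUM FULL VERBOSE some_table_having_18000_records;
REINDEX TABLE some_table_having_18000_records;

-- routine maintenance, run often enough to keep up with updates and deletes
VACUUM ANALYZE some_table_having_18000_records;

-- the free-space-map settings mentioned above (7.4/8.0-era GUCs)
SHOW max_fsm_pages;
SHOW max_fsm_relations;

If the free space map is too small to track all the dead space the routine vacuums find, raise max_fsm_pages in postgresql.conf; it only takes effect after a restart.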
[ { "msg_contents": "I have seen references of changing the kernel io scheduler at boot time...not sure if it applies to RHEL3.0, or will help, but try setting 'elevator=deadline' during boot time or via grub.conf. Have you tried running a simple 'dd' on the LUN? The drives are in RAID10 configuration, right?\n\n \n\nThanks,\n\nAnjan\n\n _____ \n\nFrom: Woody Woodring [mailto:[email protected]] \nSent: Tuesday, August 30, 2005 2:30 PM\nTo: 'Rémy Beaumont'; [email protected]\nSubject: Re: [PERFORM] High load and iowait but no disk access\n\n \n\nHave you tried a different kernel? We run with a netapp over NFS without any issues, but we have seen high IO-wait on other Dell boxes (running and not running postgres) and RHES 3. We have replaced a Dell PowerEdge 350 running RH 7.3 with a PE750 with more memory running RHES3 and it be bogged down with IO waits due to syslog messages writing to the disk, the old slower server could handle it fine. I don't know if it is a Dell thing or a RH kernel, but we try different kernels on our boxes to try to find one that works better. We have not found one that stands out over another consistently but we have been moving away from Update 2 kernel (2.4.21-15.ELsmp) due to server lockup issues. Unfortunately we get the best disk throughput on our few remaining 7.3 boxes.\n\n \n\nWoody\n\n \n\nIGLASS Networks\n\nwww.iglass.net\n\n \n\n _____ \n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Rémy Beaumont\nSent: Monday, August 29, 2005 9:43 AM\nTo: [email protected]\nSubject: [PERFORM] High load and iowait but no disk access\n\nWe have been trying to pinpoint what originally seem to be a I/O bottleneck but which now seems to be an issue with either Postgresql or RHES 3.\n\nWe have the following test environment on which we can reproduce the problem:\n\n1) Test System A\nDell 6650 Quad Xeon Pentium 4\n8 Gig of RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS270 connected using an FC card using 10 disks\n\n2) Test System B\nDell Dual Xeon Pentium III\n2 Gig o RAM\nOS: RHES 3 update 2\nStorage: NetApp FAS920 connected using an FC card using 28 disks\n\nOur Database size is around 30G. \n\nThe behavior we see is that when running queries that do random reads on disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a throughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 kB/s on sequential read operations on the netapps)\n\nThe stats of the NetApp do confirm that it is sitting idle. 
Doing an strace on the Postgresql process shows that is it doing seeks and reads.\n\nSo my question is where is this iowait time spent ?\nIs there a way to pinpoint the problem in more details ?\nWe are able to reproduce this behavior with Postgresql 7.4.8 and 8.0.3\n\nI have included the output of top,vmstat,strace and systat from the Netapp from System B while running a single query that generates this behavior.\n\nRémy\n\ntop output:\n06:27:28 up 5 days, 16:59, 6 users, load average: 1.04, 1.30, 1.01\n72 processes: 71 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\ntotal 2.7% 0.0% 1.0% 0.1% 0.2% 46.0% 49.5%\ncpu00 0.2% 0.0% 0.2% 0.0% 0.2% 2.2% 97.2%\ncpu01 5.3% 0.0% 1.9% 0.3% 0.3% 89.8% 1.9%\nMem: 2061696k av, 2043936k used, 17760k free, 0k shrd, 3916k buff\n1566332k actv, 296648k in_d, 30504k in_c\nSwap: 16771584k av, 21552k used, 16750032k free 1933772k cached\n\nPID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n30960 postgres 15 0 13424 10M 9908 D 2.7 0.5 2:00 1 postmaster\n30538 root 15 0 1080 764 524 S 0.7 0.0 0:43 0 sshd\n1 root 15 0 496 456 436 S 0.0 0.0 0:08 0 init\n2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0\n3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1\n4 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 keventd\n5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0\n6 root 34 19 0 0 0 SWN 0.0 0.0 0:00 1 ksoftirqd/1\n9 root 15 0 0 0 0 SW 0.0 0.0 0:24 1 bdflush\n7 root 15 0 0 0 0 SW 0.0 0.0 6:53 1 kswapd\n8 root 15 0 0 0 0 SW 0.0 0.0 8:44 1 kscand\n10 root 15 0 0 0 0 SW 0.0 0.0 0:13 0 kupdated\n11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd\n17 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 ahc_dv_0\n\n\nvmstat output \nprocs memory swap io system cpu\nr b swpd free buff cache si so bi bo in cs us sy id wa\n0 1 21552 17796 4872 1931928 2 3 3 1 27 6 2 1 7 3\n0 1 21552 18044 4880 1931652 0 0 1652 0 397 512 1 2 50 47\n0 1 21552 17976 4896 1931664 0 0 2468 0 407 552 2 2 50 47\n1 0 21552 17984 4896 1931608 0 0 2124 0 418 538 3 3 48 46\n0 1 21552 18028 4900 1931536 0 0 1592 0 385 509 1 3 50 46\n0 1 21552 18040 4916 1931488 0 0 1620 820 419 581 2 2 50 46\n0 1 21552 17968 4916 1931536 0 4 1708 4 402 554 3 1 50 46\n1 1 21552 18052 4916 1931388 0 0 1772 0 409 531 3 1 49 47\n0 1 21552 17912 4924 1931492 0 0 1772 0 408 565 3 1 48 48\n0 1 21552 17932 4932 1931440 0 4 1356 4 391 545 5 0 49 46\n0 1 21552 18320 4944 1931016 0 4 1500 840 414 571 1 1 48 50\n0 1 21552 17872 4944 1931440 0 0 2116 0 392 496 1 5 46 48\n0 1 21552 18060 4944 1931232 0 0 2232 0 423 597 1 2 48 49\n1 1 21552 17684 4944 1931584 0 0 1752 0 395 537 1 1 50 48\n0 1 21552 18000 4944 1931240 0 0 1576 0 401 549 0 1 50 49\n\n\nNetApp stats:\nCPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk DAFS FCP iSCSI FCP kB/s\nin out read write read write age hit time ty util in out\n2% 0 0 0 139 0 0 2788 0 0 0 3 96% 0% - 15% 0 139 0 3 2277\n2% 0 0 0 144 0 0 2504 0 0 0 3 96% 0% - 18% 0 144 0 3 2150\n2% 0 0 0 130 0 0 2212 0 0 0 3 96% 0% - 13% 0 130 0 3 1879\n3% 0 0 0 169 0 0 2937 80 0 0 3 96% 0% - 13% 0 169 0 4 2718\n2% 0 0 0 139 0 0 2448 0 0 0 3 96% 0% - 12% 0 139 0 3 2096\n2% 0 0 0 137 0 0 2116 0 0 0 3 96% 0% - 10% 0 137 0 3 1892\n3% 0 0 0 107 0 0 2660 812 0 0 3 96% 24% T 20% 0 107 0 3 1739\n2% 0 0 0 118 0 0 1788 0 0 0 3 96% 0% - 13% 0 118 0 3 1608\n2% 0 0 0 136 0 0 2228 0 0 0 3 96% 0% - 11% 0 136 0 3 2018\n2% 0 0 0 119 0 0 1940 0 0 0 3 96% 0% - 13% 0 119 0 3 1998\n2% 0 0 0 136 0 0 2175 0 0 0 3 96% 0% - 14% 0 136 0 3 1929\n2% 0 0 0 133 0 0 1924 0 0 0 3 96% 0% - 19% 0 133 0 
3 2292\n2% 0 0 0 115 0 0 2044 0 0 0 3 96% 0% - 11% 0 115 0 3 1682\n2% 0 0 0 134 0 0 2256 0 0 0 3 96% 0% - 12% 0 134 0 3 2096\n2% 0 0 0 112 0 0 2184 0 0 0 3 96% 0% - 12% 0 112 0 3 1633\n2% 0 0 0 163 0 0 2348 0 0 0 3 96% 0% - 13% 0 163 0 4 2421\n2% 0 0 0 120 0 0 2056 184 0 0 3 96% 8% T 14% 0 120 0 3 1703\n\nstrace output:\nread(55, \"\\4\\0\\0\\0\\10fm}\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\230\\236\\320\\0020\"..., 8192) = 8192\n_llseek(55, 857997312, [857997312], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\\\\\315\\321|\\1\\0\\0\\0p\\0\\354\\0\\0 \\2 \\250\\236\\260\"..., 8192) = 8192\n_llseek(55, 883220480, [883220480], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0T\\17a~\\1\\0\\0\\0p\\0\\20\\1\\0 \\2 \\270\\236\\220\\2D\\235\"..., 8192) = 8192\n_llseek(55, 858005504, [858005504], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\300\\356\\321|\\1\\0\\0\\0p\\0\\330\\0\\0 \\2 \\260\\236\\240\"..., 8192) = 8192\n_llseek(55, 857964544, [857964544], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0lH\\321|\\1\\0\\0\\0p\\0<\\1\\0 \\2 \\300\\236\\200\\2p\\235\"..., 8192) = 8192\n_llseek(55, 857956352, [857956352], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0l\\'\\321|\\1\\0\\0\\0p\\0\\320\\0\\0 \\2 \\260\\236\\240\\2\\\\\"..., 8192) = 8192\n_llseek(55, 910802944, [910802944], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\10}\\25\\200\\1\\0\\0\\0l\\0\\274\\1\\0 \\2 \\250\\236\\260\"..., 8192) = 8192\n_llseek(55, 857948160, [857948160], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\5\\321|\\1\\0\\0\\0p\\0\\350\\0\\0 \\2 \\230\\236\\320\"..., 8192) = 8192\n_llseek(56, 80371712, [80371712], SEEK_SET) = 0\nread(56, \"\\4\\0\\0\\0Lf \\217\\1\\0\\0\\0p\\0\\f\\1\\0 \\2 \\250\\236\\260\\2T\\235\"..., 8192) = 8192\nread(102, \"\\2\\0\\34\\0001\\236\\0\\0\\1\\0\\0\\0\\t\\0\\0\\00020670\\0\\0\\0B\\6\\0\"..., 8192) = 8192\n_llseek(55, 857939968, [857939968], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\244\\344\\320|\\1\\0\\0\\0l\\0\\230\\1\\0 \\2 \\244\\236\\270\"..., 8192) = 8192\n_llseek(55, 857923584, [857923584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\224\\242\\320|\\1\\0\\0\\0p\\0|\\0\\0 \\2 \\234\\236\\310\\002\"..., 8192) = 8192\n_llseek(55, 57270272, [57270272], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\3204FK\\1\\0\\0\\0t\\0\\340\\0\\0 \\2 \\310\\236j\\2\\214\\235\"..., 8192) = 8192\n_llseek(55, 870727680, [870727680], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x>\\233}\\1\\0\\0\\0p\\0@\\1\\0 \\2 \\250\\236\\260\\2X\\235\"..., 8192) = 8192\n_llseek(55, 1014734848, [1014734848], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\34\\354\\201\\206\\1\\0\\0\\0p\\0p\\0\\0 \\2 \\264\\236\\230\"..., 8192) = 8192\n_llseek(55, 857874432, [857874432], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\214\\331\\317|\\1\\0\\0\\0l\\0\\324\\1\\0 \\2 \\224\\236\\330\"..., 8192) = 8192\n_llseek(55, 760872960, [760872960], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\30\\257\\321v\\1\\0\\0\\0p\\0\\230\\0\\0 \\2 \\234\\236\\310\"..., 8192) = 8192\n_llseek(55, 891715584, [891715584], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\370\\220\\347~\\1\\0\\0\\0p\\0P\\1\\0 \\2 \\230\\236\\320\\2\"..., 8192) = 8192\n_llseek(55, 857858048, [857858048], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0\\250\\227\\317|\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\254\\236\\250\"..., 8192) = 8192\n_llseek(55, 666910720, [666910720], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0x\\206\\3q\\1\\0\\0\\0p\\0004\\1\\0 \\2 \\254\\236\\242\\2P\\235\"..., 8192) = 8192\n_llseek(55, 857841664, [857841664], SEEK_SET) = 0\nread(55, \"\\4\\0\\0\\0dT\\317|\\1\\0\\0\\0p\\0\\224\\0\\0 \\2 \\214\\236\\350\\2\\30\"..., 8192) = 8192\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI 
8192\n\n\n\n\n\n/fontfamily>", "msg_date": "Tue, 30 Aug 2005 14:46:10 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load and iowait but no disk access" }, { "msg_contents": "\nOn 30-Aug-05, at 14:46, Anjan Dave wrote:\n\n> I have seen references of changing the kernel io scheduler at boot \n> time…not sure if it applies to RHEL3.0, or will help, but try setting \n> ‘elevator=deadline’ during boot time or via grub.conf.\nThat's only for RHEL 4.0.\n\n> Have you tried running a simple ‘dd’ on the LUN?\nWe get amazing performance using dd.\n> The drives are in RAID10 configuration, right?\nNetApp has their own type of raid format (RAID4 aka WAFL)\n\nRémy\n>  \n> Thanks,\n> Anjan\n>\n> From: Woody Woodring [mailto:[email protected]]\n> Sent: Tuesday, August 30, 2005 2:30 PM\n> To: 'Rémy Beaumont'; [email protected]\n> Subject: Re: [PERFORM] High load and iowait but no disk access\n>  \n> Have you tried a different kernel?  We run with a netapp over NFS \n> without any issues, but we have seen high IO-wait on other Dell boxes \n> (running  and not running postgres) and RHES 3.  We have replaced a \n> Dell PowerEdge 350 running RH 7.3  with a PE750 with more memory \n> running RHES3 and it be bogged down with IO waits due to syslog \n> messages writing to the disk, the old slower server could handle it \n> fine.  I don't know if it is a Dell thing or a RH kernel, but we try \n> different kernels on our boxes to try to find one that works better.  \n> We have not found one that stands out over another consistently but we \n> have been moving away from Update 2 kernel (2.4.21-15.ELsmp) due to \n> server lockup issues.  Unfortunately we get the best disk throughput \n> on our few remaining 7.3 boxes.\n>  \n> Woody\n>  \n> IGLASS Networks\n> www.iglass.net\n>  \n>\n>\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Rémy \n> Beaumont\n> Sent: Monday, August 29, 2005 9:43 AM\n> To: [email protected]\n> Subject: [PERFORM] High load and iowait but no disk access\n> We have been trying to pinpoint what originally seem to be a I/O \n> bottleneck but which now seems to be an issue with either Postgresql \n> or RHES 3.\n>\n> We have the following test environment on which we can reproduce the \n> problem:\n>\n> 1) Test System A\n> Dell 6650 Quad Xeon Pentium 4\n> 8 Gig of RAM\n> OS: RHES 3 update 2\n> Storage: NetApp FAS270 connected using an FC card using 10 disks\n>\n> 2) Test System B\n> Dell Dual Xeon Pentium III\n> 2 Gig o RAM\n> OS: RHES 3 update 2\n> Storage: NetApp FAS920 connected using an FC card using 28 disks\n>\n> Our Database size is around 30G.\n>\n> The behavior we see is that when running queries that do random reads \n> on disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a \n> throughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 \n> kB/s on sequential read operations on the netapps)\n>\n> The stats of the NetApp do confirm that it is sitting idle. 
Doing an \n> strace on the Postgresql process shows that is it doing seeks and \n> reads.\n>\n> So my question is where is this iowait time spent ?\n> Is there a way to pinpoint the problem in more details ?\n> We are able to reproduce this behavior with Postgresql 7.4.8 and 8.0.3\n>\n> I have included the output of top,vmstat,strace and systat from the \n> Netapp from System B while running a single query that generates this \n> behavior.\n>\n> Rémy\n>\n> top output:\n> 06:27:28 up 5 days, 16:59, 6 users, load average: 1.04, 1.30, 1.01\n> 72 processes: 71 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 2.7% 0.0% 1.0% 0.1% 0.2% 46.0% 49.5%\n> cpu00 0.2% 0.0% 0.2% 0.0% 0.2% 2.2% 97.2%\n> cpu01 5.3% 0.0% 1.9% 0.3% 0.3% 89.8% 1.9%\n> Mem: 2061696k av, 2043936k used, 17760k free, 0k shrd, 3916k buff\n> 1566332k actv, 296648k in_d, 30504k in_c\n> Swap: 16771584k av, 21552k used, 16750032k free 1933772k cached\n>\n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n> 30960 postgres 15 0 13424 10M 9908 D 2.7 0.5 2:00 1 postmaster\n> 30538 root 15 0 1080 764 524 S 0.7 0.0 0:43 0 sshd\n> 1 root 15 0 496 456 436 S 0.0 0.0 0:08 0 init\n> 2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0\n> 3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1\n> 4 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 keventd\n> 5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0\n> 6 root 34 19 0 0 0 SWN 0.0 0.0 0:00 1 ksoftirqd/1\n> 9 root 15 0 0 0 0 SW 0.0 0.0 0:24 1 bdflush\n> 7 root 15 0 0 0 0 SW 0.0 0.0 6:53 1 kswapd\n> 8 root 15 0 0 0 0 SW 0.0 0.0 8:44 1 kscand\n> 10 root 15 0 0 0 0 SW 0.0 0.0 0:13 0 kupdated\n> 11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd\n> 17 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 ahc_dv_0\n>\n>\n> vmstat output\n> procs memory swap io system cpu\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 21552 17796 4872 1931928 2 3 3 1 27 6 2 1 7 3\n> 0 1 21552 18044 4880 1931652 0 0 1652 0 397 512 1 2 50 47\n> 0 1 21552 17976 4896 1931664 0 0 2468 0 407 552 2 2 50 47\n> 1 0 21552 17984 4896 1931608 0 0 2124 0 418 538 3 3 48 46\n> 0 1 21552 18028 4900 1931536 0 0 1592 0 385 509 1 3 50 46\n> 0 1 21552 18040 4916 1931488 0 0 1620 820 419 581 2 2 50 46\n> 0 1 21552 17968 4916 1931536 0 4 1708 4 402 554 3 1 50 46\n> 1 1 21552 18052 4916 1931388 0 0 1772 0 409 531 3 1 49 47\n> 0 1 21552 17912 4924 1931492 0 0 1772 0 408 565 3 1 48 48\n> 0 1 21552 17932 4932 1931440 0 4 1356 4 391 545 5 0 49 46\n> 0 1 21552 18320 4944 1931016 0 4 1500 840 414 571 1 1 48 50\n> 0 1 21552 17872 4944 1931440 0 0 2116 0 392 496 1 5 46 48\n> 0 1 21552 18060 4944 1931232 0 0 2232 0 423 597 1 2 48 49\n> 1 1 21552 17684 4944 1931584 0 0 1752 0 395 537 1 1 50 48\n> 0 1 21552 18000 4944 1931240 0 0 1576 0 401 549 0 1 50 49\n>\n>\n> NetApp stats:\n> CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP \n> CP Disk DAFS FCP iSCSI FCP kB/s\n> in out read write read write age hit time ty util in out\n> 2% 0 0 0 139 0 0 2788 0 0 0 3 96% 0% - 15% 0 139 0 3 2277\n> 2% 0 0 0 144 0 0 2504 0 0 0 3 96% 0% - 18% 0 144 0 3 2150\n> 2% 0 0 0 130 0 0 2212 0 0 0 3 96% 0% - 13% 0 130 0 3 1879\n> 3% 0 0 0 169 0 0 2937 80 0 0 3 96% 0% - 13% 0 169 0 4 2718\n> 2% 0 0 0 139 0 0 2448 0 0 0 3 96% 0% - 12% 0 139 0 3 2096\n> 2% 0 0 0 137 0 0 2116 0 0 0 3 96% 0% - 10% 0 137 0 3 1892\n> 3% 0 0 0 107 0 0 2660 812 0 0 3 96% 24% T 20% 0 107 0 3 1739\n> 2% 0 0 0 118 0 0 1788 0 0 0 3 96% 0% - 13% 0 118 0 3 1608\n> 2% 0 0 0 136 0 0 2228 0 0 0 3 96% 0% - 11% 0 136 0 3 2018\n> 2% 0 0 0 119 0 0 
1940 0 0 0 3 96% 0% - 13% 0 119 0 3 1998\n> 2% 0 0 0 136 0 0 2175 0 0 0 3 96% 0% - 14% 0 136 0 3 1929\n> 2% 0 0 0 133 0 0 1924 0 0 0 3 96% 0% - 19% 0 133 0 3 2292\n> 2% 0 0 0 115 0 0 2044 0 0 0 3 96% 0% - 11% 0 115 0 3 1682\n> 2% 0 0 0 134 0 0 2256 0 0 0 3 96% 0% - 12% 0 134 0 3 2096\n> 2% 0 0 0 112 0 0 2184 0 0 0 3 96% 0% - 12% 0 112 0 3 1633\n> 2% 0 0 0 163 0 0 2348 0 0 0 3 96% 0% - 13% 0 163 0 4 2421\n> 2% 0 0 0 120 0 0 2056 184 0 0 3 96% 8% T 14% 0 120 0 3 1703\n>\n> strace output:\n> read(55, \"\\4\\0\\0\\0\\10fm}\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\230\\236\\320\\0020\"..., \n> 8192) = 8192\n> _llseek(55, 857997312, [857997312], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\\\\\315\\321|\\1\\0\\0\\0p\\0\\354\\0\\0 \\2 \\250\\236\\260\"..., \n> 8192) = 8192\n> _llseek(55, 883220480, [883220480], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0T\\17a~\\1\\0\\0\\0p\\0\\20\\1\\0 \\2 \n> \\270\\236\\220\\2D\\235\"..., 8192) = 8192\n> _llseek(55, 858005504, [858005504], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\300\\356\\321|\\1\\0\\0\\0p\\0\\330\\0\\0 \\2 \n> \\260\\236\\240\"..., 8192) = 8192\n> _llseek(55, 857964544, [857964544], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0lH\\321|\\1\\0\\0\\0p\\0<\\1\\0 \\2 \\300\\236\\200\\2p\\235\"..., \n> 8192) = 8192\n> _llseek(55, 857956352, [857956352], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0l\\'\\321|\\1\\0\\0\\0p\\0\\320\\0\\0 \\2 \n> \\260\\236\\240\\2\\\\\"..., 8192) = 8192\n> _llseek(55, 910802944, [910802944], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\10}\\25\\200\\1\\0\\0\\0l\\0\\274\\1\\0 \\2 \\250\\236\\260\"..., \n> 8192) = 8192\n> _llseek(55, 857948160, [857948160], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\370\\5\\321|\\1\\0\\0\\0p\\0\\350\\0\\0 \\2 \\230\\236\\320\"..., \n> 8192) = 8192\n> _llseek(56, 80371712, [80371712], SEEK_SET) = 0\n> read(56, \"\\4\\0\\0\\0Lf \\217\\1\\0\\0\\0p\\0\\f\\1\\0 \\2 \n> \\250\\236\\260\\2T\\235\"..., 8192) = 8192\n> read(102, \n> \"\\2\\0\\34\\0001\\236\\0\\0\\1\\0\\0\\0\\t\\0\\0\\00020670\\0\\0\\0B\\6\\0\"..., 8192) = \n> 8192\n> _llseek(55, 857939968, [857939968], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\244\\344\\320|\\1\\0\\0\\0l\\0\\230\\1\\0 \\2 \n> \\244\\236\\270\"..., 8192) = 8192\n> _llseek(55, 857923584, [857923584], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\224\\242\\320|\\1\\0\\0\\0p\\0|\\0\\0 \\2 \n> \\234\\236\\310\\002\"..., 8192) = 8192\n> _llseek(55, 57270272, [57270272], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\3204FK\\1\\0\\0\\0t\\0\\340\\0\\0 \\2 \n> \\310\\236j\\2\\214\\235\"..., 8192) = 8192\n> _llseek(55, 870727680, [870727680], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0x>\\233}\\1\\0\\0\\0p\\0@\\1\\0 \\2 \\250\\236\\260\\2X\\235\"..., \n> 8192) = 8192\n> _llseek(55, 1014734848, [1014734848], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\34\\354\\201\\206\\1\\0\\0\\0p\\0p\\0\\0 \\2 \n> \\264\\236\\230\"..., 8192) = 8192\n> _llseek(55, 857874432, [857874432], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\214\\331\\317|\\1\\0\\0\\0l\\0\\324\\1\\0 \\2 \n> \\224\\236\\330\"..., 8192) = 8192\n> _llseek(55, 760872960, [760872960], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\30\\257\\321v\\1\\0\\0\\0p\\0\\230\\0\\0 \\2 \n> \\234\\236\\310\"..., 8192) = 8192\n> _llseek(55, 891715584, [891715584], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\370\\220\\347~\\1\\0\\0\\0p\\0P\\1\\0 \\2 \n> \\230\\236\\320\\2\"..., 8192) = 8192\n> _llseek(55, 857858048, [857858048], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\250\\227\\317|\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \n> \\254\\236\\250\"..., 8192) = 8192\n> _llseek(55, 666910720, 
[666910720], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0x\\206\\3q\\1\\0\\0\\0p\\0004\\1\\0 \\2 \n> \\254\\236\\242\\2P\\235\"..., 8192) = 8192\n> _llseek(55, 857841664, [857841664], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0dT\\317|\\1\\0\\0\\0p\\0\\224\\0\\0 \\2 \n> \\214\\236\\350\\2\\30\"..., 8192) = 8192\n>\n>\n>\n\n", "msg_date": "Tue, 30 Aug 2005 14:50:35 -0400", "msg_from": "=?ISO-8859-1?Q?R=E9my_Beaumont?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access" }, { "msg_contents": "This might be optimal behavior from the hardware. Random reads are hard to\noptimize for--except if you have enough physical memory to hold the entire\ndataset. Cached reads (either in array controller or OS buffer cache) should\nreturn nearly immediately. But random reads probably aren't cached. And any\nread-ahead alogorithms or other types of performance enhancements in the\nhardware or OS go out the window--because the behavior isn't predictable.\n\nEach time a drive spindle needs to move to a new track, it requires at least a\ncouple of miliseconds. Sequential reads only require this movement\ninfrequently. But random reads may be forcing this movement for every IO operation.\n\nSince the bottleneck in random reads is the physical hard drives themselves,\neverything else stands around waiting. Fancy hardware can optimize everything\nelse -- writes with write cache, sequential reads with read-ahead and read\ncache. But there's no real solution to a purely random read workload except\nperhaps creating different disk groups to help avoid spindle contention.\n\nI like this tool: http://www.soliddata.com/products/iotest.html\nIt allows you to select pure workloads (read/write/sequential/random), and it\nruns against raw devices, so you bypass the OS buffer cache. When I've run it\nI've always seen sequential activity get much much higher throughput than random.\n\nQuoting Anjan Dave <[email protected]>:\n\n> I have seen references of changing the kernel io scheduler at boot time...not\n> sure if it applies to RHEL3.0, or will help, but try setting\n> 'elevator=deadline' during boot time or via grub.conf. Have you tried running\n> a simple 'dd' on the LUN? The drives are in RAID10 configuration, right?\n> \n> \n> \n> Thanks,\n> \n> Anjan\n> \n> _____ \n> \n> From: Woody Woodring [mailto:[email protected]] \n> Sent: Tuesday, August 30, 2005 2:30 PM\n> To: 'R�my Beaumont'; [email protected]\n> Subject: Re: [PERFORM] High load and iowait but no disk access\n> \n> \n> \n> Have you tried a different kernel? We run with a netapp over NFS without any\n> issues, but we have seen high IO-wait on other Dell boxes (running and not\n> running postgres) and RHES 3. We have replaced a Dell PowerEdge 350 running\n> RH 7.3 with a PE750 with more memory running RHES3 and it be bogged down\n> with IO waits due to syslog messages writing to the disk, the old slower\n> server could handle it fine. I don't know if it is a Dell thing or a RH\n> kernel, but we try different kernels on our boxes to try to find one that\n> works better. We have not found one that stands out over another\n> consistently but we have been moving away from Update 2 kernel\n> (2.4.21-15.ELsmp) due to server lockup issues. 
Unfortunately we get the best\n> disk throughput on our few remaining 7.3 boxes.\n> \n> \n> \n> Woody\n> \n> \n> \n> IGLASS Networks\n> \n> www.iglass.net\n> \n> \n> \n> _____ \n> \n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of R�my Beaumont\n> Sent: Monday, August 29, 2005 9:43 AM\n> To: [email protected]\n> Subject: [PERFORM] High load and iowait but no disk access\n> \n> We have been trying to pinpoint what originally seem to be a I/O bottleneck\n> but which now seems to be an issue with either Postgresql or RHES 3.\n> \n> We have the following test environment on which we can reproduce the\n> problem:\n> \n> 1) Test System A\n> Dell 6650 Quad Xeon Pentium 4\n> 8 Gig of RAM\n> OS: RHES 3 update 2\n> Storage: NetApp FAS270 connected using an FC card using 10 disks\n> \n> 2) Test System B\n> Dell Dual Xeon Pentium III\n> 2 Gig o RAM\n> OS: RHES 3 update 2\n> Storage: NetApp FAS920 connected using an FC card using 28 disks\n> \n> Our Database size is around 30G. \n> \n> The behavior we see is that when running queries that do random reads on\n> disk, IOWAIT goes over 80% and actual disk IO falls to a crawl at a\n> throughput bellow 3000kB/s (We usually average 40000 kB/s to 80000 kB/s on\n> sequential read operations on the netapps)\n> \n> The stats of the NetApp do confirm that it is sitting idle. Doing an strace\n> on the Postgresql process shows that is it doing seeks and reads.\n> \n> So my question is where is this iowait time spent ?\n> Is there a way to pinpoint the problem in more details ?\n> We are able to reproduce this behavior with Postgresql 7.4.8 and 8.0.3\n> \n> I have included the output of top,vmstat,strace and systat from the Netapp\n> from System B while running a single query that generates this behavior.\n> \n> R�my\n> \n> top output:\n> 06:27:28 up 5 days, 16:59, 6 users, load average: 1.04, 1.30, 1.01\n> 72 processes: 71 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 2.7% 0.0% 1.0% 0.1% 0.2% 46.0% 49.5%\n> cpu00 0.2% 0.0% 0.2% 0.0% 0.2% 2.2% 97.2%\n> cpu01 5.3% 0.0% 1.9% 0.3% 0.3% 89.8% 1.9%\n> Mem: 2061696k av, 2043936k used, 17760k free, 0k shrd, 3916k buff\n> 1566332k actv, 296648k in_d, 30504k in_c\n> Swap: 16771584k av, 21552k used, 16750032k free 1933772k cached\n> \n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n> 30960 postgres 15 0 13424 10M 9908 D 2.7 0.5 2:00 1 postmaster\n> 30538 root 15 0 1080 764 524 S 0.7 0.0 0:43 0 sshd\n> 1 root 15 0 496 456 436 S 0.0 0.0 0:08 0 init\n> 2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0\n> 3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1\n> 4 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 keventd\n> 5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0\n> 6 root 34 19 0 0 0 SWN 0.0 0.0 0:00 1 ksoftirqd/1\n> 9 root 15 0 0 0 0 SW 0.0 0.0 0:24 1 bdflush\n> 7 root 15 0 0 0 0 SW 0.0 0.0 6:53 1 kswapd\n> 8 root 15 0 0 0 0 SW 0.0 0.0 8:44 1 kscand\n> 10 root 15 0 0 0 0 SW 0.0 0.0 0:13 0 kupdated\n> 11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd\n> 17 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 ahc_dv_0\n> \n> \n> vmstat output \n> procs memory swap io system cpu\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 21552 17796 4872 1931928 2 3 3 1 27 6 2 1 7 3\n> 0 1 21552 18044 4880 1931652 0 0 1652 0 397 512 1 2 50 47\n> 0 1 21552 17976 4896 1931664 0 0 2468 0 407 552 2 2 50 47\n> 1 0 21552 17984 4896 1931608 0 0 2124 0 418 538 3 3 48 46\n> 0 1 21552 18028 4900 1931536 0 0 1592 0 385 509 1 3 50 46\n> 0 1 21552 18040 4916 1931488 0 0 
1620 820 419 581 2 2 50 46\n> 0 1 21552 17968 4916 1931536 0 4 1708 4 402 554 3 1 50 46\n> 1 1 21552 18052 4916 1931388 0 0 1772 0 409 531 3 1 49 47\n> 0 1 21552 17912 4924 1931492 0 0 1772 0 408 565 3 1 48 48\n> 0 1 21552 17932 4932 1931440 0 4 1356 4 391 545 5 0 49 46\n> 0 1 21552 18320 4944 1931016 0 4 1500 840 414 571 1 1 48 50\n> 0 1 21552 17872 4944 1931440 0 0 2116 0 392 496 1 5 46 48\n> 0 1 21552 18060 4944 1931232 0 0 2232 0 423 597 1 2 48 49\n> 1 1 21552 17684 4944 1931584 0 0 1752 0 395 537 1 1 50 48\n> 0 1 21552 18000 4944 1931240 0 0 1576 0 401 549 0 1 50 49\n> \n> \n> NetApp stats:\n> CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk\n> DAFS FCP iSCSI FCP kB/s\n> in out read write read write age hit time ty util in out\n> 2% 0 0 0 139 0 0 2788 0 0 0 3 96% 0% - 15% 0 139 0 3 2277\n> 2% 0 0 0 144 0 0 2504 0 0 0 3 96% 0% - 18% 0 144 0 3 2150\n> 2% 0 0 0 130 0 0 2212 0 0 0 3 96% 0% - 13% 0 130 0 3 1879\n> 3% 0 0 0 169 0 0 2937 80 0 0 3 96% 0% - 13% 0 169 0 4 2718\n> 2% 0 0 0 139 0 0 2448 0 0 0 3 96% 0% - 12% 0 139 0 3 2096\n> 2% 0 0 0 137 0 0 2116 0 0 0 3 96% 0% - 10% 0 137 0 3 1892\n> 3% 0 0 0 107 0 0 2660 812 0 0 3 96% 24% T 20% 0 107 0 3 1739\n> 2% 0 0 0 118 0 0 1788 0 0 0 3 96% 0% - 13% 0 118 0 3 1608\n> 2% 0 0 0 136 0 0 2228 0 0 0 3 96% 0% - 11% 0 136 0 3 2018\n> 2% 0 0 0 119 0 0 1940 0 0 0 3 96% 0% - 13% 0 119 0 3 1998\n> 2% 0 0 0 136 0 0 2175 0 0 0 3 96% 0% - 14% 0 136 0 3 1929\n> 2% 0 0 0 133 0 0 1924 0 0 0 3 96% 0% - 19% 0 133 0 3 2292\n> 2% 0 0 0 115 0 0 2044 0 0 0 3 96% 0% - 11% 0 115 0 3 1682\n> 2% 0 0 0 134 0 0 2256 0 0 0 3 96% 0% - 12% 0 134 0 3 2096\n> 2% 0 0 0 112 0 0 2184 0 0 0 3 96% 0% - 12% 0 112 0 3 1633\n> 2% 0 0 0 163 0 0 2348 0 0 0 3 96% 0% - 13% 0 163 0 4 2421\n> 2% 0 0 0 120 0 0 2056 184 0 0 3 96% 8% T 14% 0 120 0 3 1703\n> \n> strace output:\n> read(55, \"\\4\\0\\0\\0\\10fm}\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\230\\236\\320\\0020\"..., 8192) =\n> 8192\n> _llseek(55, 857997312, [857997312], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\\\\\315\\321|\\1\\0\\0\\0p\\0\\354\\0\\0 \\2 \\250\\236\\260\"..., 8192) =\n> 8192\n> _llseek(55, 883220480, [883220480], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0T\\17a~\\1\\0\\0\\0p\\0\\20\\1\\0 \\2 \\270\\236\\220\\2D\\235\"..., 8192)\n> = 8192\n> _llseek(55, 858005504, [858005504], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\300\\356\\321|\\1\\0\\0\\0p\\0\\330\\0\\0 \\2 \\260\\236\\240\"..., 8192)\n> = 8192\n> _llseek(55, 857964544, [857964544], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0lH\\321|\\1\\0\\0\\0p\\0<\\1\\0 \\2 \\300\\236\\200\\2p\\235\"..., 8192) =\n> 8192\n> _llseek(55, 857956352, [857956352], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0l\\'\\321|\\1\\0\\0\\0p\\0\\320\\0\\0 \\2 \\260\\236\\240\\2\\\\\"..., 8192)\n> = 8192\n> _llseek(55, 910802944, [910802944], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\10}\\25\\200\\1\\0\\0\\0l\\0\\274\\1\\0 \\2 \\250\\236\\260\"..., 8192) =\n> 8192\n> _llseek(55, 857948160, [857948160], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\370\\5\\321|\\1\\0\\0\\0p\\0\\350\\0\\0 \\2 \\230\\236\\320\"..., 8192) =\n> 8192\n> _llseek(56, 80371712, [80371712], SEEK_SET) = 0\n> read(56, \"\\4\\0\\0\\0Lf \\217\\1\\0\\0\\0p\\0\\f\\1\\0 \\2 \\250\\236\\260\\2T\\235\"..., 8192)\n> = 8192\n> read(102, \"\\2\\0\\34\\0001\\236\\0\\0\\1\\0\\0\\0\\t\\0\\0\\00020670\\0\\0\\0B\\6\\0\"..., 8192)\n> = 8192\n> _llseek(55, 857939968, [857939968], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\244\\344\\320|\\1\\0\\0\\0l\\0\\230\\1\\0 \\2 \\244\\236\\270\"..., 8192)\n> = 8192\n> _llseek(55, 857923584, 
[857923584], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\224\\242\\320|\\1\\0\\0\\0p\\0|\\0\\0 \\2 \\234\\236\\310\\002\"...,\n> 8192) = 8192\n> _llseek(55, 57270272, [57270272], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\3204FK\\1\\0\\0\\0t\\0\\340\\0\\0 \\2 \\310\\236j\\2\\214\\235\"...,\n> 8192) = 8192\n> _llseek(55, 870727680, [870727680], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0x>\\233}\\1\\0\\0\\0p\\0@\\1\\0 \\2 \\250\\236\\260\\2X\\235\"..., 8192) =\n> 8192\n> _llseek(55, 1014734848, [1014734848], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\34\\354\\201\\206\\1\\0\\0\\0p\\0p\\0\\0 \\2 \\264\\236\\230\"..., 8192)\n> = 8192\n> _llseek(55, 857874432, [857874432], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\214\\331\\317|\\1\\0\\0\\0l\\0\\324\\1\\0 \\2 \\224\\236\\330\"..., 8192)\n> = 8192\n> _llseek(55, 760872960, [760872960], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\30\\257\\321v\\1\\0\\0\\0p\\0\\230\\0\\0 \\2 \\234\\236\\310\"..., 8192)\n> = 8192\n> _llseek(55, 891715584, [891715584], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\370\\220\\347~\\1\\0\\0\\0p\\0P\\1\\0 \\2 \\230\\236\\320\\2\"..., 8192)\n> = 8192\n> _llseek(55, 857858048, [857858048], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0\\250\\227\\317|\\1\\0\\0\\0p\\0\\264\\0\\0 \\2 \\254\\236\\250\"..., 8192)\n> = 8192\n> _llseek(55, 666910720, [666910720], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0x\\206\\3q\\1\\0\\0\\0p\\0004\\1\\0 \\2 \\254\\236\\242\\2P\\235\"...,\n> 8192) = 8192\n> _llseek(55, 857841664, [857841664], SEEK_SET) = 0\n> read(55, \"\\4\\0\\0\\0dT\\317|\\1\\0\\0\\0p\\0\\224\\0\\0 \\2 \\214\\236\\350\\2\\30\"..., 8192)\n> = 8192\n> \n> \n> \n> \n> \n> \n\n\n", "msg_date": "Tue, 30 Aug 2005 12:34:40 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: High load and iowait but no disk access" } ]
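The thread above concentrates on the OS and storage side. As a purely illustrative database-side sketch, not something proposed by the posters: when a workload is dominated by seek-bound random reads driven by one index, physically rewriting the table in that index's order can turn many of those reads into sequential ones. Table, column, and index names here are placeholders; CLUSTER takes an exclusive lock and has to be repeated as the table churns.

-- confirm the slow queries are doing scattered index fetches
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE customer_id = 42;

-- rewrite the table in index order (7.4/8.0 syntax: CLUSTER indexname ON tablename)
CLUSTER big_table_customer_id_idx ON big_table;
ANALYZE big_table;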
[ { "msg_contents": "We have a highly active table that has virtually all\nentries updated every 5 minutes. Typical size of the\ntable is 50,000 entries, and entries have grown fat.\n\nWe are currently vaccuming hourly, and towards the end\nof the hour we are seeing degradation, when compared\nto the top of the hour.\n\nVaccum is slowly killing our system, as it is starting\nto take up to 10 minutes, and load at the time of\nvacuum is 6+ on a Linux box. During the vacuum,\noverall system is goin unresponsive, then comes back\nonce vacuum completes.\n\nIf we run vacuum less frequently, degradation\ncontinues to the point that we can't keep up with the\nthroughput, plus vacuum takes longer anyway.\n\nBecoming quite a pickle:-)\n\nWe are thinking of splitting the table in two: the\npart the updates often, and the part the updates\ninfrequently as we suspect that record size impacts\nvacuum.\n\nAny ideas?\n\n\nThanks,\nMark\n\n-----------------\n", "msg_date": "Tue, 30 Aug 2005 14:13:55 -0700 (PDT)", "msg_from": "Markus Benne <[email protected]>", "msg_from_op": true, "msg_subject": "When to do a vacuum for highly active table" }, { "msg_contents": "\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Markus Benne\n> Sent: Wednesday, August 31, 2005 12:14 AM\n> To: [email protected]\n> Subject: [PERFORM] When to do a vacuum for highly active table\n> \n> We have a highly active table that has virtually all\n> entries updated every 5 minutes. Typical size of the\n> table is 50,000 entries, and entries have grown fat.\n> \n> We are currently vaccuming hourly, and towards the end\n> of the hour we are seeing degradation, when compared\n> to the top of the hour.\n> \n> Vaccum is slowly killing our system, as it is starting\n> to take up to 10 minutes, and load at the time of\n> vacuum is 6+ on a Linux box. During the vacuum,\n> overall system is goin unresponsive, then comes back\n> once vacuum completes.\n\nPlay with vacuum_cost_delay option. In our case it made BIG difference\n(going from very heavy hitting to almost unnoticed vacuuming.)\n\nHope it helps.\n\nRigmor Ukuhe\n\n> \n> If we run vacuum less frequently, degradation\n> continues to the point that we can't keep up with the\n> throughput, plus vacuum takes longer anyway.\n> \n> Becoming quite a pickle:-)\n> \n> We are thinking of splitting the table in two: the\n> part the updates often, and the part the updates\n> infrequently as we suspect that record size impacts\n> vacuum.\n> \n> Any ideas?\n> \n> \n> Thanks,\n> Mark\n> \n> -----------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 31 Aug 2005 00:25:44 +0300", "msg_from": "\"Rigmor Ukuhe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table" }, { "msg_contents": "Markus Benne <[email protected]> writes:\n> We have a highly active table that has virtually all\n> entries updated every 5 minutes. Typical size of the\n> table is 50,000 entries, and entries have grown fat.\n\n> We are currently vaccuming hourly, and towards the end\n> of the hour we are seeing degradation, when compared\n> to the top of the hour.\n\nOn something like this, you really need to be vacuuming more often\nnot less so; I'd think about how to do it every five or ten minutes \nrather than backing off. 
With only 50K rows it should really not take\nmore than a couple of seconds to do the vacuum. When you wait till\nthere are 600K dead rows, it's going to take awhile, plus you are\nsuffering across-the-board performance degradation from all the dead\nrows.\n\nIf you are using PG 8.0, there are some \"vacuum cost\" knobs you can\nfiddle with to slow down vacuum so it doesn't impose as much I/O load.\nIdeally you could get it to where you could run vacuum as often as\nyou need to without noticing much impact on foreground processing.\n\nIf you're not using 8.0 ... maybe it's time to update.\n\nAnother thing you might want to do is look at \"vacuum verbose\" output,\nwhich will give you some idea of the time spent in each step. It might\nbe there are specific aspects that could be improved.\n\n> We are thinking of splitting the table in two: the\n> part the updates often, and the part the updates\n> infrequently as we suspect that record size impacts\n> vacuum.\n\nYou just said that virtually all rows update constantly --- where's\nthe \"infrequent\" part?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Aug 2005 17:29:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table " }, { "msg_contents": "On Tue, Aug 30, 2005 at 05:29:17PM -0400, Tom Lane wrote:\n> Markus Benne <[email protected]> writes:\n> > We have a highly active table that has virtually all\n> > entries updated every 5 minutes. Typical size of the\n> > table is 50,000 entries, and entries have grown fat.\n> ...\n> > We are thinking of splitting the table in two: the\n> > part the updates often, and the part the updates\n> > infrequently as we suspect that record size impacts\n> > vacuum.\n> You just said that virtually all rows update constantly --- where's\n> the \"infrequent\" part?\n\nI think he means splitting it vertically, instead of horizontally, and\nit sounds like an excellent idea, if a large enough portion of each\nrecord is in fact mostly fixed. Otherwise, PostgreSQL is copying data\nmultiple times, only to have the data expire as part of a dead row.\n\nI've already started to notice such issues with postgresql - but more\nbecause I'm using low-end hardware, and I'm projecting the effect for\nwhen our database becomes much larger with much higher demand on the\ndatabase.\n\nThis is the sort of scenario where a database without transactional\nintegrity would significantly out-perform one designed around it. If\nrecords are fixed sized, and updated in place, these problems would\noccur far less often. Is it heresy to suggest MySQL in here? :-)\n\nI switched from MySQL to PostgreSQL several months ago, and haven't\nlooked back - but they do work differently, and for certain uses, one\ncan destroy the other. Using a MyISAM table would be the way I would\ngo with this sort of problem.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. 
|__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 30 Aug 2005 18:05:03 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table" }, { "msg_contents": "[email protected] (Markus Benne) writes:\n> We have a highly active table that has virtually all\n> entries updated every 5 minutes. Typical size of the\n> table is 50,000 entries, and entries have grown fat.\n>\n> We are currently vaccuming hourly, and towards the end\n> of the hour we are seeing degradation, when compared\n> to the top of the hour.\n\nYou're not vacuuming the table nearly often enough.\n\nYou should vacuum this table every five minutes, and possibly more\noften than that.\n\n[We have some tables like that, albeit smaller than 50K entries, which\nwe vacuum once per minute in production...]\n\n> We are thinking of splitting the table in two: the part the updates\n> often, and the part the updates infrequently as we suspect that\n> record size impacts vacuum.\n\nThere's *some* merit to that.\n\nYou might discover that there's a \"hot spot\" that needs to be vacuumed\nonce per minute.\n\nBut it may be simpler to just hit the table with a vacuum once every\nfew minutes even though some tuples are seldom updated.\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/spreadsheets.html\nSigns of a Klingon Programmer #3: \"By filing this TPR you have\nchallenged the honor of my family. Prepare to die!\"\n", "msg_date": "Tue, 30 Aug 2005 18:05:38 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table" }, { "msg_contents": "[email protected] writes:\n> I think he means splitting it vertically, instead of horizontally, and\n> it sounds like an excellent idea, if a large enough portion of each\n> record is in fact mostly fixed. Otherwise, PostgreSQL is copying data\n> multiple times, only to have the data expire as part of a dead row.\n\nOnly up to a point. Fields that are wide enough to get toasted\nout-of-line (multiple Kb) do not get physically copied if there's\na row update that doesn't affect them. We don't really have enough\ninformation about his table to guess whether there's any point in\nmanually partitioning the columns, but my leaning would be \"probably\nnot\" --- the overhead in joining the resulting two tables would be\nhigh enough that you'd need a heck of a big improvement to justify it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Aug 2005 19:05:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table " }, { "msg_contents": "[email protected] (\"Rigmor Ukuhe\") writes:\n\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-performance-\n>> [email protected]] On Behalf Of Markus Benne\n>> Sent: Wednesday, August 31, 2005 12:14 AM\n>> To: [email protected]\n>> Subject: [PERFORM] When to do a vacuum for highly active table\n>> \n>> We have a highly active table that has virtually all\n>> entries updated every 5 minutes. 
Typical size of the\n>> table is 50,000 entries, and entries have grown fat.\n>> \n>> We are currently vaccuming hourly, and towards the end\n>> of the hour we are seeing degradation, when compared\n>> to the top of the hour.\n>> \n>> Vaccum is slowly killing our system, as it is starting\n>> to take up to 10 minutes, and load at the time of\n>> vacuum is 6+ on a Linux box. During the vacuum,\n>> overall system is goin unresponsive, then comes back\n>> once vacuum completes.\n>\n> Play with vacuum_cost_delay option. In our case it made BIG difference\n> (going from very heavy hitting to almost unnoticed vacuuming.)\n\nThat helps only if the ONLY problem you're having is from the direct\nI/O of the vacuum.\n\nIf part of the problem is that the table is so large that it takes 4h\nfor VACUUM to complete, thereby leaving a transaction open for 4h,\nthereby causing other degradations, then vacuum_cost_delay will have a\nNEGATIVE impact, as it will mean that the vacuum on that table will\ntake even /more/ than 4h. :-(\n\nFor the above scenario, it is almost certain that the solution comes\nin two pieces:\n\n1. VACUUM FULL / CLUSTER to bring the size down.\n\n The table has grown \"fat,\" and no number of repetitions of \"plain\n vacuum\" will fix this.\n\n2. Do \"plain vacuum\" on the table VASTLY more frequently, probably\n every 5 minutes, possibly more often than that.\n\n By doing this, you prevent things from getting so bad again.\n\nBy the way, in this sort of situation, _ANY_ transaction that runs\nmore than about 5 minutes represents a serious enemy to performance,\nas it will tend to cause the \"hot\" table to \"get fatter.\"\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/linux.html\nTECO Madness: a moment of regret, a lifetime of convenience.\n-- Kent Pitman\n", "msg_date": "Tue, 06 Sep 2005 08:48:56 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to do a vacuum for highly active table" } ]
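Pulling the advice in this thread together into one SQL sketch: shrink the already-fat table once, then vacuum it every few minutes, throttled with the 8.0 cost-based delay so foreground queries are not starved. The table name is a placeholder and the delay value is only a starting point; as noted above, throttling only pays off once the table is small enough that each vacuum pass finishes quickly.

-- one-time: return the fat table and its indexes to a sane size (exclusive locks)
VACUUM FULL VERBOSE hot_table;
REINDEX TABLE hot_table;

-- every few minutes, e.g. from cron via psql or vacuumdb
SET vacuum_cost_delay = 10;  -- 8.0+, in milliseconds; 0 disables the throttle
VACUUM ANALYZE hot_table;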