[
{
"msg_contents": "This question is related to the thread:\nhttp://archives.postgresql.org/pgsql-performance/2006-08/msg00152.php\nbut I had some questions.\n\nI am looking at setting up two general-purpose database servers,\nreplicated with Slony. Each server I'm looking at has the following\nspecs:\n\nDell PowerEdge 2950\n- 2 x Dual Core Intel® Xeon® 5130, 4MB Cache, 2.00GHz, 1333MHZ FSB\n- 4GB RAM\n- PERC 5/i, x6 Backplane, Integrated Controller Card (256MB battery-\nbacked cache)\n- 6 x 73GB, SAS, 3.5-inch, 15K RPM Hard Drive arranged in RAID 10\n\nThese servers are reasonably priced and so they seem like a good choice\nfor the overall price, and the above thread indicated good performance.\nHowever, I want to make sure that putting WAL in with PGDATA on the\nRAID-10 is wise. And if there are any other suggestions that would be\ngreat. Is the RAID controller good? Are the processors good for database\nwork or are Opterons significantly better?\n\nI may go for more storage as well (i.e. getting 300GB disks), but I am\nstill determining the potential need for storage. I can get more RAM at\na later date if necessary also.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 22 Aug 2006 14:34:08 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "PowerEdge 2950 questions"
},
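On the WAL placement question: with a single six-disk array and a battery-backed write-back cache, keeping pg_xlog inside PGDATA is the usual setup, and the replies below bear that out. If a separate volume is added later, the WAL directory can be relocated with a symlink while the server is stopped. A minimal sketch, assuming the data directory is /var/lib/pgsql/data and the new volume is mounted at /wal:

    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /wal/pg_xlog
    ln -s /wal/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start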
{
"msg_contents": "Hi Jeff,\n\nMy experience with the 2950 seemed to indicate that RAID10x6 disks did\nnot perform as well as RAID5x6. I believe I posted some numbers to\nillustrate this in the post you mentioned. \n\nIf I remember correctly, the numbers were pretty close, but I was\nexpecting RAID10 to significantly beat RAID5. However, with 6 disks,\nRAID5 starts performing a little better, and it also has good storage\nutilization (i.e. you're only loosing 1 disk's worth of storage, so with\n6 drives, you still have 83% - 5/6 - of your storage available, as\nopposed to 50% with RAID10). \n\nKeep in mind that with 6 disks, theoretically (your mileage may vary by\nraid controller implementation) you have more fault tolerance with\nRAID10 than with RAID5.\n\nAlso, I don't think there's a lot of performance gain to going with the\n15k drives over the 10k. Even dell only says a 10% boost. I've\nbenchmarked a single drive configuration, 10k vs 15k rpm, and yes, the\n15k had substantially better seek times, but raw io isn't much\ndifferent, so again, it depends on your application's needs.\n\nLastly, re your question on putting the WAL on the RAID10- I currently\nhave the box setup as RAID5x6 with the WAL and PGDATA all on the same\nraidset. I haven't had the chance to do extensive tests, but from\nprevious readings, I gather that if you have write-back enabled on the\nRAID, it should be ok (which it is in my case).\n\nAs to how this compares with an Opteron system, if someone has some\npgbench (or other test) suggestions and a box to compare with, I'd be\nhappy to run the same on the 2950. (The 2950 is a 2-cpu dual core 3.0\nghz box, 8GB ram with 6 disks, running FreeBSD 6.1 amd64 RELEASE if\nyou're interested in picking a \"fair\" opteron equivalent ;)\n\nThanks,\n\nBucky\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeff Davis\nSent: Tuesday, August 22, 2006 5:34 PM\nTo: [email protected]\nSubject: [PERFORM] PowerEdge 2950 questions\n\nThis question is related to the thread:\nhttp://archives.postgresql.org/pgsql-performance/2006-08/msg00152.php\nbut I had some questions.\n\nI am looking at setting up two general-purpose database servers,\nreplicated with Slony. Each server I'm looking at has the following\nspecs:\n\nDell PowerEdge 2950\n- 2 x Dual Core Intel(r) Xeon(r) 5130, 4MB Cache, 2.00GHz, 1333MHZ FSB\n- 4GB RAM\n- PERC 5/i, x6 Backplane, Integrated Controller Card (256MB battery-\nbacked cache)\n- 6 x 73GB, SAS, 3.5-inch, 15K RPM Hard Drive arranged in RAID 10\n\nThese servers are reasonably priced and so they seem like a good choice\nfor the overall price, and the above thread indicated good performance.\nHowever, I want to make sure that putting WAL in with PGDATA on the\nRAID-10 is wise. And if there are any other suggestions that would be\ngreat. Is the RAID controller good? Are the processors good for database\nwork or are Opterons significantly better?\n\nI may go for more storage as well (i.e. getting 300GB disks), but I am\nstill determining the potential need for storage. I can get more RAM at\na later date if necessary also.\n\nRegards,\n\tJeff Davis\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n",
"msg_date": "Tue, 22 Aug 2006 17:56:12 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
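The storage arithmetic Bucky describes, applied to the 6 x 73 GB configuration under discussion:

    RAID 5 : usable = (n - 1) x 73 GB = 5 x 73 GB = 365 GB  (5/6, ~83% of raw)
    RAID 10: usable = (n / 2) x 73 GB = 3 x 73 GB = 219 GB  (1/2,  50% of raw)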
{
"msg_contents": "On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> Hi Jeff,\n> \n> My experience with the 2950 seemed to indicate that RAID10x6 disks did\n> not perform as well as RAID5x6. I believe I posted some numbers to\n> illustrate this in the post you mentioned. \n> \n\nVery interesting. I always hear that people avoid RAID 5 on database\nservers, but I suppose it always depends. Is the parity calculation\nsomething that may increase commit latency vs. a RAID 10? That's\nnormally the explanation that I get.\n\n> If I remember correctly, the numbers were pretty close, but I was\n> expecting RAID10 to significantly beat RAID5. However, with 6 disks,\n> RAID5 starts performing a little better, and it also has good storage\n> utilization (i.e. you're only loosing 1 disk's worth of storage, so with\n> 6 drives, you still have 83% - 5/6 - of your storage available, as\n> opposed to 50% with RAID10). \n\nRight, RAID 5 is certainly tempting since I get so much more storage.\n\n> Keep in mind that with 6 disks, theoretically (your mileage may vary by\n> raid controller implementation) you have more fault tolerance with\n> RAID10 than with RAID5.\n\nI'll also have the Slony system, so I think my degree of safety is still\nquite high with RAID-5.\n\n> Also, I don't think there's a lot of performance gain to going with the\n> 15k drives over the 10k. Even dell only says a 10% boost. I've\n> benchmarked a single drive configuration, 10k vs 15k rpm, and yes, the\n> 15k had substantially better seek times, but raw io isn't much\n> different, so again, it depends on your application's needs.\n\nDo you think the seek time may affect transaction commit time though,\nrather than just throughput? Or does it not make much difference since\nwe have writeback?\n\n> Lastly, re your question on putting the WAL on the RAID10- I currently\n> have the box setup as RAID5x6 with the WAL and PGDATA all on the same\n> raidset. I haven't had the chance to do extensive tests, but from\n> previous readings, I gather that if you have write-back enabled on the\n> RAID, it should be ok (which it is in my case).\n\nOk, I won't worry about that then.\n\n> As to how this compares with an Opteron system, if someone has some\n> pgbench (or other test) suggestions and a box to compare with, I'd be\n> happy to run the same on the 2950. (The 2950 is a 2-cpu dual core 3.0\n> ghz box, 8GB ram with 6 disks, running FreeBSD 6.1 amd64 RELEASE if\n> you're interested in picking a \"fair\" opteron equivalent ;)\n> \n\nBased on your results, I think the Intels should be fine. Does each of\nthe cores have independent access to memory (therefore making memory\naccess more parallel)?\n\nThanks very much for the information!\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 22 Aug 2006 15:22:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On 8/22/06, Jeff Davis <[email protected]> wrote:\n> On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> Very interesting. I always hear that people avoid RAID 5 on database\n> servers, but I suppose it always depends. Is the parity calculation\n> something that may increase commit latency vs. a RAID 10? That's\n> normally the explanation that I get.\n\nit's not the parity, it's the seeking. Raid 5 gives you great\nsequential i/o but random is often not much better than a single\ndrive. Actually it's the '1' in raid 10 that plays the biggest role\nin optimizing seeks on an ideal raid controller. Calculating parity\nwas boring 20 years ago as it inolves one of the fastest operations in\ncomputing, namely xor. :)\n\n> > If I remember correctly, the numbers were pretty close, but I was\n> > expecting RAID10 to significantly beat RAID5. However, with 6 disks,\n> > RAID5 starts performing a little better, and it also has good storage\n> > utilization (i.e. you're only loosing 1 disk's worth of storage, so with\n> > 6 drives, you still have 83% - 5/6 - of your storage available, as\n> > opposed to 50% with RAID10).\n\nwith a 6 disk raid 5, you absolutely have a hot spare in the array.\nan alternative is raid 6, which is two parity drives, however there is\nnot a lot of good data on how raid 6 performs (ideally should be\nsimilar to raid 5). raid 5 is ideal for some things, for example\ndocument storage or in databases where most of the activity takes\nplace in a small portion of the disks most of the time.\n\n> Right, RAID 5 is certainly tempting since I get so much more storage.\n>\n> > Keep in mind that with 6 disks, theoretically (your mileage may vary by\n> > raid controller implementation) you have more fault tolerance with\n> > RAID10 than with RAID5.\n>\n> I'll also have the Slony system, so I think my degree of safety is still\n> quite high with RAID-5.\n>\n> > Also, I don't think there's a lot of performance gain to going with the\n> > 15k drives over the 10k. Even dell only says a 10% boost. I've\n> > benchmarked a single drive configuration, 10k vs 15k rpm, and yes, the\n> > 15k had substantially better seek times, but raw io isn't much\n> > different, so again, it depends on your application's needs.\n\nraw sequential i/o is actually not that important in many databases.\nwhile the database tries to make data transfers sequential as much as\npossbile (especially for writing), improved random performance often\ntranslates directly into database performance, especially if your\ndatabase is big.\n\n> Do you think the seek time may affect transaction commit time though,\n> rather than just throughput? Or does it not make much difference since\n> we have writeback?\n>\n> > Lastly, re your question on putting the WAL on the RAID10- I currently\n> > have the box setup as RAID5x6 with the WAL and PGDATA all on the same\n> > raidset. I haven't had the chance to do extensive tests, but from\n> > previous readings, I gather that if you have write-back enabled on the\n> > RAID, it should be ok (which it is in my case).\n\nwith 6 relatively small disks I think single raid 10 volume is the\nbest bet. however above 6 dedicated wal is usually worth considering.\n since wal storage requirements are so small, it's becoming affordable\nto look at solid state for the wal.\n\nmerlin\n",
"msg_date": "Thu, 24 Aug 2006 09:21:27 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > Very interesting. I always hear that people avoid RAID 5 on database\n> > servers, but I suppose it always depends. Is the parity calculation\n> > something that may increase commit latency vs. a RAID 10? That's\n> > normally the explanation that I get.\n> \n> it's not the parity, it's the seeking. Raid 5 gives you great\n> sequential i/o but random is often not much better than a single\n> drive. Actually it's the '1' in raid 10 that plays the biggest role\n> in optimizing seeks on an ideal raid controller. Calculating parity\n> was boring 20 years ago as it inolves one of the fastest operations in\n> computing, namely xor. :)\n> \n\nHere's the explanation I got: If you do a write on RAID 5 to something\nthat is not in the RAID controllers cache, it needs to do a read first\nin order to properly recalculate the parity for the write.\n\nHowever, I'm sure they try to avoid this by leaving the write in the\nbattery-backed cache until it's more convenient to do the read, or maybe\nuntil the rest of the stripe is written in which case it doesn't need to\ndo the read. I am not sure the actual end effect.\n\n> > > Lastly, re your question on putting the WAL on the RAID10- I currently\n> > > have the box setup as RAID5x6 with the WAL and PGDATA all on the same\n> > > raidset. I haven't had the chance to do extensive tests, but from\n> > > previous readings, I gather that if you have write-back enabled on the\n> > > RAID, it should be ok (which it is in my case).\n> \n> with 6 relatively small disks I think single raid 10 volume is the\n> best bet. however above 6 dedicated wal is usually worth considering.\n> since wal storage requirements are so small, it's becoming affordable\n> to look at solid state for the wal.\n> \n\nI've often wondered about that. To a certain degree, that's the same\neffect as just having a bigger battery-backed cache, right?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 24 Aug 2006 09:24:50 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PowerEdge 2950 questions"
},
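The read-before-write behaviour described above is the classic RAID-5 small-write penalty. For a write that covers only part of a stripe, the controller has to compute

    new_parity = old_parity XOR old_data XOR new_data

which means reading the old data block and the old parity block before writing the new data and the new parity: two reads plus two writes per logical write. The write-back cache can hide this only when it manages to collect a full stripe and flush it in one go, in which case the parity can be computed from the cached data alone.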
{
"msg_contents": "On 8/24/06, Jeff Davis <[email protected]> wrote:\n> On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> > On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > it's not the parity, it's the seeking. Raid 5 gives you great\n> > sequential i/o but random is often not much better than a single\n> > drive. Actually it's the '1' in raid 10 that plays the biggest role\n> > in optimizing seeks on an ideal raid controller. Calculating parity\n> > was boring 20 years ago as it inolves one of the fastest operations in\n> > computing, namely xor. :)\n>\n> Here's the explanation I got: If you do a write on RAID 5 to something\n> that is not in the RAID controllers cache, it needs to do a read first\n> in order to properly recalculate the parity for the write.\n\nit's worse than that. if you need to read something that is not in\nthe o/s cache, all the disks except for one need to be sent to a\nphysical location in order to get the data. Thats the basic rule with\nstriping: it optimizes for sequential i/o in expense of random i/o.\nThere are some optimizations that can help, but not much. caching by\nthe controller can increase performance on writes because it can\noptimize the movement across the disks by instituting a delay between\nthe write request and the actual write.\n\nraid 1 (or 1+x) is the opposite. It allows the drive heads to move\nindependantly on reads when combined with some smart algorithms.\nwrites however must involve all the disk heads however. Many\ncontrollers do not to seem to optimze raid 1 properly although linux\nsoftware raid seems to.\n\nA 4 disk raid 1, for example, could deliver four times the seek\nperformance which would make it feel much faster than a 4 disk raid 0\nunder certain conditions.\n\n> > with 6 relatively small disks I think single raid 10 volume is the\n> > best bet. however above 6 dedicated wal is usually worth considering.\n> > since wal storage requirements are so small, it's becoming affordable\n> > to look at solid state for the wal.\n>\n> I've often wondered about that. To a certain degree, that's the same\n> effect as just having a bigger battery-backed cache, right?\n\nyeah, if the cache was big enough to cover the volume. the wal is\nalso fairly sequenctial i/o though so I'm not sure this would help all\nthat much after thinking about it. would be an interesting test\nthough.\n\nmerlin\n",
"msg_date": "Thu, 24 Aug 2006 14:57:29 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "> I am looking at setting up two general-purpose database servers,\n> replicated with Slony. Each server I'm looking at has the following\n> specs:\n>\n> Dell PowerEdge 2950\n> - 2 x Dual Core Intel(r) Xeon(r) 5130, 4MB Cache, 2.00GHz, 1333MHZ FSB\n> - 4GB RAM\n> - PERC 5/i, x6 Backplane, Integrated Controller Card (256MB battery-\n> backed cache)\n> - 6 x 73GB, SAS, 3.5-inch, 15K RPM Hard Drive arranged in RAID 10\n\nHas anyone done any performance-comparison cpu-wise between the above\nmentioned cpu and an opteron 270/280?\n\nAlot of attention seems to be spent on the disks and the\nraid-controller which is somewhat important by itself, but this has\nbeen covered in numorous threads other places.\n\nregards\nClaus\n",
"msg_date": "Thu, 24 Aug 2006 21:27:34 +0200",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "> it's worse than that. if you need to read something that is not in\n> the o/s cache, all the disks except for one need to be sent to a\n> physical location in order to get the data. Thats the basic rule with\n> striping: it optimizes for sequential i/o in expense of random i/o.\n> There are some optimizations that can help, but not much. caching by\n> the controller can increase performance on writes because it can\n> optimize the movement across the disks by instituting a delay between\n> the write request and the actual write.\n> \n> raid 1 (or 1+x) is the opposite. It allows the drive heads to move\n> independantly on reads when combined with some smart algorithms.\n> writes however must involve all the disk heads however. Many\n> controllers do not to seem to optimze raid 1 properly although linux\n> software raid seems to.\n> \n> A 4 disk raid 1, for example, could deliver four times the seek\n> performance which would make it feel much faster than a 4 disk raid 0\n> under certain conditions.\n\nI understand random mid-sized seeks (seek to x and read 512k) being slow\non RAID5, but if the read size is small enough not to cross a stripe\nboundary, this could be optimized to only one seek on one drive. Do\nmost controllers just not do this, or is there some other reason that\nI'm not thinking of that would force all disks to seek?\n\n-- Mark\n",
"msg_date": "Thu, 24 Aug 2006 12:28:42 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:\n> On 8/24/06, Jeff Davis <[email protected]> wrote:\n> > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> > > On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > > it's not the parity, it's the seeking. Raid 5 gives you great\n> > > sequential i/o but random is often not much better than a single\n> > > drive. Actually it's the '1' in raid 10 that plays the biggest role\n> > > in optimizing seeks on an ideal raid controller. Calculating parity\n> > > was boring 20 years ago as it inolves one of the fastest operations in\n> > > computing, namely xor. :)\n> >\n> > Here's the explanation I got: If you do a write on RAID 5 to something\n> > that is not in the RAID controllers cache, it needs to do a read first\n> > in order to properly recalculate the parity for the write.\n> \n> it's worse than that. if you need to read something that is not in\n> the o/s cache, all the disks except for one need to be sent to a\n> physical location in order to get the data. \n\nUmmmm. No. Not in my experience. If you need to read something that's\nsignificantly larger than your stripe size, then yes, you'd need to do\nthat. With typical RAID 5 stripe sizes of 64k to 256k, you could read 8\nto 32 PostgreSQL 8k blocks from a single disk before having to move the\nheads on the next disk to get the next part of data. A RAID 5, being\nread, acts much like a RAID 0 with n-1 disks.\n\nIt's the writes that kill performance, since you've got to read two\ndisks and write two disks for every write, at a minimum. This is why\nsmall RAID 5 arrays bottleneck so quickly. a 4 disk RAID 4 with two\nwriting threads is likely already starting to thrash.\n\nOr did you mean something else by that?\n",
"msg_date": "Thu, 24 Aug 2006 14:38:28 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "Here's benchmarks of RAID5x4 vs RAID10x4 on a Dell Perc5/I with 300 GB\n10k RPM SAS drives. I know these are bonnie 1.9 instead of the older\nversion, but maybe it might still make for useful analysis of RAID5 vs.\nRAID10. \n\nAlso, unfortunately I don't have the exact numbers, but RAID10x6\nperformed really poorly on the sequential IO (dd) tests- worse than the\n4 disk RAID5, something around 120 MB/s. I'm currently running the\nsystem as a RAID5x6, but would like to go back and do some further\ntesting if I get the chance to tear the box down again.\n\nThese tests were run on FreeBSD 6.1 amd64 RELEASE with UFS + soft\nupdates. For comparison, the dd for RAID5x6 was 255 MB/s so I think the\nextra disks really help out with RAID5 write performance, as Scott\npointed out. (I'm using a 128k stripe size with a 256MB writeback\ncache).\n\nPersonally, I'm not yet convinced that RAID10 offers dramatically better\nperformance than RAID5 for 6 disks (at least on the Dell PERC\ncontroller), and available storgae is a significant factor for my\nparticular application. But I do feel the need to do more testing, so\nany suggestions are appreciated. (and yes, I'll be using bonnie 1.03 in\nthe future, along with pgbench).\n\n------ RAID5x4 \n# /usr/local/sbin/bonnie++ -d bonnie -s 1000:8k -u root\nVersion 1.93c ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t 1000M 587 99 158889 30 127859 32 1005 99 824399 99\n+++++ +++\nLatency 14216us 181ms 48765us 56241us 1687us\n47997us\nVersion 1.93c ------Sequential Create------ --------Random\nCreate--------\n\t -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n+++++ +++\nLatency 40365us 25us 35us 20030us 36us\n52us\n1.93c,1.93c,beast.corp.lumeta.com,1,1155204369,1000M,,587,99,158889,30,1\n27859,32,1005,99,824399,99,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,++\n+,+++++,+++,+++++,+++,+++++,+++,14216us,181ms,48765us,56241us,1687us,479\n97us,40365us,25us,35us,20030us,36us,52us\n\n# time bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)\"\n125000+0 records in\n125000+0 records out\n1024000000 bytes transferred in 6.375067 secs (160625763 bytes/sec)\n0.037u 1.669s 0:06.42 26.3% 29+211k 30+7861io 0pf+0w\n\n------ RAID10 x 4\nbash-2.05b$ bonnie++ -d bonnie -s 1000:8k\nVersion 1.93c ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t 1000M 585 99 21705 4 28560 9 1004 99 812997 98 5436\n454\nLatency 14181us 81364us 50256us 57720us 1671us\n1059ms\nVersion 1.93c ------Sequential Create------ --------Random\nCreate--------\n\t -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 4712 10 +++++ +++ +++++ +++ 4674 10 +++++ +++\n+++++ +++\nLatency 807ms 21us 36us 804ms 110us\n36us\n1.93c,1.93c,beast.corp.lumeta.com,1,1155207445,1000M,,585,99,21705,4,285\n60,9,1004,99,812997,98,5436,454,16,,,,,4712,10,+++++,+++,+++++,+++,4674,\n10,+++++,+++,+++++,+++,14181us,81364us,50256us,57720us,1671us,1059ms,807\nms,21us,36us,804ms,110us,36us\n\nbash-2.05b$ time bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k\n&& 
sync)\"\n125000+0 records in\n125000+0 records out\n1024000000 bytes transferred in 45.565848 secs (22472971 bytes/sec)\n\n- Bucky\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Thursday, August 24, 2006 3:38 PM\nTo: Merlin Moncure\nCc: Jeff Davis; Bucky Jordan; [email protected]\nSubject: Re: [PERFORM] PowerEdge 2950 questions\n\nOn Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:\n> On 8/24/06, Jeff Davis <[email protected]> wrote:\n> > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> > > On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > > it's not the parity, it's the seeking. Raid 5 gives you great\n> > > sequential i/o but random is often not much better than a single\n> > > drive. Actually it's the '1' in raid 10 that plays the biggest\nrole\n> > > in optimizing seeks on an ideal raid controller. Calculating\nparity\n> > > was boring 20 years ago as it inolves one of the fastest\noperations in\n> > > computing, namely xor. :)\n> >\n> > Here's the explanation I got: If you do a write on RAID 5 to\nsomething\n> > that is not in the RAID controllers cache, it needs to do a read\nfirst\n> > in order to properly recalculate the parity for the write.\n> \n> it's worse than that. if you need to read something that is not in\n> the o/s cache, all the disks except for one need to be sent to a\n> physical location in order to get the data. \n\nUmmmm. No. Not in my experience. If you need to read something that's\nsignificantly larger than your stripe size, then yes, you'd need to do\nthat. With typical RAID 5 stripe sizes of 64k to 256k, you could read 8\nto 32 PostgreSQL 8k blocks from a single disk before having to move the\nheads on the next disk to get the next part of data. A RAID 5, being\nread, acts much like a RAID 0 with n-1 disks.\n\nIt's the writes that kill performance, since you've got to read two\ndisks and write two disks for every write, at a minimum. This is why\nsmall RAID 5 arrays bottleneck so quickly. a 4 disk RAID 4 with two\nwriting threads is likely already starting to thrash.\n\nOr did you mean something else by that?\n",
"msg_date": "Thu, 24 Aug 2006 15:50:45 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On 8/24/06, Scott Marlowe <[email protected]> wrote:\n> On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:\n> > On 8/24/06, Jeff Davis <[email protected]> wrote:\n> > > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> > > > On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > > > it's not the parity, it's the seeking. Raid 5 gives you great\n> > > > sequential i/o but random is often not much better than a single\n> > > > drive. Actually it's the '1' in raid 10 that plays the biggest role\n> > > > in optimizing seeks on an ideal raid controller. Calculating parity\n> > > > was boring 20 years ago as it inolves one of the fastest operations in\n> > > > computing, namely xor. :)\n> > >\n> > > Here's the explanation I got: If you do a write on RAID 5 to something\n> > > that is not in the RAID controllers cache, it needs to do a read first\n> > > in order to properly recalculate the parity for the write.\n> >\n> > it's worse than that. if you need to read something that is not in\n> > the o/s cache, all the disks except for one need to be sent to a\n> > physical location in order to get the data.\n>\n> Ummmm. No. Not in my experience. If you need to read something that's\n> significantly larger than your stripe size, then yes, you'd need to do\n> that. With typical RAID 5 stripe sizes of 64k to 256k, you could read 8\n> to 32 PostgreSQL 8k blocks from a single disk before having to move the\n> heads on the next disk to get the next part of data. A RAID 5, being\n> read, acts much like a RAID 0 with n-1 disks.\n\ni just don't see raid 5 benchmarks backing that up. i know how it is\nsupposed to work on paper, but all of the raid 5 systems I work with\ndeliver lousy seek performance. here is an example from the mysql\nfolks:\nhttp://peter-zaitsev.livejournal.com/14415.html\nand another:\nhttp://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/\n\nalso, with raid 5 you are squeezed on both ends, too few disks and you\nhave an efficiency problem. too many disks and you start to get\nconcerned about mtbf and raid rebuild times.\n\n> It's the writes that kill performance, since you've got to read two\n> disks and write two disks for every write, at a minimum. This is why\n> small RAID 5 arrays bottleneck so quickly. a 4 disk RAID 4 with two\n> writing threads is likely already starting to thrash.\n>\n> Or did you mean something else by that?\n\nwell, that's correct, my point was that a 4 disk raid 1 can deliver\nmore seeks, not necessarily that it is better. as you say writes\nwould kill performance. raid 10 seems to be a good compromise. so is\nraid 6 possibly, although i dont see a lot performance data on that.\n\nmerlin\n",
"msg_date": "Thu, 24 Aug 2006 16:03:39 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On Thu, 2006-08-24 at 15:03, Merlin Moncure wrote:\n> On 8/24/06, Scott Marlowe <[email protected]> wrote:\n> > On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:\n> > > On 8/24/06, Jeff Davis <[email protected]> wrote:\n> > > > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:\n> > > > > On 8/22/06, Jeff Davis <[email protected]> wrote:\n> > > > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:\n> > > > > it's not the parity, it's the seeking. Raid 5 gives you great\n> > > > > sequential i/o but random is often not much better than a single\n> > > > > drive. Actually it's the '1' in raid 10 that plays the biggest role\n> > > > > in optimizing seeks on an ideal raid controller. Calculating parity\n> > > > > was boring 20 years ago as it inolves one of the fastest operations in\n> > > > > computing, namely xor. :)\n> > > >\n> > > > Here's the explanation I got: If you do a write on RAID 5 to something\n> > > > that is not in the RAID controllers cache, it needs to do a read first\n> > > > in order to properly recalculate the parity for the write.\n> > >\n> > > it's worse than that. if you need to read something that is not in\n> > > the o/s cache, all the disks except for one need to be sent to a\n> > > physical location in order to get the data.\n> >\n> > Ummmm. No. Not in my experience. If you need to read something that's\n> > significantly larger than your stripe size, then yes, you'd need to do\n> > that. With typical RAID 5 stripe sizes of 64k to 256k, you could read 8\n> > to 32 PostgreSQL 8k blocks from a single disk before having to move the\n> > heads on the next disk to get the next part of data. A RAID 5, being\n> > read, acts much like a RAID 0 with n-1 disks.\n> \n> i just don't see raid 5 benchmarks backing that up. i know how it is\n> supposed to work on paper, but all of the raid 5 systems I work with\n> deliver lousy seek performance. here is an example from the mysql\n> folks:\n> http://peter-zaitsev.livejournal.com/14415.html\n> and another:\n> http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/\n\nWell, I've seen VERY good numbers out or RAID 5 arrays. As long as I\nwasn't writing to them. :) \n\nTrust me though, I'm no huge fan of RAID 5. \n\n> > It's the writes that kill performance, since you've got to read two\n> > disks and write two disks for every write, at a minimum. This is why\n> > small RAID 5 arrays bottleneck so quickly. a 4 disk RAID 4 with two\n> > writing threads is likely already starting to thrash.\n> >\n> > Or did you mean something else by that?\n> \n> well, that's correct, my point was that a 4 disk raid 1 can deliver\n> more seeks, not necessarily that it is better. as you say writes\n> would kill performance. raid 10 seems to be a good compromise. so is\n> raid 6 possibly, although i dont see a lot performance data on that.\n\nYeah, I think RAID 10, in this modern day of large, inexpensive hard\ndrives, is the way to go for most transactional / heavily written\nsystems.\n\nI'm not sure RAID-6 is worth the effort. For smaller arrays (4 to 6),\nyou've got about as many \"extra\" drives as in RAID 1+0. And that old\nread twice write twice penalty becomes read twice (or is that thrice???)\nand write thrice. So, you'd chew up your iface bandwidth quicker. \nAlthough in SAS / SATA I guess that part's not a big deal, the data has\nto be moved around somewhere on the card / in the controller chips, so\nit's still a problem somewhere waiting to happen in terms of bandwidth.\n",
"msg_date": "Thu, 24 Aug 2006 15:22:21 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
},
{
"msg_contents": "On 8/24/06, Bucky Jordan <[email protected]> wrote:\n> Here's benchmarks of RAID5x4 vs RAID10x4 on a Dell Perc5/I with 300 GB\n> 10k RPM SAS drives. I know these are bonnie 1.9 instead of the older\n> version, but maybe it might still make for useful analysis of RAID5 vs.\n> RAID10.\n\n> ------ RAID5x4\ni dont see the seeks here, am i missing something?\n\n[raid 10 dd]\n> 1024000000 bytes transferred in 45.565848 secs (22472971 bytes/sec)\n\nouch. this is a problem with the controller. it should be higher than\nthis but the raid 5 should edge it out regardless. try configuring\nthe hardware as a jbod and doing the raid 10 in software.\n\nmerlin\n",
"msg_date": "Thu, 24 Aug 2006 16:29:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PowerEdge 2950 questions"
}
]
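A sketch of the JBOD-plus-software-RAID idea Merlin mentions, using Linux md for illustration (on the FreeBSD 6.1 box discussed above, gmirror/gstripe would be the rough equivalent); the device names and mount point are placeholders:

    # export the six drives from the PERC as individual (JBOD) volumes, then:
    mdadm --create /dev/md0 --level=10 --raid-devices=6 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    mkfs.xfs /dev/md0
    mount /dev/md0 /var/lib/pgsql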
[
{
"msg_contents": "Hello All,\n\nThis query runs forever and ever. Nature of this table being lots of\ninserts/deletes/query, I vacuum it every half hour to keep the holes\nreusable and nightly once vacuum analyze to update the optimizer. We've\ngot index on eventtime only. Running it for current day uses index range\nscan and it runs within acceptable time. Below is the explain of the\nquery. Is the order by sequencenum desc prevents from applying limit\noptimization?\n\nexplain SELECT *\nFROM EVENTLOG \nWHERE EVENTTIME>'07/23/06 16:00:00' \nAND EVENTTIME<'08/22/06 16:00:00' \nAND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA') \nORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------------------------------------------\n Limit (cost=15546930.29..15546931.54 rows=500 width=327)\n -> Sort (cost=15546930.29..15581924.84 rows=13997819 width=327)\n Sort Key: eventtime, sequencenum\n -> Seq Scan on eventlog (cost=0.00..2332700.25 rows=13997819\nwidth=327)\n Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp\nwithout time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp\nwithout time zone) AND (((objdomainid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n(5 rows)\n\nThanks,\nStalin\nPg version 8.0.1, suse 64bit.\n",
"msg_date": "Tue, 22 Aug 2006 17:46:22 -0700",
"msg_from": "\"Subbiah, Stalin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query tuning"
},
{
"msg_contents": "Subbiah, Stalin wrote:\n> Hello All,\n> \n> This query runs forever and ever. Nature of this table being lots of\n> inserts/deletes/query, I vacuum it every half hour to keep the holes\n> reusable and nightly once vacuum analyze to update the optimizer. We've\n> got index on eventtime only. Running it for current day uses index range\n> scan and it runs within acceptable time. Below is the explain of the\n> query. Is the order by sequencenum desc prevents from applying limit\n> optimization?\n> \n> explain SELECT *\n> FROM EVENTLOG \n> WHERE EVENTTIME>'07/23/06 16:00:00' \n> AND EVENTTIME<'08/22/06 16:00:00' \n> AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA') \n> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;\n> \n> QUERY PLAN\n> \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------------------------------------------------------\n> Limit (cost=15546930.29..15546931.54 rows=500 width=327)\n> -> Sort (cost=15546930.29..15581924.84 rows=13997819 width=327)\n> Sort Key: eventtime, sequencenum\n> -> Seq Scan on eventlog (cost=0.00..2332700.25 rows=13997819\n> width=327)\n> Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp\n> without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp\n> without time zone) AND (((objdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> (5 rows)\n> \n> Thanks,\n> Stalin\n> Pg version 8.0.1, suse 64bit.\n\nFirstly you should update to 8.0.8 - because it's in the same stream you \nwon't need to do a dump/initdb/reload like a major version change, it \nshould be a simple upgrade.\n\nCan you send explain analyze instead of just explain?\n\nIt sounds like you're not analyz'ing enough - if you're doing lots of \nupdates/deletes/inserts then the statistics postgresql uses to choose \nwhether to do an index scan or something else will quickly be outdated \nand so it'll have to go back to a full table scan every time..\n\nCan you set up autovacuum to handle that for you more regularly?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 23 Aug 2006 11:37:13 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning"
}
]
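On the autovacuum suggestion: under 8.0 that means running the contrib pg_autovacuum daemon alongside the server; once these machines reach 8.1.4 it is built in and can be enabled in postgresql.conf. A sketch of the 8.1 settings (the values are starting points, not tuned recommendations):

    stats_start_collector = on               # both stats settings are required for autovacuum
    stats_row_level = on
    autovacuum = on
    autovacuum_naptime = 60                  # seconds between checks of each database
    autovacuum_vacuum_scale_factor = 0.2     # vacuum when ~20% of a table has changed
    autovacuum_analyze_scale_factor = 0.1    # analyze when ~10% of a table has changed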
[
{
"msg_contents": "Suppose, hypothetically of course, someone lacked foresight, and put a tablespace somewhere with a dumb name, like \"/disk2\", instead of using a symbolic link with a more descriptive name. And then /disk2 needs to be renamed, say to \"/postgres_data\", and this (hypothetical) DBA realizes he has made a dumb mistake.\n\nIs there a way to move a tablespace to a new location without a dump/restore? I, er, this hypothetical guy, knows he can move it and put a symbolic link in for /disk2, but this is somewhat unsatisfactory since \"/disk2\" would have to exist forever.\n\nThanks,\nCraig\n",
"msg_date": "Tue, 22 Aug 2006 18:16:54 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Moving a tablespace"
},
{
"msg_contents": "On Tue, Aug 22, 2006 at 06:16:54PM -0700, Craig A. James wrote:\n> Is there a way to move a tablespace to a new location without a \n> dump/restore? I, er, this hypothetical guy, knows he can move it and put a \n> symbolic link in for /disk2, but this is somewhat unsatisfactory since \n> \"/disk2\" would have to exist forever.\n\nThe last paragraph of the Tablespaces documentation might be helpful:\n\nhttp://www.postgresql.org/docs/8.1/interactive/manage-ag-tablespaces.html\n\n\"The directory $PGDATA/pg_tblspc contains symbolic links that point\nto each of the non-built-in tablespaces defined in the cluster.\nAlthough not recommended, it is possible to adjust the tablespace\nlayout by hand by redefining these links. Two warnings: do not do\nso while the postmaster is running; and after you restart the\npostmaster, update the pg_tablespace catalog to show the new\nlocations. (If you do not, pg_dump will continue to show the old\ntablespace locations.)\"\n\nI just tested this and it appeared to work, but this hypothetical\nDBA might want to wait for others to comment before proceeding. He\nmight also want to initdb and populate a test cluster and practice\nthe procedure before doing it for real.\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 22 Aug 2006 19:36:08 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving a tablespace"
},
{
"msg_contents": "Michael Fuhr <[email protected]> writes:\n> On Tue, Aug 22, 2006 at 06:16:54PM -0700, Craig A. James wrote:\n>> Is there a way to move a tablespace to a new location without a \n>> dump/restore?\n\n> The last paragraph of the Tablespaces documentation might be helpful:\n> http://www.postgresql.org/docs/8.1/interactive/manage-ag-tablespaces.html\n\n> I just tested this and it appeared to work, but this hypothetical\n> DBA might want to wait for others to comment before proceeding.\n\nAFAIK it works fine. Shut down postmaster, move tablespace's directory\ntree somewhere else, fix the symbolic link in $PGDATA/pg_tblspc, start\npostmaster, update the pg_tablespace entry. There isn't anyplace else\nin Postgres that knows where that link leads. But if you are running\na hot PITR backup, see the caveats in TFM about what will happen on the\nbackup machine.\n\n> He might also want to initdb and populate a test cluster and practice\n> the procedure before doing it for real.\n\n\"Always mount a scratch monkey\" ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2006 22:34:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving a tablespace "
}
]
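Putting Tom's steps together as a shell/psql sketch; the tablespace name, the OID-named symlink, and the paths are placeholders for whatever the real cluster shows in $PGDATA/pg_tblspc and pg_tablespace:

    pg_ctl -D $PGDATA stop
    mv /disk2 /postgres_data
    cd $PGDATA/pg_tblspc
    rm 16385 && ln -s /postgres_data 16385      # the link is named after the tablespace OID
    pg_ctl -D $PGDATA start
    psql -c "UPDATE pg_tablespace SET spclocation = '/postgres_data'
             WHERE spcname = 'my_tablespace';"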
[
{
"msg_contents": "Actually these servers will be upgraded to 8.1.4 in couple of months.\n\nHere you go with explain analyze.\n\n# explain analyze SELECT *\nFROM EVENTLOG \nWHERE EVENTTIME>'07/23/06 16:00:00' AND EVENTTIME<'08/22/06 16:00:00' \nAND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA') \nORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------------------------------------------\n Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\ntime=427771.568..427772.904 rows=500 loops=1)\n -> Sort (cost=15583108.89..15618188.88 rows=14031998 width=327)\n(actual time=427770.504..427771.894 rows=1000 loops=1)\n Sort Key: eventtime, sequencenum\n -> Seq Scan on eventlog (cost=0.00..2334535.17 rows=14031998\nwidth=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp\nwithout time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp\nwithout time zone) AND (((objdomainid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n Total runtime: 437884.134 ms\n(6 rows)\n\n-----Original Message-----\nFrom: Chris [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 6:37 PM\nTo: Subbiah, Stalin\nCc: [email protected]\nSubject: Re: [PERFORM] Query tuning\n\nSubbiah, Stalin wrote:\n> Hello All,\n> \n> This query runs forever and ever. Nature of this table being lots of \n> inserts/deletes/query, I vacuum it every half hour to keep the holes \n> reusable and nightly once vacuum analyze to update the optimizer. \n> We've got index on eventtime only. Running it for current day uses \n> index range scan and it runs within acceptable time. Below is the \n> explain of the query. 
Is the order by sequencenum desc prevents from \n> applying limit optimization?\n> \n> explain SELECT *\n> FROM EVENTLOG\n> WHERE EVENTTIME>'07/23/06 16:00:00' \n> AND EVENTTIME<'08/22/06 16:00:00' \n> AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')\n> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;\n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> -------------------------------------------------------------\n> Limit (cost=15546930.29..15546931.54 rows=500 width=327)\n> -> Sort (cost=15546930.29..15581924.84 rows=13997819 width=327)\n> Sort Key: eventtime, sequencenum\n> -> Seq Scan on eventlog (cost=0.00..2332700.25 \n> rows=13997819\n> width=327)\n> Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp \n> without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp \n> without time zone) AND (((objdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> (5 rows)\n> \n> Thanks,\n> Stalin\n> Pg version 8.0.1, suse 64bit.\n\nFirstly you should update to 8.0.8 - because it's in the same stream you\nwon't need to do a dump/initdb/reload like a major version change, it\nshould be a simple upgrade.\n\nCan you send explain analyze instead of just explain?\n\nIt sounds like you're not analyz'ing enough - if you're doing lots of\nupdates/deletes/inserts then the statistics postgresql uses to choose\nwhether to do an index scan or something else will quickly be outdated\nand so it'll have to go back to a full table scan every time..\n\nCan you set up autovacuum to handle that for you more regularly?\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 22 Aug 2006 19:53:29 -0700",
"msg_from": "\"Subbiah, Stalin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning"
},
{
"msg_contents": "Subbiah, Stalin wrote:\n> Actually these servers will be upgraded to 8.1.4 in couple of months.\n\neven so, you could get some bad data in there.\nhttp://www.postgresql.org/docs/8.0/static/release.html . Go through the \nold release notes and you'll find various race conditions, crashes etc.\n\n> Here you go with explain analyze.\n> \n> # explain analyze SELECT *\n> FROM EVENTLOG \n> WHERE EVENTTIME>'07/23/06 16:00:00' AND EVENTTIME<'08/22/06 16:00:00' \n> AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA') \n> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n> \n> QUERY PLAN\n> \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------------------------------------------------------\n> Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\n> time=427771.568..427772.904 rows=500 loops=1)\n> -> Sort (cost=15583108.89..15618188.88 rows=14031998 width=327)\n> (actual time=427770.504..427771.894 rows=1000 loops=1)\n> Sort Key: eventtime, sequencenum\n> -> Seq Scan on eventlog (cost=0.00..2334535.17 rows=14031998\n> width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n> Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp\n> without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp\n> without time zone) AND (((objdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> Total runtime: 437884.134 ms\n> (6 rows)\n\nIf you analyze the table then run this again what plan does it come back \nwith?\n\nI can't read explain output properly but I suspect (and I'm sure I'll be \ncorrected if need be) that the sort step is way out of whack and so is \nthe seq scan because the stats aren't up to date enough.\n\nDo you have an index on objdomainid, objid and userdomainid (one index \nper field) ? I wonder if that will help much.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 23 Aug 2006 13:05:34 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning"
}
]
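If the single-column indexes Chris asks about are added, it would look like the sketch below (index names assumed). An OR across three different columns becomes much more attractive once the servers are on 8.1, which can combine several such indexes with a bitmap OR:

    CREATE INDEX eventlog_objdomainid_idx  ON eventlog (objdomainid);
    CREATE INDEX eventlog_objid_idx        ON eventlog (objid);
    CREATE INDEX eventlog_userdomainid_idx ON eventlog (userdomainid);
    ANALYZE eventlog;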
[
{
"msg_contents": "I get the same plan after running vacuum analyze. Nope, I don't have\nindex on objdomainid, objid and userdomainid. Only eventime has it.\n\n-----Original Message-----\nFrom: Chris [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 8:06 PM\nTo: Subbiah, Stalin\nCc: [email protected]\nSubject: Re: [PERFORM] Query tuning\n\nSubbiah, Stalin wrote:\n> Actually these servers will be upgraded to 8.1.4 in couple of months.\n\neven so, you could get some bad data in there.\nhttp://www.postgresql.org/docs/8.0/static/release.html . Go through the\nold release notes and you'll find various race conditions, crashes etc.\n\n> Here you go with explain analyze.\n> \n> # explain analyze SELECT *\n> FROM EVENTLOG\n> WHERE EVENTTIME>'07/23/06 16:00:00' AND EVENTTIME<'08/22/06 16:00:00'\n\n> AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')\n> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> ----------------------------------------------------------------------\n> --\n> -------------------------------------------------------------\n> Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\n> time=427771.568..427772.904 rows=500 loops=1)\n> -> Sort (cost=15583108.89..15618188.88 rows=14031998 width=327) \n> (actual time=427770.504..427771.894 rows=1000 loops=1)\n> Sort Key: eventtime, sequencenum\n> -> Seq Scan on eventlog (cost=0.00..2334535.17 \n> rows=14031998\n> width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n> Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp \n> without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp \n> without time zone) AND (((objdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> Total runtime: 437884.134 ms\n> (6 rows)\n\nIf you analyze the table then run this again what plan does it come back\nwith?\n\nI can't read explain output properly but I suspect (and I'm sure I'll be\ncorrected if need be) that the sort step is way out of whack and so is\nthe seq scan because the stats aren't up to date enough.\n\nDo you have an index on objdomainid, objid and userdomainid (one index\nper field) ? I wonder if that will help much.\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 23 Aug 2006 11:02:35 -0700",
"msg_from": "\"Subbiah, Stalin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning"
},
{
"msg_contents": "It seems to me that what would work best is an index scan backward on the\neventtime index. I don't see why that wouldn't work for you, maybe the\nplanner is just esitmating the seq scan and sort is faster for some reason.\nWhat does EXPLAIN say if you use a small limit and offset like 10? Or what\ndoes EXPLAIN say if you first run \"set enable_seqscan=false;\" (If you get\nthe same plan, then I wouldn't bother running EXPLAIN ANALYZE, but if you\nget a different plan I would run EXPLAIN ANALYZE to see if the new plan is\nany faster.)\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Subbiah, Stalin\n> Sent: Wednesday, August 23, 2006 1:03 PM\n> To: Chris\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> \n> I get the same plan after running vacuum analyze. Nope, I don't have\n> index on objdomainid, objid and userdomainid. Only eventime has it.\n> \n> -----Original Message-----\n> From: Chris [mailto:[email protected]] \n> Sent: Tuesday, August 22, 2006 8:06 PM\n> To: Subbiah, Stalin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> Subbiah, Stalin wrote:\n> > Actually these servers will be upgraded to 8.1.4 in couple \n> of months.\n> \n> even so, you could get some bad data in there.\n> http://www.postgresql.org/docs/8.0/static/release.html . Go \n> through the\n> old release notes and you'll find various race conditions, \n> crashes etc.\n> \n> > Here you go with explain analyze.\n> > \n> > # explain analyze SELECT *\n> > FROM EVENTLOG\n> > WHERE EVENTTIME>'07/23/06 16:00:00' AND \n> EVENTTIME<'08/22/06 16:00:00'\n> \n> > AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')\n> > ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n> > \n> > QUERY PLAN\n> > \n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > -------------------------------------------------------------\n> > Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\n> > time=427771.568..427772.904 rows=500 loops=1)\n> > -> Sort (cost=15583108.89..15618188.88 rows=14031998 \n> width=327) \n> > (actual time=427770.504..427771.894 rows=1000 loops=1)\n> > Sort Key: eventtime, sequencenum\n> > -> Seq Scan on eventlog (cost=0.00..2334535.17 \n> > rows=14031998\n> > width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n> > Filter: ((eventtime > '2006-07-23 \n> 16:00:00'::timestamp \n> > without time zone) AND (eventtime < '2006-08-22 \n> 16:00:00'::timestamp \n> > without time zone) AND (((objdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> > Total runtime: 437884.134 ms\n> > (6 rows)\n> \n> If you analyze the table then run this again what plan does \n> it come back\n> with?\n> \n> I can't read explain output properly but I suspect (and I'm \n> sure I'll be\n> corrected if need be) that the sort step is way out of whack and so is\n> the seq scan because the stats aren't up to date enough.\n> \n> Do you have an index on objdomainid, objid 
and userdomainid (one index\n> per field) ? I wonder if that will help much.\n> \n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n",
"msg_date": "Wed, 23 Aug 2006 18:19:59 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning"
}
]
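One way to get the backward index scan Dave describes is an index that matches the ORDER BY exactly; the planner can then walk it in reverse and stop once it has collected the LIMIT plus OFFSET rows. A sketch (index name assumed; whether it actually beats the sort depends on how selective the OR filter is inside the date range):

    CREATE INDEX eventlog_time_seq_idx ON eventlog (eventtime, sequencenum);

    EXPLAIN ANALYZE
    SELECT * FROM eventlog
    WHERE eventtime > '2006-07-23 16:00:00'
      AND eventtime < '2006-08-22 16:00:00'
      AND (objdomainid = 'tzRh39d0d91luNGT1weIUjLvFIcA'
           OR objid = 'tzRh39d0d91luNGT1weIUjLvFIcA'
           OR userdomainid = 'tzRh39d0d91luNGT1weIUjLvFIcA')
    ORDER BY eventtime DESC, sequencenum DESC
    LIMIT 500 OFFSET 500;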
[
{
"msg_contents": "I am planning to test various filesystems on some new hardware I'm\ngetting. Is pgbench a good way to try out the filesystem?\n\nI'm currently planning to test some or all of:\nLinux: ext2, ext3, XFS, JFS, reiser3, reiser4\nFreeBSD: UFS, UFS+SU\n\nSo, I'm looking for a good way to test just the filesystem performance\nthrough PostgreSQL (since database access is different than normal FS\nactivity). Would pgbench give me a good approximation?\n\nAlso, do ext2 or UFS without soft updates run the risk of losing or\ncorrupting my data?\n\nI saw Chris Browne did some benchmarks back in 2003 and determined that\nJFS was a good choice. However, I assume things have changed somewhat\nsince then. Does anyone have a pointer to some newer results?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 23 Aug 2006 15:23:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which benchmark to use for testing FS?"
},
{
"msg_contents": "On Wed, Aug 23, 2006 at 03:23:03PM -0700, Jeff Davis wrote:\n>Also, do ext2 or UFS without soft updates run the risk of losing or\n>corrupting my data?\n\nI suggest you check the list archives; there's a lot of stuff about \nfilesystems and disk configuration in there.\n\nMike Stone\n",
"msg_date": "Wed, 23 Aug 2006 21:50:56 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which benchmark to use for testing FS?"
},
{
"msg_contents": "On Wed, 2006-08-23 at 21:50 -0400, Michael Stone wrote:\n> On Wed, Aug 23, 2006 at 03:23:03PM -0700, Jeff Davis wrote:\n> >Also, do ext2 or UFS without soft updates run the risk of losing or\n> >corrupting my data?\n> \n> I suggest you check the list archives; there's a lot of stuff about \n> filesystems and disk configuration in there.\n> \n\nI spent a while looking in the list archives, but the list archives have\nbeen misbehaving lately (you click on a search result and the message\nthat appears doesn't have the same subject as the one you clicked on).\nThey may have fixed that (they are aware of the problem, according to\npgsql-www). Also, the messages I was able to find were mostly from a\nlong time ago.\n\nIf you have a pointer to a particularly useful thread please let me\nknow.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 24 Aug 2006 09:05:05 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Which benchmark to use for testing FS?"
}
] |
[
{
"msg_contents": "Changing limit or offset to a small number doesn't have any change in\nplans. Likewise enable_seqscan to false. They still take 8-10 mins to\nruns. \n\n-----Original Message-----\nFrom: Dave Dutcher [mailto:[email protected]] \nSent: Wednesday, August 23, 2006 4:20 PM\nTo: Subbiah, Stalin\nCc: [email protected]\nSubject: RE: [PERFORM] Query tuning\n\nIt seems to me that what would work best is an index scan backward on\nthe eventtime index. I don't see why that wouldn't work for you, maybe\nthe planner is just esitmating the seq scan and sort is faster for some\nreason.\nWhat does EXPLAIN say if you use a small limit and offset like 10? Or\nwhat does EXPLAIN say if you first run \"set enable_seqscan=false;\" (If\nyou get the same plan, then I wouldn't bother running EXPLAIN ANALYZE,\nbut if you get a different plan I would run EXPLAIN ANALYZE to see if\nthe new plan is any faster.)\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Subbiah, \n> Stalin\n> Sent: Wednesday, August 23, 2006 1:03 PM\n> To: Chris\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> \n> I get the same plan after running vacuum analyze. Nope, I don't have \n> index on objdomainid, objid and userdomainid. Only eventime has it.\n> \n> -----Original Message-----\n> From: Chris [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 8:06 PM\n> To: Subbiah, Stalin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> Subbiah, Stalin wrote:\n> > Actually these servers will be upgraded to 8.1.4 in couple\n> of months.\n> \n> even so, you could get some bad data in there.\n> http://www.postgresql.org/docs/8.0/static/release.html . Go through \n> the old release notes and you'll find various race conditions, crashes\n\n> etc.\n> \n> > Here you go with explain analyze.\n> > \n> > # explain analyze SELECT *\n> > FROM EVENTLOG\n> > WHERE EVENTTIME>'07/23/06 16:00:00' AND\n> EVENTTIME<'08/22/06 16:00:00'\n> \n> > AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')\n> > ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n> > \n> > QUERY PLAN\n> > \n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > -------------------------------------------------------------\n> > Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\n> > time=427771.568..427772.904 rows=500 loops=1)\n> > -> Sort (cost=15583108.89..15618188.88 rows=14031998\n> width=327)\n> > (actual time=427770.504..427771.894 rows=1000 loops=1)\n> > Sort Key: eventtime, sequencenum\n> > -> Seq Scan on eventlog (cost=0.00..2334535.17\n> > rows=14031998\n> > width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n> > Filter: ((eventtime > '2006-07-23\n> 16:00:00'::timestamp\n> > without time zone) AND (eventtime < '2006-08-22\n> 16:00:00'::timestamp\n> > without time zone) AND (((objdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> > Total runtime: 437884.134 ms\n> > (6 rows)\n> \n> If you 
analyze the table then run this again what plan does it come \n> back with?\n> \n> I can't read explain output properly but I suspect (and I'm sure I'll \n> be corrected if need be) that the sort step is way out of whack and so\n\n> is the seq scan because the stats aren't up to date enough.\n> \n> Do you have an index on objdomainid, objid and userdomainid (one index\n\n> per field) ? I wonder if that will help much.\n> \n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n",
"msg_date": "Wed, 23 Aug 2006 21:44:20 -0700",
"msg_from": "\"Subbiah, Stalin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Query tuning"
},
{
"msg_contents": " \nYou really need to index the three fields you are restricting with your\nselect query (OBJDOMAINID, OBJID and USERDOMAINID). Depending on whether\nor not you have other queries that filter for one of the three fields\nbut not the others, you might want to have separate indexes across each\nof the fields. Also, if those types are not CHAR/VARCHAR/TEXT/similar,\ntry casting your values to the types for those fields (ie,\nOBJDOMAINID='somethinghere'::WVARCHAR, and should be much less necessary\nwith pgsql v8+). Since you don't have indexes on those fields, the only\nthing the query planner can do is a full table scan and for each record\ncheck the field values. With indexes it will be able to filter by those\nvalues first and then sort the remaining values. On the other hand, if\nmost of your records have the same values for your filter fields (ie\nonly a handful of values dispersed amongst millions of rows) then you\nmay be back to the seq scan.\n\nJason Minion\[email protected]\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Subbiah, Stalin\nSent: Wednesday, August 23, 2006 11:44 PM\nTo: Dave Dutcher\nCc: [email protected]; [email protected]\nSubject: Re: [ADMIN] [PERFORM] Query tuning\n\nChanging limit or offset to a small number doesn't have any change in\nplans. Likewise enable_seqscan to false. They still take 8-10 mins to\nruns. \n\n-----Original Message-----\nFrom: Dave Dutcher [mailto:[email protected]]\nSent: Wednesday, August 23, 2006 4:20 PM\nTo: Subbiah, Stalin\nCc: [email protected]\nSubject: RE: [PERFORM] Query tuning\n\nIt seems to me that what would work best is an index scan backward on\nthe eventtime index. I don't see why that wouldn't work for you, maybe\nthe planner is just esitmating the seq scan and sort is faster for some\nreason.\nWhat does EXPLAIN say if you use a small limit and offset like 10? Or\nwhat does EXPLAIN say if you first run \"set enable_seqscan=false;\" (If\nyou get the same plan, then I wouldn't bother running EXPLAIN ANALYZE,\nbut if you get a different plan I would run EXPLAIN ANALYZE to see if\nthe new plan is any faster.)\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Subbiah, \n> Stalin\n> Sent: Wednesday, August 23, 2006 1:03 PM\n> To: Chris\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> \n> I get the same plan after running vacuum analyze. Nope, I don't have \n> index on objdomainid, objid and userdomainid. Only eventime has it.\n> \n> -----Original Message-----\n> From: Chris [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 8:06 PM\n> To: Subbiah, Stalin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query tuning\n> \n> Subbiah, Stalin wrote:\n> > Actually these servers will be upgraded to 8.1.4 in couple\n> of months.\n> \n> even so, you could get some bad data in there.\n> http://www.postgresql.org/docs/8.0/static/release.html . 
Go through \n> the old release notes and you'll find various race conditions, crashes\n\n> etc.\n> \n> > Here you go with explain analyze.\n> > \n> > # explain analyze SELECT *\n> > FROM EVENTLOG\n> > WHERE EVENTTIME>'07/23/06 16:00:00' AND\n> EVENTTIME<'08/22/06 16:00:00'\n> \n> > AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA' \n> > OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')\n> > ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;\n> > \n> > QUERY PLAN\n> > \n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > \n> ----------------------------------------------------------------------\n> > --\n> > -------------------------------------------------------------\n> > Limit (cost=15583110.14..15583111.39 rows=500 width=327) (actual\n> > time=427771.568..427772.904 rows=500 loops=1)\n> > -> Sort (cost=15583108.89..15618188.88 rows=14031998\n> width=327)\n> > (actual time=427770.504..427771.894 rows=1000 loops=1)\n> > Sort Key: eventtime, sequencenum\n> > -> Seq Scan on eventlog (cost=0.00..2334535.17\n> > rows=14031998\n> > width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)\n> > Filter: ((eventtime > '2006-07-23\n> 16:00:00'::timestamp\n> > without time zone) AND (eventtime < '2006-08-22\n> 16:00:00'::timestamp\n> > without time zone) AND (((objdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text =\n> > 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))\n> > Total runtime: 437884.134 ms\n> > (6 rows)\n> \n> If you analyze the table then run this again what plan does it come \n> back with?\n> \n> I can't read explain output properly but I suspect (and I'm sure I'll \n> be corrected if need be) that the sort step is way out of whack and so\n\n> is the seq scan because the stats aren't up to date enough.\n> \n> Do you have an index on objdomainid, objid and userdomainid (one index\n\n> per field) ? I wonder if that will help much.\n> \n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n",
"msg_date": "Thu, 24 Aug 2006 00:30:56 -0500",
"msg_from": "\"Jason Minion\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Query tuning"
}
] |
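The indexing advice above boils down to something like the sketch below, assuming the eventlog table from the quoted plan. Whether all three indexes pay off depends on how selective each column is, as noted; the ANALYZE refreshes planner statistics afterwards.

    CREATE INDEX eventlog_objdomainid_idx  ON eventlog (objdomainid);
    CREATE INDEX eventlog_objid_idx        ON eventlog (objid);
    CREATE INDEX eventlog_userdomainid_idx ON eventlog (userdomainid);

    -- Refresh planner statistics so the new indexes can be costed sensibly:
    ANALYZE eventlog;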
[
{
"msg_contents": "I am evaluating PostgreSQL as a candiate to cooperate with a java\napplication.\n\nPerformance test set up:\nOnly one table in the database schema.\nThe tables contains a bytea column plus some other columns.\nThe PostgreSQL server runs on Linux.\n\nTest execution:\nThe java application connects throught TCP/IP (jdbc) and performs 50000\ninserts.\n\nResult:\nMonitoring the processes using top reveals that the total amount of\nmemory used slowly increases during the test. When reaching insert\nnumber 40000, or somewhere around that, memory is exhausted, and the the\nsystems begins to swap. Each of the postmaster processes seem to use a\nconstant amount of memory, but the total memory usage increases all the\nsame.\n\nQuestions:\nIs this way of testing the performance a bad idea? Actual database usage\nwill be a mixture of inserts and queries. Maybe the test should behave\nlike that instead, but I wanted to keep things simple.\nWhy is the memory usage slowly increasing during the whole test?\nIs there a way of keeping PostgreSQL from exhausting memory during the\ntest? I have looked for some fitting parameters to used, but I am\nprobably to much of a novice to understand which to choose.\n\nThanks in advance,\nFredrik Israelsson\n",
"msg_date": "Thu, 24 Aug 2006 11:04:28 +0200",
"msg_from": "\"Fredrik Israelsson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is this way of testing a bad idea?"
},
{
"msg_contents": "\"Fredrik Israelsson\" <[email protected]> writes:\n> Monitoring the processes using top reveals that the total amount of\n> memory used slowly increases during the test. When reaching insert\n> number 40000, or somewhere around that, memory is exhausted, and the the\n> systems begins to swap. Each of the postmaster processes seem to use a\n> constant amount of memory, but the total memory usage increases all the\n> same.\n\nThat statement is basically nonsense. If there is a memory leak then\nyou should be able to pin it on some specific process.\n\nWhat's your test case exactly, and what's your basis for asserting that\nthe system starts to swap? We've seen people fooled by the fact that\nsome versions of ps report a process's total memory size as including\nwhatever pages of Postgres' shared memory area the process has actually\nchanced to touch. So as a backend randomly happens to use different\nshared buffers its reported memory size grows ... but there's no actual\nleak, and no reason why the system would start to swap. (Unless maybe\nyou've set an unreasonably high shared_buffers setting?)\n\nAnother theory is that you're watching free memory go to zero because\nthe kernel is filling free memory with copies of disk pages. This is\nnot a leak either. Zero free memory is the normal, expected state of\na Unix system that's been up for any length of time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Aug 2006 08:51:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is this way of testing a bad idea? "
},
{
"msg_contents": "> Monitoring the processes using top reveals that the total amount of\n> memory used slowly increases during the test. When reaching insert\n> number 40000, or somewhere around that, memory is exhausted, and the the\n> systems begins to swap. Each of the postmaster processes seem to use a\n> constant amount of memory, but the total memory usage increases all the\n> same.\n\nSo . . . . what's using the memory? It doesn't sound like PG is using\nit, so is it your Java app?\n\nIf it's the Java app, then it could be that your code isn't remembering\nto do things like close statements, or perhaps the max heap size is set\ntoo large for your hardware. With early RHEL3 kernels there was also a\nquirky interaction with Sun's JVM where the system swaps itself to death\neven when less than half the physical memory is in use.\n\nIf its neither PG nor Java, then perhaps you're misinterpreting the\nresults of top. Remember that the \"free\" memory on a properly running\nUnix box that's been running for a while should hover just a bit above\nzero due to normal caching; read up on the 'free' command to see the\nactual memory utilization.\n\n-- Mark\n",
"msg_date": "Thu, 24 Aug 2006 06:40:19 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is this way of testing a bad idea?"
},
{
"msg_contents": "Also, as Tom stated, defining your test cases is a good idea before you\nstart benchmarking. Our application has a load data phase, then a\nquery/active use phase. So, we benchmark both (data loads, and then\ntransactions) since they're quite different workloads, and there's\ndifferent ways to optimize for each.\n\nFor bulk loads, I would look into either batching several inserts into\none transaction or the copy command. Do some testing here to figure out\nwhat works best for your hardware/setup (for example, we usually batch\nseveral thousand inserts together for a pretty dramatic increase in\nperformance). There's usually a sweet spot in there depending on how\nyour WAL is configured and other concurrent activity.\n\nAlso, when testing bulk loads, be careful to setup a realistic test. If\nyour application requires foreign keys and indexes, these can\nsignificantly slow down bulk inserts. There's several optimizations-\ncheck the mailing lists and the manual.\n\nAnd lastly, when you're loading tons of data, as previously pointed out,\nthe normal state of the system is to be heavily utilized (in fact, I\nwould think this is ideal since you know you're making full use of your\nhardware).\n\nHTH,\n\nBucky\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark Lewis\nSent: Thursday, August 24, 2006 9:40 AM\nTo: Fredrik Israelsson\nCc: [email protected]\nSubject: Re: [PERFORM] Is this way of testing a bad idea?\n\n> Monitoring the processes using top reveals that the total amount of\n> memory used slowly increases during the test. When reaching insert\n> number 40000, or somewhere around that, memory is exhausted, and the\nthe\n> systems begins to swap. Each of the postmaster processes seem to use a\n> constant amount of memory, but the total memory usage increases all\nthe\n> same.\n\nSo . . . . what's using the memory? It doesn't sound like PG is using\nit, so is it your Java app?\n\nIf it's the Java app, then it could be that your code isn't remembering\nto do things like close statements, or perhaps the max heap size is set\ntoo large for your hardware. With early RHEL3 kernels there was also a\nquirky interaction with Sun's JVM where the system swaps itself to death\neven when less than half the physical memory is in use.\n\nIf its neither PG nor Java, then perhaps you're misinterpreting the\nresults of top. Remember that the \"free\" memory on a properly running\nUnix box that's been running for a while should hover just a bit above\nzero due to normal caching; read up on the 'free' command to see the\nactual memory utilization.\n\n-- Mark\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n",
"msg_date": "Thu, 24 Aug 2006 10:38:47 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is this way of testing a bad idea?"
}
] |
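A rough sketch of the batching advice above, written as plain SQL (the table name, columns and file path are invented for illustration; from JDBC the equivalent is turning off autocommit and committing every few thousand rows):

    -- Group many inserts into one transaction instead of one commit per row:
    BEGIN;
    INSERT INTO test_table (id, payload) VALUES (1, 'first blob');
    INSERT INTO test_table (id, payload) VALUES (2, 'second blob');
    -- ... a few thousand rows per batch ...
    COMMIT;

    -- For pure bulk loading, COPY from a prepared tab-separated file
    -- is usually faster still:
    COPY test_table FROM '/tmp/test_data.tsv';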
[
{
"msg_contents": "We recently here picked up a adtx san and are having good results with\nit. It's pretty flexible, having dual 4gb fc controllers and also dual\nsas controllers do you can run it as attached sas or fc. Both have\ntheir advantages and unfortuantely I didn't have time to do much\nbenchmarking becuase we had to get the unit into production pretty\nquickly.\n\nWith both controllers running (we did dual 7 drive raid 5 + hot spare)\nwe were able to push about 550 mb/sec onto the unit using a dual\nported qlogic fc hba. This was on 750g sata disks :) you can also put\nsas drives in it for more of a db oriented box. The seeks were good\nbut not great, about 400 on each side, but I have a feeling this could\nbe optmiized playing with various software/hardware raid strategies\nwhich we didn't have time to do (this is set up as a file server, not\na db server).\n\nAt some point in the future we are gearing up a new database server\nand I get to set it up one or two more as attached sas, which should\nbe interesting.\n\nWhile these are not brain-busting 'Luke Lonergan' realm numbers, it's\na very solid unit and comes cheap in my opinion.\n\nmerlin\n",
"msg_date": "Thu, 24 Aug 2006 16:47:55 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "adtx"
}
] |
[
{
"msg_contents": "Parent table has a column say column1 which is indexed (parent table and\nall child tables are indexed on that column)\n\n \n\nWhen a select max(column1) is done on parent table..takes a very long\ntime to get back with the result\n\nThe same query on a child table gives instantaneous response (the tables\nare quite large appx.each child table has about 20-30 million rows)\n\n \n\nConstraint exclusion is turned on. The column is not the basis for\npartitioning. Postgres 8.1.2\n\n\n\n\n\n\n\n\n\n\nParent table has a column say column1 which is indexed\n(parent table and all child tables are indexed on that column)\n \nWhen a select max(column1) is done on parent table..takes a\nvery long time to get back with the result\nThe same query on a child table gives instantaneous response\n(the tables are quite large appx.each child table has about 20-30 million rows)\n \nConstraint exclusion is turned on. The column is not the\nbasis for partitioning. Postgres 8.1.2",
"msg_date": "Thu, 24 Aug 2006 16:43:53 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "select max(column) from parent table very slow"
},
{
"msg_contents": "Sriram Dandapani wrote:\n> Parent table has a column say column1 which is indexed (parent table and\n> all child tables are indexed on that column)\n> \n\nDo you mean?\n\nselect max(foo) from bar;\n\nIn older versions of postgresql that would scan the whole table. In 8.1 \nand above it doesn't. However, I am guess that since this is a \npartitioned table the planner isn't smart enough to just perform the \nquery on each child and a max on the set that is returned. Thus you are \nscanning each table completely.\n\nBut that is just a guess.\n\nJoshua D. Drake\n\n\n> \n> \n> When a select max(column1) is done on parent table..takes a very long\n> time to get back with the result\n> \n> The same query on a child table gives instantaneous response (the tables\n> are quite large appx.each child table has about 20-30 million rows)\n> \n> \n> \n> Constraint exclusion is turned on. The column is not the basis for\n> partitioning. Postgres 8.1.2\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 24 Aug 2006 18:08:49 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) from parent table very slow"
},
{
"msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> Sriram Dandapani wrote:\n>> Parent table has a column say column1 which is indexed (parent table and\n>> all child tables are indexed on that column)\n\n> In older versions of postgresql that would scan the whole table. In 8.1 \n> and above it doesn't. However, I am guess that since this is a \n> partitioned table the planner isn't smart enough to just perform the \n> query on each child and a max on the set that is returned.\n\nIt is not. Feel free to submit a patch for planagg.c ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2006 00:33:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) from parent table very slow "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n>> Sriram Dandapani wrote:\n>>> Parent table has a column say column1 which is indexed (parent table and\n>>> all child tables are indexed on that column)\n> \n>> In older versions of postgresql that would scan the whole table. In 8.1 \n>> and above it doesn't. However, I am guess that since this is a \n>> partitioned table the planner isn't smart enough to just perform the \n>> query on each child and a max on the set that is returned.\n> \n> It is not. Feel free to submit a patch for planagg.c ...\n\nI think my patch to pgbench may have set your expectations of me a bit \nhigh ;)...\n\nJoshua D. Drake\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 24 Aug 2006 22:16:43 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) from parent table very slow"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Tom Lane wrote:\n> >\"Joshua D. Drake\" <[email protected]> writes:\n> >>Sriram Dandapani wrote:\n> >>>Parent table has a column say column1 which is indexed (parent table and\n> >>>all child tables are indexed on that column)\n> >\n> >>In older versions of postgresql that would scan the whole table. In 8.1 \n> >>and above it doesn't. However, I am guess that since this is a \n> >>partitioned table the planner isn't smart enough to just perform the \n> >>query on each child and a max on the set that is returned.\n> >\n> >It is not. Feel free to submit a patch for planagg.c ...\n> \n> I think my patch to pgbench may have set your expectations of me a bit \n> high ;)...\n\nActually I think this is the perfect opportunity for you -- a patch that\nnot only was absolutely unexpected, undiscussed, and posted without\nprevious warning, but one that you were actually asked about! And\nweren't you recently joking about giving Tom nightmares by sending\npatches to the optimizer?\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 25 Aug 2006 10:45:30 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) from parent table very slow"
},
{
"msg_contents": ">>>> query on each child and a max on the set that is returned.\n>>> It is not. Feel free to submit a patch for planagg.c ...\n>> I think my patch to pgbench may have set your expectations of me a bit \n>> high ;)...\n> \n> Actually I think this is the perfect opportunity for you -- a patch that\n> not only was absolutely unexpected, undiscussed, and posted without\n> previous warning, but one that you were actually asked about! And\n> weren't you recently joking about giving Tom nightmares by sending\n> patches to the optimizer?\n\nYeah, but Tom is getting up there a bit, and that might mean a heart \nattack. Then what would we do? ;)\n\nJoshua D. Drake\n\n\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Fri, 25 Aug 2006 08:23:37 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) from parent table very slow"
}
] |
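Until the planner learns this optimization for inherited tables, a manual workaround is possible: ask each child (and the parent itself, via ONLY) for its own max, which the thread shows is fast because each child is indexed on the column, and then take the max of those. The parent and child table names below are hypothetical.

    SELECT max(m) AS max_column1
    FROM (
        SELECT max(column1) AS m FROM ONLY parent_table
        UNION ALL
        SELECT max(column1) FROM child_2006_q1
        UNION ALL
        SELECT max(column1) FROM child_2006_q2
    ) AS per_child;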
[
{
"msg_contents": "Hello,\n\nI want to ask, Is there any way to insert records from XML file to the\npostgres database?\n\nPlease provide me some help regarding above query.\n\nPostgres version which we are using is 7.2.4\n\nThanks,\nSonal\n\nHello,\n \nI want to ask, Is there any way to insert records from XML file to the postgres database?\n \nPlease provide me some help regarding above query.\n \nPostgres version which we are using is 7.2.4\n \nThanks,\nSonal",
"msg_date": "Fri, 25 Aug 2006 21:23:38 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Related to Inserting into the database from XML file"
},
{
"msg_contents": "On Fri, 2006-08-25 at 21:23 +0530, soni de wrote:\n> Hello,\n> \n> I want to ask, Is there any way to insert records from XML file to the\n> postgres database?\n\nTry the contrib/xml2 module.\n\n> \n> Please provide me some help regarding above query.\n> \n> Postgres version which we are using is 7.2.4\n> \n\nI highly recommend upgrading if at all possible. That's quite an old\nversion.\n\nHope this helps,\n\tJeff Davis\n\n\n",
"msg_date": "Fri, 25 Aug 2006 09:19:27 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Related to Inserting into the database from XML file"
},
{
"msg_contents": "> On Fri, 2006-08-25 at 21:23 +0530, soni de wrote:\n> > Hello,\n> > \n> > I want to ask, Is there any way to insert records from XML \n> > file to the postgres database?\n> \n> Try the contrib/xml2 module.\n\nAlas, that module will not help you much with the insertion of records.\nIt is more about querying XML that is stored within the database. \n\nA basic question is whether you want to store XML in the DB or you just\nhave data that is in XML now and you want it loaded into a table\nstructure. The xml2 module will only be useful in the first case.\n\nIn either case the approach is to transform the data into a form that\nPGSQL's COPY understands or into a bunch of INSERT statements (much less\nperformant). To that end you probably want to become familiar with XSLT\nunless the data is so simple that a processing with regular tools (perl,\nsed, awk) will suffice.\n\nGeorge\n",
"msg_date": "Sun, 27 Aug 2006 07:18:09 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Related to Inserting into the database from XML file"
},
{
"msg_contents": "I am little bit confused between whether to insert XML file as it is or\ninsert data from the XML file in to a particular field from the table.\n\nI will decided it depending upon the performance factor\n\n\n\nFor storing the XML file as it is, will there be any performance cause if\ncompared to storing values in particular fields.\n\n\n\nIf performance issue is not there for XML formats then we have around 12 to\n13 tables,\n\nif we store XML data as it is in all tables then is there any generic format\nfor select query?\n\n\n\n\n\nThanks\nSoni\n\n\nOn 8/27/06, George Pavlov <[email protected]> wrote:\n>\n> > On Fri, 2006-08-25 at 21:23 +0530, soni de wrote:\n> > > Hello,\n> > >\n> > > I want to ask, Is there any way to insert records from XML\n> > > file to the postgres database?\n> >\n> > Try the contrib/xml2 module.\n>\n> Alas, that module will not help you much with the insertion of records.\n> It is more about querying XML that is stored within the database.\n>\n> A basic question is whether you want to store XML in the DB or you just\n> have data that is in XML now and you want it loaded into a table\n> structure. The xml2 module will only be useful in the first case.\n>\n> In either case the approach is to transform the data into a form that\n> PGSQL's COPY understands or into a bunch of INSERT statements (much less\n> performant). To that end you probably want to become familiar with XSLT\n> unless the data is so simple that a processing with regular tools (perl,\n> sed, awk) will suffice.\n>\n> George\n>\n\n\nI am little bit confused between whether to insert XML file as it is or insert data from the XML file in to a particular field from the table.\n\nI will decided it depending upon the performance factor\n \nFor storing the XML file as it is, will there be any performance cause if compared to storing values in particular fields.\n\n \nIf performance issue is not there for XML formats then we have around 12 to 13 tables, \nif we store XML data as it is in all tables then is there any generic format for select query?\n \n \n\nThanks\nSoni \nOn 8/27/06, George Pavlov <[email protected]> wrote:\n> On Fri, 2006-08-25 at 21:23 +0530, soni de wrote:> > Hello,> >> > I want to ask, Is there any way to insert records from XML\n> > file to the postgres database?>> Try the contrib/xml2 module.Alas, that module will not help you much with the insertion of records.It is more about querying XML that is stored within the database.\nA basic question is whether you want to store XML in the DB or you justhave data that is in XML now and you want it loaded into a tablestructure. The xml2 module will only be useful in the first case.\nIn either case the approach is to transform the data into a form thatPGSQL's COPY understands or into a bunch of INSERT statements (much lessperformant). To that end you probably want to become familiar with XSLT\nunless the data is so simple that a processing with regular tools (perl,sed, awk) will suffice.George",
"msg_date": "Mon, 28 Aug 2006 11:01:11 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Related to Inserting into the database from XML file"
}
] |
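A minimal sketch of the load path described above: flatten the XML (with XSLT or a small perl/awk script) into a delimiter-separated file, then bulk load it with COPY. The table, columns and file name are invented for illustration, and the simple form of COPY used here works on old releases as well as current ones.

    CREATE TABLE book (
        title  text,
        author text,
        price  numeric
    );

    -- books.tsv holds one tab-separated row per <book> element,
    -- produced beforehand from the XML file by an XSLT stylesheet or script.
    COPY book FROM '/tmp/books.tsv';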
[
{
"msg_contents": "Hi !\n\nI'm looking for a way to change the \"max_connections\" parameter without \nrestarting the PostGreSQL database.\nAll the docs i found online are saying that this option can only be set \non startup (-N option to comand-line) or by changing it in postgresql.conf.\n\nDoes anyone know how to do it ?\n\nThanks\n\n-- \n-- Jean Arnaud\n-- Projet SARDES\n-- INRIA Rh�ne-Alpes / LSR-IMAG\n-- T�l. : +33 (0)4 76 61 52 80\n\n",
"msg_date": "Fri, 25 Aug 2006 18:57:08 +0200",
"msg_from": "Jean Arnaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changing max_connections without restart ?"
},
{
"msg_contents": "Jean Arnaud <[email protected]> writes:\n> I'm looking for a way to change the \"max_connections\" parameter without \n> restarting the PostGreSQL database.\n\nThere is none. That's one of the parameters that determines shared\nmemory array sizes, and we can't change those on-the-fly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2006 13:55:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing max_connections without restart ? "
}
] |
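For reference, the current value can at least be inspected at run time; raising it still means editing postgresql.conf (or using the -N startup option) and restarting the postmaster, as explained above.

    -- Check what the running server was started with:
    SHOW max_connections;

    -- To change it: set max_connections in postgresql.conf, then restart,
    -- e.g. with pg_ctl restart (a reload is not enough for this parameter).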
[
{
"msg_contents": "This did not have any takers in pgsql-general. Maybe\nperformance-oriented folks can shed light? The basic question is if\nthere is a way to preserve stats during pg_restore?\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of George Pavlov\nSent: Monday, August 21, 2006 3:15 PM\nTo: [email protected]\nSubject: [GENERAL] stats reset during pg_restore?\n\nI would like to analyze server stats offline, so I attempt to pg_dump my\nproduction database and then pg_restore it into another database. In the\nprocess all stats seem to be reset (they are not completely zeroed). So\nin production I have a table with the following stats (from\npg_stat_all_tables as an example):\n\nrelid | 25519576\nrelname | property_contact\nseq_scan | 5612\nseq_tup_read | 569971320\nidx_scan | 4486454\nidx_tup_fetch | 180100369\nn_tup_ins | 39114\nn_tup_upd | 17553\nn_tup_del | 21877\n\nAfter I restore the stats for the same table look like this:\n\nrelid | 104017313\nrelname | property_contact\nseq_scan | 9\nseq_tup_read | 992493\nidx_scan | 0\nidx_tup_fetch | 0\nn_tup_ins | 110277\nn_tup_upd | 0\nn_tup_del | 0\n\nThese look like stats for table accesses during the restore itself:\n11027 is indeed the number of rows in the table, and 992493 / 110277 =\n9, which happens to be the number of indexes and FK constraints on the\ntable.\n\nI do have stats_reset_on_server_start = off on both servers.\n\nCan someone share what exatly happens with stats upon restore? Also is\nthere anything one can do to keep them intact during a dump/restore? \n\nApologies if already discussed--I failed to find any references.\n\nTIA,\n\nGeorge\n",
"msg_date": "Sat, 26 Aug 2006 10:13:19 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "stats reset during pg_restore?"
},
{
"msg_contents": "George Pavlov wrote:\n> This did not have any takers in pgsql-general. Maybe\n> performance-oriented folks can shed light? The basic question is if\n> there is a way to preserve stats during pg_restore?\n\nNo, there isn't.\n\n> Can someone share what exatly happens with stats upon restore? Also is\n> there anything one can do to keep them intact during a dump/restore? \n\nThese stats are not stored in tables, only in memory and saved to a\nspecial file on disk to be able to preserve it across server stop/start.\nBut pg_dump does not make the slightest attempt to save it.\n\nAlso, you can't save it yourself -- while you could save the values it\nreturns on queries to the stats views, there is no way to feed those\nsaved values back to the system after a dump/restore.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Sat, 26 Aug 2006 22:45:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats reset during pg_restore?"
},
{
"msg_contents": "> These stats are not stored in tables, only in memory and saved to a\n> special file on disk to be able to preserve it across server \n> stop/start.\n> But pg_dump does not make the slightest attempt to save it.\n> \n> Also, you can't save it yourself -- while you could save the values it\n> returns on queries to the stats views, there is no way to feed those\n> saved values back to the system after a dump/restore.\n\nThanks! Sounds like I just need to query the stats tables and save the\noutput for oofline analysis before I do a dump. \n\nBased on how it works it seems that a server crash might lose the\nin-memory stats data as well? I imagine PITR does not take care of that\nspecial file (where is it by, by the way?). I have not worked with\nreplication (yet), but I imagine replica databases will also be agnostic\nof the master's stats?\n\n",
"msg_date": "Sun, 27 Aug 2006 07:24:21 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats reset during pg_restore?"
},
{
"msg_contents": "George Pavlov wrote:\n\n> Based on how it works it seems that a server crash might lose the\n> in-memory stats data as well?\n\nYeah, IIRC the postmaster removes the stat file after crash recovery.\nIt doesn't check the file for correctness.\n\n> I imagine PITR does not take care of that special file (where is it\n> by, by the way?). I have not worked with replication (yet), but I\n> imagine replica databases will also be agnostic of the master's stats?\n\nNeither PITR nor the replication systems I know about do anything about\nthe stats.\n\nThe file is $PGDATA/global/pgstat.stat\n\nThe code to read it, which is at the same time the documentation to its\nformat, is in src/backend/postmaster/pgstat.c, function\npgstat_read_statfile. It's quite simple. I think you could read it in\nPerl if you wanted; and rewrite the file again after a restore (you'd\nneed to change the Oids in the table entries, etc).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Sun, 27 Aug 2006 10:37:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats reset during pg_restore?"
}
] |
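Since the collector's counters cannot be carried through pg_dump, one workable approach along the lines discussed above is to snapshot the statistics views into ordinary tables just before dumping; the snapshot tables are then dumped and restored like any other data. The table names below are arbitrary.

    -- Run in the production database immediately before pg_dump:
    CREATE TABLE stat_tables_snapshot AS
        SELECT now() AS taken_at, * FROM pg_stat_all_tables;

    CREATE TABLE stat_indexes_snapshot AS
        SELECT now() AS taken_at, * FROM pg_stat_all_indexes;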
[
{
"msg_contents": "Hi all,\n\nI need help making the below query go faster. There are about 5 of \nthese being done\nin one report. Some use joins instead of subselects but they all go \nabout the same speed.\nI'm not much of a SQL person so I just threw indices on everything \ninvolved but it still\ntakes about at least 19sec and sometimes as much as 60s. I'd be happy \nto get it to about\n5s.\n\nOther info: random_page_cost is 4 as I/O on this box as kinda slow. \nIt's a converted desktop with a single IDE drive. shared_buffers is \n12500 and effective_page_cache is 100M. I upped the statistics on \nentry_date and session_id to 1000. I analyzed the tables after \nmodifying the statistics. The actual deployment platform is a lot \nbeefier but I'd like these queries to at least be tolerable on this \nmachine.\n\nI can see that the estimate for the GroupAggregate is off, if I'm \ninterpreting things correctly, but I don't know what to do about it.\n\ntia,\narturo\n\nQuery:\n\nSELECT subscription_id, \n to_char(sum(session_length), 'HH24:MI:SS') as session_length,\n sum(hits) as hits,\n 2006 as theYear,\n 2 as theQuarter,\n sum(count) as count\nFROM (\n SELECT subscription_id,\n count(distinct session_id) as count, \n age(MAX(entry_date),MIN(entry_date)) as session_length, \n COUNT(action) as hits\n FROM\n extended_user JOIN user_tracking USING (user_id)\n WHERE subscription_id > 0 AND\n EXTRACT(year from entry_date) = 2006 AND \n EXTRACT(quarter from entry_date) = 2\n GROUP BY session_id,\n subscription_id\n ) as session_stuff\n WHERE subscription_id > 0\n GROUP BY subscription_id\n ORDER BY subscription_id;\n\n\nSort (cost=123305.88..123306.38 rows=200 width=36) (actual \ntime=75039.706..75040.500 rows=258 loops=1)\n Sort Key: session_stuff.subscription_id\n -> HashAggregate (cost=123294.24..123298.24 rows=200 width=36) \n(actual time=75036.487..75038.360 rows=258 loops=1)\n -> GroupAggregate (cost=108839.34..118475.94 rows=240915 \nwidth=72) (actual time=68016.583..74702.710 rows=38369 loops=1)\n -> Sort (cost=108839.34..109441.63 rows=240915 \nwidth=72) (actual time=67978.193..68982.962 rows=245727 loops=1)\n Sort Key: user_tracking.session_id, \nextended_user.subscription_id\n -> Hash Join (cost=7746.59..75492.37 rows=240915 \nwidth=72) (actual time=16944.487..50737.230 rows=245727 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Bitmap Heap Scan on user_tracking \n(cost=7524.10..68644.10 rows=240950 width=72) (actual \ntime=16843.695..48306.383 rows=258923 loops=1)\n Recheck Cond: \n((date_part('quarter'::text, entry_date) = 2::double precision) AND \n(date_part('year'::text, entry_date) = 2006::double precision))\n -> BitmapAnd (cost=7524.10..7524.10 \nrows=240950 width=0) (actual time=16779.178..16779.178 rows=0 loops=1)\n -> Bitmap Index Scan on \nuser_tracking_quarter_idx (cost=0.00..3331.51 rows=533288 width=0) \n(actual time=9079.545..9079.545 rows=533492 loops=1)\n Index Cond: \n(date_part('quarter'::text, entry_date) = 2::double precision)\n -> Bitmap Index Scan on \nuser_tracking_year_idx (cost=0.00..4192.34 rows=671239 width=0) (actual \ntime=7685.906..7685.906 rows=671787 loops=1)\n Index Cond: \n(date_part('year'::text, entry_date) = 2006::double precision)\n -> Hash (cost=206.42..206.42 rows=6428 \nwidth=8) (actual time=100.754..100.754 rows=6411 loops=1)\n -> Seq Scan on extended_user \n(cost=0.00..206.42 rows=6428 width=8) (actual time=0.020..28.873 \nrows=6411 loops=1)\n Filter: ((subscription_id > 0) \nAND (subscription_id > 0))\n Total runtime: 
75069.453 ms\n\nTables:\n\nThis one has about 6-7k rows.\n\n Table \"public.extended_user\"\n Column | Type | Modifiers \n-------------------+--------------------------+-----------\n create_date | timestamp with time zone | not null\n email | character varying(99) | \n first_name | character varying(99) | not null\n last_name | character varying(99) | not null\n license_agreement | boolean | not null\n license_date | timestamp with time zone | \n password | character varying(32) | not null\n subscription_id | integer | not null\n user_id | integer | not null\n user_name | character varying(99) | not null\nIndexes:\n \"extended_user_pkey\" PRIMARY KEY, btree (user_id)\n \"extended_user_subscription_id_idx\" btree (subscription_id)\n \"extended_user_subscription_idx\" btree (subscription_id)\nForeign-key constraints:\n \"extended_user_subscription_id_fkey\" FOREIGN KEY (subscription_id) \nREFERENCES subscription(subscription_id) DEFERRABLE INITIALLY DEFERRED\n\nThis one has about 2k rows.\n\n Table \"public.subscription\"\n Column | Type | Modifiers \n------------------+--------------------------+-----------\n allow_printing | boolean | not null\n company_id | character varying(50) | not null\n company_name | character varying(100) | not null\n end_date | timestamp with time zone | \n licenses | integer | not null\n pass_through_key | character varying(50) | \n start_date | timestamp with time zone | not null\n subscription_id | integer | not null\nIndexes:\n \"subscription_pkey\" PRIMARY KEY, btree (subscription_id)\n\n\nThis one has about 1.4M rows. It's kind of a log of pages visited.\n\n Table \n\"public.user_tracking\"\n Column | Type | \nModifiers \n------------------+-----------------------------+------------------------\n--------------------------------------------------\n action | character varying(255) | not null\n entry_date | timestamp without time zone | not null\n note | text | \n report_id | integer | \n session_id | character varying(255) | not null\n user_id | integer | \n user_tracking_id | integer | not null default \nnextval('user_tracking_user_tracking_id_seq'::regclass)\nIndexes:\n \"user_tracking_pkey\" PRIMARY KEY, btree (user_tracking_id)\n \"user_tracking_entry_date_idx\" btree (entry_date)\n \"user_tracking_month_idx\" btree (date_part('month'::text, \nentry_date))\n \"user_tracking_quarter_idx\" btree (date_part('quarter'::text, \nentry_date))\n \"user_tracking_report_id_idx\" btree (report_id)\n \"user_tracking_session_idx\" btree (session_id)\n \"user_tracking_user_id_idx\" btree (user_id)\n \"user_tracking_year_idx\" btree (date_part('year'::text, entry_date))\nForeign-key constraints:\n \"user_tracking_report_id_fkey\" FOREIGN KEY (report_id) REFERENCES \narticle(article_id) DEFERRABLE INITIALLY DEFERRED\n \"user_tracking_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES \nextended_user(user_id) DEFERRABLE INITIALLY DEFERRED\n",
"msg_date": "Sat, 26 Aug 2006 16:11:54 -0400",
"msg_from": "Hayes <[email protected]>",
"msg_from_op": true,
"msg_subject": "[8.1.4] Help optimizing query"
},
{
"msg_contents": "Without having looked at this in detail my first suggestion would be to\ndo away with those date_part indices. I have found that indexes with few\ndistinct values usually hurt more then help and the PG optimizer is not\nalways smart enough to ignore them and the BitmapAnd and scan for dates\nseem like a waste since you can consolidate that information from the\nget-go, e.g. could you rewrite your WHERE clause to be something like:\n\n WHERE date_trunc('quarter', entry_date) = '2006-04-01' -- for 2nd\nquarter of '06\n\nor\n\n WHERE entry_date >= '2006-04-01' \n AND entry_date < '2006-07-01' \n\nYou could try an index on either the date_trunc or the entry_date\nitself, as appropriate. \n\nI assume your user_tracking table is being inserted onto on each visit,\nso you may want to be very cautious with what indexes you have on it\nanyway and pare those down.\n\nBTW, you have a redundant \"WHERE subscription_id > 0\" in the outer\nquery, not that that affects much.\n\n From then on you may want to materialize the subselect so that all of\nyour five queries use it or, if possible, consolidate those five queries\ninto one or at least less than five. You can go even further -- seems\nlike this would be a good candidate for an aggregated reporting table\n(essentially your subselect as a separate table updated by triggers, or\nevery night, or whatever). Especially since the situation will only get\nworse as you have more data in your system.\n\nGeorge\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Hayes\n> Sent: Saturday, August 26, 2006 1:12 PM\n> To: [email protected]\n> Subject: [PERFORM] [8.1.4] Help optimizing query\n> \n> Hi all,\n> \n> I need help making the below query go faster. There are about 5 of \n> these being done\n> in one report. Some use joins instead of subselects but they all go \n> about the same speed.\n> I'm not much of a SQL person so I just threw indices on everything \n> involved but it still\n> takes about at least 19sec and sometimes as much as 60s. I'd \n> be happy \n> to get it to about\n> 5s.\n> \n> Other info: random_page_cost is 4 as I/O on this box as kinda slow. \n> It's a converted desktop with a single IDE drive. shared_buffers is \n> 12500 and effective_page_cache is 100M. I upped the statistics on \n> entry_date and session_id to 1000. I analyzed the tables after \n> modifying the statistics. 
The actual deployment platform is a lot \n> beefier but I'd like these queries to at least be tolerable on this \n> machine.\n> \n> I can see that the estimate for the GroupAggregate is off, if I'm \n> interpreting things correctly, but I don't know what to do about it.\n> \n> tia,\n> arturo\n> \n> Query:\n> \n> SELECT subscription_id, \n> to_char(sum(session_length), 'HH24:MI:SS') as session_length,\n> sum(hits) as hits,\n> 2006 as theYear,\n> 2 as theQuarter,\n> sum(count) as count\n> FROM (\n> SELECT subscription_id,\n> count(distinct session_id) as count, \n> age(MAX(entry_date),MIN(entry_date)) as session_length, \n> COUNT(action) as hits\n> FROM\n> extended_user JOIN user_tracking USING (user_id)\n> WHERE subscription_id > 0 AND\n> EXTRACT(year from entry_date) = 2006 AND \n> EXTRACT(quarter from entry_date) = 2\n> GROUP BY session_id,\n> subscription_id\n> ) as session_stuff\n> WHERE subscription_id > 0\n> GROUP BY subscription_id\n> ORDER BY subscription_id;\n> \n> \n> Sort (cost=123305.88..123306.38 rows=200 width=36) (actual \n> time=75039.706..75040.500 rows=258 loops=1)\n> Sort Key: session_stuff.subscription_id\n> -> HashAggregate (cost=123294.24..123298.24 rows=200 width=36) \n> (actual time=75036.487..75038.360 rows=258 loops=1)\n> -> GroupAggregate (cost=108839.34..118475.94 rows=240915 \n> width=72) (actual time=68016.583..74702.710 rows=38369 loops=1)\n> -> Sort (cost=108839.34..109441.63 rows=240915 \n> width=72) (actual time=67978.193..68982.962 rows=245727 loops=1)\n> Sort Key: user_tracking.session_id, \n> extended_user.subscription_id\n> -> Hash Join (cost=7746.59..75492.37 \n> rows=240915 \n> width=72) (actual time=16944.487..50737.230 rows=245727 loops=1)\n> Hash Cond: (\"outer\".user_id = \n> \"inner\".user_id)\n> -> Bitmap Heap Scan on user_tracking \n> (cost=7524.10..68644.10 rows=240950 width=72) (actual \n> time=16843.695..48306.383 rows=258923 loops=1)\n> Recheck Cond: \n> ((date_part('quarter'::text, entry_date) = 2::double precision) AND \n> (date_part('year'::text, entry_date) = 2006::double precision))\n> -> BitmapAnd \n> (cost=7524.10..7524.10 \n> rows=240950 width=0) (actual time=16779.178..16779.178 rows=0 loops=1)\n> -> Bitmap Index Scan on \n> user_tracking_quarter_idx (cost=0.00..3331.51 rows=533288 width=0) \n> (actual time=9079.545..9079.545 rows=533492 loops=1)\n> Index Cond: \n> (date_part('quarter'::text, entry_date) = 2::double precision)\n> -> Bitmap Index Scan on \n> user_tracking_year_idx (cost=0.00..4192.34 rows=671239 \n> width=0) (actual \n> time=7685.906..7685.906 rows=671787 loops=1)\n> Index Cond: \n> (date_part('year'::text, entry_date) = 2006::double precision)\n> -> Hash (cost=206.42..206.42 rows=6428 \n> width=8) (actual time=100.754..100.754 rows=6411 loops=1)\n> -> Seq Scan on extended_user \n> (cost=0.00..206.42 rows=6428 width=8) (actual time=0.020..28.873 \n> rows=6411 loops=1)\n> Filter: ((subscription_id > 0) \n> AND (subscription_id > 0))\n> Total runtime: 75069.453 ms\n> \n> Tables:\n> \n> This one has about 6-7k rows.\n> \n> Table \"public.extended_user\"\n> Column | Type | Modifiers \n> -------------------+--------------------------+-----------\n> create_date | timestamp with time zone | not null\n> email | character varying(99) | \n> first_name | character varying(99) | not null\n> last_name | character varying(99) | not null\n> license_agreement | boolean | not null\n> license_date | timestamp with time zone | \n> password | character varying(32) | not null\n> subscription_id | integer | not null\n> user_id | integer 
| not null\n> user_name | character varying(99) | not null\n> Indexes:\n> \"extended_user_pkey\" PRIMARY KEY, btree (user_id)\n> \"extended_user_subscription_id_idx\" btree (subscription_id)\n> \"extended_user_subscription_idx\" btree (subscription_id)\n> Foreign-key constraints:\n> \"extended_user_subscription_id_fkey\" FOREIGN KEY \n> (subscription_id) \n> REFERENCES subscription(subscription_id) DEFERRABLE INITIALLY DEFERRED\n> \n> This one has about 2k rows.\n> \n> Table \"public.subscription\"\n> Column | Type | Modifiers \n> ------------------+--------------------------+-----------\n> allow_printing | boolean | not null\n> company_id | character varying(50) | not null\n> company_name | character varying(100) | not null\n> end_date | timestamp with time zone | \n> licenses | integer | not null\n> pass_through_key | character varying(50) | \n> start_date | timestamp with time zone | not null\n> subscription_id | integer | not null\n> Indexes:\n> \"subscription_pkey\" PRIMARY KEY, btree (subscription_id)\n> \n> \n> This one has about 1.4M rows. It's kind of a log of pages visited.\n> \n> Table \n> \"public.user_tracking\"\n> Column | Type | \n> \n> Modifiers \n> ------------------+-----------------------------+-------------\n> -----------\n> --------------------------------------------------\n> action | character varying(255) | not null\n> entry_date | timestamp without time zone | not null\n> note | text | \n> report_id | integer | \n> session_id | character varying(255) | not null\n> user_id | integer | \n> user_tracking_id | integer | not null default \n> nextval('user_tracking_user_tracking_id_seq'::regclass)\n> Indexes:\n> \"user_tracking_pkey\" PRIMARY KEY, btree (user_tracking_id)\n> \"user_tracking_entry_date_idx\" btree (entry_date)\n> \"user_tracking_month_idx\" btree (date_part('month'::text, \n> entry_date))\n> \"user_tracking_quarter_idx\" btree (date_part('quarter'::text, \n> entry_date))\n> \"user_tracking_report_id_idx\" btree (report_id)\n> \"user_tracking_session_idx\" btree (session_id)\n> \"user_tracking_user_id_idx\" btree (user_id)\n> \"user_tracking_year_idx\" btree (date_part('year'::text, \n> entry_date))\n> Foreign-key constraints:\n> \"user_tracking_report_id_fkey\" FOREIGN KEY (report_id) REFERENCES \n> article(article_id) DEFERRABLE INITIALLY DEFERRED\n> \"user_tracking_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES \n> extended_user(user_id) DEFERRABLE INITIALLY DEFERRED\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n",
"msg_date": "Sun, 27 Aug 2006 09:40:49 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [8.1.4] Help optimizing query"
}
] |
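A sketch of the suggested rewrite of the inner query, using a plain range on entry_date so the existing user_tracking_entry_date_idx can be considered instead of the date_part indexes (the boundaries below correspond to Q2 2006):

    SELECT subscription_id,
           count(DISTINCT session_id)            AS count,
           age(max(entry_date), min(entry_date)) AS session_length,
           count(action)                         AS hits
    FROM extended_user
    JOIN user_tracking USING (user_id)
    WHERE subscription_id > 0
      AND entry_date >= '2006-04-01'
      AND entry_date <  '2006-07-01'
    GROUP BY session_id, subscription_id;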
[
{
"msg_contents": "I would like to talk to one of the org member in postgre about this issue.\nThis is critical for us. Please help.\n\nIt will be great, if you could provide your contact number to discuss on\nthis. \n\nThank you in advance. \n\nRegards, Ravi\n\n-----Original Message-----\nFrom: Ravindran G - TLS, Chennai. \nSent: Tuesday, August 22, 2006 6:09 PM\nTo: '[email protected]'\nSubject: Postgre SQL 7.1 cygwin performance issue.\nImportance: High\n\n\nHi,\n\nWe are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,\nP4). \n\nWe understand that the maximum connections that can be set is 64 in\nPostgresql 7.1 version. \n\nThe performance is very slow and some time the database is not getting\nconnected from our application because of this. \n\nPlease advise us on how to increase the performance by setting any\nattributes in configuration files ?. \n\nFind enclosed the configuration file. \n\nThanks and regards,\nRavi\n\n\nTo post a message to the mailing list, send it to\n [email protected]\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]\nSent: Tuesday, August 22, 2006 5:32 PM\nTo: ravig3\nSubject: 7E88-5CD9-AD0E : CONFIRM from pgsql-performance (subscribe)\n\n\n__ \nThe following request\n\n \"subscribe pgsql-performance ravig3 <[email protected]>\"\n\nwas sent to \nby ravig3 <[email protected]>.\n\nTo accept or reject this request, please do one of the following:\n\n1. If you have web browsing capability, visit\n \n<http://mail.postgresql.org/mj/mj_confirm/domain=postgresql.org?t=7E88-5CD9-\nAD0E>\n and follow the instructions there.\n\n2. Reply to [email protected] \n with one of the following two commands in the body of the message:\n\n accept\n reject\n\n (The number 7E88-5CD9-AD0E must be in the Subject header)\n\n3. Reply to [email protected] \n with one of the following two commands in the body of the message:\n \n accept 7E88-5CD9-AD0E\n reject 7E88-5CD9-AD0E\n\nYour confirmation is required for the following reason(s):\n\n The subscribe_policy rule says that the \"subscribe\" command \n must be confirmed by the person affected by the command.\n \n\nIf you do not respond within 4 days, a reminder will be sent.\n\nIf you do not respond within 7 days, this token will expire,\nand the request will not be completed.\n\nIf you would like to communicate with a person, \nsend mail to [email protected].\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.\n",
"msg_date": "Mon, 28 Aug 2006 13:20:48 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "Ravindran G - TLS, Chennai. wrote:\n> I would like to talk to one of the org member in postgre about this issue.\n> This is critical for us. Please help.\n> \n> It will be great, if you could provide your contact number to discuss on\n> this. \n\nSure, we're happy to help. You can contact several \"org members\" via\nthis mailing list. What's your problem exactly? If you're finding that\nthe 7.1 cygwin version is too slow, please consider migrating some\nsomething more recent. 8.1 runs natively on Windows, no Cygwin required.\nIt's much faster and doesn't have that limitation on the number of\nconnections.\n\nPlease note that the name is \"PostgreSQL\" and is usually shortened to\n\"Postgres\". It's never \"postgre\".\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 28 Aug 2006 09:32:06 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
}
] |
[
{
"msg_contents": "Hi,\n\nWe noticed a slowdown on our application while traffic was kinda\nheavy. The logics after reading the docs commanded us to trim the\nenlarged tables, run VACUUM ANALYZE and then expect fast\nperformance again; but it wasn't the case[1].\n\nOut of the blue, we dumped the database, removed it, recreated\nfrom the restore, and now the performance is lightning fast\nagain.\n\nDoes it look familiar to anyone? I thought running VACUUM ANALYZE\nafter a trim should be enough so that pg has assembled the data\nand has good statistical knowledge of the tables contents..\n\nThanks for any tips.\n\nRef: \n[1] Processes were always showing one/some postmaster on SELECT,\n a constant load of 1, and vmstat always showing activity in\n IO blocks out (application generate all sort of typical\n statements, some SELECT, UPDATE, INSERT either \"directly\" or\n through stored procedures)\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "28 Aug 2006 10:14:52 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "Hi, Guillaume\n\nGuillaume Cottenceau wrote:\n\n> We noticed a slowdown on our application while traffic was kinda\n> heavy. The logics after reading the docs commanded us to trim the\n> enlarged tables, run VACUUM ANALYZE and then expect fast\n> performance again; but it wasn't the case[1].\n\nWhat exactly do you mean with \"trim the enlarged tables\"?\n\n> Out of the blue, we dumped the database, removed it, recreated\n> from the restore, and now the performance is lightning fast\n> again.\n> \n> Does it look familiar to anyone? I thought running VACUUM ANALYZE\n> after a trim should be enough so that pg has assembled the data\n> and has good statistical knowledge of the tables contents..\n\nThis looks like either your free_space_map setting is way to low, or you\nhave index bloat.\n\nMaybe a VACUUM FULL fullowed by a REINDEX will have solved your problem.\n\nIt also might make sense to issue a CLUSTER instead (which combines the\neffects of VACUUM FULL, REINDEX and physically reordering the data).\n\nWhen the free_space_map is to low, VACUUM ANALYZE should have told you\nvia a warning (at least, if your logging is set appropriately).\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 28 Aug 2006 11:28:23 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "Hi Markus,\n\nThanks for your message.\n\n> Guillaume Cottenceau wrote:\n> \n> > We noticed a slowdown on our application while traffic was kinda\n> > heavy. The logics after reading the docs commanded us to trim the\n> > enlarged tables, run VACUUM ANALYZE and then expect fast\n> > performance again; but it wasn't the case[1].\n> \n> What exactly do you mean with \"trim the enlarged tables\"?\n\nWe have a couple of logs files which get larger over time\n(millions of rows). As they are log files, they can be trimmed\nfrom older values.\n \n> > Out of the blue, we dumped the database, removed it, recreated\n> > from the restore, and now the performance is lightning fast\n> > again.\n> > \n> > Does it look familiar to anyone? I thought running VACUUM ANALYZE\n> > after a trim should be enough so that pg has assembled the data\n> > and has good statistical knowledge of the tables contents..\n> \n> This looks like either your free_space_map setting is way to low, or you\n\nI don't know much about free_space_map. Trying to search in\ndocumentation, I found run time configuration of the two\nfollowing parameters for which the current values follow:\n\n max_fsm_pages is 20000\n max_fsm_relations is 1000\n\nDo they look low?\n\nNotice: table data is only 600M after trim (without indexes),\nwhile it was probably 3x to 10x this size before the trim.\nMachine is a 2G Dell 1850 with lsi logic megaraid.\n\n> have index bloat.\n\nCan you elaborate? I have created a couple of indexes (according\nto multiple models of use in our application) and they do take up\nquite some disk space (table dump is 600M but after restore it\ntakes up 1.5G on disk) but I thought they could only do good or\nnever be used, not impair performance..\n\n> Maybe a VACUUM FULL fullowed by a REINDEX will have solved your problem.\n\nSo these would have reordered the data for faster sequential\naccess which is not the case of VACUUM ANALYZE?\n \n> It also might make sense to issue a CLUSTER instead (which combines the\n> effects of VACUUM FULL, REINDEX and physically reordering the data).\n\nI was reluctant in using CLUSTER because you have to choose an\nindex and there are multiple indexes on the large tables..\n\n> When the free_space_map is to low, VACUUM ANALYZE should have told you\n> via a warning (at least, if your logging is set appropriately).\n\nUnfortunately, we didn't keep the logs of VACUUM ANALYZE, so I\ncan't be sure :/\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "28 Aug 2006 11:43:16 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "Guillaume,\n\nOn 28 Aug 2006 11:43:16 +0200, Guillaume Cottenceau <[email protected]> wrote:\n> max_fsm_pages is 20000\n> max_fsm_relations is 1000\n> Do they look low?\n\nYes they are probably too low if you don't run VACUUM on a regular\nbasis and you have a lot of UPDATE/DELETE activity. FSM doesn't take a\nlot of memory so it's usually recommended to have a confortable value\nfor it.\n\nI usually recommend to read:\nhttp://www.pervasive-postgres.com/instantkb13/article.aspx?id=10116&cNode=5K1C3W\nhttp://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W\nto understand better what VACUUM and FSM mean.\n\n> Can you elaborate? I have created a couple of indexes (according\n> to multiple models of use in our application) and they do take up\n> quite some disk space (table dump is 600M but after restore it\n> takes up 1.5G on disk) but I thought they could only do good or\n> never be used, not impair performance..\n\nIndex slow downs write activity (you have to maintain them). It's not\nalways a good idea to create them.\n\n> > Maybe a VACUUM FULL fullowed by a REINDEX will have solved your problem.\n>\n> So these would have reordered the data for faster sequential\n> access which is not the case of VACUUM ANALYZE?\n\nVACUUM ANALYZE won't help you if your database is completely bloated.\nAnd AFAICS you're not running it on a regular basis so your database\nwas probably completely bloated which means:\n- bloated indexes,\n- bloated tables (ie a lot of fragmentation in the pages which means\nthat you need far more pages to store the same data).\n\nThe only ways to solve this situation is either to dump/restore or run\na VACUUM FULL ANALYZE (VERBOSE is better to keep a log), and\neventually reindex any bloated index (depends on your situation).\n\n> > When the free_space_map is to low, VACUUM ANALYZE should have told you\n> > via a warning (at least, if your logging is set appropriately).\n>\n> Unfortunately, we didn't keep the logs of VACUUM ANALYZE, so I\n> can't be sure :/\n\nYou should really run VACUUM ANALYZE VERBOSE on a regular basis and\nanalyze the logs to be sure your VACUUM strategy and FSM settings are\nOK.\n\nI developed http://pgfouine.projects.postgresql.org/vacuum.html to\nhelp us doing it on our production databases.\n\nRegards,\n\n--\nGuillaume\n",
"msg_date": "Mon, 28 Aug 2006 12:17:08 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
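The one-time cleanup described above can be scripted in a few statements. A minimal sketch, assuming a single bloated log table named app_log (a hypothetical name, not one taken from this thread):

-- Rewrite the table compactly, refresh planner statistics, and log what was freed.
VACUUM FULL ANALYZE VERBOSE app_log;
-- Rebuild the table's indexes in case they are bloated as well.
REINDEX TABLE app_log;
-- Alternative: CLUSTER rewrites the table in the order of one chosen index and
-- rebuilds all of its indexes, but one index has to be picked (7.4/8.1 syntax shown).
-- CLUSTER app_log_entry_date_idx ON app_log;

Both VACUUM FULL and CLUSTER take an exclusive lock on the table, so they are best run in a maintenance window.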
{
"msg_contents": "Hi, Guillaume,\n\nGuillaume Cottenceau wrote:\n\n> We have a couple of logs files which get larger over time\n> (millions of rows). As they are log files, they can be trimmed\n> from older values.\n\nAh, ok, you DELETEd the old rows.\n\nSo I assume that you never UPDATE, but only INSERT new entries and\nsometimes DELETE a big bunch of entries from the beginning.\n\nThis is a special usage pattern, where the normal \"VACUUM\" is not well\nsuited for.\n\nDELETing rows itsself does not free any space. Only after your\ntransaction is committed, a following VACUUM FULL or CLUSTER does\nactually free the space.\n\nVACUUM and VACUUM ANALYZE only remove obsolete rows from the pages and\nmarks them free (by entering them into the free space map, as long as\nthat one is large enough). That means that your table will actually stay\nas large as before, having 90% of free pages at the beginning and 10%\nused pages at the very end. New INSERTs and UPDATEs will prefer to use\npages from the free space map before allocating new pages, but the\nexisting rows will stay forever.\n\nNow, VACUUM FULL actively moves rows to the beginning of the table,\nallowing to cut the end of the table, while CLUSTER recreates the table\nfrom scratch, in index order. Both lead to a compact storage, having all\nused rows at the beginning, and no free pages.\n\nSo, I think, in your case VACUUM FULL and CLUSTER would both have solved\nyour problem.\n\n> max_fsm_pages is 20000\n> Do they look low?\n> Notice: table data is only 600M after trim (without indexes),\n> while it was probably 3x to 10x this size before the trim.\n\n10x the size means 6G, so 5.4G of data were freed by the trim. Each page\nhas 8k in size, so the fsm needs about 675000 pages. So, yes, for your\nusage, they look low, and give very suboptimal results.\n\n>> have index bloat.\n> \n> Can you elaborate? I have created a couple of indexes (according\n> to multiple models of use in our application) and they do take up\n> quite some disk space (table dump is 600M but after restore it\n> takes up 1.5G on disk) but I thought they could only do good or\n> never be used, not impair performance..\n\nLike tables, indices may suffer from getting bloated by old, unused\nentries. Especially the GIST based indices in 7.4 (used by PostGIS and\nother plugins) suffered from that problem[1], but recent PostgreSQL\nversions have improved in this area.\n\nNow, when the query planner decides to use an index, the index access is\nextremely slow because of all the deleted entries the index scan has to\nskip.\n\nHowever, from the additional information you gave above, I doubt it was\nindex bloat.\n\n>> Maybe a VACUUM FULL fullowed by a REINDEX will have solved your problem.\n> \n> So these would have reordered the data for faster sequential\n> access which is not the case of VACUUM ANALYZE?\n\nA VACUUM FULL would have reordered the data, and a REINDEX would have\noptimized the index.\n\n>> It also might make sense to issue a CLUSTER instead (which combines the\n>> effects of VACUUM FULL, REINDEX and physically reordering the data).\n> \n> I was reluctant in using CLUSTER because you have to choose an\n> index and there are multiple indexes on the large tables..\n\nUsually, CLUSTERing on one index does not necessarily slow down accesses\non other indices, compared to the non-clustered (= random) table before.\n\nIf you have some indices that are somehow related (e. G. 
a timestamp and\na serial number), CLUSTERing on one index automatically helps the\nother index, especially as the query planner uses correlation statistics.\n\nBtw, if your queries often include 2 or 3 columns, a multi-column index\n(and clustering on that index) might be the best.\n\n>> When the free_space_map is to low, VACUUM ANALYZE should have told you\n>> via a warning (at least, if your logging is set appropriately).\n> \n> Unfortunately, we didn't keep the logs of VACUUM ANALYZE, so I\n> can't be sure :/\n\nAFAIK, the warning is also output on the psql command line.\n\nHTH,\nMarkus\n\n[1] We once had an index that was about 100 times larger before REINDEX.\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 28 Aug 2006 12:34:20 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
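The max_fsm_pages arithmetic from the message above, restated as a query; it simply reproduces the 5.4 GB / 8 kB estimate, and the actual page size of a given server is whatever current_setting('block_size') reports:

-- ~5.4 GB freed by the trim, divided by the 8 kB page size, gives the number
-- of pages the free space map would need to be able to track.
SELECT ceil(5.4 * 1000 * 1000 / 8) AS fsm_pages_needed;  -- roughly 675000
-- max_fsm_pages would then be raised to at least this value in postgresql.conf.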
{
"msg_contents": "Guillaume,\n\nThanks for your help.\n\n> On 28 Aug 2006 11:43:16 +0200, Guillaume Cottenceau <[email protected]> wrote:\n> > max_fsm_pages is 20000\n> > max_fsm_relations is 1000\n> > Do they look low?\n> \n> Yes they are probably too low if you don't run VACUUM on a regular\n> basis and you have a lot of UPDATE/DELETE activity. FSM doesn't take a\n> lot of memory so it's usually recommended to have a confortable value\n> for it.\n\nNormally, we run VACUUM ANALYZE overnight. I'd say we have low\nDELETE activity, kinda high SELECT/INSERT activity, and UPDATE\nwould be in the middle of that.\n \n> I usually recommend to read:\n> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10116&cNode=5K1C3W\n> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W\n> to understand better what VACUUM and FSM mean.\n\nThanks for the pointer, will read that.\n \n> > Can you elaborate? I have created a couple of indexes (according\n> > to multiple models of use in our application) and they do take up\n> > quite some disk space (table dump is 600M but after restore it\n> > takes up 1.5G on disk) but I thought they could only do good or\n> > never be used, not impair performance..\n> \n> Index slow downs write activity (you have to maintain them). It's not\n> always a good idea to create them.\n\nOf course. How newbie did I look :/. The thing is that I once did\na few measurements and noticed no (measurable) impact in INSERT\nwith a supplementary index, so I (wrongly) forgot about this.\n \n> > > Maybe a VACUUM FULL fullowed by a REINDEX will have solved your problem.\n> >\n> > So these would have reordered the data for faster sequential\n> > access which is not the case of VACUUM ANALYZE?\n> \n> VACUUM ANALYZE won't help you if your database is completely bloated.\n\nWhat do you mean exactly by bloated? If you mean that there is a\nlot of (unused) data, the thing is that our trim removed most of\nit. I was kinda hoping that after analyzing the database, the old\ndata would exit the whole picture, which obviously wasn't the\ncase.\n\nAbout REINDEX: is it ok to consider that REINDEX is to indexes\nwhat VACUUM FULL is to table data, because it cleans up unused\nindex pages?\n\n> And AFAICS you're not running it on a regular basis so your database\n> was probably completely bloated which means:\n> - bloated indexes,\n> - bloated tables (ie a lot of fragmentation in the pages which means\n> that you need far more pages to store the same data).\n\nI suppose that table fragmentation occurs when DELETE are\ninterleaved with INSERT?\n \n> The only ways to solve this situation is either to dump/restore or run\n> a VACUUM FULL ANALYZE (VERBOSE is better to keep a log), and\n> eventually reindex any bloated index (depends on your situation).\n\nOk.\n \n> > > When the free_space_map is to low, VACUUM ANALYZE should have told you\n> > > via a warning (at least, if your logging is set appropriately).\n> >\n> > Unfortunately, we didn't keep the logs of VACUUM ANALYZE, so I\n> > can't be sure :/\n> \n> You should really run VACUUM ANALYZE VERBOSE on a regular basis and\n> analyze the logs to be sure your VACUUM strategy and FSM settings are\n> OK.\n\nVACUUM ANALYZE is normally run overnight (each night). Is it not\nregular enough? There can be hundreds of thousands of statements\na day.\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "28 Aug 2006 14:31:47 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "> > We have a couple of logs files which get larger over time\n> > (millions of rows). As they are log files, they can be trimmed\n> > from older values.\n> \n> Ah, ok, you DELETEd the old rows.\n\nYes.\n \n> So I assume that you never UPDATE, but only INSERT new entries and\n> sometimes DELETE a big bunch of entries from the beginning.\n\nActually, in the version of software where we have the problem,\nthat's exactly the case. But in newer versions, UPDATE come into\nthe picture (typically on recently inserted rows - one or two\nupdates per row). Does UPDATE change anything? Row selection is\ndone on the primary key (of SERIAL type).\n\n> This is a special usage pattern, where the normal \"VACUUM\" is not well\n> suited for.\n> \n> DELETing rows itsself does not free any space. Only after your\n> transaction is committed, a following VACUUM FULL or CLUSTER does\n> actually free the space.\n> \n> VACUUM and VACUUM ANALYZE only remove obsolete rows from the pages and\n> marks them free (by entering them into the free space map, as long as\n> that one is large enough). That means that your table will actually stay\n> as large as before, having 90% of free pages at the beginning and 10%\n> used pages at the very end. New INSERTs and UPDATEs will prefer to use\n> pages from the free space map before allocating new pages, but the\n> existing rows will stay forever.\n\nYes, that what I had in mind. But I assumed that performance\nwould be reclaimed (as if VACUUM FULL was run) because the\nstatistics after analyzing are accurate as to data distribution,\nonly disk space would not be reclaimed (but we don't care, at\nleast for the moment).\n \n> Now, VACUUM FULL actively moves rows to the beginning of the table,\n> allowing to cut the end of the table, while CLUSTER recreates the table\n> from scratch, in index order. Both lead to a compact storage, having all\n> used rows at the beginning, and no free pages.\n\nI actually assumed that VACUUM ANALYZE would order rows\nsequentially on disk (mainly because it was taking quite some\ntime and a lot of disk output activity), but obviously this was\nwrong.\n \n> So, I think, in your case VACUUM FULL and CLUSTER would both have solved\n> your problem.\n\nOk.\n \n> > max_fsm_pages is 20000\n> > Do they look low?\n> > Notice: table data is only 600M after trim (without indexes),\n> > while it was probably 3x to 10x this size before the trim.\n> \n> 10x the size means 6G, so 5.4G of data were freed by the trim. Each page\n> has 8k in size, so the fsm needs about 675000 pages. So, yes, for your\n> usage, they look low, and give very suboptimal results.\n\n\"max_fsm_pages = 675000\" means we also need to enlarge shared\nbuffers, or the shared buffers available space for data caching\nwould be reduced, right?\n\nI guess the bottom line is that I don't understand what the Free\nSpace Map behaviour really is. Is it a map containing location of\nfree disk pages, free meaning that they correspond to pages\nremoved with DELETE but not yet released to the OS with VACUUM\nFULL, which are used for INSERT in favor of enlarging the size of\ndata used on disk? If that's correct, am I right in assuming that\nwe don't care about the Free Space Map size if we perform a\nVACUUM FULL right after large bunches of DELETE?\n \n> >> have index bloat.\n> > \n> > Can you elaborate? 
I have created a couple of indexes (according\n> > to multiple models of use in our application) and they do take up\n> > quite some disk space (table dump is 600M but after restore it\n> > takes up 1.5G on disk) but I thought they could only do good or\n> > never be used, not impair performance..\n> \n> Like tables, indices may suffer from getting bloated by old, unused\n> entries. Especially the GIST based indices in 7.4 (used by PostGIS and\n> other plugins) suffered from that problem[1], but recent PostgreSQL\n> versions have improved in this area.\n\nWe actually are obliged to use 7.4.5 :/\n\nAm I correct in assuming that regularly running REINDEX would cut\nthis bloat? (daily)\n\n(documentation very much insists on solving index data corruption\nwith REINDEX and doesn't talk much about removing old obsolete\ndata)\n\n(also, is there any way to REINDEX all index of all tables\neasily? as when we do just \"VACUUM ANALYZE\" for the whole\ndatabase)\n\n> Now, when the query planner decides to use an index, the index access is\n> extremely slow because of all the deleted entries the index scan has to\n> skip.\n\nI see.\n \n> However, from the additional information you gave above, I doubt it was\n> index bloat.\n\n[...]\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "28 Aug 2006 15:07:33 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "Hi, Guillaume,\n\nGuillaume Cottenceau wrote:\n\n> About REINDEX: is it ok to consider that REINDEX is to indexes\n> what VACUUM FULL is to table data, because it cleans up unused\n> index pages?\n\nYes, roughly speaking.\n\n>> And AFAICS you're not running it on a regular basis so your database\n>> was probably completely bloated which means:\n>> - bloated indexes,\n>> - bloated tables (ie a lot of fragmentation in the pages which means\n>> that you need far more pages to store the same data).\n> \n> I suppose that table fragmentation occurs when DELETE are\n> interleaved with INSERT?\n\nYes, and it gets ugly as soon as the fsm setting is to low / VACUUM\nfrequency is to low, so it cannot keep up.\n\nBig bunches of UPDATE/DELETE that hit more than, say 20% of the table\nbetween VACUUM runs, justify a VACUUM FULL in most cases.\n\n> VACUUM ANALYZE is normally run overnight (each night). Is it not\n> regular enough? There can be hundreds of thousands of statements\n> a day.\n\nWhich PostgreSQL version are you using? Maybe you should consider\nautovacuum (which is a contrib module at least since 7.4, and included\nin the server since 8.1). If you think that vacuum during working hours\nputs too much load on your server, there are options to tweak that, at\nleast in 8.1.\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 28 Aug 2006 15:13:48 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "Hi, Guillaume,\n\nGuillaume Cottenceau wrote:\n\n>> So I assume that you never UPDATE, but only INSERT new entries and\n>> sometimes DELETE a big bunch of entries from the beginning.\n> \n> Actually, in the version of software where we have the problem,\n> that's exactly the case. But in newer versions, UPDATE come into\n> the picture (typically on recently inserted rows - one or two\n> updates per row). Does UPDATE change anything? Row selection is\n> done on the primary key (of SERIAL type).\n\nIn a MVCC database like PostgreSQL, UPDATE internally INSERTs the new\nversion of the row, and marks the old one as deleted.\n\nLater transactions use the transaction's exit state (commit or rollback)\nto decide which row version to use. VACUUM removes row versions that are\nknown to be obsolete (that's why longstanding transactions hold VACUUM,\nbeause they still can reference old, obsolete versions.).\n\nSo, for a few updates that are at the end of the table, normal VACUUM\nwith a sufficient free space map setting will work okay.\n\nHowever, when updating or deleting big bunches of data (like the 90% you\nspoke of), VACUUM FULL or CLUSTER does make sense.\n\n> Yes, that what I had in mind. But I assumed that performance\n> would be reclaimed (as if VACUUM FULL was run) because the\n> statistics after analyzing are accurate as to data distribution,\n> only disk space would not be reclaimed (but we don't care, at\n> least for the moment).\n\nPerformance is not reclaimed for everything involving a sequential scan,\nas it still has to scan the whole table.\n\nIt is partially reclaimed for index scans on UPDATEd rows, as the old\nversions are removed, and so index have less versions to check for\nvalidity in the current transaction.\n\n> I actually assumed that VACUUM ANALYZE would order rows\n> sequentially on disk (mainly because it was taking quite some\n> time and a lot of disk output activity), but obviously this was\n> wrong.\n\nIt only does so inside each page, but not across pages.\n\n> \"max_fsm_pages = 675000\" means we also need to enlarge shared\n> buffers, or the shared buffers available space for data caching\n> would be reduced, right?\n\nAFAIK, the FSM is not a part of the shared buffers memory, but they both\naccount to the kernels shared memory limit, which you may have to increase.\n\n> I guess the bottom line is that I don't understand what the Free\n> Space Map behaviour really is. Is it a map containing location of\n> free disk pages, free meaning that they correspond to pages\n> removed with DELETE but not yet released to the OS with VACUUM\n> FULL, which are used for INSERT in favor of enlarging the size of\n> data used on disk?\n\nMostly, yes. VACUUM scans the whole table, that's why it has so much\ndisk IO. On every page, it first deletes obsolete rows (by checking\ntheir transaction IDs), and compacts the rest. It then appends the page\nto the free space map, if it contains free space and the fsm has a free\nslot left. 
As it does not move valid rows between pages, it can run\nconcurrently with \"real\" transactions and does not need a table lock.\n\nINSERT uses the FSM before enlarging the file, UPDATE first looks for\nfree space on the same page where the old row is (which avoids updating\nthe index), then the FSM, then enlarging the file.\n\n> If that's correct, am I right in assuming that\n> we don't care about the Free Space Map size if we perform a\n> VACUUM FULL right after large bunches of DELETE?\n\nI don't know exactly, but as far as I remember, VACUUM FULL uses the FSM\nmap itsself, as it must have free target pages to move the rows to.\n\nSo an insufficient FSM may lead to the need of several VACUUM FULL runs\nuntil the table is cleaned up, or might even fail completely.\n\nTom & co, please correct me if that statement above is imprecise.\n\n> We actually are obliged to use 7.4.5 :/\n\nI URGE you to update at least to 7.4.13 (which can be done in place,\nwithout dump/restore). For a list of the urgend bug fixes, see\nhttp://www.postgresql.org/docs/7.4/static/release.html#RELEASE-7-4-13\nwhich also contains hints for a smooth upgrade.\n\n> Am I correct in assuming that regularly running REINDEX would cut\n> this bloat? (daily)\n\nYes, a regular REINDEX will cut index bloat (but not table bloat).\n\nIf you have a maintainance window every night, but very high traffic\nduring the daytime, it might make sense to have a cron script issuing a\nbunch of VACUUM FULL / REINDEX / CLUSTER commands every night.\n\nBtw, CLUSTERing a table includes the effects of VACUUM FULL and REINDEX,\nbut not ANALYZE.\n\n> (also, is there any way to REINDEX all index of all tables\n> easily? as when we do just \"VACUUM ANALYZE\" for the whole\n> database)\n\nFor 7.4, you'll need a script to do that (current versions have improved\nin this area). You might recycle the idea from the pgsql-sql list some\ndays ago:\n\nhttp://archives.postgresql.org/pgsql-sql/2006-08/msg00184.php\n\nSimply use the meta tables to get a list of all schema.table names, and\ncreate the bunch of VACUUM FULL / REINDEX commands.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 28 Aug 2006 15:43:48 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
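One way to act on the meta-table hint above, sketched for 7.4; the schema filter is an assumption and may need adjusting. The generated statements can be saved to a file and fed back to psql:

-- Emit one VACUUM FULL ANALYZE and one REINDEX statement per user table.
SELECT 'VACUUM FULL ANALYZE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');

SELECT 'REINDEX TABLE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');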
{
"msg_contents": "Markus Schaber <schabi 'at' logix-tt.com> writes:\n\n> > VACUUM ANALYZE is normally run overnight (each night). Is it not\n> > regular enough? There can be hundreds of thousands of statements\n> > a day.\n> \n> Which PostgreSQL version are you using? Maybe you should consider\n> autovacuum (which is a contrib module at least since 7.4, and included\n> in the server since 8.1). If you think that vacuum during working hours\n> puts too much load on your server, there are options to tweak that, at\n> least in 8.1.\n\nOk, thanks. Unfortunately production insists on sticking on 7.4.5\nfor the moment :/\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "28 Aug 2006 15:47:42 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
},
{
"msg_contents": "On Mon, 2006-08-28 at 08:47, Guillaume Cottenceau wrote:\n> Markus Schaber <schabi 'at' logix-tt.com> writes:\n> \n> > > VACUUM ANALYZE is normally run overnight (each night). Is it not\n> > > regular enough? There can be hundreds of thousands of statements\n> > > a day.\n> > \n> > Which PostgreSQL version are you using? Maybe you should consider\n> > autovacuum (which is a contrib module at least since 7.4, and included\n> > in the server since 8.1). If you think that vacuum during working hours\n> > puts too much load on your server, there are options to tweak that, at\n> > least in 8.1.\n> \n> Ok, thanks. Unfortunately production insists on sticking on 7.4.5\n> for the moment :/\n\nThere are known data loss bugs in that version. You should at least\nmake them update to 7.4.13. Running 7.4.5 instead of 7.4.13 is a bad\ndecision. Note that there is no need for migrating your data or any of\nthat with an update within the same major / minor version. As long as\nthe first two numbers don't change, it's a very simple and fast upgrade.\n\nNOT doing it is negligent.\n",
"msg_date": "Mon, 28 Aug 2006 11:25:41 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf pb solved only after pg_dump and restore"
}
] |
[
{
"msg_contents": "Thanks Alvaro.\n\nWe are using PostgreSQL 7.1 cygwin installed on Windows 2000.\n\nWe understand that the maximum connections that can be set is 64 in\nPostgresql 7.1 version. \n\nBut our application is installed in 8 / 10 PC or more than that and it opens\nmultiple connections and it exceeds 64. \n\nBecause of this the subsequent connections are failed to connect with DB\nfrom application.\n\nPlease advise us on how to resolve this ?. \n\n--\n\nMigrating to 8.1 may not be possible at this point of time due to some\nreasons. \n\nRegards, Ravi\n\n\n-----Original Message-----\nFrom: Alvaro Herrera [mailto:[email protected]]\nSent: Monday, August 28, 2006 7:02 PM\nTo: Ravindran G - TLS, Chennai.\nCc: [email protected]\nSubject: Re: [PERFORM] Postgre SQL 7.1 cygwin performance issue.\n\n\nRavindran G - TLS, Chennai. wrote:\n> I would like to talk to one of the org member in postgre about this issue.\n> This is critical for us. Please help.\n> \n> It will be great, if you could provide your contact number to discuss on\n> this. \n\nSure, we're happy to help. You can contact several \"org members\" via\nthis mailing list. What's your problem exactly? If you're finding that\nthe 7.1 cygwin version is too slow, please consider migrating some\nsomething more recent. 8.1 runs natively on Windows, no Cygwin required.\nIt's much faster and doesn't have that limitation on the number of\nconnections.\n\nPlease note that the name is \"PostgreSQL\" and is usually shortened to\n\"Postgres\". It's never \"postgre\".\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.\n",
"msg_date": "Mon, 28 Aug 2006 19:30:44 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "On 8/28/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\n> Thanks Alvaro.\n>\n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000.\n>\n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version.\n>\n> But our application is installed in 8 / 10 PC or more than that and it opens\n> multiple connections and it exceeds 64.\n>\n> Because of this the subsequent connections are failed to connect with DB\n> from application.\n>\n> Please advise us on how to resolve this ?.\n\nI don't think you have any answer other than to migrate to a\nbetter-supportable version of PostgreSQL.\n\nThe last release of 7.1 was in August 2001; you're using a version\nthat is now over five years old, with known \"it'll eat your data\"\nproblems. That is why there have been some fifty-odd subsequent\nreleases.\n\nThe right answer is to arrange for an upgrade to a much less antiquated version.\n\nYou're going to be pretty well restricted to the 64 connections until\nyou upgrade to a more recent version.\n\nThere is an alternative: You could migrate to some Unix-like platform\n(such as Linux or FreeBSD) where version 7.1.3 could in fact support\nmore than 64 connections.\n-- \nhttp://www3.sympatico.ca/cbbrowne/linux.html\nOddly enough, this is completely standard behaviour for shells. This\nis a roundabout way of saying `don't use combined chains of `&&'s and\n`||'s unless you think Gödel's theorem is for sissies'.\n",
"msg_date": "Mon, 28 Aug 2006 14:17:51 +0000",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "am Mon, dem 28.08.2006, um 19:30:44 +0530 mailte Ravindran G - TLS, Chennai. folgendes:\n> Thanks Alvaro.\n> \n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000.\n\n*grrr*\n\n> \n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version. \n> \n> But our application is installed in 8 / 10 PC or more than that and it opens\n> multiple connections and it exceeds 64. \n> \n> Because of this the subsequent connections are failed to connect with DB\n> from application.\n> \n> Please advise us on how to resolve this ?. \n\nI'm not sure, but perhaps, pgpool can solve your problem.\n\n> \n> --\n> \n> Migrating to 8.1 may not be possible at this point of time due to some\n> reasons. \n\nPity. 8.1 is *very* nice, and 7.1 *very* old, slow and out of\nlife-cycle.\n\n\n\n> \n> Regards, Ravi\n> \n> \n> -----Original Message-----\n\nPlease, no top-posting with fullquote below.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Mon, 28 Aug 2006 16:18:34 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "Ravindran G - TLS, Chennai. wrote:\n> Thanks Alvaro.\n> \n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000.\n> \n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version. \n\nThis is because of Cygwin limitations.\n\n> But our application is installed in 8 / 10 PC or more than that and it opens\n> multiple connections and it exceeds 64. \n> \n> Because of this the subsequent connections are failed to connect with DB\n> from application.\n> \n> Please advise us on how to resolve this ?. \n\nThere's no solution short of upgrading.\n\n> Migrating to 8.1 may not be possible at this point of time due to some\n> reasons. \n\nThat's too bad :-(\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 28 Aug 2006 10:19:59 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "On 8/28/06, Alvaro Herrera <[email protected]> wrote:\n> > Please advise us on how to resolve this ?.\n>\n> There's no solution short of upgrading.\n\nThat's a little too negative. There is at least one alternative,\npossibly two...\n\n1. Migrate the database to a Unix platform that does not suffer from\nthe Cygwin 64 connection restriction. (If running Linux, it may be\nnecessary to look for an old release, as there were changes to GLIBC\nat around the same time as 7.2 that don't play perfectly well with\n7.1...)\n\n2. It is *possible* that pg_pool could be usable as a proxy that\nlimits the number of connections actually used. I'm not sure how well\nit'll play with 7.1, mind you...\n-- \nhttp://www3.sympatico.ca/cbbrowne/linux.html\nOddly enough, this is completely standard behaviour for shells. This\nis a roundabout way of saying `don't use combined chains of `&&'s and\n`||'s unless you think Gödel's theorem is for sissies'.\n",
"msg_date": "Mon, 28 Aug 2006 14:28:42 +0000",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "On Mon, 2006-08-28 at 09:00, Ravindran G - TLS, Chennai. wrote:\n> Thanks Alvaro.\n> \n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000.\n> \n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version. \n> \n> But our application is installed in 8 / 10 PC or more than that and it opens\n> multiple connections and it exceeds 64. \n> \n> Because of this the subsequent connections are failed to connect with DB\n> from application.\n> \n> Please advise us on how to resolve this ?. \n\nAs someone else mentioned, pg_pool might help here. But you're kind of\nfighting an uphill battle here. I'm guessing that your effort will be\nbetter spent on upgrading your db server than on trying to patch up the\nsystem you have.\n\nIs there any chance of changing your client app so it doesn't open so\nmany connections? That would seem the easiest fix of all, if you have\naccess to that code.\n",
"msg_date": "Mon, 28 Aug 2006 09:56:46 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "\"Christopher Browne\" <[email protected]> writes:\n> On 8/28/06, Alvaro Herrera <[email protected]> wrote:\n>> There's no solution short of upgrading.\n\n> That's a little too negative. There is at least one alternative,\n> possibly two...\n\nBut both of those would probably involve work comparable to an upgrade.\n\nThere is another reason for not encouraging these folk to stay on 7.1\nindefinitely, which is that 7.1 still has the transaction ID wraparound\nproblem. It *will* --- not might, WILL --- eat their data someday.\nWithout knowing anything about their transaction rate, I can't say\nwhether that will happen tomorrow or not for many years, but insisting\non staying on 7.1 is a dangerous game.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2006 11:13:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue. "
},
{
"msg_contents": "On 8/28/06, Tom Lane <[email protected]> wrote:\n> \"Christopher Browne\" <[email protected]> writes:\n> > On 8/28/06, Alvaro Herrera <[email protected]> wrote:\n> >> There's no solution short of upgrading.\n>\n> > That's a little too negative. There is at least one alternative,\n> > possibly two...\n>\n> But both of those would probably involve work comparable to an upgrade.\n\nWe don't know what is preventing the upgrade; we haven't been told\nanything about the details surrounding that.\n\n> There is another reason for not encouraging these folk to stay on 7.1\n> indefinitely, which is that 7.1 still has the transaction ID wraparound\n> problem. It *will* --- not might, WILL --- eat their data someday.\n> Without knowing anything about their transaction rate, I can't say\n> whether that will happen tomorrow or not for many years, but insisting\n> on staying on 7.1 is a dangerous game.\n\nFair enough. I would only suggest these workarounds as a way of\ngetting a bit of temporary \"breathing room\" before doing the upgrade.\n\nThese should at best be considered temporary workarounds, because\nthere are around 50 releases that have been made since 7.1.3. All but\na handful of those releases (namely 7.2.0, 7.3.0, 7.4.0, 8.0.0, and\n8.1.0) were created because of discovering \"eat your data\" problems of\none variety or another.\n-- \nhttp://www3.sympatico.ca/cbbrowne/linux.html\nOddly enough, this is completely standard behaviour for shells. This\nis a roundabout way of saying `don't use combined chains of `&&'s and\n`||'s unless you think Gödel's theorem is for sissies'.\n",
"msg_date": "Mon, 28 Aug 2006 16:20:07 +0000",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "On 8/28/06, Christopher Browne <[email protected]> wrote:\n> On 8/28/06, Tom Lane <[email protected]> wrote:\n> > \"Christopher Browne\" <[email protected]> writes:\n> > > On 8/28/06, Alvaro Herrera <[email protected]> wrote:\n> > >> There's no solution short of upgrading.\n> >\n> > > That's a little too negative. There is at least one alternative,\n> > > possibly two...\n> >\n> > But both of those would probably involve work comparable to an upgrade.\n>\n> We don't know what is preventing the upgrade; we haven't been told\n> anything about the details surrounding that.\n\nbe sure and check out\nhttp://archives.postgresql.org/pgsql-hackers/2006-08/msg00655.php.\n(read the entire thread) moving off 7.1 is a great idea, but it may\nor may not solve the connection the problem (its windows) :).\n\nmerlin\n",
"msg_date": "Mon, 28 Aug 2006 15:47:18 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
}
] |
[
{
"msg_contents": "I just put together a view, which helps us in indentifying which \ndatabase tables are suffering from space bloat, ie. they take up much \nmore space than they actually should. I though this might be useful for \nsome folk here, because the questions about bloat-related performance \ndegradation are quite common.\n\nWhen using this view, you are interested in tables, which have the \n\"bloat\" column higher that say 2.0 (in freshly dump/restored/analyzed \ndatabase they should all be around 1.0).\n\nThe bloat problem can be one-time fixed either by VACUUM FULL or \nCLUSTER, but if the problem is coming back after while, you should \nconsider doing VACUUM more often or increasing you FSM settings in \npostgresql.conf.\n\nI hope I did the view right, it is more or less accurate, for our \npurposes (for tables of just few pages the numbers may be off, but then \nagain, you are usually not much concerned about these tiny 5-page tables \nperformance-wise).\n\nHope this helps someone.\n\nHere comes the view.\n\n\nCREATE OR REPLACE VIEW \"public\".\"relbloat\" (\n nspname,\n relname,\n reltuples,\n relpages,\n avgwidth,\n expectedpages,\n bloat,\n wastedspace)\nAS\nSELECT pg_namespace.nspname, pg_class.relname, pg_class.reltuples,\n pg_class.relpages, rowwidths.avgwidth, ceil(((pg_class.reltuples *\n (rowwidths.avgwidth)::double precision) /\n (current_setting('block_size'::text))::double precision)) AS \nexpectedpages,\n ((pg_class.relpages)::double precision / ceil(((pg_class.reltuples *\n (rowwidths.avgwidth)::double precision) /\n (current_setting('block_size'::text))::double precision))) AS bloat,\n ceil(((((pg_class.relpages)::double precision *\n (current_setting('block_size'::text))::double precision) -\n ceil((pg_class.reltuples * (rowwidths.avgwidth)::double precision))) /\n (1024)::double precision)) AS wastedspace\nFROM (((\n SELECT pg_statistic.starelid, sum(pg_statistic.stawidth) AS avgwidth\n FROM pg_statistic\n GROUP BY pg_statistic.starelid\n ) rowwidths JOIN pg_class ON ((rowwidths.starelid = pg_class.oid))) \nJOIN\n pg_namespace ON ((pg_namespace.oid = pg_class.relnamespace)))\nWHERE (pg_class.relpages > 1);\n\n\nBye.\n\n-- \nMichal T�borsk�\nIT operations chief\nInternet Mall, a.s.\n<http://www.MALL.cz>\n",
"msg_date": "Mon, 28 Aug 2006 16:39:30 +0200",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Identifying bloated tables"
},
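A possible way to query the view once it is created, listing the worst offenders first; the 2.0 threshold is the rule of thumb from the message above, and wastedspace is in kB per the view definition:

SELECT nspname, relname, relpages, expectedpages, bloat, wastedspace
  FROM relbloat
 WHERE bloat > 2.0
 ORDER BY wastedspace DESC
 LIMIT 20;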
{
"msg_contents": "On Mon, 2006-08-28 at 16:39 +0200, Michal Taborsky - Internet Mall\nwrote:\n> I just put together a view, which helps us in indentifying which \n> database tables are suffering from space bloat, ie. they take up much \n> more space than they actually should. I though this might be useful for \n> some folk here, because the questions about bloat-related performance \n> degradation are quite common.\n\nAre you sure you haven't reinvented the wheel? Have you checked out\ncontrib/pgstattuple ?\n\nBrad.\n\n",
"msg_date": "Mon, 28 Aug 2006 10:48:18 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identifying bloated tables"
},
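For comparison, a sketch of the contrib/pgstattuple approach mentioned above, usable after loading the module's SQL script into the database; the table name is only an illustrative example:

-- Exact per-table figures, at the cost of reading the whole table.
SELECT * FROM pgstattuple('public.user_tracking');
-- dead_tuple_percent and free_percent show how much of the table is
-- currently dead or reusable space.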
{
"msg_contents": "Brad Nicholson napsal(a):\n>> I just put together a view, which helps us in indentifying which \n>> database tables are suffering from space bloat, ie. they take up much \n\n> Are you sure you haven't reinvented the wheel? Have you checked out\n> contrib/pgstattuple ?\n\nWell, I wasn't aware of it, so I guess I did reinvent the wheel. I \nGoogled for a solution to this problem, but Googled poorly I suppose.\n\nOn the other hand, pgstattuple might be a bit difficult to use for \nnot-so-experienced users in answering the question \"Which table should I \nshrink?\", as you have to first install it from contrib and then come up \nwith a select to pick the \"worst\" relations.\n\nAnyway, if someone finds this view useful, good. If not, ignore it.\n\nBye.\n\n-- \nMichal T�borsk�\nIT operations chief\nInternet Mall, a.s.\n<http://www.MALL.cz>\n",
"msg_date": "Mon, 28 Aug 2006 17:27:54 +0200",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Identifying bloated tables"
},
{
"msg_contents": "Brad Nicholson wrote:\n> On Mon, 2006-08-28 at 16:39 +0200, Michal Taborsky - Internet Mall\n> wrote:\n> > I just put together a view, which helps us in indentifying which \n> > database tables are suffering from space bloat, ie. they take up much \n> > more space than they actually should. I though this might be useful for \n> > some folk here, because the questions about bloat-related performance \n> > degradation are quite common.\n> \n> Are you sure you haven't reinvented the wheel? Have you checked out\n> contrib/pgstattuple ?\n\nActually, pgstattuple needs to scan the whole table, so I think having a\ncheap workaround that gives approximate figures is a good idea anyway.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 28 Aug 2006 11:31:52 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identifying bloated tables"
},
{
"msg_contents": "Hi, Michal,\n\nMichal Taborsky - Internet Mall wrote:\n\n> When using this view, you are interested in tables, which have the\n> \"bloat\" column higher that say 2.0 (in freshly dump/restored/analyzed\n> database they should all be around 1.0).\n\nI just noticed some columns in pg_catalog with a bloat value <1 and a\nnegative \"wasted space\" - is this due to the pseudo nature of them?\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 28 Aug 2006 17:49:28 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identifying bloated tables"
},
{
"msg_contents": "Markus Schaber napsal(a):\n> Hi, Michal,\n> \n> Michal Taborsky - Internet Mall wrote:\n> \n>> When using this view, you are interested in tables, which have the\n>> \"bloat\" column higher that say 2.0 (in freshly dump/restored/analyzed\n>> database they should all be around 1.0).\n> \n> I just noticed some columns in pg_catalog with a bloat value <1 and a\n> negative \"wasted space\" - is this due to the pseudo nature of them?\n\nIt is more likely due to the fact, that these numbers are just \nestimates, based on collected table statistics, so for small or \nnon-standard tables the statistical error is greater that the actual \nvalue. You are usually not interested in tables, which have wasted space \nof 1000kB or -1000kB. Also the database must be ANALYZEd properly for \nthese numbers to carry any significance.\n\n-- \nMichal T�borsk�\nIT operations chief\nInternet Mall, a.s.\n\nInternet Mall - obchody, kter� si obl�b�te\n<http://www.MALL.cz>\n",
"msg_date": "Mon, 28 Aug 2006 17:56:08 +0200",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Identifying bloated tables"
},
{
"msg_contents": "On 28/08/06, Michal Taborsky - Internet Mall <[email protected]> wrote:\n> Markus Schaber napsal(a):\n> > Hi, Michal,\n> >\n> > Michal Taborsky - Internet Mall wrote:\n> >\n> >> When using this view, you are interested in tables, which have the\n> >> \"bloat\" column higher that say 2.0 (in freshly dump/restored/analyzed\n> >> database they should all be around 1.0).\n> >\n> > I just noticed some columns in pg_catalog with a bloat value <1 and a\n> > negative \"wasted space\" - is this due to the pseudo nature of them?\n>\n> It is more likely due to the fact, that these numbers are just\n> estimates, based on collected table statistics, so for small or\n> non-standard tables the statistical error is greater that the actual\n> value. You are usually not interested in tables, which have wasted space\n> of 1000kB or -1000kB. Also the database must be ANALYZEd properly for\n> these numbers to carry any significance.\n>\n\nI was just playing around with this table and noticed it preforms the\nbadly in tables with very small record sizes. This seams to be because\nit ignores the system overhead (oid, xmin ctid etc) which seams to be\nabout 28 bytes per a record this can be quite significate in small\nrecord tables and can cause trouble even with a smal numbers of\nrecord. Hence I've got a table thats static and fresly \"vacuum full\"\nwhich reads with a bloat of 4.\n\nEasy to recreate problem to\n\nCreate table regionpostcode (area varchar(4), regionid int);\n\nthen insert 120000 records.\n\nPeter.\n",
"msg_date": "Tue, 29 Aug 2006 07:35:23 +0100",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identifying bloated tables"
}
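A possible refinement of the view's estimate along the lines Peter describes, adding a rough per-row overhead; the 28-byte figure is his observation, and the exact header size varies by version and alignment, so this is only a sketch:

-- Expected page count for his test table with per-row overhead included;
-- only the (avgwidth + 28) term differs from the original view. The table
-- must have been ANALYZEd so pg_statistic has entries for it.
SELECT ceil(c.reltuples * (w.avgwidth + 28)
            / current_setting('block_size')::float8) AS expectedpages
  FROM (SELECT starelid, sum(stawidth) AS avgwidth
          FROM pg_statistic
         GROUP BY starelid) w
  JOIN pg_class c ON c.oid = w.starelid
 WHERE c.relname = 'regionpostcode';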
] |
[
{
"msg_contents": "Hi everyone,\nWe have a postgresql 8.1 installed on Solaris 10. It is running fine.\nHowever, for the past couple days, we have seen the i/o reports indicating\nthat the i/o is busy most of the time. Before this, we only saw i/o being\nbusy occasionally (very rare). So far, there has been no performance\ncomplaints by customers, and the slow query reports doesn't indicate\nanything out of the ordinary.\nThere's no code changes on the applications layer and no database\nconfiguration changes.\nI am wondering if there's a tool out there on Solaris to tell which process\nis doing most of the i/o activity?\nThank you in advance.\n\nJ\n\nHi everyone,\nWe have a postgresql 8.1 installed on Solaris 10. It is running fine.\nHowever, for the past couple days, we have seen the i/o reports\nindicating that the i/o is busy most of the time. Before this, we only\nsaw i/o being busy occasionally (very rare). So far, there has been no\nperformance complaints by customers, and the slow query reports doesn't\nindicate anything out of the ordinary.\nThere's no code changes on the applications layer and no database configuration changes.\nI am wondering if there's a tool out there on Solaris to tell which process is doing most of the i/o activity?\nThank you in advance.\n\nJ",
"msg_date": "Mon, 28 Aug 2006 16:06:50 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow i/o"
},
{
"msg_contents": "Did you increase the checkpoint segments and changed the default WAL lock method to fdsync?\n\nhttp://blogs.sun.com/jkshah/entry/postgresql_on_solaris_better_use\n\nTry fdsync instead of fysnc as mentioned in the entry.\n\nRegards,\nJignesh\n\n\nJunaili Lie wrote:\n> Hi everyone,\n> We have a postgresql 8.1 installed on Solaris 10. It is running fine. \n> However, for the past couple days, we have seen the i/o reports \n> indicating that the i/o is busy most of the time. Before this, we only \n> saw i/o being busy occasionally (very rare). So far, there has been no \n> performance complaints by customers, and the slow query reports doesn't \n> indicate anything out of the ordinary.\n> There's no code changes on the applications layer and no database \n> configuration changes.\n> I am wondering if there's a tool out there on Solaris to tell which \n> process is doing most of the i/o activity?\n> Thank you in advance.\n> \n> J\n> \n",
"msg_date": "Tue, 29 Aug 2006 16:03:38 +0100",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow i/o"
},
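Before changing anything, the current values can be checked from psql. Note that the setting Jignesh calls fdsync is presumably the fdatasync value of wal_sync_method, and changing it means editing postgresql.conf and reloading (or restarting) the server:

SHOW wal_sync_method;
SHOW checkpoint_segments;
SHOW wal_buffers;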
{
"msg_contents": "Also to answer your real question:\n\nDTrace On Solaris 10:\n\n# dtrace -s /usr/demo/dtrace/whoio.d\n\nIt will tell you the pids doing the io activity and on which devices.\nThere are more scripts in that directory like iosnoop.d, iotime.d and others which also will give \nother details like file accessed, time it took for the io etc.\n\nHope this helps.\n\nRegards,\nJignesh\n\n\nJunaili Lie wrote:\n> Hi everyone,\n> We have a postgresql 8.1 installed on Solaris 10. It is running fine. \n> However, for the past couple days, we have seen the i/o reports \n> indicating that the i/o is busy most of the time. Before this, we only \n> saw i/o being busy occasionally (very rare). So far, there has been no \n> performance complaints by customers, and the slow query reports doesn't \n> indicate anything out of the ordinary.\n> There's no code changes on the applications layer and no database \n> configuration changes.\n> I am wondering if there's a tool out there on Solaris to tell which \n> process is doing most of the i/o activity?\n> Thank you in advance.\n> \n> J\n> \n",
"msg_date": "Tue, 29 Aug 2006 16:10:56 +0100",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow i/o"
},
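Once whoio.d (or iosnoop.d) points at a busy pid, it can be cross-checked against the server's own view of its backends. A small sketch, assuming PostgreSQL 8.1, where pg_stat_activity exposes the backend's OS pid as procpid; 12345 is just a placeholder for the pid DTrace reported:

  SELECT procpid, usename, datname, current_query, query_start
  FROM pg_stat_activity
  WHERE procpid = 12345;
  -- A matching row means a regular client backend; no row means the writer is one of the
  -- non-client processes (background writer, stats collector, autovacuum, archiver, ...).

This is essentially the same check the poster later did by hand with ps, just done from inside the database.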
{
"msg_contents": "Hi Jignesh,\nThank you for my reply.\nI have the setting just like what you described:\n\nwal_sync_method = fsync\nwal_buffers = 128\ncheckpoint_segments = 128\nbgwriter_all_percent = 0\nbgwriter_maxpages = 0\n\n\nI ran the dtrace script and found the following:\nDuring the i/o busy time, there are postgres processes that has very high\nBYTES count. During that non i/o busy time, this same process doesn't do a\nlot of i/o activity. I checked the pg_stat_activity but couldn't found this\nprocess. Doing ps revealed that this process is started at the same time\nsince the postgres started, which leads me to believe that it maybe\nbackground writer or some other internal process. This process are not\nautovacuum because it doesn't disappear when I tried turning autovacuum\noff.\nExcept for the ones mentioned above, I didn't modify the other background\nsetting:\nMONSOON=# show bgwriter_delay ;\n bgwriter_delay\n----------------\n 200\n(1 row)\n\nMONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n-----------------------\n 5\n(1 row)\n\nMONSOON=# show bgwriter_lru_percent ;\n bgwriter_lru_percent\n----------------------\n 1\n(1 row)\n\nThis i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56) . If\nI do select * from pg_stat_activity during this time, I will see a lot of\nwrite queries waiting to be processed. After a few seconds, everything seems\nto be gone. All writes that are not happening at the time of this i/o jump\nare being processed very fast, thus do not show on pg_stat_activity.\n\nThanks in advance for the reply,\nBest,\n\nJ\n\nOn 8/29/06, Jignesh K. Shah <[email protected]> wrote:\n>\n> Also to answer your real question:\n>\n> DTrace On Solaris 10:\n>\n> # dtrace -s /usr/demo/dtrace/whoio.d\n>\n> It will tell you the pids doing the io activity and on which devices.\n> There are more scripts in that directory like iosnoop.d, iotime.d and\n> others which also will give\n> other details like file accessed, time it took for the io etc.\n>\n> Hope this helps.\n>\n> Regards,\n> Jignesh\n>\n>\n> Junaili Lie wrote:\n> > Hi everyone,\n> > We have a postgresql 8.1 installed on Solaris 10. It is running fine.\n> > However, for the past couple days, we have seen the i/o reports\n> > indicating that the i/o is busy most of the time. Before this, we only\n> > saw i/o being busy occasionally (very rare). So far, there has been no\n> > performance complaints by customers, and the slow query reports doesn't\n> > indicate anything out of the ordinary.\n> > There's no code changes on the applications layer and no database\n> > configuration changes.\n> > I am wondering if there's a tool out there on Solaris to tell which\n> > process is doing most of the i/o activity?\n> > Thank you in advance.\n> >\n> > J\n> >\n>\n\nHi Jignesh,\nThank you for my reply.\nI have the setting just like what you described:\nwal_sync_method = fsyncwal_buffers = 128checkpoint_segments = 128bgwriter_all_percent = 0bgwriter_maxpages = 0\n\nI ran the dtrace script and found the following:\nDuring the i/o busy time, there are postgres processes that has very\nhigh BYTES count. During that non i/o busy time, this same process\ndoesn't do a lot of i/o activity. I checked the pg_stat_activity but\ncouldn't found this process. 
Doing ps revealed that this process is\nstarted at the same time since the postgres started, which leads me to\nbelieve that it maybe background writer or some other internal process.\n\nThis process are not autovacuum because it doesn't disappear when I tried turning autovacuum off. \nExcept for the ones mentioned above, I didn't modify the other background setting:\n\nMONSOON=# show bgwriter_delay ;\n\n bgwriter_delay\n\n----------------\n\n 200\n\n(1 row)\n\n\nMONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n\n-----------------------\n\n 5\n\n(1 row)\n\n\nMONSOON=# show bgwriter_lru_percent ;\n\n bgwriter_lru_percent\n\n----------------------\n\n 1\n\n(1 row)\nThis\ni/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56) . If\nI do select * from pg_stat_activity during this time, I will see a lot\nof write queries waiting to be processed. After a few seconds,\neverything seems to be gone. All writes that are not happening at the\ntime of this i/o jump are being processed very fast, thus do not show on pg_stat_activity.\n\nThanks in advance for the reply,\nBest,\n\nJ\nOn 8/29/06, Jignesh K. Shah <[email protected]\n> wrote:\nAlso to answer your real question:DTrace On Solaris 10:# dtrace -s /usr/demo/dtrace/whoio.dIt will tell you the pids doing the io activity and on which devices.There are more scripts in that directory like \niosnoop.d, iotime.d and others which also will giveother details like file accessed, time it took for the io etc.Hope this helps.Regards,JigneshJunaili Lie wrote:> Hi everyone,\n\n> We have a postgresql 8.1 installed on Solaris 10. It is running fine.> However, for the past couple days, we have seen the i/o reports> indicating that the i/o is busy most of the time. Before this, we only\n> saw i/o being busy occasionally (very rare). So far, there has been no> performance complaints by customers, and the slow query reports doesn't> indicate anything out of the ordinary.> There's no code changes on the applications layer and no database\n> configuration changes.> I am wondering if there's a tool out there on Solaris to tell which> process is doing most of the i/o activity?> Thank you in advance.>> J>",
"msg_date": "Tue, 29 Aug 2006 10:56:50 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow i/o"
},
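A writer that has existed since postmaster start, never shows up in pg_stat_activity, and bursts on a roughly five-minute cycle looks more like checkpoint activity than the LRU background writer: in 8.1 it is the background-writer process that actually performs checkpoints, this release writes out all dirty buffers in one burst (there is no checkpoint spreading), and the default checkpoint_timeout of 300 seconds would line up with spikes at minute :01 and :06. A quick check, assuming nothing else in postgresql.conf was changed:

  SHOW checkpoint_timeout;    -- default 5min; a timed checkpoint fires at this interval
  SHOW checkpoint_segments;   -- at 128 segments, the checkpoints here are almost certainly timer-driven, not segment-driven

If checkpoint_timeout is still 300s, a write burst every five minutes is expected behaviour rather than a misbehaving process.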
{
"msg_contents": "The bgwriter parameters changed in 8.1\n\nTry\n\nbgwriter_lru_maxpages=0\nbgwriter_lru_percent=0\n\nto turn off bgwriter and see if there is any change.\n\n-Jignesh\n\n\nJunaili Lie wrote:\n> Hi Jignesh,\n> Thank you for my reply.\n> I have the setting just like what you described:\n> \n> wal_sync_method = fsync\n> wal_buffers = 128\n> checkpoint_segments = 128\n> bgwriter_all_percent = 0\n> bgwriter_maxpages = 0\n> \n> \n> I ran the dtrace script and found the following:\n> During the i/o busy time, there are postgres processes that has very \n> high BYTES count. During that non i/o busy time, this same process \n> doesn't do a lot of i/o activity. I checked the pg_stat_activity but \n> couldn't found this process. Doing ps revealed that this process is \n> started at the same time since the postgres started, which leads me to \n> believe that it maybe background writer or some other internal process. \n> This process are not autovacuum because it doesn't disappear when I \n> tried turning autovacuum off.\n> Except for the ones mentioned above, I didn't modify the other \n> background setting:\n> MONSOON=# show bgwriter_delay ;\n> bgwriter_delay\n> ----------------\n> 200\n> (1 row)\n> \n> MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n> -----------------------\n> 5\n> (1 row)\n> \n> MONSOON=# show bgwriter_lru_percent ;\n> bgwriter_lru_percent\n> ----------------------\n> 1\n> (1 row)\n> \n> This i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56) \n> . If I do select * from pg_stat_activity during this time, I will see a \n> lot of write queries waiting to be processed. After a few seconds, \n> everything seems to be gone. All writes that are not happening at the \n> time of this i/o jump are being processed very fast, thus do not show on \n> pg_stat_activity.\n> \n> Thanks in advance for the reply,\n> Best,\n> \n> J\n> \n> On 8/29/06, *Jignesh K. Shah* <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Also to answer your real question:\n> \n> DTrace On Solaris 10:\n> \n> # dtrace -s /usr/demo/dtrace/whoio.d\n> \n> It will tell you the pids doing the io activity and on which devices.\n> There are more scripts in that directory like iosnoop.d, iotime.d\n> and others which also will give\n> other details like file accessed, time it took for the io etc.\n> \n> Hope this helps.\n> \n> Regards,\n> Jignesh\n> \n> \n> Junaili Lie wrote:\n> > Hi everyone,\n> > We have a postgresql 8.1 installed on Solaris 10. It is running fine.\n> > However, for the past couple days, we have seen the i/o reports\n> > indicating that the i/o is busy most of the time. Before this, we\n> only\n> > saw i/o being busy occasionally (very rare). So far, there has\n> been no\n> > performance complaints by customers, and the slow query reports\n> doesn't\n> > indicate anything out of the ordinary.\n> > There's no code changes on the applications layer and no database\n> > configuration changes.\n> > I am wondering if there's a tool out there on Solaris to tell which\n> > process is doing most of the i/o activity?\n> > Thank you in advance.\n> >\n> > J\n> >\n> \n> \n",
"msg_date": "Wed, 30 Aug 2006 19:17:02 +0100",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow i/o"
},
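If the bgwriter parameters are changed, it is worth confirming they actually took effect after the reload, since they cannot be set per-session. A minimal check via pg_settings (present in 8.1):

  SELECT name, setting
  FROM pg_settings
  WHERE name LIKE 'bgwriter%' OR name LIKE 'checkpoint%';
  -- bgwriter_* and checkpoint_* only need a pg_ctl reload of postgresql.conf, not a restart.

Listing the checkpoint_* values alongside also makes it easier to tell whether the periodic burst tracks the bgwriter settings or the checkpoint interval.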
{
"msg_contents": "I have tried this to no avail.\nI have also tried changing the bg_writer_delay parameter to 10. The spike in\ni/o still occurs although not in a consistent basis and it is only happening\nfor a few seconds.\n\n\n\n\nOn 8/30/06, Jignesh K. Shah <[email protected]> wrote:\n>\n> The bgwriter parameters changed in 8.1\n>\n> Try\n>\n> bgwriter_lru_maxpages=0\n> bgwriter_lru_percent=0\n>\n> to turn off bgwriter and see if there is any change.\n>\n> -Jignesh\n>\n>\n> Junaili Lie wrote:\n> > Hi Jignesh,\n> > Thank you for my reply.\n> > I have the setting just like what you described:\n> >\n> > wal_sync_method = fsync\n> > wal_buffers = 128\n> > checkpoint_segments = 128\n> > bgwriter_all_percent = 0\n> > bgwriter_maxpages = 0\n> >\n> >\n> > I ran the dtrace script and found the following:\n> > During the i/o busy time, there are postgres processes that has very\n> > high BYTES count. During that non i/o busy time, this same process\n> > doesn't do a lot of i/o activity. I checked the pg_stat_activity but\n> > couldn't found this process. Doing ps revealed that this process is\n> > started at the same time since the postgres started, which leads me to\n> > believe that it maybe background writer or some other internal process.\n> > This process are not autovacuum because it doesn't disappear when I\n> > tried turning autovacuum off.\n> > Except for the ones mentioned above, I didn't modify the other\n> > background setting:\n> > MONSOON=# show bgwriter_delay ;\n> > bgwriter_delay\n> > ----------------\n> > 200\n> > (1 row)\n> >\n> > MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n> > -----------------------\n> > 5\n> > (1 row)\n> >\n> > MONSOON=# show bgwriter_lru_percent ;\n> > bgwriter_lru_percent\n> > ----------------------\n> > 1\n> > (1 row)\n> >\n> > This i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56)\n> > . If I do select * from pg_stat_activity during this time, I will see a\n> > lot of write queries waiting to be processed. After a few seconds,\n> > everything seems to be gone. All writes that are not happening at the\n> > time of this i/o jump are being processed very fast, thus do not show on\n> > pg_stat_activity.\n> >\n> > Thanks in advance for the reply,\n> > Best,\n> >\n> > J\n> >\n> > On 8/29/06, *Jignesh K. Shah* <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Also to answer your real question:\n> >\n> > DTrace On Solaris 10:\n> >\n> > # dtrace -s /usr/demo/dtrace/whoio.d\n> >\n> > It will tell you the pids doing the io activity and on which\n> devices.\n> > There are more scripts in that directory like iosnoop.d, iotime.d\n> > and others which also will give\n> > other details like file accessed, time it took for the io etc.\n> >\n> > Hope this helps.\n> >\n> > Regards,\n> > Jignesh\n> >\n> >\n> > Junaili Lie wrote:\n> > > Hi everyone,\n> > > We have a postgresql 8.1 installed on Solaris 10. It is running\n> fine.\n> > > However, for the past couple days, we have seen the i/o reports\n> > > indicating that the i/o is busy most of the time. Before this, we\n> > only\n> > > saw i/o being busy occasionally (very rare). 
So far, there has\n> > been no\n> > > performance complaints by customers, and the slow query reports\n> > doesn't\n> > > indicate anything out of the ordinary.\n> > > There's no code changes on the applications layer and no database\n> > > configuration changes.\n> > > I am wondering if there's a tool out there on Solaris to tell\n> which\n> > > process is doing most of the i/o activity?\n> > > Thank you in advance.\n> > >\n> > > J\n> > >\n> >\n> >\n>\n\nI have tried this to no avail.\nI have also tried changing the bg_writer_delay parameter to 10. The spike in i/o still occurs although not in a consistent basis and it is only happening for a few seconds.\n \n \nOn 8/30/06, Jignesh K. Shah <[email protected]> wrote:\nThe bgwriter parameters changed in 8.1Trybgwriter_lru_maxpages=0bgwriter_lru_percent=0\nto turn off bgwriter and see if there is any change.-JigneshJunaili Lie wrote:> Hi Jignesh,> Thank you for my reply.> I have the setting just like what you described:>\n> wal_sync_method = fsync> wal_buffers = 128> checkpoint_segments = 128> bgwriter_all_percent = 0> bgwriter_maxpages = 0>>> I ran the dtrace script and found the following:\n> During the i/o busy time, there are postgres processes that has very> high BYTES count. During that non i/o busy time, this same process> doesn't do a lot of i/o activity. I checked the pg_stat_activity but\n> couldn't found this process. Doing ps revealed that this process is> started at the same time since the postgres started, which leads me to> believe that it maybe background writer or some other internal process.\n> This process are not autovacuum because it doesn't disappear when I> tried turning autovacuum off.> Except for the ones mentioned above, I didn't modify the other> background setting:> MONSOON=# show bgwriter_delay ;\n> bgwriter_delay> ----------------> 200> (1 row)>> MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages> -----------------------> 5> (1 row)>\n> MONSOON=# show bgwriter_lru_percent ;> bgwriter_lru_percent> ----------------------> 1> (1 row)>> This i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56\n)> . If I do select * from pg_stat_activity during this time, I will see a> lot of write queries waiting to be processed. After a few seconds,> everything seems to be gone. All writes that are not happening at the\n> time of this i/o jump are being processed very fast, thus do not show on> pg_stat_activity.>> Thanks in advance for the reply,> Best,>> J>> On 8/29/06, *Jignesh K. Shah* <\[email protected]> <mailto:[email protected]>> wrote:>> Also to answer your real question:>> DTrace On Solaris 10:\n>> # dtrace -s /usr/demo/dtrace/whoio.d>> It will tell you the pids doing the io activity and on which devices.> There are more scripts in that directory like iosnoop.d, iotime.d\n> and others which also will give> other details like file accessed, time it took for the io etc.>> Hope this helps.>> Regards,> Jignesh>>\n> Junaili Lie wrote:> > Hi everyone,> > We have a postgresql 8.1 installed on Solaris 10. It is running fine.> > However, for the past couple days, we have seen the i/o reports\n> > indicating that the i/o is busy most of the time. Before this, we> only> > saw i/o being busy occasionally (very rare). 
So far, there has> been no> > performance complaints by customers, and the slow query reports\n> doesn't> > indicate anything out of the ordinary.> > There's no code changes on the applications layer and no database> > configuration changes.> > I am wondering if there's a tool out there on Solaris to tell which\n> > process is doing most of the i/o activity?> > Thank you in advance.> >> > J> >>>",
"msg_date": "Wed, 30 Aug 2006 11:53:41 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow i/o"
},
{
"msg_contents": "Hi all,\nI am still encountering this issue.\nI am doing further troubleshooting.\nHere is what I found:\nWhen I do: dtrace -s /usr/demo/dtrace/whoio.d\nI found that there's one process that is doing majority of i/o, but that\nprocess is not listed on pg_stat_activity.\nI am also seeing more of this type of query being slow:\nEXECUTE <unnamed> [PREPARE: ...\nI am also seeing some article recommending adding some entries on\n/etc/system:\nsegmapsize=2684354560 set ufs:freebehind=0\nI haven't tried this, I am wondering if this will help.\n\nAlso, here is the output of iostat -xcznmP 1 at approx time during the i/o\nspike:\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 4.0 213.0 32.0 2089.9 0.0 17.0 0.0 78.5 0 61 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 54 6 0 40\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 0.0 0.0 0.0 0.0 0.9 0.0 0.0 0 90 c1t0d0s1 (/var)\n 2.0 335.0 16.0 3341.6 0.2 73.3 0.6 217.4 4 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 30 4 0 66\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 1.0 0.0 4.0 0.0 0.1 0.0 102.0 0 10 c1t0d0s1 (/var)\n 1.0 267.0 8.0 2729.1 0.0 117.8 0.0 439.5 0 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 28 8 0 64\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 270.0 8.0 2589.0 0.0 62.0 0.0 228.7 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 26 2 0 72\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 2.0 269.0 16.0 2971.5 0.0 66.6 0.0 245.7 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 8 7 0 86\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 268.0 8.0 2343.5 0.0 110.3 0.0 410.2 0 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 4 4 0 92\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 260.0 0.0 2494.5 0.0 63.5 0.0 244.2 0 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 24 3 0 74\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 286.0 8.0 2519.1 35.4 196.5 123.3 684.7 49 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 65 4 0 30\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 2.0 316.0 16.0 2913.8 0.0 117.2 0.0 368.7 0 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 84 7 0 9\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 5.0 263.0 40.0 2406.1 0.0 55.8 0.0 208.1 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 77 4 0 20\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 4.0 286.0 32.0 2750.6 0.0 75.0 0.0 258.5 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 21 3 0 77\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 2.0 273.0 16.0 2516.4 0.0 90.8 0.0 330.0 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 15 6 0 78\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 2.0 280.0 16.0 2711.6 0.0 65.6 0.0 232.6 0 100 c1t0d0s6 (/usr)\n cpu\n us sy wt id\n 6 3 0 92\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 308.0 8.0 2661.5 61.0 220.2 197.4 712.7 67 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 7 4 0 90\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 268.0 8.0 2839.9 0.0 97.1 0.0 360.9 0 100 c1t0d0s6 (/usr)\n\n cpu\n us sy wt id\n 11 10 0 80\n extended device statistics\n r/s w/s kr/s 
kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 309.0 0.0 3333.5 175.2 208.9 566.9 676.2 81 99 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 0 0 0 100\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 330.0 0.0 2704.0 145.6 256.0 441.1 775.7 100 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 4 2 0 94\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 311.0 0.0 2543.9 151.0 256.0 485.6 823.2 100 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 2 0 0 98\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 319.0 0.0 2576.0 147.4 256.0 462.0 802.5 100 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 0 1 0 98\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0 2 13 c1t0d0s1 (/var)\n 0.0 366.0 0.0 3088.0 124.4 255.8 339.9 698.8 100 100 c1t0d0s6\n(/usr)\n cpu\n us sy wt id\n 6 5 0 90\n extended device statistics\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n 0.0 2.0 0.0 16.0 0.0 1.1 0.0 533.2 0 54 c1t0d0s1 (/var)\n 1.0 282.0 8.0 2849.0 1.5 129.2 5.2 456.5 10 100 c1t0d0s6\n(/usr)\n\nThank you in advance for your help!\n\nJun\n\nOn 8/30/06, Junaili Lie <[email protected]> wrote:\n>\n> I have tried this to no avail.\n> I have also tried changing the bg_writer_delay parameter to 10. The spike\n> in i/o still occurs although not in a consistent basis and it is only\n> happening for a few seconds.\n>\n>\n>\n>\n> On 8/30/06, Jignesh K. Shah <[email protected]> wrote:\n> >\n> > The bgwriter parameters changed in 8.1\n> >\n> > Try\n> >\n> > bgwriter_lru_maxpages=0\n> > bgwriter_lru_percent=0\n> >\n> > to turn off bgwriter and see if there is any change.\n> >\n> > -Jignesh\n> >\n> >\n> > Junaili Lie wrote:\n> > > Hi Jignesh,\n> > > Thank you for my reply.\n> > > I have the setting just like what you described:\n> > >\n> > > wal_sync_method = fsync\n> > > wal_buffers = 128\n> > > checkpoint_segments = 128\n> > > bgwriter_all_percent = 0\n> > > bgwriter_maxpages = 0\n> > >\n> > >\n> > > I ran the dtrace script and found the following:\n> > > During the i/o busy time, there are postgres processes that has very\n> > > high BYTES count. During that non i/o busy time, this same process\n> > > doesn't do a lot of i/o activity. I checked the pg_stat_activity but\n> > > couldn't found this process. Doing ps revealed that this process is\n> > > started at the same time since the postgres started, which leads me to\n> > > believe that it maybe background writer or some other internal\n> > process.\n> > > This process are not autovacuum because it doesn't disappear when I\n> > > tried turning autovacuum off.\n> > > Except for the ones mentioned above, I didn't modify the other\n> > > background setting:\n> > > MONSOON=# show bgwriter_delay ;\n> > > bgwriter_delay\n> > > ----------------\n> > > 200\n> > > (1 row)\n> > >\n> > > MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n> > > -----------------------\n> > > 5\n> > > (1 row)\n> > >\n> > > MONSOON=# show bgwriter_lru_percent ;\n> > > bgwriter_lru_percent\n> > > ----------------------\n> > > 1\n> > > (1 row)\n> > >\n> > > This i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56)\n> > > . If I do select * from pg_stat_activity during this time, I will see\n> > a\n> > > lot of write queries waiting to be processed. After a few seconds,\n> > > everything seems to be gone. 
All writes that are not happening at the\n> > > time of this i/o jump are being processed very fast, thus do not show\n> > on\n> > > pg_stat_activity.\n> > >\n> > > Thanks in advance for the reply,\n> > > Best,\n> > >\n> > > J\n> > >\n> > > On 8/29/06, *Jignesh K. Shah* < [email protected]\n> > > <mailto:[email protected]>> wrote:\n> > >\n> > > Also to answer your real question:\n> > >\n> > > DTrace On Solaris 10:\n> > >\n> > > # dtrace -s /usr/demo/dtrace/whoio.d\n> > >\n> > > It will tell you the pids doing the io activity and on which\n> > devices.\n> > > There are more scripts in that directory like iosnoop.d, iotime.d\n> > > and others which also will give\n> > > other details like file accessed, time it took for the io etc.\n> > >\n> > > Hope this helps.\n> > >\n> > > Regards,\n> > > Jignesh\n> > >\n> > >\n> > > Junaili Lie wrote:\n> > > > Hi everyone,\n> > > > We have a postgresql 8.1 installed on Solaris 10. It is running\n> > fine.\n> > > > However, for the past couple days, we have seen the i/o reports\n> >\n> > > > indicating that the i/o is busy most of the time. Before this,\n> > we\n> > > only\n> > > > saw i/o being busy occasionally (very rare). So far, there has\n> > > been no\n> > > > performance complaints by customers, and the slow query reports\n> >\n> > > doesn't\n> > > > indicate anything out of the ordinary.\n> > > > There's no code changes on the applications layer and no\n> > database\n> > > > configuration changes.\n> > > > I am wondering if there's a tool out there on Solaris to tell\n> > which\n> > > > process is doing most of the i/o activity?\n> > > > Thank you in advance.\n> > > >\n> > > > J\n> > > >\n> > >\n> > >\n> >\n>\n>\n\nHi all,\nI am still encountering this issue.\nI am doing further troubleshooting.\nHere is what I found:\nWhen I do: dtrace -s /usr/demo/dtrace/whoio.d\nI found that there's one process that is doing majority of i/o, but that process is not listed on pg_stat_activity.\nI am also seeing more of this type of query being slow:\nEXECUTE <unnamed> [PREPARE: ...\n\nI am also seeing some article recommending adding some entries on /etc/system:\nsegmapsize=2684354560 set ufs:freebehind=0\nI haven't tried this, I am wondering if this will help.\n\nAlso, here is the output of iostat -xcznmP 1 at approx time during the i/o spike:\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 4.0 213.0 32.0 2089.9 0.0\n17.0 0.0 78.5 0 61\nc1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 54 6 0 40\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 0.0 \n0.0 0.0 0.0 0.9 \n0.0 0.0 0 90 c1t0d0s1 (/var)\n\n 2.0 335.0 16.0 3341.6 0.2\n73.3 0.6 217.4 4 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 30 4 0 66\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 1.0 \n0.0 4.0 0.0 0.1 \n0.0 102.0 0 10 c1t0d0s1 (/var)\n\n 1.0 267.0 8.0 2729.1 \n0.0 117.8 0.0 439.5 0 100 c1t0d0s6\n(/usr)\n\n cpu\n\n us sy wt id\n\n 28 8 0 64\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 1.0 270.0 8.0 2589.0 \n0.0 62.0 0.0 228.7 0 100 c1t0d0s6\n(/usr)\n\n cpu\n\n us sy wt id\n\n 26 2 0 72\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 2.0 269.0 16.0 2971.5 0.0\n66.6 0.0 245.7 0 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 8 7 0 86\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 1.0 268.0 8.0 2343.5 \n0.0 
110.3 0.0 410.2 0 100 c1t0d0s6\n(/usr)\n\n cpu\n\n us sy wt id\n\n 4 4 0 92\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 260.0 0.0 2494.5 \n0.0 63.5 0.0 244.2 0 100 c1t0d0s6\n(/usr)\n\n cpu\n\n us sy wt id\n\n 24 3 0 74\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 1.0 286.0 8.0 2519.1 35.4 196.5 123.3 684.7 49 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 65 4 0 30\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 2.0 316.0 16.0 2913.8 0.0\n117.2 0.0 368.7 0 100 c1t0d0s6\n(/usr)\n\n cpu\n\n us sy wt id\n\n 84 7 0 9\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 5.0 263.0 40.0 2406.1 0.0\n55.8 0.0 208.1 0 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 77 4 0 20\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 4.0 286.0 32.0 2750.6 0.0\n75.0 0.0 258.5 0 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 21 3 0 77\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 2.0 273.0 16.0 2516.4 0.0\n90.8 0.0 330.0 0 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 15 6 0 78\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 2.0 280.0 16.0 2711.6 0.0\n65.6 0.0 232.6 0 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 6 3 0 92\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 1.0 308.0 8.0 2661.5 61.0 220.2 197.4 712.7 67 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 7 4 0 90\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 1.0 268.0 8.0 2839.9 \n0.0 97.1 0.0 360.9 0 100 c1t0d0s6\n(/usr)\n\n\n cpu\n\n us sy wt id\n\n 11 10 0 80\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 309.0 0.0 3333.5 175.2\n208.9 566.9 676.2 81 99 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 0 0 0 100\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 330.0 0.0 2704.0 145.6 256.0 441.1 775.7 100 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 4 2 0 94\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 311.0 0.0 2543.9 151.0 256.0 485.6 823.2 100 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 2 0 0 98\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 319.0 0.0 2576.0 147.4 256.0 462.0 802.5 100 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 0 1 0 98\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 0.0 \n0.0 0.0 0.0 0.2 \n0.0 0.0 2 13 c1t0d0s1 (/var)\n\n 0.0 366.0 0.0 3088.0 124.4 255.8 339.9 698.8 100 100 c1t0d0s6 (/usr)\n\n cpu\n\n us sy wt id\n\n 6 5 0 90\n\n \nextended device statistics\n\n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n\n 0.0 2.0 \n0.0 16.0 0.0 1.1 0.0 \n533.2 0 54 c1t0d0s1 (/var)\n\n 1.0 282.0 8.0 2849.0 \n1.5 129.2 5.2 456.5 10 100 c1t0d0s6 (/usr)\n\nThank you in advance for your help!\n\nJunOn 8/30/06, Junaili Lie <[email protected]> wrote:\nI have tried this to no avail.\nI have also tried changing the bg_writer_delay parameter to 10.\nThe spike in i/o still occurs although not in a consistent basis and it\nis only happening for a few seconds.\n \n \nOn 8/30/06, Jignesh K. 
Shah <[email protected]\n> wrote:\nThe bgwriter parameters changed in 8.1Trybgwriter_lru_maxpages=0bgwriter_lru_percent=0\nto turn off bgwriter and see if there is any change.-JigneshJunaili Lie wrote:> Hi Jignesh,> Thank you for my reply.> I have the setting just like what you described:>\n> wal_sync_method = fsync> wal_buffers = 128> checkpoint_segments = 128> bgwriter_all_percent = 0> bgwriter_maxpages = 0>>> I ran the dtrace script and found the following:\n> During the i/o busy time, there are postgres processes that has very> high BYTES count. During that non i/o busy time, this same process> doesn't do a lot of i/o activity. I checked the pg_stat_activity but\n> couldn't found this process. Doing ps revealed that this process is> started at the same time since the postgres started, which leads me to> believe that it maybe background writer or some other internal process.\n> This process are not autovacuum because it doesn't disappear when I> tried turning autovacuum off.> Except for the ones mentioned above, I didn't modify the other> background setting:> MONSOON=# show bgwriter_delay ;\n> bgwriter_delay> ----------------> 200> (1 row)>> MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages> -----------------------> 5> (1 row)>\n> MONSOON=# show bgwriter_lru_percent ;> bgwriter_lru_percent> ----------------------> 1> (1 row)>> This i/o spike only happens at minute 1 and minute 6 (ie. 10.51, 10.56\n\n)> . If I do select * from pg_stat_activity during this time, I will see a> lot of write queries waiting to be processed. After a few seconds,> everything seems to be gone. All writes that are not happening at the\n> time of this i/o jump are being processed very fast, thus do not show on> pg_stat_activity.>> Thanks in advance for the reply,> Best,>> J>> On 8/29/06, *Jignesh K. Shah* <\[email protected]> <mailto:\[email protected]>> wrote:>> Also to answer your real question:>> DTrace On Solaris 10:\n>> # dtrace -s /usr/demo/dtrace/whoio.d>> It will tell you the pids doing the io activity and on which devices.> There are more scripts in that directory like iosnoop.d, iotime.d\n\n> and others which also will give> other details like file accessed, time it took for the io etc.>> Hope this helps.>> Regards,> Jignesh>>\n> Junaili Lie wrote:> > Hi everyone,> > We have a postgresql 8.1 installed on Solaris 10. It is running fine.> > However, for the past couple days, we have seen the i/o reports\n> > indicating that the i/o is busy most of the time. Before this, we> only> > saw i/o being busy occasionally (very rare). So far, there has> been no> > performance complaints by customers, and the slow query reports\n> doesn't> > indicate anything out of the ordinary.> > There's no code changes on the applications layer and no database> > configuration changes.> > I am wondering if there's a tool out there on Solaris to tell which\n> > process is doing most of the i/o activity?> > Thank you in advance.> >> > J> >>>",
"msg_date": "Tue, 26 Sep 2006 16:27:41 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow i/o"
},
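The iostat trace above shows the single data disk pinned at ~100 %b with roughly 250-350 writes/s, so the next useful question is which relations are absorbing those writes. A rough ranking from the row-level statistics, assuming stats_row_level is enabled (8.1 column names; the counters are cumulative since the collector started, so sample twice and diff for a rate):

  SELECT relname,
         n_tup_ins + n_tup_upd + n_tup_del AS rows_written,
         n_tup_ins, n_tup_upd, n_tup_del
  FROM pg_stat_user_tables
  ORDER BY rows_written DESC
  LIMIT 10;

A handful of hot, heavily-updated tables sharing the spindle with pg_xlog would produce exactly this kind of periodic saturation when a checkpoint flushes them.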
{
"msg_contents": "\nUsing segmapsize will increase the memory available for doing file system cache and ufs:freebehind=0 \nhelps in caching bigger files in memory.\n\nIts worth a try if you are using default UFS (without forcedirectio mount option).\n\nIn your cache it seems the writes are having the problems\n\nTypically single disk (with cache disabled) should not be stressed in excess of 100 iops per sec \nhowever your app is doing 3X that which is too much for the internal disk. If it is doing sequential \nwrites then UFS (on buffered file system) should be coalescing the writes.. If its random, you just \nneed more spindles. (Using segmapsize and freebehind might make a difference)\n\nIf you can't afford more spindles then you can take a \"RISK\" by turning on your write cache on the \ndisk using \"format -e\" -> cache -> write_cache -> enable which will improve that number quite a \nbit. But then make sure the server has UPS attached to it.\n\n\n-Jignesh\n\n\n\nJunaili Lie wrote:\n> Hi all,\n> I am still encountering this issue.\n> I am doing further troubleshooting.\n> Here is what I found:\n> When I do: dtrace -s /usr/demo/dtrace/whoio.d\n> I found that there's one process that is doing majority of i/o, but that \n> process is not listed on pg_stat_activity.\n> I am also seeing more of this type of query being slow:\n> EXECUTE <unnamed> [PREPARE: ...\n> I am also seeing some article recommending adding some entries on \n> /etc/system:\n> segmapsize=2684354560 set ufs:freebehind=0\n> I haven't tried this, I am wondering if this will help.\n> \n> Also, here is the output of iostat -xcznmP 1 at approx time during the \n> i/o spike:\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 4.0 213.0 32.0 2089.9 0.0 17.0 0.0 78.5 0 61 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 54 6 0 40\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 0.0 0.0 0.0 0.0 0.9 0.0 0.0 0 90 c1t0d0s1 (/var)\n> 2.0 335.0 16.0 3341.6 0.2 73.3 0.6 217.4 4 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 30 4 0 66\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 1.0 0.0 4.0 0.0 0.1 0.0 102.0 0 10 c1t0d0s1 (/var)\n> 1.0 267.0 8.0 2729.1 0.0 117.8 0.0 439.5 0 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 28 8 0 64\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 1.0 270.0 8.0 2589.0 0.0 62.0 0.0 228.7 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 26 2 0 72\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 2.0 269.0 16.0 2971.5 0.0 66.6 0.0 245.7 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 8 7 0 86\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 1.0 268.0 8.0 2343.5 0.0 110.3 0.0 410.2 0 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 4 4 0 92\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 260.0 0.0 2494.5 0.0 63.5 0.0 244.2 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 24 3 0 74\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 1.0 286.0 8.0 2519.1 35.4 196.5 123.3 684.7 49 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 65 4 0 30\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 2.0 316.0 16.0 2913.8 0.0 117.2 0.0 368.7 0 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 84 7 0 9\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv 
wsvc_t asvc_t %w %b device\n> 5.0 263.0 40.0 2406.1 0.0 55.8 0.0 208.1 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 77 4 0 20\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 4.0 286.0 32.0 2750.6 0.0 75.0 0.0 258.5 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 21 3 0 77\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 2.0 273.0 16.0 2516.4 0.0 90.8 0.0 330.0 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 15 6 0 78\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 2.0 280.0 16.0 2711.6 0.0 65.6 0.0 232.6 0 100 c1t0d0s6 (/usr)\n> cpu\n> us sy wt id\n> 6 3 0 92\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 1.0 308.0 8.0 2661.5 61.0 220.2 197.4 712.7 67 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 7 4 0 90\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 1.0 268.0 8.0 2839.9 0.0 97.1 0.0 360.9 0 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 11 10 0 80\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 309.0 0.0 3333.5 175.2 208.9 566.9 676.2 81 99 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 0 0 0 100\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 330.0 0.0 2704.0 145.6 256.0 441.1 775.7 100 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 4 2 0 94\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 311.0 0.0 2543.9 151.0 256.0 485.6 823.2 100 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 2 0 0 98\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 319.0 0.0 2576.0 147.4 256.0 462.0 802.5 100 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 0 1 0 98\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0 2 13 c1t0d0s1 (/var)\n> 0.0 366.0 0.0 3088.0 124.4 255.8 339.9 698.8 100 100 c1t0d0s6 \n> (/usr)\n> cpu\n> us sy wt id\n> 6 5 0 90\n> extended device statistics\n> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device\n> 0.0 2.0 0.0 16.0 0.0 1.1 0.0 533.2 0 54 c1t0d0s1 (/var)\n> 1.0 282.0 8.0 2849.0 1.5 129.2 5.2 456.5 10 100 c1t0d0s6 \n> (/usr)\n> \n> Thank you in advance for your help!\n> \n> Jun\n> \n> On 8/30/06, *Junaili Lie* <[email protected] <mailto:[email protected]>> \n> wrote:\n> \n> I have tried this to no avail.\n> I have also tried changing the bg_writer_delay parameter to 10. The\n> spike in i/o still occurs although not in a consistent basis and it\n> is only happening for a few seconds.\n> \n> \n> \n> \n> On 8/30/06, *Jignesh K. Shah* <[email protected]\n> <mailto:[email protected]> > wrote:\n> \n> The bgwriter parameters changed in 8.1\n> \n> Try\n> \n> bgwriter_lru_maxpages=0\n> bgwriter_lru_percent=0\n> \n> to turn off bgwriter and see if there is any change.\n> \n> -Jignesh\n> \n> \n> Junaili Lie wrote:\n> > Hi Jignesh,\n> > Thank you for my reply.\n> > I have the setting just like what you described:\n> >\n> > wal_sync_method = fsync\n> > wal_buffers = 128\n> > checkpoint_segments = 128\n> > bgwriter_all_percent = 0\n> > bgwriter_maxpages = 0\n> >\n> >\n> > I ran the dtrace script and found the following:\n> > During the i/o busy time, there are postgres processes that\n> has very\n> > high BYTES count. During that non i/o busy time, this same process\n> > doesn't do a lot of i/o activity. 
I checked the\n> pg_stat_activity but\n> > couldn't found this process. Doing ps revealed that this\n> process is\n> > started at the same time since the postgres started, which\n> leads me to\n> > believe that it maybe background writer or some other internal\n> process.\n> > This process are not autovacuum because it doesn't disappear\n> when I\n> > tried turning autovacuum off.\n> > Except for the ones mentioned above, I didn't modify the other\n> > background setting:\n> > MONSOON=# show bgwriter_delay ;\n> > bgwriter_delay\n> > ----------------\n> > 200\n> > (1 row)\n> >\n> > MONSOON=# show bgwriter_lru_maxpages ; bgwriter_lru_maxpages\n> > -----------------------\n> > 5\n> > (1 row)\n> >\n> > MONSOON=# show bgwriter_lru_percent ;\n> > bgwriter_lru_percent\n> > ----------------------\n> > 1\n> > (1 row)\n> >\n> > This i/o spike only happens at minute 1 and minute 6 (ie.\n> 10.51, 10.56 )\n> > . If I do select * from pg_stat_activity during this time, I\n> will see a\n> > lot of write queries waiting to be processed. After a few seconds,\n> > everything seems to be gone. All writes that are not happening\n> at the\n> > time of this i/o jump are being processed very fast, thus do\n> not show on\n> > pg_stat_activity.\n> >\n> > Thanks in advance for the reply,\n> > Best,\n> >\n> > J\n> >\n> > On 8/29/06, *Jignesh K. Shah* < [email protected]\n> <mailto:[email protected]>\n> > <mailto: [email protected] <mailto:[email protected]>>> wrote:\n> >\n> > Also to answer your real question:\n> >\n> > DTrace On Solaris 10:\n> >\n> > # dtrace -s /usr/demo/dtrace/whoio.d\n> >\n> > It will tell you the pids doing the io activity and on\n> which devices.\n> > There are more scripts in that directory like iosnoop.d,\n> iotime.d\n> > and others which also will give\n> > other details like file accessed, time it took for the io etc.\n> >\n> > Hope this helps.\n> >\n> > Regards,\n> > Jignesh\n> >\n> >\n> > Junaili Lie wrote:\n> > > Hi everyone,\n> > > We have a postgresql 8.1 installed on Solaris 10. It is\n> running fine.\n> > > However, for the past couple days, we have seen the i/o\n> reports\n> > > indicating that the i/o is busy most of the time. Before\n> this, we\n> > only\n> > > saw i/o being busy occasionally (very rare). So far,\n> there has\n> > been no\n> > > performance complaints by customers, and the slow query\n> reports\n> > doesn't\n> > > indicate anything out of the ordinary.\n> > > There's no code changes on the applications layer and no\n> database\n> > > configuration changes.\n> > > I am wondering if there's a tool out there on Solaris to\n> tell which\n> > > process is doing most of the i/o activity?\n> > > Thank you in advance.\n> > >\n> > > J\n> > >\n> >\n> >\n> \n> \n> \n",
"msg_date": "Thu, 28 Sep 2006 13:35:36 +0100",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow i/o"
}
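A database-side sanity check to go with the iops arithmetic above: with wal_sync_method = fsync, every commit forces at least one synchronous WAL write, so the commit rate puts a floor under what the single spindle has to absorb before any data-file or checkpoint writes are counted. A hedged sketch (pg_stat_database counters are cumulative, so take two samples N seconds apart):

  SELECT datname, xact_commit, xact_rollback
  FROM pg_stat_database
  WHERE datname = current_database();
  -- (xact_commit_now - xact_commit_before) / N  ~=  commits per second, each of them a
  -- synchronous write on the same disk that iostat already shows at 100% busy.

If that rate is anywhere near the ~100 iops a single drive can sustain, moving pg_xlog to its own spindle or enabling the (UPS-protected) write cache as described above is likely to help more than further postgresql.conf tweaks.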
] |
[
{
"msg_contents": "Hi Friends,\n\n I have one doubt in LIMIT & OFFSET clause operation.\nI have a table \"test_limit\", and it contain,\n\nSELECT * from test_limit;\n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 12 | jyothi\n 6 | mahalakshmi\n 4 | maheswari\n 2 | manju\n 5 | ramkumar\n 7 | sangeetha\n 11 | sasikala\n 10 | thekkamalai\n 9 | vivek\n 13 | ganeshwari\n 3 | anandhi\n(13 rows)\n\nHere, I applied LIMIT clause as bellow.\nSELECT * from test_limit LIMIT 5;\n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 12 | jyothi\n 6 | mahalakshmi\n 4 | maheswari\n(5 rows)\n\nIn this above query was processed only five records OR all the 13 record\nwas got and then only 5 record printed.\nthis is what my doubt.\n\nI tried where clause in above query as bellow.\nSELECT * from test_limit where s_no IN (1,2,3,4,5,6,7,8,9) LIMIT 5;\n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 6 | mahalakshmi\n 4 | maheswari\n 2 | manju\n(5 rows)\n\nIn this case It should process up to records fulfill the requirement.\ni.e atleast it should process 6 records.\nMy question is it is processed only 6 records (fulfill the requirement) or\nall (13) the records.\n\nI also tried ORDER BY clause as bellow.\nSELECT * from test_limit ORDER BY s_no LIMIT 5;\n s_no | name\n------+-----------\n 1 | anbarasu\n 2 | manju\n 3 | anandhi\n 4 | maheswari\n 5 | ramkumar\n(5 rows)\n\n From this output, I know it is processed all(13) the records and the printed\nonly 5 records.\nBut, without ORDER BY clause I don't know how many record processing when\napplying LIMIT clause.\n\n---\nVanitha Jaya\n\nHi Friends,\n\n I have one doubt in LIMIT & OFFSET clause operation.\nI have a table \"test_limit\", and it contain,\nSELECT * from test_limit; \n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 12 | jyothi\n 6 | mahalakshmi\n 4 | maheswari\n 2 | manju\n 5 | ramkumar\n 7 | sangeetha\n 11 | sasikala\n 10 | thekkamalai\n 9 | vivek\n 13 | ganeshwari\n 3 | anandhi\n(13 rows)\n\nHere, I applied LIMIT clause as bellow.\nSELECT * from test_limit LIMIT 5;\n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 12 | jyothi\n 6 | mahalakshmi\n 4 | maheswari\n(5 rows)\n\nIn this above query was processed only five records OR all the 13 record was got and then only 5 record printed.\nthis is what my doubt.\n\nI tried where clause in above query as bellow.\nSELECT * from test_limit where s_no IN (1,2,3,4,5,6,7,8,9) LIMIT 5;\n s_no | name\n------+-------------\n 1 | anbarasu\n 8 | egambaram\n 6 | mahalakshmi\n 4 | maheswari\n 2 | manju\n(5 rows)\n\nIn this case It should process up to records fulfill the requirement.\ni.e atleast it should process 6 records.\nMy question is it is processed only 6 records (fulfill the requirement) or all (13) the records.\n\nI also tried ORDER BY clause as bellow.\nSELECT * from test_limit ORDER BY s_no LIMIT 5;\n s_no | name\n------+-----------\n 1 | anbarasu\n 2 | manju\n 3 | anandhi\n 4 | maheswari\n 5 | ramkumar\n(5 rows)\n\n From this output, I know it is processed all(13) the records and the printed only 5 records.\nBut, without ORDER BY clause I don't know how many record processing when applying LIMIT clause.\n\n---Vanitha Jaya",
"msg_date": "Tue, 29 Aug 2006 12:51:27 +0530",
"msg_from": "\"Vanitha Jaya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Internal Operations on LIMIT & OFFSET clause"
},
{
"msg_contents": "am Tue, dem 29.08.2006, um 12:51:27 +0530 mailte Vanitha Jaya folgendes:\n> Hi Friends,\n> \n> I have one doubt in LIMIT & OFFSET clause operation.\n> I have a table \"test_limit\", and it contain,\n\nFirst of all, you can use EXPLAIN ANALYSE for such tasks!\n\ntest=*# explain analyse select * from mira limit 13;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.20 rows=13 width=12) (actual time=0.073..0.146 rows=13 loops=1)\n -> Seq Scan on mira (cost=0.00..2311.00 rows=150000 width=12) (actual time=0.068..0.097 rows=13 loops=1)\n Total runtime: 0.223 ms\n(3 rows)\n\nThis is a Seq-Scan for the first 13 records. The table contains 15.000 records.\n\n> \n> I also tried ORDER BY clause as bellow.\n> SELECT * from test_limit ORDER BY s_no LIMIT 5;\n\ntest=*# explain analyse select * from mira order by 1 limit 13;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Limit (cost=17263.70..17263.73 rows=13 width=12) (actual time=1149.554..1149.624 rows=13 loops=1)\n -> Sort (cost=17263.70..17638.70 rows=150000 width=12) (actual time=1149.548..1149.574 rows=13 loops=1)\n Sort Key: x\n -> Seq Scan on mira (cost=0.00..2311.00 rows=150000 width=12) (actual time=0.013..362.187 rows=150000 loops=1)\n Total runtime: 1153.545 ms\n(5 rows)\n\nThis is a komplete seq-scan, than the sort, then the limit.\n\n\n> But, without ORDER BY clause I don't know how many record processing when\n> applying LIMIT clause.\n\nHere, with 8.1, it processed only LIMIT records, see my example and notice the\nruntime (0.223 ms versus 1153.545 ms).\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 29 Aug 2006 09:38:11 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal Operations on LIMIT & OFFSET clause"
}
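One more case worth adding: when the ORDER BY column has a btree index, the planner can walk the index in order and stop after LIMIT rows, so neither a full scan nor a full sort is needed. A small sketch on the thread's example table (the index name here is made up, and on a 13-row table the planner will still prefer the tiny seqscan; the effect shows up once the table is large):

  CREATE INDEX ix_test_limit_s_no ON test_limit (s_no);
  ANALYZE test_limit;
  EXPLAIN ANALYZE SELECT * FROM test_limit ORDER BY s_no LIMIT 5;
  -- On a large table the plan becomes  Limit -> Index Scan using ix_test_limit_s_no,
  -- i.e. only about 5 index entries and their heap rows are read, instead of sorting every row.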
] |
[
{
"msg_contents": "Hi,\n\nWe're running PostgreSQL 8.1.4 on CentOS 4 (Linux version \n2.6.9-34.0.1.ELsmp). Hardware specs:\n\n2x AMD Dual-Core Opteron 270 Italy 1Ghz HT 2 x 1MB L2 Cache Socket 940\n4 GB Registered ECC PC3200 DDR RAM\nSuperMicro Server-Class 1U AS1020S series system\nDual-channel Ultra320 SCSI controller\n1 x 73 GB 10,000rpm Ultra320 SCSI drive with 8MB cache\n\nI use it to drive a web application. Everything was working fine when \nall of a sudden today, things went belly up. Load on the server started \nincreasing and query speeds decreased rapidly. After dropping all the \nclients I did some quick tests and found the following:\n\nI have a log table looking like this:\n Table \"public.log\"\n Column | Type | Modifiers\n---------+-----------------------------+---------------------------------\n site | bigint | not null\n stamp | timestamp without time zone | default now()\n type | character(8) | not null default 'log'::bpchar\n user | text | not null default 'public'::text\n message | text |\nIndexes:\n \"fki_log_sites\" btree (site)\n \"ix_log_stamp\" btree (stamp)\n \"ix_log_type\" btree (\"type\")\n \"ix_log_user\" btree (\"user\")\nForeign-key constraints:\n \"log_sites\" FOREIGN KEY (site) REFERENCES sites(id) ON UPDATE \nCASCADE ON DELETE CASCADE\n\nand it has 743321 rows and a explain analyze select count(*) from \nproperty_values;\n QUERY \nPLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual \ntime=4557.797..4557.798 rows=1 loops=1)\n -> Seq Scan on property_values (cost=0.00..51848.56 rows=1309356 \nwidth=0) (actual time=0.026..2581.418 rows=1309498 loops=1)\n Total runtime: 4557.978 ms\n(3 rows)\n\n4 1/2 seconds for a count(*) ? This seems a bit rough - is there \nanything else I can try to optimize my Database? You can imagine that \nslightly more complex queries goes out the roof.\n\nAny help appreciated\n\nRegards\n\nWillo van der Merwe\n\n\n\n\n\n\nHi,\n\nWe're running PostgreSQL 8.1.4 on CentOS 4 (Linux version\n2.6.9-34.0.1.ELsmp). Hardware specs:\n\n2x AMD Dual-Core Opteron 270 Italy 1Ghz HT 2 x 1MB L2 Cache Socket 940\n4 GB Registered ECC PC3200 DDR RAM\nSuperMicro Server-Class 1U AS1020S series system\nDual-channel Ultra320 SCSI controller\n1 x 73 GB 10,000rpm Ultra320 SCSI drive with 8MB cache\nI use it to drive a web\napplication. Everything\nwas working fine when all of a sudden today, things went belly up. 
Load\non the server started increasing and query speeds decreased rapidly.\nAfter dropping all the clients I did some quick tests and found the\nfollowing:\n\nI have a log table looking like this:\n \nTable \"public.log\"\n Column | Type | Modifiers\n---------+-----------------------------+---------------------------------\n site | bigint | not null\n stamp | timestamp without time zone | default now()\n type | character(8) | not null default 'log'::bpchar\n user | text | not null default 'public'::text\n message | text |\nIndexes:\n \"fki_log_sites\" btree (site)\n \"ix_log_stamp\" btree (stamp)\n \"ix_log_type\" btree (\"type\")\n \"ix_log_user\" btree (\"user\")\nForeign-key constraints:\n \"log_sites\" FOREIGN KEY (site) REFERENCES sites(id) ON UPDATE\nCASCADE ON DELETE CASCADE\n\nand it has 743321 rows and a explain analyze select count(*) from\nproperty_values;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual\ntime=4557.797..4557.798 rows=1 loops=1)\n -> Seq Scan on property_values (cost=0.00..51848.56\nrows=1309356 width=0) (actual time=0.026..2581.418 rows=1309498 loops=1)\n Total runtime: 4557.978 ms\n(3 rows)\n\n4 1/2 seconds for a count(*) ? This seems a bit rough - is there\nanything else I can try to optimize my Database? You can imagine that\nslightly more complex queries goes out the roof.\n\nAny help appreciated\n\nRegards\n\nWillo van der Merwe",
"msg_date": "Tue, 29 Aug 2006 15:52:50 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL performance issues"
},
{
"msg_contents": "am Tue, dem 29.08.2006, um 15:52:50 +0200 mailte Willo van der Merwe folgendes:\n> and it has 743321 rows and a explain analyze select count(*) from\n> property_values;\n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual time=\n> 4557.797..4557.798 rows=1 loops=1)\n> -> Seq Scan on property_values (cost=0.00..51848.56 rows=1309356 width=0)\n> (actual time=0.026..2581.418 rows=1309498 loops=1)\n> Total runtime: 4557.978 ms\n> (3 rows)\n> \n> 4 1/2 seconds for a count(*) ? This seems a bit rough - is there anything else\n\nBecause of MVCC.\nhttp://www.thescripts.com/forum/thread173678.html\nhttp://www.varlena.com/GeneralBits/120.php\nhttp://www.varlena.com/GeneralBits/49.php\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 29 Aug 2006 16:46:36 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
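When an exact number is not required, the planner's own estimate is essentially free to read; a sketch against the catalog (the figure is only refreshed by VACUUM/ANALYZE, so it lags reality between runs):

  SELECT relname, reltuples::bigint AS estimated_rows
  FROM pg_class
  WHERE relname = 'log' AND relkind = 'r';
  -- reltuples is the approximate live-row count the planner itself uses; good enough for
  -- dashboards and pagination hints, not for anything that must be exact.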
{
"msg_contents": "\n> 4 1/2 seconds for a count(*) ? This seems a bit rough - is there \n> anything else I can try to optimize my Database? You can imagine that \n> slightly more complex queries goes out the roof.\n\nWell a couple of things.\n\n1. You put all your money in the wrong place.. 1 hard drive!!??!!\n2. What is your maintenance regimen? Vacuum, Analyze????\n\nJoshua D. Drake\n\n> \n> Any help appreciated\n> \n> Regards\n> \n> Willo van der Merwe\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 29 Aug 2006 07:51:50 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
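On the maintenance question: a quick way to see whether the table has bloated (which slows every sequential scan even at low iowait) is VACUUM VERBOSE, which reports removable and nonremovable row versions and the page count per table. A minimal, hedged example on the table from the original post:

  VACUUM VERBOSE log;
  -- Look for a large number of removable (dead) row versions, or far more pages than the
  -- live rows justify; plain VACUUM frees the space for reuse, but only VACUUM FULL or
  -- CLUSTER will actually shrink the file on disk.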
{
"msg_contents": "Joshua D. Drake wrote:\n>\n>> 4 1/2 seconds for a count(*) ? This seems a bit rough - is there \n>> anything else I can try to optimize my Database? You can imagine that \n>> slightly more complex queries goes out the roof.\n>\n> Well a couple of things.\n>\n> 1. You put all your money in the wrong place.. 1 hard drive!!??!!\nYes, I realize 1 hard drive could cause a bottle neck, but on average \nI'm sitting on a 1-2% wait for IO.\n> 2. What is your maintenance regimen? Vacuum, Analyze????\nI'm doing a daily VACUUM ANALYZE, but just to be on the safe side, I \nperformed one manually before I ran my test, thinking that I might have \nto up the frequency.\n>\n> Joshua D. Drake\n>\n>>\n>> Any help appreciated\n>>\n>> Regards\n>>\n>> Willo van der Merwe\n>>\n>\n>\n\n",
"msg_date": "Tue, 29 Aug 2006 17:06:02 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
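A lighter-weight check than a full vacuum is to watch the tables' size in the catalog over time; a rough sketch (values are only as fresh as the last VACUUM/ANALYZE):

  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname IN ('log', 'property_values') AND relkind = 'r';
  -- relpages * 8kB is roughly what a count(*) has to read; if relpages keeps climbing while
  -- reltuples stays flat, dead space is accumulating faster than the daily VACUUM reclaims it
  -- (typically after bulk UPDATE/DELETE between runs, or with an undersized max_fsm_pages).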
{
"msg_contents": "\n> 4 1/2 seconds for a count(*) ?\n\n\tIs this a real website query ? Do you need this query ?\n\n",
"msg_date": "Tue, 29 Aug 2006 17:07:10 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "am Tue, dem 29.08.2006, um 16:55:11 +0200 mailte Willo van der Merwe folgendes:\n> >>4 1/2 seconds for a count(*) ? This seems a bit rough - is there anything \n> >>else\n> >> \n> >\n> >Because of MVCC.\n> >http://www.thescripts.com/forum/thread173678.html\n> >http://www.varlena.com/GeneralBits/120.php\n> >http://www.varlena.com/GeneralBits/49.php\n> >\n> >\n> >Andreas\n> > \n> Hi Andreas,\n> \n> Thanks for your prompt reply. I understand why this is a sequential \n> scan, I'm just a bit perturbed that it takes 4.5 seconds to execute said \n> scan. The table is only 750,000 records big. What happens when this \n> table 7 million records big? Will this query then take 45 seconds to \n> execute?\n\nHow often do you need a 'select count(*) from big_table'?\n\nI assume, not frequently. And if you need realy this, you can write a\ntrigger or read the statistics for the table.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 29 Aug 2006 17:15:33 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
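A minimal sketch of the "read the statistics" option Andreas mentions, assuming the table in question is the log table from the original post; reltuples is only the planner's estimate and is refreshed by VACUUM/ANALYZE, so it is approximate by design:

    -- approximate row count from the planner statistics (8.1)
    SELECT reltuples::bigint AS approx_rows
    FROM pg_class
    WHERE relname = 'log' AND relkind = 'r';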
{
"msg_contents": "On Aug 29, 2006, at 7:52 AM, Willo van der Merwe wrote:\n\n> Hi,\n>\n> We're running PostgreSQL 8.1.4 on CentOS 4 (Linux version \n> 2.6.9-34.0.1.ELsmp). Hardware specs:\n> 2x AMD Dual-Core Opteron 270 Italy 1Ghz HT 2 x 1MB L2 Cache Socket \n> 940\n> 4 GB Registered ECC PC3200 DDR RAM\n> SuperMicro Server-Class 1U AS1020S series system\n> Dual-channel Ultra320 SCSI controller\n> 1 x 73 GB 10,000rpm Ultra320 SCSI drive with 8MB cache\n> I use it to drive a web application. Everything was working fine \n> when all of a sudden today, things went belly up. Load on the \n> server started increasing and query speeds decreased rapidly. After \n> dropping all the clients I did some quick tests and found the \n> following:\n>\n> I have a log table looking like this:\n> Table \"public.log\"\n> Column | Type | Modifiers\n> ---------+----------------------------- \n> +---------------------------------\n> site | bigint | not null\n> stamp | timestamp without time zone | default now()\n> type | character(8) | not null default \n> 'log'::bpchar\n> user | text | not null default \n> 'public'::text\n> message | text |\n> Indexes:\n> \"fki_log_sites\" btree (site)\n> \"ix_log_stamp\" btree (stamp)\n> \"ix_log_type\" btree (\"type\")\n> \"ix_log_user\" btree (\"user\")\n> Foreign-key constraints:\n> \"log_sites\" FOREIGN KEY (site) REFERENCES sites(id) ON UPDATE \n> CASCADE ON DELETE CASCADE\n>\n> and it has 743321 rows and a explain analyze select count(*) from \n> property_values;\n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ------------------------------------------------------------\n> Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual \n> time=4557.797..4557.798 rows=1 loops=1)\n> -> Seq Scan on property_values (cost=0.00..51848.56 \n> rows=1309356 width=0) (actual time=0.026..2581.418 rows=1309498 \n> loops=1)\n> Total runtime: 4557.978 ms\n> (3 rows)\n>\n> 4 1/2 seconds for a count(*) ? This seems a bit rough - is there \n> anything else I can try to optimize my Database? You can imagine \n> that slightly more complex queries goes out the roof.\n>\n> Any help appreciated\n>\n> Regards\n>\n> Willo van der Merwe\n\n\nHi,\n\nWhat about doing a little bit of normalization?\n\nWith 700k rows you could probably gain some improvements by:\n\n* normalizing the type and user columns to integer keys (dropping the \n8 byte overhead for storing the field lengths)\n* maybe change the type column so that its a smallint if there is \njust a small range of possible values (emulating a enum type in other \ndatabases) rather the joining to another table.\n* maybe move message (if the majority of the rows are big and not \nnull but not big enough to be TOASTed, ergo causing only a small \nnumber of rows to fit onto a 8k page) out of this table into a \nseparate table that is joined only when you need the column's content.\n\nDoing these things would fit more rows onto each page, making the \nscan less intensive by not causing the drive to seek as much. Of \ncourse all of these suggestions depend on your workload.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\n\n\nOn Aug 29, 2006, at 7:52 AM, Willo van der Merwe wrote: Hi, We're running PostgreSQL 8.1.4 on CentOS 4 (Linux version 2.6.9-34.0.1.ELsmp). 
Hardware specs: 2x AMD Dual-Core Opteron 270 Italy 1Ghz HT 2 x 1MB L2 Cache Socket 940\n4 GB Registered ECC PC3200 DDR RAM\nSuperMicro Server-Class 1U AS1020S series system\nDual-channel Ultra320 SCSI controller\n1 x 73 GB 10,000rpm Ultra320 SCSI drive with 8MB cache I use it to drive a web application. Everything was working fine when all of a sudden today, things went belly up. Load on the server started increasing and query speeds decreased rapidly. After dropping all the clients I did some quick tests and found the following: I have a log table looking like this: Table \"public.log\" Column | Type | Modifiers ---------+-----------------------------+--------------------------------- site | bigint | not null stamp | timestamp without time zone | default now() type | character(8) | not null default 'log'::bpchar user | text | not null default 'public'::text message | text | Indexes: \"fki_log_sites\" btree (site) \"ix_log_stamp\" btree (stamp) \"ix_log_type\" btree (\"type\") \"ix_log_user\" btree (\"user\") Foreign-key constraints: \"log_sites\" FOREIGN KEY (site) REFERENCES sites(id) ON UPDATE CASCADE ON DELETE CASCADE and it has 743321 rows and a explain analyze select count(*) from property_values; QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual time=4557.797..4557.798 rows=1 loops=1) -> Seq Scan on property_values (cost=0.00..51848.56 rows=1309356 width=0) (actual time=0.026..2581.418 rows=1309498 loops=1) Total runtime: 4557.978 ms (3 rows) 4 1/2 seconds for a count(*) ? This seems a bit rough - is there anything else I can try to optimize my Database? You can imagine that slightly more complex queries goes out the roof. Any help appreciated Regards Willo van der Merwe Hi,What about doing a little bit of normalization? With 700k rows you could probably gain some improvements by:* normalizing the type and user columns to integer keys (dropping the 8 byte overhead for storing the field lengths)* maybe change the type column so that its a smallint if there is just a small range of possible values (emulating a enum type in other databases) rather the joining to another table.* maybe move message (if the majority of the rows are big and not null but not big enough to be TOASTed, ergo causing only a small number of rows to fit onto a 8k page) out of this table into a separate table that is joined only when you need the column's content.Doing these things would fit more rows onto each page, making the scan less intensive by not causing the drive to seek as much. Of course all of these suggestions depend on your workload.Cheers,Rusty --Rusty ConoverInfoGears Inc.",
"msg_date": "Tue, 29 Aug 2006 15:47:17 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
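A rough sketch of the narrower layout Rusty describes, with hypothetical table and column names (log_types, log_users, log_slim and log_message are invented here for illustration, not the poster's schema); the char(8)/text columns become small integer keys and the wide message column moves to a side table that is only joined when needed:

    -- illustrative only; names and types are assumptions
    CREATE TABLE log_types (id smallint PRIMARY KEY, name text NOT NULL UNIQUE);
    CREATE TABLE log_users (id integer  PRIMARY KEY, name text NOT NULL UNIQUE);

    CREATE TABLE log_slim (
        id      bigserial PRIMARY KEY,
        site    bigint   NOT NULL REFERENCES sites(id) ON UPDATE CASCADE ON DELETE CASCADE,
        stamp   timestamp without time zone DEFAULT now(),
        type_id smallint NOT NULL REFERENCES log_types(id),
        user_id integer  NOT NULL REFERENCES log_users(id)
    );

    -- wide payload kept out of the hot table
    CREATE TABLE log_message (
        log_id  bigint PRIMARY KEY REFERENCES log_slim(id),
        message text
    );

With only fixed-width columns left in the hot table, many more rows fit per 8k page, which is what makes the sequential scan cheaper.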
{
"msg_contents": "On 8/29/06, Willo van der Merwe <[email protected]> wrote:\n\n> and it has 743321 rows and a explain analyze select count(*) from\n> property_values;\n>\n\nyou have a number of options:\n1. keep a sequence on the property values and query it. if you want\nexact count you must do some clever locking however. this can be made\nto be exact and very fast.\n2. analyze the table periodically and query pg_class (inexact)\n3. keep a control record and update it in a transaction. this has\nconcurrency issues vs. #1 but is a bit easier to control\n4. normalize\n\nother databases for example mysql optimize the special case select\ncount(*). because of mvcc, postgresql cannot do this easily. you\nwill find that applying any where condition to the count will slow\nthose servers down substantially becuase the special case optimization\ndoes not apply.\n\nI am curious why you need to query the count of records in the log\ntable to six digits of precision.\n\nmerlin\n",
"msg_date": "Tue, 29 Aug 2006 19:39:57 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
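A minimal sketch of Merlin's option 3, a control record kept in step by a trigger (table and function names are invented here, and plpgsql must already be installed with CREATE LANGUAGE plpgsql). As he notes, the single counter row serializes concurrent writers, which may or may not be acceptable on a busy log table:

    CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
    INSERT INTO row_counts VALUES ('log', (SELECT count(*) FROM log));

    CREATE OR REPLACE FUNCTION log_count_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET n = n + 1 WHERE table_name = 'log';
            RETURN NEW;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE row_counts SET n = n - 1 WHERE table_name = 'log';
            RETURN OLD;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER log_count AFTER INSERT OR DELETE ON log
        FOR EACH ROW EXECUTE PROCEDURE log_count_trig();

    -- the expensive count(*) then becomes:
    SELECT n FROM row_counts WHERE table_name = 'log';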
{
"msg_contents": "On Tue, 2006-08-29 at 15:52 +0200, Willo van der Merwe wrote:\n> (cost=0.00..51848.56 rows=1309356 width=0)\n\nIt is going through way more number of rows than what is returned by the\ncount(*).\n\nIt appears that you need to VACUUM the table (not VACUUM ANALYZE).\n\n",
"msg_date": "Tue, 29 Aug 2006 17:16:36 -0700",
"msg_from": "Codelogic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 8/29/06, Willo van der Merwe <[email protected]> wrote:\n>\n>> and it has 743321 rows and a explain analyze select count(*) from\n>> property_values;\n>>\n>\n> you have a number of options:\nAll good ideas and I'll be sure to implement them later.\n\n> I am curious why you need to query the count of records in the log\n> table to six digits of precision.\nI'm not with you you here.\nI'm drawing statistic for the my users on a per user basis in real-time, \nso there are a couple of where clauses attached.\n>\n> merlin\n>\nHi Merlin,\n\nThis was just an example. All queries have slowed down. Could it be that \nI've reached some cut-off and now my disk is thrashing?\n\nCurrently the load looks like this:\nCpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 1.0% si\nCpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\nCpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\nCpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n\n\n",
"msg_date": "Wed, 30 Aug 2006 12:19:53 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "Rusty Conover wrote:\n>\n> On Aug 29, 2006, at 7:52 AM, Willo van der Merwe wrote:\n>\n>> Hi,\n>>\n>> We're running PostgreSQL 8.1.4 on CentOS 4 (Linux version \n>> 2.6.9-34.0.1.ELsmp). Hardware specs:\n>> 2x AMD Dual-Core Opteron 270 Italy 1Ghz HT 2 x 1MB L2 Cache Socket 940\n>> 4 GB Registered ECC PC3200 DDR RAM\n>> SuperMicro Server-Class 1U AS1020S series system\n>> Dual-channel Ultra320 SCSI controller\n>> 1 x 73 GB 10,000rpm Ultra320 SCSI drive with 8MB cache\n>> I use it to drive a web application. Everything was working fine when \n>> all of a sudden today, things went belly up. Load on the server \n>> started increasing and query speeds decreased rapidly. After dropping \n>> all the clients I did some quick tests and found the following:\n>>\n>> I have a log table looking like this:\n>> Table \"public.log\"\n>> Column | Type | Modifiers\n>> ---------+-----------------------------+---------------------------------\n>> site | bigint | not null\n>> stamp | timestamp without time zone | default now()\n>> type | character(8) | not null default 'log'::bpchar\n>> user | text | not null default 'public'::text\n>> message | text |\n>> Indexes:\n>> \"fki_log_sites\" btree (site)\n>> \"ix_log_stamp\" btree (stamp)\n>> \"ix_log_type\" btree (\"type\")\n>> \"ix_log_user\" btree (\"user\")\n>> Foreign-key constraints:\n>> \"log_sites\" FOREIGN KEY (site) REFERENCES sites(id) ON UPDATE \n>> CASCADE ON DELETE CASCADE\n>>\n>> and it has 743321 rows and a explain analyze select count(*) from \n>> property_values;\n>> QUERY \n>> PLAN \n>> ----------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=55121.95..55121.96 rows=1 width=0) (actual \n>> time=4557.797..4557.798 rows=1 loops=1)\n>> -> Seq Scan on property_values (cost=0.00..51848.56 rows=1309356 \n>> width=0) (actual time=0.026..2581.418 rows=1309498 loops=1)\n>> Total runtime: 4557.978 ms\n>> (3 rows)\n>>\n>> 4 1/2 seconds for a count(*) ? This seems a bit rough - is there \n>> anything else I can try to optimize my Database? You can imagine that \n>> slightly more complex queries goes out the roof.\n>>\n>> Any help appreciated\n>>\n>> Regards\n>>\n>> Willo van der Merwe\n>\n>\n> Hi,\n>\n> What about doing a little bit of normalization? \n>\n> With 700k rows you could probably gain some improvements by:\n>\n> * normalizing the type and user columns to integer keys (dropping the \n> 8 byte overhead for storing the field lengths)\n> * maybe change the type column so that its a smallint if there is just \n> a small range of possible values (emulating a enum type in other \n> databases) rather the joining to another table.\n> * maybe move message (if the majority of the rows are big and not null \n> but not big enough to be TOASTed, ergo causing only a small number of \n> rows to fit onto a 8k page) out of this table into a separate table \n> that is joined only when you need the column's content.\n>\n> Doing these things would fit more rows onto each page, making the scan \n> less intensive by not causing the drive to seek as much. Of course \n> all of these suggestions depend on your workload.\n>\n> Cheers,\n>\n> Rusty\n> --\n> Rusty Conover\n> InfoGears Inc.\n>\nHi Rusty,\n\nGood ideas and I've implemented some of them, and gained about 10%. I'm \nstill sitting on a load avg of about 60.\n\nAny ideas on optimizations on my postgresql.conf, that might have an effect?\n\n",
"msg_date": "Wed, 30 Aug 2006 12:48:09 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "On Wed, 30 Aug 2006, Willo van der Merwe wrote:\n\n> Merlin Moncure wrote:\n> > On 8/29/06, Willo van der Merwe <[email protected]> wrote:\n> >\n> >> and it has 743321 rows and a explain analyze select count(*) from\n> >> property_values;\n> >>\n> >\n> > you have a number of options:\n> All good ideas and I'll be sure to implement them later.\n>\n> > I am curious why you need to query the count of records in the log\n> > table to six digits of precision.\n> I'm not with you you here.\n> I'm drawing statistic for the my users on a per user basis in real-time,\n> so there are a couple of where clauses attached.\n\nMost of the advice so far has been aimed at improving the performance of\nthe query you gave. If this query isn't representative of your load then\nyou'll get better advice if you post the queries you are actually making\nalong with EXPLAIN ANALYZE output.\n\n> Hi Merlin,\n>\n> This was just an example. All queries have slowed down. Could it be that\n> I've reached some cut-off and now my disk is thrashing?\n>\n> Currently the load looks like this:\n> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 1.0% si\n> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n\nIt seems to be a sort of standing assumption on this list that databases\nare much larger than memory and that database servers are almost always IO\nbound. This isn't always true, but as we don't know the size of your\ndatabase or working set we can't tell. You'd have to look at your OS's IO\nstatistics to be sure, but it doesn't look to me to be likely that you're\nIO bound.\n\nIf there are significant writes going on then it may also be interesting\nto know your context switch rate and whether dropping your foreign key\nconstraint makes any difference. IIRC your foreign key constraint will\nresult in the row in log_sites being locked FOR UPDATE and cause updates\nand inserts into your log table for a particular site to be serialized (I\nmay be out of date on this, it's a while since I heavily used foreign\nkeys).\n",
"msg_date": "Wed, 30 Aug 2006 12:22:37 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
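One quick way to test the foreign-key serialization theory Alex raises is to look for ungranted locks while the load is high; a rough sketch against the 8.1 system views (current_query is only populated when stats_command_string is on):

    -- backends currently waiting on a lock, and what they are running
    SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

If inserts into the log table for one site regularly show up here waiting on each other, the foreign key is a likely suspect.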
{
"msg_contents": "Alex Hayward wrote:\n> On Wed, 30 Aug 2006, Willo van der Merwe wrote:\n>\n> \n>> Merlin Moncure wrote:\n>> \n>>> On 8/29/06, Willo van der Merwe <[email protected]> wrote:\n>>>\n>>> \n>>>> and it has 743321 rows and a explain analyze select count(*) from\n>>>> property_values;\n>>>>\n>>>> \n>>> you have a number of options:\n>>> \n>> All good ideas and I'll be sure to implement them later.\n>>\n>> \n>>> I am curious why you need to query the count of records in the log\n>>> table to six digits of precision.\n>>> \n>> I'm not with you you here.\n>> I'm drawing statistic for the my users on a per user basis in real-time,\n>> so there are a couple of where clauses attached.\n>> \n>\n> Most of the advice so far has been aimed at improving the performance of\n> the query you gave. If this query isn't representative of your load then\n> you'll get better advice if you post the queries you are actually making\n> along with EXPLAIN ANALYZE output.\n>\n> \n>> Hi Merlin,\n>>\n>> This was just an example. All queries have slowed down. Could it be that\n>> I've reached some cut-off and now my disk is thrashing?\n>>\n>> Currently the load looks like this:\n>> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 1.0% si\n>> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n>> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n>> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n>> \n>\n> It seems to be a sort of standing assumption on this list that databases\n> are much larger than memory and that database servers are almost always IO\n> bound. This isn't always true, but as we don't know the size of your\n> database or working set we can't tell. You'd have to look at your OS's IO\n> statistics to be sure, but it doesn't look to me to be likely that you're\n> IO bound.\n>\n> If there are significant writes going on then it may also be interesting\n> to know your context switch rate and whether dropping your foreign key\n> constraint makes any difference. IIRC your foreign key constraint will\n> result in the row in log_sites being locked FOR UPDATE and cause updates\n> and inserts into your log table for a particular site to be serialized (I\n> may be out of date on this, it's a while since I heavily used foreign\n> keys).\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\nHi Alex,\n\nYes, I haven't noticed any major I/O waits either. The crazy thing here \nis that all the queries were running an an acceptable time limit, but \nthen suddenly it went haywire. I did not change any of the queries or \nfiddle with the server in any way. Previously we've experienced 1 or 2 \nspikes a day (where load would suddenly spike to 67 or so, but then \nquickly drop down to below 4) but in this case it stayed up. So I \nrestarted the service and started fiddling with options, with no \napparent effect.\n",
"msg_date": "Wed, 30 Aug 2006 14:03:43 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "On 8/30/06, Willo van der Merwe <[email protected]> wrote:\n> This was just an example. All queries have slowed down. Could it be that\n> I've reached some cut-off and now my disk is thrashing?\n>\n> Currently the load looks like this:\n> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 1.0% si\n> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 0.3% si\n>\nI don't think so, it looks like you are cpu bound. Your server has a\n(fairly high) budget of records per second it can crunch through. You\nhave hit that limit and backpressure is building up and server load is\nescalating. This almost certainly due to inefficient sql, which is\nvery easy to do especially if you are using some type of middleware\nwhich writes the sql for you. The trick here would be to turn all sql\nlogging on and find out where your budget is getting spent. solving\nthe problem may be a simple matter of adding an index or crafting a\nstored procedure.\n\nmerlin\n",
"msg_date": "Wed, 30 Aug 2006 11:08:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "On Wednesday 30 August 2006 03:48, Willo van der Merwe \n<[email protected]> wrote:\n> Hi Rusty,\n>\n> Good ideas and I've implemented some of them, and gained about 10%. I'm\n> still sitting on a load avg of about 60.\n>\n> Any ideas on optimizations on my postgresql.conf, that might have an\n> effect?\n\nIf all of those sessions are truly doing a select count(*) from a .75 \nmillion row table (plus half a million dead rows), then I'm not suprised \nit's bogged down. Every query has to loop through the cache of the full \ntable in memory every time it's run.\n\nYour CPU is doing something. I doubt that postgresql.conf settings are \ngoing to help. What exactly are all those high CPU usage sessions doing?\n\n-- \n\"Government big enough to supply everything you need is big enough to take\neverything you have ... the course of history shows that as a government\ngrows, liberty decreases.\" -- Thomas Jefferson\n",
"msg_date": "Wed, 30 Aug 2006 09:24:26 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
}
] |
[
{
"msg_contents": "All,\n\nGot a little bit of a performance problem I hope that can be resolved.\n\nAll the files/info I believe you are going to ask for are here:\n\nhttp://www.au.sorbs.net/~matthew/postgres/30.8.06/\n\nThe odd thing was it originally was fast (1-2 seconds) which is all I \nneed - the query is a permissions check and I added a permissions \ncaching engine to the client code. However, I was testing part of my \nnew interface and added and \"expired\" some rows in the permissions, and \nauthorisation tables (taking the row count to ~15) the performance \ndropped to 86seconds (ish) which is unusable... :-(\n\nUnfortunately I do not have a query plan from before the performance issue.\n\nwork_mem has been adjusted from 512 to 8192, 65536 and 1000000 with no \napparent effect.\nrandom_page_cost has been 4 and 2 - 2 results in 89seconds for the query.\n\nThe hardware is a Compaq 6400r with 4G of EDO RAM, 4x500MHz Xeons and a \nCompaq RAID 3200 in RAID 5 configuration running across 3 spindles (34G \ntotal space).\n\nThe OS is FreeBSD 5.4-RELEASE-p14\nThe PG Version is 8.1.3\n\nSolutions/tips greatly appreciated.\n\nRegards,\n\nMat\n",
"msg_date": "Wed, 30 Aug 2006 19:29:35 +1000",
"msg_from": "Matthew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance problems."
},
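Given that rows were just added and expired in the permissions and authorisation tables, one cheap first step (before adjusting work_mem or random_page_cost further) is to refresh the statistics on exactly those tables and re-check the plan; the table names below are guesses based on the description:

    VACUUM ANALYZE permissions;
    VACUUM ANALYZE authorisation;
    -- then re-run the permissions-check query under EXPLAIN ANALYZE and compare plans

If the estimated row counts in the new plan still look far off from the actual ones, that points at stale or insufficient statistics rather than the hardware or memory settings.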
{
"msg_contents": "On Aug 30, 2006, at 5:29 AM, Matthew Sullivan wrote:\n\n> The hardware is a Compaq 6400r with 4G of EDO RAM, 4x500MHz Xeons \n> and a Compaq RAID 3200 in RAID 5 configuration running across 3 \n> spindles (34G total space).\n>\n> The OS is FreeBSD 5.4-RELEASE-p14\n> The PG Version is 8.1.3\n\nWhat else does this box do?\n\nI think you should try these settings, which I use on 4GB dual \nOpteron boxes running FreeBSD 6.x dedicated to Postgres only. Your \neffective_cache_size seems overly optimistic for freebsd. cranking \nup the shared buffers seems to be one of the best bangs for the buck \nunder pg 8.1. I recently doubled them and nearly tripled my \nperformance on a massive write-mostly (insert/update) load. Unless \nyour disk system is *really* slow, random_page_cost should be reduced \nfrom the default 4.\n\nAs you can see, I change *very* little from the default config.\n\n\nshared_buffers = 70000 # min 16 or \nmax_connections*2, 8KB each\nwork_mem = 262144 # min 64, size in KB\nmaintenance_work_mem = 524288 # min 1024, size in KB\n\ncheckpoint_segments = 256\ncheckpoint_timeout = 900\n\neffective_cache_size = 27462 # `sysctl -n \nvfs.hibufspace` / 8192 (BLKSZ)\nrandom_page_cost = 2\n\nif you're feeling adventurous try these to reduce the checkpoint \nimpact on the system:\n\nbgwriter_lru_percent = 2.0\nbgwriter_lru_maxpages = 40\nbgwriter_all_percent = 0.666\nbgwriter_all_maxpages = 40\n\n\n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. MailerMailer, LLC Rockville, MD\nhttp://www.MailerMailer.com/ +1-301-869-4449 x806",
"msg_date": "Wed, 30 Aug 2006 10:10:28 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "On Wed, Aug 30, 2006 at 10:10:28AM -0400, Vivek Khera wrote:\n> effective_cache_size = 27462 # `sysctl -n \n> vfs.hibufspace` / 8192 (BLKSZ)\n> random_page_cost = 2\n\nYou misunderstand how effective_cache_size is used. It's the *only*\nmemory factor that plays a role in cost estimator functions. This means\nit should include the memory set aside for caching in shared_buffers.\n\nAlso, hibufspace is only talking about filesystem buffers in FreeBSD,\nwhich AFAIK has nothing to do with total memory available for caching,\nsince VM pages are also used to cache data.\n\nBasically, your best bet for setting effective_cache_size is to use the\ntotal memory in the machine, and substract some overhead for the OS and\nother processes. I'll typically subtract 1G.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 30 Aug 2006 11:26:57 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
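Applied to the 4GB boxes in this thread, that rule of thumb works out roughly as follows (a sketch, assuming about 1GB reserved for the OS and other processes): 4096MB minus 1024MB leaves 3072MB of likely cache, and 3072MB divided by the 8KB page size gives 393216, i.e.

    effective_cache_size = 393216    # roughly 3GB expressed in 8KB pages on a 4GB machine

which is an order of magnitude larger than the ~27000 derived from vfs.hibufspace.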
{
"msg_contents": "On Wed, 30 Aug 2006, Jim C. Nasby wrote:\n\n> On Wed, Aug 30, 2006 at 10:10:28AM -0400, Vivek Khera wrote:\n> > effective_cache_size = 27462 # `sysctl -n\n> > vfs.hibufspace` / 8192 (BLKSZ)\n> > random_page_cost = 2\n>\n> You misunderstand how effective_cache_size is used. It's the *only*\n> memory factor that plays a role in cost estimator functions. This means\n> it should include the memory set aside for caching in shared_buffers.\n>\n> Also, hibufspace is only talking about filesystem buffers in FreeBSD,\n> which AFAIK has nothing to do with total memory available for caching,\n> since VM pages are also used to cache data.\n\nI believe it's not talking about quantities of buffers at all, but about\nkernel virtual address space. It's something like the amount of kernel\nvirtual address space available for mapping buffer-cache pages in to\nkernel memory. It certainly won't tell you (or even approximate) how much\nPostgreSQL data is being cached by the OS. Cached PostgreSQL data will\nappear in the active, inactive and cached values - and (AFAIK) there isn't\nany distinction between file-backed pages and swap-backed pages amongst\nthose.\n",
"msg_date": "Wed, 30 Aug 2006 19:22:18 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "Vivek Khera wrote:\n\n>\n> On Aug 30, 2006, at 5:29 AM, Matthew Sullivan wrote:\n>\n>> The hardware is a Compaq 6400r with 4G of EDO RAM, 4x500MHz Xeons \n>> and a Compaq RAID 3200 in RAID 5 configuration running across 3 \n>> spindles (34G total space).\n>>\n>> The OS is FreeBSD 5.4-RELEASE-p14\n>> The PG Version is 8.1.3\n>\n>\n> What else does this box do?\n\nNotihing - it's the developement DB and is dedicated to the development \nwebsite - which has a total number of users of '1' ;-)\n\n> I think you should try these settings, which I use on 4GB dual \n> Opteron boxes running FreeBSD 6.x dedicated to Postgres only. Your \n> effective_cache_size seems overly optimistic for freebsd. cranking \n> up the shared buffers seems to be one of the best bangs for the buck \n> under pg 8.1. I recently doubled them and nearly tripled my \n> performance on a massive write-mostly (insert/update) load. Unless \n> your disk system is *really* slow, random_page_cost should be reduced \n> from the default 4.\n\nI'll give this a try.\n\n>\n> As you can see, I change *very* little from the default config.\n>\n>\n> shared_buffers = 70000 # min 16 or \n> max_connections*2, 8KB each\n> work_mem = 262144 # min 64, size in KB\n> maintenance_work_mem = 524288 # min 1024, size in KB\n>\n> checkpoint_segments = 256\n> checkpoint_timeout = 900\n>\n> effective_cache_size = 27462 # `sysctl -n vfs.hibufspace` \n> / 8192 (BLKSZ)\n> random_page_cost = 2\n>\n> if you're feeling adventurous try these to reduce the checkpoint \n> impact on the system:\n>\n> bgwriter_lru_percent = 2.0\n> bgwriter_lru_maxpages = 40\n> bgwriter_all_percent = 0.666\n> bgwriter_all_maxpages = 40\n>\nThat might have some impact on the production server (which is also \nrunning PG - but the old DB and RT3) however the new DB is only me in \ndevel, so I think that it will not have much of an effect (I'll still \ntry it though)\n\nRegards,\n\nMat\n",
"msg_date": "Thu, 31 Aug 2006 08:17:46 +1000",
"msg_from": "Matthew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "Matthew Sullivan wrote:\n\n> \n> The OS is FreeBSD 5.4-RELEASE-p14\n> The PG Version is 8.1.3\n> \n> Solutions/tips greatly appreciated.\n> \n\nThis won't help this particular query, but 6.1-RELEASE will possibly be \na better performer generally, in particular for your SMP system - e.g. \nthe vfs layer is no longer under the Giant lock in the 6.x series, so \nparallel io should be much better!\n\nCheers\n\nMark\n",
"msg_date": "Thu, 31 Aug 2006 11:36:11 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "\nOn 30-Aug-06, at 10:10 AM, Vivek Khera wrote:\n\n>\n> On Aug 30, 2006, at 5:29 AM, Matthew Sullivan wrote:\n>\n>> The hardware is a Compaq 6400r with 4G of EDO RAM, 4x500MHz Xeons \n>> and a Compaq RAID 3200 in RAID 5 configuration running across 3 \n>> spindles (34G total space).\n>>\n>> The OS is FreeBSD 5.4-RELEASE-p14\n>> The PG Version is 8.1.3\n>\n> What else does this box do?\n>\n> I think you should try these settings, which I use on 4GB dual \n> Opteron boxes running FreeBSD 6.x dedicated to Postgres only. Your \n> effective_cache_size seems overly optimistic for freebsd. cranking \n> up the shared buffers seems to be one of the best bangs for the \n> buck under pg 8.1. I recently doubled them and nearly tripled my \n> performance on a massive write-mostly (insert/update) load. Unless \n> your disk system is *really* slow, random_page_cost should be \n> reduced from the default 4.\n>\nActually unless you have a ram disk you should probably leave \nrandom_page_cost at 4, shared buffers should be 2x what you have \nhere, maintenance work mem is pretty high\neffective cache should be much larger 3/4 of 4G or about 360000\n\nSetting work _mem this high should be done with caution. From the \nmanual \"Note that for a complex query, several sort or hash \noperations might be running in parallel; each one will be allowed to \nuse as much memory as this value specifies before it starts to put \ndata into temporary files. Also, several running sessions could be \ndoing such operations concurrently. So the total memory used could be \nmany times the value of work_mem\"\n> As you can see, I change *very* little from the default config.\n>\n>\n> shared_buffers = 70000 # min 16 or \n> max_connections*2, 8KB each\n> work_mem = 262144 # min 64, size in KB\n> maintenance_work_mem = 524288 # min 1024, size in KB\n>\n> checkpoint_segments = 256\n> checkpoint_timeout = 900\n>\n> effective_cache_size = 27462 # `sysctl -n \n> vfs.hibufspace` / 8192 (BLKSZ)\n> random_page_cost = 2\n>\n> if you're feeling adventurous try these to reduce the checkpoint \n> impact on the system:\n>\n> bgwriter_lru_percent = 2.0\n> bgwriter_lru_maxpages = 40\n> bgwriter_all_percent = 0.666\n> bgwriter_all_maxpages = 40\n>\n>\n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. MailerMailer, LLC Rockville, MD\n> http://www.MailerMailer.com/ +1-301-869-4449 x806\n>\n>\n\n",
"msg_date": "Wed, 30 Aug 2006 19:48:12 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
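To put numbers on that warning for the configuration quoted above: work_mem = 262144 (KB) is 256MB per sort or hash step, so eight such steps running concurrently could use around 2GB, and sixteen would nominally exceed the whole 4GB in the machine before counting shared_buffers and the OS cache. With only a handful of connections, as Vivek describes, that is fine; with hundreds of busy backends, as in the other thread here, it would not be.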
{
"msg_contents": "On Aug 30, 2006, at 12:26 PM, Jim C. Nasby wrote:\n\n> You misunderstand how effective_cache_size is used. It's the *only*\n> memory factor that plays a role in cost estimator functions. This \n> means\n> it should include the memory set aside for caching in shared_buffers.\n>\n> Also, hibufspace is only talking about filesystem buffers in FreeBSD,\n> which AFAIK has nothing to do with total memory available for caching,\n> since VM pages are also used to cache data.\n>\n\nCurious... See Message-ID: <[email protected]> \nfrom the October 2003 archives. (I'd provide a full link to it, but \nthe http://archives.postgresql.org/pgsql-performance/ archives are \nbotched -- only some posts are on the browsable archive but it is all \nin the raw mailbox download, so that's the only way to get the full \nmessage.) It reads in part:\n\nFrom: Sean Chittenden <[email protected]>\nDate: Sat, 11 Oct 2003 02:23:08 -0700\n\n [...]\n > echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n >\n > I've used it for my dedicated servers. Is this calculation correct?\n\nYes, or it's real close at least. vfs.hibufspace is the amount of\nkernel space that's used for caching IO operations (minus the\nnecessary space taken for the kernel). If you're real paranoid, you\ncould do some kernel profiling and figure out how much of the cache is\nactually disk IO and multiply the above by some percentage, say 80%?\nI haven't found it necessary to do so yet. Since hibufspace is all IO\nand caching any net activity is kinda pointless and I assume that 100%\nof it is used for a disk cache and don't use a multiplier. The 8192,\nhowever, is the size of a PG page, so, if you tweak PG's page size,\nyou have to change this constant (*grumbles*).\n\n--END QUOTE--\n\nGiven who Sean is, I tend to believe him. Whether this is still \nvalid for FreeBSD 6.x, I'm unable to verify.\n\n> Basically, your best bet for setting effective_cache_size is to use \n> the\n> total memory in the machine, and substract some overhead for the OS \n> and\n> other processes. I'll typically subtract 1G.\n\nI'll give this a whirl and see if it helps.\n\nAny opinions on using the FreeBSD sysctl kern.ipc.shm_use_phys to \nbypass the VM system for shared pages?",
"msg_date": "Thu, 31 Aug 2006 14:10:55 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "On Aug 30, 2006, at 7:48 PM, Dave Cramer wrote:\n\n> Actually unless you have a ram disk you should probably leave \n> random_page_cost at 4, shared buffers should be 2x what you have \n> here, maintenance work mem is pretty high\n> effective cache should be much larger 3/4 of 4G or about 360000\n>\n\nI've been pondering bumping up SHM settings more, but it is a very \nbig imposition to have to restart the production server to do so. \nThis weekend being a long weekend might be a good opportunity to try \nit, though...\n\nAs for maintenence mem, when you have HUGE tables, you want to give a \nlot of memory to vacuum. With 4GB of RAM giving it 512MB is not an \nissue.\n\nThe effective cache size is the big issue with FreeBSD. There are \nopposing claims of how much memory it will use for cache, and throw \nin the kern.ipc.shm_use_phys sysctl which causes SHM to bypass the VM \nsystem entirely, and who knows what's going on.\n\n> Setting work _mem this high should be done with caution. From the \n> manual \"Note that for a complex query, several sort or hash \n> operations might be running in parallel; each one will be allowed \n> to use as much memory as this value specifies before it starts to \n> put data into temporary files. Also, several running sessions could \n> be doing such operations concurrently. So the total memory used \n> could be many times the value of work_mem\"\n\nAgain, with boat-loads of RAM why not let the queries use it? We \nonly have a handful of connections at a time so that's not eating up \nmuch memory...",
"msg_date": "Thu, 31 Aug 2006 14:15:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "\nOn 31-Aug-06, at 2:15 PM, Vivek Khera wrote:\n\n>\n> On Aug 30, 2006, at 7:48 PM, Dave Cramer wrote:\n>\n>> Actually unless you have a ram disk you should probably leave \n>> random_page_cost at 4, shared buffers should be 2x what you have \n>> here, maintenance work mem is pretty high\n>> effective cache should be much larger 3/4 of 4G or about 360000\n>>\n>\n> I've been pondering bumping up SHM settings more, but it is a very \n> big imposition to have to restart the production server to do so. \n> This weekend being a long weekend might be a good opportunity to \n> try it, though...\n>\n> As for maintenence mem, when you have HUGE tables, you want to give \n> a lot of memory to vacuum. With 4GB of RAM giving it 512MB is not \n> an issue.\n>\n> The effective cache size is the big issue with FreeBSD. There are \n> opposing claims of how much memory it will use for cache, and throw \n> in the kern.ipc.shm_use_phys sysctl which causes SHM to bypass the \n> VM system entirely, and who knows what's going on.\n\nYes, I have to admit, the setting I proposed works well for linux, \nbut may not for bsd.\n>\n>> Setting work _mem this high should be done with caution. From the \n>> manual \"Note that for a complex query, several sort or hash \n>> operations might be running in parallel; each one will be allowed \n>> to use as much memory as this value specifies before it starts to \n>> put data into temporary files. Also, several running sessions \n>> could be doing such operations concurrently. So the total memory \n>> used could be many times the value of work_mem\"\n>\n> Again, with boat-loads of RAM why not let the queries use it? We \n> only have a handful of connections at a time so that's not eating \n> up much memory...\n>\nAs long as you are aware of the ramifications....\n\n",
"msg_date": "Thu, 31 Aug 2006 14:28:31 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems."
},
{
"msg_contents": "Vivek Khera <[email protected]> writes:\n> Curious... See Message-ID: <[email protected]> \n> from the October 2003 archives. (I'd provide a full link to it, but \n> the http://archives.postgresql.org/pgsql-performance/ archives are \n> botched --\n\nStill? I found it easily enough with a search for 'hibufspace':\nhttp://archives.postgresql.org/pgsql-performance/2003-10/msg00383.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2006 15:08:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems. "
},
{
"msg_contents": "On Aug 31, 2006, at 3:08 PM, Tom Lane wrote:\n\n> Vivek Khera <[email protected]> writes:\n>> Curious... See Message-ID: <[email protected]>\n>> from the October 2003 archives. (I'd provide a full link to it, but\n>> the http://archives.postgresql.org/pgsql-performance/ archives are\n>> botched --\n>\n> Still? I found it easily enough with a search for 'hibufspace':\n> http://archives.postgresql.org/pgsql-performance/2003-10/msg00383.php\n>\n> \t\t\tregards, tom lane\n\ngo to \"view by thread\" or \"view by date\" for October 2003. Or August \n2003. Most messages are missing.",
"msg_date": "Thu, 31 Aug 2006 15:40:39 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problems. "
}
] |
[
{
"msg_contents": "> Currently the load looks like this:\n> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, 1.0% si\n> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, 0.3% si\n> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, 0.3% si\n> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, 0.3% si\n\nAll four CPUs are hammered busy - check \"top\" and look for runaway\nprocesses.\n\n- Luke\n\n",
"msg_date": "Wed, 30 Aug 2006 06:34:03 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
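Alongside top, the database's own activity view can show which of those backends have been running a statement the longest; a minimal sketch for 8.1, assuming stats_command_string is enabled so current_query is populated:

    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start
    LIMIT 20;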
{
"msg_contents": "Luke Lonergan wrote:\n>> Currently the load looks like this:\n>> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>> 0.0% hi, 1.0% si\n>> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>> 0.0% hi, 0.3% si\n>> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>> 0.0% hi, 0.3% si\n>> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>> 0.0% hi, 0.3% si\n>> \n>\n> All four CPUs are hammered busy - check \"top\" and look for runaway\n> processes.\n>\n> - Luke\n>\n>\n> \nYes, the first 463 process are all postgres. In the meanwhile I've done:\nDropped max_connections from 500 to 250 and\nUpped shared_buffers = 50000\n\nWithout any apparent effect.\n",
"msg_date": "Wed, 30 Aug 2006 13:35:20 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "\nOn 30-Aug-06, at 7:35 AM, Willo van der Merwe wrote:\n\n> Luke Lonergan wrote:\n>>> Currently the load looks like this:\n>>> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>> hi, 1.0% si\n>>> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>> hi, 0.3% si\n>>> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>> hi, 0.3% si\n>>> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>> hi, 0.3% si\n>>>\n>>\n>> All four CPUs are hammered busy - check \"top\" and look for runaway\n>> processes.\n>>\n>> - Luke\n>>\n>>\n>>\n> Yes, the first 463 process are all postgres. In the meanwhile I've \n> done:\n> Dropped max_connections from 500 to 250 and\n> Upped shared_buffers = 50000\n\nWith 4G of memory you can push shared buffers to double that.\neffective_cache should be 3/4 of available memory.\n\nCan you also check vmstat 1 for high context switches during this \nquery, high being over 100k\n\nDave\n>\n> Without any apparent effect.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Wed, 30 Aug 2006 08:35:08 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "Dave Cramer wrote:\n>\n> On 30-Aug-06, at 7:35 AM, Willo van der Merwe wrote:\n>\n>> Luke Lonergan wrote:\n>>>> Currently the load looks like this:\n>>>> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>>> hi, 1.0% si\n>>>> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>>> hi, 0.3% si\n>>>> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>>> hi, 0.3% si\n>>>> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% \n>>>> hi, 0.3% si\n>>>>\n>>>\n>>> All four CPUs are hammered busy - check \"top\" and look for runaway\n>>> processes.\n>>>\n>>> - Luke\n>>>\n>>>\n>>>\n>> Yes, the first 463 process are all postgres. In the meanwhile I've done:\n>> Dropped max_connections from 500 to 250 and\n>> Upped shared_buffers = 50000\n>\n> With 4G of memory you can push shared buffers to double that.\n> effective_cache should be 3/4 of available memory.\n>\n> Can you also check vmstat 1 for high context switches during this \n> query, high being over 100k\n>\n> Dave\n>>\n>> Without any apparent effect.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>\n>\nHi Dave,\n\nOk, I've upped shared_buffers = 150000\nand effective_cache_size = 100000\n\nand restarted the service\ntop now reads:\n\ntop - 15:08:28 up 20:12, 1 user, load average: 19.55, 22.48, 26.59\nTasks: 132 total, 24 running, 108 sleeping, 0 stopped, 0 zombie\nCpu0 : 97.0% us, 1.0% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.3% hi, 1.3% si\nCpu1 : 98.3% us, 1.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu2 : 98.0% us, 1.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.3% si\nCpu3 : 96.7% us, 3.3% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si\nMem: 4060084k total, 2661772k used, 1398312k free, 108152k buffers\nSwap: 4192956k total, 0k used, 4192956k free, 2340936k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11446 postgres 17 0 1280m 97m 95m R 28.9 2.5 0:03.63 postmaster\n11435 postgres 16 0 1279m 120m 117m R 26.9 3.0 0:05.18 postmaster\n11438 postgres 16 0 1279m 31m 30m R 24.6 0.8 0:04.43 postmaster\n11163 postgres 16 0 1279m 120m 118m R 23.2 3.0 0:42.61 postmaster\n11167 postgres 16 0 1279m 120m 118m R 23.2 3.0 0:41.04 postmaster\n11415 postgres 15 0 1279m 299m 297m R 22.2 7.5 0:07.07 postmaster\n11428 postgres 15 0 1279m 34m 32m R 21.9 0.9 0:05.53 postmaster\n11225 postgres 16 0 1279m 31m 30m R 21.6 0.8 0:34.95 postmaster\n11298 postgres 16 0 1279m 118m 117m R 21.6 3.0 0:23.82 postmaster\n11401 postgres 15 0 1279m 31m 30m R 21.6 0.8 0:08.18 postmaster\n11377 postgres 15 0 1279m 122m 120m R 20.9 3.1 0:09.54 postmaster\n11357 postgres 17 0 1280m 126m 123m R 19.9 3.2 0:13.98 postmaster\n11415 postgres 16 0 1279m 299m 297m R 17.1 7.5 0:06.40 postmaster\n11461 postgres 17 0 1279m 81m 78m R 17.1 2.0 0:00.77 postmaster\n11357 postgres 15 0 1279m 120m 118m S 16.8 3.0 0:13.38 postmaster\n11458 postgres 16 0 1279m 31m 30m R 15.8 0.8 0:00.97 postmaster\n11446 postgres 15 0 1279m 31m 30m S 15.5 0.8 0:02.76 postmaster\n11428 postgres 15 0 1279m 34m 32m S 15.2 0.9 0:04.87 postmaster\n11435 postgres 16 0 1279m 120m 117m R 14.2 3.0 0:04.37 postmaster\n11466 postgres 16 0 1279m 33m 32m S 7.9 0.9 0:00.24 postmaster\n\nload avg is climbing...\n\nvmstat 1\n\nI don't see any cs > 100k\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo 
in cs us sy \nid wa\n33 0 0 1352128 108248 2352604 0 0 7 33 147 26 65 \n2 33 0\n19 0 0 1348360 108264 2352656 0 0 0 348 3588 1408 98 \n2 0 0\n26 0 0 1346024 108264 2352996 0 0 0 80 3461 1154 98 \n2 0 0\n27 0 0 1349496 108264 2352996 0 0 0 100 3611 1199 98 \n2 0 0\n31 0 0 1353872 108264 2353064 0 0 0 348 3329 1227 97 \n2 0 0\n21 0 0 1352528 108264 2353064 0 0 0 80 3201 1437 97 \n2 0 0\n28 0 0 1352096 108280 2353184 0 0 0 64 3579 1073 98 \n2 0 0\n29 0 0 1352096 108284 2353180 0 0 0 0 3538 1293 98 \n2 0 0\n28 0 0 1351776 108288 2353244 0 0 0 36 3339 1313 99 \n1 0 0\n22 0 0 1366392 108288 2353244 0 0 0 588 3663 1303 99 \n1 0 0\n27 0 0 1366392 108288 2353312 0 0 0 84 3276 1028 99 \n1 0 0\n28 0 0 1365504 108296 2353372 0 0 0 140 3500 1164 98 \n2 0 0\n26 0 0 1368272 108296 2353372 0 0 0 68 3268 1082 98 \n2 0 0\n25 0 0 1372232 108296 2353508 0 0 0 260 3261 1278 97 \n3 0 0\n26 0 0 1366056 108296 2353644 0 0 0 0 3268 1178 98 \n2 0 0\n24 1 0 1368704 108296 2353780 0 0 0 1788 3548 1614 97 \n3 0 0\n29 0 0 1367728 108296 2353304 0 0 0 60 3637 1105 99 \n1 0 0\n21 0 0 1365224 108300 2353640 0 0 0 12 3257 918 99 \n1 0 0\n27 0 0 1363944 108300 2354116 0 0 0 72 3052 1365 98 \n2 0 0\n25 0 0 1366968 108300 2354184 0 0 0 212 3314 1696 99 \n1 0 0\n30 0 0 1363552 108300 2354184 0 0 0 72 3147 1420 97 \n2 0 0\n27 0 0 1367792 108300 2354184 0 0 0 184 3245 1310 97 \n2 0 0\n21 0 0 1369088 108308 2354380 0 0 0 140 3306 987 98 \n2 0 0\n11 1 0 1366056 108308 2354448 0 0 0 88 3210 1183 98 \n1 0 0\n27 0 0 1361104 108308 2354516 0 0 0 0 3598 1015 98 \n2 0 0\n28 0 0 1356808 108308 2354584 0 0 0 64 2835 1326 98 \n2 0 0\n 3 0 0 1352888 108308 2354856 0 0 0 88 2829 1111 97 \n3 0 0\n29 0 0 1351408 108316 2354848 0 0 0 180 2916 939 97 \n3 0 0\n30 0 0 1352568 108316 2354848 0 0 0 112 2962 1122 98 \n2 0 0\n29 0 0 1356936 108316 2355052 0 0 0 176 2987 976 98 \n2 0 0\n27 0 0 1363816 108316 2355188 0 0 0 220 2990 1809 98 \n2 0 0\n24 0 0 1361944 108316 2355256 0 0 0 0 3043 1213 98 \n2 0 0\n24 0 0 1368808 108324 2355248 0 0 0 112 3168 1464 98 \n2 0 0\n24 0 0 1370120 108324 2355248 0 0 0 112 3179 997 99 \n1 0 0\n12 0 0 1370752 108324 2355248 0 0 0 16 3255 1081 97 \n3 0 0\n26 0 0 1372752 108324 2355248 0 0 0 112 3416 1169 98 \n2 0 0\n27 0 0 1369088 108324 2355248 0 0 0 0 3011 828 98 \n2 0 0\n20 0 0 1366848 108324 2355316 0 0 0 64 3062 959 98 \n2 0 0\n26 0 0 1368064 108328 2355312 0 0 0 264 3069 1064 97 \n3 0 0\n24 0 0 1365624 108328 2355448 0 0 0 152 2940 1344 98 \n2 0 0\n26 0 0 1363880 108328 2355584 0 0 0 128 3294 1122 98 \n2 0 0\n26 0 0 1370048 108328 2355652 0 0 0 152 3198 1340 97 \n3 0 0\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n12 0 0 1369344 108328 2355720 0 0 0 184 2994 1030 98 \n2 0 0\n",
"msg_date": "Wed, 30 Aug 2006 15:12:41 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
{
"msg_contents": "That's an interesting situation. Your CPU's are pegged, and you're hardly\ndoing any IO. I wonder if there is some ineficient query, or if its just\nvery high query volume. Maybe you could try setting\nlog_min_duration_statement to try to track down the slowest of the queries.\nThen post the slow queries with an explain analyze to the list.\n \nHere is some info on setting up logging:\nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-logging.html\n \nAre your queries standard SQL or do you call functions you wrote in PL/pgSQl\nor PL/Python or anything?\n \n \n\n\n\nMessage\n\n\nThat's an \ninteresting situation. Your CPU's are pegged, and you're hardly doing any \nIO. I wonder if there is some ineficient query, or if its just very high \nquery volume. Maybe you could try setting log_min_duration_statement to \ntry to track down the slowest of the queries. Then post the slow queries \nwith an explain analyze to the list.\n \nHere is some info \non setting up logging:\nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-logging.html\n \n\nAre your queries \nstandard SQL or do you call functions you wrote in PL/pgSQl or PL/Python or \nanything?",
"msg_date": "Wed, 30 Aug 2006 09:03:01 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
},
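A minimal version of the logging setup Dave points to, for 8.1; the threshold here is illustrative and should be set high enough not to flood the log:

    # postgresql.conf (8.1)
    log_min_duration_statement = 200   # milliseconds; -1 disables, 0 logs every statement
    stats_command_string = on          # lets pg_stat_activity show the running query

Once the slow statements are identified, EXPLAIN ANALYZE output for them is what the list will want to see.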
{
"msg_contents": "Dave Dutcher wrote:\n> That's an interesting situation. Your CPU's are pegged, and you're \n> hardly doing any IO. I wonder if there is some ineficient query, or \n> if its just very high query volume. Maybe you could try setting \n> log_min_duration_statement to try to track down the slowest of the \n> queries. Then post the slow queries with an explain analyze to the list.\n> \n> Here is some info on setting up logging:\n> http://www.postgresql.org/docs/8.1/interactive/runtime-config-logging.html\n> \n> Are your queries standard SQL or do you call functions you wrote in \n> PL/pgSQl or PL/Python or anything?\n> \n> \nIt might be a combo of queries and load. My queries use almost \nexclusively functions, but on an unloaded dev machine performs its \nqueries in aprox 10ms. When is it appropriate to start clustering \ndatabase servers?\n",
"msg_date": "Wed, 30 Aug 2006 16:48:57 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
}
] |
[
{
"msg_contents": "Interesting - in this quick snapshot there is no I/O happening at all.\nWhat happens when you track the activity for a longer period of time?\n\nHow about just capturing vmstat during a period when the queries are\nslow?\n\nHas the load average been this high forever or are you experiencing a\ngrowth in workload? 463 processes all doing CPU work will take 100x as\nlong as one query on a 4 CPU box, have you worked through how long you\nshould expect the queries to take?\n\n- Luke \n\n> -----Original Message-----\n> From: Willo van der Merwe [mailto:[email protected]] \n> Sent: Wednesday, August 30, 2006 4:35 AM\n> To: Luke Lonergan\n> Cc: Merlin Moncure; [email protected]\n> Subject: Re: [PERFORM] PostgreSQL performance issues\n> \n> Luke Lonergan wrote:\n> >> Currently the load looks like this:\n> >> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, \n> >> 1.0% si\n> >> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, \n> >> 0.3% si\n> >> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, \n> >> 0.3% si\n> >> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, \n> 0.0% hi, \n> >> 0.3% si\n> >> \n> >\n> > All four CPUs are hammered busy - check \"top\" and look for runaway \n> > processes.\n> >\n> > - Luke\n> >\n> >\n> > \n> Yes, the first 463 process are all postgres. In the meanwhile \n> I've done:\n> Dropped max_connections from 500 to 250 and Upped \n> shared_buffers = 50000\n> \n> Without any apparent effect.\n> \n> \n\n",
"msg_date": "Wed, 30 Aug 2006 07:41:51 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL performance issues"
},
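To act on Luke's "look for runaway processes" advice from inside the database rather than from top, a hedged sketch against the 8.1 pg_stat_activity view (it assumes stats_command_string = on; otherwise current_query only shows a placeholder):

    SELECT procpid, usename, datname, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;

Backends whose query_start is far in the past but which still show a real query are the first candidates to examine.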
{
"msg_contents": "That's exactly what I'm experiencing.\n\nEverything was fine until yesterday, when we noticed a considerable site \nslow-down. Graphs showed the server suddenly spiking to a load of 67. At \nfirst I thought somebody executed a ran-away query, so I restarted \npostgres, but after it came back up, it climbed back up to this load.\n\nIn the meanwhile I've applied some table level optimizations and the \npostgres.conf optimizatrions ... nothing\n\nHere's the vmstat output, since reboot last night\n\n[root@srv1 ~]# vmstat -a\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free inact active si so bi bo in cs us sy \nid wa\n27 0 0 595312 248100 2962764 0 0 8 31 105 7 63 \n2 35 0\n[root@srv1 ~]# vmstat -d\ndisk- ------------reads------------ ------------writes----------- \n-----IO------\n total merged sectors ms total merged sectors ms \ncur sec\nram0 0 0 0 0 0 0 0 0 \n0 0\nram1 0 0 0 0 0 0 0 0 \n0 0\nram2 0 0 0 0 0 0 0 0 \n0 0\nram3 0 0 0 0 0 0 0 0 \n0 0\nram4 0 0 0 0 0 0 0 0 \n0 0\nram5 0 0 0 0 0 0 0 0 \n0 0\nram6 0 0 0 0 0 0 0 0 \n0 0\nram7 0 0 0 0 0 0 0 0 \n0 0\nram8 0 0 0 0 0 0 0 0 \n0 0\nram9 0 0 0 0 0 0 0 0 \n0 0\nram10 0 0 0 0 0 0 0 0 \n0 0\nram11 0 0 0 0 0 0 0 0 \n0 0\nram12 0 0 0 0 0 0 0 0 \n0 0\nram13 0 0 0 0 0 0 0 0 \n0 0\nram14 0 0 0 0 0 0 0 0 \n0 0\nram15 0 0 0 0 0 0 0 0 \n0 0\nsda 197959 38959 4129737 952923 777438 1315162 16839981 \n39809324 0 2791\nfd0 0 0 0 0 0 0 0 0 \n0 0\nmd0 0 0 0 0 0 0 0 0 \n0 0\n\n\n\nLuke Lonergan wrote:\n> Interesting - in this quick snapshot there is no I/O happening at all.\n> What happens when you track the activity for a longer period of time?\n>\n> How about just capturing vmstat during a period when the queries are\n> slow?\n>\n> Has the load average been this high forever or are you experiencing a\n> growth in workload? 463 processes all doing CPU work will take 100x as\n> long as one query on a 4 CPU box, have you worked through how long you\n> should expect the queries to take?\n>\n> - Luke \n>\n> \n>> -----Original Message-----\n>> From: Willo van der Merwe [mailto:[email protected]] \n>> Sent: Wednesday, August 30, 2006 4:35 AM\n>> To: Luke Lonergan\n>> Cc: Merlin Moncure; [email protected]\n>> Subject: Re: [PERFORM] PostgreSQL performance issues\n>>\n>> Luke Lonergan wrote:\n>> \n>>>> Currently the load looks like this:\n>>>> Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>>>> \n>> 0.0% hi, \n>> \n>>>> 1.0% si\n>>>> Cpu1 : 97.8% us, 1.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>>>> \n>> 0.0% hi, \n>> \n>>>> 0.3% si\n>>>> Cpu2 : 96.8% us, 2.6% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>>>> \n>> 0.0% hi, \n>> \n>>>> 0.3% si\n>>>> Cpu3 : 96.2% us, 3.2% sy, 0.0% ni, 0.3% id, 0.0% wa, \n>>>> \n>> 0.0% hi, \n>> \n>>>> 0.3% si\n>>>> \n>>>> \n>>> All four CPUs are hammered busy - check \"top\" and look for runaway \n>>> processes.\n>>>\n>>> - Luke\n>>>\n>>>\n>>> \n>>> \n>> Yes, the first 463 process are all postgres. In the meanwhile \n>> I've done:\n>> Dropped max_connections from 500 to 250 and Upped \n>> shared_buffers = 50000\n>>\n>> Without any apparent effect.\n>>\n>>\n>> \n>\n>\n> \n\n",
"msg_date": "Wed, 30 Aug 2006 13:55:30 +0200",
"msg_from": "Willo van der Merwe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance issues"
}
] |
[
{
"msg_contents": "Good morning,\n\nI'd like to ask you some advice on pg tuning in a high\nconcurrency OLTP-like environment.\nThe application I'm talking about is running on Pg 8.0.1.\nUnder average users load, iostat and vmstat show that iowait stays\nwell under 1%. Tables and indexes scan and seek times are also good.\nI can be reasonably sure that disk I/O is not the *main* bottleneck\nhere.\n\nThese OLTP transactions are composed each of 50-1000+ small queries, on\nsingle tables or 2/3 joined tables. Write operations are very frequent,\nand done concurrently by many users on the same data.\n\nOften there are also queries which involve record lookups like:\n\n SELECT DISTINCT rowid2 FROM table\n WHERE rowid1 IN (<long_list_of_numerical_ids>) OR\n refrowid1 IN (<long_list_of_numerical_ids>)\n\nThese files are structured with rowid fields which link\nother external tables, and the links are fairly complex to follow.\nSQL queries and indexes have been carefully(?) built and tested,\neach with its own \"explain analyze\".\n\nThe problem is that under peak load, when n. of concurrent transactions\nraises, there is a sensible performance degradation.\nI'm looking for tuning ideas/tests. I plan to concentrate,\nin priority order, on:\n\n- postgresql.conf, especially:\n effective_cache_size (now 5000)\n bgwriter_delay (500)\n commit_delay/commit_siblings (default)\n- start to use tablespaces for most intensive tables\n- analyze the locks situation while queries run\n- upgrade to 8.1.n\n- convert db partition filesystem to ext2/xfs?\n (now ext3+noatime+data=writeback)\n- ???\n\nServer specs:\n 2 x P4 Xeon 2.8 Ghz\n 4 Gb RAM\n LSI Logic SCSI 2x U320 controller\n 6 disks in raid 1 for os, /var, WAL\n 14 disks in raid 10 for db on FC connected storage\n\nCurrent config is now (the rest is like the default):\n max_connections = 100\n shared_buffers = 8192\n work_mem = 8192\n maintenance_work_mem = 262144\n max_fsm_pages = 200000\n max_fsm_relations = 1000\n bgwriter_delay = 500\n fsync = false\n wal_buffers = 256\n checkpoint_segments = 32\n effective_cache_size = 5000\n random_page_cost = 2\n\nThanks for your ideas...\n\n-- \nCosimo\n\n",
"msg_date": "Thu, 31 Aug 2006 17:45:18 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": true,
"msg_subject": "High concurrency OLTP database performance tuning"
},
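For the "analyze the locks situation while queries run" item on Cosimo's list, one hedged starting point is to look for ungranted locks during peak load (a sketch using the 8.x pg_locks and pg_stat_activity views):

    SELECT l.pid, l.mode, l.granted, c.relname, a.current_query
    FROM pg_locks l
    LEFT JOIN pg_class c ON c.oid = l.relation
    LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

Rows coming back here mean some backend is waiting on a lock rather than burning CPU or doing I/O.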
{
"msg_contents": "Cosimo,\n\nOn 8/31/06, Cosimo Streppone <[email protected]> wrote:\n> The problem is that under peak load, when n. of concurrent transactions\n> raises, there is a sensible performance degradation.\n\nCould you give us more information about the performance degradation?\nEspecially cpu load/iostat/vmstat data when the problem occurs can be\ninteresting.\n\n--\nGuillaume\n",
"msg_date": "Thu, 31 Aug 2006 19:06:29 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency OLTP database performance tuning"
},
{
"msg_contents": "\n\n--On August 31, 2006 5:45:18 PM +0200 Cosimo Streppone \n<[email protected]> wrote:\n\n> Good morning,\n\n> - postgresql.conf, especially:\n> effective_cache_size (now 5000)\n> bgwriter_delay (500)\n> commit_delay/commit_siblings (default)\n\ncommit delay and siblings should be turned up, also you'll want to probably \nincrease log_segments, unless you're not getting any warnings about it. \nalso increase shared_buffers. i'd also make sure write caching is on on \nthe RAID arrays as long as they're battery backed caches.\n\n> - start to use tablespaces for most intensive tables\n> - analyze the locks situation while queries run\n> - upgrade to 8.1.n\n> - convert db partition filesystem to ext2/xfs?\n> (now ext3+noatime+data=writeback)\n> - ???\n>\n> Server specs:\n> 2 x P4 Xeon 2.8 Ghz\n> 4 Gb RAM\n> LSI Logic SCSI 2x U320 controller\n> 6 disks in raid 1 for os, /var, WAL\n> 14 disks in raid 10 for db on FC connected storage\n>\n> Current config is now (the rest is like the default):\n> max_connections = 100\n> shared_buffers = 8192\n> work_mem = 8192\n> maintenance_work_mem = 262144\n> max_fsm_pages = 200000\n> max_fsm_relations = 1000\n> bgwriter_delay = 500\n> fsync = false\n> wal_buffers = 256\n> checkpoint_segments = 32\n> effective_cache_size = 5000\n> random_page_cost = 2\n>\n> Thanks for your ideas...\n>\n> --\n> Cosimo\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n--\n\"Genius might be described as a supreme capacity for getting its possessors\ninto trouble of all kinds.\"\n-- Samuel Butler\n",
"msg_date": "Thu, 31 Aug 2006 11:32:00 -0600",
"msg_from": "Michael Loftis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency OLTP database performance tuning"
},
{
"msg_contents": "On 8/31/06, Cosimo Streppone <[email protected]> wrote:\n> Good morning,\n> - postgresql.conf, especially:\n> effective_cache_size (now 5000)\n> bgwriter_delay (500)\n> commit_delay/commit_siblings (default)\n\nwhile thse settings may help, don't expect too much. ditto shared\nbuffers. your fsync is false btw. the major gotcha in high\ntransaction volume systems is stats_command_string (leave it off).\n\n> - start to use tablespaces for most intensive tables\nthis is an i/o optimization mostly. again, dont expect much.\n\n> - analyze the locks situation while queries run\n> - upgrade to 8.1.n\nabsolutely you want to do this. when I moved my converted isam\nprojects which dont sound too far from your workload, I saw a huge\nspeed increase with 8.1.\n\n> - convert db partition filesystem to ext2/xfs?\n> (now ext3+noatime+data=writeback)\n> - ???\n\nmeh. :-)\n\nI think application level improvements are the name of the game here.\nMake sure your application or middleware is using the parameterized\nquery interface in libpq.\n\nAnother possible optimiation is to attempt application level caching\nin conjunction with some server side locking, Since details are\nlight, only general hints are possible :)\n\nconsider move to opteron or intel woodcrest platform. a single opteron\n170 will easily beat your two xeons, and 2x270 will be a whole new\nworld. woodcrests are great as well if you can get them.\n\nalso, if you are not already on a *nix kernel, get yourself on one.\n\nMerlin\n",
"msg_date": "Thu, 31 Aug 2006 13:42:06 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency OLTP database performance tuning"
},
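Merlin's point about the parameterized query interface in libpq can also be approximated at the SQL level with PREPARE/EXECUTE, which plans a frequently repeated small statement once per session instead of 50-1000+ times per transaction; a sketch with a hypothetical table and column:

    PREPARE fetch_row (integer) AS
        SELECT * FROM some_table WHERE rowid1 = $1;  -- table/column names are made up
    EXECUTE fetch_row(42);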
{
"msg_contents": "It will be very important to determine if as performance degrades you \nare either i/o bound, cpu bound or hindered by some other contention \n(db locks, context switching, etc).\n\nTry turning on statement duration logging for all statments or \"slow\" \nstatments (like those over 100ms or some arbitrary threshold). Either \neyeball or write a script to see which statement(s) are frequently \nslowest. This can greatly aid in tuning.\n\nYou say the db is write intensive. In what way, inserts or updates? \nThe former tend to be much cheaper than the latter. If the latter are \nthings being adequately vacuumed? loss of dead tuple space can really \nhurt performance. If you have lots of concurrent writes, commit_delay/ \ncommit_siblings can help, as can increasing checkpoint_segments \nfurther. I see you have fsync off, are you feeling lucky? ;^)\n\nIf you are i/o bound see what the disks are doing. How fast are they \nreading/writing? How close are they to their max throughput? \nTypically I find the disks are nowhere near that due to excessive \nseeking. If that's the case you can typically only fix it by putting \nmore of the DB in RAM -- buy more RAM, crank up shared_buffers I \nwould say double what you have it, maybe more (much more with 8.1), \nor by arranging the data better on disk (clustering, denormalizing \ndata, putting tables and indices on different disks, etc).\n\n-Casey\n\n\nOn Aug 31, 2006, at 8:45 AM, Cosimo Streppone wrote:\n\n> Good morning,\n>\n> I'd like to ask you some advice on pg tuning in a high\n> concurrency OLTP-like environment.\n> The application I'm talking about is running on Pg 8.0.1.\n> Under average users load, iostat and vmstat show that iowait stays\n> well under 1%. Tables and indexes scan and seek times are also good.\n> I can be reasonably sure that disk I/O is not the *main* bottleneck\n> here.\n>\n> These OLTP transactions are composed each of 50-1000+ small \n> queries, on\n> single tables or 2/3 joined tables. Write operations are very \n> frequent,\n> and done concurrently by many users on the same data.\n>\n> Often there are also queries which involve record lookups like:\n>\n> SELECT DISTINCT rowid2 FROM table\n> WHERE rowid1 IN (<long_list_of_numerical_ids>) OR\n> refrowid1 IN (<long_list_of_numerical_ids>)\n>\n> These files are structured with rowid fields which link\n> other external tables, and the links are fairly complex to follow.\n> SQL queries and indexes have been carefully(?) built and tested,\n> each with its own \"explain analyze\".\n>\n> The problem is that under peak load, when n. of concurrent \n> transactions\n> raises, there is a sensible performance degradation.\n> I'm looking for tuning ideas/tests. 
I plan to concentrate,\n> in priority order, on:\n>\n> - postgresql.conf, especially:\n> effective_cache_size (now 5000)\n> bgwriter_delay (500)\n> commit_delay/commit_siblings (default)\n> - start to use tablespaces for most intensive tables\n> - analyze the locks situation while queries run\n> - upgrade to 8.1.n\n> - convert db partition filesystem to ext2/xfs?\n> (now ext3+noatime+data=writeback)\n> - ???\n>\n> Server specs:\n> 2 x P4 Xeon 2.8 Ghz\n> 4 Gb RAM\n> LSI Logic SCSI 2x U320 controller\n> 6 disks in raid 1 for os, /var, WAL\n> 14 disks in raid 10 for db on FC connected storage\n>\n> Current config is now (the rest is like the default):\n> max_connections = 100\n> shared_buffers = 8192\n> work_mem = 8192\n> maintenance_work_mem = 262144\n> max_fsm_pages = 200000\n> max_fsm_relations = 1000\n> bgwriter_delay = 500\n> fsync = false\n> wal_buffers = 256\n> checkpoint_segments = 32\n> effective_cache_size = 5000\n> random_page_cost = 2\n>\n> Thanks for your ideas...\n>\n> -- \n> Cosimo\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Thu, 31 Aug 2006 10:50:14 -0700",
"msg_from": "Casey Duncan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency OLTP database performance tuning"
},
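One hedged way to act on Casey's "are things being adequately vacuumed" question is to see which tables take the most update/delete traffic (these columns exist in the 8.x statistics views, assuming stats_row_level is enabled):

    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd + n_tup_del DESC
    LIMIT 10;

The tables at the top of this list are the ones where dead-tuple buildup will hurt first.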
{
"msg_contents": "\nOn 31-Aug-06, at 11:45 AM, Cosimo Streppone wrote:\n\n> Good morning,\n>\n> I'd like to ask you some advice on pg tuning in a high\n> concurrency OLTP-like environment.\n> The application I'm talking about is running on Pg 8.0.1.\n> Under average users load, iostat and vmstat show that iowait stays\n> well under 1%. Tables and indexes scan and seek times are also good.\n> I can be reasonably sure that disk I/O is not the *main* bottleneck\n> here.\n>\n> These OLTP transactions are composed each of 50-1000+ small \n> queries, on\n> single tables or 2/3 joined tables. Write operations are very \n> frequent,\n> and done concurrently by many users on the same data.\n>\n> Often there are also queries which involve record lookups like:\n>\n> SELECT DISTINCT rowid2 FROM table\n> WHERE rowid1 IN (<long_list_of_numerical_ids>) OR\n> refrowid1 IN (<long_list_of_numerical_ids>)\n>\n> These files are structured with rowid fields which link\n> other external tables, and the links are fairly complex to follow.\n> SQL queries and indexes have been carefully(?) built and tested,\n> each with its own \"explain analyze\".\n>\n> The problem is that under peak load, when n. of concurrent \n> transactions\n> raises, there is a sensible performance degradation.\n> I'm looking for tuning ideas/tests. I plan to concentrate,\n> in priority order, on:\n>\n> - postgresql.conf, especially:\n> effective_cache_size (now 5000)\n> bgwriter_delay (500)\n> commit_delay/commit_siblings (default)\n> - start to use tablespaces for most intensive tables\n> - analyze the locks situation while queries run\n> - upgrade to 8.1.n\n> - convert db partition filesystem to ext2/xfs?\n> (now ext3+noatime+data=writeback)\n> - ???\n>\n> Server specs:\n> 2 x P4 Xeon 2.8 Ghz\n> 4 Gb RAM\n> LSI Logic SCSI 2x U320 controller\n> 6 disks in raid 1 for os, /var, WAL\n> 14 disks in raid 10 for db on FC connected storage\n>\n> Current config is now (the rest is like the default):\n> max_connections = 100\n> shared_buffers = 8192\nway too low, shared buffers should be 50k\n> work_mem = 8192\n> maintenance_work_mem = 262144\n\n> max_fsm_pages = 200000\nwhy ?\n> max_fsm_relations = 1000\n> bgwriter_delay = 500\n> fsync = false\nyou will lose data with this!\n> wal_buffers = 256\n> checkpoint_segments = 32\n> effective_cache_size = 5000\nway too low should be on the order of 300k\n> random_page_cost = 2\nagain why ?\n>\n> Thanks for your ideas...\n>\n> -- \n> Cosimo\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Thu, 31 Aug 2006 14:13:40 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency OLTP database performance tuning"
}
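Pulling the thread's suggestions together as a postgresql.conf sketch for Cosimo's 4 GB box; the 50000/300000 figures are Dave's, the rest are round illustrative values, not benchmarked settings:

    shared_buffers = 50000          # ~400 MB of 8 kB pages, per Dave's note above
    effective_cache_size = 300000   # ~2.4 GB, roughly what the OS can cache here
    checkpoint_segments = 64        # more WAL headroom, as Michael suggests
    commit_delay = 10000            # microseconds; only pays off with many concurrent commits
    commit_siblings = 5
    fsync = on                      # fsync = false risks losing committed data on a crash
    stats_command_string = off      # per Merlin, avoid this overhead on a busy OLTP box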
] |
[
{
"msg_contents": "Hey guys,\n\n We are running a Linux 2.4 enterprise edition box with 6GB of RAM,\nPostgres 8.0.3. Our applications are running on JBoss 3.2.6. We are having a\nDatabase of over 22GB in size.\n\nThe problem is when we are querying a specific set of table (which all\ntables having over 100K of rows), the Postgres user process takes over or\nclose 700MB of memory. This is just to return 3000 odd rows. Even though we\nhave lot of data we still do not have that much to eat up this much of\nmemory.\n\n \n\nWhat I would like to know is, is there any setting in the Postgres or in\nLinux that we can tune this with?\n\n \n\nOur Postgres.conf file has the following settings, we have been playing\naround wit this but still no success.\n\n \n\nshared_buffers = 5000 \n\neffective_cache_size = 10000 \n\nwork_mem = 2048 \n\nrandom_page_cost = 2 \n\n \n\nA sample of the top command is given below.\n\n \n\n12:38:05 up 136 days, 7:06, 10 users, load average: 7.69, 4.83, 3.78\n\n459 processes: 458 sleeping, 1 running, 0 zombie, 0 stopped\n\nCPU states: cpu user nice system irq softirq iowait idle\n\n total 9.6% 0.0% 1.8% 0.0% 0.0% 88.3% 0.0%\n\n cpu00 11.3% 0.0% 0.3% 0.0% 0.1% 88.0% 0.0%\n\n cpu01 8.9% 0.0% 2.5% 0.0% 0.0% 88.4% 0.0%\n\n cpu02 14.1% 0.0% 2.9% 0.0% 0.0% 82.9% 0.0%\n\n cpu03 4.1% 0.0% 1.5% 0.1% 0.1% 93.8% 0.0%\n\nMem: 6153976k av, 6092084k used, 61892k free, 0k shrd, 6232k\nbuff\n\n 4769364k actv, 916224k in_d, 111336k in_c\n\nSwap: 1052216k av, 761912k used, 290304k free 3036700k\ncached\n\n \n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n\n19736 postgres 15 0 508M 448M 42840 D 0.9 7.4 0:22 0 postmaster\n\n19740 postgres 15 0 507M 441M 41428 D 0.8 7.3 0:21 0 postmaster\n\n19779 postgres 15 0 508M 472M 42828 D 0.8 7.8 0:21 0 postmaster\n\n19789 postgres 15 0 508M 477M 42412 D 0.6 7.9 0:21 0 postmaster\n\n19738 postgres 15 0 507M 438M 41852 D 0.4 7.3 0:21 0 postmaster\n\n14647 postgres 15 0 63948 56M 44236 D 0.1 0.9 0:41 3 postmaster\n\n \n\nAs you can see the postmaster users are taking way over the memory that\nshould be taken.\n\n \n\nIf any of you can give us some pointers we would really appreciate that and\nthanks in advance.\n\n \n\nRegards\n\nIndika.\n\n\n\n\n\n\n\n\n\n\nHey guys,\n We are running a Linux 2.4 enterprise edition box\nwith 6GB of RAM, Postgres 8.0.3. Our applications are running on JBoss 3.2.6.\nWe are having a Database of over 22GB in size.\nThe problem is when we are querying a specific set of table\n(which all tables having over 100K of rows), the Postgres user process takes\nover or close 700MB of memory. This is just to return 3000 odd rows. 
Even\nthough we have lot of data we still do not have that much to eat up this much\nof memory.\n \nWhat I would like to know is, is there any setting in the Postgres\nor in Linux that we can tune this with?\n \nOur Postgres.conf file has the following settings, we have\nbeen playing around wit this but still no success.\n \nshared_buffers = 5000 \neffective_cache_size = 10000 \nwork_mem = 2048 \n\nrandom_page_cost = 2 \n \nA sample of the top command is given below.\n \n12:38:05 up 136 days, 7:06, 10 users, load\naverage: 7.69, 4.83, 3.78\n459 processes: 458 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu \nuser nice system irq \nsoftirq iowait idle\n \ntotal 9.6% 0.0% \n1.8% 0.0% 0.0% \n88.3% 0.0%\n \ncpu00 11.3% 0.0% \n0.3% 0.0% 0.1% \n88.0% 0.0%\n \ncpu01 8.9% 0.0% \n2.5% 0.0% 0.0% \n88.4% 0.0%\n \ncpu02 14.1% 0.0% \n2.9% 0.0% 0.0% \n82.9% 0.0%\n \ncpu03 4.1% 0.0% \n1.5% 0.1% 0.1% \n93.8% 0.0%\nMem: 6153976k av, 6092084k used, 61892k\nfree, 0k shrd, 6232k buff\n \n4769364k actv, 916224k in_d, 111336k in_c\nSwap: 1052216k av, 761912k used, 290304k\nfree \n3036700k cached\n \n PID USER PRI NI \nSIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n19736 postgres 15 0 508M 448M 42840\nD 0.9 7.4 0:22 0\npostmaster\n19740 postgres 15 0 507M 441M 41428\nD 0.8 7.3 0:21 0\npostmaster\n19779 postgres 15 0 508M 472M 42828\nD 0.8 7.8 0:21 0\npostmaster\n19789 postgres 15 0 508M 477M 42412\nD 0.6 7.9 0:21 0\npostmaster\n19738 postgres 15 0 507M 438M 41852\nD 0.4 7.3 0:21 0\npostmaster\n14647 postgres 15 0 63948 56M 44236\nD 0.1 0.9 0:41 3\npostmaster\n \nAs you can see the postmaster users are taking way over the\nmemory that should be taken.\n \nIf any of you can give us some pointers we would really appreciate\nthat and thanks in advance.\n \nRegards\nIndika.",
"msg_date": "Thu, 31 Aug 2006 22:22:48 +0530",
"msg_from": "\"Indika Maligaspe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgress memory leak with JBoss3.2.6 and large DB"
},
{
"msg_contents": "\"Indika Maligaspe\" <[email protected]> writes:\n> The problem is when we are querying a specific set of table (which all\n> tables having over 100K of rows), the Postgres user process takes over or\n> close 700MB of memory. This is just to return 3000 odd rows. Even though we\n> have lot of data we still do not have that much to eat up this much of\n> memory.\n\nPlaying with server-side settings won't have the slightest effect on a\nclient-side problem. I'd suggest asking about this on the pgsql-jdbc\nlist; they are more likely to have useful suggestions than backend\nhackers will.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2006 13:54:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress memory leak with JBoss3.2.6 and large DB "
},
{
"msg_contents": "\nOn 31-Aug-06, at 1:54 PM, Tom Lane wrote:\n\n> \"Indika Maligaspe\" <[email protected]> writes:\n>> The problem is when we are querying a specific set of table (which \n>> all\n>> tables having over 100K of rows), the Postgres user process takes \n>> over or\n>> close 700MB of memory. This is just to return 3000 odd rows. Even \n>> though we\n>> have lot of data we still do not have that much to eat up this \n>> much of\n>> memory.\n>\n> Playing with server-side settings won't have the slightest effect on a\n> client-side problem. I'd suggest asking about this on the pgsql-jdbc\n> list; they are more likely to have useful suggestions than backend\n> hackers will.\n\nWhat is the query here. I doubt this is a client side problem, as we \nare still looking at the server side processes, not the java \nprocesses here.\n\nAlso your memory settings are *way* too low\nDave\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Thu, 31 Aug 2006 14:11:03 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress memory leak with JBoss3.2.6 and large DB "
},
{
"msg_contents": "Indika Maligaspe wrote:\n> Hey guys,\n> \n> We are running a Linux 2.4 enterprise edition box with 6GB of RAM, \n> **Postgres 8.0.3**. (snippage)\n> \n\nYou might want to consider upgrading to 8.0.8 (see below), and seeing if\nthe problem still persists.\n\n\n> As you can see the postmaster users are taking way over the memory that \n> should be taken.\n> \n> \n> If any of you can give us some pointers we would really appreciate that \n> and thanks in advance.\n> \n\nI notice that there are a number of fixes for memory leaks since 8.0.3 -\n8.0.4 and 8.0.8 is where I see 'em specifically (reading release notes\nfor 8.0.8). So you may be experiencing an issue that is fixed in the\ncurrent 8.0 releases! I recommend upgrading to 8.0.8.\n\nYou didn't say what your HW was, but if you are on a 32-bit platform,\nthen a 2.4 kernel when you have >2G ram may leak noticeable amounts of\nmemory itself...\n\nCheers\n\nMark\n\n",
"msg_date": "Mon, 04 Sep 2006 15:53:22 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress memory leak with JBoss3.2.6 and large DB"
},
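A quick way to confirm which minor release a server is actually running, before and after the upgrade Mark recommends:

    SELECT version();
    -- should report something like "PostgreSQL 8.0.8 on ..." once the upgrade is in place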
{
"msg_contents": "Hi Guys,\n\tWe found the issue regarding our memory leak. It was the query. It\nseams were using functions with fetch cursors on large data sets and the\ncursors were not getting closed properly. Hence the memory was building up.\nSo I guess this was an application error. In fact we bought the Query memory\nfrom 1.4 GB to 2 MB.........\n\nThanks for all the help guys. Because by reading all your comments I was\nable to understand a lot about Postgres memory settings. \n\n\nK.Indika Maligaspe\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark Kirkwood\nSent: Monday, September 04, 2006 9:23 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Postgress memory leak with JBoss3.2.6 and large DB\n\nIndika Maligaspe wrote:\n> Hey guys,\n> \n> We are running a Linux 2.4 enterprise edition box with 6GB of RAM, \n> **Postgres 8.0.3**. (snippage)\n> \n\nYou might want to consider upgrading to 8.0.8 (see below), and seeing if\nthe problem still persists.\n\n\n> As you can see the postmaster users are taking way over the memory that \n> should be taken.\n> \n> \n> If any of you can give us some pointers we would really appreciate that \n> and thanks in advance.\n> \n\nI notice that there are a number of fixes for memory leaks since 8.0.3 -\n8.0.4 and 8.0.8 is where I see 'em specifically (reading release notes\nfor 8.0.8). So you may be experiencing an issue that is fixed in the\ncurrent 8.0 releases! I recommend upgrading to 8.0.8.\n\nYou didn't say what your HW was, but if you are on a 32-bit platform,\nthen a 2.4 kernel when you have >2G ram may leak noticeable amounts of\nmemory itself...\n\nCheers\n\nMark\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Tue, 5 Sep 2006 09:32:04 +0530",
"msg_from": "\"Indika Maligaspe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgress memory leak with JBoss3.2.6 and large DB"
}
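For anyone hitting the same symptom Indika describes, the pattern at the SQL level is simply a cursor that is opened but never closed; a minimal sketch with a hypothetical table name:

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM some_large_table;  -- hypothetical table
    FETCH 1000 FROM big_cur;
    -- ... process the rows, FETCH again as needed ...
    CLOSE big_cur;  -- without this the cursor's resources stay allocated
    COMMIT;         -- until the transaction (or a pooled connection) ends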
] |
[
{
"msg_contents": "Hi, probably this is a very frequenfly question... I read archivies of\nthis list but I didn't found a finally solution for this aspect. I'll\nexplain my situation.\n\nPSQL version 8.1.3\nconfiguration of fsm,etcc default\nautovacuum and statistics activated\n\n22 daemons that have a persistent connection to this database(all\nconnection are in \"idle\"(no transaction opened).\n\nthis is the vacuum output of a table that it's updated frequently:\ndatabase=# VACUUM ANALYZE verbose cliente;\nINFO: vacuuming \"public.cliente\"\nINFO: index \"cliente_pkey\" now contains 29931 row versions in 88 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"cliente_login_key\" now contains 29931 row versions in 165 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.00 sec.\nINFO: \"cliente\": found 0 removable, 29931 nonremovable row versions in 559 pages\nDETAIL: 29398 dead row versions cannot be removed yet.\nThere were 9 unused item pointers.\n0 pages are entirely empty.\nCPU 0.01s/0.01u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_370357\"\nINFO: index \"pg_toast_370357_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_370357\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.cliente\"\nINFO: \"cliente\": scanned 559 of 559 pages, containing 533 live rows and 29398 dead rows; 533 rows in sample, 533 estimated total rows\nVACUUM\n\ndatabase=# SELECT * from pgstattuple('cliente');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent \n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 4579328 | 533 | 84522 | 1.85 | 29398 | 4279592 | 93.45 | 41852 | 0.91\n(1 row)\n\nThe performance of this table it's degraded now and autovacuum/vacuum full\ndon't remove these dead tuples. Only if I do a CLUSTER of the table the tuples\nare removed.\n\nThe same problem is on other very trafficated tables.\n\nI think that the problems probably are:\n- tune the value of my fsm/etc settings in postgresql.conf but i don't\nunderstdand how to tune it correctly.\n- the persistent connections to this db conflict with the\nautovacuum but i don't understand why. there are no transaction opened,\nonly connections in \"idle\" state.\n\nTell me what do you think...\n\nRegards,\n\nMatteo\n\n\n",
"msg_date": "Fri, 1 Sep 2006 14:39:15 +0200",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "database bloat,non removovable rows, slow query etc..."
},
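On Matteo's question about whether the default fsm settings are adequate: in 8.1, a database-wide VACUUM VERBOSE run as superuser prints a free space map summary at the end; the wording below is approximate, quoted from memory rather than from the thread:

    VACUUM VERBOSE;
    -- tail of the output, roughly:
    --   INFO:  free space map contains N pages in M relations
    --   DETAIL: A total of X page slots are in use (including overhead).
    --           Y page slots are required to track all free space.
    -- If Y exceeds max_fsm_pages (or M exceeds max_fsm_relations),
    -- raise those settings in postgresql.conf and restart the postmaster.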
{
"msg_contents": "Matteo Sgalaberni <[email protected]> writes:\n> 22 daemons that have a persistent connection to this database(all\n> connection are in \"idle\"(no transaction opened).\n\nYou may think that, but you are wrong.\n\n> INFO: \"cliente\": found 0 removable, 29931 nonremovable row versions in 559 pages\n> DETAIL: 29398 dead row versions cannot be removed yet.\n\nThe only way the above can happen is if there are some fairly old open\ntransactions. Looking in pg_stat_activity might help you identify the\nculprit(s).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2006 10:43:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc... "
},
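A concrete form of Tom's suggestion: with stats_command_string enabled, backends that are holding a transaction open while doing nothing report the literal string '<IDLE> in transaction' as their current query, so they can be listed directly (8.1 column names):

    SELECT datname, procpid, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';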
{
"msg_contents": "Are there open transactions on the table in question? We had the same\nissue. A 100K row table was so bloated that the system thought there was\n1M rows. We had many <IDLE> transaction that we noticed in TOP, but since\nwe could not track down which process or user was holding the table we had\nto restart Pg. Once restarted we were able to do a VACUUM FULL and this\ntook care of the issue.\nhth\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n\n\n\n \n Matteo Sgalaberni \n <[email protected]> \n Sent by: To \n pgsql-performance [email protected] \n -owner@postgresql cc \n .org \n Subject \n [PERFORM] database bloat,non \n 09/01/06 05:39 AM removovable rows, slow query etc... \n \n \n \n \n \n \n\n\n\n\nHi, probably this is a very frequenfly question... I read archivies of\nthis list but I didn't found a finally solution for this aspect. I'll\nexplain my situation.\n\nPSQL version 8.1.3\nconfiguration of fsm,etcc default\nautovacuum and statistics activated\n\n22 daemons that have a persistent connection to this database(all\nconnection are in \"idle\"(no transaction opened).\n\nthis is the vacuum output of a table that it's updated frequently:\ndatabase=# VACUUM ANALYZE verbose cliente;\nINFO: vacuuming \"public.cliente\"\nINFO: index \"cliente_pkey\" now contains 29931 row versions in 88 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"cliente_login_key\" now contains 29931 row versions in 165\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.00 sec.\nINFO: \"cliente\": found 0 removable, 29931 nonremovable row versions in 559\npages\nDETAIL: 29398 dead row versions cannot be removed yet.\nThere were 9 unused item pointers.\n0 pages are entirely empty.\nCPU 0.01s/0.01u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_370357\"\nINFO: index \"pg_toast_370357_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_370357\": found 0 removable, 0 nonremovable row versions in\n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.cliente\"\nINFO: \"cliente\": scanned 559 of 559 pages, containing 533 live rows and\n29398 dead rows; 533 rows in sample, 533 estimated total rows\nVACUUM\n\ndatabase=# SELECT * from pgstattuple('cliente');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\ndead_tuple_len | dead_tuple_percent | free_space | free_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n\n 4579328 | 533 | 84522 | 1.85 | 29398 |\n4279592 | 93.45 | 41852 | 0.91\n(1 row)\n\nThe performance of this table it's degraded now and autovacuum/vacuum full\ndon't remove these dead tuples. Only if I do a CLUSTER of the table the\ntuples\nare removed.\n\nThe same problem is on other very trafficated tables.\n\nI think that the problems probably are:\n- tune the value of my fsm/etc settings in postgresql.conf but i don't\nunderstdand how to tune it correctly.\n- the persistent connections to this db conflict with the\nautovacuum but i don't understand why. 
there are no transaction opened,\nonly connections in \"idle\" state.\n\nTell me what do you think...\n\nRegards,\n\nMatteo\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n",
"msg_date": "Fri, 1 Sep 2006 08:33:34 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc..."
},
{
"msg_contents": "Hi, Tom and Matteo,\n\nTom Lane wrote:\n> Matteo Sgalaberni <[email protected]> writes:\n>> 22 daemons that have a persistent connection to this database(all\n>> connection are in \"idle\"(no transaction opened).\n> \n> You may think that, but you are wrong.\n> \n>> INFO: \"cliente\": found 0 removable, 29931 nonremovable row versions in 559 pages\n>> DETAIL: 29398 dead row versions cannot be removed yet.\n> \n> The only way the above can happen is if there are some fairly old open\n> transactions. Looking in pg_stat_activity might help you identify the\n> culprit(s).\n\nAnother possibility might be an outstanding two-phase-commit transaction.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 01 Sep 2006 18:09:31 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc..."
},
{
"msg_contents": "On Fri, Sep 01, 2006 at 10:43:30AM -0400, Tom Lane wrote:\n> Matteo Sgalaberni <[email protected]> writes:\n> > 22 daemons that have a persistent connection to this database(all\n> > connection are in \"idle\"(no transaction opened).\n> \n> You may think that, but you are wrong.\nOk. I stopped all clients. No connections to this database. Only psql\nconsole. Made vacuum\nfull/freeze all cominations... again dead rows non removable. Nothing\nchanged as in production.\n\nthis is my postgres config:\n\nhttp://pastebin.com/781480\n\nI read a lot about bloat tables related to\nnot appropriate fsm settings... can be the mine a case of\nmisconfiguration of these parameters?\n\nThx\n\nMatteo\n\n\n",
"msg_date": "Fri, 1 Sep 2006 19:28:29 +0200",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc..."
},
{
"msg_contents": "Matteo Sgalaberni <[email protected]> writes:\n> Ok. I stopped all clients. No connections to this database.\n\nWhen you say \"this database\", do you mean the whole postmaster cluster,\nor just the one database? Open transactions in other databases of the\nsame cluster can be a problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2006 13:35:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc... "
},
{
"msg_contents": "On Fri, 2006-09-01 at 12:28, Matteo Sgalaberni wrote:\n> On Fri, Sep 01, 2006 at 10:43:30AM -0400, Tom Lane wrote:\n> > Matteo Sgalaberni <[email protected]> writes:\n> > > 22 daemons that have a persistent connection to this database(all\n> > > connection are in \"idle\"(no transaction opened).\n> > \n> > You may think that, but you are wrong.\n> Ok. I stopped all clients. No connections to this database. Only psql\n> console. Made vacuum\n> full/freeze all cominations... again dead rows non removable. Nothing\n> changed as in production.\n> \n> this is my postgres config:\n> \n> http://pastebin.com/781480\n> \n> I read a lot about bloat tables related to\n> not appropriate fsm settings... can be the mine a case of\n> misconfiguration of these parameters?\n\nSomething is holding a lock, somewhere. \n\nHave you tried shutting down and restarting the database to see if you\ncan get it to vacuum that way? You're not in a transaction in psql,\nright?\n",
"msg_date": "Fri, 01 Sep 2006 15:01:42 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query"
},
{
"msg_contents": "On Fri, Sep 01, 2006 at 01:35:20PM -0400, Tom Lane wrote:\n> Matteo Sgalaberni <[email protected]> writes:\n> > Ok. I stopped all clients. No connections to this database.\n> \n> When you say \"this database\", do you mean the whole postmaster cluster,\n> or just the one database? Open transactions in other databases of the\n> same cluster can be a problem.\n> \nAGH!!!! AGHR!!! \n\nA my collegue JDBC application that stay in \"idle intransaction\" 24h/24h\n(but in another database, non in the bloated-reported db...)!\n\nI killed it now(jdbc app).\n\nvacuumed full and PG have cleaned all!! So if I have a idle transaction in\none database of the cluster it \"lock\" vacuums of all databases of the cluster.\n\nGood to know this...but why this behaviour? it'is lovely...:)\n\nTom , can you explain why?...\n\nThanks a lot!!\n\nMatteo\n\n",
"msg_date": "Sat, 2 Sep 2006 10:37:25 +0200",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database bloat, non removovable rows,\n slow query etc... [RESOLVED]"
},
{
"msg_contents": "Matteo,\n\nOn 2-Sep-06, at 4:37 AM, Matteo Sgalaberni wrote:\n\n> On Fri, Sep 01, 2006 at 01:35:20PM -0400, Tom Lane wrote:\n>> Matteo Sgalaberni <[email protected]> writes:\n>>> Ok. I stopped all clients. No connections to this database.\n>>\n>> When you say \"this database\", do you mean the whole postmaster \n>> cluster,\n>> or just the one database? Open transactions in other databases of \n>> the\n>> same cluster can be a problem.\n>>\n> AGH!!!! AGHR!!!\n>\n> A my collegue JDBC application that stay in \"idle intransaction\" \n> 24h/24h\n> (but in another database, non in the bloated-reported db...)!\n>\n> I killed it now(jdbc app).\nthis behaviour has been fixed in later versions of the jdbc driver\n>\n> vacuumed full and PG have cleaned all!! So if I have a idle \n> transaction in\n> one database of the cluster it \"lock\" vacuums of all databases of \n> the cluster.\n>\n> Good to know this...but why this behaviour? it'is lovely...:)\n>\n> Tom , can you explain why?...\n>\n> Thanks a lot!!\n>\n> Matteo\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Sat, 2 Sep 2006 10:16:36 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n slow query etc... [RESOLVED]"
},
{
"msg_contents": "Matteo Sgalaberni <[email protected]> writes:\n> Good to know this...but why this behaviour? it'is lovely...:)\n\nOpen transactions are tracked across the whole cluster. This is\nnecessary when vacuuming shared catalogs. In principle we could\ntrack per-database xmin values as well, but the distributed overhead\nthat'd be added to *every* GetSnapshotData call is a bit worrisome.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2006 10:21:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n\tslow query etc... [RESOLVED]"
},
{
"msg_contents": "Hi, Matteo,\n\nMatteo Sgalaberni wrote:\n\n> A my collegue JDBC application that stay in \"idle intransaction\" 24h/24h\n\nJust a little note: For most applications, this can be fixed updating\nthe JDBC driver. Old versions had the behaviour of auto-opening a new\nbackend transaction on commit/rollback, whereas new versions delay that\nuntil the first statement in the new transaction is sent.\n\nThis won't fix applications that do a select and then sit idle for days\nbefore committing/rolling back, however. Those should be fixed or use\nautocommit mode.\n\n> Good to know this...but why this behaviour? it'is lovely...:)\n> \n> Tom , can you explain why?...\n\nIt is because the transaction IDs are global per cluster.\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 04 Sep 2006 09:35:13 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat,non removovable rows, slow query etc..."
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Matteo Sgalaberni <[email protected]> writes:\n> > Good to know this...but why this behaviour? it'is lovely...:)\n> \n> Open transactions are tracked across the whole cluster. This is\n> necessary when vacuuming shared catalogs. In principle we could\n> track per-database xmin values as well, but the distributed overhead\n> that'd be added to *every* GetSnapshotData call is a bit worrisome.\n\nDon't we do that now in CVS (ie, in 8.2)?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "04 Sep 2006 18:13:34 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n slow query etc... [RESOLVED]"
},
{
"msg_contents": "Gregory Stark wrote:\n> Tom Lane <[email protected]> writes:\n> \n> > Matteo Sgalaberni <[email protected]> writes:\n> > > Good to know this...but why this behaviour? it'is lovely...:)\n> > \n> > Open transactions are tracked across the whole cluster. This is\n> > necessary when vacuuming shared catalogs. In principle we could\n> > track per-database xmin values as well, but the distributed overhead\n> > that'd be added to *every* GetSnapshotData call is a bit worrisome.\n> \n> Don't we do that now in CVS (ie, in 8.2)?\n\nNo, we don't.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 4 Sep 2006 18:47:16 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n slow query etc... [RESOLVED]"
},
{
"msg_contents": "\nAlvaro Herrera <[email protected]> writes:\n\n> Gregory Stark wrote:\n> > Tom Lane <[email protected]> writes:\n> > \n> > > Matteo Sgalaberni <[email protected]> writes:\n> > > > Good to know this...but why this behaviour? it'is lovely...:)\n> > > \n> > > Open transactions are tracked across the whole cluster. This is\n> > > necessary when vacuuming shared catalogs. In principle we could\n> > > track per-database xmin values as well, but the distributed overhead\n> > > that'd be added to *every* GetSnapshotData call is a bit worrisome.\n> > \n> > Don't we do that now in CVS (ie, in 8.2)?\n> \n> No, we don't.\n\nI must be misunderstanding Tom's comment then. \n\nWhat I'm referring to is lazy_vacuum_rel() calls vacuum_set_xid_limits with\nthe relisshared flag of the relation. vacuum_set_xid_limits passes that to\nGetOldestXmin as the allDbs parameter. GetOldestXmin ignores transactions not\nconnected to the same database unless allDbs is true.\n\n-- \ngreg\n\n",
"msg_date": "04 Sep 2006 20:30:15 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n slow query etc... [RESOLVED]"
},
{
"msg_contents": "Gregory Stark <[email protected]> writes:\n> I must be misunderstanding Tom's comment then. \n\n> What I'm referring to is lazy_vacuum_rel() calls vacuum_set_xid_limits with\n> the relisshared flag of the relation. vacuum_set_xid_limits passes that to\n> GetOldestXmin as the allDbs parameter. GetOldestXmin ignores transactions not\n> connected to the same database unless allDbs is true.\n\nThe problem is the indirect effect of other backends' xmin values,\nwhich are computed across all live backends.\n\nIn the current structure, it's hard to see how to fix this except\nby making each backend compute and advertise both a global and\ndatabase-local xmin. This seems a bit ugly. Also, someone asked\nrecently whether we could avoid counting prepared xacts when figuring\nvacuum cutoffs, which seems a fair question --- but again, how to do\nthat without doubling the number of advertised xmin values yet again?\n\nI'm starting to feel that we've reached the limits of this system of\naccounting for live XIDs, but I have no idea what the next step might\nlook like...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Sep 2006 23:17:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, non removovable rows,\n\tslow query etc... [RESOLVED]"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been looking at the results from the pg_statio* tables, to\nview the impact of increasing the shared buffers to increase\nperformance.\n\nAs expected, increasing from the default by a factor of 10~20\nmoves table/index disk blocks reads to cache hits, but the\noverall service time of my test page is not changed (I'm testing\nwith a set of queries implying an increase of 170,000 of\nsum(heap_blks_hit) and 2,000 of sum(idx_blks_hit) from\npg_statio_user_tables).\n\nI've seen that documentation says:\n\n data that is not in the PostgreSQL buffer cache may still\n reside in the kernel's I/O cache, and may therefore still be\n fetched without requiring a physical read\n\nI guess this is the best explanation (btw, my test machine runs\nLinux 2.6 on 1G of RAM), but I'm still wondering what should be\nexpected from moving caching from OS filesystem to PG - probably\nPG can \"cleverly\" flush its cache when it is full (e.g. table\ndata before index data maybe?), whereas the OS will do it\n\"blindly\", but I'm wondering about the limits of this behaviour,\nparticularly considering that being \"very clever\" about cache\nflush would probably need realtime query statistics which I am\nnot sure PG does.\n\nAfter all, memory added to shared buffers should be mecanically\nremoved from effective cache size (or others), so I cannot just\nincrease it until the OS cannot cache anymore :)\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "01 Sep 2006 19:00:52 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
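A compact way to watch the shift Guillaume is measuring, from disk block reads to buffer hits, across all user tables (a sketch against pg_statio_user_tables, which he is already querying):

    SELECT sum(heap_blks_hit)  AS heap_hit,
           sum(heap_blks_read) AS heap_read,
           round(100.0 * sum(heap_blks_hit)
                 / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) AS heap_hit_pct
    FROM pg_statio_user_tables;

As the documentation excerpt in his message notes, a "read" here may still be served from the kernel's cache, so this ratio only describes PostgreSQL's own buffer cache.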
{
"msg_contents": "On 01 Sep 2006 19:00:52 +0200, Guillaume Cottenceau <[email protected]> wrote:\n> Hi,\n>\n> I've been looking at the results from the pg_statio* tables, to\n> view the impact of increasing the shared buffers to increase\n> performance.\n>\n\nI think 'shared buffers' is one of the most overrated settings from a\nperformance standpoint. however you must ensure there is enough for\nthings the server does besides caching. It used to be a bigger deal\nthan it is in modern versionf of postgresql modern operating systems.\n\nmerlin\n",
"msg_date": "Fri, 1 Sep 2006 15:49:43 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": "Guillaume\n\n1G is really not a significant amount of memory these days,\n\nThat said 6-10% of available memory should be given to an 8.0 or \nolder version of postgresql\n\nNewer versions work better around 25%\n\nI'm not sure what you mean by mechanically removed from effective_cache\n\neffective cache is really a representation of shared buffers plus OS \ncache\n\nDave\nOn 1-Sep-06, at 1:00 PM, Guillaume Cottenceau wrote:\n\n> Hi,\n>\n> I've been looking at the results from the pg_statio* tables, to\n> view the impact of increasing the shared buffers to increase\n> performance.\n>\n> As expected, increasing from the default by a factor of 10~20\n> moves table/index disk blocks reads to cache hits, but the\n> overall service time of my test page is not changed (I'm testing\n> with a set of queries implying an increase of 170,000 of\n> sum(heap_blks_hit) and 2,000 of sum(idx_blks_hit) from\n> pg_statio_user_tables).\n>\n> I've seen that documentation says:\n>\n> data that is not in the PostgreSQL buffer cache may still\n> reside in the kernel's I/O cache, and may therefore still be\n> fetched without requiring a physical read\n>\n> I guess this is the best explanation (btw, my test machine runs\n> Linux 2.6 on 1G of RAM), but I'm still wondering what should be\n> expected from moving caching from OS filesystem to PG - probably\n> PG can \"cleverly\" flush its cache when it is full (e.g. table\n> data before index data maybe?), whereas the OS will do it\n> \"blindly\", but I'm wondering about the limits of this behaviour,\n> particularly considering that being \"very clever\" about cache\n> flush would probably need realtime query statistics which I am\n> not sure PG does.\n>\n> After all, memory added to shared buffers should be mecanically\n> removed from effective cache size (or others), so I cannot just\n> increase it until the OS cannot cache anymore :)\n>\n> -- \n> Guillaume Cottenceau\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Fri, 1 Sep 2006 16:02:41 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": "\nOn 1-Sep-06, at 3:49 PM, Merlin Moncure wrote:\n\n> On 01 Sep 2006 19:00:52 +0200, Guillaume Cottenceau <[email protected]> wrote:\n>> Hi,\n>>\n>> I've been looking at the results from the pg_statio* tables, to\n>> view the impact of increasing the shared buffers to increase\n>> performance.\n>>\n>\n> I think 'shared buffers' is one of the most overrated settings from a\n> performance standpoint. however you must ensure there is enough for\n> things the server does besides caching. It used to be a bigger deal\n> than it is in modern versionf of postgresql modern operating systems.\n>\n> merlin\n>\nSo if shared buffers is the most overrated, what do you consider the \nproper way of tuning ?\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Fri, 1 Sep 2006 19:22:51 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": ">>\n>> I think 'shared buffers' is one of the most overrated settings from a\n>> performance standpoint. however you must ensure there is enough for\n>> things the server does besides caching. It used to be a bigger deal\n>> than it is in modern versionf of postgresql modern operating systems.\n\nPrevious to 8.1 I would agree with you, but as of 8.1 it is probably the \nmost underrated.\n\nJoshua D. Drake\n\n\n>>\n>> merlin\n>>\n> So if shared buffers is the most overrated, what do you consider the \n> proper way of tuning ?\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Fri, 01 Sep 2006 17:24:18 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed"
},
{
"msg_contents": "Dave Cramer <pg 'at' fastcrypt.com> writes:\n\n> Guillaume\n> \n> 1G is really not a significant amount of memory these days,\n\nYeah though we have 2G or 4G of RAM in our servers (and not only\npostgres running on it).\n \n> That said 6-10% of available memory should be given to an 8.0 or\n> older version of postgresql\n> \n> Newer versions work better around 25%\n> \n> I'm not sure what you mean by mechanically removed from effective_cache\n\nI mean that when you allocate more memory to applications, the\nconsequence is less memory the OS will be able to use for disk\ncache.\n \n> effective cache is really a representation of shared buffers plus OS\n> cache\n\nAre you sure the shared buffers should be counted in? As I\nunderstand the documentation, they should not (as shared buffers\nis allocated memory for the OS, not part of \"kernel's disk\ncache\"):\n\n Sets the planner's assumption about the effective size of the\n disk cache (that is, the portion of the kernel's disk cache\n that will be used for PostgreSQL data files). This is\n measured in disk pages, which are normally 8192 bytes each.\n The default is 1000.\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "04 Sep 2006 14:07:37 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": "\"Merlin Moncure\" <mmoncure 'at' gmail.com> writes:\n\n> On 01 Sep 2006 19:00:52 +0200, Guillaume Cottenceau <[email protected]> wrote:\n> > Hi,\n> >\n> > I've been looking at the results from the pg_statio* tables, to\n> > view the impact of increasing the shared buffers to increase\n> > performance.\n> >\n> \n> I think 'shared buffers' is one of the most overrated settings from a\n> performance standpoint. however you must ensure there is enough for\n> things the server does besides caching. It used to be a bigger deal\n\n\"Beside caching\".. It's unfornatunate that the documentation on\npg.org is very vague about the actual use(s) of the shared\nbuffers :/\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "04 Sep 2006 14:10:14 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": "\nOn 4-Sep-06, at 8:07 AM, Guillaume Cottenceau wrote:\n\n> Dave Cramer <pg 'at' fastcrypt.com> writes:\n>\n>> Guillaume\n>>\n>> 1G is really not a significant amount of memory these days,\n>\n> Yeah though we have 2G or 4G of RAM in our servers (and not only\n> postgres running on it).\n>\n>> That said 6-10% of available memory should be given to an 8.0 or\n>> older version of postgresql\n>>\n>> Newer versions work better around 25%\n>>\n>> I'm not sure what you mean by mechanically removed from \n>> effective_cache\n>\n> I mean that when you allocate more memory to applications, the\n> consequence is less memory the OS will be able to use for disk\n> cache.\n>\n>> effective cache is really a representation of shared buffers plus OS\n>> cache\n>\n> Are you sure the shared buffers should be counted in? As I\n> understand the documentation, they should not (as shared buffers\n> is allocated memory for the OS, not part of \"kernel's disk\n> cache\"):\nYes, I am sure this should be counted, however effective_cache is not \nactually allocating anything so it doesn't have to be exact, but it \nhas to be in the correct order of magnitude\n>\n> Sets the planner's assumption about the effective size of the\n> disk cache (that is, the portion of the kernel's disk cache\n> that will be used for PostgreSQL data files). This is\n> measured in disk pages, which are normally 8192 bytes each.\n> The default is 1000.\n>\n> -- \n> Guillaume Cottenceau\n> Create your personal SMS or WAP Service - visit http:// \n> mobilefriends.ch/\n>\n\n",
"msg_date": "Mon, 4 Sep 2006 11:14:44 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed from OS\n\tfilesystem cache?"
},
{
"msg_contents": "On 9/1/06, Joshua D. Drake <[email protected]> wrote:\n> >>\n> >> I think 'shared buffers' is one of the most overrated settings from a\n> >> performance standpoint. however you must ensure there is enough for\n> >> things the server does besides caching. It used to be a bigger deal\n> >> than it is in modern versionf of postgresql modern operating systems.\n>\n> Previous to 8.1 I would agree with you, but as of 8.1 it is probably the\n> most underrated.\n\nreally? what are the relative advantages of raising shared buffers? I\nwas thinking maybe there might be less context switches in high load\nenvironments...I'm really curious what you have to say here.\n\nmerlin\n",
"msg_date": "Tue, 5 Sep 2006 09:31:59 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed"
},
{
"msg_contents": "\nOn 5-Sep-06, at 9:31 AM, Merlin Moncure wrote:\n\n> On 9/1/06, Joshua D. Drake <[email protected]> wrote:\n>> >>\n>> >> I think 'shared buffers' is one of the most overrated settings \n>> from a\n>> >> performance standpoint. however you must ensure there is \n>> enough for\n>> >> things the server does besides caching. It used to be a bigger \n>> deal\n>> >> than it is in modern versionf of postgresql modern operating \n>> systems.\n>>\n>> Previous to 8.1 I would agree with you, but as of 8.1 it is \n>> probably the\n>> most underrated.\n>\n> really? what are the relative advantages of raising shared buffers? I\n> was thinking maybe there might be less context switches in high load\n> environments...I'm really curious what you have to say here.\n\nHave you tried it ? The results are quite dramatic.\n\nSo if shared buffers aren't the first tool you reach for, what is ?\n>\n> merlin\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Tue, 5 Sep 2006 14:12:46 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increasing shared buffers: how much should be removed"
}
] |
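A minimal sketch of the pg_statio* inspection mentioned at the start of the thread above, assuming an 8.1-era server with block-level statistics enabled (stats_block_level = on); the ordering and the 20-row limit are arbitrary illustration choices, not something prescribed in the thread:

-- Per-table buffer hit ratio from the statistics collector.
-- heap_blks_hit counts block requests satisfied from shared buffers;
-- heap_blks_read counts blocks that had to be read in (possibly
-- straight from the OS cache rather than from disk).
SELECT schemaname,
       relname,
       heap_blks_hit,
       heap_blks_read,
       round(100.0 * heap_blks_hit
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 20;

A low hit percentage on hot tables is one sign that a larger shared_buffers (the roughly 25% of RAM that Dave suggests above for newer versions; about 1GB, or 131072 8kB buffers, on the 4GB machines mentioned) may help, while uniformly high ratios suggest the OS cache is already absorbing most of the reads.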
[
{
"msg_contents": "Hi,\n\nWe are seeing hanging queries on Windows 2003 Server SP1 with dual CPU,\nlooks like one of the process is blocked. In a lot of cases, the whole\nDB is blocked if this process is holding important locks.\n\nLooks like this issue was discussed in the following thread a few month\nago, but didn't seem to have a solution mention. I would liek to know\nif there is a patch for this already?\n\nhttp://archives.postgresql.org/pgsql-performance/2006-03/msg00129.php\n\nI would appreciate your feedbaek,\n\nThanks,\nWei\n\n",
"msg_date": "1 Sep 2006 14:56:20 -0700",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hanging queries on Windows 2003 SP1"
},
{
"msg_contents": "> Hi,\n> \n> We are seeing hanging queries on Windows 2003 Server SP1 with dual\n> CPU, looks like one of the process is blocked. In a lot of cases,\n> the whole DB is blocked if this process is holding important locks.\n> \n> Looks like this issue was discussed in the following thread a few\n> month ago, but didn't seem to have a solution mention. I would liek\n> to know if there is a patch for this already?\n> \n> http://archives.postgresql.org/pgsql-performance/2006-\n> 03/msg00129.php\n> \n\nThere have been some fairly extensive changes in the semaphore code for\n8.2. Any chance you can try the cvs snapshot version and see if the\nproblem exists there as well?\n\n//Magnus\n",
"msg_date": "Mon, 4 Sep 2006 13:39:40 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on Windows 2003 SP1"
},
{
"msg_contents": "Hi Magnus,\n\nSure, I could try that out. Is there a place to download a 8.2 image to\nbypass setting up the tool chain?\n\n-Wei\n\n\nOn 9/4/06, Magnus Hagander <[email protected]> wrote:\n>\n> > Hi,\n> >\n> > We are seeing hanging queries on Windows 2003 Server SP1 with dual\n> > CPU, looks like one of the process is blocked. In a lot of cases,\n> > the whole DB is blocked if this process is holding important locks.\n> >\n> > Looks like this issue was discussed in the following thread a few\n> > month ago, but didn't seem to have a solution mention. I would liek\n> > to know if there is a patch for this already?\n> >\n> > http://archives.postgresql.org/pgsql-performance/2006-\n> > 03/msg00129.php\n> >\n>\n> There have been some fairly extensive changes in the semaphore code for\n> 8.2. Any chance you can try the cvs snapshot version and see if the\n> problem exists there as well?\n>\n> //Magnus\n>\n\nHi Magnus,Sure, I could try that out. Is there a place to download a 8.2 image to bypass setting up the tool chain?-WeiOn 9/4/06, Magnus Hagander\n <[email protected]> wrote:> Hi,\n>> We are seeing hanging queries on Windows 2003 Server SP1 with dual> CPU, looks like one of the process is blocked. In a lot of cases,> the whole DB is blocked if this process is holding important locks.\n>> Looks like this issue was discussed in the following thread a few> month ago, but didn't seem to have a solution mention. I would liek> to know if there is a patch for this already?>\n> http://archives.postgresql.org/pgsql-performance/2006-> 03/msg00129.php>There have been some fairly extensive changes in the semaphore code for\n8.2. Any chance you can try the cvs snapshot version and see if theproblem exists there as well?//Magnus",
"msg_date": "Mon, 4 Sep 2006 23:31:24 -0700",
"msg_from": "\"Wei Song\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on Windows 2003 SP1"
}
] |
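As a hedged aside to the thread above: the blocking itself (though not the semaphore problem Magnus refers to) can be inspected from SQL. A minimal sketch, assuming an 8.1-era catalog (procpid and current_query as column names, with stats_command_string enabled so current_query is populated):

-- Ungranted lock requests and the queries that are waiting on them;
-- the granted rows in pg_locks for the same relation or transaction
-- identify which backend is holding things up.
SELECT l.pid,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;

An empty result while queries still hang would point away from heavyweight locks and toward the lower-level issue discussed in the linked thread.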
[
{
"msg_contents": "Hi.\nMy config:\ngentoo linux \"2005.1\" on amd64x2 in 64-bit mode,\nkernel 2.6.16.12\nglibc 3.3.5(NPTL),\ngcc 3.4.3.\nI had not used portage for building.\nI built two versions of postgres from sources:\npostgresql-8.1.4 native(64bit)\nand 32-bit with CFLAGS=... -m32, and \"LD =\n/usr/x86_64-pc-linux-gnu/bin/ld -melf_i386\" in src/Makefile.global.\n32-bit build runs much faster than 64 apparently.\nWhat benchmark utility should I run to provide more concrete info (numbers)?\nWhat could be the reason of that difference in performance?\n\nRegards,\n Roman.\n\n",
"msg_date": "Tue, 05 Sep 2006 00:42:28 +0400",
"msg_from": "Roman Krylov <[email protected]>",
"msg_from_op": true,
"msg_subject": "64bit vs 32bit build on amd64"
},
{
"msg_contents": "On Tue, 2006-09-05 at 00:42 +0400, Roman Krylov wrote:\n> Hi.\n> My config:\n> gentoo linux \"2005.1\" on amd64x2 in 64-bit mode,\n> kernel 2.6.16.12\n> glibc 3.3.5(NPTL),\n> gcc 3.4.3.\n> I had not used portage for building.\n> I built two versions of postgres from sources:\n> postgresql-8.1.4 native(64bit)\n> and 32-bit with CFLAGS=... -m32, and \"LD =\n> /usr/x86_64-pc-linux-gnu/bin/ld -melf_i386\" in src/Makefile.global.\n> 32-bit build runs much faster than 64 apparently.\n> What benchmark utility should I run to provide more concrete info (numbers)?\n> What could be the reason of that difference in performance?\n> \n\nI am also interested in 32-bit versus 64-bit performance. If I only have\n4GB of RAM, does it make sense to compile postgresql as a 64-bit\nexecutable? I assume there's no reason for PostgreSQL's shared buffers,\netc., to add up to more than 2GB on a system with 4GB of RAM.\n\nIs there a general consensus on the matter, or is it highly application-\ndependent? I am not doing any huge amount of 64-bit arithmetic.\n\nI am using Woodcrest, not Opteron.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Thu, 07 Sep 2006 09:56:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 64bit vs 32bit build on amd64"
}
] |
[
{
"msg_contents": "Hi,\n\nIs the use of lists \"Where field IN ('A',...'Z')\" faster than using\nmultiple conditions like \"Where field = 'A' OR .... field = 'Z'\" ? In\nevery situation ? Depends on what ?\n\nNote: (I guess im mistaken, but i remeber seeing that at the pgsql\nmanual somewhere)\n\nThanks\n\nMarcus\n",
"msg_date": "Wed, 6 Sep 2006 09:45:16 -0300",
"msg_from": "\"Marcus Vinicius\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lists (In) performance"
}
] |
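A small, hypothetical illustration for the question above (some_table and some_field are placeholders, not from the original message). In the 8.x planner a short IN list and the equivalent chain of ORs are generally treated very similarly, so comparing the two plans directly is the reliable way to answer it for a given table:

-- Same predicate written both ways; compare plans and timings.
EXPLAIN ANALYZE
SELECT * FROM some_table
WHERE some_field IN ('A', 'B', 'C');

EXPLAIN ANALYZE
SELECT * FROM some_table
WHERE some_field = 'A' OR some_field = 'B' OR some_field = 'C';

If both plans show the same index or bitmap scans with similar run times, the choice between the two spellings is mostly one of readability.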
[
{
"msg_contents": "Hi!..\n\nI noticed that the age of template0 is increasing very rapidly..Can you\nplease let me know how we can control this ....and what causes such\nproblems.\n\nWe also noticed that the database slow downs heavily at a particular\ntime..Can you suggest any tools which will help in diagnosing the root cause\nbehiond the data load.\n\n\n\nRegards,\nNimesh.\n\nHi!..\n \nI noticed that the age of template0 is increasing very rapidly..Can you please let me know how we can control this ....and what causes such problems. \n \nWe also noticed that the database slow downs heavily at a particular time..Can you suggest any tools which will help in diagnosing the root cause behiond the data load.\n \n \n \nRegards,\nNimesh.",
"msg_date": "Thu, 7 Sep 2006 16:01:34 +0530",
"msg_from": "\"Nimesh Satam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Template0 age is increasing speedily."
},
{
"msg_contents": "On Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:\n \n> I noticed that the age of template0 is increasing very rapidly..Can\n> you please let me know how we can control this ....and what causes\n> such problems. \n> \n> We also noticed that the database slow downs heavily at a particular\n> time..Can you suggest any tools which will help in diagnosing the root\n> cause behiond the data load.\n\n\nHi,\n\nfirst of all: there is no need to cross post on 4 lists.\nIf you have a performance problem, post on pgsql-performance.\n\nSecond, please tell us which version of PostgreSQL on\nwhich operating system you're using. Diagnosing your\nproblem might depend on which OS you use...\n\nFinally, explain what you mean by \"the age of template0 is\nincreasing very rapidly\", you mean \"the size is increasing\"?\n\nBye,\nChris.\n\n\n\n-- \n\nChris Mair\nhttp://www.1006.org\n\n",
"msg_date": "Thu, 07 Sep 2006 12:40:25 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Template0 age is increasing speedily."
},
{
"msg_contents": "Hi,\n\nPostgres Version used is 8.1.3\nOS: Linux\n\n\n'SELECT datname, age(datfrozenxid) FROM pg_database'\n\npostgres | 1575\nxyz | 1073743934\ntemplate1 | 1632\ntemplate0 | 61540256\n\nThis is the command which I tried and got the above output, and the number\nis increasing pretty fast for template0.\n\nPlease let me know if this a problem.\n\n\n\nRegards,\nNimesh.\n\n\nOn 9/7/06, Chris Mair <[email protected]> wrote:\n>\n> On Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:\n>\n> > I noticed that the age of template0 is increasing very rapidly..Can\n> > you please let me know how we can control this ....and what causes\n> > such problems.\n> >\n> > We also noticed that the database slow downs heavily at a particular\n> > time..Can you suggest any tools which will help in diagnosing the root\n> > cause behiond the data load.\n>\n>\n> Hi,\n>\n> first of all: there is no need to cross post on 4 lists.\n> If you have a performance problem, post on pgsql-performance.\n>\n> Second, please tell us which version of PostgreSQL on\n> which operating system you're using. Diagnosing your\n> problem might depend on which OS you use...\n>\n> Finally, explain what you mean by \"the age of template0 is\n> increasing very rapidly\", you mean \"the size is increasing\"?\n>\n> Bye,\n> Chris.\n>\n>\n>\n> --\n>\n> Chris Mair\n> http://www.1006.org\n>\n>\n\nHi,\n \nPostgres Version used is 8.1.3\nOS: Linux\n \n \n'SELECT datname, age(datfrozenxid) FROM pg_database'\n \npostgres | 1575xyz | 1073743934template1 | 1632template0 | 61540256 \nThis is the command which I tried and got the above output, and the number is increasing pretty fast for template0.\n \nPlease let me know if this a problem.\n \n \n \nRegards,\nNimesh.\n \nOn 9/7/06, Chris Mair <[email protected]> wrote:\nOn Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:> I noticed that the age of template0 is increasing very rapidly..Can\n> you please let me know how we can control this ....and what causes> such problems.>> We also noticed that the database slow downs heavily at a particular> time..Can you suggest any tools which will help in diagnosing the root\n> cause behiond the data load.Hi,first of all: there is no need to cross post on 4 lists.If you have a performance problem, post on pgsql-performance.Second, please tell us which version of PostgreSQL on\nwhich operating system you're using. Diagnosing yourproblem might depend on which OS you use...Finally, explain what you mean by \"the age of template0 isincreasing very rapidly\", you mean \"the size is increasing\"?\nBye,Chris.--Chris Mairhttp://www.1006.org",
"msg_date": "Thu, 7 Sep 2006 16:19:01 +0530",
"msg_from": "\"Nimesh Satam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Template0 age is increasing speedily."
},
{
"msg_contents": "In response to \"Nimesh Satam\" <[email protected]>:\n\n> Hi,\n> \n> Postgres Version used is 8.1.3\n> OS: Linux\n> \n> \n> 'SELECT datname, age(datfrozenxid) FROM pg_database'\n> \n> postgres | 1575\n> xyz | 1073743934\n> template1 | 1632\n> template0 | 61540256\n> \n> This is the command which I tried and got the above output, and the number\n> is increasing pretty fast for template0.\n> \n> Please let me know if this a problem.\n\nShort answer: no, this is not a problem.\n\nLong answer: \nhttp://www.postgresql.org/docs/8.1/interactive/manage-ag-templatedbs.html\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Thu, 7 Sep 2006 08:48:35 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Template0 age is increasing speedily."
},
{
"msg_contents": "I would expect that the age of Template0 is increasing at the same rate as\nevery other database in your cluster. Transaction IDs are global across all\ndatabases in the cluster, so as I understand it, executing a transaction in\nany database will increase the age of all databases by 1.\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Nimesh Satam\nSent: Thursday, September 07, 2006 5:49 AM\nTo: Chris Mair\nCc: [email protected]\nSubject: Re: [PERFORM] [PATCHES] Template0 age is increasing speedily.\n\n\n\nHi,\n \nPostgres Version used is 8.1.3\nOS: Linux\n \n \n'SELECT datname, age(datfrozenxid) FROM pg_database'\n \npostgres | 1575\nxyz | 1073743934\ntemplate1 | 1632\ntemplate0 | 61540256\n \nThis is the command which I tried and got the above output, and the number\nis increasing pretty fast for template0.\n \nPlease let me know if this a problem.\n \n \n \nRegards,\nNimesh.\n\n \nOn 9/7/06, Chris Mair <[email protected]> wrote: \n\nOn Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:\n\n> I noticed that the age of template0 is increasing very rapidly..Can \n> you please let me know how we can control this ....and what causes\n> such problems.\n>\n> We also noticed that the database slow downs heavily at a particular\n> time..Can you suggest any tools which will help in diagnosing the root \n> cause behiond the data load.\n\n\nHi,\n\nfirst of all: there is no need to cross post on 4 lists.\nIf you have a performance problem, post on pgsql-performance.\n\nSecond, please tell us which version of PostgreSQL on \nwhich operating system you're using. Diagnosing your\nproblem might depend on which OS you use...\n\nFinally, explain what you mean by \"the age of template0 is\nincreasing very rapidly\", you mean \"the size is increasing\"? \n\nBye,\nChris.\n\n\n\n--\n\nChris Mair\nhttp://www.1006.org\n\n\n\n\n\n\n\nMessage\n\n\nI \nwould expect that the age of Template0 is increasing at the same rate as every \nother database in your cluster. Transaction IDs are global across all \ndatabases in the cluster, so as I understand it, executing a transaction in any \ndatabase will increase the age of all databases by 1.\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Nimesh \n SatamSent: Thursday, September 07, 2006 5:49 AMTo: Chris \n MairCc: [email protected]: Re: \n [PERFORM] [PATCHES] Template0 age is increasing speedily.\nHi,\n \nPostgres Version used is 8.1.3\nOS: Linux\n \n \n'SELECT datname, age(datfrozenxid) FROM pg_database'\n \npostgres \n | \n 1575xyz \n | 1073743934template1 \n | \n 1632template0 | \n 61540256 \nThis is the command which I tried and got the above output, and the \n number is increasing pretty fast for template0.\n \nPlease let me know if this a problem.\n \n \n \nRegards,\nNimesh.\n \nOn 9/7/06, Chris \n Mair <[email protected]> wrote:\nOn \n Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:> I noticed \n that the age of template0 is increasing very rapidly..Can \n > you please let me know how we can control this ....and what \n causes> such problems.>> We also noticed that the \n database slow downs heavily at a particular> time..Can you suggest \n any tools which will help in diagnosing the root > cause behiond the \n data load.Hi,first of all: there is no need to cross \n post on 4 lists.If you have a performance problem, post on \n pgsql-performance.Second, please tell us which version of PostgreSQL \n on which operating system you're using. 
Diagnosing yourproblem might \n depend on which OS you use...Finally, explain what you mean by \"the \n age of template0 isincreasing very rapidly\", you mean \"the size is \n increasing\"? Bye,Chris.--Chris \n Mairhttp://www.1006.org",
"msg_date": "Thu, 7 Sep 2006 07:55:01 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Template0 age is increasing speedily."
},
{
"msg_contents": "On 9/7/06, Nimesh Satam <[email protected]> wrote:\n> We also noticed that the database slow downs heavily at a particular\n> time..Can you suggest any tools which will help in diagnosing the root cause\n> behiond the data load.\n\npossible checkpoint? poorly formulated query? it could be any number\nof things. use standard tools to diagnose the problem, including:\n\nunix tools: top, vmstat, etc\npostgresql query logging, including min_statement_duration\nexplain analyze\n",
"msg_date": "Thu, 7 Sep 2006 09:04:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Template0 age is increasing speedily."
}
] |
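A short sketch pulling the thread above together (the queries restate points made in the replies; the comments are hedged summaries, not quotes from the documentation):

-- Rank databases by transaction-ID age.  Because XIDs are global to the
-- cluster, every age advances together; template0's age simply tracks
-- the XID counter, since that database is never connected to or
-- vacuumed in normal operation.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;

-- Run inside a given database: a database-wide VACUUM (no table name)
-- is what advances that database's datfrozenxid on 8.1.
VACUUM;

As Bill's link and his short answer indicate, a steadily growing age for template0 is expected behaviour rather than something that needs to be controlled.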
[
{
"msg_contents": "Hi,\n\nWe've been running our \"webapp database\"-benchmark again on mysql and \npostgresql. This time using a Fujitsu-Siemens RX300 S3 machine equipped \nwith a 2.66Ghz Woodcrest (5150) and a 3.73Ghz Dempsey (5080). And \ncompared those results to our earlier undertaken Opteron benchmarks on \n2.4GHz' Socket F- and 940-versions (2216, 280).\n\nYou can see the english translation here:\nhttp://tweakers.net/reviews/646\n\nThe Woodcrest is quite a bit faster than the Opterons. Actually... With \nHyperthreading *enabled* the older Dempsey-processor is also faster than \nthe Opterons with PostgreSQL. But then again, it is the top-model \nDempsey and not a top-model Opteron so that isn't a clear win.\nOf course its clear that even a top-Opteron wouldn't beat the Dempsey's \nas easily as it would have beaten the older Xeon's before that.\n\nAgain PostgreSQL shows very good scalability, so good even \nHyperThreading adds extra performance to it with 4 cores enabled... \nwhile MySQL in every version we tested (5.1.9 is not displayed, but \nshowed similar performance) was slower with HT enabled.\n\nFurther more we received our ordered Dell MD1000 SAS-enclosure which has \n15 SAS Fujitsu MAX3036RC disks and that unit is controlled using a Dell \nPERC 5/e.\nWe've done some benchmarks (unfortunately everything is in Dutch for this).\n\nWe tested varying amounts of disks in RAID10 (a set of 4,5,6 and 7 \n2-disk-mirrors striped), RAID50 and RAID5. The interfaces to display the \nresults are in a google-stylee beta-state, but here is a list of all \nbenchmarks done:\nhttp://tweakers.net/benchdb/search?query=md1000&ColcomboID=5\n\nHover over the left titles to see how many disks and in what raid-level \n was done. Here is a comparison of 14 disk RAID5/50/10's:\nhttp://tweakers.net/benchdb/testcombo/wide/?TestcomboIDs%5B1156%5D=1&TestcomboIDs%5B1178%5D=1&TestcomboIDs%5B1176%5D=1&DB=Nieuws&Query=Keyword\n\nFor raid5 we have some graphs:\nhttp://tweakers.net/benchdb/testcombo/1156\nScroll down to see how adding disks improves performance on it. The \nAreca 1280 with WD Raptor's is a very good alternative (or even better) \nas you can see for most benchmarks, but is beaten as soon as the \nrelative weight of random-IO increases (I/O-meter fileserver and \ndatabase benchmarks), the processor on the 1280 is faster than the one \non the Dell-controller so its faster in sequential IO.\nThese benchmarks were not done using postgresql, so you shouldn't read \nthem as absolute for all your situations ;-) But you can get a good \nimpression I think.\n\nBest regards,\n\nArjen van der Meijden\nTweakers.net\n",
"msg_date": "Fri, 08 Sep 2006 07:51:04 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with postgresql and\n\tsome SAS raid-figures"
},
{
"msg_contents": "Very nice!\n\nThe 3Ware cards have fallen far behind Areca it seems. They look close in\nRaid 10 performance, but with RAID5 they get crushed.\n\nI'm about to purchase 20 machines for the lab and I think this article\npushes me toward Woodcrest, though I think it's a short term decision with\nquad core AMD socket F coming later this year. Right now it seems that the\nIntel advantage is about 30%-40%.\n\n- Luke\n\n\nOn 9/7/06 10:51 PM, \"Arjen van der Meijden\" <[email protected]> wrote:\n\n> Hi,\n> \n> We've been running our \"webapp database\"-benchmark again on mysql and\n> postgresql. This time using a Fujitsu-Siemens RX300 S3 machine equipped\n> with a 2.66Ghz Woodcrest (5150) and a 3.73Ghz Dempsey (5080). And\n> compared those results to our earlier undertaken Opteron benchmarks on\n> 2.4GHz' Socket F- and 940-versions (2216, 280).\n> \n> You can see the english translation here:\n> http://tweakers.net/reviews/646\n> \n> The Woodcrest is quite a bit faster than the Opterons. Actually... With\n> Hyperthreading *enabled* the older Dempsey-processor is also faster than\n> the Opterons with PostgreSQL. But then again, it is the top-model\n> Dempsey and not a top-model Opteron so that isn't a clear win.\n> Of course its clear that even a top-Opteron wouldn't beat the Dempsey's\n> as easily as it would have beaten the older Xeon's before that.\n> \n> Again PostgreSQL shows very good scalability, so good even\n> HyperThreading adds extra performance to it with 4 cores enabled...\n> while MySQL in every version we tested (5.1.9 is not displayed, but\n> showed similar performance) was slower with HT enabled.\n> \n> Further more we received our ordered Dell MD1000 SAS-enclosure which has\n> 15 SAS Fujitsu MAX3036RC disks and that unit is controlled using a Dell\n> PERC 5/e.\n> We've done some benchmarks (unfortunately everything is in Dutch for this).\n> \n> We tested varying amounts of disks in RAID10 (a set of 4,5,6 and 7\n> 2-disk-mirrors striped), RAID50 and RAID5. The interfaces to display the\n> results are in a google-stylee beta-state, but here is a list of all\n> benchmarks done:\n> http://tweakers.net/benchdb/search?query=md1000&ColcomboID=5\n> \n> Hover over the left titles to see how many disks and in what raid-level\n> was done. Here is a comparison of 14 disk RAID5/50/10's:\n> http://tweakers.net/benchdb/testcombo/wide/?TestcomboIDs%5B1156%5D=1&Testcombo\n> IDs%5B1178%5D=1&TestcomboIDs%5B1176%5D=1&DB=Nieuws&Query=Keyword\n> \n> For raid5 we have some graphs:\n> http://tweakers.net/benchdb/testcombo/1156\n> Scroll down to see how adding disks improves performance on it. The\n> Areca 1280 with WD Raptor's is a very good alternative (or even better)\n> as you can see for most benchmarks, but is beaten as soon as the\n> relative weight of random-IO increases (I/O-meter fileserver and\n> database benchmarks), the processor on the 1280 is faster than the one\n> on the Dell-controller so its faster in sequential IO.\n> These benchmarks were not done using postgresql, so you shouldn't read\n> them as absolute for all your situations ;-) But you can get a good\n> impression I think.\n> \n> Best regards,\n> \n> Arjen van der Meijden\n> Tweakers.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Thu, 07 Sep 2006 23:17:05 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940"
},
{
"msg_contents": "Hi, Arjen,\n\n\nOn 8-Sep-06, at 1:51 AM, Arjen van der Meijden wrote:\n\n> Hi,\n>\n> We've been running our \"webapp database\"-benchmark again on mysql \n> and postgresql. This time using a Fujitsu-Siemens RX300 S3 machine \n> equipped with a 2.66Ghz Woodcrest (5150) and a 3.73Ghz Dempsey \n> (5080). And compared those results to our earlier undertaken \n> Opteron benchmarks on 2.4GHz' Socket F- and 940-versions (2216, 280).\n>\n> You can see the english translation here:\n> http://tweakers.net/reviews/646\n>\n> The Woodcrest is quite a bit faster than the Opterons. Actually... \n> With Hyperthreading *enabled* the older Dempsey-processor is also \n> faster than the Opterons with PostgreSQL. But then again, it is the \n> top-model Dempsey and not a top-model Opteron so that isn't a clear \n> win.\n> Of course its clear that even a top-Opteron wouldn't beat the \n> Dempsey's as easily as it would have beaten the older Xeon's before \n> that.\n\nWhy wouldn't you use a top of the line Opteron ?\n>\n> Again PostgreSQL shows very good scalability, so good even \n> HyperThreading adds extra performance to it with 4 cores enabled... \n> while MySQL in every version we tested (5.1.9 is not displayed, but \n> showed similar performance) was slower with HT enabled.\n>\n> Further more we received our ordered Dell MD1000 SAS-enclosure \n> which has 15 SAS Fujitsu MAX3036RC disks and that unit is \n> controlled using a Dell PERC 5/e.\n> We've done some benchmarks (unfortunately everything is in Dutch \n> for this).\n>\n> We tested varying amounts of disks in RAID10 (a set of 4,5,6 and 7 \n> 2-disk-mirrors striped), RAID50 and RAID5. The interfaces to \n> display the results are in a google-stylee beta-state, but here is \n> a list of all benchmarks done:\n> http://tweakers.net/benchdb/search?query=md1000&ColcomboID=5\n>\n> Hover over the left titles to see how many disks and in what raid- \n> level was done. Here is a comparison of 14 disk RAID5/50/10's:\n> http://tweakers.net/benchdb/testcombo/wide/?TestcomboIDs%5B1156% \n> 5D=1&TestcomboIDs%5B1178%5D=1&TestcomboIDs%5B1176% \n> 5D=1&DB=Nieuws&Query=Keyword\n>\n> For raid5 we have some graphs:\n> http://tweakers.net/benchdb/testcombo/1156\n> Scroll down to see how adding disks improves performance on it. The \n> Areca 1280 with WD Raptor's is a very good alternative (or even \n> better) as you can see for most benchmarks, but is beaten as soon \n> as the relative weight of random-IO increases (I/O-meter fileserver \n> and database benchmarks), the processor on the 1280 is faster than \n> the one on the Dell-controller so its faster in sequential IO.\n> These benchmarks were not done using postgresql, so you shouldn't \n> read them as absolute for all your situations ;-) But you can get a \n> good impression I think.\n>\n> Best regards,\n>\n> Arjen van der Meijden\n> Tweakers.net\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Fri, 8 Sep 2006 07:48:57 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with postgresql\n\tand some SAS raid-figures"
},
{
"msg_contents": "Dave Cramer wrote:\n> Hi, Arjen,\n> \n> \n>> The Woodcrest is quite a bit faster than the Opterons. Actually... \n>> With Hyperthreading *enabled* the older Dempsey-processor is also \n>> faster than the Opterons with PostgreSQL. But then again, it is the \n>> top-model Dempsey and not a top-model Opteron so that isn't a clear win.\n>> Of course its clear that even a top-Opteron wouldn't beat the \n>> Dempsey's as easily as it would have beaten the older Xeon's before that.\n> \n> Why wouldn't you use a top of the line Opteron ?\n\nWhat do you mean by this question? Why we didn't test the Opteron 285 \ninstead of the 280?\n\nWell, its not that you can just go up to a hardware supplier and pick \nexactly the system you want to review/benchmar... especially not with \npre-production hardware that (at the time) wasn't very widely available.\nNormally, you just get what system they have available at their \nmarketing or pre-sales department.\n\nThe Opteron 280 was from an earlier review and was fitted in the \"Try \nand Buy\"-version of the Sun Fire x4200. In that system; you only have a \nfew options where the 280 was the fastest at the time.\n\nBut then again, systems with the Woodcrest 5150 (the subtop one) and \nOpteron 280 (also the subtop one) are about equal in price, so its not a \nbad comparison in a bang-for-bucks point of view. The Dempsey was added \nto show how both the Opteron and the newer Woodcrest would compete \nagainst that one.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 08 Sep 2006 14:44:00 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with"
},
{
"msg_contents": "\nOn 8-Sep-06, at 8:44 AM, Arjen van der Meijden wrote:\n\n> Dave Cramer wrote:\n>> Hi, Arjen,\n>>> The Woodcrest is quite a bit faster than the Opterons. \n>>> Actually... With Hyperthreading *enabled* the older Dempsey- \n>>> processor is also faster than the Opterons with PostgreSQL. But \n>>> then again, it is the top-model Dempsey and not a top-model \n>>> Opteron so that isn't a clear win.\n>>> Of course its clear that even a top-Opteron wouldn't beat the \n>>> Dempsey's as easily as it would have beaten the older Xeon's \n>>> before that.\n>> Why wouldn't you use a top of the line Opteron ?\n>\n> What do you mean by this question? Why we didn't test the Opteron \n> 285 instead of the 280?\nYes, that is the question.\n>\n> Well, its not that you can just go up to a hardware supplier and \n> pick exactly the system you want to review/benchmar... especially \n> not with pre-production hardware that (at the time) wasn't very \n> widely available.\n> Normally, you just get what system they have available at their \n> marketing or pre-sales department.\nUnderstandable.\n>\n> The Opteron 280 was from an earlier review and was fitted in the \n> \"Try and Buy\"-version of the Sun Fire x4200. In that system; you \n> only have a few options where the 280 was the fastest at the time.\n\n>\n> But then again, systems with the Woodcrest 5150 (the subtop one) \n> and Opteron 280 (also the subtop one) are about equal in price, so \n> its not a bad comparison in a bang-for-bucks point of view. The \n> Dempsey was added to show how both the Opteron and the newer \n> Woodcrest would compete against that one.\n\nDid I read this correctly that one of the Opterons in the test only \nhad 4G of ram vs 7 G in the Intel boxes ? If so this is a severely \nlimiting factor for postgresql at least?\n\nDave\n>\n> Best regards,\n>\n> Arjen\n>\n\n",
"msg_date": "Fri, 8 Sep 2006 09:01:57 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with postgresql\n\tand some SAS raid-figures"
},
{
"msg_contents": "On 8-9-2006 15:01 Dave Cramer wrote:\n> \n>> But then again, systems with the Woodcrest 5150 (the subtop one) and \n>> Opteron 280 (also the subtop one) are about equal in price, so its not \n>> a bad comparison in a bang-for-bucks point of view. The Dempsey was \n>> added to show how both the Opteron and the newer Woodcrest would \n>> compete against that one.\n> \n> Did I read this correctly that one of the Opterons in the test only had \n> 4G of ram vs 7 G in the Intel boxes ? If so this is a severely limiting \n> factor for postgresql at least?\n\nActually, its not in this benchmark. Its not a large enough dataset to \nput any pressure on IO, not even with just 2GB of memory.\n\nBut, to display it more acurately have a look here:\nhttp://tweakers.net/reviews/638/2 and then scroll down to the bottom-graph.\nAs you can see, the 8GB-version was faster, but not that much to call it \n'severely'. Unfortunately, the system just wasn't very stable with that \n8GB memory (it was other memory, not just more). So we couldn't finish \nmuch benchmarks with it and decided, partially based on this graph to \njust go for the 4GB.\n\nAnyway, you can always compare the results of the Woodcrest with the Sun \nFire x4200-results (called 'Opteron DDR' or 'Opteron 940' in the latest \narticle) to see how a Opteron with 8GB of memory compares to the Woodcrest.\n\nMore of those results can be found in this english article:\nhttp://tweakers.net/reviews/638\nAnd in this Dutch one:\nhttp://tweakers.net/reviews/633\n",
"msg_date": "Fri, 08 Sep 2006 18:01:31 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n> On 8-9-2006 15:01 Dave Cramer wrote:\n>>\n>>> But then again, systems with the Woodcrest 5150 (the subtop one) and\n>>> Opteron 280 (also the subtop one) are about equal in price, so its\n>>> not a bad comparison in a bang-for-bucks point of view. The Dempsey\n>>> was added to show how both the Opteron and the newer Woodcrest would\n>>> compete against that one.\n>>\n>> Did I read this correctly that one of the Opterons in the test only\n>> had 4G of ram vs 7 G in the Intel boxes ? If so this is a severely\n>> limiting factor for postgresql at least?\n> \n> Actually, its not in this benchmark. Its not a large enough dataset to\n> put any pressure on IO, not even with just 2GB of memory.\n\ninteresting - so this is a mostly CPU-bound benchmark ?\nOut of curiousity have you done any profiling on the databases under\ntest to see where they are spending their time ?\n\n\nStefan\n",
"msg_date": "Fri, 08 Sep 2006 18:18:50 +0200",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with"
},
{
"msg_contents": "On 8-9-2006 18:18 Stefan Kaltenbrunner wrote:\n> \n> interesting - so this is a mostly CPU-bound benchmark ?\n> Out of curiousity have you done any profiling on the databases under\n> test to see where they are spending their time ?\n\nYeah, it is.\n\nWe didn't do any profiling.\nWe had a Sun-engineer visit us to see why MySQL performed so bad on the \nT2000 and he has done some profiling, but that is of course just a small \nand specific part of our total set of benchmarks.\nPostgresql was mostly left out of that picture since it performed pretty \nwell (although it may even do better with more tuning and profiling).\n\nWe are/were not interested enough in the profiling-part, since we just \nrun the benchmark to see how fast each system is. Not really to see how \nfast each database is or why a database is faster on X or Y.\n\nThe latter is of course pretty interesting, but also requires quite a \nbit of knowledge of the internals and a bit of time to analyze the \nresults...\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 08 Sep 2006 18:45:48 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with"
}
] |
[
{
"msg_contents": "I am in the process of speccing out a new box for a highly utilized \n(updates, inserts, selects) 40GB+ database. I'm trying to maximize \nperformance on a budget, and I would appreciate any feedback on any \nof the following.\n\nHardware:\n2 - Intel Xeon 5160 3.0 GHz 4MB 1333MHz\n8 - Kingston 4GB DDR2 ECC Fully Buffered\n16 - WD Raptor 150GB 10000 RPM SATA II HD\n1 - 3Ware 9550SX-16ML Serial ATA II RAID card\n1 - Supermicro X7DBE+ Motherboard\n1 - Supermicro SC836TQ-R800V Case\n\nThe Woodcrest CPU, Raptor HD and 3Ware 9550SX RAID card have all been \nhighly recommended on this list, so there may not be much to comment \non there. The Supermicro X7DBE+ motherboard is appealing because of \nits 16 RAM slots and 64GB of maximum memory. The Supermicro SC836TQ- \nR800 case was selected because of its 16 drive bays in 3U and dual \n800W power supplies (the motherboard requires 700W).\n\nRAID Configuration/File System:\nA lot of info in the lists, but I'm not sure of the best approach for \nthis setup.\nI was thinking of using 14 drives in a RAID 10 (ext3) for the db and \nwals and 2 mirrored drives (ext3 or xfs) for the OS. Another option \nwould be 12 drives in a RAID 10 for the database (ext3, maybe ext2) \nand 4 drives in a RAID 10 for the OS and wals (ext3) (with separate \nRAID cards). There are many choices here. Any suggestions?\n\nOS:\nThe consensus in the list seems to be as long as you have the 2.6 \nLinux Kernel, it's really a matter of personal preference. However, \nit's hard to have a preference when you're new to the Linux world, \nlike I am. Red Hat, Fedora Core, Slackware, Suse, Gentoo? I guess my \nprimary goal is speed, stability, and ease of use. Any advice here, \nno matter how minimal, would be appreciated.\n\nThanks,\n\nBrian Wipf\n\n",
"msg_date": "Fri, 8 Sep 2006 01:52:28 -0600",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuring System for Speed"
},
{
"msg_contents": "Brian,\n\nI like all of the HW - I just thoroughly reviewed this and came to the same\nHW choices you did. The new SuperMicro chassis is an improvement on one we\nhave used for 21 servers like this.\n\nOne modification: we implemented two internal 60GB laptop hard drives with\nan additional 3Ware 8006-2LP controller on each machine for the OS. This\nfrees up all 16 SATA II drives for data. The laptop drives are super stable\nand reliable under high heat conditions and they are small, so having them\ninside the case is no big deal. BTW - we had ASAcomputers.com make our\nsystems for us and they did a good job.\n\nWe use CentOS 4 on our lab systems and it works fine. I recommend XFS for\nthe DBMS data drives, along with RAID10 on the 3Ware controllers. With\nnormal Postgres you shouldn't expect to get more than about 350MB/s on those\nCPUs for a single query, but with increased user count, the RAID10 should\nscale fine to about 1200MB/s sequential transfer using XFS and a lot slower\nwith ext3.\n\n- Luke\n\n\nOn 9/8/06 12:52 AM, \"Brian Wipf\" <[email protected]> wrote:\n\n> I am in the process of speccing out a new box for a highly utilized\n> (updates, inserts, selects) 40GB+ database. I'm trying to maximize\n> performance on a budget, and I would appreciate any feedback on any\n> of the following.\n> \n> Hardware:\n> 2 - Intel Xeon 5160 3.0 GHz 4MB 1333MHz\n> 8 - Kingston 4GB DDR2 ECC Fully Buffered\n> 16 - WD Raptor 150GB 10000 RPM SATA II HD\n> 1 - 3Ware 9550SX-16ML Serial ATA II RAID card\n> 1 - Supermicro X7DBE+ Motherboard\n> 1 - Supermicro SC836TQ-R800V Case\n> \n> The Woodcrest CPU, Raptor HD and 3Ware 9550SX RAID card have all been\n> highly recommended on this list, so there may not be much to comment\n> on there. The Supermicro X7DBE+ motherboard is appealing because of\n> its 16 RAM slots and 64GB of maximum memory. The Supermicro SC836TQ-\n> R800 case was selected because of its 16 drive bays in 3U and dual\n> 800W power supplies (the motherboard requires 700W).\n> \n> RAID Configuration/File System:\n> A lot of info in the lists, but I'm not sure of the best approach for\n> this setup.\n> I was thinking of using 14 drives in a RAID 10 (ext3) for the db and\n> wals and 2 mirrored drives (ext3 or xfs) for the OS. Another option\n> would be 12 drives in a RAID 10 for the database (ext3, maybe ext2)\n> and 4 drives in a RAID 10 for the OS and wals (ext3) (with separate\n> RAID cards). There are many choices here. Any suggestions?\n> \n> OS:\n> The consensus in the list seems to be as long as you have the 2.6\n> Linux Kernel, it's really a matter of personal preference. However,\n> it's hard to have a preference when you're new to the Linux world,\n> like I am. Red Hat, Fedora Core, Slackware, Suse, Gentoo? I guess my\n> primary goal is speed, stability, and ease of use. Any advice here,\n> no matter how minimal, would be appreciated.\n> \n> Thanks,\n> \n> Brian Wipf\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Fri, 08 Sep 2006 01:44:14 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuring System for Speed"
},
{
"msg_contents": "Brian Wipf wrote:\n> I am in the process of speccing out a new box for a highly utilized \n> (updates, inserts, selects) 40GB+ database. I'm trying to maximize \n> performance on a budget, and I would appreciate any feedback on any of \n> the following.\n>\nPerhaps this is off topic, but here is bit from my experience. Using \nsingle server for both read (select) and write (insert, update, delete) \noperations is the way to slow things down. Consider to split query \nworkload into OLTP and OLAP queries . Set up Slony replication and use \nslave server for read operations only . Buy 1 cheap box (slave) and \nanother more expensive one for master. Keep in mind that DB schema \noptimization for particular query workload is essential . You just can \nnot get good performance for selects if schema is optimized for inserts \nand vice versa.\n>\n> OS:\n> The consensus in the list seems to be as long as you have the 2.6 \n> Linux Kernel, it's really a matter of personal preference. However, \n> it's hard to have a preference when you're new to the Linux world, \n> like I am. Red Hat, Fedora Core, Slackware, Suse, Gentoo? I guess my \n> primary goal is speed, stability, and ease of use. Any advice here, no \n> matter how minimal, would be appreciated.\nMy answer is FreeBSD6.1.\n>\n> Thanks,\n>\n> Brian Wipf\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n\n-- \nBest Regards,\nAlvis \n\n",
"msg_date": "Fri, 08 Sep 2006 17:46:07 +0000",
"msg_from": "alvis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuring System for Speed"
},
{
"msg_contents": "I agree with Luke's comments, although Luke knows I'm an unabashed \nAreca admirer where RAID controllers are concerned.\nAs tweakers.net recently reaffirmed, Areca cards with 1GB of BB cache \non them are faster than the AMCC/3Ware 9550 series.\n\nWhere Linux kernels are concerned, make sure you are at least at \n2.6.15 Most decent distros will be.\n\n4GB DIMMs are still much more than 2x as expensive per than 2GB DIMMs \nare. You may want to buy 16 2GB DIMMs instead of 8 4GB DIMMs given \nthe current price difference. Especially if you won't be upgrading \nyour RAM for a while.\n\nRon\n\n\nAt 04:44 AM 9/8/2006, Luke Lonergan wrote:\n>Brian,\n>\n>I like all of the HW - I just thoroughly reviewed this and came to the same\n>HW choices you did. The new SuperMicro chassis is an improvement on one we\n>have used for 21 servers like this.\n>\n>One modification: we implemented two internal 60GB laptop hard drives with\n>an additional 3Ware 8006-2LP controller on each machine for the OS. This\n>frees up all 16 SATA II drives for data. The laptop drives are super stable\n>and reliable under high heat conditions and they are small, so having them\n>inside the case is no big deal. BTW - we had ASAcomputers.com make our\n>systems for us and they did a good job.\n>\n>We use CentOS 4 on our lab systems and it works fine. I recommend XFS for\n>the DBMS data drives, along with RAID10 on the 3Ware controllers. With\n>normal Postgres you shouldn't expect to get more than about 350MB/s on those\n>CPUs for a single query, but with increased user count, the RAID10 should\n>scale fine to about 1200MB/s sequential transfer using XFS and a lot slower\n>with ext3.\n>\n>- Luke\n>\n>\n>On 9/8/06 12:52 AM, \"Brian Wipf\" <[email protected]> wrote:\n>\n> > I am in the process of speccing out a new box for a highly utilized\n> > (updates, inserts, selects) 40GB+ database. I'm trying to maximize\n> > performance on a budget, and I would appreciate any feedback on any\n> > of the following.\n> >\n> > Hardware:\n> > 2 - Intel Xeon 5160 3.0 GHz 4MB 1333MHz\n> > 8 - Kingston 4GB DDR2 ECC Fully Buffered\n> > 16 - WD Raptor 150GB 10000 RPM SATA II HD\n> > 1 - 3Ware 9550SX-16ML Serial ATA II RAID card\n> > 1 - Supermicro X7DBE+ Motherboard\n> > 1 - Supermicro SC836TQ-R800V Case\n> >\n> > The Woodcrest CPU, Raptor HD and 3Ware 9550SX RAID card have all been\n> > highly recommended on this list, so there may not be much to comment\n> > on there. The Supermicro X7DBE+ motherboard is appealing because of\n> > its 16 RAM slots and 64GB of maximum memory. The Supermicro SC836TQ-\n> > R800 case was selected because of its 16 drive bays in 3U and dual\n> > 800W power supplies (the motherboard requires 700W).\n> >\n> > RAID Configuration/File System:\n> > A lot of info in the lists, but I'm not sure of the best approach for\n> > this setup.\n> > I was thinking of using 14 drives in a RAID 10 (ext3) for the db and\n> > wals and 2 mirrored drives (ext3 or xfs) for the OS. Another option\n> > would be 12 drives in a RAID 10 for the database (ext3, maybe ext2)\n> > and 4 drives in a RAID 10 for the OS and wals (ext3) (with separate\n> > RAID cards). There are many choices here. Any suggestions?\n> >\n> > OS:\n> > The consensus in the list seems to be as long as you have the 2.6\n> > Linux Kernel, it's really a matter of personal preference. However,\n> > it's hard to have a preference when you're new to the Linux world,\n> > like I am. Red Hat, Fedora Core, Slackware, Suse, Gentoo? 
I guess my\n> > primary goal is speed, stability, and ease of use. Any advice here,\n> > no matter how minimal, would be appreciated.\n> >\n> > Thanks,\n> >\n> > Brian Wipf\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n> >\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Sat, 09 Sep 2006 10:48:20 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuring System for Speed"
},
{
"msg_contents": "\nOn 8-Sep-06, at 2:44 AM, Luke Lonergan wrote:\n\n> One modification: we implemented two internal 60GB laptop hard \n> drives with\n> an additional 3Ware 8006-2LP controller on each machine for the OS. \n> This\n> frees up all 16 SATA II drives for data.\n\nThat's a great idea. One question though. If I put all 16 drives in a \nRAID 10 for the database, where should I put the logs? On that large \nRAID set? If I use a RAID controller with a BB cache for the mirrored \nlaptop drives, might I be able to use that for the logs and OS?\n\nBrian Wipf\n\n",
"msg_date": "Mon, 11 Sep 2006 09:50:43 -0600",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuring System for Speed"
},
{
"msg_contents": "Brian,\n\nOn 9/11/06 8:50 AM, \"Brian Wipf\" <[email protected]> wrote:\n\n> That's a great idea. One question though. If I put all 16 drives in a\n> RAID 10 for the database, where should I put the logs? On that large\n> RAID set? If I use a RAID controller with a BB cache for the mirrored\n> laptop drives, might I be able to use that for the logs and OS?\n\nI think I'd probably reserve a couple of the data drives for the WAL, at 16\nyou have far oversubscribed non-MPP postgres' ability to use them for\nbandwidth and you'd only lose about 5% of your seek performance by dropping\na couple of drives.\n\nThe laptop drives are really slow, so putting your WAL on them might be a\nnet loss.\n\nAlternately you could go with dual SFF SAS drives, which are the same\nphysical size as the laptop drives but are 10,000 RPM SCSI equivalent. Heat\nremoval will be more of an issue with these potentially, especially if you\nhave a lot of activity on them.\n\n- Luke\n\n\n",
"msg_date": "Mon, 11 Sep 2006 16:37:36 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuring System for Speed"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a customer who wants a database solution for a 7 TB database.\nInsert will be the main action in the database.\n\nThere are some case studies with detail information about performance \nand hardware solution on this database size?\nWhat are the minimum hardware requirements for this kind of database?\n\nThanks is advance,\nNuno\nDISCLAIMER: This message may contain confidential information or privileged material and is intended only for the individual(s) named. If you are not a named addressee and mistakenly received this message you should not copy or otherwise disseminate it: please delete this e-mail from your system and notify the sender immediately. E-mail transmissions are not guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete or contain viruses. Therefore, the sender does not accept liability for any errors or omissions in the contents of this message that arise as a result of e-mail transmissions. Please request a hard copy version if verification is required. Critical Software, SA.\n",
"msg_date": "Fri, 08 Sep 2006 10:30:52 +0100",
"msg_from": "Nuno Alexandre Alves <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance in a 7 TB database."
},
{
"msg_contents": "On Fri, 2006-09-08 at 10:30 +0100, Nuno Alexandre Alves wrote:\n> Hi,\n> \n> I have a customer who wants a database solution for a 7 TB database.\n> Insert will be the main action in the database.\n> \n> There are some case studies with detail information about performance \n> and hardware solution on this database size?\n> What are the minimum hardware requirements for this kind of database?\n> \n\nThis is a good place to start:\nhttp://www.postgresql.org/about/users\n\nI would expect that the case studies for databases greater than 7TB are\nfew and far between (for any database software). If you decide\nPostgreSQL is right, I'm sure the advocacy mailing list would like to\nsee your case study when you are finished.\n\nYour hardware requirements mostly depend on how you're going to use the\ndata. If you expect that most of the data will never be read, and that\nthe database will be more of an archive, the requirements might be quite\nreasonable. However, if or when you do need to search through that data,\nexpect it to take a long time.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 08 Sep 2006 09:19:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance in a 7 TB database."
}
] |
[
{
"msg_contents": "Hi \n\ni have a severe performance problem with one of my\nviews which has 6 to 8 joins .. any help will be\nappreciated.. \nthe view is:\n\nCREATE OR REPLACE VIEW thsn.trade_view AS \nSELECT tra.tra_id, tra.per_id, tra.fir_id,\ntra.tra_dcn, tra.tra_startdate::date AS tra_startdate,\ntra.tra_enddate::date AS tra_enddate,\ntra.tra_highprice, tra.tra_lowprice, tra.tra_shares,\ntra.tra_marketvalue, tra.tra_commonsharesheld,\ntra.tra_directsharesheld, tra.tra_indirectsharesheld,\ntra.fun_id, tra.tra_amended, tra.tra_ownership,\ntra.tra_touchdate::date AS tra_touchdate,\ntra.tra_cdate, tra.tra_udate, tra.tra_relevant,\ntra.tra_type, tra.tra_date::date AS tra_date, \nper.per_fullname, fir.fir_name, fir.bra_id,\ncac90.pc_perf AS tra_performance90, incac90.pc_perf AS\ntra_indexperformance90, cac180.pc_perf AS\ntra_performance180, incac180.pc_perf AS\ntra_indexperformance180, cac270.pc_perf AS\ntra_performance270, incac270.pc_perf AS\ntra_indexperformance270, kurl.kur_marketcap AS\ntra_marketkap, kurl.kur_close AS tra_close,\nthsn.per_letztebewertungkauf(per.per_id, fir.fir_id)\nAS per_punktekauf,\nthsn.per_letztebewertungverkauf(per.per_id,\nfir.fir_id) AS per_punkteverkauf, fun.fun_thid,\nfun.fun_name, wp.wp_symbol\nFROM thsn.trade tra\nJOIN thsn.person per ON tra.per_id = per.per_id\nJOIN thsn.firma fir ON tra.fir_id = fir.fir_id\nLEFT JOIN thsn.kurs_latest kurl ON ('U'::text ||\nfir.fir_cusip::text) =kurl.fir_cusip\nLEFT JOIN thsn.perfcache90 cac90 ON tra.tra_id =\ncac90.tra_id\nLEFT JOIN thsn.indexperfcache90 incac90 ON tra.tra_id\n= incac90.tra_id\nLEFT JOIN thsn.perfcache180 cac180 ON tra.tra_id =\ncac180.tra_id\nLEFT JOIN thsn.indexperfcache180 incac180 ON\ntra.tra_id = incac180.tra_id\nLEFT JOIN thsn.perfcache270 cac270 ON tra.tra_id =\ncac270.tra_id\nLEFT JOIN thsn.indexperfcache270 incac270 ON\ntra.tra_id = incac270.tra_id\nLEFT JOIN thsn.funktion fun ON tra.fun_id = fun.fun_id\nLEFT JOIN thsn.wertpapier wp ON fir.wp_id = wp.wp_id;\n\n\nand now if i query this view with this explain query :\n\nexplain select * from thsn.trade_view tra where\ntra_date>'2006-05-29'\n\nthe output:\n\n\"Merge Right Join (cost=304605.98..319519.02\nrows=324367 width=370)\"\n\" Merge Cond: (\"outer\".wp_id = \"inner\".wp_id)\"\n\" -> Index Scan using pk_wertpapier on wertpapier wp\n(cost=0.00..1134.06 rows=30651 width=12)\"\n\" -> Sort (cost=304605.98..305416.90 rows=324367\nwidth=370)\"\n\" Sort Key: fir.wp_id\"\n\" -> Hash Left Join (cost=102943.82..274914.62\nrows=324367 width=370)\"\n\" Hash Cond: (\"outer\".fun_id = \"inner\".fun_id)\"\n\" -> Hash Left Join (cost=102942.07..271019.38\nrows=324367 width=340)\"\n\" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n\" -> Hash Left Join (cost=71679.05..216585.25\nrows=324367 width=308)\"\n\" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n\" -> Hash Left Join (cost=53148.50..189791.47\nrows=324367 width=297)\"\n\" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n\" -> Hash Left Join (cost=25994.49..148209.39\nrows=324367 width=275)\"\n\" Hash Cond: (('U'::text || (\"outer\".fir_cusip)::text)\n= (\"inner\".fir_cusip)::text)\"\n\" -> Hash Join (cost=24702.75..133134.22 rows=324367\nwidth=264)\"\n\" Hash Cond: (\"outer\".per_id = \"inner\".per_id)\"\n\" -> Hash Join (cost=1450.91..99340.45 rows=324367\nwidth=237)\"\n\" Hash Cond: (\"outer\".fir_id = \"inner\".fir_id)\"\n\" -> Seq Scan on trade tra (cost=0.00..88158.53\nrows=324367 width=181)\"\n\" Filter: ((tra_date)::date > '2006-05-29'::date)\"\n\"-> Hash (cost=1374.53..1374.53 
rows=30553 width=56)\"\n\"-> Seq Scan on firma fir (cost=0.00..1374.53\nrows=30553 width=56)\"\n\"-> Hash (cost=22629.87..22629.87 rows=248787\nwidth=27)\"\n\"-> Seq Scan on person per (cost=0.00..22629.87\nrows=248787 width=27)\"\n\"-> Hash (cost=1232.59..1232.59 rows=23659 width=35)\"\n\"-> Seq Scan on kurs_latest kurl (cost=0.00..1232.59\nrows=23659 width=35)\"\n\"-> Hash (cost=17244.44..17244.44 rows=814044\nwidth=19)\"\n\"-> Seq Scan on perfcache90 cac90 (cost=0.00..17244.44\nrows=814044 width=19)\"\n\" -> Hash (cost=6994.97..6994.97 rows=351797\nwidth=19)\"\n\" -> Seq Scan on indexperfcache90 incac90\n(cost=0.00..6994.97 rows=351797 width=19)\"\n\" -> Hash (cost=16590.44..16590.44 rows=776044\nwidth=19)\"\n\" -> Seq Scan on perfcache180 cac180\n(cost=0.00..16590.44 rows=776044 width=19)\"\n\" -> Hash (cost=6704.00..6704.00 rows=336800\nwidth=18)\"\n\" -> Seq Scan on indexperfcache180 incac180\n(cost=0.00..6704.00 rows=336800 width=18)\"\n\" -> Hash (cost=14755.09..14755.09 rows=695309\nwidth=19)\"\n\" -> Seq Scan on perfcache270 cac270\n(cost=0.00..14755.09 rows=695309 width=19)\"\n\" -> Hash (cost=6413.93..6413.93 rows=323893\nwidth=19)\"\n\" -> Seq Scan on indexperfcache270 incac270\n(cost=0.00..6413.93 rows=323893 width=19)\"\n\" -> Hash (cost=1.60..1.60 rows=60 width=34)\"\n\" -> Seq Scan on funktion fun (cost=0.00..1.60 rows=60\nwidth=34)\"\n\n\nand without the joins if i run a explain on this\nquery:\n\nEXPLAIN SELECT tra.tra_id, tra.per_id, tra.fir_id,\ntra.tra_dcn, tra.tra_startdate::date AS tra_startdate,\ntra.tra_enddate::date AS tra_enddate,\ntra.tra_highprice, tra.tra_lowprice, tra.tra_shares,\ntra.tra_marketvalue, tra.tra_commonsharesheld,\ntra.tra_directsharesheld, tra.tra_indirectsharesheld,\ntra.fun_id, tra.tra_amended, tra.tra_ownership,\ntra.tra_touchdate::date AS tra_touchdate,\ntra.tra_cdate, tra.tra_udate, tra.tra_relevant,\ntra.tra_type, tra.tra_date::date AS tra_date,\nper.per_fullname, fir.fir_name, fir.bra_id,\ncac90.pc_perf AS tra_performance90, incac90.pc_perf AS\ntra_indexperformance90, cac180.pc_perf AS\ntra_performance180, incac180.pc_perf AS\ntra_indexperformance180, cac270.pc_perf AS\ntra_performance270, incac270.pc_perf AS\ntra_indexperformance270, kurl.kur_marketcap AS\ntra_marketkap, kurl.kur_close AS tra_close, \nthsn.per_letztebewertungkauf(per.per_id, fir.fir_id)\nAS per_punktekauf,\nthsn.per_letztebewertungverkauf(per.per_id,\nfir.fir_id) AS per_punkteverkauf, fun.fun_thid,\nfun.fun_name, wp.wp_symbol\n\nFROM thsn.trade tra , thsn.person per, thsn.firma\nfir,thsn.kurs_latest kurl , thsn.perfcache90 cac90,\nthsn.indexperfcache90 incac90 , thsn.perfcache180\ncac180 ,thsn.indexperfcache180 incac180\n,thsn.perfcache270 cac270, thsn.indexperfcache270\nincac270 , thsn.funktion fun, thsn.wertpapier wp \n\nwhere tra_date>'2006-06-30' and tra.per_id =\nper.per_id and tra.fir_id = fir.fir_id and ('U'::text\n|| fir.fir_cusip::text) = kurl.fir_cusip::text and\ntra.tra_id = cac90.tra_id and tra.tra_id =\nincac90.tra_id and tra.tra_id = cac180.tra_id and\ntra.tra_id = incac180.tra_id and tra.tra_id =\ncac270.tra_id and tra.tra_id = incac270.tra_id and\ntra.fun_id = fun.fun_id and fir.wp_id = wp.wp_id \n\nthe output:\n\n\"Nested Loop (cost=64179.28..90645.20 rows=394\nwidth=370)\"\n\" -> Nested Loop (cost=64179.28..89072.83 rows=394\nwidth=343)\"\n\" -> Nested Loop (cost=64179.28..87183.66 rows=471\nwidth=372)\"\n\" -> Nested Loop (cost=64179.28..81962.24 rows=1304\nwidth=353)\"\n\" -> Nested Loop (cost=64179.28..74632.57 rows=1825\nwidth=334)\"\n\" -> Merge 
Join (cost=64179.28..65424.31 rows=2289\nwidth=315)\"\n\" Merge Cond: (\"outer\".wp_id = \"inner\".wp_id)\"\n\" -> Index Scan using pk_wertpapier on wertpapier wp\n(cost=0.00..1134.06 rows=30651 width=12)\"\n\" -> Sort (cost=64179.28..64185.15 rows=2349\nwidth=315)\"\n\" Sort Key: fir.wp_id\"\n\" -> Seq Scan on indexperfcache180 incac180\n(cost=0.00..6704.00 rows=336800 width=18)\"\n\" -> Hash (cost=54717.99..54717.99 rows=9690\nwidth=267)\"\n\" -> Merge Join (cost=42275.34..54717.99 rows=9690\nwidth=267)\"\n\" Merge Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n\" -> Index Scan using pk_indexperfcache270 on\nindexperfcache270 incac270 (cost=0.00..11393.83\nrows=323893 width=19)\"\n\" -> Sort (cost=42275.34..42348.12 rows=29114\nwidth=248)\"\n\" Sort Key: tra.tra_id\"\n\" -> Hash Join (cost=4224.87..40116.62 rows=29114\nwidth=248)\"\n\" Hash Cond: (\"outer\".fir_id = \"inner\".fir_id)\"\n\" -> Bitmap Heap Scan on trade tra\n(cost=183.96..35201.91 rows=29133 width=181)\"\n\" Recheck Cond: (tra_date > '2006-06-30\n00:00:00'::timestamp without time zone)\"\n\" -> Bitmap Index Scan on trade_date_index\n(cost=0.00..183.96 rows=29133 width=0)\"\n\" Index Cond: (tra_date > '2006-06-30\n00:00:00'::timestamp without time zone)\"\n\" -> Hash (cost=3964.57..3964.57 rows=30533 width=67)\"\n\" -> Hash Join (cost=1291.74..3964.57 rows=30533\nwidth=67)\"\n\" Hash Cond: (('U'::text || (\"outer\".fir_cusip)::text)\n= (\"inner\".fir_cusip)::text)\"\n\" -> Seq Scan on firma fir (cost=0.00..1374.53\nrows=30553 width=56)\"\n\" -> Hash (cost=1232.59..1232.59 rows=23659 width=35)\"\n\" -> Seq Scan on kurs_latest kurl (cost=0.00..1232.59\nrows=23659 width=35)\"\n\" -> Hash (cost=1.60..1.60 rows=60 width=34)\"\n\"-> Seq Scan on funktion fun (cost=0.00..1.60 rows=60\nwidth=34)\"\n\"-> Index Scan using pk_perfcache180 on perfcache180\ncac180 (cost=0.00..4.01 rows=1 width=19)\"\n\" Index Cond: (\"outer\".tra_id = cac180.tra_id)\"\n\"-> Index Scan using pk_perfcache270 on perfcache270\ncac270 (cost=0.00..4.00 rows=1 width=19)\"\n\" Index Cond: (\"outer\".tra_id = cac270.tra_id)\"\n\"-> Index Scan using pk_indexperfcache90 on\nindexperfcache90 incac90 (cost=0.00..3.99 rows=1\nwidth=19)\"\n\"Index Cond: (\"outer\".tra_id = incac90.tra_id)\"\n\" -> Index Scan using pk_perfcache90 on perfcache90\ncac90 (cost=0.00..4.00 rows=1 width=19)\"\n\" Index Cond: (\"outer\".tra_id = cac90.tra_id)\"\n\"-> Index Scan using pk_person on person per\n(cost=0.00..3.96 rows=1 width=27)\"\n\" Index Cond: (\"outer\".per_id = per.per_id)\"\n\n\nIn this case the time taken is much less and also the\nindex in the tra_date cloumn is considered while with\nthe view the index is not considered and also other\nindexes are not considered. \n\nWhat is it that i am doing wrong?\n\nThanks in advance.\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Fri, 8 Sep 2006 05:48:23 -0700 (PDT)",
"msg_from": "fardeen memon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem with joins "
},
{
"msg_contents": "fardeen memon <[email protected]> writes:\n> What is it that i am doing wrong?\n\nI think the forced coercion to date type in the view case is preventing\nthe planner from making a good guess about the selectivity of the\ncondition on tra_date. It has stats about tra_date's distribution,\nbut none about the distribution of \"tra_date::date\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Sep 2006 10:46:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with joins "
},
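A possible follow-up to Tom's point, sketched here rather than taken from the thread: assuming a release new enough that ANALYZE also gathers statistics for expression indexes, indexing the cast itself gives the planner a distribution for tra_date::date and an index it can use for the view's date filter. Table and column names are the ones from the posted view; treat this as an untested workaround.

```sql
-- Hypothetical workaround: collect statistics on the cast expression the view filters on.
CREATE INDEX trade_tra_date_cast_idx ON thsn.trade ((tra_date::date));
ANALYZE thsn.trade;
```

The alternative actually taken later in the thread — exposing the raw timestamp column in the view — avoids the cast altogether.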
{
"msg_contents": "Thanks for the reply .. you are right after i changed tra_date to timestamp in the view it considered the index and the performance did increase a bit .. but still compared to the query without the joins its much less .. any idea why? \n \n here is the output of the explain query after changing the tra_date column to timestamp.\n \n \"Merge Right Join (cost=229025.77..231549.17 rows=32995 width=366)\"\n \"Merge Cond: (\"outer\".wp_id = \"inner\".wp_id)\"\n \"-> Index Scan using pk_wertpapier on wertpapier wp (cost=0.00..1132.90 rows=30654 width=12)\"\n \"-> Sort (cost=229025.77..229108.26 rows=32995 width=366)\"\n \" Sort Key: fir.wp_id\"\n \"-> Hash Left Join (cost=190376.33..226549.51 rows=32995 width=366)\"\n \" Hash Cond: (\"outer\".fun_id = \"inner\".fun_id)\"\n \"-> Hash Left Join (cost=190374.58..226147.09 rows=32995 width=336)\"\n \" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Merge Right Join (cost=182608.86..211107.70 rows=32995 width=326)\"\n \" Merge Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Index Scan using uk1_perfcache270 on perfcache270 cac270 (cost=0.00..26360.00 rows=695309 width=19)\"\n \"-> Sort (cost=182608.86..182691.35 rows=32995 width=315)\"\n \" Sort Key: tra.tra_id\"\n \"-> Hash Left Join (cost=143070.31..180132.60 rows=32995 width=315)\"\n \" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Hash Left Join (cost=134981.89..163162.72 rows=32995 width=305)\"\n \" Hash Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Merge Right Join (cost=116451.34..130707.85 rows=32995 width=294)\"\n \" Merge Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Index Scan using pk_indexperfcache90 on indexperfcache90 incac90 (cost=0.00..12969.65 rows=395189 width=18)\"\n \"-> Sort (cost=116451.34..116533.83 rows=32995 width=284)\"\n \" Sort Key: tra.tra_id\"\n \"-> Merge Right Join (cost=80758.34..113975.08 rows=32995 width=284)\"\n \" Merge Cond: (\"outer\".tra_id = \"inner\".tra_id)\"\n \"-> Index Scan using uk1_perfcache90 on perfcache90 cac90 (cost=0.00..30740.84 rows=814044 width=19)\"\n \"-> Sort (cost=80758.34..80840.83 rows=32995 width=273)\"\n \" Sort Key: tra.tra_id\"\n \"-> Hash Left Join (cost=26205.11..78282.08 rows=32995 width=273)\"\n \" Hash Cond: (('U'::text || (\"outer\".fir_cusip)::text) = (\"inner\".fir_cusip)::text)\"\n \"-> Hash Join (cost=24911.18..75586.30 rows=32995 width=263)\"\n \" Hash Cond: (\"outer\".per_id = \"inner\".per_id)\"\n \"-> Hash Join (cost=1658.41..40649.44 rows=32995 width=236)\"\n \" Hash Cond: (\"outer\".fir_id = \"inner\".fir_id)\"\n \"-> Bitmap Heap Scan on trade tra (cost=207.48..38208.67 rows=32995 width=180)\"\n \" Recheck Cond: (tra_date > '2006-06-30 00:00:00'::timestamp without time zone)\"\n \"-> Bitmap Index Scan on trade_date_index (cost=0.00..207.48 rows=32995 width=0)\"\n \" Index Cond: (tra_date > '2006-06-30 00:00:00'::timestamp without time zone)\"\n \"-> Hash (cost=1374.54..1374.54 rows=30554 width=56)\"\n \"-> Seq Scan on firma fir (cost=0.00..1374.54 rows=30554 width=56)\"\n \"-> Hash (cost=22630.62..22630.62 rows=248862 width=27)\"\n \"-> Seq Scan on person per (cost=0.00..22630.62 rows=248862 width=27)\"\n \"-> Hash (cost=1234.74..1234.74 rows=23674 width=34)\"\n \"-> Seq Scan on kurs_latest kurl (cost=0.00..1234.74 rows=23674 width=34)\"\n \"-> Hash (cost=16590.44..16590.44 rows=776044 width=19)\"\n \"-> Seq Scan on perfcache180 cac180 (cost=0.00..16590.44 rows=776044 width=19)\"\n \"-> Hash (cost=7137.93..7137.93 rows=380193 width=18)\"\n \"-> Seq Scan on 
indexperfcache180 incac180 (cost=0.00..7137.93 rows=380193 width=18)\"\n \"-> Hash (cost=6847.57..6847.57 rows=367257 width=18)\"\n \"-> Seq Scan on indexperfcache270 incac270 (cost=0.00..6847.57 rows=367257 width=18)\"\n \"-> Hash (cost=1.60..1.60 rows=60 width=34)\"\n \"-> Seq Scan on funktion fun (cost=0.00..1.60 rows=60 width=34)\"\n \n \n It is still doing a sequence scan on the person , perfcache180 and perfcache270 table and with out the joins it performs a index scan on these tables. \n \n Is something wrong with the view?\n \n once again thanks for your help.\n\nTom Lane <[email protected]> wrote: fardeen memon writes:\n> What is it that i am doing wrong?\n\nI think the forced coercion to date type in the view case is preventing\nthe planner from making a good guess about the selectivity of the\ncondition on tra_date. It has stats about tra_date's distribution,\nbut none about the distribution of \"tra_date::date\".\n\n regards, tom lane",
"msg_date": "Mon, 11 Sep 2006 04:29:13 -0700 (PDT)",
"msg_from": "fardeen memon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem with joins "
},
{
"msg_contents": "fardeen memon <[email protected]> writes:\n> here is the output of the explain query after changing the tra_date column to timestamp.\n\nIf you want intelligent commentary, please (a) post EXPLAIN ANALYZE not\nEXPLAIN output, and (b) don't mangle the indentation. This is just\nabout unreadable :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2006 10:21:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with joins "
}
] |
[
{
"msg_contents": "hi\n\nplz unsubscribe me.. \n\ni am sending mail to this id.. for unsubscribing.. is it correct..\nmy mail box is gettin flooded..\n\n\n \nhi\n\nplz unsubscribe me.. \n\ni am sending mail to this id.. for unsubscribing.. is it correct..\nmy mail box is gettin flooded..",
"msg_date": "8 Sep 2006 14:49:32 -0000",
"msg_from": "\"Phadnis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "unsubscribe me"
},
{
"msg_contents": "\"Phadnis\" <shphadnis 'at' rediffmail.com> writes:\n\n> plz unsubscribe me.. \n> \n> i am sending mail to this id.. for unsubscribing.. is it correct..\n> my mail box is gettin flooded..\n\nyou managed to subscribe, you'll probably manage to unsubcribe.\n\nhint: the email headers contain the information for unsubscribing.\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "08 Sep 2006 17:07:23 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unsubscribe me"
}
] |
[
{
"msg_contents": "Hi,\n\nfor this simple join of two tables,\n\nSELECT * FROM large_rel n, smaller_rel a\n WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';\n\nPostgreSQL 8.1.4 chooses an extremely bad query plan:\n\n Hash Join (cost=283.45..8269374.38 rows=14137 width=94)\n Hash Cond: (\"outer\".field_1 = \"inner\".field_2)\n -> Seq Scan on large_rel n (cost=0.00..6760690.04 rows=301651904 width=52)\n -> Hash (cost=283.27..283.27 rows=74 width=42)\n -> Bitmap Heap Scan on smaller_rel a (cost=2.26..283.27 rows=74 width=42)\n Recheck Cond: (key = '127.0.0.1'::inet)\n -> Bitmap Index Scan on smaller_rel_1_key (cost=0.00..2.26 rows=74 width=0)\n Index Cond: (key = '127.0.0.1'::inet)\n\nNote the sequential scan over the whole large_rel table (and the\ncorresponding row estimate is roughly correct).\n\nIf I turn off hash joins, I get this plan, which actually completes in\nfinite time:\n\n Nested Loop (cost=2005.35..46955689.59 rows=14137 width=94) (actual time=0.325..0.678 rows=12 loops=1)\n -> Bitmap Heap Scan on smaller_rel a (cost=2.26..283.27 rows=74 width=42) (actual time=0.132..0.133 rows=1 loops=1)\n Recheck Cond: (key = '127.0.0.1'::inet)\n -> Bitmap Index Scan on smaller_rel_1_key (cost=0.00..2.26 rows=74 width=0) (actual time=0.095..0.095 rows=1 loops=1)\n Index Cond: (key = '127.0.0.1'::inet)\n -> Bitmap Heap Scan on large_rel n (cost=2003.09..632110.78 rows=193739 width=52) (actual time=0.182..0.501 rows=12 loops=1)\n Recheck Cond: (n.field_1 = \"outer\".field_2)\n -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09 rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)\n Index Cond: (n.field_1 = \"outer\".field_2)\n\nThe row estimate for\n\n SELECT * FROM smaller_rel a WHERE a.key = '127.0.0.1';\n\nis somewhat off: \n\n Bitmap Heap Scan on smaller_rel a (cost=2.26..283.27 rows=74 width=42) (actual time=0.134..0.135 rows=1 loops=1)\n Recheck Cond: (key = '127.0.0.1'::inet)\n -> Bitmap Index Scan on smaller_rel_1_key (cost=0.00..2.26 rows=74 width=0) (actual time=0.108..0.108 rows=1 loops=1)\n Index Cond: (key = '127.0.0.1'::inet)\n\nHowever, I can't believe that the hash join would be faster even if\nthere where 74 matching rows in smaller_rel instead of just one. The\nestimate decreases when I increase the portion of smaller_rel which is\nscanned by ANALYZE (to something like 10% of the table), but this\ndoesn't look like a solution.\n\nAny suggestions?\n\n(The queries have been pseudonzmized and may contain typos.)\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 11 Sep 2006 15:48:03 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Abysmal hash join"
},
{
"msg_contents": "Florian Weimer <[email protected]> writes:\n> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09 rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)\n> Index Cond: (n.field_1 = \"outer\".field_2)\n\nWhat you need to look into is why that rowcount estimate is off by four\norders of magnitude.\n\nThe estimate on the smaller table is only off by a factor of 75 but\nthat's still pretty darn awful. Are the statistics up to date? Maybe\nlarger stats targets would help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2006 10:28:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Abysmal hash join "
},
{
"msg_contents": "* Tom Lane:\n\n> Florian Weimer <[email protected]> writes:\n>> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09 rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)\n>> Index Cond: (n.field_1 = \"outer\".field_2)\n>\n> What you need to look into is why that rowcount estimate is off by four\n> orders of magnitude.\n\nAh, thanks.\n\n> The estimate on the smaller table is only off by a factor of 75 but\n> that's still pretty darn awful. Are the statistics up to date?\n\nSeems so. Running ANALYZE only increased the row estimate, instead of\ndecreasing it. 8-(\n\n> Maybe larger stats targets would help.\n\nI've set default_statistics_target to 100 and rerun ANALYZE on that\ntable. The estimate went down to 43108 (and the hash join is still\nthe preferred plan). ANALZE with default_statistics_target = 200\n(which seems pretty large to me) is down to 26050 and the bitmap scan\nplan is chosen.\n\nPostgreSQL seems to think that there are only very few distinct values\nfor that column (with default_statistics_target = 100 and 200):\n\nEXPLAIN SELECT DISTINCT field_1 FROM large_rel;\n\n Unique (cost=82841534.37..84400982.21 rows=7235 width=24)\n -> Sort (cost=82841534.37..83621258.29 rows=311889568 width=24)\n Sort Key: field_1\n -> Seq Scan on large_rel (cost=0.00..6863066.68 rows=311889568 width=24)\n\n Unique (cost=82733282.28..84290654.92 rows=11957 width=24)\n -> Sort (cost=82733282.28..83511968.60 rows=311474528 width=24)\n Sort Key: field_1\n -> Seq Scan on large_rel (cost=0.00..6858916.28 rows=311474528 width=24)\n\nI don't know the exact value, but it's closer to a few millions. The\ndistribution is quite odd. A large sample of the column (10 million\nrows) looks like this:\n\nSELECT cnt, COUNT(*) FROM \n (SELECT COUNT(*) AS cnt FROM\n (SELECT field_1 FROM large_rel LIMIT 10000000) x GROUP BY field_1) y \n GROUP BY cnt ORDER BY cnt;\n\n cnt | count\n--------+--------\n 1 | 258724\n 2 | 85685\n 3 | 46215\n 4 | 29333\n 5 | 20512\n 6 | 15276\n 7 | 11444\n 8 | 9021\n[...]\n 59379 | 1\n 59850 | 1\n 111514 | 1\n 111783 | 1\n 111854 | 1\n 112259 | 1\n 112377 | 1\n 116379 | 1\n 116473 | 1\n 116681 | 1\n\nMaybe I'm just screwed with such a distribution, but it's still rather\nunfortunate.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 11 Sep 2006 17:15:33 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Abysmal hash join"
},
{
"msg_contents": "Florian Weimer <[email protected]> writes:\n>> Maybe larger stats targets would help.\n\n> I've set default_statistics_target to 100 and rerun ANALYZE on that\n> table. The estimate went down to 43108 (and the hash join is still\n> the preferred plan). ANALZE with default_statistics_target = 200\n> (which seems pretty large to me) is down to 26050 and the bitmap scan\n> plan is chosen.\n\n> PostgreSQL seems to think that there are only very few distinct values\n> for that column (with default_statistics_target = 100 and 200):\n\nYeah, n_distinct estimation from a sample is inherently hard :-(. Given\nthat you have such a long tail on the distribution, it might be worth\nyour while to crank the stats target for that column all the way to the\nmaximum (1000). Also you need to experiment with extending the stats\nfor the smaller table.\n\nI believe what's happening here is that the smaller table joins only to\nless-frequent entries in the big table (correct?). The hash join would\nbe appropriate if there were many rows joining to the very-frequent\nentries, and the problem for the planner is to determine that that's not\nso. Given enough stats on the two joining columns, it should be able to\ndetermine that.\n\nOf course, large stats targets will slow down planning to some extent,\nso you should also keep an eye on how long it takes to plan the query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2006 11:39:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Abysmal hash join "
},
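For reference, the per-column form of the change Tom suggests, so default_statistics_target does not have to be raised database-wide. This is a sketch using the pseudonymised names from the original post; 1000 is the maximum target Tom mentions for this release series.

```sql
-- Raise the statistics sample only for the join and filter columns, then re-analyze.
ALTER TABLE large_rel   ALTER COLUMN field_1 SET STATISTICS 1000;
ALTER TABLE smaller_rel ALTER COLUMN field_2 SET STATISTICS 1000;
ALTER TABLE smaller_rel ALTER COLUMN key     SET STATISTICS 1000;
ANALYZE large_rel;
ANALYZE smaller_rel;
```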
{
"msg_contents": "* Tom Lane:\n\n> Yeah, n_distinct estimation from a sample is inherently hard :-(. Given\n> that you have such a long tail on the distribution, it might be worth\n> your while to crank the stats target for that column all the way to the\n> maximum (1000).\n\nI've done that. Fortunately, ANALYZE time didn't increase by that\nmuch, compared to the default (by just a factor of 10). The bitmap\nscan estimate is still way off (around 8000), but let's hope that it\nwon't matter in practice.\n\n> Also you need to experiment with extending the stats for the smaller\n> table.\n\nYeah, the situation is quite similar, but on a much smaller scale.\n\n> I believe what's happening here is that the smaller table joins only to\n> less-frequent entries in the big table (correct?).\n\nAlmost. We won't select the rows based on these values, at least not\nin queries of that type. The reason is simply that the result set is\ntoo large to be useful.\n\n> Of course, large stats targets will slow down planning to some extent,\n> so you should also keep an eye on how long it takes to plan the query.\n\nThese queries are mostly ad-hoc, so a delay of a couple of seconds\ndoesn't matter. Only if you need to wait five minutes, it's a\ndifferent story.\n\nIt seems that the situation is under control now. Thanks.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 11 Sep 2006 18:28:49 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Abysmal hash join"
},
{
"msg_contents": "\nFlorian Weimer <[email protected]> writes:\n\n> I've done that. Fortunately, ANALYZE time didn't increase by that\n> much, compared to the default (by just a factor of 10). \n\nWith really high stats times you also have to keep an eye on planning time.\nThe extra data in the stats table can cause planning to take longer.\n\n\n-- \ngreg\n\n",
"msg_date": "11 Sep 2006 20:56:27 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Abysmal hash join"
},
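Since plain EXPLAIN plans a query without executing it, psql's \timing gives a rough way to watch the planning-time cost Greg mentions (the query below is the one posted at the start of this thread, with its pseudonymised names):

```sql
\timing
EXPLAIN SELECT * FROM large_rel n, smaller_rel a
 WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';
```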
{
"msg_contents": "PG 8.0.3 is choosing a bad plan between a query.\nI'm going to force the plan (by making one join into a function).\n\nI'd like to know if this is unexpected; in general,\ncan PG see that a join on an grouped-by field\ncan be pushed down into the query as an indexable filter?\n\nThe query below joins a table \"message\", to an aggregate of \n\"message_recipient\" joined to \"recipient\". The joins are all on\nindexed PK-FK columns. \"message_recipient\" is an intersect table.\n\n message :<: message_recipient :>: recipient\n\nIn the query plan below, the right side of the join returns one row of \"message\", \nand PG knows it.\n\nThe left side of the join compute the entire aggregate of \"message_recipient\"\n(est 700K rows), then does a merge join against the single message row.\n\nI would have hoped for a nested-loop join, where the message \"id\"\nfield would be used to index-scan \"message_recipient\",\nwhich in turn would index-scan \"recipient\" by recipient \"id\".\n\nThis is PG 8.0.3. All tables have been (very) recently analyzed.\nThe query plans estimated rowcounts all look bang-on.\n\"message\" and \"message_recipient\" are tables of about 3M rows each.\n\nAs usual, this is on a system to which I only have restricted access.\nBut I'd be happy to expand on the info below with anything short of\nthe pg_dump.\n\n-----------------------------------========================================================\n\nEXPLAIN\nSELECT message.id AS m_db_id, message.m_global_id AS id, m_global_id, m_queue_id, h_message_id,\n m_date AS c_date_iso, m_date, c_subject_utf8, message.reason_id AS reason_id,\n m_reason.name AS m_reason, m_spam_probability, m_spam_level, h_to, m_message_size,\n m_header_size, date_part('epoch', message.m_date) AS c_qdate_time,\n h_from_local || '@' || h_from_domain AS h_from,\n env_from_local || '@' || env_from_domain AS env_from,\n env_from_local || '@' || env_from_domain AS m_envelope_from, location_name AS location,\n m_milter_host, m_relay, virus_name AS m_virus_name, m_all_recipients\nFROM message\nJOIN m_reason ON message.reason_id = m_reason.reason_id\nJOIN message_all_recipients ON message.id = message_all_recipients.m_id\nWHERE message.m_global_id = '2211000-1';\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------\nNested Loop (cost=254538.42..283378.44 rows=1 width=425)\n Join Filter: (\"outer\".reason_id = \"inner\".reason_id)\n -> Merge Join (cost=254538.42..283377.33 rows=1 width=416)\n Merge Cond: (\"outer\".m_id = \"inner\".id)\n -> Subquery Scan message_all_recipients (cost=254535.40..281604.95 rows=707735 width=40)\n -> GroupAggregate (cost=254535.40..274527.60 rows=707735 width=36)\n -> Sort (cost=254535.40..258250.57 rows=1486069 width=36)\n Sort Key: message_recipient.message_id\n -> Merge Join (cost=0.00..78970.52 rows=1486069 width=36)\n Merge Cond: (\"outer\".id = \"inner\".recipient_id)\n -> Index Scan using pk_recipient on recipient (cost=0.00..5150.65 rows=204514 width=36)\n -> Index Scan using pk_message_recipient on message_recipient (cost=0.00..56818.25 rows=1486069 width=16)\n Filter: (is_mapped = 1)\n -> Sort (cost=3.02..3.03 rows=1 width=384)\n Sort Key: message.id\n -> Index Scan using unq_message_m_global_id on message (cost=0.00..3.01 rows=1 width=384)\n Index Cond: ((m_global_id)::text = '2211000-1'::text)\n -> Seq Scan on m_reason (cost=0.00..1.04 rows=4 width=13)\n\n\n\n----------------------------------- Relevant tables and view:\n\n# \\d message\n Table 
\"public.message\"\n Column | Type | Modifiers\n--------------------+-----------------------------+---------------------------------------------------------\n id | bigint | not null default nextval('public.message_id_seq'::text)\n m_global_id | character varying(255) | not null\n reason_id | smallint | not null\n location_name | character varying(255) | not null\n m_date | timestamp without time zone |\n m_queue_id | character varying(255) |\n h_message_id | character varying(255) |\n c_subject_utf8 | character varying(255) |\n env_from_local | character varying(255) |\n env_from_domain | character varying(255) |\n h_from_local | character varying(255) |\n h_from_domain | character varying(255) |\n h_from | character varying(255) |\n h_to | character varying(255) |\n m_milter_host | character varying(255) |\n m_relay | character varying(255) |\n m_spam_probability | double precision |\n m_message_size | integer |\n m_header_size | integer |\n m_spam_level | character varying(255) |\n virus_name | text |\nIndexes:\n \"pk_message\" PRIMARY KEY, btree (id)\n \"unq_message_m_global_id\" UNIQUE, btree (m_global_id)\n \"message_h_message_id_index\" btree (h_message_id)\n \"message_m_date_index\" btree (m_date)\n \"message_m_queue_id_index\" btree (m_queue_id)\n\n\n# \\d message_recipient\n Table \"public.message_recipient\"\n Column | Type | Modifiers\n---------------+----------+--------------------\n recipient_id | bigint | not null\n message_id | bigint | not null\n is_mapped | smallint | not null default 0\n is_calculated | smallint | not null default 0\n is_envelope | smallint | not null default 0\n reason_id | smallint | not null\n action | smallint |\nIndexes:\n \"pk_message_recipient\" PRIMARY KEY, btree (recipient_id, message_id)\n \"message_recipient_message_id_index\" btree (message_id)\nForeign-key constraints:\n \"rc_rcpnt_map_msg_id\" FOREIGN KEY (message_id) REFERENCES message(id) ON DELETE CASCADE\n\n\nCREATE AGGREGATE catenate (\n BASETYPE = text,\n SFUNC = textcat,\n STYPE = text,\n INITCOND = ''\n);\n\n\n\nCREATE OR REPLACE VIEW message_all_recipients AS\nSELECT message_id AS m_id,\n substr(catenate(','||local||'@'||domain),2) AS m_all_recipients\nFROM message_recipient\nJOIN recipient ON id = recipient_id\nWHERE is_mapped = 1\nGROUP BY message_id;\n\n----------------------------------- pg_statistics info, problably not of much interest\nObject DiskIO CacheIO Ins Upd Del SeqScan TupRead IdxScan IdxFetch\nm_reason 308 599679 1 0 0 599985 2399935 0 0\nmessage 4658766 14977816 2210967 0 933643 7299 81428503 5855900 8833404\nmessage.pk_~ 227834 31683671 0 0 0 0 3897054 5850229 3897054\nmessage.unq_~_m_global_id 252753 8591251 0 0 0 0 5552 5564 5552\nmessage.~_h_~_id_index 1879172 8496722 0 0 0 0 0 0 0\nmessage.~_m_date_index 245405 8526765 0 0 0 0 4930798 107 4930798\nmessage.~_m_queue_id_index 245719 8598360 0 0 0 0 0 0 0\nmessage_recipient 41862572 81546465 2703260 104 1144977 0 0 2648101 117192003\nmessage_recipient.pk_~ 4541776 16430539 0 0 0 0 116042206 1710555 116042206\nmessage_recipient.~_message_id_index 243379 14235956 0 0 0 0 1149797 937546 1149797\nrecipient 55288623 955926871 223057 0 112158 584592 103499192990 5726999 62036712\nrecipient.pk_~ 180080 1125073 0 0 0 0 7440446 117045 7440446\nrecipient.unq_~ 2205366 21513447 0 0 0 0 54166857 5609472 54166857\nrecipient.~_domain_index 191722 734683 0 0 0 0 429409 482 429409\n\n----------------------------------- output of \"pgdisk\", showing actual disk space vs pg_class info:\n..DISK-KB ..DATA-KB ...EST-KB .EST-ROWS 
...OID.... NAME\n 1625360 1021104 979000 1315620 17261 public.message\n 369208 159200 159032 1558240 17272 public.message_recipient\n 45752 16408 14552 181646 17293 public.recipient\n\n",
"msg_date": "Tue, 12 Sep 2006 13:59:56 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Bad plan for join to aggregate of join."
},
{
"msg_contents": "Mischa Sandberg <[email protected]> writes:\n> can PG see that a join on an grouped-by field\n> can be pushed down into the query as an indexable filter?\n\nNo. The GROUP BY serves as a partial optimization fence. If you're\nconcerned about the speed of this query, I recommend making a different\nview in which 'message' is joined inside the GROUP BY.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2006 19:42:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan for join to aggregate of join. "
},
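A minimal sketch of what joining 'message' inside the GROUP BY could look like, assembled only from the schema and the message_all_recipients definition posted above (untested; recipient's local, domain and id columns are assumed from that view). Because m_global_id becomes a grouping column, a filter on it can be pushed below the aggregate and satisfied by unq_message_m_global_id instead of aggregating all ~700K groups first.

```sql
CREATE OR REPLACE VIEW message_with_all_recipients AS
SELECT m.id AS m_db_id,
       m.m_global_id,
       substr(catenate(',' || r.local || '@' || r.domain), 2) AS m_all_recipients
FROM message m
JOIN message_recipient mr ON mr.message_id = m.id AND mr.is_mapped = 1
JOIN recipient r          ON r.id = mr.recipient_id
GROUP BY m.id, m.m_global_id;
```

The remaining message columns can either be added to the GROUP BY list or joined back on m_db_id in the outer query.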
{
"msg_contents": "Tom Lane wrote:\n> Mischa Sandberg <[email protected]> writes:\n>> can PG see that a join on an grouped-by field\n>> can be pushed down into the query as an indexable filter?\n> \n> No. The GROUP BY serves as a partial optimization fence. If you're\n> concerned about the speed of this query, I recommend making a different\n> view in which 'message' is joined inside the GROUP BY.\n\nThanks.\n",
"msg_date": "Tue, 12 Sep 2006 17:51:27 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan for join to aggregate of join."
}
] |
[
{
"msg_contents": "Hi,\n\na week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre\n7.4.7). To migrate the database we use a dump, using pg_dump with this\noptions:\npg_dump -U <username> -c -F p -O -v -f <filename> <DBname>\n\nWe have a search, that using woody take about 1-2 minutes, but with\nsarge it is executing about 2 hours, and at least it crashes, with a\nmessage about a temporal file and no more disk space ( i have more than\na GB of free disk space).\n\nThe search is very long, with a lot of joins (generated by a ERP we\nmanage). We think that the problem can be at the indices, but we are not\nsure. At the original woody database we create indices, but when the\ndump is being installed at sarge, it creates an implicit index, so there\nare times that there are duplicates indices. But we try to remove the\nduplicate indices and we don't resove the problem.\n\nThe select is the next one (sorry if it is too big):\n\n(SELECT facturaabono.numeroFactura as \nnumeroFacturaFactura,facturaabono.codigoFactura as \ncodigoFacturaFactura,facturaabono.codigoEmpresa as \ncodigoEmpresaFactura,facturaabono.codigoTienda as \ncodigoTiendaFactura,facturaabono.estado as \nestadoFactura,facturaabono.fechaemision as \nfechaEmisionFactura,facturaabono.tipoIva as \ntipoIvaFactura,facturaAbono.baseImponibleModificada as \nbaseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \nas baseImponibleNuevaFactura,refactura as \nrefacturaFactura,participanteShop.codigoParty as \ncodigoPartyParticipantShop,participanteShop.nombre as \nnombreParticipantShop,participanteCliente.codigoParty as \ncodigoPartyParticipantPagador,participanteCliente.nick as \nnickParticipantPagador,participanteCliente.nombreCorto as \nshortnameparticipantPagador,participanteCliente.cif as \ncifParticipantPagador,reparacion.codigoReparacion as \ncodigoReparacionRepair,reparacion.codigoTienda as \ncodigoTiendaRepair,reparacion.codigoCliente as \ncodigoClienteRepair,reparacion.codigoCompania as \ncodigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \nfacturaAbono.codigoEmpresa as \ncodigoPartyParticipantEnter,participanteCompany.nombre as \nnombreParticipantCompany,participanteCompany.nombreCorto as \nshortnameparticipantCompany,participanteCompany.codigoParty as \ncodigoPartyParticipantCompany,participanteCompany.cif as \ncifParticipantCompany, pago.codigoPago as codigoPagoPago, \npago.codigobanco as codigoBancoPago, pago.codigooficina as \ncodigoOficinaPago, pago.numerocuenta as numeroCuentaPago,\npago.esAPlazos \nas esAPlazosPago, pago.pagosRealizados as pagosRealizadosPago, \npago.numeroVencimientos as numeroVencimientosPago, pago.fechaInicio as \nfechaInicioPago, pago.esdomiciliacion as esdomiciliacionpago from \nreparacion left outer join participante participanteCompany ON \n(reparacion.codigoCompania=participanteCompany.codigoParty) left outer \njoin siniestro on \n(siniestro.codigoReparacion=reparacion.codigoReparacion and \nsiniestro.codigoTienda=reparacion.codigoTienda and \nsiniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \nparticipanteCliente, participante participanteShop, tienda,\nfacturaabono \nleft outer join pago on (facturaabono.codigoPago=pago.codigoPago and \nfacturaabono.codigoTienda=pago.codigoTienda and \nfacturaabono.codigoEmpresa=pago.codigoEmpresa) where \nfacturaabono.estado >= 0 and (facturaabono.numeroFactura is not null) \nand facturaabono.codigoTienda=participanteShop.codigoParty and \nfacturaabono.codigoTienda=reparacion.codigoTienda and 
\nfacturaabono.codigoEmpresa=reparacion.codigoEmpresa and \nfacturaabono.codigoPagador = participanteCliente.codigoParty and \ntienda.codigoTienda = facturaabono.codigoTienda and \n(participanteCliente.nick ilike '%ASITUR%') and \n(facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n') and facturaabono.tipoIva is NULL and (facturaabono.codigoReparacion \n= reparacion.codigoReparacion) order by \nparticipantecompany.nombre,facturaabono.numeroFactura) union (SELECT \nDISTINCT facturaabono.numeroFactura as \nnumeroFacturaFactura,facturaabono.codigoFactura as \ncodigoFacturaFactura,facturaabono.codigoEmpresa as \ncodigoEmpresaFactura,facturaabono.codigoTienda as \ncodigoTiendaFactura,facturaabono.estado as \nestadoFactura,albaranes.fechaemision as \nfechaEmisionFactura,facturaabono.tipoIva as \ntipoIvaFactura,facturaAbono.baseImponibleModificada as \nbaseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \nas baseImponibleNuevaFactura,refactura as \nrefacturaFactura,participanteShop.codigoParty as \ncodigoPartyParticipantShop,participanteShop.nombre as \nnombreParticipantShop,participanteCliente.codigoParty as \ncodigoPartyParticipantPagador,participanteCliente.nick as \nnickParticipantPagador,participanteCliente.nombreCorto as \nshortnameparticipantPagador,participanteCliente.cif as \ncifParticipantPagador,(case WHEN reparacion.codigoCompania is not NULL \nTHEN reparacion.codigoReparacion ELSE NULL END) as \ncodigoReparacionRepair,reparacion.codigoTienda as \ncodigoTiendaRepair,reparacion.codigoCliente as \ncodigoClienteRepair,reparacion.codigoCompania as \ncodigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \nfacturaAbono.codigoEmpresa as \ncodigoPartyParticipantEnter,participanteCompany.nombre as \nnombreParticipantCompany,participanteCompany.nombreCorto as \nshortnameparticipantCompany,participanteCompany.codigoParty as \ncodigoPartyParticipantCompany,participanteCompany.cif as \ncifParticipantCompany, pago.codigoPago as codigoPagoPago, \npago.codigobanco as codigoBancoPago, pago.codigooficina as \ncodigoOficinaPago, pago.numerocuenta as numeroCuentaPago,\npago.esAPlazos \nas esAPlazosPago, pago.pagosRealizados as pagosRealizadosPago, \npago.numeroVencimientos as numeroVecimientosPago, pago.fechaInicio as \nfechaInicioPago, pago.esdomiciliacion as esdomiciliacionpago from \nreparacion left outer join participante participanteCompany ON \n(reparacion.codigoCompania=participanteCompany.codigoParty) left outer \njoin siniestro on \n(siniestro.codigoReparacion=reparacion.codigoReparacion and \nsiniestro.codigoTienda=reparacion.codigoTienda and \nsiniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \nparticipanteCliente, participante participanteShop, tienda,\nfacturaabono \nleft outer join pago on (facturaabono.codigoPago=pago.codigoPago and \nfacturaabono.codigoTienda=pago.codigoTienda and \nfacturaabono.codigoEmpresa=pago.codigoEmpresa), (select \na.codigofactura,a.fechaemision, \nalbaranabono.codigoReparacion,a.codigoTienda,a.codigoEmpresa from \nalbaranabono,facturaabono a where \nalbaranabono.numeroFactura=a.codigoFactura and \na.codigoEmpresa=albaranAbono.codigoEmpresa and \na.codigoTienda=albaranabono.codigoTienda) as albaranes where \nfacturaabono.estado >= 0 and (facturaabono.numeroFactura is not null) \nand facturaabono.codigoTienda=participanteShop.codigoParty and \nfacturaabono.codigoPagador = participanteCliente.codigoParty and \ntienda.codigoTienda = 
facturaabono.codigoTienda and \n(albaranes.codigoFactura = facturaAbono.codigoFactura) and \n(albaranes.codigoEmpresa = facturaAbono.codigoEmpresa) and \n(albaranes.codigoTienda = facturaAbono.codigoTienda) and \n(albaranes.codigoReparacion=reparacion.codigoReparacion) and \n(albaranes.codigoTienda=reparacion.codigoTienda) and \n(albaranes.codigoEmpresa=reparacion.codigoEmpresa) and \n(participanteCliente.nick ilike '%ASITUR%') and \n(facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n') and facturaabono.tipoIva is NULL order by \nparticipantecompany.nombre,facturaabono.numeroFactura) union (SELECT \nfacturaabono.numeroFactura as \nnumeroFacturaFactura,facturaabono.codigoFactura as \ncodigoFacturaFactura,facturaabono.codigoEmpresa as \ncodigoEmpresaFactura,facturaabono.codigoTienda as \ncodigoTiendaFactura,facturaabono.estado as \nestadoFactura,facturaabono.fechaemision as \nfechaEmisionFactura,facturaabono.tipoIva as \ntipoIvaFactura,facturaAbono.baseImponibleModificada as \nbaseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \nas baseImponibleNuevaFactura,refactura as \nrefacturaFactura,participanteShop.codigoParty as \ncodigoPartyParticipantShop,participanteShop.nombre as \nnombreParticipantShop,participanteCliente.codigoParty as \ncodigoPartyParticipantPagador,participanteCliente.nick as \nnickParticipantPagador,participanteCliente.nombreCorto as \nshortnameparticipantPagador,participanteCliente.cif as \ncifParticipantPagador,NULL as \ncodigoReparacionRepair,reparacion.codigoTienda as \ncodigoTiendaRepair,NULL as codigoClienteRepair,NULL as \ncodigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \nfacturaAbono.codigoEmpresa as codigoPartyParticipantEnter,NULL as \nnombreParticipantCompany,NULL as shortnameparticipantCompany,NULL as \ncodigoPartyParticipantCompany,NULL as cifParticipantCompany, \npago.codigoPago as codigoPagoPago, pago.codigobanco as codigoBancoPago, \npago.codigooficina as codigoOficinaPago, pago.numerocuenta as \nnumeroCuentaPago, pago.esAPlazos as esAPlazosPago, pago.pagosRealizados \nas pagosRealizadosPago, pago.numeroVencimientos as \nnumeroVecimientosPago, pago.fechaInicio as fechaInicioPago, \npago.esdomiciliacion as esdomiciliacionpago from reparacion left outer \njoin participante participanteCompany ON \n(reparacion.codigoCompania=participanteCompany.codigoParty) left outer \njoin siniestro on \n(siniestro.codigoReparacion=reparacion.codigoReparacion and \nsiniestro.codigoTienda=reparacion.codigoTienda and \nsiniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \nparticipanteCliente, participante participanteShop, tienda,\nfacturaabono \nleft outer join pago on (facturaabono.codigoPago=pago.codigoPago and \nfacturaabono.codigoTienda=pago.codigoTienda and \nfacturaabono.codigoEmpresa=pago.codigoEmpresa), (select distinct \nfacturaabono.codigofactura as \nnumeroFacturaFactura,facturaabono.codigoPago,albaranabono.numeroFactura, \ncodigoreparacionTaller,facturatalleres.codigoEmpresaAlbaran as \ncodigoEMpresaAlbaranTaller,facturatalleres.codigoTiendaAlbaran as \ncodigoTiendaAlbaranTaller from facturaabono left outer join \nalbaranabono on (facturaabono.codigoFactura=albaranabono.numeroFactura \nand (facturaabono.codigoTienda=albaranabono.codigoTienda) and \n(facturaabono.codigoEMpresa=albaranAbono.codigoEmpresa)), (select \ncodigoReparacion as codigoReparacionTaller,numeroFacturaTaller as \nnumeroFacturaTaller 
\n,codigoEmpresaFactura,codigoTiendaFactura,codigoEmpresaAlbaran,codigoTiendaAlbaran \nfrom facturataller,albaranabono where \nalbaranabono.numeroAlbaran=facturaTaller.numeroalbaran and \nalbaranabono.codigoTienda=facturataller.codigoTiendaAlbaran and \nalbaranabono.codigoEmpresa=facturaTaller.codigoEmpresaAlbaran ) as \nfacturaTalleres where albaranabono.numeroFactura is null and \nfacturaabono.codigoFactura=numeroFacturaTaller and \nfacturaabono.codigoTienda=facturaTalleres.codigoTiendaFactura and \nfacturaabono.codigoEmpresa=facturaTalleres.codigoEmpresaFactura ) as \nfacturasTalleres where facturaabono.estado >= 0 and \n(facturaabono.numeroFactura is not null) and \nfacturaabono.codigoTienda=participanteShop.codigoParty and \nfacturaabono.codigoTienda=reparacion.codigoTienda and \nfacturaabono.codigoEmpresa=reparacion.codigoEmpresa and \nfacturaabono.codigoPagador = participanteCliente.codigoParty and \ntienda.codigoTienda = facturaabono.codigoTienda and \n(participanteCliente.nick ilike '%ASITUR%') and \n(facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n') and facturaabono.tipoIva is NULL and \nfacturaabono.codigoFactura=facturasTalleres.numeroFacturaFactura and \nreparacion.codigoReparacion=facturasTalleres.codigoReparacionTaller\nand \nreparacion.codigoTienda = facturasTalleres.codigoTiendaAlbaranTaller\nand \nreparacion.codigoEmpresa = facturasTalleres.codigoEmpresaAlbaranTaller \ngroup by facturaabono.codigoFactura, \nfacturaabono.numeroFactura,facturaabono.codigoempresa, \nfacturaabono.codigotienda, facturaabono.estado, \nfacturaabono.fechaemision, \nfacturaabono.tipoIva,facturaabono.baseimponiblemodificada,facturaabono.baseimponiblenueva, \nfacturaabono.refactura,participanteshop.codigoparty, \nparticipanteshop.nombre, \nparticipantecliente.codigoparty,participantecliente.nick,participanteCliente.nombreCorto,participantecompany.nombre,participantecliente.cif,reparacion.codigotienda,tienda.codigoautoarte,pago.codigopago \n,pago.codigobanco, pago.codigooficina, pago.numerocuenta, \npago.esAPlazos,pago.pagosRealizados,pago.numeroVencimientos,pago.fechainicio, \npago.esdomiciliacion order by \nparticipantecompany.nombre,facturaabono.numeroFactura);\n\n\n\nAny idea ?\n\n-- \nPiñeiro <[email protected]>\n",
"msg_date": "Mon, 11 Sep 2006 20:14:13 +0200",
"msg_from": "=?ISO-8859-1?Q?Pi=F1eiro?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Piñeiro\n> Subject: [PERFORM] Performance problem with Sarge compared with Woody\n\n> a week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre\n> 7.4.7). To migrate the database we use a dump, using pg_dump with this\n> options:\n> pg_dump -U <username> -c -F p -O -v -f <filename> <DBname>\n> \n> We have a search, that using woody take about 1-2 minutes, but with\n> sarge it is executing about 2 hours, and at least it crashes, with a\n> message about a temporal file and no more disk space ( i have \n> more than\n> a GB of free disk space).\n> \n> Any idea ?\n\nThe first question is did you run ANALYZE on the new database after\nimporting your data? \n\n",
"msg_date": "Mon, 11 Sep 2006 14:02:17 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "On Mon, 2006-09-11 at 20:14 +0200, Piñeiro wrote:\n> Hi,\n> \n> a week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre\n> 7.4.7). To migrate the database we use a dump, using pg_dump with this\n> options:\n> pg_dump -U <username> -c -F p -O -v -f <filename> <DBname>\n> \n> We have a search, that using woody take about 1-2 minutes, but with\n> sarge it is executing about 2 hours, and at least it crashes, with a\n> message about a temporal file and no more disk space ( i have more than\n> a GB of free disk space).\n> \n\nIt sounds to me like it's choosing a bad sort plan, and unable to write\nenough temporary disk files.\n\nA likely cause is that you did not \"vacuum analyze\" after you loaded the\ndata. Try running that command and see if it helps. If not, can you\nprovide the output of \"explain\" and \"explain analyze\" on both the old\ndatabase and the new?\n\nAlso, I suggest that you upgrade to 8.1. 7.4 is quite old, and many\nimprovements have been made since then.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 11 Sep 2006 12:11:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "On Mon, 2006-09-11 at 13:14, Piñeiro wrote:\n> Hi,\n> \n> a week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre\n> 7.4.7). To migrate the database we use a dump, using pg_dump with this\n> options:\n> pg_dump -U <username> -c -F p -O -v -f <filename> <DBname>\n> \n> We have a search, that using woody take about 1-2 minutes, but with\n> sarge it is executing about 2 hours, and at least it crashes, with a\n> message about a temporal file and no more disk space ( i have more than\n> a GB of free disk space).\n> \n> The search is very long, with a lot of joins (generated by a ERP we\n> manage). We think that the problem can be at the indices, but we are not\n> sure. At the original woody database we create indices, but when the\n> dump is being installed at sarge, it creates an implicit index, so there\n> are times that there are duplicates indices. But we try to remove the\n> duplicate indices and we don't resove the problem.\n\nThat query made my head hurt. However, reading as much of it as I could\nmake myself, it seemed to have the common problem where it has lots of\ntables in the middle of the joins, i.e.\n\nselect <select list> from\ntable1 join table2 on (...\njoin table3, table4, table5 \nleft join table 6 on (table2.xx = table6.yy)\nwhere table3=...\n\nSo, the theoretical way to create this is to first join table1 to\ntable2, then table3, table4, and table5 with NO CONSTRAINT then table6,\nthen separate out all the rows from that huge unconstrained join with\nthe where clause. \n\nI'd suggest two things.\n\none: Get a better ERP... :) or at least one you can inject some\nintelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which\nwill be released moderately soon, and if you won't be going into\nproduction directly, might be ready about the time you are.\n",
"msg_date": "Mon, 11 Sep 2006 17:03:41 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
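To make the restructuring concrete: each table should enter the FROM list with its join condition attached, instead of floating in a comma list that is only tied together in the WHERE clause. The names below are placeholders in the spirit of Scott's own illustration, not the ERP's real schema:

```sql
SELECT t1.*
FROM table1 t1
JOIN table2 t2      ON t2.t1_id = t1.id
JOIN table3 t3      ON t3.t2_id = t2.id
JOIN table4 t4      ON t4.t3_id = t3.id
JOIN table5 t5      ON t5.t4_id = t4.id
LEFT JOIN table6 t6 ON t6.yy    = t2.xx
WHERE t3.status = 0;
```

On 7.4 and later the planner will still reorder such a list up to join_collapse_limit (default 8), so the explicit syntax mostly documents intent; SET join_collapse_limit = 1 is the blunt way to force the written order if that ever proves necessary.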
{
"msg_contents": "On 9/11/06, Scott Marlowe <[email protected]> wrote:\n> I'd suggest two things.\n>\n> one: Get a better ERP... :) or at least one you can inject some\n> intelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which\n> will be released moderately soon, and if you won't be going into\n> production directly, might be ready about the time you are.\n\nfor 3 months I ran a 400M$ manufacturing company's erp off of a\npre-beta 8.0 windows pg server converted from cobol using some hacked\nout c++ middleware. I remember having to change how the middleware\nhandled transactions when Alvaro changed them to a checkpoint\nmechanism. I also remember being relieved when I no longer had to\nmanually edit pg_config.h so nobody would notice they would notice\nthey were running a beta version of postgresql had one of the\ntechnical people casually logged into psql. I scraped out almost\ncompletely unscathed except for a nasty crash due to low stack\nallocation of the compiler on windows.\n\nthe point of all this? get onto a recent version of postgresql, what\ncould possbily go wrong?\n\nmerlin\n",
"msg_date": "Mon, 11 Sep 2006 21:53:28 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "El lun, 11-09-2006 a las 17:07 -0500, Scott Marlowe escribió:\n\n> Also also, you should be running at LEAST 7.4.13, the latest release of\n> 7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes\n> your problem. Doubt it, but it could be. However, the more important\n> point is that there are REAL data eating bugs in 7.4.7 that may take a\n> bite out of your data.\nFirst, thanks for all your answers. \n\nAbout your comments:\n * Yes, i have executed VACUUM FULL ANALYZE VERBOSE after the dump,\nand after all my tries to solve this.\n\n * About another ERP: this ERP is one developed by us, we are\ndeveloping the next version, but until this is finished we need to\nmaintain the old one, with all his problems (as the \"montrous\" selects).\n\n * About Postgre version: you advice me to upgrade from 7.4.7 (postgre\nversion at sarge) to 8.2. Well, I don't want to be a troll, but I\nupgrade from 7.2.1 (woody) to 7.4.7 and I get worse, do you really think\nthat upgrade to 8.1 will solve something? \n\nAbout the indices:\n I comment previously that I think that the problem could be at the\nindices. Well, at the woody postgre version we add all the indices by\nhand, including the primary key index. The dump takes all these and\ninserts at the sarge version, but sarge inserts an implicit index using\nthe primary key, so at the sarge version we have duplicate indices.\nThere are any difference between 7.2.1 and 7.4.2 versions about this?\nWith the 7.4.2 there are more indices, or there was duplicated indices\nwith the woody version too? \n(before you comment this: yes I try to remove the duplicate indices to\ncheck if this was the problem)\n\n\n-- \nPiñeiro <[email protected]>\n",
"msg_date": "Tue, 12 Sep 2006 09:18:27 +0200",
"msg_from": "=?ISO-8859-1?Q?Pi=F1eiro?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
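A quick way to check the duplicate-index suspicion from psql, a sketch using the standard pg_indexes system view (present in 7.4); the 'public' schema is an assumption, adjust to whatever schema the ERP tables live in.

-- List every index with its definition so duplicates (e.g. a hand-made
-- index on the primary key column next to the implicit *_pkey index)
-- are easy to spot.
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;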
{
"msg_contents": "On Tue, 2006-09-12 at 02:18, Piñeiro wrote:\n> El lun, 11-09-2006 a las 17:07 -0500, Scott Marlowe escribió:\n> \n> > Also also, you should be running at LEAST 7.4.13, the latest release of\n> > 7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes\n> > your problem. Doubt it, but it could be. However, the more important\n> > point is that there are REAL data eating bugs in 7.4.7 that may take a\n> > bite out of your data.\n> First, thanks for all your answers. \n> \n> About your comments:\n> * Yes, i have executed VACUUM FULL ANALYZE VERBOSE after the dump,\n> and after all my tries to solve this.\n> \n> * About another ERP: this ERP is one developed by us, we are\n> developing the next version, but until this is finished we need to\n> maintain the old one, with all his problems (as the \"montrous\" selects).\n\nI feel your pain. I've written a few apps that created queries on the\nfly that quickly grew into monstrosities that stomped my pg servers into\nthe ground.\n\n> * About Postgre version: you advice me to upgrade from 7.4.7 (postgre\n> version at sarge) to 8.2. Well, I don't want to be a troll, but I\n> upgrade from 7.2.1 (woody) to 7.4.7 and I get worse, do you really think\n> that upgrade to 8.1 will solve something? \n\nIt's likely that something in 7.4.7 is happening as a side effect.\n\nThe 7.2.x query planner, if I remember correctly, did ALL The join ons\nfirst, then did the joins in the where clause in whatever order it\nthought best.\n\nStarting with 7.3 or 7.4 (not sure which) the planner was able to try\nand decide which tables in both the join on() syntax and with where\nclauses it wanted to run.\n\nIs it possible to fix the strangness of the ERP so it doesn't do that\nthing where it puts a lot of unconstrained tables in the middle of the\nfrom list? Also, moving where clause join condititions into the join\non() syntax is usually a huge win.\n\n I'd probably put 8.1.4 (or the latest 8.2 snapshot) on a test box and\nsee what it could do with this query for an afternoon. It might run\njust as slow, or it might \"get it right\" and run it in a few seconds. \nWhile there are the occasions where a query does run slower when\nmigrating from an older version to a newer version, the opposite is\nusually true. From 7.2 to 7.4 there was a lot of work done in \"getting\nthings right\" and some of this caused some things to go slower, although\nnot much.\n\n>From 7.4 to 8.1 (and now 8.2) a lot of focus has been on optimizing the\nquery planner and adding methods of joining that have made huge strides\nin performance.\n\nHowever, running 7.4.7 instead of 7.4.13 is a mistake, 100%. Updates\nhappen for a reason, reasons like your data could get eaten, or the\nquery planner makes a really stupid decision that causes it to take\nhours to run a query... You can upgrade from 7.4.7 to 7.4.13 in place,\nno need to dump and restore (take a backup just in case, but that's a\ngiven).\n\n> About the indices:\n> I comment previously that I think that the problem could be at the\n> indices. Well, at the woody postgre version we add all the indices by\n> hand, including the primary key index. 
The dump takes all these and\n> inserts at the sarge version, but sarge inserts an implicit index using\n> the primary key, so at the sarge version we have duplicate indices.\n\nProbably not a big issue.\n\n> There are any difference between 7.2.1 and 7.4.2 versions about this?\n> With the 7.4.2 there are more indices, or there was duplicated indices\n> with the woody version too? \n> (before you comment this: yes I try to remove the duplicate indices to\n> check if this was the problem)\n\nWait, are you running 7.4.2 or 7.4.7? 7.4.7 is bad enough, but 7.4.2 is\ntruly dangerous. Upgrade to 7.4.13 whichever version you're running.\n\n",
"msg_date": "Tue, 12 Sep 2006 09:27:11 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
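To settle which minor release is actually running, before and after the in-place upgrade, a one-liner from psql:

-- Shows the exact server build, e.g. "PostgreSQL 7.4.7 on ..."
SELECT version();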
{
"msg_contents": "On Mon, 2006-09-11 at 20:53, Merlin Moncure wrote:\n> On 9/11/06, Scott Marlowe <[email protected]> wrote:\n> > I'd suggest two things.\n> >\n> > one: Get a better ERP... :) or at least one you can inject some\n> > intelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which\n> > will be released moderately soon, and if you won't be going into\n> > production directly, might be ready about the time you are.\n> \n> for 3 months I ran a 400M$ manufacturing company's erp off of a\n> pre-beta 8.0 windows pg server converted from cobol using some hacked\n> out c++ middleware. I remember having to change how the middleware\n> handled transactions when Alvaro changed them to a checkpoint\n> mechanism. I also remember being relieved when I no longer had to\n> manually edit pg_config.h so nobody would notice they would notice\n> they were running a beta version of postgresql had one of the\n> technical people casually logged into psql. I scraped out almost\n> completely unscathed except for a nasty crash due to low stack\n> allocation of the compiler on windows.\n> \n> the point of all this? get onto a recent version of postgresql, what\n> could possbily go wrong?\n\nYou did notice I mentioned that it would only make sense if they weren't\ngoing into production right away. I.e. develop the app while pgdg\ndevelops the database, and release at about the same time.\n\nI wouldn't put 8.2 into production just yet, but if I had a launch date\nof next spring, I'd certainly consider developing on it now.\n",
"msg_date": "Tue, 12 Sep 2006 09:28:43 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "=?ISO-8859-1?Q?Pi=F1eiro?= <[email protected]> writes:\n> * About Postgre version: you advice me to upgrade from 7.4.7 (postgre\n> version at sarge) to 8.2. Well, I don't want to be a troll, but I\n> upgrade from 7.2.1 (woody) to 7.4.7 and I get worse, do you really think\n> that upgrade to 8.1 will solve something? \n\nIf you really want informed answers rather than speculation, show us\nEXPLAIN ANALYZE reports for the problem query on both machines.\nI don't offhand know why 7.4 would be slower, but I speculate\nthat it's picking a worse plan for some reason.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2006 11:13:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody "
},
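A sketch of how to capture the two reports from psql so they can be compared side by side; plan_sarge.txt is an arbitrary file name, and the short query below is only a stand-in for the full ERP statement quoted later in the thread.

-- Run this on both the Woody (7.2.1) and the Sarge (7.4.x) server and keep
-- the output; in practice the full ERP query goes where the stand-in is.
\o plan_sarge.txt
EXPLAIN ANALYZE
SELECT f.numeroFactura, p.nombre
FROM facturaabono f
JOIN participante p ON p.codigoParty = f.codigoPagador
WHERE f.estado >= 0;
\o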
{
"msg_contents": "On 9/12/06, Scott Marlowe <[email protected]> wrote:\n> On Mon, 2006-09-11 at 20:53, Merlin Moncure wrote:\n> > for 3 months I ran a 400M$ manufacturing company's erp off of a\n> > pre-beta 8.0 windows pg server converted from cobol using some hacked\n> > out c++ middleware. I remember having to change how the middleware\n> > handled transactions when Alvaro changed them to a checkpoint\n> > mechanism. I also remember being relieved when I no longer had to\n> > manually edit pg_config.h so nobody would notice they would notice\n> > they were running a beta version of postgresql had one of the\n> > technical people casually logged into psql. I scraped out almost\n> > completely unscathed except for a nasty crash due to low stack\n> > allocation of the compiler on windows.\n> >\n> > the point of all this? get onto a recent version of postgresql, what\n> > could possbily go wrong?\n>\n> You did notice I mentioned that it would only make sense if they weren't\n> going into production right away. I.e. develop the app while pgdg\n> develops the database, and release at about the same time.\n>\n> I wouldn't put 8.2 into production just yet, but if I had a launch date\n> of next spring, I'd certainly consider developing on it now.\n\nright, very good advice :) I was giving more of a \"don't try this at\nhome\" type post. To the OP, though, I would advise that each version\nof PostgreSQL is much faster (sometimes, drastically so). Once in a\nwhile you get a query that you have to rethink but the engine improves\nwith each release.\n\nmerlin\n",
"msg_date": "Tue, 12 Sep 2006 11:38:38 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with Woody"
},
{
"msg_contents": "\n[ Hint: If you want someone to help you with your query, take some time\n yourself to make the query easy to read. ]\n\n---------------------------------------------------------------------------\n\nPi�eiro wrote:\n> Hi,\n> \n> a week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre\n> 7.4.7). To migrate the database we use a dump, using pg_dump with this\n> options:\n> pg_dump -U <username> -c -F p -O -v -f <filename> <DBname>\n> \n> We have a search, that using woody take about 1-2 minutes, but with\n> sarge it is executing about 2 hours, and at least it crashes, with a\n> message about a temporal file and no more disk space ( i have more than\n> a GB of free disk space).\n> \n> The search is very long, with a lot of joins (generated by a ERP we\n> manage). We think that the problem can be at the indices, but we are not\n> sure. At the original woody database we create indices, but when the\n> dump is being installed at sarge, it creates an implicit index, so there\n> are times that there are duplicates indices. But we try to remove the\n> duplicate indices and we don't resove the problem.\n> \n> The select is the next one (sorry if it is too big):\n> \n> (SELECT facturaabono.numeroFactura as \n> numeroFacturaFactura,facturaabono.codigoFactura as \n> codigoFacturaFactura,facturaabono.codigoEmpresa as \n> codigoEmpresaFactura,facturaabono.codigoTienda as \n> codigoTiendaFactura,facturaabono.estado as \n> estadoFactura,facturaabono.fechaemision as \n> fechaEmisionFactura,facturaabono.tipoIva as \n> tipoIvaFactura,facturaAbono.baseImponibleModificada as \n> baseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \n> as baseImponibleNuevaFactura,refactura as \n> refacturaFactura,participanteShop.codigoParty as \n> codigoPartyParticipantShop,participanteShop.nombre as \n> nombreParticipantShop,participanteCliente.codigoParty as \n> codigoPartyParticipantPagador,participanteCliente.nick as \n> nickParticipantPagador,participanteCliente.nombreCorto as \n> shortnameparticipantPagador,participanteCliente.cif as \n> cifParticipantPagador,reparacion.codigoReparacion as \n> codigoReparacionRepair,reparacion.codigoTienda as \n> codigoTiendaRepair,reparacion.codigoCliente as \n> codigoClienteRepair,reparacion.codigoCompania as \n> codigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \n> facturaAbono.codigoEmpresa as \n> codigoPartyParticipantEnter,participanteCompany.nombre as \n> nombreParticipantCompany,participanteCompany.nombreCorto as \n> shortnameparticipantCompany,participanteCompany.codigoParty as \n> codigoPartyParticipantCompany,participanteCompany.cif as \n> cifParticipantCompany, pago.codigoPago as codigoPagoPago, \n> pago.codigobanco as codigoBancoPago, pago.codigooficina as \n> codigoOficinaPago, pago.numerocuenta as numeroCuentaPago,\n> pago.esAPlazos \n> as esAPlazosPago, pago.pagosRealizados as pagosRealizadosPago, \n> pago.numeroVencimientos as numeroVencimientosPago, pago.fechaInicio as \n> fechaInicioPago, pago.esdomiciliacion as esdomiciliacionpago from \n> reparacion left outer join participante participanteCompany ON \n> (reparacion.codigoCompania=participanteCompany.codigoParty) left outer \n> join siniestro on \n> (siniestro.codigoReparacion=reparacion.codigoReparacion and \n> siniestro.codigoTienda=reparacion.codigoTienda and \n> siniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \n> participanteCliente, participante participanteShop, tienda,\n> facturaabono \n> left outer join pago on 
(facturaabono.codigoPago=pago.codigoPago and \n> facturaabono.codigoTienda=pago.codigoTienda and \n> facturaabono.codigoEmpresa=pago.codigoEmpresa) where \n> facturaabono.estado >= 0 and (facturaabono.numeroFactura is not null) \n> and facturaabono.codigoTienda=participanteShop.codigoParty and \n> facturaabono.codigoTienda=reparacion.codigoTienda and \n> facturaabono.codigoEmpresa=reparacion.codigoEmpresa and \n> facturaabono.codigoPagador = participanteCliente.codigoParty and \n> tienda.codigoTienda = facturaabono.codigoTienda and \n> (participanteCliente.nick ilike '%ASITUR%') and \n> (facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n> ') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n> ') and facturaabono.tipoIva is NULL and (facturaabono.codigoReparacion \n> = reparacion.codigoReparacion) order by \n> participantecompany.nombre,facturaabono.numeroFactura) union (SELECT \n> DISTINCT facturaabono.numeroFactura as \n> numeroFacturaFactura,facturaabono.codigoFactura as \n> codigoFacturaFactura,facturaabono.codigoEmpresa as \n> codigoEmpresaFactura,facturaabono.codigoTienda as \n> codigoTiendaFactura,facturaabono.estado as \n> estadoFactura,albaranes.fechaemision as \n> fechaEmisionFactura,facturaabono.tipoIva as \n> tipoIvaFactura,facturaAbono.baseImponibleModificada as \n> baseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \n> as baseImponibleNuevaFactura,refactura as \n> refacturaFactura,participanteShop.codigoParty as \n> codigoPartyParticipantShop,participanteShop.nombre as \n> nombreParticipantShop,participanteCliente.codigoParty as \n> codigoPartyParticipantPagador,participanteCliente.nick as \n> nickParticipantPagador,participanteCliente.nombreCorto as \n> shortnameparticipantPagador,participanteCliente.cif as \n> cifParticipantPagador,(case WHEN reparacion.codigoCompania is not NULL \n> THEN reparacion.codigoReparacion ELSE NULL END) as \n> codigoReparacionRepair,reparacion.codigoTienda as \n> codigoTiendaRepair,reparacion.codigoCliente as \n> codigoClienteRepair,reparacion.codigoCompania as \n> codigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \n> facturaAbono.codigoEmpresa as \n> codigoPartyParticipantEnter,participanteCompany.nombre as \n> nombreParticipantCompany,participanteCompany.nombreCorto as \n> shortnameparticipantCompany,participanteCompany.codigoParty as \n> codigoPartyParticipantCompany,participanteCompany.cif as \n> cifParticipantCompany, pago.codigoPago as codigoPagoPago, \n> pago.codigobanco as codigoBancoPago, pago.codigooficina as \n> codigoOficinaPago, pago.numerocuenta as numeroCuentaPago,\n> pago.esAPlazos \n> as esAPlazosPago, pago.pagosRealizados as pagosRealizadosPago, \n> pago.numeroVencimientos as numeroVecimientosPago, pago.fechaInicio as \n> fechaInicioPago, pago.esdomiciliacion as esdomiciliacionpago from \n> reparacion left outer join participante participanteCompany ON \n> (reparacion.codigoCompania=participanteCompany.codigoParty) left outer \n> join siniestro on \n> (siniestro.codigoReparacion=reparacion.codigoReparacion and \n> siniestro.codigoTienda=reparacion.codigoTienda and \n> siniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \n> participanteCliente, participante participanteShop, tienda,\n> facturaabono \n> left outer join pago on (facturaabono.codigoPago=pago.codigoPago and \n> facturaabono.codigoTienda=pago.codigoTienda and \n> facturaabono.codigoEmpresa=pago.codigoEmpresa), (select \n> a.codigofactura,a.fechaemision, \n> 
albaranabono.codigoReparacion,a.codigoTienda,a.codigoEmpresa from \n> albaranabono,facturaabono a where \n> albaranabono.numeroFactura=a.codigoFactura and \n> a.codigoEmpresa=albaranAbono.codigoEmpresa and \n> a.codigoTienda=albaranabono.codigoTienda) as albaranes where \n> facturaabono.estado >= 0 and (facturaabono.numeroFactura is not null) \n> and facturaabono.codigoTienda=participanteShop.codigoParty and \n> facturaabono.codigoPagador = participanteCliente.codigoParty and \n> tienda.codigoTienda = facturaabono.codigoTienda and \n> (albaranes.codigoFactura = facturaAbono.codigoFactura) and \n> (albaranes.codigoEmpresa = facturaAbono.codigoEmpresa) and \n> (albaranes.codigoTienda = facturaAbono.codigoTienda) and \n> (albaranes.codigoReparacion=reparacion.codigoReparacion) and \n> (albaranes.codigoTienda=reparacion.codigoTienda) and \n> (albaranes.codigoEmpresa=reparacion.codigoEmpresa) and \n> (participanteCliente.nick ilike '%ASITUR%') and \n> (facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n> ') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n> ') and facturaabono.tipoIva is NULL order by \n> participantecompany.nombre,facturaabono.numeroFactura) union (SELECT \n> facturaabono.numeroFactura as \n> numeroFacturaFactura,facturaabono.codigoFactura as \n> codigoFacturaFactura,facturaabono.codigoEmpresa as \n> codigoEmpresaFactura,facturaabono.codigoTienda as \n> codigoTiendaFactura,facturaabono.estado as \n> estadoFactura,facturaabono.fechaemision as \n> fechaEmisionFactura,facturaabono.tipoIva as \n> tipoIvaFactura,facturaAbono.baseImponibleModificada as \n> baseImponibleModificadaFactura,to_char(facturaAbono.baseImponibleNueva,'99999999D99') \n> as baseImponibleNuevaFactura,refactura as \n> refacturaFactura,participanteShop.codigoParty as \n> codigoPartyParticipantShop,participanteShop.nombre as \n> nombreParticipantShop,participanteCliente.codigoParty as \n> codigoPartyParticipantPagador,participanteCliente.nick as \n> nickParticipantPagador,participanteCliente.nombreCorto as \n> shortnameparticipantPagador,participanteCliente.cif as \n> cifParticipantPagador,NULL as \n> codigoReparacionRepair,reparacion.codigoTienda as \n> codigoTiendaRepair,NULL as codigoClienteRepair,NULL as \n> codigoCompaniaRepair,tienda.codigoAutoArte as codigoAutoarteShop, \n> facturaAbono.codigoEmpresa as codigoPartyParticipantEnter,NULL as \n> nombreParticipantCompany,NULL as shortnameparticipantCompany,NULL as \n> codigoPartyParticipantCompany,NULL as cifParticipantCompany, \n> pago.codigoPago as codigoPagoPago, pago.codigobanco as codigoBancoPago, \n> pago.codigooficina as codigoOficinaPago, pago.numerocuenta as \n> numeroCuentaPago, pago.esAPlazos as esAPlazosPago, pago.pagosRealizados \n> as pagosRealizadosPago, pago.numeroVencimientos as \n> numeroVecimientosPago, pago.fechaInicio as fechaInicioPago, \n> pago.esdomiciliacion as esdomiciliacionpago from reparacion left outer \n> join participante participanteCompany ON \n> (reparacion.codigoCompania=participanteCompany.codigoParty) left outer \n> join siniestro on \n> (siniestro.codigoReparacion=reparacion.codigoReparacion and \n> siniestro.codigoTienda=reparacion.codigoTienda and \n> siniestro.codigoEmpresa=reparacion.codigoEmpresa), participante \n> participanteCliente, participante participanteShop, tienda,\n> facturaabono \n> left outer join pago on (facturaabono.codigoPago=pago.codigoPago and \n> facturaabono.codigoTienda=pago.codigoTienda and \n> facturaabono.codigoEmpresa=pago.codigoEmpresa), (select distinct \n> 
facturaabono.codigofactura as \n> numeroFacturaFactura,facturaabono.codigoPago,albaranabono.numeroFactura, \n> codigoreparacionTaller,facturatalleres.codigoEmpresaAlbaran as \n> codigoEMpresaAlbaranTaller,facturatalleres.codigoTiendaAlbaran as \n> codigoTiendaAlbaranTaller from facturaabono left outer join \n> albaranabono on (facturaabono.codigoFactura=albaranabono.numeroFactura \n> and (facturaabono.codigoTienda=albaranabono.codigoTienda) and \n> (facturaabono.codigoEMpresa=albaranAbono.codigoEmpresa)), (select \n> codigoReparacion as codigoReparacionTaller,numeroFacturaTaller as \n> numeroFacturaTaller \n> ,codigoEmpresaFactura,codigoTiendaFactura,codigoEmpresaAlbaran,codigoTiendaAlbaran \n> from facturataller,albaranabono where \n> albaranabono.numeroAlbaran=facturaTaller.numeroalbaran and \n> albaranabono.codigoTienda=facturataller.codigoTiendaAlbaran and \n> albaranabono.codigoEmpresa=facturaTaller.codigoEmpresaAlbaran ) as \n> facturaTalleres where albaranabono.numeroFactura is null and \n> facturaabono.codigoFactura=numeroFacturaTaller and \n> facturaabono.codigoTienda=facturaTalleres.codigoTiendaFactura and \n> facturaabono.codigoEmpresa=facturaTalleres.codigoEmpresaFactura ) as \n> facturasTalleres where facturaabono.estado >= 0 and \n> (facturaabono.numeroFactura is not null) and \n> facturaabono.codigoTienda=participanteShop.codigoParty and \n> facturaabono.codigoTienda=reparacion.codigoTienda and \n> facturaabono.codigoEmpresa=reparacion.codigoEmpresa and \n> facturaabono.codigoPagador = participanteCliente.codigoParty and \n> tienda.codigoTienda = facturaabono.codigoTienda and \n> (participanteCliente.nick ilike '%ASITUR%') and \n> (facturaabono.fechaEmision<='Thu Sep 7 00:00:00 2006\n> ') and (facturaabono.fechaEmision>='Sun Aug 7 00:00:00 2005\n> ') and facturaabono.tipoIva is NULL and \n> facturaabono.codigoFactura=facturasTalleres.numeroFacturaFactura and \n> reparacion.codigoReparacion=facturasTalleres.codigoReparacionTaller\n> and \n> reparacion.codigoTienda = facturasTalleres.codigoTiendaAlbaranTaller\n> and \n> reparacion.codigoEmpresa = facturasTalleres.codigoEmpresaAlbaranTaller \n> group by facturaabono.codigoFactura, \n> facturaabono.numeroFactura,facturaabono.codigoempresa, \n> facturaabono.codigotienda, facturaabono.estado, \n> facturaabono.fechaemision, \n> facturaabono.tipoIva,facturaabono.baseimponiblemodificada,facturaabono.baseimponiblenueva, \n> facturaabono.refactura,participanteshop.codigoparty, \n> participanteshop.nombre, \n> participantecliente.codigoparty,participantecliente.nick,participanteCliente.nombreCorto,participantecompany.nombre,participantecliente.cif,reparacion.codigotienda,tienda.codigoautoarte,pago.codigopago \n> ,pago.codigobanco, pago.codigooficina, pago.numerocuenta, \n> pago.esAPlazos,pago.pagosRealizados,pago.numeroVencimientos,pago.fechainicio, \n> pago.esdomiciliacion order by \n> participantecompany.nombre,facturaabono.numeroFactura);\n> \n> \n> \n> Any idea ?\n> \n> -- \n> Pi?eiro <[email protected]>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 14 Sep 2006 15:25:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with Sarge compared with"
}
] |
[
{
"msg_contents": "\n Hello,\n\nI have a big table called products. Table size: 1123MB. Toast table \nsize: 32MB. Indexes size: 380MB.\nI try to do a query like this:\n\nselect id,name from products where name like '%Mug%';\n\nYes, I know that tsearch2 is better for this, but please read on. The \nabove query gives this plan:\n\nSeq Scan on product (cost=0.00..153489.52 rows=31390 width=40)\n Filter: (name ~~ '%Mug%'::text)\n\nWhen I use this with explain analyze:\n\n\"Seq Scan on product (cost=0.00..153489.52 rows=31390 width=40) (actual \ntime=878.873..38300.588 rows=72567 loops=1)\"\n\" Filter: (name ~~ '%Mug%'::text)\"\n\"Total runtime: 38339.026 ms\"\n\nMeanwhile, \"iostat 5\" gives something like this:\n\n tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n 0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0\n 0 12 124.66 129 15.67 124.39 129 15.64 12 0 85 3 0\n 0 12 117.13 121 13.87 117.95 121 13.96 12 0 84 5 0\n 0 12 104.84 118 12.05 105.84 118 12.19 10 0 87 2 0\n\n130 transfers per second with 12-15MB/sec transfer speed. (FreeBSD 6.1 \nwith two STATA150 drives in gmirror RAID1)\n\nI made another test. I create a file with the identifiers and names of \nthe products:\n\npsql#\\o products.txt\npsql#select id,name from product;\n\nThen I can search using grep:\n\ngrep \"Mug\" products.txt | cut -f1 -d\\|\n\nThere is a huge difference. This command runs within 0.5 seconds. That \nis, at least 76 times faster than the seq scan. It is the same if I \nvacuum, backup and restore the database. I thought that the table is \nstored in one file, and the seq scan will be actually faster than \ngrepping the file. Can you please tell me what am I doing wrong? I'm not \nsure if I can increase the performance of a seq scan by adjusting the \nvalues in postgresql.conf. I do not like the idea of exporting the \nproduct table periodically into a txt file, and search with grep. :-)\n\nAnother question: I have a btree index on product(name). It contains all \nproduct names and the identifiers of the products. Wouldn't it be easier \nto seq scan the index instead of seq scan the table? The index is only \n66MB, the table is 1123MB.\n\nI'm new to this list and also I just recently started to tune postgresql \nso please forgive me if this is a dumb question.\n\nRegards,\n\n Laszlo\n",
"msg_date": "Tue, 12 Sep 2006 11:59:08 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance on seq scan"
},
{
"msg_contents": "Laszlo Nagy wrote:\n> I made another test. I create a file with the identifiers and names of \n> the products:\n>\n> psql#\\o products.txt\n> psql#select id,name from product;\n>\n> Then I can search using grep:\n>\n> grep \"Mug\" products.txt | cut -f1 -d\\|\n>\n> There is a huge difference. This command runs within 0.5 seconds. That \n> is, at least 76 times faster than the seq scan. It is the same if I \n> vacuum, backup and restore the database. I thought that the table is \n> stored in one file, and the seq scan will be actually faster than \n> grepping the file. Can you please tell me what am I doing wrong? I'm \n> not sure if I can increase the performance of a seq scan by adjusting \n> the values in postgresql.conf. I do not like the idea of exporting the \n> product table periodically into a txt file, and search with grep. :-)\n\nIs there any other columns besides id and name in the table? How big is \nproducts.txt compared to the heap file?\n\n> Another question: I have a btree index on product(name). It contains \n> all product names and the identifiers of the products. Wouldn't it be \n> easier to seq scan the index instead of seq scan the table? The index \n> is only 66MB, the table is 1123MB.\n\nProbably, but PostgreSQL doesn't know how to do that. Even if it did, it \ndepends on how many matches there is. If you scan the index and then \nfetch the matching rows from the heap, you're doing random I/O to the \nheap. That becomes slower than scanning the heap sequentially if you're \ngoing to get more than a few hits.\n\n\n-- \nHeikki Linnakangas\nEnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 12 Sep 2006 11:47:08 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
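A rough way to answer the size question from inside the database, a sketch assuming the default 8 kB block size; the index name is a guess.

-- relpages is the number of 8 kB blocks last seen by VACUUM/ANALYZE, so
-- relpages * 8 gives an approximate size in kB for the heap and the index.
SELECT relname, relkind, reltuples, relpages, relpages * 8 AS approx_kb
FROM pg_class
WHERE relname IN ('product', 'product_name_idx')   -- index name is hypothetical
ORDER BY relname;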
{
"msg_contents": "On Tuesday 12 September 2006 12:47, Heikki Linnakangas wrote:\n> Laszlo Nagy wrote:\n> > I made another test. I create a file with the identifiers and names of\n> > the products:\n> >\n> > psql#\\o products.txt\n> > psql#select id,name from product;\n> >\n> > Then I can search using grep:\n> >\n> > grep \"Mug\" products.txt | cut -f1 -d\\|\n> >\n> > There is a huge difference. This command runs within 0.5 seconds. That\n> > is, at least 76 times faster than the seq scan. It is the same if I\n> > vacuum, backup and restore the database. I thought that the table is\n> > stored in one file, and the seq scan will be actually faster than\n> > grepping the file. Can you please tell me what am I doing wrong? I'm\n> > not sure if I can increase the performance of a seq scan by adjusting\n> > the values in postgresql.conf. I do not like the idea of exporting the\n> > product table periodically into a txt file, and search with grep. :-)\n>\n> Is there any other columns besides id and name in the table? How big is\n> products.txt compared to the heap file?\n>\n> > Another question: I have a btree index on product(name). It contains\n> > all product names and the identifiers of the products. Wouldn't it be\n> > easier to seq scan the index instead of seq scan the table? The index\n> > is only 66MB, the table is 1123MB.\n>\n> Probably, but PostgreSQL doesn't know how to do that. Even if it did, it\n> depends on how many matches there is. If you scan the index and then\n> fetch the matching rows from the heap, you're doing random I/O to the\n> heap. That becomes slower than scanning the heap sequentially if you're\n> going to get more than a few hits.\n\nWhy match rows from the heap if ALL required data are in the index itself?\nWhy look at the heap at all?\n\nThis is the same performance problem in PostgreSQL I noticed when doing \nsome \"SELECT count(*)\" queries. Look at this:\n\nexplain analyze select count(*) from transakcja where data > '2005-09-09' and \nmiesiac >= (9 + 2005 * 12) and kwota < 50;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=601557.86..601557.87 rows=1 width=0) (actual \ntime=26733.479..26733.484 rows=1 loops=1)\n -> Bitmap Heap Scan on transakcja (cost=154878.00..596928.23 rows=1851852 \nwidth=0) (actual time=9974.208..18796.060 rows=1654218 loops=1)\n Recheck Cond: ((miesiac >= 24069) AND (kwota < 50::double precision))\n Filter: (data > '2005-09-09 00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on idx_transakcja_miesiac_kwota \n(cost=0.00..154878.00 rows=5555556 width=0) (actual time=9919.967..9919.967 \nrows=1690402 loops=1)\n Index Cond: ((miesiac >= 24069) AND (kwota < 50::double \nprecision))\n Total runtime: 26733.980 ms\n(7 rows)\n\nThe actual time retrieving tuples from the index is less than 10 seconds, but \nthe system executes needless heap scan that takes up additional 16 seconds.\n\nBest regards,\nPeter\n\n\n\n",
"msg_date": "Tue, 12 Sep 2006 14:10:02 +0200",
"msg_from": "Piotr =?iso-8859-2?q?Ko=B3aczkowski?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Heikki Linnakangas wrote:\n>\n> Is there any other columns besides id and name in the table? How big \n> is products.txt compared to the heap file?\nYes, many other columns. The products.txt is only 59MB. It is similar to \nthe size of the index size (66MB).\n>\n>> Another question: I have a btree index on product(name). It contains \n>> all product names and the identifiers of the products. Wouldn't it be \n>> easier to seq scan the index instead of seq scan the table? The index \n>> is only 66MB, the table is 1123MB.\n>\n> Probably, but PostgreSQL doesn't know how to do that. Even if it did, \n> it depends on how many matches there is. If you scan the index and \n> then fetch the matching rows from the heap, you're doing random I/O to \n> the heap. That becomes slower than scanning the heap sequentially if \n> you're going to get more than a few hits.\nI have 700 000 rows in the table, and usually there are less than 500 \nhits. So probably using a \"seq index scan\" would be faster. :-) Now I \nalso tried this:\n\ncreate table test(id int8 not null primary key, name text);\ninsert into test select id,name from product;\n\nAnd then:\n\nzeusd1=> explain analyze select id,name from test where name like \n'%Tiffany%';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..26559.62 rows=79 width=40) (actual \ntime=36.595..890.903 rows=117 loops=1)\n Filter: (name ~~ '%Tiffany%'::text)\n Total runtime: 891.063 ms\n(3 rows)\n\nBut this might be coming from the disk cache. Thank you for your \ncomments. We are making progress.\n\n Laszlo\n\n",
"msg_date": "Tue, 12 Sep 2006 14:36:55 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Laszlo Nagy <gandalf 'at' designaproduct.biz> writes:\n\n> > Probably, but PostgreSQL doesn't know how to do that. Even if it\n> > did, it depends on how many matches there is. If you scan the index\n> > and then fetch the matching rows from the heap, you're doing random\n> > I/O to the heap. That becomes slower than scanning the heap\n> > sequentially if you're going to get more than a few hits.\n> I have 700 000 rows in the table, and usually there are less than 500\n> hits. So probably using a \"seq index scan\" would be faster. :-) Now I\n\nYou can confirm this idea by temporarily disabling sequential\nscans. Have a look at this chapter:\n\nhttp://www.postgresql.org/docs/7.4/interactive/runtime-config.html#RUNTIME-CONFIG-QUERY\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "12 Sep 2006 14:46:18 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Guillaume Cottenceau wrote:\n> Laszlo Nagy <gandalf 'at' designaproduct.biz> writes:\n> \n>>> Probably, but PostgreSQL doesn't know how to do that. Even if it\n>>> did, it depends on how many matches there is. If you scan the index\n>>> and then fetch the matching rows from the heap, you're doing random\n>>> I/O to the heap. That becomes slower than scanning the heap\n>>> sequentially if you're going to get more than a few hits.\n>>> \n>> I have 700 000 rows in the table, and usually there are less than 500\n>> hits. So probably using a \"seq index scan\" would be faster. :-) Now I\n>> \n>\n> You can confirm this idea by temporarily disabling sequential\n> scans. Have a look at this chapter:\n> \n\nI don't think it will anyway do a \"seq index scan\" as Laszlo envisions. \nPostgreSQL cannot do \"fetch index tuple and apply %Mug% to it. If it \nmatches, fetch heap tuple\". Even if you disable sequential scans, it's \nstill going to fetch every heap tuple to see if it matches \"%Mug%\". It's \njust going to do it in index order, which is slower than a seq scan.\n\nBTW: in addition to setting enable_seqscan=false, you probably have to \nadd a dummy where-clause like \"name > ''\" to force the index scan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Sep 2006 14:24:24 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
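Putting the two suggestions together, a diagnostic sketch: as noted above this only forces an index-ordered scan that still visits every heap tuple, so it is useful for comparing plans rather than as a fix.

SET enable_seqscan = off;           -- session-local, for the experiment only

EXPLAIN ANALYZE
SELECT id, name
FROM product
WHERE name LIKE '%Mug%'
  AND name > '';                    -- dummy clause so the btree index is usable

RESET enable_seqscan;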
{
"msg_contents": "Heikki Linnakangas wrote:\n> Guillaume Cottenceau wrote:\n> >Laszlo Nagy <gandalf 'at' designaproduct.biz> writes:\n> > \n> >>>Probably, but PostgreSQL doesn't know how to do that. Even if it\n> >>>did, it depends on how many matches there is. If you scan the index\n> >>>and then fetch the matching rows from the heap, you're doing random\n> >>>I/O to the heap. That becomes slower than scanning the heap\n> >>>sequentially if you're going to get more than a few hits.\n> >>> \n> >>I have 700 000 rows in the table, and usually there are less than 500\n> >>hits. So probably using a \"seq index scan\" would be faster. :-) Now I\n> >> \n> >\n> >You can confirm this idea by temporarily disabling sequential\n> >scans. Have a look at this chapter:\n> \n> I don't think it will anyway do a \"seq index scan\" as Laszlo envisions. \n> PostgreSQL cannot do \"fetch index tuple and apply %Mug% to it. If it \n> matches, fetch heap tuple\". Even if you disable sequential scans, it's \n> still going to fetch every heap tuple to see if it matches \"%Mug%\". It's \n> just going to do it in index order, which is slower than a seq scan.\n\nAre you saying that an indexscan \"Filter\" only acts after getting the\nheap tuple? If that's the case, then there's room for optimization\nhere, namely if the affected column is part of the index key, then we\ncould do the filtering before fetching the heap tuple.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 12 Sep 2006 09:32:55 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Are you saying that an indexscan \"Filter\" only acts after getting the\n> heap tuple? If that's the case, then there's room for optimization\n> here, namely if the affected column is part of the index key, then we\n> could do the filtering before fetching the heap tuple.\n\nThat's right. Yes, there's definitely room for optimization. In general, \nit seems we should detach the index scan and heap fetch more. Perhaps \nmake them two different nodes, like the bitmap index scan and bitmap \nheap scan. It would allow us to do the above. It's also going to be \nnecessary if we ever get to implement index-only scans.\n\n-- \nHeikki Linnakangas\nEnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 12 Sep 2006 14:45:06 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Are you saying that an indexscan \"Filter\" only acts after getting the\n> heap tuple?\n\nCorrect.\n\n> If that's the case, then there's room for optimization\n> here, namely if the affected column is part of the index key, then we\n> could do the filtering before fetching the heap tuple.\n\nOnly if the index is capable of disgorging the original value of the\nindexed column, a fact not in evidence in general (counterexample:\npolygons indexed by their bounding boxes in an r-tree). But yeah,\nit's interesting to think about applying filters at the index fetch\nstep for index types that can hand back full values. This has been\ndiscussed before --- I think we had gotten as far as speculating about\ndoing joins with just index values. See eg here:\nhttp://archives.postgresql.org/pgsql-hackers/2004-05/msg00944.php\nA lot of the low-level concerns have already been dealt with in order to\nsupport bitmap indexscans, but applying non-indexable conditions before\nfetching from the heap is still not done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2006 10:52:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan "
},
{
"msg_contents": "Laszlo Nagy <[email protected]> writes:\n> Meanwhile, \"iostat 5\" gives something like this:\n\n> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n> 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n> 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n> 0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0\n> 0 12 124.66 129 15.67 124.39 129 15.64 12 0 85 3 0\n> 0 12 117.13 121 13.87 117.95 121 13.96 12 0 84 5 0\n> 0 12 104.84 118 12.05 105.84 118 12.19 10 0 87 2 0\n\nWhy is that showing 85+ percent *system* CPU time?? I could believe a\nlot of idle CPU if the query is I/O bound, or a lot of user time if PG\nwas being a hog about doing the ~~ comparisons (not too unlikely BTW).\nBut if the kernel is eating all the CPU, there's something very wrong,\nand I don't think it's Postgres' fault.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2006 12:52:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan "
},
{
"msg_contents": "Tom Lane wrote:\n> Only if the index is capable of disgorging the original value of the\n> indexed column, a fact not in evidence in general (counterexample:\n> polygons indexed by their bounding boxes in an r-tree). But yeah,\n> it's interesting to think about applying filters at the index fetch\n> step for index types that can hand back full values. This has been\n> discussed before --- I think we had gotten as far as speculating about\n> doing joins with just index values. See eg here:\n> http://archives.postgresql.org/pgsql-hackers/2004-05/msg00944.php\n> A lot of the low-level concerns have already been dealt with in order to\n> support bitmap indexscans, but applying non-indexable conditions before\n> fetching from the heap is still not done.\n> \nTo overcome this problem, I created a smaller \"shadow\" table:\n\nCREATE TABLE product_search\n(\n id int8 NOT NULL,\n name_desc text,\n CONSTRAINT pk_product_search PRIMARY KEY (id)\n);\n\ninsert into product_search\n select\n id, \n name || ' ' || coalesce(description,'')\n from product;\n\n\nObviously, this is almost like an index, but I need to maintain it \nmanually. I'm able to search with\n\nzeusd1=> explain analyze select id from product_search where name_desc \nlike '%Mug%';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on product_search (cost=0.00..54693.34 rows=36487 width=8) \n(actual time=20.036..2541.971 rows=91399 loops=1)\n Filter: (name_desc ~~ '%Mug%'::text)\n Total runtime: 2581.272 ms\n(3 rows)\n\nThe total runtime remains below 3 sec in all cases. Of course I still \nneed to join the main table to the result:\n\nexplain analyze select s.id,p.name from product_search s inner join \nproduct p on (p.id = s.id) where s.name_desc like '%Tiffany%'\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..55042.84 rows=58 width=40) (actual \ntime=164.437..3982.610 rows=117 loops=1)\n -> Seq Scan on product_search s (cost=0.00..54693.34 rows=58 \nwidth=8) (actual time=103.651..2717.914 rows=117 loops=1)\n Filter: (name_desc ~~ '%Tiffany%'::text)\n -> Index Scan using pk_product_id on product p (cost=0.00..6.01 \nrows=1 width=40) (actual time=10.793..10.796 rows=1 loops=117)\n Index Cond: (p.id = \"outer\".id)\n Total runtime: 4007.283 ms\n(6 rows)\n\nTook 4 seconds. Awesome! With the original table, it used to be one or \ntwo minutes!\n\nNow you can ask, why am I not using tsearch2 for this? 
Here is answer:\n\nCREATE TABLE product_search\n(\n id int8 NOT NULL,\n ts_name_desc tsvector,\n CONSTRAINT pk_product_search PRIMARY KEY (id)\n);\n\ninsert into product_search\n select\n id, \n to_tsvector(name || ' ' coalesce(description,''))\n from product;\n \nCREATE INDEX idx_product_search_ts_name_desc ON product_search USING \ngist (ts_name_desc);\nVACUUM product_search;\n\nzeusd1=> explain analyze select id from product_search where \nts_name_desc @@ to_tsquery('mug');\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------- \n\nBitmap Heap Scan on product_search (cost=25.19..3378.20 rows=912 \nwidth=8) (actual time=954.669..13112.009 rows=91434 loops=1)\n Filter: (ts_name_desc @@ '''mug'''::tsquery)\n -> Bitmap Index Scan on idx_product_search_ts_name_desc \n(cost=0.00..25.19 rows=912 width=0) (actual time=932.455..932.455 \nrows=91436 loops=1)\n Index Cond: (ts_name_desc @@ '''mug'''::tsquery)\nTotal runtime: 13155.724 ms\n(5 rows)\n\nzeusd1=> explain analyze select id from product_search where \nts_name_desc @@ to_tsquery('tiffany');\n \nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------- \n\nBitmap Heap Scan on product_search (cost=25.19..3378.20 rows=912 \nwidth=8) (actual time=13151.725..13639.112 rows=76 loops=1)\n Filter: (ts_name_desc @@ '''tiffani'''::tsquery)\n -> Bitmap Index Scan on idx_product_search_ts_name_desc \n(cost=0.00..25.19 rows=912 width=0) (actual time=13123.705..13123.705 \nrows=81 loops=1)\n Index Cond: (ts_name_desc @@ '''tiffani'''::tsquery)\nTotal runtime: 13639.478 ms\n(5 rows)\n\nAt least 13 seconds, and the main table is not joined yet. Can anybody \nexplain to me, why the seq scan is faster than the bitmap index? In the \nlast example there were only 81 rows returned, but it took more than 13 \nseconds. :( Even if the whole table can be cached into memory (which \nisn't the case), the bitmap index should be much faster. Probably there \nis a big problem with my schema but I cannot find it. What am I doing wrong?\n\nThanks,\n\n Laszlo\n\n",
"msg_date": "Tue, 12 Sep 2006 19:01:32 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "tsearch2 question (was: Poor performance on seq scan)"
},
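For reference, a minimal sketch of the shadow-table load with the concatenation written out in full (an explicit || before coalesce) plus the GiST index from the message above; it assumes the tsearch2 module is already installed in the database.

-- Rebuild the search table: name and description are joined with an explicit
-- separator, and NULL descriptions are folded to the empty string.
INSERT INTO product_search (id, ts_name_desc)
SELECT id,
       to_tsvector(name || ' ' || coalesce(description, ''))
FROM product;

CREATE INDEX idx_product_search_ts_name_desc
    ON product_search USING gist (ts_name_desc);

ANALYZE product_search;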
{
"msg_contents": "Tom Lane wrote:\n> Why is that showing 85+ percent *system* CPU time?? I could believe a\n> lot of idle CPU if the query is I/O bound, or a lot of user time if PG\n> was being a hog about doing the ~~ comparisons (not too unlikely BTW).\n> \nI'm sorry, this was really confusing. I don't know what it was - \nprobably a background system process, started from cron (?). I retried \nthe same query and I got this:\n\nzeusd1=> explain analyze select id,name from product where name like \n'%Mug%';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Seq Scan on product (cost=0.00..206891.34 rows=36487 width=40) (actual \ntime=17.188..44585.176 rows=91399 loops=1)\n Filter: (name ~~ '%Mug%'::text)\n Total runtime: 44631.150 ms\n(3 rows)\n\n tty ad4 ad6 cpu\n tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n 0 62 115.25 143 16.06 116.03 143 16.17 3 0 9 3 85\n 0 62 122.11 144 17.12 121.78 144 17.07 6 0 3 2 89\n 0 62 126.18 158 19.45 125.86 157 19.28 5 0 11 6 79\n 0 62 126.41 131 16.13 127.52 132 16.39 5 0 9 6 80\n 0 62 127.80 159 19.81 126.89 158 19.55 5 0 9 0 86\n 0 62 125.29 165 20.15 126.26 165 20.30 5 0 14 2 80\n 0 62 127.22 164 20.32 126.74 165 20.37 5 0 9 0 86\n 0 62 121.34 150 17.75 120.76 149 17.54 1 0 13 3 82\n 0 62 121.40 143 16.92 120.33 144 16.89 5 0 11 3 82\n 0 62 127.38 154 19.12 127.17 154 19.09 8 0 8 5 80\n 0 62 126.88 129 15.95 127.00 128 15.84 5 0 9 5 82\n 0 62 118.48 121 13.97 119.28 121 14.06 6 0 17 3 74\n 0 62 127.23 146 18.10 126.79 146 18.04 9 0 20 2 70\n 0 62 127.27 153 18.98 128.00 154 19.21 5 0 17 0 79\n 0 62 127.02 130 16.09 126.28 130 16.00 10 0 16 3 70\n 0 62 123.17 125 15.00 122.40 125 14.91 5 0 14 2 80\n 0 62 112.37 130 14.24 112.62 130 14.27 0 0 14 3 83\n 0 62 115.83 138 15.58 113.97 138 15.33 3 0 18 0 79\n\nA bit better transfer rate, but nothing serious.\n\nRegards,\n\n Laszlo\n\n\n",
"msg_date": "Tue, 12 Sep 2006 19:12:36 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Lazlo,\n\nOn 9/12/06 10:01 AM, \"Laszlo Nagy\" <[email protected]> wrote:\n\n> zeusd1=> explain analyze select id from product_search where name_desc\n> like '%Mug%';\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> ------------------------------------------\n> Seq Scan on product_search (cost=0.00..54693.34 rows=36487 width=8)\n> (actual time=20.036..2541.971 rows=91399 loops=1)\n> Filter: (name_desc ~~ '%Mug%'::text)\n> Total runtime: 2581.272 ms\n> (3 rows)\n> \n> The total runtime remains below 3 sec in all cases.\n\nBy creating a table with only the name field you are searching, you have\njust reduced the size of rows so that they fit in memory. That is why your\nquery runs faster.\n\nIf your searched data doesn't grow, this is fine. If it does, you will need\nto fix your disk drive OS problem.\n\n- Luke\n\n\n",
"msg_date": "Tue, 12 Sep 2006 11:36:10 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2 question (was: Poor performance on seq"
},
{
"msg_contents": "Laszlo Nagy <[email protected]> writes:\n> Tom Lane wrote:\n>> Why is that showing 85+ percent *system* CPU time??\n\n> I'm sorry, this was really confusing. I don't know what it was - \n> probably a background system process, started from cron (?). I retried \n> the same query and I got this:\n> [ around 80% idle CPU, 10% system, < 10% user ]\n\nOK, so then the thing really is I/O bound, and Luke is barking up the\nright tree. The system CPU percentage still seems high though.\nI wonder if there is a software aspect to your I/O speed woes ...\ncould the thing be doing PIO instead of DMA for instance?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2006 16:48:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan "
},
{
"msg_contents": "\n>> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n>> 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n>> 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n>> 0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0\n>> 0 12 124.66 129 15.67 124.39 129 15.64 12 0 85 3 0\n>> 0 12 117.13 121 13.87 117.95 121 13.96 12 0 84 5 0\n>> 0 12 104.84 118 12.05 105.84 118 12.19 10 0 87 2 0\n> \n> Why is that showing 85+ percent *system* CPU time?? I could believe a\n> lot of idle CPU if the query is I/O bound, or a lot of user time if PG\n> was being a hog about doing the ~~ comparisons (not too unlikely BTW).\n> But if the kernel is eating all the CPU, there's something very wrong,\n> and I don't think it's Postgres' fault.\n\nThere IS a bug for SATA disk drives in some versions of the Linux kernel. On a lark I ran some of the I/O tests in this thread, and much to my surprise discovered my write speed was 6 MB/sec ... ouch! On an identical machine, different kernel, the write speed was 54 MB/sec.\n\nA couple of hours of research turned up this:\n\n https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=168363\n\nThe fix for me was to edit /boot/grub/grub.conf, like this:\n\n kernel /vmlinuz-2.6.12-1.1381_FC3 ro root=LABEL=/ rhgb quiet \\\n ramdisk_size=12000000 ide0=noprobe ide1=noprobe\n\nNotice the \"ideX=noprobe\". Instant fix -- after reboot the disk write speed jumped to what I expected.\n\nCraig\n\n",
"msg_date": "Tue, 12 Sep 2006 14:05:52 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Craig A. James wrote:\n>\n> There IS a bug for SATA disk drives in some versions of the Linux \n> kernel. On a lark I ran some of the I/O tests in this thread, and \n> much to my surprise discovered my write speed was 6 MB/sec ... ouch! \n> On an identical machine, different kernel, the write speed was 54 MB/sec.\nMy disks are running in SATA150 mode. Whatever it means.\n\nI'm using FreeBSD, and not just because it dynamically alters the \npriority of long running processes. :-)\n\n Laszlo\n\n",
"msg_date": "Tue, 12 Sep 2006 23:49:21 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Lazlo,\n\nOn 9/12/06 2:49 PM, \"Laszlo Nagy\" <[email protected]> wrote:\n\n> I'm using FreeBSD, and not just because it dynamically alters the\n> priority of long running processes. :-)\n\nUnderstood.\n\nLinux and FreeBSD often share some driver technology.\n\nI have had extremely bad performance historically with onboard SATA chipsets\non Linux. The one exception has been with the Intel based chipsets (not the\nCPU, the I/O chipset).\n\nIt is very likely that you are having problems with the driver for the\nchipset.\n\nAre you running RAID1 in hardware? If so, turn it off and see what the\nperformance is. The onboard hardware RAID is worse than useless, it\nactually slows the I/O down.\n\nIf you want RAID with onboard chipsets, use software RAID, or buy an adapter\nfrom 3Ware or Areca for $200.\n\n- Luke \n\n\n",
"msg_date": "Tue, 12 Sep 2006 15:18:11 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Laszlo Nagy wrote:\n> Craig A. James wrote:\n>>\n>> There IS a bug for SATA disk drives in some versions of the Linux \n>> kernel. On a lark I ran some of the I/O tests in this thread, and \n>> much to my surprise discovered my write speed was 6 MB/sec ... ouch! \n>> On an identical machine, different kernel, the write speed was 54 MB/sec.\n> My disks are running in SATA150 mode. Whatever it means.\n> \n> I'm using FreeBSD, and not just because it dynamically alters the \n> priority of long running processes. :-)\n> \n\nI dunno if this has been suggested, but try changing the sysctl \nvfs.read_max. The default is 8 and results in horrible RAID performance \n(having said that, not sure if RAID1 is effected, only striped RAID \nlevels...), anyway try 16 or 32 and see if you seq IO rate improves at \nall (tho the underlying problem does look like a poor SATA \nchipset/driver combination).\n\nI also found that building your ufs2 filesystems with 32K blocks and 4K \nfragments improved sequential performance considerably (even for 8K reads).\n\nCheers\n\nMark\n",
"msg_date": "Wed, 13 Sep 2006 12:14:52 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "\n> I have had extremely bad performance historically with onboard SATA chipsets\n> on Linux. The one exception has been with the Intel based chipsets (not the\n> CPU, the I/O chipset).\n> \nThis board has Intel chipset. I cannot remember the exact type but it \nwas not in the low end category.\ndmesg says:\n\n<Intel ICH7 SATA300 controller>\nkernel: ad4: 152626MB <SAMSUNG HD160JJ ZM100-33> at ata2-master SATA150\nkernel: ad4: 152627MB <SAMSUNG HD160JJ ZM100-33> at ata3-master SATA150\n\n> It is very likely that you are having problems with the driver for the\n> chipset.\n>\n> Are you running RAID1 in hardware? If so, turn it off and see what the\n> performance is. The onboard hardware RAID is worse than useless, it\n> actually slows the I/O down.\n> \nI'm using software raid, namely gmirror:\n\nGEOM_MIRROR: Device gm0 created (id=2574033628).\nGEOM_MIRROR: Device gm0: provider ad4 detected.\nGEOM_MIRROR: Device gm0: provider ad6 detected.\nGEOM_MIRROR: Device gm0: provider ad4 activated.\nGEOM_MIRROR: Device gm0: provider ad6 activated.\n\n#gmirror list\nGeom name: gm0\nState: COMPLETE\nComponents: 2\nBalance: round-robin\nSlice: 4096\nFlags: NONE\nGenID: 0\nSyncID: 1\nID: 2574033628\nProviders:\n1. Name: mirror/gm0\n Mediasize: 160040803328 (149G)\n Sectorsize: 512\n Mode: r5w5e6\nConsumers:\n1. Name: ad4\n Mediasize: 160040803840 (149G)\n Sectorsize: 512\n Mode: r1w1e1\n State: ACTIVE\n Priority: 0\n Flags: DIRTY\n GenID: 0\n SyncID: 1\n ID: 1153981856\n2. Name: ad6\n Mediasize: 160041885696 (149G)\n Sectorsize: 512\n Mode: r1w1e1\n State: ACTIVE\n Priority: 0\n Flags: DIRTY\n GenID: 0\n SyncID: 1\n ID: 3520427571\n\n\nI tried to do:\n\n#sysctl vfs.read_max=32\nvfs.read_max: 6 -> 32\n\nbut I could not reach better disk read performance.\n\nThank you for your suggestions. Looks like I need to buy SCSI disks.\n\nRegards,\n\n Laszlo\n\n",
"msg_date": "Wed, 13 Sep 2006 12:16:36 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "\nOn 13-Sep-06, at 6:16 AM, Laszlo Nagy wrote:\n\n>\n>> I have had extremely bad performance historically with onboard \n>> SATA chipsets\n>> on Linux. The one exception has been with the Intel based \n>> chipsets (not the\n>> CPU, the I/O chipset).\n>>\n> This board has Intel chipset. I cannot remember the exact type but \n> it was not in the low end category.\n> dmesg says:\n>\n> <Intel ICH7 SATA300 controller>\n> kernel: ad4: 152626MB <SAMSUNG HD160JJ ZM100-33> at ata2-master \n> SATA150\n> kernel: ad4: 152627MB <SAMSUNG HD160JJ ZM100-33> at ata3-master \n> SATA150\n>\n>> It is very likely that you are having problems with the driver for \n>> the\n>> chipset.\n>>\n>> Are you running RAID1 in hardware? If so, turn it off and see \n>> what the\n>> performance is. The onboard hardware RAID is worse than useless, it\n>> actually slows the I/O down.\n>>\n> I'm using software raid, namely gmirror:\n>\n> GEOM_MIRROR: Device gm0 created (id=2574033628).\n> GEOM_MIRROR: Device gm0: provider ad4 detected.\n> GEOM_MIRROR: Device gm0: provider ad6 detected.\n> GEOM_MIRROR: Device gm0: provider ad4 activated.\n> GEOM_MIRROR: Device gm0: provider ad6 activated.\n>\n> #gmirror list\n> Geom name: gm0\n> State: COMPLETE\n> Components: 2\n> Balance: round-robin\n> Slice: 4096\n> Flags: NONE\n> GenID: 0\n> SyncID: 1\n> ID: 2574033628\n> Providers:\n> 1. Name: mirror/gm0\n> Mediasize: 160040803328 (149G)\n> Sectorsize: 512\n> Mode: r5w5e6\n> Consumers:\n> 1. Name: ad4\n> Mediasize: 160040803840 (149G)\n> Sectorsize: 512\n> Mode: r1w1e1\n> State: ACTIVE\n> Priority: 0\n> Flags: DIRTY\n> GenID: 0\n> SyncID: 1\n> ID: 1153981856\n> 2. Name: ad6\n> Mediasize: 160041885696 (149G)\n> Sectorsize: 512\n> Mode: r1w1e1\n> State: ACTIVE\n> Priority: 0\n> Flags: DIRTY\n> GenID: 0\n> SyncID: 1\n> ID: 3520427571\n>\n>\n> I tried to do:\n>\n> #sysctl vfs.read_max=32\n> vfs.read_max: 6 -> 32\n>\n> but I could not reach better disk read performance.\n>\n> Thank you for your suggestions. Looks like I need to buy SCSI disks.\n\nWell before you go do that try the areca SATA raid card\n>\n> Regards,\n>\n> Laszlo\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Wed, 13 Sep 2006 08:22:37 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "\nLazlo,\n\n>> Thank you for your suggestions. Looks like I need to buy SCSI disks.\n> \n> Well before you go do that try the areca SATA raid card\n\nYes, by all means spend $200 and buy the Areca or 3Ware RAID card - it's a\nsimple switch out of the cables and you should be golden.\n\nAgain - you should only expect an increase in performance from 4-6 times\nfrom what you are getting now unless you increase the number of disks.\n\n- Luke\n\n\n",
"msg_date": "Wed, 13 Sep 2006 13:33:48 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Hi, Piotr,\n\nPiotr Kołaczkowski wrote:\n\n> Why match rows from the heap if ALL required data are in the index itself?\n> Why look at the heap at all?\n\nBecause the index does not contain any transaction informations, so it\nhas to look to the heap to find out which of the rows are current.\n\nThis is one of the more debated points in the PostgreSQL way of MVCC\nimplementation.\n\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n\n",
"msg_date": "Mon, 18 Sep 2006 10:22:00 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Because there is no MVCC information in the index.\n\ncug\n\n2006/9/12, Piotr Kołaczkowski <[email protected]>:\n> On Tuesday 12 September 2006 12:47, Heikki Linnakangas wrote:\n> > Laszlo Nagy wrote:\n> > > I made another test. I create a file with the identifiers and names of\n> > > the products:\n> > >\n> > > psql#\\o products.txt\n> > > psql#select id,name from product;\n> > >\n> > > Then I can search using grep:\n> > >\n> > > grep \"Mug\" products.txt | cut -f1 -d\\|\n> > >\n> > > There is a huge difference. This command runs within 0.5 seconds. That\n> > > is, at least 76 times faster than the seq scan. It is the same if I\n> > > vacuum, backup and restore the database. I thought that the table is\n> > > stored in one file, and the seq scan will be actually faster than\n> > > grepping the file. Can you please tell me what am I doing wrong? I'm\n> > > not sure if I can increase the performance of a seq scan by adjusting\n> > > the values in postgresql.conf. I do not like the idea of exporting the\n> > > product table periodically into a txt file, and search with grep. :-)\n> >\n> > Is there any other columns besides id and name in the table? How big is\n> > products.txt compared to the heap file?\n> >\n> > > Another question: I have a btree index on product(name). It contains\n> > > all product names and the identifiers of the products. Wouldn't it be\n> > > easier to seq scan the index instead of seq scan the table? The index\n> > > is only 66MB, the table is 1123MB.\n> >\n> > Probably, but PostgreSQL doesn't know how to do that. Even if it did, it\n> > depends on how many matches there is. If you scan the index and then\n> > fetch the matching rows from the heap, you're doing random I/O to the\n> > heap. That becomes slower than scanning the heap sequentially if you're\n> > going to get more than a few hits.\n>\n> Why match rows from the heap if ALL required data are in the index itself?\n> Why look at the heap at all?\n>\n> This is the same performance problem in PostgreSQL I noticed when doing\n> some \"SELECT count(*)\" queries. 
Look at this:\n>\n> explain analyze select count(*) from transakcja where data > '2005-09-09' and\n> miesiac >= (9 + 2005 * 12) and kwota < 50;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=601557.86..601557.87 rows=1 width=0) (actual\n> time=26733.479..26733.484 rows=1 loops=1)\n> -> Bitmap Heap Scan on transakcja (cost=154878.00..596928.23 rows=1851852\n> width=0) (actual time=9974.208..18796.060 rows=1654218 loops=1)\n> Recheck Cond: ((miesiac >= 24069) AND (kwota < 50::double precision))\n> Filter: (data > '2005-09-09 00:00:00'::timestamp without time zone)\n> -> Bitmap Index Scan on idx_transakcja_miesiac_kwota\n> (cost=0.00..154878.00 rows=5555556 width=0) (actual time=9919.967..9919.967\n> rows=1690402 loops=1)\n> Index Cond: ((miesiac >= 24069) AND (kwota < 50::double\n> precision))\n> Total runtime: 26733.980 ms\n> (7 rows)\n>\n> The actual time retrieving tuples from the index is less than 10 seconds, but\n> the system executes needless heap scan that takes up additional 16 seconds.\n>\n> Best regards,\n> Peter\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n-- \nPostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\nhttp://www.bignerdranch.com/news/2006-08-21.shtml\n",
"msg_date": "Mon, 18 Sep 2006 10:50:43 +0200",
"msg_from": "\"Guido Neitzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
}
] |
[
{
"msg_contents": "Lazlo, \n\n> Meanwhile, \"iostat 5\" gives something like this:\n> \n> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n> 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n> 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n\nThis is your problem. Do the following and report the results here:\n\nTake the number of GB of memory you have (say 2 for 2GB), multiply it by\n250000. This is the number of 8KB pages you can fit in twice your ram.\nLet's say you have 2GB - the result is 500,000.\n\nUse that number to do the following test on your database directory:\n time bash -c \"dd if=/dev/zero of=/<dbdir>/bigfile bs=8k\ncount=<number_from_above> && sync\"\n\nThen do this:\n time bash -c \"dd if=/<dbdir>/bigfile of=/dev/null bs=8k\"\n\n> \n> I made another test. I create a file with the identifiers and \n> names of the products:\n> \n> psql#\\o products.txt\n> psql#select id,name from product;\n> \n> Then I can search using grep:\n> \n> grep \"Mug\" products.txt | cut -f1 -d\\|\n> \n> There is a huge difference. This command runs within 0.5 \n> seconds. That is, at least 76 times faster than the seq scan. \n\nThe file probably fits in the I/O cache. Your disks will at most go\nbetween 60-80MB/s, or from 5-7 times faster than what you see now. RAID\n1 with one query will only deliver one disk worth of bandwidth.\n\n- Luke\n\n",
"msg_date": "Tue, 12 Sep 2006 06:50:56 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Luke Lonergan �rta:\n> Lazlo, \n>\n> \n>> Meanwhile, \"iostat 5\" gives something like this:\n>>\n>> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n>> 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n>> 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n>> \n>\n> This is your problem. Do the following and report the results here:\n>\n> Take the number of GB of memory you have (say 2 for 2GB), multiply it by\n> 250000. This is the number of 8KB pages you can fit in twice your ram.\n> Let's say you have 2GB - the result is 500,000.\n>\n> Use that number to do the following test on your database directory:\n> time bash -c \"dd if=/dev/zero of=/<dbdir>/bigfile bs=8k\n> count=<number_from_above> && sync\"\n> \nI have 1GB RAM. The data directory is in /usr/local/pgsql/data. The root \nof this fs is /usr.\n\ntime sh -c \"dd if=/dev/zero of=/usr/test/bigfile bs=8k count=250000 && \nsync \"\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 48.030627 secs (42639460 bytes/sec)\n0.178u 8.912s 0:48.31 18.7% 9+96k 37+15701io 0pf+0w\n\n\n> Then do this:\n> time bash -c \"dd if=/<dbdir>/bigfile of=/dev/null bs=8k\"\n> \ntime sh -c \"dd if=/usr/test/bigfile of=/dev/null bs=8k\"\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 145.293473 secs (14095609 bytes/sec)\n0.110u 5.857s 2:25.31 4.1% 10+99k 32923+0io 0pf+0w\n\nAt this point I thought there was another process reading doing I/O so I \nretried:\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 116.395211 secs (17595226 bytes/sec)\n0.137u 5.658s 1:56.51 4.9% 10+103k 29082+0io 0pf+1w\n\nand again:\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 120.198224 secs (17038521 bytes/sec)\n0.063u 5.780s 2:00.21 4.8% 10+98k 29776+0io 0pf+0w\n\nThis is a mirrored disk with two SATA disks. In theory, writing should \nbe slower than reading. Is this a hardware problem? Or is it that \"sync\" \ndid not do the sync?\n\n Laszlo\n\n",
"msg_date": "Tue, 12 Sep 2006 14:15:30 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
},
{
"msg_contents": "Laszlo Nagy <gandalf 'at' designaproduct.biz> writes:\n\n> This is a mirrored disk with two SATA disks. In theory, writing should\n> be slower than reading. Is this a hardware problem? Or is it that\n> \"sync\" did not do the sync?\n\nSATA disks are supposed to be capable of lying to pg's fsync (pg\nasking the kernel to synchronize a write and waiting until it is\nfinished). Same can probably happen to the \"sync\" command.\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "12 Sep 2006 14:36:22 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on seq scan"
}
] |
[
{
"msg_contents": "Hi All\n I have installed a application with postgres-8.1.4 , I \nhave to optimize the performance, As a measure i thought of enabling \nAuto commit , is it a right decision to take , If correct please suggest \nthe steps that i need to follow in order to implement the Auto Vacuum.\n\n And also please suggest other steps that i need to \nimprove the performance .\n\nThanks and Regards\nKris\n\n",
"msg_date": "Tue, 12 Sep 2006 20:25:26 +0530",
"msg_from": "krishnaraj D <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reg - Autovacuum"
},
{
"msg_contents": "\n> Hi All\n> I have installed a application with postgres-8.1.4 , I \n> have to optimize the performance, As a measure i thought of enabling \n> Auto commit , is it a right decision to take , If correct please suggest \n> the steps that i need to follow in order to implement the Auto Vacuum.\n\nhttp://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n\n\n\n> \n> And also please suggest other steps that i need to \n> improve the performance .\n> \n\nhttp://www.powerpostgresql.com/PerfList\n\n\nBye,\nChris.\n\n\n-- \n\nChris Mair\nhttp://www.1006.org\n\n",
"msg_date": "Tue, 12 Sep 2006 17:18:27 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reg - Autovacuum"
}
] |
[
{
"msg_contents": "Sorry I answer the message only to Scott Marlowe. I re-send the response\n\n--------- Mensaje reenviado --------\nDe: Piñeiro <[email protected]>\nPara: Scott Marlowe <[email protected]>\nAsunto: Re: [PERFORM] Performance problem with Sarge compared with Woody\nFecha: Tue, 12 Sep 2006 17:36:41 +0200\nEl mar, 12-09-2006 a las 09:27 -0500, Scott Marlowe escribió:\n> On Tue, 2006-09-12 at 02:18, Piñeiro wrote:\n> > El lun, 11-09-2006 a las 17:07 -0500, Scott Marlowe escribió:\n\n\n> The 7.2.x query planner, if I remember correctly, did ALL The join ons\n> first, then did the joins in the where clause in whatever order it\n> thought best.\n> \n> Starting with 7.3 or 7.4 (not sure which) the planner was able to try\n> and decide which tables in both the join on() syntax and with where\n> clauses it wanted to run.\n> \n> Is it possible to fix the strangness of the ERP so it doesn't do that\n> thing where it puts a lot of unconstrained tables in the middle of the\n> from list? Also, moving where clause join condititions into the join\n> on() syntax is usually a huge win.\nWell, I'm currently one of the new version of this ERP developer, but\nI'm a \"recent adquisition\" at the staff. I don't take part at the\ndeveloping of the old version, and manage how the application creates\nthis huge query could be a madness.\n\n> \n> I'd probably put 8.1.4 (or the latest 8.2 snapshot) on a test box and\n> see what it could do with this query for an afternoon. It might run\n> just as slow, or it might \"get it right\" and run it in a few seconds. \n> While there are the occasions where a query does run slower when\n> migrating from an older version to a newer version, the opposite is\n> usually true. From 7.2 to 7.4 there was a lot of work done in \"getting\n> things right\" and some of this caused some things to go slower, although\n> not much.\n\nI tried recently to execute this query on a database installed on a\nlaptop with 256 MB RAM, ubuntu, and the 8.0.7 postgreSQL version, and I\ndon't solve nothing... well the next try will be use 8.1.4\n\n\n> > There are any difference between 7.2.1 and 7.4.2 versions about this?\n> > With the 7.4.2 there are more indices, or there was duplicated indices\n> > with the woody version too? \n> > (before you comment this: yes I try to remove the duplicate indices to\n> > check if this was the problem)\n> \n> Wait, are you running 7.4.2 or 7.4.7? 7.4.7 is bad enough, but 7.4.2 is\n> truly dangerous. Upgrade to 7.4.13 whichever version you're running.\n> \nSorry a mistmatch, we are using the sarge postgre version, 7.4.7\n\n\n-- \nPiñeiro <[email protected]>\n",
"msg_date": "Tue, 12 Sep 2006 18:06:46 +0200",
"msg_from": "=?ISO-8859-1?Q?Pi=F1eiro?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: Performance problem with Sarge compared with\n\tWoody]"
},
{
"msg_contents": "On Tue, 2006-09-12 at 11:06, Piñeiro wrote:\n> --------- Mensaje reenviado --------\n> De: Piñeiro <[email protected]>\n> Para: Scott Marlowe <[email protected]>\n> Asunto: Re: [PERFORM] Performance problem with Sarge compared with Woody\n> Fecha: Tue, 12 Sep 2006 17:36:41 +0200\n> El mar, 12-09-2006 a las 09:27 -0500, Scott Marlowe escribió:\n> > On Tue, 2006-09-12 at 02:18, Piñeiro wrote:\n> > > El lun, 11-09-2006 a las 17:07 -0500, Scott Marlowe escribió:\n> \n> \n> > The 7.2.x query planner, if I remember correctly, did ALL The join ons\n> > first, then did the joins in the where clause in whatever order it\n> > thought best.\n> > \n> > Starting with 7.3 or 7.4 (not sure which) the planner was able to try\n> > and decide which tables in both the join on() syntax and with where\n> > clauses it wanted to run.\n> > \n> > Is it possible to fix the strangness of the ERP so it doesn't do that\n> > thing where it puts a lot of unconstrained tables in the middle of the\n> > from list? Also, moving where clause join condititions into the join\n> > on() syntax is usually a huge win.\n> Well, I'm currently one of the new version of this ERP developer, but\n> I'm a \"recent adquisition\" at the staff. I don't take part at the\n> developing of the old version, and manage how the application creates\n> this huge query could be a madness.\n> \n> > \n> > I'd probably put 8.1.4 (or the latest 8.2 snapshot) on a test box and\n> > see what it could do with this query for an afternoon. It might run\n> > just as slow, or it might \"get it right\" and run it in a few seconds. \n> > While there are the occasions where a query does run slower when\n> > migrating from an older version to a newer version, the opposite is\n> > usually true. From 7.2 to 7.4 there was a lot of work done in \"getting\n> > things right\" and some of this caused some things to go slower, although\n> > not much.\n> \n> I tried recently to execute this query on a database installed on a\n> laptop with 256 MB RAM, ubuntu, and the 8.0.7 postgreSQL version, and I\n> don't solve nothing... well the next try will be use 8.1.4\n\nOK, I'm gonna guess that 8.1 or 8.2 will likely not fix your problem, as\nit's likely that somewhere along the line the planner is making some\ninefficient unconstrained join on your data in some intermediate step.\n\nAs Tom asked, post the explain analyze output for this query. I'm\nguessing there'll be a stage that is creating millions (possibly upon\nmillions) of rows from a cross product.\n",
"msg_date": "Tue, 12 Sep 2006 11:20:04 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Performance problem with Sarge compared"
},
{
"msg_contents": "El mar, 12-09-2006 a las 11:20 -0500, Scott Marlowe escribió:\n> As Tom asked, post the explain analyze output for this query. I'm\n> guessing there'll be a stage that is creating millions (possibly upon\n> millions) of rows from a cross product.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\nWell, yes, it is a friend, but as the select at postgre Sarge version\nnever finished I can't use a explain analyze. I show you the explain,\nwith the hope that someone has any idea, but i think that this is almost\nindecipherable (if you want the Woody ones i can post the explain\nanalyze). Thanks in advance.\n\n \n*****************************************************************************\n****************************************************************************** QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=91324.61..91324.88 rows=3 width=294)\n -> Sort (cost=91324.61..91324.62 rows=3 width=294)\n Sort Key: numerofacturafactura, codigofacturafactura,\ncodigoempresafactura, codigotiendafactura, estadofactura,\nfechaemisionfactura, tipoivafactura, baseimponiblemodificadafactura,\nbaseimponiblenuevafactura, refacturafactura, codigopartyparticipantshop,\nnombreparticipantshop, codigopartyparticipantpagador,\nnickparticipantpagador, shortnameparticipantpagador,\ncifparticipantpagador, codigoreparacionrepair, codigotiendarepair,\ncodigoclienterepair, codigocompaniarepair, codigoautoarteshop,\ncodigopartyparticipantenter, nombreparticipantcompany,\nshortnameparticipantcompany, codigopartyparticipantcompany,\ncifparticipantcompany, codigopagopago, codigobancopago,\ncodigooficinapago, numerocuentapago, esaplazospago, pagosrealizadospago,\nnumerovencimientospago, fechainiciopago, esdomiciliacionpago\n -> Append (cost=27613.94..91324.59 rows=3 width=294)\n -> Subquery Scan \"*SELECT* 1\" (cost=27613.94..27613.96\nrows=1 width=294)\n -> Sort (cost=27613.94..27613.95 rows=1\nwidth=294)\n Sort Key: participantecompany.nombre,\nfacturaabono.numerofactura\n -> Nested Loop (cost=21240.09..27613.93\nrows=1 width=294)\n -> Hash Join (cost=21240.09..27609.14\nrows=1 width=230)\n Hash Cond: ((\"outer\".codigotienda\n= \"inner\".codigoparty) AND (\"outer\".codigoempresa =\n\"inner\".codigoempresa) AND (\"outer\".codigoreparacion =\n\"inner\".codigoreparacion))\n -> Merge Right Join\n(cost=2381.66..8569.33 rows=12091 width=119)\n Merge Cond:\n((\"outer\".codigoempresa = \"inner\".codigoempresa) 
AND\n(\"outer\".codigotienda = \"inner\".codigotienda) AND (\"outer\".codigopago =\n\"inner\".codigopago))\n -> Index Scan using\ncodigopago_pk on pago (cost=0.00..5479.51 rows=77034 width=56)\n -> Sort\n(cost=2381.66..2411.89 rows=12091 width=87)\n Sort Key:\nfacturaabono.codigoempresa, facturaabono.codigotienda,\nfacturaabono.codigopago\n -> Seq Scan on\nfacturaabono (cost=0.00..1561.79 rows=12091 width=87)\n Filter:\n((estado >= 0) AND (numerofactura IS NOT NULL) AND (fechaemision <=\n'2006-09-07 00:00:00+02'::timestamp with time zone) AND (fechaemision >=\n'2005-08-07 00:00:00+02'::timestamp with time zone) AND (tipoiva IS\nNULL))\n -> Hash\n(cost=18858.26..18858.26 rows=23 width=135)\n -> Hash Join\n(cost=13965.21..18858.26 rows=23 width=135)\n Hash Cond:\n(\"outer\".codigotienda = \"inner\".codigoparty)\n -> Merge Right Join\n(cost=13887.40..18468.57 rows=62329 width=100)\n Merge Cond:\n((\"outer\".codigoreparacion = \"inner\".codigoreparacion) AND\n(\"outer\".codigoempresa = \"inner\".codigoempresa) AND\n(\"outer\".codigotienda = \"inner\".codigotienda))\n -> Index Scan\nusing codigosiniestro_pk on siniestro (cost=0.00..3638.20 rows=38380\nwidth=24)\n -> Sort\n(cost=13887.40..14043.22 rows=62329 width=100)\n Sort Key:\nreparacion.codigoreparacion, reparacion.codigoempresa,\nreparacion.codigotienda\n -> Hash\nLeft Join (cost=2299.69..7033.53 rows=62329 width=100)\n\nHash Cond: (\"outer\".codigocompania = \"inner\".codigoparty)\n ->\nSeq Scan on reparacion (cost=0.00..1803.29 rows=62329 width=40)\n ->\nHash (cost=1695.35..1695.35 rows=47335 width=60)\n\n-> Seq Scan on participante participantecompany (cost=0.00..1695.35\nrows=47335 width=60)\n -> Hash\n(cost=77.77..77.77 rows=17 width=35)\n -> Nested Loop\n(cost=0.00..77.77 rows=17 width=35)\n -> Seq\nScan on tienda (cost=0.00..1.16 rows=16 width=13)\n -> Index\nScan using codigoparticipante_pk on participante participanteshop\n(cost=0.00..4.78 rows=1 width=22)\n\nIndex Cond: (\"outer\".codigotienda = participanteshop.codigoparty)\n -> Index Scan using\ncodigoparticipante_pk on participante participantecliente\n(cost=0.00..4.78 rows=1 width=72)\n Index Cond:\n(\"outer\".codigopagador = participantecliente.codigoparty)\n Filter: ((nick)::text ~~* '%\nASITUR%'::text)\n -> Subquery Scan \"*SELECT* 2\" (cost=27572.17..27572.27\nrows=1 width=294)\n -> Unique (cost=27572.17..27572.26 rows=1\nwidth=294)\n -> Sort (cost=27572.17..27572.18 rows=1\nwidth=294)\n Sort Key: participantecompany.nombre,\nfacturaabono.numerofactura, facturaabono.codigofactura,\nfacturaabono.codigoempresa, facturaabono.codigotienda,\nfacturaabono.estado, a.fechaemision, facturaabono.tipoiva,\nfacturaabono.baseimponiblemodificada,\nto_char(facturaabono.baseimponiblenueva, '99999999D99'::text),\nfacturaabono.refactura, participanteshop.codigoparty,\nparticipanteshop.nombre, participantecliente.codigoparty,\nparticipantecliente.nick, participantecliente.nombrecorto,\nparticipantecliente.cif, CASE WHEN (reparacion.codigocompania IS NOT\nNULL) THEN reparacion.codigoreparacion ELSE NULL::bigint END,\nreparacion.codigotienda, reparacion.codigocliente,\nreparacion.codigocompania, tienda.codigoautoarte,\nfacturaabono.codigoempresa, participantecompany.nombrecorto,\nparticipantecompany.codigoparty, participantecompany.cif,\npago.codigopago, pago.codigobanco, pago.codigooficina,\npago.numerocuenta, pago.esaplazos, pago.pagosrealizados,\npago.numerovencimientos, pago.fechainicio, pago.esdomiciliacion\n -> Nested Loop\n(cost=21240.03..27572.16 rows=1 width=294)\n -> Nested 
Loop\n(cost=21240.03..27566.23 rows=1 width=326)\n Join Filter:\n((\"outer\".codigoparty = \"inner\".codigotienda) AND (\"outer\".codigoempresa\n= \"inner\".codigoempresa) AND (\"inner\".codigoreparacion =\n\"outer\".codigoreparacion))\n -> Nested Loop\n(cost=21240.03..27563.02 rows=1 width=302)\n -> Hash Join\n(cost=21240.03..27548.65 rows=3 width=238)\n Hash Cond:\n((\"outer\".codigotienda = \"inner\".codigoparty) AND (\"outer\".codigoempresa\n= \"inner\".codigoempresa))\n -> Merge Right\nJoin (cost=2381.66..8569.33 rows=12091 width=103)\n Merge\nCond: ((\"outer\".codigoempresa = \"inner\".codigoempresa) AND\n(\"outer\".codigotienda = \"inner\".codigotienda) AND (\"outer\".codigopago =\n\"inner\".codigopago))\n -> Index\nScan using codigopago_pk on pago (cost=0.00..5479.51 rows=77034\nwidth=56)\n -> Sort\n(cost=2381.66..2411.89 rows=12091 width=71)\n\nSort Key: facturaabono.codigoempresa, facturaabono.codigotienda,\nfacturaabono.codigopago\n ->\nSeq Scan on facturaabono (cost=0.00..1561.79 rows=12091 width=71)\n\nFilter: ((estado >= 0) AND (numerofactura IS NOT NULL) AND (fechaemision\n<= '2006-09-07 00:00:00+02'::timestamp with time zone) AND (fechaemision\n>= '2005-08-07 00:00:00+02'::timestamp with time zone) AND (tipoiva IS\nNULL))\n -> Hash\n(cost=18858.26..18858.26 rows=23 width=135)\n -> Hash\nJoin (cost=13965.21..18858.26 rows=23 width=135)\n\nHash Cond: (\"outer\".codigotienda = \"inner\".codigoparty)\n ->\nMerge Right Join (cost=13887.40..18468.57 rows=62329 width=100)\n\nMerge Cond: ((\"outer\".codigoreparacion = \"inner\".codigoreparacion) AND\n(\"outer\".codigoempresa = \"inner\".codigoempresa) AND\n(\"outer\".codigotienda = \"inner\".codigotienda))\n\n-> Index Scan using codigosiniestro_pk on siniestro\n(cost=0.00..3638.20 rows=38380 width=24)\n\n-> Sort (cost=13887.40..14043.22 rows=62329 width=100)\n\nSort Key: reparacion.codigoreparacion, reparacion.codigoempresa,\nreparacion.codigotienda\n\n-> Hash Left Join (cost=2299.69..7033.53 rows=62329 width=100)\n\nHash Cond: (\"outer\".codigocompania = \"inner\".codigoparty)\n\n-> Seq Scan on reparacion (cost=0.00..1803.29 rows=62329 width=40)\n\n-> Hash (cost=1695.35..1695.35 rows=47335 width=60)\n\n-> Seq Scan on participante participantecompany (cost=0.00..1695.35\nrows=47335 width=60)\n ->\nHash (cost=77.77..77.77 rows=17 width=35)\n\n-> Nested Loop (cost=0.00..77.77 rows=17 width=35)\n\n-> Seq Scan on tienda (cost=0.00..1.16 rows=16 width=13)\n\n-> Index Scan using codigoparticipante_pk on participante\nparticipanteshop (cost=0.00..4.78 rows=1 width=22)\n\nIndex Cond: (\"outer\".codigotienda = participanteshop.codigoparty)\n -> Index Scan using\ncodigoparticipante_pk on participante participantecliente\n(cost=0.00..4.78 rows=1 width=72)\n Index Cond:\n(\"outer\".codigopagador = participantecliente.codigoparty)\n Filter:\n((nick)::text ~~* '%ASITUR%'::text)\n -> Index Scan using\nalbaranabono_codigofact_index on albaranabono (cost=0.00..3.16 rows=3\nwidth=32)\n Index Cond:\n(\"outer\".codigofactura = albaranabono.numerofactura)\n -> Index Scan using\ncodigofacturaabono_pk on facturaabono a (cost=0.00..5.91 rows=1\nwidth=32)\n Index Cond:\n((a.codigoempresa = \"outer\".codigoempresa) AND (a.codigotienda =\n\"outer\".codigoparty) AND (a.codigofactura = \"outer\".codigofactura))\n -> Subquery Scan \"*SELECT* 3\" (cost=36138.34..36138.36\nrows=1 width=224)\n -> Sort (cost=36138.34..36138.35 rows=1\nwidth=224)\n Sort Key: participantecompany.nombre,\nfacturaabono.numerofactura\n -> Group (cost=36138.26..36138.33 
rows=1\nwidth=224)\n -> Sort (cost=36138.26..36138.26\nrows=1 width=224)\n Sort Key:\nfacturaabono.codigofactura, facturaabono.numerofactura,\nfacturaabono.codigoempresa, facturaabono.codigotienda,\nfacturaabono.estado, facturaabono.fechaemision, facturaabono.tipoiva,\nfacturaabono.baseimponiblemodificada, facturaabono.baseimponiblenueva,\nfacturaabono.refactura, participanteshop.codigoparty,\nparticipanteshop.nombre, participantecliente.codigoparty,\nparticipantecliente.nick, participantecliente.nombrecorto,\nparticipantecompany.nombre, participantecliente.cif,\nreparacion.codigotienda, tienda.codigoautoarte, pago.codigopago,\npago.codigobanco, pago.codigooficina, pago.numerocuenta, pago.esaplazos,\npago.pagosrealizados, pago.numerovencimientos, pago.fechainicio,\npago.esdomiciliacion\n -> Nested Loop\n(cost=36133.33..36138.25 rows=1 width=224)\n -> Merge Join\n(cost=36133.33..36133.46 rows=1 width=160)\n Merge Cond:\n(\"outer\".numerofacturafactura = \"inner\".codigofactura)\n Join Filter:\n((\"outer\".codigotiendaalbarantaller = \"inner\".codigoparty) AND\n(\"outer\".codigoempresaalbarantaller = \"inner\".codigoempresa) AND\n(\"inner\".codigoreparacion = \"outer\".codigoreparaciontaller))\n -> Subquery Scan\nfacturastalleres (cost=10036.48..10036.56 rows=3 width=32)\n -> Unique\n(cost=10036.48..10036.53 rows=3 width=48)\n -> Sort\n(cost=10036.48..10036.48 rows=3 width=48)\n\nSort Key: facturaabono.codigofactura, facturaabono.codigopago,\npublic.albaranabono.numerofactura, public.albaranabono.codigoreparacion,\nfacturataller.codigoempresaalbaran, facturataller.codigotiendaalbaran\n ->\nHash Join (cost=6159.37..10036.45 rows=3 width=48)\n\nHash Cond: ((\"outer\".codigofactura = \"inner\".numerofacturataller) AND\n(\"outer\".codigotienda = \"inner\".codigotiendafactura) AND\n(\"outer\".codigoempresa = \"inner\".codigoempresafactura))\n\n-> Merge Right Join (cost=5735.27..8868.50 rows=49588 width=40)\n\nMerge Cond: ((\"outer\".numerofactura = \"inner\".codigofactura) AND\n(\"outer\".codigotienda = \"inner\".codigotienda) AND (\"outer\".codigoempresa\n= \"inner\".codigoempresa))\n\nFilter: (\"outer\".numerofactura IS NULL)\n\n-> Index Scan using albaranabono_codigofacttot_inde on albaranabono\n(cost=0.00..2521.19 rows=48704 width=24)\n\n-> Sort (cost=5735.27..5859.24 rows=49588 width=32)\n\nSort Key: facturaabono.codigofactura, facturaabono.codigotienda,\nfacturaabono.codigoempresa\n\n-> Seq Scan on facturaabono (cost=0.00..1189.88 rows=49588 width=32)\n\n-> Hash (cost=424.00..424.00 rows=13 width=48)\n\n-> Nested Loop (cost=0.00..424.00 rows=13 width=48)\n\nJoin Filter: ((\"inner\".codigotienda = \"outer\".codigotiendaalbaran) AND\n(\"inner\".codigoempresa = \"outer\".codigoempresaalbaran))\n\n-> Seq Scan on facturataller (cost=0.00..1.73 rows=73 width=48)\n\n-> Index Scan using albaranabono_codigoalb_index on albaranabono\n(cost=0.00..5.77 rows=1 width=32)\n\nIndex Cond: (albaranabono.numeroalbaran = \"outer\".numeroalbaran)\n -> Sort\n(cost=26096.86..26096.86 rows=3 width=184)\n Sort Key:\nfacturaabono.codigofactura\n -> Hash Join\n(cost=19788.22..26096.83 rows=3 width=184)\n Hash\nCond: ((\"outer\".codigotienda = \"inner\".codigoparty) AND\n(\"outer\".codigoempresa = \"inner\".codigoempresa))\n -> Merge\nRight Join (cost=2381.66..8569.33 rows=12091 width=111)\n\nMerge Cond: ((\"outer\".codigoempresa = \"inner\".codigoempresa) AND\n(\"outer\".codigotienda = \"inner\".codigotienda) AND (\"outer\".codigopago =\n\"inner\".codigopago))\n ->\nIndex Scan using codigopago_pk on pago 
(cost=0.00..5479.51 rows=77034\nwidth=56)\n ->\nSort (cost=2381.66..2411.89 rows=12091 width=79)\n\nSort Key: facturaabono.codigoempresa, facturaabono.codigotienda,\nfacturaabono.codigopago\n\n-> Seq Scan on facturaabono (cost=0.00..1561.79 rows=12091 width=79)\n\nFilter: ((estado >= 0) AND (numerofactura IS NOT NULL) AND (fechaemision\n<= '2006-09-07 00:00:00+02'::timestamp with time zone) AND (fechaemision\n>= '2005-08-07 00:00:00+02'::timestamp with time zone) AND (tipoiva IS\nNULL))\n -> Hash\n(cost=17406.45..17406.45 rows=23 width=73)\n ->\nHash Join (cost=12513.40..17406.45 rows=23 width=73)\n\nHash Cond: (\"outer\".codigotienda = \"inner\".codigoparty)\n\n-> Merge Right Join (cost=12435.59..17016.76 rows=62329 width=38)\n\nMerge Cond: ((\"outer\".codigoreparacion = \"inner\".codigoreparacion) AND\n(\"outer\".codigoempresa = \"inner\".codigoempresa) AND\n(\"outer\".codigotienda = \"inner\".codigotienda))\n\n-> Index Scan using codigosiniestro_pk on siniestro\n(cost=0.00..3638.20 rows=38380 width=24)\n\n-> Sort (cost=12435.59..12591.41 rows=62329 width=38)\n\nSort Key: reparacion.codigoreparacion, reparacion.codigoempresa,\nreparacion.codigotienda\n\n-> Hash Left Join (cost=2091.69..6497.53 rows=62329 width=38)\n\nHash Cond: (\"outer\".codigocompania = \"inner\".codigoparty)\n\n-> Seq Scan on reparacion (cost=0.00..1803.29 rows=62329 width=32)\n\n-> Hash (cost=1695.35..1695.35 rows=47335 width=22)\n\n-> Seq Scan on participante participantecompany (cost=0.00..1695.35\nrows=47335 width=22)\n\n-> Hash (cost=77.77..77.77 rows=17 width=35)\n\n-> Nested Loop (cost=0.00..77.77 rows=17 width=35)\n\n-> Seq Scan on tienda (cost=0.00..1.16 rows=16 width=13)\n\n-> Index Scan using codigoparticipante_pk on participante\nparticipanteshop (cost=0.00..4.78 rows=1 width=22)\n\nIndex Cond: (\"outer\".codigotienda = participanteshop.codigoparty)\n -> Index Scan using\ncodigoparticipante_pk on participante participantecliente\n(cost=0.00..4.78 rows=1 width=72)\n Index Cond:\n(\"outer\".codigopagador = participantecliente.codigoparty)\n Filter: ((nick)::text\n~~* '%ASITUR%'::text)\n(141 filas)\n\n************************************************************************\n*************************************************************************\n\n\n-- \nPiñeiro <[email protected]>\n",
"msg_date": "Tue, 12 Sep 2006 20:28:28 +0200",
"msg_from": "=?ISO-8859-1?Q?Pi=F1eiro?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Re: Performance problem with Sarge compared"
},
{
"msg_contents": "\n\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Piñeiro\n> > TIP 6: explain analyze is your friend\n> Well, yes, it is a friend, but as the select at postgre Sarge version\n> never finished I can't use a explain analyze. I show you the explain,\n> with the hope that someone has any idea, but i think that \n> this is almost\n> indecipherable (if you want the Woody ones i can post the explain\n> analyze). Thanks in advance.\n\nDoes the machine run out of disk space every time? Is it possible to try\nthe query on a different machine with more hard drive room? An explain\nanalyze of the slow plan will be much more helpful than an explain, even if\nits from a different machine. If its generating a large temp file, it is\nanother sign that the query is doing some kind of large cross product. \n\n",
"msg_date": "Tue, 12 Sep 2006 14:05:35 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Performance problem with Sarge compared"
},
{
"msg_contents": "Pi�eiro wrote:\n> El mar, 12-09-2006 a las 11:20 -0500, Scott Marlowe escribi�:\n> > As Tom asked, post the explain analyze output for this query. I'm\n> > guessing there'll be a stage that is creating millions (possibly upon\n> > millions) of rows from a cross product.\n> > \n\n> Well, yes, it is a friend, but as the select at postgre Sarge version\n> never finished I can't use a explain analyze. I show you the explain,\n> with the hope that someone has any idea, but i think that this is almost\n> indecipherable (if you want the Woody ones i can post the explain\n> analyze). Thanks in advance.\n\nThe only advice I can give you at this point is to provide both the\nEXPLAIN output and the query itself in formats more easily readable for\nthose that could help you. This EXPLAIN you post below is totally\nwhitespace-mangled, making it much harder to read than it should be; and\nthe query you posted, AFAICS, is a continuous stream of lowercase\nletters. The EXPLAIN would be much better if you posted it as an\nattachment; and the query would be much better if you separated the\nlogically distinct clauses in different lines, with clean indentation,\nusing uppercase for the SQL keywords (SELECT, FROM, WHERE, etc). That\nway you're more likely to get useful responses.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 12 Sep 2006 15:08:22 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Performance problem with Sarge compared"
}
] |
[
{
"msg_contents": "Lazlo, \n\nYou can ignore tuning postgres and trying to use indexes, your problem is a bad hardware / OS configuration. The disks you are using should read 4-5 times faster than they are doing. Look to the SATA chipset driver in your FreeBSD config - perhaps upgrading your kernel would help.\n\nStill, the most you should expect is 5-6 times faster query than before. The data in your table is slightly larger than RAM. When you took it out of the DBMS it was smaller than RAM, so it fit in the I/O cache.\n\nWith a text scan query you are stuck with a seqscan unless you use a text index like tsearch. Buy more disks and a Raid controller and use Raid5 or Raid10.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tLaszlo Nagy [mailto:[email protected]]\nSent:\tTuesday, September 12, 2006 08:16 AM Eastern Standard Time\nTo:\tLuke Lonergan; [email protected]\nSubject:\tRe: [PERFORM] Poor performance on seq scan\n\nLuke Lonergan írta:\n> Lazlo, \n>\n> \n>> Meanwhile, \"iostat 5\" gives something like this:\n>>\n>> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n>> 1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0\n>> 0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0\n>> \n>\n> This is your problem. Do the following and report the results here:\n>\n> Take the number of GB of memory you have (say 2 for 2GB), multiply it by\n> 250000. This is the number of 8KB pages you can fit in twice your ram.\n> Let's say you have 2GB - the result is 500,000.\n>\n> Use that number to do the following test on your database directory:\n> time bash -c \"dd if=/dev/zero of=/<dbdir>/bigfile bs=8k\n> count=<number_from_above> && sync\"\n> \nI have 1GB RAM. The data directory is in /usr/local/pgsql/data. The root \nof this fs is /usr.\n\ntime sh -c \"dd if=/dev/zero of=/usr/test/bigfile bs=8k count=250000 && \nsync \"\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 48.030627 secs (42639460 bytes/sec)\n0.178u 8.912s 0:48.31 18.7% 9+96k 37+15701io 0pf+0w\n\n\n> Then do this:\n> time bash -c \"dd if=/<dbdir>/bigfile of=/dev/null bs=8k\"\n> \ntime sh -c \"dd if=/usr/test/bigfile of=/dev/null bs=8k\"\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 145.293473 secs (14095609 bytes/sec)\n0.110u 5.857s 2:25.31 4.1% 10+99k 32923+0io 0pf+0w\n\nAt this point I thought there was another process reading doing I/O so I \nretried:\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 116.395211 secs (17595226 bytes/sec)\n0.137u 5.658s 1:56.51 4.9% 10+103k 29082+0io 0pf+1w\n\nand again:\n\n250000+0 records in\n250000+0 records out\n2048000000 bytes transferred in 120.198224 secs (17038521 bytes/sec)\n0.063u 5.780s 2:00.21 4.8% 10+98k 29776+0io 0pf+0w\n\nThis is a mirrored disk with two SATA disks. In theory, writing should \nbe slower than reading. Is this a hardware problem? Or is it that \"sync\" \ndid not do the sync?\n\n Laszlo\n\n\n\n",
"msg_date": "Tue, 12 Sep 2006 12:21:45 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
}
] |
[
{
"msg_contents": "Hello All\n\nI am getting this message in my log files for my database.\n \nLOG: out of file descriptors: Too many open files; release and retry.\n\nAt some point the memomy didn't get released and the postmaster reset itself terminating all client connections. I am not sure what direction to go. I can increase the file-max in the kernel but it looks reasonably sized already . Or decrease the max_file_per_process. Has anyone on the list encountered this issue. I am running Postgres 7.4.7.\n\n\nThanks\n\nJohn Allgood\n",
"msg_date": "Tue, 12 Sep 2006 15:33:08 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "On Tue, Sep 12, 2006 at 03:33:08PM -0400, [email protected] wrote:\n> Hello All\n> \n> I am getting this message in my log files for my database.\n> \n> LOG: out of file descriptors: Too many open files; release and retry.\n> \n> At some point the memomy didn't get released and the postmaster reset itself terminating all client connections. I am not sure what direction to go. I can increase the file-max in the kernel but it looks reasonably sized already . Or decrease the max_file_per_process. Has anyone on the list encountered this issue. I am running Postgres 7.4.7.\n\nPostgreSQL could be using somewhere around as much as\nmax_files_per_process * ( max_connections + 5 ), so make sure that\nmatches file-max (the + 5 is because there are non-connection processes\nsuch as the bgwriter).\n\nIf that looks OK, some file descriptors might have been left around from\nthe crash... I know this can happen with shared memory segments. It\nnormally won't happen with file descriptors, but perhaps it is possible.\nIf that's the case, a reboot would certainly fix it.\n\nBTW, you should upgrade to the latest 7.4 release.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 13 Sep 2006 01:35:03 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "I am having problems performing a join on two large tables. It seems to only\nwant to use a sequential scan on the join, but that method seems to be slower\nthan an index scan. I've never actually had it complete the sequential scan\nbecause I stop it after 24+ hours. I've run joins against large tables before\nand an index scan was always faster (a few hours at the most).\n\nHere is some information on the two tables:\ndata=# analyze view_505;\nANALYZE\ndata=# analyze r3s169;\nANALYZE\ndata=# \\d view_505\n Table \"public.view_505\"\n Column | Type | Modifiers\n------------------+-----------------------+-----------\ndsiacctno | numeric |\nname | boolean |\ntitle | boolean |\ncompany | boolean |\nzip4 | boolean |\nacceptcall | boolean |\nphonedirect | smallint |\nphonetollfree | smallint |\nfax | smallint |\neditdrop | boolean |\npostsuppress | boolean |\nfirstnameinit | boolean |\nprefix | integer |\ncrrt | boolean |\ndpbc | boolean |\nexecutive | integer |\naddressline | integer |\nmultibuyer | integer |\nactivemultibuyer | integer |\nactive | boolean |\nemails | integer |\ndomains | integer |\nzip1 | character varying(1) |\nzip3 | character varying(3) |\ngender | character varying(1) |\ntopdomains | bit varying |\ncity | character varying(35) |\nstate | character varying(35) |\nzip | character varying(20) |\ncountry | character varying(30) |\nselects | bit varying |\nfiles | integer[] |\nsics | integer[] |\ncustdate | date |\nIndexes:\n \"view_505_city\" btree (city)\n \"view_505_dsiacctno\" btree (dsiacctno)\n \"view_505_state\" btree (state)\n \"view_505_zip\" btree (zip)\n \"view_505_zip1\" btree (zip1)\n \"view_505_zip3\" btree (zip3)\n\ndata=# \\d r3s169\n Table \"public.r3s169\"\n Column | Type | Modifiers\n-------------+------------------------+-----------\ndsiacctno | numeric |\nfileid | integer |\ncustomerid | character varying(20) |\nemail | character varying(100) |\nsic2 | character varying(2) |\nsic4 | character varying(4) |\nsic6 | character varying(6) |\ncustdate | date |\ninqdate | date |\neentrydate | date |\nesubdate | date |\nefaildate | date |\neunlistdate | date |\npentrydate | date |\npsubdate | date |\npunlistdate | date |\npexpiredate | date |\nlastupdate | date |\nemaildrop | numeric |\nsic8 | character varying(8) |\nIndexes:\n \"r3s169_dsiacctno\" btree (dsiacctno)\n\ndata=# select count(*) from view_505;\n count\n-----------\n112393845\n(1 row)\n\ndata=# select count(*) from r3s169;\n count\n-----------\n285230264\n(1 row)\n\n\nHere is what EXPLAIN says:\n\ndata=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\nFROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\ns.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\nMerge Join (cost=293767607.69..305744319.52 rows=285392608 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Sort (cost=127304933.87..127585815.71 rows=112352736 width=20)\n Sort Key: v.dsiacctno\n -> Seq Scan on view_505 v (cost=100000000.00..104604059.36\nrows=112352736 width=20)\n -> Sort (cost=166462673.82..167176155.34 rows=285392608 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106875334.08\nrows=285392608 width=17)\n(8 rows)\n\n\n\nI can't really do and EXPLAIN ANALYZE because the query never really finishes.\nAlso, I use a cursor to loop through the data. 
view_505 isn't a pgsql view, its\njust how we decided to name the table. There is a one to many\nrelationship between\nview_505 and r3s169.\n\nSince enable_seqscan is off, my understanding is that in order for the query\nplanner to user a sequential scan it must think there is no other alternative.\nBoth sides are indexed and anaylzed, so that confuses me a little.\n\nI tried it on a smaller sample set of the data and it works fine:\n\ndata=# select * into r3s169_test from r3s169 limit 1000000;\nSELECT\ndata=# select * into view_505_test from view_505 limit 1000000;\nSELECT\ndata=# create index r3s169_test_dsiacctno on r3s169_test (dsiacctno);\nCREATE INDEX\ndata=# create index view_505_test_dsiacctno on view_505_test (dsiacctno);\nCREATE INDEX\ndata=# analyze r3s169_test;\nANALYZE\ndata=# analyze view_505_test;\nANALYZE\ndata=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\nFROM s.custdate) FROM view_505_test v INNER JOIN r3s169_test s ON\nv.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\nMerge Join (cost=0.00..1976704.69 rows=1000187 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_test_dsiacctno on view_505_test v\n(cost=0.00..1676260.67 rows=999985 width=20)\n -> Index Scan using r3s169_test_dsiacctno on r3s169_test s\n(cost=0.00..1089028.66 rows=1000186 width=17)\n(4 rows)\n\n\nIs there anything I'm missing that is preventing it from using the index? It\njust seems weird to me that other joins like this work fine and fast\nwith indexes,\nbut this one won't.\n",
"msg_date": "Tue, 12 Sep 2006 16:17:34 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance With Joins on Large Tables"
},
{
"msg_contents": "On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote:\n> data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\n> v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\n> FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\n> s.dsiacctno;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Merge Join (cost=293767607.69..305744319.52 rows=285392608 width=11)\n> Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n> -> Sort (cost=127304933.87..127585815.71 rows=112352736 width=20)\n> Sort Key: v.dsiacctno\n> -> Seq Scan on view_505 v (cost=100000000.00..104604059.36\n> rows=112352736 width=20)\n> -> Sort (cost=166462673.82..167176155.34 rows=285392608 width=17)\n> Sort Key: s.dsiacctno\n> -> Seq Scan on r3s169 s (cost=100000000.00..106875334.08\n> rows=285392608 width=17)\n> (8 rows)\n> \n> \n> Since enable_seqscan is off, my understanding is that in order for the query\n> planner to user a sequential scan it must think there is no other \n> alternative.\n> Both sides are indexed and anaylzed, so that confuses me a little.\n> \n> I tried it on a smaller sample set of the data and it works fine:\n\nActually, enable_seqscan=off just adds a fixed overhead to the seqscan\ncost estimate. That's why the cost for the seqscans in that plan starts\nat 100000000. I've suggested changing that to a variable overhead based\non the expected rowcount, but the counter-argument was that anyone with\nso much data that the fixed amount wouldn't work would most likely be\nhaving bigger issues anyway.\n\nOther things you can try to get the index scan back would be to reduce\nrandom_page_cost and to analyze the join fields in those tables with a\nhigher statistics target (though I'm not 100% certain the join cost\nestimator actually takes that into account). Or if you don't mind\npatching your source code, it wouldn't be difficult to make\nenable_seqscan use a bigger 'penalty value' than 10000000.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 13 Sep 2006 01:41:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "On 9/13/06, Jim C. Nasby <[email protected]> wrote:\n> On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote:\n> > data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\n> > v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\n> > FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\n> > s.dsiacctno;\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------\n> > Merge Join (cost=293767607.69..305744319.52 rows=285392608 width=11)\n> > Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n> > -> Sort (cost=127304933.87..127585815.71 rows=112352736 width=20)\n> > Sort Key: v.dsiacctno\n> > -> Seq Scan on view_505 v (cost=100000000.00..104604059.36\n> > rows=112352736 width=20)\n> > -> Sort (cost=166462673.82..167176155.34 rows=285392608 width=17)\n> > Sort Key: s.dsiacctno\n> > -> Seq Scan on r3s169 s (cost=100000000.00..106875334.08\n> > rows=285392608 width=17)\n> > (8 rows)\n> >\n> >\n> > Since enable_seqscan is off, my understanding is that in order for the query\n> > planner to user a sequential scan it must think there is no other\n> > alternative.\n> > Both sides are indexed and anaylzed, so that confuses me a little.\n> >\n> > I tried it on a smaller sample set of the data and it works fine:\n>\n> Actually, enable_seqscan=off just adds a fixed overhead to the seqscan\n> cost estimate. That's why the cost for the seqscans in that plan starts\n> at 100000000. I've suggested changing that to a variable overhead based\n> on the expected rowcount, but the counter-argument was that anyone with\n> so much data that the fixed amount wouldn't work would most likely be\n> having bigger issues anyway.\n>\n> Other things you can try to get the index scan back would be to reduce\n> random_page_cost and to analyze the join fields in those tables with a\n> higher statistics target (though I'm not 100% certain the join cost\n> estimator actually takes that into account). Or if you don't mind\n> patching your source code, it wouldn't be difficult to make\n> enable_seqscan use a bigger 'penalty value' than 10000000.\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\nThanks for the tip. I lowered random_page_cost and got these results:\n\ndata=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\nFROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\ns.dsiacctno;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..20921221.49 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v\n(cost=0.00..2838595.79 rows=112393848 width=20)\n -> Index Scan using r3s169_dsiacctno on r3s169 s\n(cost=0.00..7106203.68 rows=285230272 width=17)\n(4 rows)\n\nThat seems to have done it. Are there any side effects to this\nchange? I read about random_page_cost in the documentation and it\nseems like this is strictly for planning. All the tables on this\ndatabase will be indexed and of a size similar to these two, so I\ndon't see it causing any other problems. Though I would check though\n:)\n",
"msg_date": "Wed, 13 Sep 2006 08:49:24 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "On Wed, 2006-09-13 at 08:49 -0600, Joshua Marsh wrote:\n> That seems to have done it. Are there any side effects to this\n> change? I read about random_page_cost in the documentation and it\n> seems like this is strictly for planning. All the tables on this\n> database will be indexed and of a size similar to these two, so I\n> don't see it causing any other problems. Though I would check though\n> :)\n> \n\nRight, it's just used for planning. Avoid setting it too low, if it's\nbelow about 2.0 you would most likely see some very strange plans.\nCertainly it doesn't make sense at all to set it below 1.0, since that\nis saying it's cheaper to get a random page than a sequential one.\n\nWhat was your original random_page_cost, and what is the new value you\nset it to?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 13 Sep 2006 09:07:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "On 9/13/06, Jeff Davis <[email protected]> wrote:\n> On Wed, 2006-09-13 at 08:49 -0600, Joshua Marsh wrote:\n> > That seems to have done it. Are there any side effects to this\n> > change? I read about random_page_cost in the documentation and it\n> > seems like this is strictly for planning. All the tables on this\n> > database will be indexed and of a size similar to these two, so I\n> > don't see it causing any other problems. Though I would check though\n> > :)\n> >\n>\n> Right, it's just used for planning. Avoid setting it too low, if it's\n> below about 2.0 you would most likely see some very strange plans.\n> Certainly it doesn't make sense at all to set it below 1.0, since that\n> is saying it's cheaper to get a random page than a sequential one.\n>\n> What was your original random_page_cost, and what is the new value you\n> set it to?\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n>\n\nI tried it at several levels. It was initially at 4 (the default). I\ntried 3 and 2 with no changes. When I set it to 1, it used and index\non view_505 but no r3s169:\n\ndata=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\nFROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\ns.dsiacctno;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Merge Join (cost=154730044.01..278318711.49 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v\n(cost=0.00..111923570.63 rows=112393848 width=20)\n -> Sort (cost=154730044.01..155443119.69 rows=285230272 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\nrows=285230272 width=17)\n\n\nSetting to 0.1 finally gave me the result I was looking for. I know\nthat the index scan is faster though. The seq scan never finished (i\nkilled it after 24+ hours) and I'm running the query now with indexes\nand it's progressing nicely (will probably take 4 hours).\n",
"msg_date": "Wed, 13 Sep 2006 10:19:04 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "On Wed, 2006-09-13 at 10:19 -0600, Joshua Marsh wrote:\n> > Right, it's just used for planning. Avoid setting it too low, if it's\n> > below about 2.0 you would most likely see some very strange plans.\n> > Certainly it doesn't make sense at all to set it below 1.0, since that\n> > is saying it's cheaper to get a random page than a sequential one.\n> >\n> > What was your original random_page_cost, and what is the new value you\n> > set it to?\n> >\n> > Regards,\n> > Jeff Davis\n> >\n> >\n> >\n> >\n> \n> I tried it at several levels. It was initially at 4 (the default). I\n> tried 3 and 2 with no changes. When I set it to 1, it used and index\n> on view_505 but no r3s169:\n> \n> data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\n> v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH\n> FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =\n> s.dsiacctno;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=154730044.01..278318711.49 rows=285230272 width=11)\n> Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n> -> Index Scan using view_505_dsiacctno on view_505 v\n> (cost=0.00..111923570.63 rows=112393848 width=20)\n> -> Sort (cost=154730044.01..155443119.69 rows=285230272 width=17)\n> Sort Key: s.dsiacctno\n> -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\n> rows=285230272 width=17)\n> \n> \n> Setting to 0.1 finally gave me the result I was looking for. I know\n> that the index scan is faster though. The seq scan never finished (i\n> killed it after 24+ hours) and I'm running the query now with indexes\n> and it's progressing nicely (will probably take 4 hours).\n\nHmm... that sounds bad. I'm sure your system will always choose indexes\nwith that value.\n\nIs it overestimating the cost of using indexes or underestimating the\ncost of a seq scan, or both? Maybe explain with the 0.1 setting will\nhelp?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 13 Sep 2006 10:04:14 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "\nSetting to 0.1 finally gave me the result I was looking for. I know\nthat the index scan is faster though. The seq scan never finished (i\nkilled it after 24+ hours) and I'm running the query now with indexes\nand it's progressing nicely (will probably take 4 hours).\n\n\nIn regards to \"progressing nicely (will probably take 4 hours)\" - is\nthis just an estimate or is there some way to get progress status (or\nsomething similar- e.g. on step 6 of 20 planned steps) on a query in pg?\nI looked through Chap 24, Monitoring DB Activity, but most of that looks\nlike aggregate stats. Trying to relate these to a particular query\ndoesn't really seem feasible.\n\nThis would be useful in the case where you have a couple of long running\ntransactions or stored procedures doing analysis and you'd like to give\nthe user some feedback where you're at. \n\nThanks,\n\nBucky\n",
"msg_date": "Wed, 13 Sep 2006 14:19:04 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Query Progress (was: Performance With Joins on Large Tables)"
},
{
"msg_contents": "> Is there anything I'm missing that is preventing it from using the index?\nIt\n> just seems weird to me that other joins like this work fine and fast\n> with indexes,\n> but this one won't.\n\n\nDid You consider clustering both tables on the dsiacctno index?\n\nI just checked that for a 4M rows table even with enable_seqscan=on and\ndefault *page_cost on PG 8.1.4 an index scan is being chosen for\nselect * from table order by serial_pkey_field\n\n\nThis is essentially the question in Your case - sort it, or get it sorted\nvia the index at the expense of more random IO.\n\nI think clustering should work for You, but I am no expert, check with\nothers.\n\nGreetings\nMarcin\n\n",
"msg_date": "Wed, 13 Sep 2006 20:39:59 +0200",
"msg_from": "\"Marcin Mank\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
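For reference, a sketch of what that clustering would look like on the 8.0/8.1-era servers discussed here (index names as shown in the plans above). CLUSTER rewrites the table under an exclusive lock, so it is effectively an offline operation, and an ANALYZE afterwards lets the planner see the new physical ordering:

    CLUSTER view_505_dsiacctno ON view_505;  -- pre-8.3 syntax: CLUSTER indexname ON tablename
    CLUSTER r3s169_dsiacctno ON r3s169;
    ANALYZE view_505;
    ANALYZE r3s169;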
{
"msg_contents": "On 9/13/06, Bucky Jordan <[email protected]> wrote:\n>\n> Setting to 0.1 finally gave me the result I was looking for. I know\n> that the index scan is faster though. The seq scan never finished (i\n> killed it after 24+ hours) and I'm running the query now with indexes\n> and it's progressing nicely (will probably take 4 hours).\n>\n>\n> In regards to \"progressing nicely (will probably take 4 hours)\" - is\n> this just an estimate or is there some way to get progress status (or\n> something similar- e.g. on step 6 of 20 planned steps) on a query in pg?\n> I looked through Chap 24, Monitoring DB Activity, but most of that looks\n> like aggregate stats. Trying to relate these to a particular query\n> doesn't really seem feasible.\n>\n> This would be useful in the case where you have a couple of long running\n> transactions or stored procedures doing analysis and you'd like to give\n> the user some feedback where you're at.\n>\n> Thanks,\n>\n> Bucky\n>\n\nI do it programmatically, not through postgresql. I'm using a cursor,\nso I can keep track of how many records I've handled. I'm not aware\nof a way to do this in Postgresql.\n",
"msg_date": "Wed, 13 Sep 2006 12:56:58 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Progress (was: Performance With Joins on Large Tables)"
},
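A rough illustration of that cursor-based approach at the SQL level (names taken from the thread, batch size arbitrary); the client adds each batch's row count to its own progress counter, since the server does not report per-query progress:

    BEGIN;
    DECLARE bigjoin CURSOR FOR
        SELECT v.phonedirect, v.editdrop, s.custdate
        FROM view_505 v JOIN r3s169 s ON v.dsiacctno = s.dsiacctno;

    -- Repeat until fewer rows than requested come back;
    -- the application updates its progress estimate after each batch.
    FETCH 100000 FROM bigjoin;

    CLOSE bigjoin;
    COMMIT;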
{
"msg_contents": "> Hmm... that sounds bad. I'm sure your system will always choose indexes\n> with that value.\n>\n> Is it overestimating the cost of using indexes or underestimating the\n> cost of a seq scan, or both? Maybe explain with the 0.1 setting will\n> help?\n>\n> Regards,\n> Jeff Davis\n\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..51808909.26 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v\n(cost=0.00..12755411.69 rows=112393848 width=20)\n -> Index Scan using r3s169_dsiacctno on r3s169 s\n(cost=0.00..32357747.90 rows=285230272 width=17)\n(4 rows)\n\nThis is what I wanted, two index scans. Just to give you an idea of\nthe difference in time, this plan would allow me to process 100,000\nrecords ever few seconds, while the sequential scan would only\nproduces 100,000 every 10 minutes.\n",
"msg_date": "Wed, 13 Sep 2006 13:27:06 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "Jeff Davis wrote:\n> Is it overestimating the cost of using indexes or underestimating the\n> cost of a seq scan, or both? Maybe explain with the 0.1 setting will\n> help?\n> \n\nIf enable_seqscan is off, and cost is still set to 100000000, it could \nbe that it's quite simply forcibly underestimating the cost of a seqscan \nin this case.\n\nIf enable_secscan was off for the mentioned plan, it'd be interesting to \nsee if things would be saner with seqscans enabled, and a more \nreasonable random page cost. If more 'sane' values still produce the \ndesired plan, it might be better for other plans etc.\n\nTerje\n\n",
"msg_date": "Wed, 13 Sep 2006 21:42:50 +0200",
"msg_from": "Terje Elde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "On 9/13/06, Terje Elde <[email protected]> wrote:\n> Jeff Davis wrote:\n> > Is it overestimating the cost of using indexes or underestimating the\n> > cost of a seq scan, or both? Maybe explain with the 0.1 setting will\n> > help?\n> >\n>\n> If enable_seqscan is off, and cost is still set to 100000000, it could\n> be that it's quite simply forcibly underestimating the cost of a seqscan\n> in this case.\n>\n> If enable_secscan was off for the mentioned plan, it'd be interesting to\n> see if things would be saner with seqscans enabled, and a more\n> reasonable random page cost. If more 'sane' values still produce the\n> desired plan, it might be better for other plans etc.\n>\n> Terje\n>\n>\n\nI turned enable_seqscan to off and got similar results.\n\nrandom_age_cost at 4.0:\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Merge Join (cost=293737539.01..301430139.34 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Sort (cost=127311593.00..127592577.62 rows=112393848 width=20)\n Sort Key: v.dsiacctno\n -> Seq Scan on view_505 v (cost=100000000.00..104602114.48\nrows=112393848 width=20)\n -> Sort (cost=166425946.01..167139021.69 rows=285230272 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\nrows=285230272 width=17)\n(8 rows)\n\n\n\nrandom_page_cost at 3.0:\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Merge Join (cost=288303269.01..295995869.34 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Sort (cost=125775957.00..126056941.62 rows=112393848 width=20)\n Sort Key: v.dsiacctno\n -> Seq Scan on view_505 v (cost=100000000.00..104602114.48\nrows=112393848 width=20)\n -> Sort (cost=162527312.01..163240387.69 rows=285230272 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\nrows=285230272 width=17)\n(8 rows)\n\n\n\nrandom_age_cost ad 2,0:\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Merge Join (cost=282868999.01..290561599.34 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Sort (cost=124240321.00..124521305.62 rows=112393848 width=20)\n Sort Key: v.dsiacctno\n -> Seq Scan on view_505 v (cost=100000000.00..104602114.48\nrows=112393848 width=20)\n -> Sort (cost=158628678.01..159341753.69 rows=285230272 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\nrows=285230272 width=17)\n(8 rows)\n\n\n\nrandom_page_cost at 1.0:\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY 
PLAN\n------------------------------------------------------------------------------------------------------------\n Merge Join (cost=154730044.01..274040257.41 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v\n(cost=0.00..111923570.63 rows=112393848 width=20)\n -> Sort (cost=154730044.01..155443119.69 rows=285230272 width=17)\n Sort Key: s.dsiacctno\n -> Seq Scan on r3s169 s (cost=100000000.00..106873675.72\nrows=285230272 width=17)\n(6 rows)\n\n\n\nrandom_page_cost at 0.1:\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER\nJOIN r3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..51808909.26 rows=285230272 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v\n(cost=0.00..12755411.69 rows=112393848 width=20)\n -> Index Scan using r3s169_dsiacctno on r3s169 s\n(cost=0.00..32357747.90 rows=285230272 width=17)\n(4 rows)\n\nI have a suspicion that pgsql isn't tuned to properly deal with tables\nof this size. Are there other things I should look at when dealing\nwith a database of this size?\n",
"msg_date": "Wed, 13 Sep 2006 14:27:47 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "\"Joshua Marsh\" <[email protected]> writes:\n> I have a suspision that pgsql isn't tuned to properly deal with tables\n> of this size.\n\nActually, it is. Most of the planner complaints we get are from people\nwhose tables fit in memory and they find that the default planner\nbehavior doesn't apply real well to that case. I find your\nindexscan-is-faster-than-sort results pretty suspicious for large\ntables. Are the tables perhaps nearly in order by the dsiacctno fields?\nIf that were the case, and the planner were missing it for some reason,\nthese results would be plausible.\n\nBTW, what are you using for work_mem, and how does that compare to your\navailable RAM?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 17:09:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables "
},
{
"msg_contents": "> Are the tables perhaps nearly in order by the dsiacctno fields?\n> If that were the case, and the planner were missing it for some reason,\n> these results would be plausible.\n>\n> BTW, what are you using for work_mem, and how does that compare to your\n> available RAM?\n>\n> regards, tom lane\n>\n\nMy assumption would be they are in exact order. The text file I used\nin the COPY statement had them in order, so if COPY preserves that in\nthe database, then it is in order.\n\nThe system has 8GB of ram and work_mem is set to 256MB.\n\nI'll see if I can't make time to run the sort-seqscan method so we can\nhave an exact time to work with.\n",
"msg_date": "Wed, 13 Sep 2006 15:18:49 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "\"Joshua Marsh\" <[email protected]> writes:\n>> Are the tables perhaps nearly in order by the dsiacctno fields?\n\n> My assumption would be they are in exact order. The text file I used\n> in the COPY statement had them in order, so if COPY preserves that in\n> the database, then it is in order.\n\nAh. So the question is why the planner isn't noticing that. What do\nyou see in the pg_stats view for the two dsiacctno fields --- the\ncorrelation field in particular?\n\n> The system has 8GB of ram and work_mem is set to 256MB.\n\nSeems reasonable enough. BTW, I don't think you've mentioned exactly\nwhich PG version you're using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 17:23:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables "
},
{
"msg_contents": "On 9/13/06, Tom Lane <[email protected]> wrote:\n> \"Joshua Marsh\" <[email protected]> writes:\n> >> Are the tables perhaps nearly in order by the dsiacctno fields?\n>\n> > My assumption would be they are in exact order. The text file I used\n> > in the COPY statement had them in order, so if COPY preserves that in\n> > the database, then it is in order.\n>\n> Ah. So the question is why the planner isn't noticing that. What do\n> you see in the pg_stats view for the two dsiacctno fields --- the\n> correlation field in particular?\n\n\nHere are the results:\ndata=# select tablename, attname, n_distinct, avg_width, correlation\nfrom pg_stats where tablename in ('view_505', 'r3s169') and attname =\n'dsiacctno';\n tablename | attname | n_distinct | avg_width | correlation\n-----------+-----------+------------+-----------+-------------\n view_505 | dsiacctno | -1 | 13 | -0.13912\n r3s169 | dsiacctno | 44156 | 13 | -0.126824\n(2 rows)\n\n\nSomeone suggested CLUSTER to make sure they are in fact ordered, I can\ntry that to and let everyone know the results.\n\n> > The system has 8GB of ram and work_mem is set to 256MB.\n>\n> Seems reasonable enough. BTW, I don't think you've mentioned exactly\n> which PG version you're using?\n>\n> regards, tom lane\n>\n\nI am using 8.0.3.\n",
"msg_date": "Wed, 13 Sep 2006 15:45:12 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
},
{
"msg_contents": "\"Joshua Marsh\" <[email protected]> writes:\n>>> On 9/13/06, Tom Lane <[email protected]> wrote:\n>>>> Are the tables perhaps nearly in order by the dsiacctno fields?\n>> \n>>> My assumption would be they are in exact order. The text file I used\n>>> in the COPY statement had them in order, so if COPY preserves that in\n>>> the database, then it is in order.\n>> \n>> Ah. So the question is why the planner isn't noticing that. What do\n>> you see in the pg_stats view for the two dsiacctno fields --- the\n>> correlation field in particular?\n\n> Here are the results:\n> data=# select tablename, attname, n_distinct, avg_width, correlation\n> from pg_stats where tablename in ('view_505', 'r3s169') and attname =\n> 'dsiacctno';\n> tablename | attname | n_distinct | avg_width | correlation\n> -----------+-----------+------------+-----------+-------------\n> view_505 | dsiacctno | -1 | 13 | -0.13912\n> r3s169 | dsiacctno | 44156 | 13 | -0.126824\n> (2 rows)\n\nWow, that correlation value is *way* away from order. If they were\nreally in exact order by dsiacctno then I'd expect to see 1.0 in\nthat column. Can you take another look at the tables and confirm\nthe ordering? Does the correlation change if you do an ANALYZE on the\ntables? (Some small change is to be expected due to random sampling,\nbut this is way off.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 18:08:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance With Joins on Large Tables "
},
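If random sampling were suspected, one option (not something asked for in the thread, just a hedged sketch) is to raise the per-column statistics target before re-running ANALYZE, so the correlation estimate is based on a larger sample; the default target in this era was 10:

    ALTER TABLE view_505 ALTER COLUMN dsiacctno SET STATISTICS 100;
    ALTER TABLE r3s169 ALTER COLUMN dsiacctno SET STATISTICS 100;
    ANALYZE view_505 (dsiacctno);
    ANALYZE r3s169 (dsiacctno);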
{
"msg_contents": ">\n> Wow, that correlation value is *way* away from order. If they were\n> really in exact order by dsiacctno then I'd expect to see 1.0 in\n> that column. Can you take another look at the tables and confirm\n> the ordering? Does the correlation change if you do an ANALYZE on the\n> tables? (Some small change is to be expected due to random sampling,\n> but this is way off.)\n>\n> regards, tom lane\n\n\nThanks for pointing that out. Generally we load the tables via COPY and\nthen never touch the data. Because of the slowdown, I have been updating\ntuples. I reloaded it from scratch, set enable_seqscan=off and\nrandom_access_age=4 and I got the results I was looking for:\n\n\ndata=# analyze view_505;\nANALYZE\ndata=# analyze r3s169;\nANALYZE\ndata=# select tablename, attname, n_distinct, avg_width, correlation from\npg_stats where tablename in ('view_505', 'r3s169') and attname =\n'dsiacctno';\n tablename | attname | n_distinct | avg_width | correlation\n-----------+-----------+------------+-----------+-------------\n view_505 | dsiacctno | -1 | 13 | 1\n r3s169 | dsiacctno | 42140 | 13 | 1\n(2 rows)\n\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM\nv.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER JOIN\nr3s169 s on v.dsiacctno = s.dsiacctno;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..20099712.79 rows=285153952 width=11)\n Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v (cost=\n0.00..5147252.74 rows=112282976 width=20)\n -> Index Scan using r3s169_dsiacctno on r3s169 s\n(cost=0.00..8256331.47rows=285153952 width=17)\n(4 rows)\n\n Thanks for you help everyone.\n\nWow, that correlation value is *way* away from order. If they werereally in exact order by dsiacctno then I'd expect to see \n1.0 inthat column. Can you take another look at the tables and confirmthe ordering? Does the correlation change if you do an ANALYZE on thetables? (Some small change is to be expected due to random sampling,\nbut this is way off.) regards, tom lane\n \nThanks for pointing that out. Generally we load the tables via COPY and then never touch the data. Because of the slowdown, I have been updating tuples. I reloaded it from scratch, set enable_seqscan=off and random_access_age=4 and I got the results I was looking for:\n\n \n \ndata=# analyze view_505;ANALYZEdata=# analyze r3s169;ANALYZEdata=# select tablename, attname, n_distinct, avg_width, correlation from pg_stats where tablename in ('view_505', 'r3s169') and attname = 'dsiacctno';\n tablename | attname | n_distinct | avg_width | correlation-----------+-----------+------------+-----------+------------- view_505 | dsiacctno | -1 | 13 | 1 r3s169 | dsiacctno | 42140 | 13 | 1\n(2 rows)\ndata=# explain SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM v.custdate), EXTRACT (EPOCH FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s on v.dsiacctno = s.dsiacctno; QUERY PLAN\n---------------------------------------------------------------------------------------------------------- Merge Join (cost=0.00..20099712.79 rows=285153952 width=11) Merge Cond: (\"outer\".dsiacctno = \"inner\".dsiacctno)\n -> Index Scan using view_505_dsiacctno on view_505 v (cost=0.00..5147252.74 rows=112282976 width=20) -> Index Scan using r3s169_dsiacctno on r3s169 s (cost=0.00..8256331.47 rows=285153952 width=17)\n(4 rows)\n Thanks for you help everyone.",
"msg_date": "Thu, 14 Sep 2006 08:18:30 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance With Joins on Large Tables"
}
] |
[
{
"msg_contents": "Hi\n\nI am trying to run sql-bench against PostgreSQL 8.1.4 on Linux.\nSome of the insert tests seems to be ver slow \n\nFor example: select_join_in\n\nAre there any tuning parameters that can be changed to speed these queries? Or are these queries\nespecially tuned to show MySQL's stgrenths?\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 13 Sep 2006 05:24:14 -0700 (PDT)",
"msg_from": "yoav x <[email protected]>",
"msg_from_op": true,
"msg_subject": "sql-bench"
},
{
"msg_contents": "All of the tuning parameters would affect all queries\n\nshared buffers, wal buffers, effective cache, to name a few\n\n--dc--\nOn 13-Sep-06, at 8:24 AM, yoav x wrote:\n\n> Hi\n>\n> I am trying to run sql-bench against PostgreSQL 8.1.4 on Linux.\n> Some of the insert tests seems to be ver slow\n>\n> For example: select_join_in\n>\n> Are there any tuning parameters that can be changed to speed these \n> queries? Or are these queries\n> especially tuned to show MySQL's stgrenths?\n>\n>\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around\n> http://mail.yahoo.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Wed, 13 Sep 2006 08:27:12 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "So why are these queries so slow in PG?\n\n\n--- Dave Cramer <[email protected]> wrote:\n\n> All of the tuning parameters would affect all queries\n> \n> shared buffers, wal buffers, effective cache, to name a few\n> \n> --dc--\n> On 13-Sep-06, at 8:24 AM, yoav x wrote:\n> \n> > Hi\n> >\n> > I am trying to run sql-bench against PostgreSQL 8.1.4 on Linux.\n> > Some of the insert tests seems to be ver slow\n> >\n> > For example: select_join_in\n> >\n> > Are there any tuning parameters that can be changed to speed these \n> > queries? Or are these queries\n> > especially tuned to show MySQL's stgrenths?\n> >\n> >\n> >\n> >\n> > __________________________________________________\n> > Do You Yahoo!?\n> > Tired of spam? Yahoo! Mail has the best spam protection around\n> > http://mail.yahoo.com\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 13 Sep 2006 05:50:13 -0700 (PDT)",
"msg_from": "yoav x <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "First of all you are going to have to show use what these queries are \nexactly, what the machine is you are running on (CPU, memory, and \ndisk) , and how you have tuned it.\n\nslow is a relative term.. we need information to determine what \n\"slow\" means.\n\nDave\nOn 13-Sep-06, at 8:50 AM, yoav x wrote:\n\n> So why are these queries so slow in PG?\n>\n>\n> --- Dave Cramer <[email protected]> wrote:\n>\n>> All of the tuning parameters would affect all queries\n>>\n>> shared buffers, wal buffers, effective cache, to name a few\n>>\n>> --dc--\n>> On 13-Sep-06, at 8:24 AM, yoav x wrote:\n>>\n>>> Hi\n>>>\n>>> I am trying to run sql-bench against PostgreSQL 8.1.4 on Linux.\n>>> Some of the insert tests seems to be ver slow\n>>>\n>>> For example: select_join_in\n>>>\n>>> Are there any tuning parameters that can be changed to speed these\n>>> queries? Or are these queries\n>>> especially tuned to show MySQL's stgrenths?\n>>>\n>>>\n>>>\n>>>\n>>> __________________________________________________\n>>> Do You Yahoo!?\n>>> Tired of spam? Yahoo! Mail has the best spam protection around\n>>> http://mail.yahoo.com\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>> choose an index scan if your joining column's datatypes do \n>>> not\n>>> match\n>>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so \n>> that your\n>> message can get through to the mailing list cleanly\n>>\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around\n> http://mail.yahoo.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 13 Sep 2006 10:03:12 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "The last I checked (years ago), sql-bench was very synthetic (i.e.\nreflecting no realistic use case). It's the sort of test suite that's\nuseful for database developers when testing the effects of a particular\ncode change or optimization, but not so applicable to real-world uses. \n\nHistorically the test was also bad for PG because it did nasty things\nlike 10,000 inserts each in separate transactions because the test was\nwritten for MySQL which at the time didn't support transactions. Not\nsure if that's been fixed yet or not.\n\nCan you provide details about the schema and the queries that are slow?\n\n-- Mark\n\nOn Wed, 2006-09-13 at 05:24 -0700, yoav x wrote:\n> Hi\n> \n> I am trying to run sql-bench against PostgreSQL 8.1.4 on Linux.\n> Some of the insert tests seems to be ver slow \n> \n> For example: select_join_in\n> \n> Are there any tuning parameters that can be changed to speed these queries? Or are these queries\n> especially tuned to show MySQL's stgrenths?\n> \n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around \n> http://mail.yahoo.com \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n",
"msg_date": "Wed, 13 Sep 2006 07:44:30 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "yoav x <[email protected]> writes:\n> Are there any tuning parameters that can be changed to speed these\n> queries? Or are these queries especially tuned to show MySQL's\n> stgrenths?\n\nThe latter. I've ranted about this before --- there are both obvious\nand subtle biases in that benchmark. The last time I spent any time\nwith it, I ended up testing with these nondefault settings:\n\nshared_buffers = 10000 \nwork_mem = 100000 \nmaintenance_work_mem = 100000 \nfsync = false \ncheckpoint_segments = 30 \nmax_locks_per_transaction = 128\n\n(fsync = false is pretty bogus for production purposes, but if you're\ncomparing to mysql using myisam tables, I think it's a reasonably fair\nbasis for comparison, as myisam is certainly not crash-safe. It'd be\ninteresting to see what mysql's performance looks like on this test\nusing innodb tables, which should be compared against fsync = true\n... but I don't know how to change it to get all the tables to be\ninnodb.)\n\nAlso, on some of the tests it makes a material difference whether you\nare using C locale or some other one --- C is faster. And make sure you\nhave a recent version of DBD::Pg --- a year or two back I recall seeing\nthe perl test program eating more CPU than the backend in some of these\ntests, because of inefficiencies in DBD::Pg.\n\nIIRC, with these settings PG 8.0 seemed to be about half the speed of\nmysql 5.0 w/myisam, which is probably somewhere in the ballpark of the\ntruth for tests of this nature, ie, single query stream of fairly simple\nqueries. If you try concurrent-update scenarios or something that\nstresses planning ability you may arrive at different results though.\nI have not retested with more recent versions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 11:32:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench "
},
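Since locale can make a material difference for these string-heavy tests, it is worth confirming what the cluster was initialized with before comparing numbers; for example:

    SHOW lc_collate;   -- 'C' sorts and compares fastest
    SHOW lc_ctype;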
{
"msg_contents": "On 9/13/06, Tom Lane <[email protected]> wrote:\n> IIRC, with these settings PG 8.0 seemed to be about half the speed of\n> mysql 5.0 w/myisam, which is probably somewhere in the ballpark of the\n> truth for tests of this nature, ie, single query stream of fairly simple\n> queries. If you try concurrent-update scenarios or something that\n> stresses planning ability you may arrive at different results though.\n> I have not retested with more recent versions.\n\nif postgresql uses prepared statements for such queries, it will\nroughly tie mysql/myisam in raw query output on this type of load\nwhich also happens to be very easy to prepare...afaik mysql gets zero\nperformance benefit from preparing statements This is extremely\ntrivial to test&confirm even on a shell script. [aside: will this\nstill be the case if peter e's planner changes become reality?]\n\nanother cheater trick benchmarkers do to disparage postgresql is to\nnot run analyze intentionally. Basically all production postgresql\nsystems of any size will run analyze on cron.\n\nanother small aside, I caught the sqlite people actually *detuning*\npostgresql for performance by turning stats_command_string=on in\npostgresql.conf. The way it was portrayed it almost looked like\ncheating. I busted them on it (go to\nhttp://www.sqlite.org/cvstrac/wiki?p=SpeedComparison and look for the\nremarks right below the results)\n\nmerlin\n",
"msg_date": "Wed, 13 Sep 2006 15:36:21 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
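For completeness, this is roughly what using a prepared statement looks like at the SQL level for the kind of trivial query such a benchmark issues (the table name and parameter here are only illustrative); the statement is parsed and planned once, and each EXECUTE skips that overhead:

    PREPARE get_row (integer) AS
        SELECT * FROM bench1 WHERE id = $1;

    EXECUTE get_row(1);
    EXECUTE get_row(2);   -- no repeated parse/plan cost
    DEALLOCATE get_row;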
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> another small aside, I caught the sqlite people actually *detuning*\n> postgresql for performance by turning stats_command_string=on in\n> postgresql.conf.\n\nHm, well, that's not unreasonable if a comparable facility is enabled\nin the other databases they're testing ... but it'll hardly matter in\n8.2 anyway ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 16:51:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench "
},
{
"msg_contents": "On Wed, 2006-09-13 at 14:36, Merlin Moncure wrote:\n> On 9/13/06, Tom Lane <[email protected]> wrote:\n> > IIRC, with these settings PG 8.0 seemed to be about half the speed of\n> > mysql 5.0 w/myisam, which is probably somewhere in the ballpark of the\n> > truth for tests of this nature, ie, single query stream of fairly simple\n> > queries. If you try concurrent-update scenarios or something that\n> > stresses planning ability you may arrive at different results though.\n> > I have not retested with more recent versions.\n> \n> if postgresql uses prepared statements for such queries, it will\n> roughly tie mysql/myisam in raw query output on this type of load\n> which also happens to be very easy to prepare...afaik mysql gets zero\n> performance benefit from preparing statements This is extremely\n> trivial to test&confirm even on a shell script. [aside: will this\n> still be the case if peter e's planner changes become reality?]\n> \n> another cheater trick benchmarkers do to disparage postgresql is to\n> not run analyze intentionally. Basically all production postgresql\n> systems of any size will run analyze on cron.\n> \n> another small aside, I caught the sqlite people actually *detuning*\n> postgresql for performance by turning stats_command_string=on in\n> postgresql.conf. The way it was portrayed it almost looked like\n> cheating. I busted them on it (go to\n> http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison and look for the\n> remarks right below the results)\n\nThey're running autovacuum, which requires that, doesn't it?\n\nI'd rather them be running autovacuum than not vacuuming / analyzing at\nall. And autovacuum is a pretty realistic setting for most databases (I\nuse it on my production machines.)\n",
"msg_date": "Wed, 13 Sep 2006 16:10:25 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Wed, 2006-09-13 at 14:36, Merlin Moncure wrote:\n>> another small aside, I caught the sqlite people actually *detuning*\n>> postgresql for performance by turning stats_command_string=on in\n>> postgresql.conf.\n\n> They're running autovacuum, which requires that, doesn't it?\n\nNo, you're thinking of stats_row_level.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2006 17:31:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench "
},
{
"msg_contents": "On 9/14/06, Scott Marlowe <[email protected]> wrote:\n> On Wed, 2006-09-13 at 14:36, Merlin Moncure wrote:\n\n> > another small aside, I caught the sqlite people actually *detuning*\n> > postgresql for performance by turning stats_command_string=on in\n> > postgresql.conf. The way it was portrayed it almost looked like\n> > cheating. I busted them on it (go to\n> > http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison and look for the\n> > remarks right below the results)\n>\n> They're running autovacuum, which requires that, doesn't it?\n\nactually, you are right, it was row_level, not command_string (i got\nit right on their wiki, just not in the email here)...rmy bad on that.\n still, they did not disclose it.\n\nmerlin\n",
"msg_date": "Thu, 14 Sep 2006 05:12:21 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "You can use the test with InnoDB by giving the --create-options=engine=innodb option in the\ncommand line. Even with InnoDB, in some specific tests PG looks very bad compared to InnoDB.\n\n--- Tom Lane <[email protected]> wrote:\n\n> yoav x <[email protected]> writes:\n> > Are there any tuning parameters that can be changed to speed these\n> > queries? Or are these queries especially tuned to show MySQL's\n> > stgrenths?\n> \n> The latter. I've ranted about this before --- there are both obvious\n> and subtle biases in that benchmark. The last time I spent any time\n> with it, I ended up testing with these nondefault settings:\n> \n> shared_buffers = 10000 \n> work_mem = 100000 \n> maintenance_work_mem = 100000 \n> fsync = false \n> checkpoint_segments = 30 \n> max_locks_per_transaction = 128\n> \n> (fsync = false is pretty bogus for production purposes, but if you're\n> comparing to mysql using myisam tables, I think it's a reasonably fair\n> basis for comparison, as myisam is certainly not crash-safe. It'd be\n> interesting to see what mysql's performance looks like on this test\n> using innodb tables, which should be compared against fsync = true\n> ... but I don't know how to change it to get all the tables to be\n> innodb.)\n> \n> Also, on some of the tests it makes a material difference whether you\n> are using C locale or some other one --- C is faster. And make sure you\n> have a recent version of DBD::Pg --- a year or two back I recall seeing\n> the perl test program eating more CPU than the backend in some of these\n> tests, because of inefficiencies in DBD::Pg.\n> \n> IIRC, with these settings PG 8.0 seemed to be about half the speed of\n> mysql 5.0 w/myisam, which is probably somewhere in the ballpark of the\n> truth for tests of this nature, ie, single query stream of fairly simple\n> queries. If you try concurrent-update scenarios or something that\n> stresses planning ability you may arrive at different results though.\n> I have not retested with more recent versions.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 13 Sep 2006 23:55:22 -0700 (PDT)",
"msg_from": "yoav x <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sql-bench "
},
{
"msg_contents": "Hi, Yoav X,\n\nyoav x wrote:\n> You can use the test with InnoDB by giving the --create-options=engine=innodb option in the\n> command line. Even with InnoDB, in some specific tests PG looks very bad compared to InnoDB.\n\nAs far as I've seen, they include the CREATE TABLE command in their\nbenchmarks.\n\nRealistic in-production workloads don't have so much create table\ncommands, I think.\n\nWondering,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Thu, 14 Sep 2006 09:05:34 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "Have you tuned postgresql ?\n\nYou still haven't told us what the machine is, or the tuning \nparameters. If you follow Merlin's links you will find his properly \ntuned postgres out performs mysql in every case.\n\n--dc--\nOn 14-Sep-06, at 2:55 AM, yoav x wrote:\n\n> You can use the test with InnoDB by giving the --create- \n> options=engine=innodb option in the\n> command line. Even with InnoDB, in some specific tests PG looks \n> very bad compared to InnoDB.\n>\n> --- Tom Lane <[email protected]> wrote:\n>\n>> yoav x <[email protected]> writes:\n>>> Are there any tuning parameters that can be changed to speed these\n>>> queries? Or are these queries especially tuned to show MySQL's\n>>> stgrenths?\n>>\n>> The latter. I've ranted about this before --- there are both obvious\n>> and subtle biases in that benchmark. The last time I spent any time\n>> with it, I ended up testing with these nondefault settings:\n>>\n>> shared_buffers = 10000\n>> work_mem = 100000\n>> maintenance_work_mem = 100000\n>> fsync = false\n>> checkpoint_segments = 30\n>> max_locks_per_transaction = 128\n>>\n>> (fsync = false is pretty bogus for production purposes, but if you're\n>> comparing to mysql using myisam tables, I think it's a reasonably \n>> fair\n>> basis for comparison, as myisam is certainly not crash-safe. It'd be\n>> interesting to see what mysql's performance looks like on this test\n>> using innodb tables, which should be compared against fsync = true\n>> ... but I don't know how to change it to get all the tables to be\n>> innodb.)\n>>\n>> Also, on some of the tests it makes a material difference whether you\n>> are using C locale or some other one --- C is faster. And make \n>> sure you\n>> have a recent version of DBD::Pg --- a year or two back I recall \n>> seeing\n>> the perl test program eating more CPU than the backend in some of \n>> these\n>> tests, because of inefficiencies in DBD::Pg.\n>>\n>> IIRC, with these settings PG 8.0 seemed to be about half the speed of\n>> mysql 5.0 w/myisam, which is probably somewhere in the ballpark of \n>> the\n>> truth for tests of this nature, ie, single query stream of fairly \n>> simple\n>> queries. If you try concurrent-update scenarios or something that\n>> stresses planning ability you may arrive at different results though.\n>> I have not retested with more recent versions.\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so \n>> that your\n>> message can get through to the mailing list cleanly\n>>\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around\n> http://mail.yahoo.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n",
"msg_date": "Thu, 14 Sep 2006 07:33:34 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench "
},
{
"msg_contents": "Tom Lane wrote:\n> >> It'd be interesting to see what mysql's performance looks like on this\n> >> test using innodb tables, which should be compared against fsync = true\n> >> ... but I don't know how to change it to get all the tables to be\n> >> innodb.)\n\nJust a point (I've taught some MySQL courses before, sorry 'bout that;\nif you're not, I am, sort of :)) - the crash-proof version of\ntransactional tables in MySQL was supposed to be the Berkeley ones, but\n(oh, the irony) they're still beta. InnoDB were just supposed to be\noptimized to perform well with loads of data and a mediocre amount of\nclients, and *finally* support referential integrity and the rest of the\nlot.\n\nAnyways... with Oracle buying off all that stuff, don't even know if it\nstill matters: the incantation is to either add the ENGINE= or TYPE=\nclause after each CREATE TABLE statement, which would look like\n\n CREATE TABLE foo (\n\t...\n ) ENGINE=InnoDB;\n\nor specify the --default-storage-engine or --default-table-type server\nstartup option (or, alternatively, set the default-storage-engine or\ndefault-table-type option in my.cnf).\n\nThe trick being, mysqldump will be quite explicit in CREATE TABLE\nstatements, so a vi(1) and a regular expression will probably be needed.\n\nKind regards,\n-- \n Grega Bremec\n gregab at p0f dot net",
"msg_date": "Fri, 15 Sep 2006 02:11:23 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
},
{
"msg_contents": "On Fri, Sep 15, 2006 at 02:11:23AM +0200, Grega Bremec wrote:\n> Just a point (I've taught some MySQL courses before, sorry 'bout that;\n> if you're not, I am, sort of :)) - the crash-proof version of\n> transactional tables in MySQL was supposed to be the Berkeley ones, but\n> (oh, the irony) they're still beta.\n\nThey are being dropped in 5.1.12 (yes, across a minor revision). From\nhttp://dev.mysql.com/doc/refman/5.1/en/news-5-1-12.html:\n\n Incompatible change: Support for the BerkeleyDB (BDB) engine has been\n dropped from this release. Any existing tables that are in BDB format will\n not be readable from within MySQL from 5.1.12 or newer. You should convert\n your tables to another storage engine before upgrading to 5.1.12.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 15 Sep 2006 02:32:39 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql-bench"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n\n> This board has Intel chipset. I cannot remember the exact type but it\n> was not in the low end category.\n> dmesg says:\n> \n> <Intel ICH7 SATA300 controller>\n> kernel: ad4: 152626MB <SAMSUNG HD160JJ ZM100-33> at ata2-master SATA150\n> kernel: ad4: 152627MB <SAMSUNG HD160JJ ZM100-33> at ata3-master SATA150\n\nThere have been reported problems with ICH7 on FreeBSD mailing lists,\nthough I can't find any that affect performance.\n\n> Components: 2\n> Balance: round-robin\n> Slice: 4096\n\nSee if changing balance algorithm to \"split\", and slice size to 8192 or\nmore, while keeping vfs.read_max to 16 or more helps your performance.\n\n(e.g. gmirror configure -b split -s 8192 gm0)\n\nAlso, how is your file system mounted? (what does output from 'mount' say?)\n",
"msg_date": "Wed, 13 Sep 2006 23:02:01 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on seq scan"
}
] |
[
{
"msg_contents": "Hi All,\n\n I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\ncompletely full, by moment load average > 40\n\n\tAll queries analyzed by EXPLAIN, all indexes are used .. IO is good ...\n\nMy configuration is correct ?\n\n- default configuration and se + somes updates : \n\nmax_connections = 512\nsuperuser_reserved_connections = 2\nshared_buffers = 65536\nwork_mem = 65536\neffective_cache_size = 131072\nlog_destination = 'syslog'\nredirect_stderr = off\nlog_directory = '/var/log/pgsql'\nlog_min_duration_statement = 100\nsilent_mode = on\nlog_statement = 'none'\ndefault_with_oids = on\n\nMy Server is Dual Xeon 3.06GHz with 2 Go RAM and good SCSI disks.\n\nBest Regards,\nJérôme BENOIS.",
"msg_date": "Thu, 14 Sep 2006 15:08:59 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU Load"
},
{
"msg_contents": "On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> completely full, by moment load average > 40\n> All queries analyzed by EXPLAIN, all indexes are used .. IO is good ...\n\nWhat is the bottleneck? Are you CPU bound? Do you have iowait? Do you\nswap? Any weird things in vmstat output?\n\n> My configuration is correct ?\n> work_mem = 65536\n\nIf you have a lot of concurrent queries, it's probably far too much.\nThat said, if you don't swap, it's probably not the problem.\n\n--\nGuillaume\n",
"msg_date": "Thu, 14 Sep 2006 15:46:32 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
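To put numbers on that concern: on 8.1 a bare work_mem value is in kilobytes, so 65536 is 64MB per sort or hash, and with up to 512 connections, each possibly running several sorts, the worst case is far beyond the 2GB in this machine. A common pattern, sketched here, is to keep the global setting modest and raise it only for sessions that need a big sort:

    -- postgresql.conf could keep a modest default, e.g. work_mem = 8192 (8MB),
    -- while a session that runs a large report raises it locally:
    SET work_mem = 65536;   -- 64MB, for this session only
    -- ... run the heavy query ...
    RESET work_mem;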
{
"msg_contents": "Hi Guillaume,\n\nLe jeudi 14 septembre 2006 à 15:46 +0200, Guillaume Smet a écrit :\n> On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> > I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> > completely full, by moment load average > 40\n> > All queries analyzed by EXPLAIN, all indexes are used .. IO is good ...\n> \n> What is the bottleneck? Are you CPU bound? Do you have iowait? Do you\n> swap? Any weird things in vmstat output?\nthe load average goes up and goes down between 1 and 70, it's strange.\nIO wait and swap are good. I have just very high CPU load. And it's user\nland time.\n\ntop output : \n\ntop - 15:57:57 up 118 days, 9:04, 4 users, load average: 8.16, 9.16,\n15.51\nTasks: 439 total, 7 running, 432 sleeping, 0 stopped, 0 zombie\nCpu(s): 87.3% us, 6.8% sy, 0.0% ni, 4.8% id, 0.1% wa, 0.2% hi,\n0.8% si\nMem: 2076404k total, 2067812k used, 8592k free, 13304k buffers\nSwap: 1954312k total, 236k used, 1954076k free, 1190296k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n15667 postgres 25 0 536m 222m 532m R 98.8 11.0 1:39.29 postmaster\n19533 postgres 25 0 535m 169m 532m R 92.9 8.3 0:38.68 postmaster\n16278 postgres 25 0 537m 285m 532m R 86.3 14.1 1:37.56 postmaster\n18695 postgres 16 0 535m 171m 532m S 16.1 8.5 0:14.46 postmaster\n18092 postgres 16 0 544m 195m 532m R 11.5 9.7 0:31.87 postmaster\n16896 postgres 15 0 534m 215m 532m S 6.3 10.6 0:27.13 postmaster\n 4835 postgres 15 0 535m 147m 532m S 2.6 7.3 1:27.20 postmaster\n 4836 postgres 15 0 536m 154m 532m S 2.0 7.6 1:26.07 postmaster\n 4833 postgres 15 0 535m 153m 532m S 1.0 7.6 1:26.54 postmaster\n 4839 postgres 15 0 535m 148m 532m S 1.0 7.3 1:25.10 postmaster\n15083 postgres 15 0 535m 44m 532m S 1.0 2.2 0:16.13 postmaster\n\nVmstat output :\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 4 0 236 13380 13876 1192036 0 0 0 0 1 1 19\n6 70 5\n 4 0 236 13252 13876 1192036 0 0 10 0 0 0 92\n8 0 0\n16 0 236 13764 13884 1192096 0 0 52 28 0 0 91\n9 0 0\n 4 0 236 11972 13904 1192824 0 0 320 17 0 0 92\n8 0 0\n 4 0 236 12548 13904 1192892 0 0 16 0 0 0 92\n8 0 0\n 9 0 236 11908 13912 1192884 0 0 4 38 0 0 91\n9 0 0\n 8 0 236 8832 13568 1195676 0 0 6975 140 0 0 91\n9 0 0\n 8 0 236 10236 13588 1193208 0 0 82 18 0 0 93\n7 0 0\n 6 0 236 9532 13600 1193264 0 0 76 18 0 0 92\n8 0 0\n10 1 236 11060 13636 1193432 0 0 54 158 0 0 91\n9 0 0\n 6 0 236 10204 13636 1193432 0 0 8 0 0 0 92\n8 0 0\n 8 1 236 10972 13872 1192720 0 0 28 316 0 0 91\n9 0 0\n 6 0 236 11004 13936 1192724 0 0 4 90 0 0 92\n8 0 0\n 7 0 236 10300 13936 1192996 0 0 150 0 0 0 92\n8 0 0\n11 0 236 11004 13944 1192988 0 0 16 6 0 0 91\n8 0 0\n17 0 236 10732 13996 1193208 0 0 118 94 0 0 91\n9 0 0\n 6 0 236 10796 13996 1193820 0 0 274 0 0 0 91\n9 0 0\n24 0 236 9900 13996 1193820 0 0 8 0 0 0 92\n8 0 0\n13 0 236 9420 14016 1194004 0 0 100 98 0 0 92\n8 0 0\n 8 0 236 9276 13944 1188976 0 0 42 0 0 0 92\n8 0 0\n 3 0 236 14524 13952 1188968 0 0 0 38 0 0 77\n8 16 0\n 3 0 236 15164 13960 1189164 0 0 92 6 0 0 65\n7 28 0\n 3 0 236 16380 13968 1189156 0 0 8 36 0 0 57\n7 36 0\n 1 0 236 15604 14000 1189260 0 0 38 37 0 0 39\n6 54 1\n 1 0 236 16564 14000 1189328 0 0 0 0 0 0 38\n5 57 0\n 1 1 236 14900 14024 1189372 0 0 28 140 0 0 47\n7 46 0\n 1 1 236 10212 14100 1195280 0 0 2956 122 0 0 21\n3 71 5\n 5 0 236 13156 13988 1192400 0 0 534 6 0 0 19\n3 77 1\n 0 0 236 8408 13996 1197016 0 0 4458 200 0 0 18\n2 78 2\n 1 0 236 9784 13996 1195588 0 0 82 0 0 0 16\n3 81 
0\n 0 0 236 10728 14028 1195556 0 0 30 118 0 0 11\n2 87 1\n\n\nThanks for your help,\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"\n\n\n> > My configuration is correct ?\n> > work_mem = 65536\n> \n> If you have a lot of concurrent queries, it's probably far too much.\n> That said, if you don't swap, it's probably not the problem.\n> \n> --\n> Guillaume\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>",
"msg_date": "Thu, 14 Sep 2006 16:00:13 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> completely full, by moment load average > 40\n\nDid you remember to ANALYZE the whole database after reloading it?\npg_dump/reload won't by itself regenerate statistics.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2006 10:13:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load "
},
{
"msg_contents": "Hi Tom,\n\nLe jeudi 14 septembre 2006 à 10:13 -0400, Tom Lane a écrit :\n> =?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> > I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> > completely full, by moment load average > 40\n> \n> Did you remember to ANALYZE the whole database after reloading it?\n> pg_dump/reload won't by itself regenerate statistics.\n> \n> \t\t\tregards, tom lane\nI tested, dump + restore + vaccumdb --analyze on all databases but no change ...\n\nCheers,\n\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Thu, 14 Sep 2006 16:17:15 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "On Thu, 2006-09-14 at 09:00, Jérôme BENOIS wrote:\n> Hi Guillaume,\n> \n> Le jeudi 14 septembre 2006 à 15:46 +0200, Guillaume Smet a écrit :\n> > On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> > > I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> > > completely full, by moment load average > 40\n> > > All queries analyzed by EXPLAIN, all indexes are used .. IO is good ...\n> > \n> > What is the bottleneck? Are you CPU bound? Do you have iowait? Do you\n> > swap? Any weird things in vmstat output?\n> the load average goes up and goes down between 1 and 70, it's strange.\n> IO wait and swap are good. I have just very high CPU load. And it's user\n> land time.\n> \n> top output : \n> \n> top - 15:57:57 up 118 days, 9:04, 4 users, load average: 8.16, 9.16,\n> 15.51\n> Tasks: 439 total, 7 running, 432 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 87.3% us, 6.8% sy, 0.0% ni, 4.8% id, 0.1% wa, 0.2% hi,\n> 0.8% si\n> Mem: 2076404k total, 2067812k used, 8592k free, 13304k buffers\n> Swap: 1954312k total, 236k used, 1954076k free, 1190296k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 15667 postgres 25 0 536m 222m 532m R 98.8 11.0 1:39.29 postmaster\n> 19533 postgres 25 0 535m 169m 532m R 92.9 8.3 0:38.68 postmaster\n> 16278 postgres 25 0 537m 285m 532m R 86.3 14.1 1:37.56 postmaster\n> 18695 postgres 16 0 535m 171m 532m S 16.1 8.5 0:14.46 postmaster\n> 18092 postgres 16 0 544m 195m 532m R 11.5 9.7 0:31.87 postmaster\n> 16896 postgres 15 0 534m 215m 532m S 6.3 10.6 0:27.13 postmaster\n\nSomewhere, the query planner is likely making a really bad decision.\n\nHave you analyzed your dbs?\n",
"msg_date": "Thu, 14 Sep 2006 09:17:24 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "On Thu, 2006-09-14 at 09:17, Jérôme BENOIS wrote:\n> Hi Tom,\n> \n> Le jeudi 14 septembre 2006 à 10:13 -0400, Tom Lane a écrit :\n> > =?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> > > I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> > > completely full, by moment load average > 40\n> > \n> > Did you remember to ANALYZE the whole database after reloading it?\n> > pg_dump/reload won't by itself regenerate statistics.\n> > \n> > \t\t\tregards, tom lane\n> I tested, dump + restore + vaccumdb --analyze on all databases but no change ...\n\n\nOK, set your db to log queries that take more than a few seconds to\nrun. Execute those queries by hand with an explain analyze in front and\npost the output here.\n",
"msg_date": "Thu, 14 Sep 2006 09:21:50 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
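The workflow suggested above, sketched as commands (the 2-second threshold is only an example; the value is in milliseconds, and on 8.1 changing it on the fly needs superuser rights):

    SET log_min_duration_statement = 2000;   -- or edit postgresql.conf and reload
    -- Then take a query from the log and time it for real:
    EXPLAIN ANALYZE SELECT 1;                -- replace SELECT 1 with the logged query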
{
"msg_contents": "On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 15667 postgres 25 0 536m 222m 532m R 98.8 11.0 1:39.29 postmaster\n> 19533 postgres 25 0 535m 169m 532m R 92.9 8.3 0:38.68 postmaster\n> 16278 postgres 25 0 537m 285m 532m R 86.3 14.1 1:37.56 postmaster\n\nEnable stats_command_string and see which queries are running on these\nbackends by selecting on pg_stat_activity.\n\nDo the queries finish? Do you have them in your query log?\n\n--\nGuillaume\n",
"msg_date": "Thu, 14 Sep 2006 16:26:10 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
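Concretely, with stats_command_string enabled, something like this shows what those busy backends are doing (column names as in 8.1):

    SELECT procpid, datname, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>'
    ORDER BY query_start;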
{
"msg_contents": "Hello,\n\n\n\n\nLe jeudi 14 septembre 2006 à 09:21 -0500, Scott Marlowe a écrit :\n> On Thu, 2006-09-14 at 09:17, Jérôme BENOIS wrote:\n> > Hi Tom,\n> > \n> > Le jeudi 14 septembre 2006 à 10:13 -0400, Tom Lane a écrit :\n> > > =?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> > > > I migrated Postgres server from 7.4.6 to 8.1.4, But my server is\n> > > > completely full, by moment load average > 40\n> > > \n> > > Did you remember to ANALYZE the whole database after reloading it?\n> > > pg_dump/reload won't by itself regenerate statistics.\n> > > \n> > > \t\t\tregards, tom lane\n> > I tested, dump + restore + vaccumdb --analyze on all databases but no change ...\n> \n> \n> OK, set your db to log queries that take more than a few seconds to\n> run. Execute those queries by hand with an explain analyze in front and\n> post the output here.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\ni tested all queries, but she used indexes ... an example :\n\n explain analyze select distinct\nINTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( select distinct ei_id as EIID from mpng2_ei_attribute as reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE ilike '' and ei_id in ( select distinct ei_id as EIID from mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct ei_id as EIID from mpng2_ei_attribute as reqin3 where reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as req0 join mpng2_ei_attribute on req0.eiid = mpng2_ei_attribute.ei_id order by ei_id asc;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=758.53..762.19 rows=122 width=233) (actual\ntime=0.191..0.191 rows=0 loops=1)\n -> Sort (cost=758.53..758.84 rows=122 width=233) (actual\ntime=0.182..0.182 rows=0 loops=1)\n Sort Key: mpng2_ei_attribute.ei_id,\nmpng2_ei_attribute.integer_value, mpng2_ei_attribute.date_value,\nmpng2_ei_attribute.value_type, mpng2_ei_attribute.float_value,\nmpng2_ei_attribute.id, mpng2_ei_attribute.text_value,\nmpng2_ei_attribute.category_id, mpng2_ei_attribute.string_value,\nmpng2_ei_attribute.categoryattr_id, mpng2_ei_attribute.name\n -> Nested Loop (cost=365.83..754.31 rows=122 width=233)\n(actual time=0.126..0.126 rows=0 loops=1)\n -> Unique (cost=365.83..374.34 rows=1 width=4) (actual\ntime=0.116..0.116 rows=0 loops=1)\n -> Nested Loop (cost=365.83..374.34 rows=1\nwidth=4) (actual time=0.108..0.108 rows=0 loops=1)\n -> Unique (cost=350.22..354.69 rows=1\nwidth=4) (actual time=0.097..0.097 rows=0 loops=1)\n -> Nested Loop (cost=350.22..354.69\nrows=1 width=4) (actual time=0.089..0.089 rows=0 loops=1)\n -> Unique (cost=334.60..335.03\nrows=1 width=4) (actual time=0.080..0.080 rows=0 loops=1)\n -> Sort\n(cost=334.60..334.82 rows=86 width=4) (actual time=0.072..0.072 rows=0\nloops=1)\n Sort Key:\nreqin3.ei_id\n -> Bitmap Heap Scan\non mpng2_ei_attribute reqin3 (cost=2.52..331.84 rows=86 width=4)\n(actual time=0.056..0.056 rows=0 loops=1)\n Recheck Cond:\n(((name)::text = ''::text) AND ((string_value)::text = ''::text))\n -> 
Bitmap\nIndex Scan on mpng2_ei_attribute_name_svalue (cost=0.00..2.52 rows=86\nwidth=0) (actual time=0.043..0.043 rows=0 loops=1)\n Index\nCond: (((name)::text = ''::text) AND ((string_value)::text = ''::text))\n -> Bitmap Heap Scan on\nmpng2_ei_attribute reqin2 (cost=15.61..19.63 rows=1 width=4) (never\nexecuted)\n Recheck Cond:\n((reqin2.ei_id = \"outer\".ei_id) AND (reqin2.categoryattr_id = 0))\n Filter: (text_value ~~*\n''::text)\n -> BitmapAnd\n(cost=15.61..15.61 rows=1 width=0) (never executed)\n -> Bitmap Index Scan\non mpng2_ei_attribute_ei_id (cost=0.00..2.43 rows=122 width=0) (never\nexecuted)\n Index Cond:\n(reqin2.ei_id = \"outer\".ei_id)\n -> Bitmap Index Scan\non mpng2_ei_attribute_categoryattr (cost=0.00..12.94 rows=1982 width=0)\n(never executed)\nIndex Cond: (categoryattr_id = 0)\n -> Bitmap Heap Scan on mpng2_ei_attribute\nreqin1 (cost=15.61..19.63 rows=1 width=4) (never executed)\n Recheck Cond: ((reqin1.ei_id =\n\"outer\".ei_id) AND (reqin1.categoryattr_id = 0))\n Filter: (text_value ~~* ''::text)\n -> BitmapAnd (cost=15.61..15.61\nrows=1 width=0) (never executed)\n -> Bitmap Index Scan on\nmpng2_ei_attribute_ei_id (cost=0.00..2.43 rows=122 width=0) (never\nexecuted)\n Index Cond: (reqin1.ei_id =\n\"outer\".ei_id)\n -> Bitmap Index Scan on\nmpng2_ei_attribute_categoryattr (cost=0.00..12.94 rows=1982 width=0)\n(never executed)\n Index Cond:\n(categoryattr_id = 0)\n -> Index Scan using mpng2_ei_attribute_ei_id on\nmpng2_ei_attribute (cost=0.00..378.43 rows=122 width=233) (never\nexecuted)\n Index Cond: (\"outer\".ei_id =\nmpng2_ei_attribute.ei_id)\n\nThanks,\n\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Thu, 14 Sep 2006 16:27:12 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Jérôme BENOIS\n> \n explain analyze select distinct\n> INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> select distinct ei_id as EIID from mpng2_ei_attribute as \n> reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> ilike '' and ei_id in ( select distinct ei_id as EIID from \n> mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> req0 join mpng2_ei_attribute on req0.eiid = \n> mpng2_ei_attribute.ei_id order by ei_id asc;\n\n\nThat is a lot of distinct's. Sorts are one thing that can really use up\nCPU. This query is doing lots of sorts, so its not surprising the CPU usage\nis high. \n\nOn the subqueries you have a couple of cases where you say \"... in (select\ndistinct ...)\" I dont think the distinct clause is necessary in that case.\nI'm not a hundred percent sure, but you might want to try removing them and\nsee if the query results are the same and maybe the query will execute\nfaster.\n\n",
"msg_date": "Thu, 14 Sep 2006 10:02:04 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
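As a concrete sketch of Dave's suggestion above (assuming the inner DISTINCTs and the derived-table/join wrapper really are redundant, which should be verified against the original results before relying on it), the same query could be written as below; table and column names are taken from the query quoted in the message.

```sql
SELECT DISTINCT a.integer_value, a.date_value, a.ei_id, a.value_type,
       a.float_value, a.id, a.text_value, a.category_id, a.string_value,
       a.categoryattr_id, a.name
FROM mpng2_ei_attribute a
WHERE a.ei_id IN (
        SELECT reqin1.ei_id
        FROM mpng2_ei_attribute reqin1
        WHERE reqin1.categoryattr_id = 0
          AND reqin1.text_value ILIKE ''
          AND reqin1.ei_id IN (
                SELECT reqin2.ei_id
                FROM mpng2_ei_attribute reqin2
                WHERE reqin2.categoryattr_id = 0
                  AND reqin2.text_value ILIKE ''
                  AND reqin2.ei_id IN (
                        SELECT reqin3.ei_id
                        FROM mpng2_ei_attribute reqin3
                        WHERE reqin3.name = ''
                          AND reqin3.string_value = '' ) ) )
ORDER BY a.ei_id ASC;
```

IN (...) already ignores duplicate values coming out of a subquery, so a DISTINCT inside each subselect only adds sort work without changing the result.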
{
"msg_contents": "Hi Dave,\nLe jeudi 14 septembre 2006 à 10:02 -0500, Dave Dutcher a écrit :\n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf Of \n> > Jérôme BENOIS\n> > \n> explain analyze select distinct\n> > INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> > VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> > select distinct ei_id as EIID from mpng2_ei_attribute as \n> > reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> > ilike '' and ei_id in ( select distinct ei_id as EIID from \n> > mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> > AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> > ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> > reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> > req0 join mpng2_ei_attribute on req0.eiid = \n> > mpng2_ei_attribute.ei_id order by ei_id asc;\n> \n> \n> That is a lot of distinct's. Sorts are one thing that can really use up\n> CPU. This query is doing lots of sorts, so its not surprising the CPU usage\n> is high. \n> \n> On the subqueries you have a couple of cases where you say \"... in (select\n> distinct ...)\" I don’t think the distinct clause is necessary in that case.\n> I'm not a hundred percent sure, but you might want to try removing them and\n> see if the query results are the same and maybe the query will execute\n> faster.\n\nThanks for your advice, but the load was good with previous version of\npostgres -> 7.4.6 on the same server and same datas, same application,\nsame final users ...\n\nSo we supect some system parameter, but which ?\n\nWith vmstat -s is showing a lot of \"pages swapped out\", have you an\nidea ?\n\nThanls a lot,\n\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Thu, 14 Sep 2006 17:09:25 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "On Thu, 2006-09-14 at 10:02, Dave Dutcher wrote:\n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf Of \n> > Jérôme BENOIS\n> > \n> explain analyze select distinct\n> > INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> > VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> > select distinct ei_id as EIID from mpng2_ei_attribute as \n> > reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> > ilike '' and ei_id in ( select distinct ei_id as EIID from \n> > mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> > AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> > ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> > reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> > req0 join mpng2_ei_attribute on req0.eiid = \n> > mpng2_ei_attribute.ei_id order by ei_id asc;\n> \n> \n> That is a lot of distinct's. Sorts are one thing that can really use up\n> CPU. This query is doing lots of sorts, so its not surprising the CPU usage\n> is high. \n\nI'm gonna make a SWAG here and guess that maybe your 7.4 db was initdb'd\nwith a locale of C and the new one is initdb'd with a real locale, like\nen_US. Can Jérôme confirm or deny this?\n",
"msg_date": "Thu, 14 Sep 2006 10:56:52 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Jérôme,\n\nHow many concurrent connections do you have?\n\nBecause You've got only 2GB of ram this is important! Postgres process\ntakes some bytes in memory =) .. I don't exactly how many,\nbut thinking if it is about 2Mb you'll get about 1Gb of ram used only by\npostgres' processes (for 512 connections)!\nDon't forget about your 512Mb shared memory setting,\npostgres shared libraries and the OS filesystem cache...\n\nI hope your postgres binaries are not statically linked?\n\nTry using connection pooling in your software, or add some RAM, it's cheap.\nAnd I think that work_mem of 65536 is too high for your system...\n\nOn Thu, 14 Sep 2006 17:09:25 +0200\nJérôme BENOIS <[email protected]> wrote:\n\n> Hi Dave,\n> Le jeudi 14 septembre 2006 à 10:02 -0500, Dave Dutcher a écrit :\n> > > -----Original Message-----\n> > > From: [email protected] \n> > > [mailto:[email protected]] On Behalf Of \n> > > Jérôme BENOIS\n> > > \n> > explain analyze select distinct\n> > > INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> > > VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> > > select distinct ei_id as EIID from mpng2_ei_attribute as \n> > > reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> > > ilike '' and ei_id in ( select distinct ei_id as EIID from \n> > > mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> > > AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> > > ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> > > reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> > > req0 join mpng2_ei_attribute on req0.eiid = \n> > > mpng2_ei_attribute.ei_id order by ei_id asc;\n> > \n> > \n> > That is a lot of distinct's. Sorts are one thing that can really use up\n> > CPU. This query is doing lots of sorts, so its not surprising the CPU usage\n> > is high. \n> > \n> > On the subqueries you have a couple of cases where you say \"... in (select\n> > distinct ...)\" I don’t think the distinct clause is necessary in that case.\n> > I'm not a hundred percent sure, but you might want to try removing them and\n> > see if the query results are the same and maybe the query will execute\n> > faster.\n> \n> Thanks for your advice, but the load was good with previous version of\n> postgres -> 7.4.6 on the same server and same datas, same application,\n> same final users ...\n> \n> So we supect some system parameter, but which ?\n> \n> With vmstat -s is showing a lot of \"pages swapped out\", have you an\n> idea ?\n> \n> Thanls a lot,\n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Thu, 14 Sep 2006 20:47:39 +0400",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi Scott,\n\nLe jeudi 14 septembre 2006 à 10:56 -0500, Scott Marlowe a écrit :\n> On Thu, 2006-09-14 at 10:02, Dave Dutcher wrote:\n> > > -----Original Message-----\n> > > From: [email protected] \n> > > [mailto:[email protected]] On Behalf Of \n> > > Jérôme BENOIS\n> > > \n> > explain analyze select distinct\n> > > INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> > > VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> > > select distinct ei_id as EIID from mpng2_ei_attribute as \n> > > reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> > > ilike '' and ei_id in ( select distinct ei_id as EIID from \n> > > mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> > > AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> > > ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> > > reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> > > req0 join mpng2_ei_attribute on req0.eiid = \n> > > mpng2_ei_attribute.ei_id order by ei_id asc;\n> > \n> > \n> > That is a lot of distinct's. Sorts are one thing that can really use up\n> > CPU. This query is doing lots of sorts, so its not surprising the CPU usage\n> > is high. \n> \n> I'm gonna make a SWAG here and guess that maybe your 7.4 db was initdb'd\n> with a locale of C and the new one is initdb'd with a real locale, like\n> en_US. Can Jérôme confirm or deny this?\n> \n\nThe locale used to run initdb is :\n\nsu - postgres\n:~$ locale\nLANG=POSIX\nLC_CTYPE=\"POSIX\"\nLC_NUMERIC=\"POSIX\"\nLC_TIME=\"POSIX\"\nLC_COLLATE=\"POSIX\"\nLC_MONETARY=\"POSIX\"\nLC_MESSAGES=\"POSIX\"\nLC_PAPER=\"POSIX\"\nLC_NAME=\"POSIX\"\nLC_ADDRESS=\"POSIX\"\nLC_TELEPHONE=\"POSIX\"\nLC_MEASUREMENT=\"POSIX\"\nLC_IDENTIFICATION=\"POSIX\"\nLC_ALL=\n\nCheers,\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Thu, 14 Sep 2006 23:07:37 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> Le jeudi 14 septembre 2006 =C3=A0 10:56 -0500, Scott Marlowe a =C3=A9crit :\n>> I'm gonna make a SWAG here and guess that maybe your 7.4 db was initdb'd\n>> with a locale of C and the new one is initdb'd with a real locale, like\n>> en_US. Can J=C3=A9r=C3=B4me confirm or deny this?\n\n> The locale used to run initdb is :\n\n> su - postgres\n> :~$ locale\n> LANG=POSIX\n\nIt'd be more convincing if \"show lc_collate\" etc. display C or POSIX.\nThe fact that postgres' current default environment is LANG=POSIX\ndoesn't prove much about what initdb saw.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2006 17:14:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load "
},
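For reference, the check Tom is describing can be run from any psql session; it reports the collation settings the cluster was actually initdb'd with, regardless of what LANG happens to be in the shell:

```sql
SHOW lc_collate;   -- should report C or POSIX for a "C locale" cluster
SHOW lc_ctype;
```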
{
"msg_contents": "Jérôme,\n\nPerhaps it's a stupid question but are your queries slower than\nbefore? You didn't tell it.\n\nIMHO, it's not a problem to have a high load if you have a lot of\nusers and your queries are fast (and with 8.1, they should be far\nfaster than before).\n\nTo take a real example, we had a problem with a quad xeon running\npostgres 7.4 and even when there were a lot of queries, the load was\nalways lower than 4 and suddenly the queries were really slow and the\ndatabase was completely unusable.\nWhen we upgraded to 8.1, on very high load, we had a far higher cpu\nload but queries were far faster even with a high cpu load.\n\nConsidering your top output, I suspect you use HT and you should\nreally remove it if it's the case.\n\n--\nGuillaume\n",
"msg_date": "Thu, 14 Sep 2006 23:22:48 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi Evgeny,\n\nLe jeudi 14 septembre 2006 à 20:47 +0400, Evgeny Gridasov a écrit :\n> Jérôme,\n> \n> How many concurrent connections do you have?\nI have between 300 and 400 concurrent connections.\n\n> Because You've got only 2GB of ram this is important! Postgres process\n> takes some bytes in memory =) .. I don't exactly how many,\n> but thinking if it is about 2Mb you'll get about 1Gb of ram used only by\n> postgres' processes (for 512 connections)!\n> Don't forget about your 512Mb shared memory setting,\n> postgres shared libraries and the OS filesystem cache...\n> \n> I hope your postgres binaries are not statically linked?\nno, i not use static binaries\n\n> Try using connection pooling in your software, or add some RAM, it's cheap.\n> And I think that work_mem of 65536 is too high for your system...\n\nI already use connection pool but i have many servers in front of database server.\n\nOk i will test new lower work_mem tomorrow.\n\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"\n> On Thu, 14 Sep 2006 17:09:25 +0200\n> Jérôme BENOIS <[email protected]> wrote:\n> \n> > Hi Dave,\n> > Le jeudi 14 septembre 2006 à 10:02 -0500, Dave Dutcher a écrit :\n> > > > -----Original Message-----\n> > > > From: [email protected] \n> > > > [mailto:[email protected]] On Behalf Of \n> > > > Jérôme BENOIS\n> > > > \n> > > explain analyze select distinct\n> > > > INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_\n> > > > VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( \n> > > > select distinct ei_id as EIID from mpng2_ei_attribute as \n> > > > reqin1 where reqin1.CATEGORYATTR_ID = 0 AND reqin1.TEXT_VALUE \n> > > > ilike '' and ei_id in ( select distinct ei_id as EIID from \n> > > > mpng2_ei_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 0 \n> > > > AND reqin2.TEXT_VALUE ilike '' and ei_id in ( select distinct \n> > > > ei_id as EIID from mpng2_ei_attribute as reqin3 where \n> > > > reqin3.NAME = '' AND reqin3.STRING_VALUE = '' ) ) ) ) ) as \n> > > > req0 join mpng2_ei_attribute on req0.eiid = \n> > > > mpng2_ei_attribute.ei_id order by ei_id asc;\n> > > \n> > > \n> > > That is a lot of distinct's. Sorts are one thing that can really use up\n> > > CPU. This query is doing lots of sorts, so its not surprising the CPU usage\n> > > is high. \n> > > \n> > > On the subqueries you have a couple of cases where you say \"... in (select\n> > > distinct ...)\" I don’t think the distinct clause is necessary in that case.\n> > > I'm not a hundred percent sure, but you might want to try removing them and\n> > > see if the query results are the same and maybe the query will execute\n> > > faster.\n> > \n> > Thanks for your advice, but the load was good with previous version of\n> > postgres -> 7.4.6 on the same server and same datas, same application,\n> > same final users ...\n> > \n> > So we supect some system parameter, but which ?\n> > \n> > With vmstat -s is showing a lot of \"pages swapped out\", have you an\n> > idea ?\n> > \n> > Thanls a lot,\n> \n>",
"msg_date": "Thu, 14 Sep 2006 23:37:21 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi Guillaume,\n\nLe jeudi 14 septembre 2006 à 23:22 +0200, Guillaume Smet a écrit :\n> Jérôme,\n> \n> Perhaps it's a stupid question but are your queries slower than\n> before? You didn't tell it.\nNo, it's not stupid question !\nYes queries speed but when the load average exceeds 40 all queries are slower than before.\n\n> IMHO, it's not a problem to have a high load if you have a lot of\n> users and your queries are fast (and with 8.1, they should be far\n> faster than before).\nYes i have a lot of users ;-)\n> \n> To take a real example, we had a problem with a quad xeon running\n> postgres 7.4 and even when there were a lot of queries, the load was\n> always lower than 4 and suddenly the queries were really slow and the\n> database was completely unusable.\n> When we upgraded to 8.1, on very high load, we had a far higher cpu\n> load but queries were far faster even with a high cpu load.\n\nI agree but by moment DB Server is so slow.\n\n> Considering your top output, I suspect you use HT and you should\n> really remove it if it's the case.\n\nwhat's means \"HT\" please ?\n\n> --\n> Guillaume\n\nIf you want, my JabberId : jerome.benois AT gmail.com\n\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Thu, 14 Sep 2006 23:48:42 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> Yes i have a lot of users ;-)\n\nSo your work_mem is probably far too high (that's what I told you in\nmy first message) and you probably swap when you have too many users.\nRemember that work_mem can be used several times per query (and it's\nespecially the case when you have a lot of sorts).\nWhen your load is high, check your swap activity and your io/wait. top\ngives you these information. If you swap, lower your work_mem to 32 MB\nfor example then see if it's enough for your queries to run fast (you\ncan check if there are files created in the $PGDATA/base/<your\ndatabase oid>/pg_tmp) and if it doesn't swap. Retry with a\nlower/higher value to find the one that fits best to your queries and\nload.\n\n> I agree but by moment DB Server is so slow.\n\nYep, that's the information that was missing :).\n\n> what's means \"HT\" please ?\n\nHyper threading. It's usually not recommended to enable it on\nPostgreSQL servers. On most servers, you can disable it directly in\nthe BIOS.\n\n--\nGuillaume\n",
"msg_date": "Fri, 15 Sep 2006 00:24:43 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
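A minimal sketch of the experiment Guillaume describes, assuming a PostgreSQL 8.1 server where work_mem is given in kB and can be changed per session; the spill directory he refers to is normally named pgsql_tmp under base/<database oid>:

```sql
-- try a lower sort memory for just this session, then re-run the slow query
SET work_mem = 32768;          -- 32 MB (8.1 takes the value in kB)
EXPLAIN ANALYZE SELECT 1;      -- placeholder: substitute the real query being tested
SHOW work_mem;                 -- confirm the setting took effect for this session
```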
{
"msg_contents": ">Hyper threading. It's usually not recommended to enable it on\n>PostgreSQL servers. On most servers, you can disable it directly in\n>the BIOS.\n\nMaybe for specific usage scenarios, but that's generally not been my experience with relatively recent versions of PG. We ran some tests with pgbench, and averaged 10% or more performance improvement. Now, I agree pgbench isn't the most realistic performance, but we did notice a slight improvement in our application performance too.\n\nAlso, here's some benchmarks that were posted earlier by the folks at tweakers.net also showing hyperthreading to be faster:\n\nhttp://tweakers.net/reviews/646/10\n\nI'm not sure if it's dependent on OS- our tests were on BSD 5.x and PG 7.4 and 8.0/8.1 and were several months ago, so I don't remember many more specifics than that. \n\nSo, not saying it's a best practice one way or another, but this is pretty easy to test and you should definitely try it out both ways for your workload.\n\n- Bucky \n\n",
"msg_date": "Thu, 14 Sep 2006 18:50:21 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi, Jérôme,\n\nJérôme BENOIS wrote:\n\n> max_connections = 512\n\nDo you really have that much concurrent connections? Then you should\nthink about getting a larger machine, probably.\n\nYou will definitely want to play with commit_delay and commit_siblings\nsettings in that case, especially if you have write access.\n\n> work_mem = 65536\n> effective_cache_size = 131072\n\nhmm, 131072*8*1024 + 512*65536*1024 = 35433480192 - thats 33 Gig of\nMemory you assume here, not counting OS usage, and the fact that certain\nqueries can use up a multiple of work_mem.\n\nEven on amachine that big, I'd be inclined to dedicate more memory to\ncaching, and less to the backends, unless specific needs dictate it. You\ncould try to use sqlrelay or pgpool to cut down the number of backends\nyou need.\n\n> My Server is Dual Xeon 3.06GHz\n\nFor xeons, there were rumours about \"context switch storms\" which kill\nperformance.\n\n> with 2 Go RAM and good SCSI disks.\n\nFor 2 Gigs of ram, you should cut down the number of concurrent backends.\n\nDoes your machine go into swap?\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n\n",
"msg_date": "Fri, 15 Sep 2006 11:43:58 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
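Markus's worst-case arithmetic, spelled out (effective_cache_size is counted in 8 kB pages and work_mem in kB; the factor of 512 assumes every allowed backend runs one full-size sort at the same time, which is the pessimistic bookkeeping he is doing here):

```sql
SELECT 131072 * 8192                           AS effective_cache_bytes, --  1 GB assumed as cache
       512 * (65536 * 1024::bigint)            AS worst_case_work_mem,   -- 32 GB if every backend sorts once
       131072 * 8192 + 512 * (65536 * 1024::bigint) AS total_bytes;      -- 35433480192, i.e. ~33 GB
```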
{
"msg_contents": "On 9/15/06, Markus Schaber <[email protected]> wrote:\n> For xeons, there were rumours about \"context switch storms\" which kill\n> performance.\n\nIt's not that much a problem in 8.1. There are a few corner cases when\nyou still have the problem but on a regular load you don't have it\nanymore (validated here with a quad Xeon MP and a dual Xeon).\n\n--\nGuillaume\n",
"msg_date": "Fri, 15 Sep 2006 12:10:06 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi Guillaume,\n\n\tNow i disable Hyper Threading in BIOS, and \"context switch storms\"\ndisappeared. (when i look with command sar -t)\n\n\tI decreased work_mem parameter to 32768. My CPU load is better. But it\nis still too high, in example : \n\ntop - 16:27:05 up 9:13, 3 users, load average: 45.37, 43.43, 41.43\nTasks: 390 total, 26 running, 363 sleeping, 0 stopped, 1 zombie\nCpu(s): 89.5% us, 9.8% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.2% hi,\n0.4% si\nMem: 2076404k total, 2039552k used, 36852k free, 40412k buffers\nSwap: 1954312k total, 468k used, 1953844k free, 1232000k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n30907 postgres 16 0 537m 51m 532m R 20.4 2.5 1:44.73 postmaster\n25631 postgres 16 0 538m 165m 532m R 17.4 8.2 8:43.76 postmaster\n29357 postgres 16 0 537m 311m 532m R 17.4 15.3 0:26.47 postmaster\n32294 postgres 16 0 535m 86m 532m R 14.9 4.3 0:04.97 postmaster\n31406 postgres 16 0 536m 180m 532m R 14.4 8.9 0:22.04 postmaster\n31991 postgres 16 0 535m 73m 532m R 14.4 3.6 0:08.21 postmaster\n30782 postgres 16 0 536m 205m 532m R 14.0 10.1 0:19.63 postmaster\n\n\tTomorrow morning i plan to add 2Go RAM in order to test difference with\nmy actual config.\n\nHave you another ideas ?\n\nBest Regards,\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"\n\nLe vendredi 15 septembre 2006 à 00:24 +0200, Guillaume Smet a écrit :\n> On 9/14/06, Jérôme BENOIS <[email protected]> wrote:\n> > Yes i have a lot of users ;-)\n> \n> So your work_mem is probably far too high (that's what I told you in\n> my first message) and you probably swap when you have too many users.\n> Remember that work_mem can be used several times per query (and it's\n> especially the case when you have a lot of sorts).\n> When your load is high, check your swap activity and your io/wait. top\n> gives you these information. If you swap, lower your work_mem to 32 MB\n> for example then see if it's enough for your queries to run fast (you\n> can check if there are files created in the $PGDATA/base/<your\n> database oid>/pg_tmp) and if it doesn't swap. Retry with a\n> lower/higher value to find the one that fits best to your queries and\n> load.\n> \n> > I agree but by moment DB Server is so slow.\n> \n> Yep, that's the information that was missing :).\n> \n> > what's means \"HT\" please ?\n> \n> Hyper threading. It's usually not recommended to enable it on\n> PostgreSQL servers. On most servers, you can disable it directly in\n> the BIOS.\n> \n> --\n> Guillaume\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>",
"msg_date": "Mon, 18 Sep 2006 16:30:51 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi Markus,\n\nLe vendredi 15 septembre 2006 à 11:43 +0200, Markus Schaber a écrit :\n> Hi, Jérôme,\n> \n> Jérôme BENOIS wrote:\n> \n> > max_connections = 512\n> \n> Do you really have that much concurrent connections? Then you should\n> think about getting a larger machine, probably.\n> \n> You will definitely want to play with commit_delay and commit_siblings\n> settings in that case, especially if you have write access.\n> \n> > work_mem = 65536\n> > effective_cache_size = 131072\n> \n> hmm, 131072*8*1024 + 512*65536*1024 = 35433480192 - thats 33 Gig of\n> Memory you assume here, not counting OS usage, and the fact that certain\n> queries can use up a multiple of work_mem.\n\nNow i Have 335 concurrent connections, i decreased work_mem parameter to\n32768 and disabled Hyper Threading in BIOS. But my CPU load is still\nvery important.\n\nTomorrow morning i plan to add 2Giga RAM ... But I don't understand why\nmy database server worked good with previous version of postgres and\nsame queries ...\n\n> Even on amachine that big, I'd be inclined to dedicate more memory to\n> caching, and less to the backends, unless specific needs dictate it. You\n> could try to use sqlrelay or pgpool to cut down the number of backends\n> you need.\nI used already database pool on my application and when i decrease\nnumber of connection my application is more slow ;-(\n> \n> > My Server is Dual Xeon 3.06GHz\n> \n> For xeons, there were rumours about \"context switch storms\" which kill\n> performance.\nI disabled Hyper Threading.\n> > with 2 Go RAM and good SCSI disks.\n> \n> For 2 Gigs of ram, you should cut down the number of concurrent backends.\n> \n> Does your machine go into swap?\nNo, 0 swap found and i cannot found pgsql_tmp files in $PG_DATA/base/...\n> \n> Markus\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Mon, 18 Sep 2006 16:44:05 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "On 9/18/06, Jérôme BENOIS <[email protected]> wrote:\n> Tomorrow morning i plan to add 2Go RAM in order to test difference with\n> my actual config.\n\nI don't think more RAM will change anything if you don't swap at all.\nYou can try to set shared_buffers lower (try 32768 and 16384) but I\ndon't think it will change anything in 8.1.\n\nThe only thing left IMHO is that 8.1 is choosing a bad plan which\nconsumes a lot of CPU for at least a query.\n\nWhen you analyze your logs, did you see a particularly slow query? Can\nyou compare query log analysis from your old server and your new one?\n\n--\nGuillaume\n",
"msg_date": "Mon, 18 Sep 2006 17:48:53 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi, Jerome,\n\nJérôme BENOIS wrote:\n\n> Now i Have 335 concurrent connections, i decreased work_mem parameter to\n> 32768 and disabled Hyper Threading in BIOS. But my CPU load is still\n> very important.\n\nWhat are your settings for commit_siblings and commit_delay?\n\n> Tomorrow morning i plan to add 2Giga RAM ... But I don't understand why\n> my database server worked good with previous version of postgres and\n> same queries ...\n\nI don't think any more that it's the RAM, as you told you don't go into\nswap. It has to be something else.\n\nCould you try logging which are the problematic queries, maybe they have\nbad plans for whatever reason.\n\n> I used already database pool on my application and when i decrease\n> number of connection my application is more slow ;-(\n\nCould you just make sure that the pool really uses persistent\nconnections, and is not broken or misconfigured, always reconnect?\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n\n",
"msg_date": "Tue, 19 Sep 2006 11:53:25 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
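As a concrete starting point for the commit_delay/commit_siblings experiment Markus suggests (the values are only illustrative; whether SET is allowed at session level for these depends on the server version, so editing postgresql.conf and reloading is the safer route):

```sql
SET commit_delay = 100;    -- microseconds to wait so that several transactions can commit together
SET commit_siblings = 5;   -- only wait if at least 5 other transactions are currently active
SHOW commit_delay;         -- confirm the value in effect
```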
{
"msg_contents": "Markus,\n\nLe mardi 19 septembre 2006 à 11:53 +0200, Markus Schaber a écrit :\n> Hi, Jerome,\n> \n> Jérôme BENOIS wrote:\n> \n> > Now i Have 335 concurrent connections, i decreased work_mem parameter to\n> > 32768 and disabled Hyper Threading in BIOS. But my CPU load is still\n> > very important.\n> \n> What are your settings for commit_siblings and commit_delay?\nIt default :\n\n#commit_delay = 01 # range 0-100000, inmicroseconds\n#commit_siblings = 5 # range 1-1000\n\n> > Tomorrow morning i plan to add 2Giga RAM ... But I don't understand why\n> > my database server worked good with previous version of postgres and\n> > same queries ...\n> \n> I don't think any more that it's the RAM, as you told you don't go into\n> swap. It has to be something else.\nYes, i agree with you.\n> \n> Could you try logging which are the problematic queries, maybe they have\n> bad plans for whatever reason.\n> \n> > I used already database pool on my application and when i decrease\n> > number of connection my application is more slow ;-(\n> \n> Could you just make sure that the pool really uses persistent\n> connections, and is not broken or misconfigured, always reconnect?\nYes it's persistent.\n\nI plan to return to previous version : 7.4.6 in and i will reinstall all\nin a dedicated server in order to reproduce and solve the problem.\n\nJérôme.\n\n> HTH,\n> Markus\n> \n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Tue, 19 Sep 2006 14:48:20 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi, Jerome,\n\nJérôme BENOIS wrote:\n\n>>> Now i Have 335 concurrent connections, i decreased work_mem parameter to\n>>> 32768 and disabled Hyper Threading in BIOS. But my CPU load is still\n>>> very important.\n>> What are your settings for commit_siblings and commit_delay?\n> It default :\n> \n> #commit_delay = 01 # range 0-100000, inmicroseconds\n> #commit_siblings = 5 # range 1-1000\n\nYou should uncomment them, and play with different settings. I'd try a\ncommit_delay of 100, and commit_siblings of 5 to start with.\n\n> I plan to return to previous version : 7.4.6 in and i will reinstall all\n> in a dedicated server in order to reproduce and solve the problem.\n\nYou should use at least 7.4.13 as it fixes some critical buts that were\nin 7.4.6. They use the same on-disk format and query planner logic, so\nthey should not have any difference.\n\nI don't have much more ideas what the problem could be.\n\nCan you try to do some profiling (e. G. with statement logging) to see\nwhat specific statements are the one that cause high cpu load?\n\nAre there other differences (besides the PostgreSQL version) between the\ntwo installations? (Kernel, libraries, other software...)\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n\n",
"msg_date": "Tue, 19 Sep 2006 15:09:50 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Load"
},
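One way to do the profiling Markus mentions is duration-based statement logging. A sketch, assuming superuser access and an 8.1-era server where the value is given in milliseconds (setting it in postgresql.conf and reloading applies it to all sessions instead of just this one):

```sql
SET log_min_duration_statement = 1000;  -- log every statement that runs longer than 1 second
-- from now on, any statement slower than 1s appears in the server log with its duration
```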
{
"msg_contents": "Hi, Markus,\n\nLe mardi 19 septembre 2006 à 15:09 +0200, Markus Schaber a écrit :\n> Hi, Jerome,\n> \n> Jérôme BENOIS wrote:\n> \n> >>> Now i Have 335 concurrent connections, i decreased work_mem parameter to\n> >>> 32768 and disabled Hyper Threading in BIOS. But my CPU load is still\n> >>> very important.\n> >> What are your settings for commit_siblings and commit_delay?\n> > It default :\n> > \n> > #commit_delay = 01 # range 0-100000, inmicroseconds\n> > #commit_siblings = 5 # range 1-1000\n> \n> You should uncomment them, and play with different settings. I'd try a\n> commit_delay of 100, and commit_siblings of 5 to start with.\n> \n> > I plan to return to previous version : 7.4.6 in and i will reinstall all\n> > in a dedicated server in order to reproduce and solve the problem.\n> \n> You should use at least 7.4.13 as it fixes some critical buts that were\n> in 7.4.6. They use the same on-disk format and query planner logic, so\n> they should not have any difference.\n> \n> I don't have much more ideas what the problem could be.\n> \n> Can you try to do some profiling (e. G. with statement logging) to see\n> what specific statements are the one that cause high cpu load?\n> \n> Are there other differences (besides the PostgreSQL version) between the\n> two installations? (Kernel, libraries, other software...)\nnothing.\n\nI returned to the previous version 7.4.6 in my production server, it's\nwork fine !\n\nAnd I plan to reproduce this problem in a dedicated server, and i will\nsend all informations in this list in the next week.\n\nI hope your help for solve this problem.\n\nCheers,\nJérôme.\n\n> HTH,\n> Markus\n-- \nJérôme,\n\npython -c \"print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for\np in '[email protected]'.split('@')])\"",
"msg_date": "Fri, 22 Sep 2006 09:43:52 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
},
{
"msg_contents": "Hi All,\n\n\tI reply to me, we solved a CPU Load problem. We had an external batch\nwho used an expensive SQL view and took 99% of the CPU.\n\n\tThanks all for you help !\n\n -------------------\n\n\tI started the HAPlatform open-source project is a part of Share'nGo\nProject, this goal is define all documentation and scripts required to\ninstall and maintain High Available platform.\n\nTow platform are targeted :\n\n * LAPJ : Linux Apache PostgreSQL Java\n\n * LAMP : Linux Apache MySQL PHP\n\n\tThe first documentation is here (it's my postgres configuration) :\n\nhttp://sharengo.org/haplatform/docs/PostgreSQL/en/html_single/index.html\n\n\nCheers,\nJérôme.\n-- \nOpen-Source : http://www.sharengo.org\nCorporate : http://www.argia-engineering.fr\n\nLe vendredi 22 septembre 2006 à 09:43 +0200, Jérôme BENOIS a écrit :\n> Hi, Markus,\n> \n> Le mardi 19 septembre 2006 à 15:09 +0200, Markus Schaber a écrit :\n> > Hi, Jerome,\n> > \n> > Jérôme BENOIS wrote:\n> > \n> > >>> Now i Have 335 concurrent connections, i decreased work_mem parameter to\n> > >>> 32768 and disabled Hyper Threading in BIOS. But my CPU load is still\n> > >>> very important.\n> > >> What are your settings for commit_siblings and commit_delay?\n> > > It default :\n> > > \n> > > #commit_delay = 01 # range 0-100000, inmicroseconds\n> > > #commit_siblings = 5 # range 1-1000\n> > \n> > You should uncomment them, and play with different settings. I'd try a\n> > commit_delay of 100, and commit_siblings of 5 to start with.\n> > \n> > > I plan to return to previous version : 7.4.6 in and i will reinstall all\n> > > in a dedicated server in order to reproduce and solve the problem.\n> > \n> > You should use at least 7.4.13 as it fixes some critical buts that were\n> > in 7.4.6. They use the same on-disk format and query planner logic, so\n> > they should not have any difference.\n> > \n> > I don't have much more ideas what the problem could be.\n> > \n> > Can you try to do some profiling (e. G. with statement logging) to see\n> > what specific statements are the one that cause high cpu load?\n> > \n> > Are there other differences (besides the PostgreSQL version) between the\n> > two installations? (Kernel, libraries, other software...)\n> nothing.\n> \n> I returned to the previous version 7.4.6 in my production server, it's\n> work fine !\n> \n> And I plan to reproduce this problem in a dedicated server, and i will\n> send all informations in this list in the next week.\n> \n> I hope your help for solve this problem.\n> \n> Cheers,\n> Jérôme.\n> \n> > HTH,\n> > Markus",
"msg_date": "Tue, 03 Oct 2006 09:36:22 +0200",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Load"
}
] |
[
{
"msg_contents": "My setup:\nFreebsd 6.1\nPostgresql 8.1.4\nMemory: 8GB\nSATA Disks \n\nRaid 1 10 spindles (2 as hot spares)\n500GB disks (16MB buffer), 7200 rpm\nRaid 10\n\nRaid 2 4 spindles\n150GB 10K rpm disks\nRaid 10\n\nshared_buffers = 10000\ntemp_buffers = 1500\nwork_mem = 32768 # 32MB\nmaintenance_work_mem = 524288 # 512MB\n\ncheckpoint_segments = 64\nJust increased to 64 today.. after reading this may help. Was 5 before.\n\npg_xlog on second raid (which sees very little activity)\n\nDatabase sizes: 1 200GB+ Db and 2 100GB+\n\nI run 3 daily \"vacuumdb -azv\". The vacuums were taking 2 to 3 hours.\nRecently we have started to do some data mass loading and now the vacuums \nare taking close to 5 hours AND it seems they may be slowing down the loads.\n\nThese are not bulk loads in the sense that we don't have a big file that we \ncan do a copy.. instead it is data which several programs are processing \nfrom some temporary tables so we have lots of inserts. There are also \nupdates to keep track of some totals.\n\nI am looking to either improve the time of the vacuum or decrease it's \nimpact on the loads.\nAre the variables:\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\nIs that the way to go to decrease impact?\nOr should I try increasing maintenance_work_mem to 1GB?\n\nA sum of all running processes from \"ps auxw\" shows about 3.5GB in \"VSZ\" and \n1.5GB in \"RSS\".\n\nI am also going to check if I have enough space to move the stage DB to the \nsecond raid which shows very little activity in iostat.\n\nAny other suggestions?\n",
"msg_date": "Thu, 14 Sep 2006 11:23:01 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuums on large busy databases"
},
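On the vacuum_cost_delay question: those settings throttle a plain VACUUM's I/O so it interferes less with concurrent loads, at the price of a longer-running vacuum. A hedged sketch with illustrative values rather than a tested recommendation:

```sql
-- per-session throttling for a manual VACUUM (autovacuum has its own cost settings)
SET vacuum_cost_delay = 10;    -- sleep 10 ms each time the cost budget is spent
SET vacuum_cost_limit = 200;   -- the default budget; lowering it makes the sleeps more frequent
VACUUM VERBOSE ANALYZE;        -- the nightly database-wide vacuum, now running more gently
```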
{
"msg_contents": "\nOn 14-Sep-06, at 11:23 AM, Francisco Reyes wrote:\n\n> My setup:\n> Freebsd 6.1\n> Postgresql 8.1.4\n> Memory: 8GB\n> SATA Disks\n> Raid 1 10 spindles (2 as hot spares)\n> 500GB disks (16MB buffer), 7200 rpm\n> Raid 10\n>\n> Raid 2 4 spindles\n> 150GB 10K rpm disks\n> Raid 10\n>\n> shared_buffers = 10000\nshared buffers should be considerably more, depending on what else \nis running\n> temp_buffers = 1500\n> work_mem = 32768 # 32MB\n> maintenance_work_mem = 524288 # 512MB\n>\n> checkpoint_segments = 64\n> Just increased to 64 today.. after reading this may help. Was 5 \n> before.\n\nWhat is effective_cache set to ?\n>\n> pg_xlog on second raid (which sees very little activity)\n>\n> Database sizes: 1 200GB+ Db and 2 100GB+\n>\n> I run 3 daily \"vacuumdb -azv\". The vacuums were taking 2 to 3 hours\nwhy not just let autovac do it's thing ?\n\n.\n> Recently we have started to do some data mass loading and now the \n> vacuums are taking close to 5 hours AND it seems they may be \n> slowing down the loads.\n>\n> These are not bulk loads in the sense that we don't have a big file \n> that we can do a copy.. instead it is data which several programs \n> are processing from some temporary tables so we have lots of \n> inserts. There are also updates to keep track of some totals.\n>\n> I am looking to either improve the time of the vacuum or decrease \n> it's impact on the loads.\n> Are the variables:\n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n>\n> Is that the way to go to decrease impact?\n> Or should I try increasing maintenance_work_mem to 1GB?\n>\n> A sum of all running processes from \"ps auxw\" shows about 3.5GB in \n> \"VSZ\" and 1.5GB in \"RSS\".\n>\n> I am also going to check if I have enough space to move the stage \n> DB to the second raid which shows very little activity in iostat.\n>\n> Any other suggestions?\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Thu, 14 Sep 2006 12:57:46 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Dave Cramer writes:\n\n> What is effective_cache set to ?\n\nDefault of 1000. Was just reading about this parameter.\nWill try increasing it to 8192 (8192 * 8K = 64MB)\n\n> why not just let autovac do it's thing ?\n\nHave been playing with decresing the autovac values. With 100GB+ tables even \n1% in autovacuum_vacuum_scale_factor is going to be 1GB.\n\nRight now trying:\nautovacuum_vacuum_threshold = 50000\nautovacuum_analyze_threshold = 100000\nautovacuum_vacuum_scale_factor = 0.05\nautovacuum_analyze_scale_factor = 0.1\n\nInitially I had tried autovacuum_vacuum_scale_factor = 0.2 and that was not \nenough. Had to end up bumping fsm_pages several times. After I started to do \nthe 3 daily vacuums they are holding steady.. perhaps I will try only 1 \nvacuum now that I decreased the threshold to 0.05\n\nI would be curious what others have in their autovacuum parameters for \n100GB+ databases\n",
"msg_date": "Thu, 14 Sep 2006 13:17:40 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Dave Cramer writes:\n\n> What is effective_cache set to ?\n\nIncreasing this seems to have helped significantly a web app. Load times \nseem magnitudes faster.\n\nIncreased it to effective_cache_size = 12288 # 96MB\n\nWhat is a reasonable number?\nI estimate I have at least 1 to 2 GB free of memory.\n\nDon't want to get too carried away right now with too many changes.. because \nright now we have very few connections to that database (usually less than \n10), but I expect it to go to a norm of 20+.. so need to make sure I won't \nmake changes that will be a problem in that scenario.\n\nSo far only see one setting that can be an issue: work_mem \nso have it set to only 32768.\n",
"msg_date": "Thu, 14 Sep 2006 13:36:55 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Francisco\nOn 14-Sep-06, at 1:36 PM, Francisco Reyes wrote:\n\n> Dave Cramer writes:\n>\n>> What is effective_cache set to ?\n>\n> Increasing this seems to have helped significantly a web app. Load \n> times seem magnitudes faster.\n>\n> Increased it to effective_cache_size = 12288 # 96MB\n>\n> What is a reasonable number?\n> I estimate I have at least 1 to 2 GB free of memory.\nYou are using 6G of memory for something else ?\n\neffective cache should be set to 75% of free memory\n>\n> Don't want to get too carried away right now with too many \n> changes.. because right now we have very few connections to that \n> database (usually less than 10), but I expect it to go to a norm of \n> 20+.. so need to make sure I won't make changes that will be a \n> problem in that scenario.\n>\n> So far only see one setting that can be an issue: work_mem so have \n> it set to only 32768.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Thu, 14 Sep 2006 14:43:49 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
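effective_cache_size is expressed in 8 kB pages, so the "75% of free memory" rule of thumb translates roughly as below; the two sizes are just the figures discussed in this thread (6 GB and the more conservative 1.5 GB), not a recommendation:

```sql
SELECT (6 * 1024 * 1024) / 8 AS pages_for_6gb,    -- 786432 -> effective_cache_size = 786432
       (1536 * 1024) / 8     AS pages_for_1_5gb;  -- 196608 -> effective_cache_size = 196608
```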
{
"msg_contents": "Dave Cramer writes:\n\n>> What is a reasonable number?\n>> I estimate I have at least 1 to 2 GB free of memory.\n> You are using 6G of memory for something else ?\n\nRight now adding up from ps the memory I have about 2GB.\nHave an occassional program which uses up to 2GB.\n\nThen I want to give some breathing room for when we have more connections so \nthat work_mem doesn't make the macihne hit swap.\nAt 32MB say worst case scenario I may have 50 operations using those 32MB, \nthat's about 1.5GB.\n\n2+2+1.5 = 5.5\nSo I believe I have free about 2.5GB \n \n> effective cache should be set to 75% of free memory\n\nSo I will increase to 1.5GB then.\n\nI may have more memory, but this is likely a safe value.\n\nThanks for the feedback.. Didn't even know about this setting.\n",
"msg_date": "Thu, 14 Sep 2006 16:30:46 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, Sep 14, 2006 at 04:30:46PM -0400, Francisco Reyes wrote:\n>Right now adding up from ps the memory I have about 2GB.\n\nThat's not how you find out how much memory you have. Try \"free\" or \nsomesuch.\n\nMike Stone\n\n",
"msg_date": "Thu, 14 Sep 2006 16:42:22 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, 2006-09-14 at 11:23 -0400, Francisco Reyes wrote:\n> My setup:\n> Freebsd 6.1\n> Postgresql 8.1.4\n> Memory: 8GB\n> SATA Disks \n> \n> Raid 1 10 spindles (2 as hot spares)\n> 500GB disks (16MB buffer), 7200 rpm\n> Raid 10\n> \n> Raid 2 4 spindles\n> 150GB 10K rpm disks\n> Raid 10\n> \n> shared_buffers = 10000\n\nWhy so low? You have a lot of memory, and shared_buffers are an\nimportant performance setting. I have a machine with 4GB of RAM, and I\nfound my best performance was around 150000 shared buffers, which is a\nlittle more than 1GB.\n\nThe default value of 1000 was chosen so that people who use PostgreSQL\nonly incidentally among many other programs do not notice an impact on\ntheir system. It should be drastically increased when using PostgreSQL\non a dedicated system, particularly with versions 8.1 and later.\n\nAlso, a VACUUM helps a table that gets UPDATEs and DELETEs. If you're\ndoing mostly inserts on a big table, there may be no need to VACUUM it 3\ntimes per day. Try VACUUMing the tables that get more UPDATEs and\nDELETEs more often, and if a table has few UPDATEs/DELETEs, VACUUM it\nonly occasionally. You can run ANALYZE more frequently on all the\ntables, because it does not have to read the entire table and doesn't\ninterfere with the rest of the operations.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 14 Sep 2006 14:36:28 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
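A sketch of the per-table routine Jeff describes; the table name is hypothetical, standing in for the small, heavily-UPDATEd totals table mentioned later in the thread:

```sql
VACUUM ANALYZE running_totals;  -- hypothetical small table with constant UPDATEs: vacuum it often
ANALYZE;                        -- refresh statistics everywhere; cheap, since it samples rather than scans
-- the big, mostly-INSERT tables can be vacuumed far less often (but watch xid wraparound)
```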
{
"msg_contents": "Francisco\nOn 14-Sep-06, at 4:30 PM, Francisco Reyes wrote:\n\n> Dave Cramer writes:\n>\n>>> What is a reasonable number?\n>>> I estimate I have at least 1 to 2 GB free of memory.\n>> You are using 6G of memory for something else ?\n>\n> Right now adding up from ps the memory I have about 2GB.\n> Have an occassional program which uses up to 2GB.\n>\n> Then I want to give some breathing room for when we have more \n> connections so that work_mem doesn't make the macihne hit swap.\n> At 32MB say worst case scenario I may have 50 operations using \n> those 32MB, that's about 1.5GB.\n>\n> 2+2+1.5 = 5.5\n> So I believe I have free about 2.5GB\n>> effective cache should be set to 75% of free memory\n>\n> So I will increase to 1.5GB then.\npersonally, I'd set this to about 6G. This doesn't actually consume \nmemory it is just a setting to tell postgresql how much memory is \nbeing used for cache and kernel buffers\n>\n> I may have more memory, but this is likely a safe value.\n>\n> Thanks for the feedback.. Didn't even know about this setting.\nregarding shared buffers I'd make this much bigger, like 2GB or more\n\ndave\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Thu, 14 Sep 2006 17:53:09 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Jeff Davis writes:\n\n>> shared_buffers = 10000\n> \n> Why so low?\n\nMy initial research was not thorough enough with regards to how to compute \nhow many to use.\n\n You have a lot of memory, and shared_buffers are an\n> important performance setting. I have a machine with 4GB of RAM, and I\n> found my best performance was around 150000 shared buffers, which is a\n> little more than 1GB.\n\nGoing to make it 256,000 (2GB) \n \n> on a dedicated system, particularly with versions 8.1 and later.\n\nWas reading that. Seems to be that around 1/4 of real memory is a good \nstarting point.\n \n> Also, a VACUUM helps a table that gets UPDATEs and DELETEs. If you're\n> doing mostly inserts on a big table, there may be no need to VACUUM it 3\n> times per day. Try VACUUMing the tables that get more UPDATEs and\n> DELETEs more often, and if a table has few UPDATEs/DELETEs, VACUUM it\n> only occasionally.\n\nWill have to talk to the developers. In particular for every insert there \nare updates. I know they have at least one table that gets udpated to have \nsummarized totals.\n\nOne of the reasons I was doing the vacuumdb of the entire DB was to get the \nnumber of shared-buffers. Now that I have an idea of how much I need I will \nlikely do something along the lines of what you suggest. One full for \neverything at night and during the days perhaps do the tables that get more \nupdated. I also set more aggresive values on autovacuum so that should help \nsome too.\n\n> You can run ANALYZE more frequently on all the\n> tables, because it does not have to read the entire table and doesn't\n> interfere with the rest of the operations.\n\nOn a related question. Right now I have my autovacuums set as:\nautovacuum_vacuum_threshold = 50000 \nautovacuum_analyze_threshold = 100000\nautovacuum_vacuum_scale_factor = 0.05\nautovacuum_analyze_scale_factor = 0.1\n\nBased on what you described above then I could set my analyze values to the \nsame as the vacuum to have something like\nautovacuum_vacuum_threshold = 50000\nautovacuum_analyze_threshold = 50000\nautovacuum_vacuum_scale_factor = 0.05\nautovacuum_analyze_scale_factor = 0.05\n\nFor DBs with hundreds of GBs would it be better to get \nautovacuum_analyze_scale_factor to even 0.01? The permanent DB is over 200GB \nand growing.. the 100GB ones are staging.. By the time we have finished \nmigrating all the data from the old system it will be at least 300GB. 0.01 \nis still 3GB.. pretty sizable.\n\nDo the thresholds tabke presedence over the scale factors? Is it basically \nif either one of them gets hit that the action will take place?\n",
"msg_date": "Thu, 14 Sep 2006 19:30:52 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
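To put numbers on the threshold-versus-scale-factor question (Jeff spells out the actual rule later in the thread): with the settings quoted above, a hypothetical 100-million-row table would need roughly this many dead tuples before autovacuum acts:

```sql
SELECT 0.05 * 100000000 + 50000 AS vacuum_trigger,   -- 5,050,000 dead rows with scale factor 0.05
       0.01 * 100000000 + 50000 AS analyze_trigger;  -- 1,050,000 with a 0.01 analyze scale factor
```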
{
"msg_contents": "Dave Cramer writes:\n\n> personally, I'd set this to about 6G. This doesn't actually consume \n> memory it is just a setting to tell postgresql how much memory is \n> being used for cache and kernel buffers\n\nGotcha. Will increase further. \n\n\n> regarding shared buffers I'd make this much bigger, like 2GB or more\n\nWill do 2GB on the weekend. From what I read this requires shared memory so \nhave to restart my machine (FreeBSD).\n\nif I plan to give shared buffers 2GB, how much more over that should I give \nthe total shared memory kern.ipc.shmmax? 2.5GB?\n\nAlso will shared buffers impact inserts/updates at all?\nI wish the postgresql.org site docs would mention what will be impacted.\n\nComments like: This setting must be at least 16, as well as at least twice \nthe value of max_connections; however, settings significantly higher than \nthe minimum are usually needed for good performance.\n\nAre usefull, but could use some improvement.. increase on what? All \nperformance? inserts? updates? selects?\n\nFor instance, increasing effective_cache_size has made a noticeable \ndifference in selects. However as I talk to the developers we are still \ndoing marginally in the inserts. About 150/min.\n\nThere is spare CPU cycles, both raid cards are doing considerably less they \ncan do.. so next I am going to try and research what parameters I need to \nbump to increase inserts. Today I increased checkpoint_segments from the \ndefault to 64. Now looking at wall_buffers.\n\nIt would be most helpfull to have something on the docs to specify what each \nsetting affects most such as reads, writes, updates, inserts, etc.. \n",
"msg_date": "Thu, 14 Sep 2006 19:50:56 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
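Rough byte math for the shmmax question (the exact overhead on top of shared_buffers depends on max_connections and other settings, so the 256 MB margin below is only an assumption; on FreeBSD the relevant sysctls are kern.ipc.shmmax and usually kern.ipc.shmall as well):

```sql
SELECT 262144 * 8192::bigint                   AS shared_buffers_bytes, -- 2 GB = 262144 pages of 8 kB
       262144 * 8192::bigint + 256*1024*1024   AS shmmax_with_margin;   -- ~2.25 GB leaves some headroom
```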
{
"msg_contents": "Michael Stone writes:\n\n> On Thu, Sep 14, 2006 at 04:30:46PM -0400, Francisco Reyes wrote:\n>>Right now adding up from ps the memory I have about 2GB.\n> \n> That's not how you find out how much memory you have. Try \"free\" or \n> somesuch.\n\nWasn't trying to get an accurate value, just a ballpark figure.\n\nWhen you say \"free\" are you refering to the free value from top? or some \nprogram called free?\n",
"msg_date": "Thu, 14 Sep 2006 20:04:39 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "\nOn 14-Sep-06, at 7:50 PM, Francisco Reyes wrote:\n\n> Dave Cramer writes:\n>\n>> personally, I'd set this to about 6G. This doesn't actually \n>> consume memory it is just a setting to tell postgresql how much \n>> memory is being used for cache and kernel buffers\n>\n> Gotcha. Will increase further.\n>\n>> regarding shared buffers I'd make this much bigger, like 2GB or more\n>\n> Will do 2GB on the weekend. From what I read this requires shared \n> memory so have to restart my machine (FreeBSD).\n>\n> if I plan to give shared buffers 2GB, how much more over that \n> should I give the total shared memory kern.ipc.shmmax? 2.5GB?\n\nI generally make it slightly bigger. is shmmax the size of the \nmaximum chunk allowed or the total ?\n>\n> Also will shared buffers impact inserts/updates at all?\n> I wish the postgresql.org site docs would mention what will be \n> impacted.\nYes, it will, however not as dramatically as what you are seeing with \neffective_cache\n>\n> Comments like: This setting must be at least 16, as well as at \n> least twice the value of max_connections; however, settings \n> significantly higher than the minimum are usually needed for good \n> performance.\n>\n> Are usefull, but could use some improvement.. increase on what? All \n> performance? inserts? updates? selects?\n>\n> For instance, increasing effective_cache_size has made a noticeable \n> difference in selects. However as I talk to the developers we are \n> still doing marginally in the inserts. About 150/min.\nThe reason is that with effective_cache the select plans changed (for \nthe better) ; it's unlikely that the insert plans will change.\n>\n> There is spare CPU cycles, both raid cards are doing considerably \n> less they can do.. so next I am going to try and research what \n> parameters I need to bump to increase inserts. Today I increased \n> checkpoint_segments from the default to 64. Now looking at \n> wall_buffers.\n>\n> It would be most helpfull to have something on the docs to specify \n> what each setting affects most such as reads, writes, updates, \n> inserts, etc..\nIt's an art unfortunately.\n>\n\nDave\n\n",
"msg_date": "Thu, 14 Sep 2006 20:07:09 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, Sep 14, 2006 at 08:04:39PM -0400, Francisco Reyes wrote:\n>Wasn't trying to get an accurate value, just a ballpark figure.\n\nWon't even be a ballpark.\n\n>When you say \"free\" are you refering to the free value from top? or some \n>program called free?\n\nDepends on your OS.\n\nMike Stone\n",
"msg_date": "Thu, 14 Sep 2006 20:08:21 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, 2006-09-14 at 19:30 -0400, Francisco Reyes wrote:\n> Will have to talk to the developers. In particular for every insert there \n> are updates. I know they have at least one table that gets udpated to have \n> summarized totals.\n> \n\nIf the table being updated is small, you have no problems at all. VACUUM\nthat table frequently, and the big tables rarely. If the big tables are\nonly INSERTs and SELECTs, the only reason to VACUUM is to avoid the xid\nwraparound. See:\n\n<http://www.postgresql.org/docs/8.1/static/maintenance.html>\n\nSee which tables need VACUUM, and how often. Use the statistics to see\nif VACUUMing will gain you anything before you do it.\n\n> One of the reasons I was doing the vacuumdb of the entire DB was to get the \n> number of shared-buffers. Now that I have an idea of how much I need I will \n> likely do something along the lines of what you suggest. One full for \n> everything at night and during the days perhaps do the tables that get more \n> updated. I also set more aggresive values on autovacuum so that should help \n> some too.\n\nWhy VACUUM FULL? That is generally not needed. Re-evaluate whether\nyou're gaining things with all these VACUUMs.\n\n> > You can run ANALYZE more frequently on all the\n> > tables, because it does not have to read the entire table and doesn't\n> > interfere with the rest of the operations.\n> \n> On a related question. Right now I have my autovacuums set as:\n> autovacuum_vacuum_threshold = 50000 \n> autovacuum_analyze_threshold = 100000\n> autovacuum_vacuum_scale_factor = 0.05\n> autovacuum_analyze_scale_factor = 0.1\n> \n> Based on what you described above then I could set my analyze values to the \n> same as the vacuum to have something like\n> autovacuum_vacuum_threshold = 50000\n> autovacuum_analyze_threshold = 50000\n> autovacuum_vacuum_scale_factor = 0.05\n> autovacuum_analyze_scale_factor = 0.05\n> \n> For DBs with hundreds of GBs would it be better to get \n> autovacuum_analyze_scale_factor to even 0.01? The permanent DB is over 200GB \n> and growing.. the 100GB ones are staging.. By the time we have finished \n> migrating all the data from the old system it will be at least 300GB. 0.01 \n> is still 3GB.. pretty sizable.\n\nJust test how long an ANALYZE takes, and compare that to how quickly\nyour statistics get out of date. As long as postgres is choosing correct\nplans, you are ANALYZE-ing often enough.\n\nANALYZE takes statistical samples to avoid reading the whole table, so\nit's really not a major influence on performance in my experience.\n\n> Do the thresholds tabke presedence over the scale factors? Is it basically \n> if either one of them gets hit that the action will take place?\n\nu = number of tuples UPDATE-ed or DELETE-ed (i.e. dead tuples)\nr = the (estimated) number of total live tuples in the relation\n\nIn a loop, autovacuum checks to see if u >\n(r*autovacuum_vacuum_scale_factor + autovacuum_vacuum_threshold), and if\nso, it runs VACUUM. If not, it sleeps. It works the same way for\nANALYZE.\n\nSo, in a large table, the scale_factor is the dominant term. In a small\ntable, the threshold is the dominant term. But both are taken into\naccount.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 14 Sep 2006 17:35:02 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
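To make the trigger formula above concrete, a quick back-of-the-envelope check; the row counts are made up purely for illustration:

   # dead tuples needed before autovacuum runs VACUUM on a 100-million-row table
   echo $(( 100000000 * 5 / 100 + 50000 ))    # scale 0.05, threshold 50000 -> 5050000
   echo $(( 100000000 * 1 / 100 + 50000 ))    # scale 0.01, threshold 50000 -> 1050000

So on very large tables the scale factor dominates, exactly as described: dropping it from 0.05 to 0.01 makes autovacuum roughly five times more eager.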
{
"msg_contents": "On Thu, 2006-09-14 at 20:04 -0400, Francisco Reyes wrote:\n> Michael Stone writes:\n> \n> > On Thu, Sep 14, 2006 at 04:30:46PM -0400, Francisco Reyes wrote:\n> >>Right now adding up from ps the memory I have about 2GB.\n> > \n> > That's not how you find out how much memory you have. Try \"free\" or \n> > somesuch.\n> \n> Wasn't trying to get an accurate value, just a ballpark figure.\n> \n> When you say \"free\" are you refering to the free value from top? or some \n> program called free?\n> \n\nAny long-running system will have very little \"free\" memory. Free memory\nis wasted memory, so the OS finds some use for it.\n\nThe VM subsystem of an OS uses many tricks, including the sharing of\nmemory among processes and the disk buffer cache (which is shared also).\nIt's hard to put a number on the memory demands of a given process, and\nit's also hard to put a number on the ability of a system to accommodate\na new process with new memory demands.\n\nYou have 8GB total, which sounds like plenty to me. Keep in mind that if\nyou have the shared_memory all allocated on physical memory (i.e.\n\"kern.ipc.shm_use_phys: 1\" on FreeBSD), then that amount of physical\nmemory will never be available to processes other than postgres. At 2GB,\nthat still leaves 6GB for the other process, so you should be fine.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Thu, 14 Sep 2006 17:52:02 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, 2006-09-14 at 20:07 -0400, Dave Cramer wrote:\n> On 14-Sep-06, at 7:50 PM, Francisco Reyes wrote:\n> \n> > Dave Cramer writes:\n> >\n> >> personally, I'd set this to about 6G. This doesn't actually \n> >> consume memory it is just a setting to tell postgresql how much \n> >> memory is being used for cache and kernel buffers\n> >\n> > Gotcha. Will increase further.\n> >\n> >> regarding shared buffers I'd make this much bigger, like 2GB or more\n> >\n> > Will do 2GB on the weekend. From what I read this requires shared \n> > memory so have to restart my machine (FreeBSD).\n> >\n> > if I plan to give shared buffers 2GB, how much more over that \n> > should I give the total shared memory kern.ipc.shmmax? 2.5GB?\n> \n> I generally make it slightly bigger. is shmmax the size of the \n> maximum chunk allowed or the total ?\n\nThat's the total on FreeBSD, per process. I think to allow more than 2GB\nthere you may need a special compile option in the kernel.\n\n> > Also will shared buffers impact inserts/updates at all?\n> > I wish the postgresql.org site docs would mention what will be \n> > impacted.\n> Yes, it will, however not as dramatically as what you are seeing with \n> effective_cache\n> >\n> > Comments like: This setting must be at least 16, as well as at \n> > least twice the value of max_connections; however, settings \n> > significantly higher than the minimum are usually needed for good \n> > performance.\n> >\n> > Are usefull, but could use some improvement.. increase on what? All \n> > performance? inserts? updates? selects?\n> >\n> > For instance, increasing effective_cache_size has made a noticeable \n> > difference in selects. However as I talk to the developers we are \n> > still doing marginally in the inserts. About 150/min.\n> The reason is that with effective_cache the select plans changed (for \n> the better) ; it's unlikely that the insert plans will change.\n\nThere aren't multiple INSERT plans (however, there could be a subselect\nor something, which would be planned separately). INSERT is INSERT. That\nmeans effective_cache_size will have zero effect on INSERT.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 14 Sep 2006 17:59:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "On Thu, Sep 14, 2006 at 05:52:02PM -0700, Jeff Davis wrote:\n>Any long-running system will have very little \"free\" memory. Free memory\n>is wasted memory, so the OS finds some use for it.\n\nThe important part of the output of \"free\" in this context isn't how \nmuch is free, it's how much is cache vs how much is allocated to \nprograms. Other os's have other ways of telling the same thing. Neither \nof those numbers generally has much to do with how much shows up in ps \nwhen large amounts of shared memory are in use.\n\nMike Stone\n",
"msg_date": "Thu, 14 Sep 2006 21:04:32 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
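For what it's worth, a couple of ways to see the cache-versus-allocated split being described here; the exact commands and counter names depend on the OS and release, so treat these as a sketch:

   # Linux: the "buffers" and "cached" columns are the interesting part
   free -m

   # FreeBSD: the Mem: header line in top breaks out Active/Inact/Wired/Cache/Buf/Free,
   # or pull the raw VM page counters directly
   sysctl hw.physmem
   sysctl vm.stats.vm.v_free_count vm.stats.vm.v_inactive_count vm.stats.vm.v_cache_count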
{
"msg_contents": "On Thu, 2006-09-14 at 19:50 -0400, Francisco Reyes wrote:\n> > regarding shared buffers I'd make this much bigger, like 2GB or more\n> \n> Will do 2GB on the weekend. From what I read this requires shared memory so \n> have to restart my machine (FreeBSD).\n> \n\nYou should be able to do:\n# sysctl -w kern.ipc.shmmax=2147483647\n\n> if I plan to give shared buffers 2GB, how much more over that should I give \n> the total shared memory kern.ipc.shmmax? 2.5GB?\n> \n\nTo get it higher than 2GB, you may need to recompile the kernel, but you\nshould be able to get 2GB without a restart.\n\n> Also will shared buffers impact inserts/updates at all?\n> I wish the postgresql.org site docs would mention what will be impacted.\n> \n\nThey will not have a real impact on INSERTs, because an INSERT still has\nto be logged in the WAL before commit. Technically, it may make a\ndifference, but I would not expect much.\n\nshared_buffers has a big impact on UPDATEs, because an UPDATE needs to\nfind the record to UPDATE first. An UPDATE is basically a DELETE and an\nINSERT in one transaction.\n\n> Comments like: This setting must be at least 16, as well as at least twice \n> the value of max_connections; however, settings significantly higher than \n> the minimum are usually needed for good performance.\n> \n> Are usefull, but could use some improvement.. increase on what? All \n> performance? inserts? updates? selects?\n\nMore shared_buffers means fewer reads from disk. If you have 10MB worth\nof tables, having 100MB worth of shared buffers is useless because they\nwill be mostly empty. However, if you have 100MB of shared buffers and\nyou access records randomly from a 100 petabyte database, increasing\nshared_buffers to 200MB doesn't help much, because the chances that the\nrecord you need is in a shared buffer already are almost zero.\n\nShared buffers are a cache, pure and simple. When you have \"locality of\nreference\", caches are helpful. Sometimes that's temporal locality (if\nyou are likely to access data that you recently accessed), and sometimes\nthat's spatial locality (if you access block 10, you're likely to access\nblock 11). If you have \"locality of referece\" -- and almost every\ndatabase does -- shared_buffers help.\n\n> For instance, increasing effective_cache_size has made a noticeable \n> difference in selects. However as I talk to the developers we are still \n> doing marginally in the inserts. About 150/min.\n\neffective_cache_size affects only the plan generated. INSERTs aren't\nplanned because, well, it's an INSERT and there's only one thing to do\nand only one way to do it.\n\n> There is spare CPU cycles, both raid cards are doing considerably less they \n> can do.. so next I am going to try and research what parameters I need to \n> bump to increase inserts. Today I increased checkpoint_segments from the \n> default to 64. Now looking at wall_buffers.\n\nYou won't see any amazing increases from those. You can improve INSERTs\na lot if you have a battery-backed cache on your RAID card and set it to\nWriteBack mode (make sure to disable disk caches though, those aren't\nbattery backed and you could lose data). If you do this, you should be\nable to do 1000's of inserts per second.\n\nAnother thing to look at is \"commit_delay\". If you are trying to commit\nmany INSERTs at once, normally they will be fsync()d individually, which\nis slow. 
However, by adding a commit delay, postgres can batch a few\ninserts into one fsync() call, which can help a lot.\n\n> It would be most helpfull to have something on the docs to specify what each \n> setting affects most such as reads, writes, updates, inserts, etc.. \n\nI agree that they could be improved. It gets complicated quickly though,\nand it's hard to generalize the effect that a performance setting will\nhave. They are all very interdependent.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Thu, 14 Sep 2006 18:34:22 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
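The write-side settings mentioned in this message, shown as a postgresql.conf sketch; the values are illustrative only. On 8.1, commit_delay is in microseconds and wal_buffers is counted in 8kB pages.

   checkpoint_segments = 64      # fewer, larger checkpoints during heavy loading
   wal_buffers = 64              # 512kB of WAL buffer space
   commit_delay = 10000          # wait up to 10ms so several commits share one fsync
   commit_siblings = 5           # ...but only when at least 5 other transactions are active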
{
"msg_contents": "On Thu, 2006-09-14 at 21:04 -0400, Michael Stone wrote:\n> On Thu, Sep 14, 2006 at 05:52:02PM -0700, Jeff Davis wrote:\n> >Any long-running system will have very little \"free\" memory. Free memory\n> >is wasted memory, so the OS finds some use for it.\n> \n> The important part of the output of \"free\" in this context isn't how \n> much is free, it's how much is cache vs how much is allocated to \n> programs. Other os's have other ways of telling the same thing. Neither \n> of those numbers generally has much to do with how much shows up in ps \n> when large amounts of shared memory are in use.\n\nRight, ps doesn't give you much help. But he didn't tell us about the\nprocess. If a process is using all the buffer cache, and you take away\nthat memory, it could turn all the reads that previously came from the\nbuffer cache into disk reads, leading to major slowdown and interference\nwith the database.\n\nConversely, if you have a large program running, it may not use much of\nit's own memory, and perhaps some rarely-accessed pages could be paged\nout in favor of more buffer cache. So even if all your memory is taken\nwith resident programs, your computer may easily accommodate more\nprocesses by paging out rarely-used process memory.\n\nIf he knows a little more about the process than he can make a better\ndetermination. But I don't think it will be much of a problem with 8GB\nof physical memory.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 14 Sep 2006 18:44:28 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Hi, Francisco,\n\nFrancisco Reyes wrote:\n\n> I am looking to either improve the time of the vacuum or decrease it's\n> impact on the loads.\n> Are the variables:\n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n\nJust to avoid a silly mistake:\n\nYou pasted those settings with # sign, that means that PostgreSQL does\ntreat them as comments, and uses the defaults instead. You should make\nshure that you use \"real\" settings in your config.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Fri, 15 Sep 2006 11:48:07 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
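A small before/after of the point Markus is making; the value 10 is just an example, and vacuum_cost_delay is picked up on a reload rather than a restart:

   #vacuum_cost_delay = 0        # commented out: the compiled-in default applies
   vacuum_cost_delay = 10        # uncommented: 10ms naps during vacuum cost limiting

   pg_ctl -D /usr/local/pgsql/data reload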
{
"msg_contents": "On Thu, Sep 14, 2006 at 11:23:01AM -0400, Francisco Reyes wrote:\n> My setup:\n> Freebsd 6.1\n> Postgresql 8.1.4\n> Memory: 8GB\n> SATA Disks \n> \n> Raid 1 10 spindles (2 as hot spares)\n> 500GB disks (16MB buffer), 7200 rpm\n> Raid 10\n> \n> Raid 2 4 spindles\n> 150GB 10K rpm disks\n> Raid 10\n> \n> shared_buffers = 10000\n> temp_buffers = 1500\n> work_mem = 32768 # 32MB\n> maintenance_work_mem = 524288 # 512MB\n> \n> checkpoint_segments = 64\n> Just increased to 64 today.. after reading this may help. Was 5 before.\n> \n> pg_xlog on second raid (which sees very little activity)\n\nBTW, on some good raid controllers (with battery backup and\nwrite-caching), putting pg_xlog on a seperate partition doesn't really\nhelp, so you might want to try combining everything.\n\nEven if you stay with 2 partitions, I'd cut pg_xlog back to just a\nsimple mirror.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 18 Sep 2006 17:40:30 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuums on large busy databases"
},
{
"msg_contents": "Jim C. Nasby writes:\n\n> BTW, on some good raid controllers (with battery backup and\n> write-caching), putting pg_xlog on a seperate partition doesn't really\n> help, so you might want to try combining everything.\n\nPlanning to put a busy database on second raid or perhaps some index files.\nSo far the second raid is highly under utilized.\n \n> Even if you stay with 2 partitions, I'd cut pg_xlog back to just a\n> simple mirror.\n\nI am considering to put the pg_xlog back to the main raid. Primarily because \nwe have two hot spares.. on top of RAID 10.. so it is safer. \n",
"msg_date": "Mon, 18 Sep 2006 20:21:16 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuums on large busy databases"
}
] |
[
{
"msg_contents": "I'm experiment with RAID, looking for an inexpensive way to boost performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG dual-port SATA cards. (NB: I don't plan to run RAID 0 in production, probably RAID 10, so no need to comment on the failure rate of RAID 0.)\n\nI used this raw serial-speed test:\n\n time sh -c \"dd if=/dev/zero of=./bigfile bs=8k count=1000000 && sync\"\n (unmount/remount)\n time sh -c \"dd if=./bigfile of=/dev/null bs=8k count=1000000 && sync\"\n\nWhich showed that the RAID 0 4-disk array was almost exactly twice as fast as each disk individually. I expected 4X performance for a 4-disk RAID 0. My suspicion is that each of these budget SATA cards is bandwidth limited; they can't actually handle two disks simultaneously, and I'd need to get four separate SATA cards to get 4X performance (or a more expensive card such as the Areca someone mentioned the other day).\n\nOn the other hand, it \"feels like\" (using our application) the seek performance is quite a bit better, which I'd expect given my hypothesis about the SIIG cards. I don't have concrete benchmarks on seek speed.\n\nThanks,\nCraig\n",
"msg_date": "Thu, 14 Sep 2006 11:05:32 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID 0 not as fast as expected"
},
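One way to narrow down where the 2X ceiling comes from is to repeat the read test against the raw devices, taking the filesystem out of the picture; the device names below are examples only, and the counts should be large enough to defeat caching:

   # a single member disk
   time dd if=/dev/sda of=/dev/null bs=1M count=4096
   # the md array built from all four disks
   time dd if=/dev/md0 of=/dev/null bs=1M count=16384

If the array number still tops out near twice a single disk, the bottleneck is the bus or the controllers rather than the filesystem or the RAID layer.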
{
"msg_contents": "On Thursday 14 September 2006 11:05, \"Craig A. James\" \n<[email protected]> wrote:\n> I'm experiment with RAID, looking for an inexpensive way to boost\n> performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG\n> dual-port SATA cards. (NB: I don't plan to run RAID 0 in production,\n> probably RAID 10, so no need to comment on the failure rate of RAID 0.)\n>\n\nAre those PCI cards? If yes, it's just a bus bandwidth limit.\n\n-- \nAlan\n",
"msg_date": "Thu, 14 Sep 2006 11:36:03 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Craig A. James wrote:\n> I'm experiment with RAID, looking for an inexpensive way to boost \n> performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG \n> dual-port SATA cards. (NB: I don't plan to run RAID 0 in production, \n> probably RAID 10, so no need to comment on the failure rate of RAID 0.)\n> \n> I used this raw serial-speed test:\n> \n> time sh -c \"dd if=/dev/zero of=./bigfile bs=8k count=1000000 && sync\"\n> (unmount/remount)\n> time sh -c \"dd if=./bigfile of=/dev/null bs=8k count=1000000 && sync\"\n> \n> Which showed that the RAID 0 4-disk array was almost exactly twice as \n> fast as each disk individually. I expected 4X performance for a 4-disk \n> RAID 0. My suspicion is that each of these budget SATA cards is \n\nI am assuming linux here, Linux software raid 0 is known not to be super \nduper.\n\nSecondly remember that there is overhead involved with using raid. The \ndirect correlation doesn't work.\n\nJoshua D. Drake\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 14 Sep 2006 11:49:43 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Alan Hodgson wrote:\n> On Thursday 14 September 2006 11:05, \"Craig A. James\" \n> <[email protected]> wrote:\n>> I'm experiment with RAID, looking for an inexpensive way to boost\n>> performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG\n>> dual-port SATA cards. (NB: I don't plan to run RAID 0 in production,\n>> probably RAID 10, so no need to comment on the failure rate of RAID 0.)\n>>\n> \n> Are those PCI cards? If yes, it's just a bus bandwidth limit.\n\nOk, that makes sense.\n\n One SATA disk = 52 MB/sec\n 4-disk RAID0 = 106 MB/sec\n \n PCI at 33 MHz x 32 bits (4 bytes) = 132 MB/sec.\n\nI guess getting to 80% of the theoretical speed is as much as I should expect.\n\nThanks,\nCraig\n",
"msg_date": "Thu, 14 Sep 2006 14:35:00 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "On Thu, 2006-09-14 at 16:35, Craig A. James wrote:\n> Alan Hodgson wrote:\n> > On Thursday 14 September 2006 11:05, \"Craig A. James\" \n> > <[email protected]> wrote:\n> >> I'm experiment with RAID, looking for an inexpensive way to boost\n> >> performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG\n> >> dual-port SATA cards. (NB: I don't plan to run RAID 0 in production,\n> >> probably RAID 10, so no need to comment on the failure rate of RAID 0.)\n> >>\n> > \n> > Are those PCI cards? If yes, it's just a bus bandwidth limit.\n> \n> Ok, that makes sense.\n> \n> One SATA disk = 52 MB/sec\n> 4-disk RAID0 = 106 MB/sec\n> \n> PCI at 33 MHz x 32 bits (4 bytes) = 132 MB/sec.\n> \n> I guess getting to 80% of the theoretical speed is as much as I should expect.\n\nNote that many mid to high end motherboards have multiple PCI busses /\nchannels, and you could put a card in each one and get > 132MByte/sec on\nthem.\n\nBut for a database, sequential throughput is almost never the real\nproblem. It's usually random access that counts, and for that a RAID 10\nis a pretty good choice.\n",
"msg_date": "Thu, 14 Sep 2006 16:58:35 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Josh,\n\nOn 9/14/06 11:49 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n\n> I am assuming linux here, Linux software raid 0 is known not to be super\n> duper.\n\nI've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n\n- Luke\n\n\n",
"msg_date": "Thu, 14 Sep 2006 20:36:54 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Josh,\n> \n> On 9/14/06 11:49 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n> \n>> I am assuming linux here, Linux software raid 0 is known not to be super\n>> duper.\n> \n> I've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n\nWith what? :)\n\n> \n> - Luke\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 14 Sep 2006 20:47:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Josh,\n\nOn 9/14/06 8:47 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n\n>> I've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n> \n> With what? :)\n\nSun X4500 (aka Thumper) running stock RedHat 4.3 (actually CentOS 4.3) with\nXFS and the linux md driver without lvm. Here is a summary of the results:\n\n \n Read Test \n RAID Level Max Readahead (KB) RAID Chunksize Max Readahead on Disks (KB)\nMax Time (s) Read Bandwidth (MB/s)\n 0 65536 64 256 16.689 1,917.43\n 0 4096 64 256 21.269 1,504.54\n 0 65536 256 256 17.967 1,781.04\n 0 2816 256 256 18.835 1,698.96\n 0 65536 1024 256 18.538 1,726.18\n 0 65536 64 512 18.295 1,749.11\n 0 65536 64 256 18.931 1,690.35\n 0 65536 64 256 18.873 1,695.54\n 0 64768 64 256 18.545 1,725.53\n 0 131172 64 256 18.548 1,725.25\n 0 131172 64 65536 19.046 1,680.14\n 0 131172 64 524288 18.125 1,765.52\n 0 131172 64 1048576 18.701 1,711.14\n 5 2560 64 256 39.933 801.34\n 5 16777216 64 256 37.76 847.46\n 5 524288 64 256 53.497 598.16\n 5 65536 32 256 38.472 831.77\n 5 65536 32 256 38.004 842.02\n 5 65536 32 256 37.884 844.68\n 5 2560 16 256 41.39 773.13\n 5 65536 16 256 48.902 654.37\n 10 65536 64 256 83.256 384.36\n 1+0 65536 64 256 19.394 1,649.99\n 1+0 65536 64 256 19.047 1,680.05\n 1+0 65536 64 256 19.195 1,667.10\n 1+0 65536 64 256 18.806 1,701.58\n 1+0 65536 64 256 18.848 1,697.79\n 1+0 65536 64 256 18.371 1,741.88\n 1+0 65536 64 256 21.446 1,492.12\n 1+0 65536 64 256 20.254 1,579.93 \n\n\n\n",
"msg_date": "Thu, 14 Sep 2006 20:51:42 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Josh,\n> \n> On 9/14/06 8:47 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n> \n>>> I've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n>> With what? :)\n> \n> Sun X4500 (aka Thumper) running stock RedHat 4.3 (actually CentOS 4.3) with\n> XFS and the linux md driver without lvm. Here is a summary of the results:\n> \n\n\nGood god!\n\n> \n> Read Test \n> RAID Level Max Readahead (KB) RAID Chunksize Max Readahead on Disks (KB)\n> Max Time (s) Read Bandwidth (MB/s)\n> 0 65536 64 256 16.689 1,917.43\n> 0 4096 64 256 21.269 1,504.54\n> 0 65536 256 256 17.967 1,781.04\n> 0 2816 256 256 18.835 1,698.96\n> 0 65536 1024 256 18.538 1,726.18\n> 0 65536 64 512 18.295 1,749.11\n> 0 65536 64 256 18.931 1,690.35\n> 0 65536 64 256 18.873 1,695.54\n> 0 64768 64 256 18.545 1,725.53\n> 0 131172 64 256 18.548 1,725.25\n> 0 131172 64 65536 19.046 1,680.14\n> 0 131172 64 524288 18.125 1,765.52\n> 0 131172 64 1048576 18.701 1,711.14\n> 5 2560 64 256 39.933 801.34\n> 5 16777216 64 256 37.76 847.46\n> 5 524288 64 256 53.497 598.16\n> 5 65536 32 256 38.472 831.77\n> 5 65536 32 256 38.004 842.02\n> 5 65536 32 256 37.884 844.68\n> 5 2560 16 256 41.39 773.13\n> 5 65536 16 256 48.902 654.37\n> 10 65536 64 256 83.256 384.36\n> 1+0 65536 64 256 19.394 1,649.99\n> 1+0 65536 64 256 19.047 1,680.05\n> 1+0 65536 64 256 19.195 1,667.10\n> 1+0 65536 64 256 18.806 1,701.58\n> 1+0 65536 64 256 18.848 1,697.79\n> 1+0 65536 64 256 18.371 1,741.88\n> 1+0 65536 64 256 21.446 1,492.12\n> 1+0 65536 64 256 20.254 1,579.93 \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 14 Sep 2006 21:01:24 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
}
] |
[
{
"msg_contents": "Hi,\n\nI have two table: customers and salesorders. salesorders have a foreign\nkey to the customer\n\nIf I run this query:\n\nSELECT \nsalesOrders.objectid, \nsalesOrders.ordernumber, \nsalesOrders.orderdate, \ncustomers.objectid, \ncustomers.customernumber, \ncustomers.lastname \nFROM prototype.salesorders \nINNER JOIN prototype.customers ON ( \ncustomers.objectid = salesorders.customer \n) \nwhere \nlastname ilike 'Boonk' \norder by ordernumber asc LIMIT 1\n\n\nWITHOUT \"LIMIT 1\" this query plan is executed (EXPLAIN ANALYZE):\n\n\nSort (cost=41811.90..41812.78 rows=353 width=103) (actual time=623.855..623.867 rows=7 loops=1)\n Sort Key: salesorders.ordernumber\n -> Nested Loop (cost=2.15..41796.96 rows=353 width=103) (actual time=0.166..623.793 rows=7 loops=1)\n -> Seq Scan on customers (cost=0.00..21429.44 rows=118 width=55) (actual time=0.037..623.325 rows=5 loops=1)\n Filter: (lastname ~~* 'Boonk'::text)\n -> Bitmap Heap Scan on salesorders (cost=2.15..172.06 rows=44 width=88) (actual time=0.075..0.079 rows=1 loops=5)\n Recheck Cond: (\"outer\".objectid = salesorders.customer)\n -> Bitmap Index Scan on orders_customer (cost=0.00..2.15 rows=44 width=0) (actual time=0.066..0.066 rows=1 loops=5)\n Index Cond: (\"outer\".objectid = salesorders.customer)\nTotal runtime: 624.051 ms\n\n\n\nWith the limit this query plan is used (EXPLAIN ANALYZE):\n\nLimit (cost=0.00..18963.24 rows=1 width=103) (actual time=18404.730..18404.732 rows=1 loops=1)\n -> Nested Loop (cost=0.00..6694025.41 rows=353 width=103) (actual time=18404.723..18404.723 rows=1 loops=1)\n -> Index Scan using prototype_orders_ordernumber on salesorders (cost=0.00..37263.14 rows=1104381 width=88) (actual time=26.715..1862.408 rows=607645 loops=1)\n -> Index Scan using pk_prototype_customers on customers (cost=0.00..6.02 rows=1 width=55) (actual time=0.023..0.023 rows=0 loops=607645)\n Index Cond: (customers.objectid = \"outer\".customer)\n Filter: (lastname ~~* 'Boonk'::text)\nTotal runtime: 18404.883 ms\n\n\nBoth tables are freshly fully vacuumed analyzed.\n\nWhy the difference and can I influence the result so that the first\nquery plan (which is the fastest) is actually used in both cases (I\nwould expect that the limit would be done after the sort?)? \n\nTIA\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n\n\n",
"msg_date": "Fri, 15 Sep 2006 10:39:27 +0200",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why the difference in plans ??"
},
{
"msg_contents": "\"Joost Kraaijeveld\" <[email protected]> writes:\n> Why the difference and can I influence the result so that the first\n> query plan (which is the fastest) is actually used in both cases (I\n> would expect that the limit would be done after the sort?)? \n\nIt likes the second plan because 6694025.41/353 < 41812.78. It would\nprobably be right, too, if the number of matching rows were indeed 353,\nbut it seems there are only 7. Try increasing your statistics target\nand re-analyzing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Sep 2006 10:08:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the difference in plans ?? "
},
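A sketch of what that looks like in practice, using the column from the query in this thread; the database name and the target of 100 are assumptions (the default target is 10):

   psql -d mydb -c "ALTER TABLE prototype.customers ALTER COLUMN lastname SET STATISTICS 100;"
   psql -d mydb -c "ANALYZE prototype.customers;"
   # then re-run EXPLAIN ANALYZE and see whether the estimated row count moves off 353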
{
"msg_contents": "On Fri, 2006-09-15 at 10:08 -0400, Tom Lane wrote:\n> but it seems there are only 7. Try increasing your statistics target\n> and re-analyzing.\n\nDo you mean with \"increasing my statistics target\" changing the value of\n\"default_statistics_target = 10\" to a bigger number? If so, changing it\nto 900 did not make any difference (PostgreSQL restarted, vacuumed\nanalysed etc).\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n",
"msg_date": "Fri, 15 Sep 2006 18:05:09 +0200",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the difference in plans ??"
},
{
"msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> Do you mean with \"increasing my statistics target\" changing the value of\n> \"default_statistics_target = 10\" to a bigger number? If so, changing it\n> to 900 did not make any difference (PostgreSQL restarted, vacuumed\n> analysed etc).\n\nHm, did the \"353\" rowcount estimate not change at all?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Sep 2006 12:17:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the difference in plans ?? "
}
] |
[
{
"msg_contents": "That's an all PCI-X box which makes sense. There are 6 SATA controllers\nin that little beastie also. You can always count on Sun to provide\nover engineered boxes.\n\n \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Joshua D. Drake\n> Sent: Friday, September 15, 2006 12:01 AM\n> To: Luke Lonergan\n> Cc: Craig A. James; [email protected]\n> Subject: Re: [PERFORM] RAID 0 not as fast as expected\n> \n> Luke Lonergan wrote:\n> > Josh,\n> > \n> > On 9/14/06 8:47 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n> > \n> >>> I've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n> >> With what? :)\n> > \n> > Sun X4500 (aka Thumper) running stock RedHat 4.3 (actually \n> CentOS 4.3) \n> > with XFS and the linux md driver without lvm. Here is a \n> summary of the results:\n> > \n> \n> \n> Good god!\n> \n> > \n> > Read Test \n> > RAID Level Max Readahead (KB) RAID Chunksize Max Readahead \n> on Disks \n> > (KB) Max Time (s) Read Bandwidth (MB/s) 0 65536 64 256 16.689 \n> > 1,917.43 0 4096 64 256 21.269 1,504.54 0 65536 256 256 17.967 \n> > 1,781.04 0 2816 256 256 18.835 1,698.96 0 65536 1024 256 18.538 \n> > 1,726.18 0 65536 64 512 18.295 1,749.11 0 65536 64 256 18.931 \n> > 1,690.35 0 65536 64 256 18.873 1,695.54 0 64768 64 256 18.545 \n> > 1,725.53 0 131172 64 256 18.548 1,725.25 0 131172 64 \n> 65536 19.046 \n> > 1,680.14 0 131172 64 524288 18.125 1,765.52 0 131172 64 1048576 \n> > 18.701 1,711.14\n> > 5 2560 64 256 39.933 801.34\n> > 5 16777216 64 256 37.76 847.46\n> > 5 524288 64 256 53.497 598.16\n> > 5 65536 32 256 38.472 831.77\n> > 5 65536 32 256 38.004 842.02\n> > 5 65536 32 256 37.884 844.68\n> > 5 2560 16 256 41.39 773.13\n> > 5 65536 16 256 48.902 654.37\n> > 10 65536 64 256 83.256 384.36\n> > 1+0 65536 64 256 19.394 1,649.99\n> > 1+0 65536 64 256 19.047 1,680.05\n> > 1+0 65536 64 256 19.195 1,667.10\n> > 1+0 65536 64 256 18.806 1,701.58\n> > 1+0 65536 64 256 18.848 1,697.79\n> > 1+0 65536 64 256 18.371 1,741.88\n> > 1+0 65536 64 256 21.446 1,492.12\n> > 1+0 65536 64 256 20.254 1,579.93\n> > \n> > \n> \n> \n> -- \n> \n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Fri, 15 Sep 2006 08:43:26 -0400",
"msg_from": "\"Spiegelberg, Greg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Greg, Josh,\n\nSomething I found out while doing this - lvm (and lvm2) slows the block\nstream down dramatically. At first I was using it for convenience sake to\nimplement partitions on top of the md devices, but I found I was stuck at\nabout 700 MB/s. Removing lvm2 from the picture allowed me to get within\nchucking distance of 2GB/s.\n\nWhen we first started working with Solaris ZFS, we were getting about\n400-600 MB/s, and after working with the Solaris Engineering team we now get\nrates approaching 2GB/s. The updates needed to Solaris are part of the\nSolaris 10 U3 available in October (and already in Solaris Express, aka\nSolaris 11).\n\n- Luke \n\n\nOn 9/15/06 5:43 AM, \"Spiegelberg, Greg\" <[email protected]> wrote:\n\n> That's an all PCI-X box which makes sense. There are 6 SATA controllers\n> in that little beastie also. You can always count on Sun to provide\n> over engineered boxes.\n> \n> \n> \n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of\n>> Joshua D. Drake\n>> Sent: Friday, September 15, 2006 12:01 AM\n>> To: Luke Lonergan\n>> Cc: Craig A. James; [email protected]\n>> Subject: Re: [PERFORM] RAID 0 not as fast as expected\n>> \n>> Luke Lonergan wrote:\n>>> Josh,\n>>> \n>>> On 9/14/06 8:47 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n>>> \n>>>>> I've obtained 1,950 MB/s using Linux software RAID on SATA drives.\n>>>> With what? :)\n>>> \n>>> Sun X4500 (aka Thumper) running stock RedHat 4.3 (actually\n>> CentOS 4.3) \n>>> with XFS and the linux md driver without lvm. Here is a\n>> summary of the results:\n>>> \n>> \n>> \n>> Good god!\n>> \n>>> \n>>> Read Test \n>>> RAID Level Max Readahead (KB) RAID Chunksize Max Readahead\n>> on Disks \n>>> (KB) Max Time (s) Read Bandwidth (MB/s) 0 65536 64 256 16.689\n>>> 1,917.43 0 4096 64 256 21.269 1,504.54 0 65536 256 256 17.967\n>>> 1,781.04 0 2816 256 256 18.835 1,698.96 0 65536 1024 256 18.538\n>>> 1,726.18 0 65536 64 512 18.295 1,749.11 0 65536 64 256 18.931\n>>> 1,690.35 0 65536 64 256 18.873 1,695.54 0 64768 64 256 18.545\n>>> 1,725.53 0 131172 64 256 18.548 1,725.25 0 131172 64\n>> 65536 19.046 \n>>> 1,680.14 0 131172 64 524288 18.125 1,765.52 0 131172 64 1048576\n>>> 18.701 1,711.14\n>>> 5 2560 64 256 39.933 801.34\n>>> 5 16777216 64 256 37.76 847.46\n>>> 5 524288 64 256 53.497 598.16\n>>> 5 65536 32 256 38.472 831.77\n>>> 5 65536 32 256 38.004 842.02\n>>> 5 65536 32 256 37.884 844.68\n>>> 5 2560 16 256 41.39 773.13\n>>> 5 65536 16 256 48.902 654.37\n>>> 10 65536 64 256 83.256 384.36\n>>> 1+0 65536 64 256 19.394 1,649.99\n>>> 1+0 65536 64 256 19.047 1,680.05\n>>> 1+0 65536 64 256 19.195 1,667.10\n>>> 1+0 65536 64 256 18.806 1,701.58\n>>> 1+0 65536 64 256 18.848 1,697.79\n>>> 1+0 65536 64 256 18.371 1,741.88\n>>> 1+0 65536 64 256 21.446 1,492.12\n>>> 1+0 65536 64 256 20.254 1,579.93\n>>> \n>>> \n>> \n>> \n>> -- \n>> \n>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n>> Providing the most comprehensive PostgreSQL solutions since 1997\n>> http://www.commandprompt.com/\n>> \n>> \n>> \n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>> \n> \n\n\n",
"msg_date": "Fri, 15 Sep 2006 08:42:10 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": ">When we first started working with Solaris ZFS, we were getting about\n>400-600 MB/s, and after working with the Solaris Engineering team we\nnow >get\n>rates approaching 2GB/s. The updates needed to Solaris are part of the\n>Solaris 10 U3 available in October (and already in Solaris Express, aka\n>Solaris 11).\n\nLuke,\n\nWhat other file systems have you had good success with? Solaris would be\nnice, but it looks like I'm stuck running on FreeBSD (6.1, amd64) so\nUFS2 would be the default. Not sure about XFS on BSD, and I'm not sure\nat the moment that ext2/3 provide enough benefit over UFS to spend much\ntime on. \n\nAlso, has anyone had any experience with gmirror (good or bad)? I'm\nthinking of trying to use it to stripe two hardware mirrored sets since\nHW RAID10 wasn't doing as well as I had hoped (Dell Perc5/I controller).\nFor a 4 disk RAID 10 (10k rpm SAS/SCSI disks) what would be a good\ntarget performance number? Right now, dd shows 224 MB/s. \n\nAnd lastly, for a more OLAP style database, would I be correct in\nassuming that sequential access speed would be more important than is\nnormally the case? (I have a relatively small number of connections, but\neach running on pretty large data sets). \n\nThanks,\n\nBucky\n",
"msg_date": "Fri, 15 Sep 2006 14:28:02 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Bucky,\n\nOn 9/15/06 11:28 AM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> What other file systems have you had good success with? Solaris would be\n> nice, but it looks like I'm stuck running on FreeBSD (6.1, amd64) so\n> UFS2 would be the default. Not sure about XFS on BSD, and I'm not sure\n> at the moment that ext2/3 provide enough benefit over UFS to spend much\n> time on. \n\nIt won't matter much between UFS2 or others until you get past about 350\nMB/s.\n \n> Also, has anyone had any experience with gmirror (good or bad)? I'm\n> thinking of trying to use it to stripe two hardware mirrored sets since\n> HW RAID10 wasn't doing as well as I had hoped (Dell Perc5/I controller).\n> For a 4 disk RAID 10 (10k rpm SAS/SCSI disks) what would be a good\n> target performance number? Right now, dd shows 224 MB/s.\n\nEach disk should sustain somewhere between 60-80 MB/s (see\nhttp://www.storagereview.com/ for a profile of your disk).\n\nYour dd test sounds suspiciously too fast unless you were running two\nsimultaneous dd processes. Did you read from a file that was at least twice\nthe size of RAM?\n\nA single dd stream would run between 120 and 160 MB/s on a RAID10, two\nstreams would be between 240 and 320 MB/s.\n \n> And lastly, for a more OLAP style database, would I be correct in\n> assuming that sequential access speed would be more important than is\n> normally the case? (I have a relatively small number of connections, but\n> each running on pretty large data sets).\n\nYes. What's pretty large? We've had to redefine large recently, now we're\ntalking about systems with between 100TB and 1,000TB.\n\n- Luke\n\n\n",
"msg_date": "Sat, 16 Sep 2006 16:46:04 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "On Sat, Sep 16, 2006 at 04:46:04PM -0700, Luke Lonergan wrote:\n> Yes. What's pretty large? We've had to redefine large recently, now we're\n> talking about systems with between 100TB and 1,000TB.\n\nDo you actually have PostgreSQL databases in that size range?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 17 Sep 2006 02:08:43 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Sat, Sep 16, 2006 at 04:46:04PM -0700, Luke Lonergan wrote:\n>> Yes. What's pretty large? We've had to redefine large recently, now we're\n>> talking about systems with between 100TB and 1,000TB.\n> \n> Do you actually have PostgreSQL databases in that size range?\n\nNo, they have databases in MPP that are that large :)\n\nJoshua D. Drake\n\n\n> \n> /* Steinar */\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Sat, 16 Sep 2006 17:44:48 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID 0 not as fast as expected"
},
{
"msg_contents": ">Yes. What's pretty large? We've had to redefine large recently, now\nwe're\n>talking about systems with between 100TB and 1,000TB.\n>\n>- Luke\n\nWell, I said large, not gargantuan :) - Largest would probably be around\na few TB, but the problem I'm having to deal with at the moment is large\nnumbers (potentially > 1 billion) of small records (hopefully I can get\nit down to a few int4's and a int2 or so) in a single table. Currently\nwe're testing for and targeting in the 500M records range, but the\ndesign needs to scale to 2-3 times that at least. \n \nI read one of your presentations on very large databases in PG, and saw\nmention of some tables over a billion rows, so that was encouraging. The\nnew table partitioning in 8.x will be very useful. What's the largest DB\nyou've seen to date on PG (in terms of total disk storage, and records\nin largest table(s) )? \n\nMy question is at what point do I have to get fancy with those big\ntables? From your presentation, it looks like PG can handle 1.2 billion\nrecords or so as long as you write intelligent queries. (And normal PG\nshould be able to handle that, correct?)\n\nAlso, does anyone know if/when any of the MPP stuff will be ported to\nPostgres, or is the plan to keep that separate?\n\nThanks,\n\nBucky\n",
"msg_date": "Mon, 18 Sep 2006 10:37:58 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "On 9/18/06, Bucky Jordan <[email protected]> wrote:\n> My question is at what point do I have to get fancy with those big\n> tables? From your presentation, it looks like PG can handle 1.2 billion\n> records or so as long as you write intelligent queries. (And normal PG\n> should be able to handle that, correct?)\n\nI would rephrase that: large databses are less forgiving of\nunintelligent queries, particularly of the form of your average stupid\ndatabase abstracting middleware :-). seek times on a 1gb database are\ngoing to be zero all the time, not so on a 1tb+ database.\n\ngood normalization skills are really important for large databases,\nalong with materialization strategies for 'denormalized sets'.\n\nregarding the number of rows, there is no limit to how much pg can\nhandle per se, just some practical limitations, especially vacuum and\nreindex times. these are important because they are required to keep\na handle on mvcc bloat and its very nice to be able to vaccum bits of\nyour database at a time.\n\njust another fyi, if you have a really big database, you can forget\nabout doing pg_dump for backups (unless you really don't care about\nbeing x day or days behind)...you simply have to due some type of\nreplication/failover strategy. i would start with pitr.\n\nmerlin\n",
"msg_date": "Mon, 18 Sep 2006 16:56:22 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
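For anyone following along, the PITR route Merlin mentions boils down to continuous WAL archiving plus a filesystem-level base backup. A minimal 8.1-era sketch, with the paths and backup label assumed for illustration:

   # postgresql.conf: ship completed WAL segments somewhere safe
   #   archive_command = 'cp %p /backup/wal/%f'

   psql -c "SELECT pg_start_backup('weekly');"
   tar czf /backup/base/pgdata.tgz /usr/local/pgsql/data
   psql -c "SELECT pg_stop_backup();"

Recovery then replays the archived WAL on top of the base backup, so the window of data loss is bounded by the last archived segment rather than the last pg_dump.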
{
"msg_contents": "On Monday 18 September 2006 13:56, \"Merlin Moncure\" <[email protected]> \nwrote:\n> just another fyi, if you have a really big database, you can forget\n> about doing pg_dump for backups (unless you really don't care about\n> being x day or days behind)...you simply have to due some type of\n> replication/failover strategy. i would start with pitr.\n\nAnd, of course, the biggest problem of all; upgrades.\n\n-- \nEat right. Exercise regularly. Die anyway.\n\n",
"msg_date": "Mon, 18 Sep 2006 14:01:03 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "> good normalization skills are really important for large databases,\n> along with materialization strategies for 'denormalized sets'.\n\nGood points- thanks. I'm especially curious what others have done for\nthe materialization. The matview project on gborg appears dead, and I've\nonly found a smattering of references on google. My guess is, you roll\nyour own for optimal performance... \n\n> regarding the number of rows, there is no limit to how much pg can\n> handle per se, just some practical limitations, especially vacuum and\n> reindex times. these are important because they are required to keep\n> a handle on mvcc bloat and its very nice to be able to vaccum bits of\n> your database at a time.\n\nI was hoping for some actual numbers on \"practical\". Hardware isn't too\nmuch of an issue (within reason- we're not talking an amazon or google\nhere... the SunFire X4500 looks interesting... )- if a customer wants to\nstore that much data, and pay for it, we'll figure out how to do it. I'd\njust rather not have to re-design the database. Say the requirement is\nto keep 12 months of data accessible, each \"scan\" produces 100M records,\nand I run one per month. What happens if the customer wants to run it\nonce a week? I was more trying to figure out at what point (ballpark)\nI'm going to have to look into archive tables and things of that nature\n(or at Bizgres/MPP). It's easier for us to add more/bigger hardware, but\nnot so easy to redesign/add history tables...\n\n> \n> just another fyi, if you have a really big database, you can forget\n> about doing pg_dump for backups (unless you really don't care about\n> being x day or days behind)...you simply have to due some type of\n> replication/failover strategy. i would start with pitr.\n> \n> merlin\nI was originally thinking replication, but I did notice some nice pitr\nfeatures in 8.x - I'll have to look into that some more.\n\nThanks for the pointers though... \n\n- Bucky\n",
"msg_date": "Mon, 18 Sep 2006 18:40:51 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Do the basic math:\n\nIf you have a table with 100million records, each of which is 200bytes long,\nthat gives you roughtly 20 gig of data (assuming it was all written neatly\nand hasn't been updated much). If you have to do a full table scan, then\nit will take roughly 400 seconds with a single 10k RPM SCSI drive with an\naverage read speed of 50MB/sec. If you are going to read indexes, figure\nout how big your index is, and how many blocks will be returned, and figure\nout how many blocks this will require transferring from the main table, make\nan estimate of the seeks, add in the transfer total, and you have a time to\nget your data. A big array with a good controller can pass 1000MB/sec\ntransfer on the right bus if you buy the write technologies. But be warned,\nif you buy the wrong ones, your big array can end up being slower than a\nsingle drive for sequential transfer. At 1000MB/sec your scan would take 20\nseconds.\n\nBe warned, the tech specs page:\nhttp://www.sun.com/servers/x64/x4500/specs.xml#anchor3\ndoesn't mention RAID 10 as a possible, and this is probably what most would\nrecommend for fast data access if you are doing both read and write\noperations. If you are doing mostly Read, then RAID 5 is passable, but it's\nredundancy with large numbers of drives is not so great.\n\nAlex.\n\nOn 9/18/06, Bucky Jordan <[email protected]> wrote:\n>\n> > good normalization skills are really important for large databases,\n> > along with materialization strategies for 'denormalized sets'.\n>\n> Good points- thanks. I'm especially curious what others have done for\n> the materialization. The matview project on gborg appears dead, and I've\n> only found a smattering of references on google. My guess is, you roll\n> your own for optimal performance...\n>\n> > regarding the number of rows, there is no limit to how much pg can\n> > handle per se, just some practical limitations, especially vacuum and\n> > reindex times. these are important because they are required to keep\n> > a handle on mvcc bloat and its very nice to be able to vaccum bits of\n> > your database at a time.\n>\n> I was hoping for some actual numbers on \"practical\". Hardware isn't too\n> much of an issue (within reason- we're not talking an amazon or google\n> here... the SunFire X4500 looks interesting... )- if a customer wants to\n> store that much data, and pay for it, we'll figure out how to do it. I'd\n> just rather not have to re-design the database. Say the requirement is\n> to keep 12 months of data accessible, each \"scan\" produces 100M records,\n> and I run one per month. What happens if the customer wants to run it\n> once a week? I was more trying to figure out at what point (ballpark)\n> I'm going to have to look into archive tables and things of that nature\n> (or at Bizgres/MPP). It's easier for us to add more/bigger hardware, but\n> not so easy to redesign/add history tables...\n>\n> >\n> > just another fyi, if you have a really big database, you can forget\n> > about doing pg_dump for backups (unless you really don't care about\n> > being x day or days behind)...you simply have to due some type of\n> > replication/failover strategy. 
i would start with pitr.\n> >\n> > merlin\n> I was originally thinking replication, but I did notice some nice pitr\n> features in 8.x - I'll have to look into that some more.\n>\n> Thanks for the pointers though...\n>\n> - Bucky\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nDo the basic math:If you have a table with 100million records, each of which is 200bytes long, that gives you roughtly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you have to do a full table scan, then it will take roughly 400 seconds with a single 10k RPM SCSI drive with an average read speed of 50MB/sec. If you are going to read indexes, figure out how big your index is, and how many blocks will be returned, and figure out how many blocks this will require transferring from the main table, make an estimate of the seeks, add in the transfer total, and you have a time to get your data. A big array with a good controller can pass 1000MB/sec transfer on the right bus if you buy the write technologies. But be warned, if you buy the wrong ones, your big array can end up being slower than a single drive for sequential transfer. At 1000MB/sec your scan would take 20 seconds.\nBe warned, the tech specs page:http://www.sun.com/servers/x64/x4500/specs.xml#anchor3doesn't mention RAID 10 as a possible, and this is probably what most would recommend for fast data access if you are doing both read and write operations. If you are doing mostly Read, then RAID 5 is passable, but it's redundancy with large numbers of drives is not so great.\nAlex.On 9/18/06, Bucky Jordan <[email protected]> wrote:\n> good normalization skills are really important for large databases,> along with materialization strategies for 'denormalized sets'.Good points- thanks. I'm especially curious what others have done for\nthe materialization. The matview project on gborg appears dead, and I'veonly found a smattering of references on google. My guess is, you rollyour own for optimal performance...> regarding the number of rows, there is no limit to how much pg can\n> handle per se, just some practical limitations, especially vacuum and> reindex times. these are important because they are required to keep> a handle on mvcc bloat and its very nice to be able to vaccum bits of\n> your database at a time.I was hoping for some actual numbers on \"practical\". Hardware isn't toomuch of an issue (within reason- we're not talking an amazon or googlehere... the SunFire X4500 looks interesting... )- if a customer wants to\nstore that much data, and pay for it, we'll figure out how to do it. I'djust rather not have to re-design the database. Say the requirement isto keep 12 months of data accessible, each \"scan\" produces 100M records,\nand I run one per month. What happens if the customer wants to run itonce a week? I was more trying to figure out at what point (ballpark)I'm going to have to look into archive tables and things of that nature\n(or at Bizgres/MPP). It's easier for us to add more/bigger hardware, butnot so easy to redesign/add history tables...>> just another fyi, if you have a really big database, you can forget> about doing pg_dump for backups (unless you really don't care about\n> being x day or days behind)...you simply have to due some type of> replication/failover strategy. 
i would start with pitr.>> merlinI was originally thinking replication, but I did notice some nice pitr\nfeatures in 8.x - I'll have to look into that some more.Thanks for the pointers though...- Bucky---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly",
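Alex's back-of-envelope numbers can also be pulled from the catalog once the data is loaded. A minimal sketch, assuming a hypothetical table named bigtable, the default 8kB block size, and Alex's 50MB/sec single-drive figure (relpages is only current after a recent VACUUM or ANALYZE):

-- Approximate on-disk size and single-drive sequential scan time for one table.
SELECT relname,
       relpages::numeric * 8192 / (1024 * 1024)                AS approx_size_mb,
       round(relpages::numeric * 8192 / (50 * 1024 * 1024), 1) AS est_scan_seconds
  FROM pg_class
 WHERE relname = 'bigtable';   -- hypothetical table name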
"msg_date": "Mon, 18 Sep 2006 19:14:56 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:\n>If you have a table with 100million records, each of which is 200bytes long,\n>that gives you roughtly 20 gig of data (assuming it was all written neatly\n>and hasn't been updated much). \n\nIf you're in that range it doesn't even count as big or challenging--you \ncan keep it memory resident for not all that much money. \n\nMike Stone\n",
"msg_date": "Mon, 18 Sep 2006 19:58:13 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Bucky,\n\nOn 9/18/06 7:37 AM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> My question is at what point do I have to get fancy with those big\n> tables? From your presentation, it looks like PG can handle 1.2 billion\n> records or so as long as you write intelligent queries. (And normal PG\n> should be able to handle that, correct?)\n\nPG has limitations that will confront you at sizes beyond about a couple\nhundred GB of table size, as will Oracle and others.\n\nYou should be careful to implement very good disk hardware and leverage\nPostgres 8.1 partitioning and indexes intelligently as you go beyond 100GB\nper instance. Also be sure to set the random_page_cost parameter in\npostgresql.conf to 100 or even higher when you use indexes, as the actual\nseek rate for random access ranges between 50 and 300 for modern disk\nhardware. If this parameter is left at the default of 4, indexes will often\nbe used inappropriately.\n \n> Also, does anyone know if/when any of the MPP stuff will be ported to\n> Postgres, or is the plan to keep that separate?\n\nThe plan is to keep that separate for now, though we're contributing\ntechnology like partitioning, faster sorting, bitmap index, adaptive nested\nloop, and hybrid hash aggregation to make big databases work better in\nPostgres. \n\n- Luke\n\n\n",
"msg_date": "Mon, 18 Sep 2006 18:10:13 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Alex,\n\nOn 9/18/06 4:14 PM, \"Alex Turner\" <[email protected]> wrote:\n\n> Be warned, the tech specs page:\n> http://www.sun.com/servers/x64/x4500/specs.xml#anchor3\n> doesn't mention RAID 10 as a possible, and this is probably what most would\n> recommend for fast data access if you are doing both read and write\n> operations. If you are doing mostly Read, then RAID 5 is passable, but it's\n> redundancy with large numbers of drives is not so great.\n\nRAID10 works great on the X4500 we get 1.6GB/s + per X4500 using RAID10 in\nZFS. We worked with the Sun Solaris kernel team to make that happen and the\npatches are part of Solaris 10 Update 3 due out in November.\n\n- Luke\n\n\n",
"msg_date": "Mon, 18 Sep 2006 18:14:39 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Sweet - thats good - RAID 10 support seems like an odd thing to leave out.\n\nAlex\n\nOn 9/18/06, Luke Lonergan <[email protected]> wrote:\n>\n> Alex,\n>\n> On 9/18/06 4:14 PM, \"Alex Turner\" <[email protected]> wrote:\n>\n> > Be warned, the tech specs page:\n> > http://www.sun.com/servers/x64/x4500/specs.xml#anchor3\n> > doesn't mention RAID 10 as a possible, and this is probably what most\n> would\n> > recommend for fast data access if you are doing both read and write\n> > operations. If you are doing mostly Read, then RAID 5 is passable, but\n> it's\n> > redundancy with large numbers of drives is not so great.\n>\n> RAID10 works great on the X4500 we get 1.6GB/s + per X4500 using RAID10\n> in\n> ZFS. We worked with the Sun Solaris kernel team to make that happen and\n> the\n> patches are part of Solaris 10 Update 3 due out in November.\n>\n> - Luke\n>\n>\n>\n\nSweet - thats good - RAID 10 support seems like an odd thing to leave out.AlexOn 9/18/06, Luke Lonergan <\[email protected]> wrote:Alex,On 9/18/06 4:14 PM, \"Alex Turner\" <\[email protected]> wrote:> Be warned, the tech specs page:> http://www.sun.com/servers/x64/x4500/specs.xml#anchor3\n> doesn't mention RAID 10 as a possible, and this is probably what most would> recommend for fast data access if you are doing both read and write> operations. If you are doing mostly Read, then RAID 5 is passable, but it's\n> redundancy with large numbers of drives is not so great.RAID10 works great on the X4500 we get 1.6GB/s + per X4500 using RAID10 inZFS. We worked with the Sun Solaris kernel team to make that happen and the\npatches are part of Solaris 10 Update 3 due out in November.- Luke",
"msg_date": "Mon, 18 Sep 2006 23:40:16 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Yep, Solaris ZFS kicks butt. It does RAID10/5/6, etc and implements most of\nthe high end features available on high end SANs...\n\n- Luke\n\n\nOn 9/18/06 8:40 PM, \"Alex Turner\" <[email protected]> wrote:\n\n> Sweet - thats good - RAID 10 support seems like an odd thing to leave out.\n> \n> Alex\n> \n> On 9/18/06, Luke Lonergan < [email protected]\n> <mailto:[email protected]> > wrote:\n>> Alex,\n>> \n>> On 9/18/06 4:14 PM, \"Alex Turner\" < [email protected]> wrote:\n>> \n>>> Be warned, the tech specs page:\n>>> http://www.sun.com/servers/x64/x4500/specs.xml#anchor3\n>>> <http://www.sun.com/servers/x64/x4500/specs.xml#anchor3>\n>>> doesn't mention RAID 10 as a possible, and this is probably what most would\n>>> recommend for fast data access if you are doing both read and write\n>>> operations. If you are doing mostly Read, then RAID 5 is passable, but it's\n>>> redundancy with large numbers of drives is not so great.\n>> \n>> RAID10 works great on the X4500 we get 1.6GB/s + per X4500 using RAID10 in\n>> ZFS. We worked with the Sun Solaris kernel team to make that happen and the\n>> patches are part of Solaris 10 Update 3 due out in November.\n>> \n>> - Luke\n>> \n>> \n> \n> \n\n\n\n",
"msg_date": "Mon, 18 Sep 2006 20:42:39 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "On Mon, Sep 18, 2006 at 06:10:13PM -0700, Luke Lonergan wrote:\n> Also be sure to set the random_page_cost parameter in\n> postgresql.conf to 100 or even higher when you use indexes, as the actual\n> seek rate for random access ranges between 50 and 300 for modern disk\n> hardware. If this parameter is left at the default of 4, indexes will often\n> be used inappropriately.\n\nDoes a tool exist yet to time this for a particular configuration?\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Mon, 18 Sep 2006 23:45:47 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Mark,\n\nOn 9/18/06 8:45 PM, \"[email protected]\" <[email protected]> wrote:\n\n> Does a tool exist yet to time this for a particular configuration?\n\nWe're considering building this into ANALYZE on a per-table basis. The\nbasic approach times sequential access in page rate, then random seeks as\npage rate and takes the ratio of same.\n\nSince PG's heap scan is single threaded, the seek rate is equivalent to a\nsingle disk (even though RAID arrays may have many spindles), the typical\nrandom seek rates are around 100-200 seeks per second from within the\nbackend. That means that as sequential scan performance increases, such as\nhappens when using large RAID arrays, the random_page_cost will range from\n50 to 300 linearly as the size of the RAID array increases.\n\n- Luke\n\n\n",
"msg_date": "Mon, 18 Sep 2006 21:01:45 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Mike,\n\n> On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:\n> >If you have a table with 100million records, each of which is\n200bytes\n> long,\n> >that gives you roughtly 20 gig of data (assuming it was all written\n> neatly\n> >and hasn't been updated much).\n> \nI'll keep that in mind (minimizing updates during loads). My plan is\nupdates will actually be implemented as insert to summary/history table\nthen delete old records. The OLTP part of this will be limited to a\nparticular set of tables that I anticipate will not be nearly as large.\n\n> If you're in that range it doesn't even count as big or\nchallenging--you\n> can keep it memory resident for not all that much money.\n> \n> Mike Stone\n> \nI'm aware of that, however, *each* scan could be 100m records, and we\nneed to keep a minimum of 12, and possibly 50 or more. So sure, if I\nonly have 100m records total, sure, but 500m, or 1b... According to\nAlex's calculations, that'd be 100G for 500m records (just that one\ntable, not including indexes). \n\n From what Luke was saying, there's some issues once you get over a\ncouple hundred GB in a single table, so in the case of 12 scans, it\nlooks like I can squeeze it in given sufficient hardware, but more than\nthat and I'll have to look at history tables or some other solution. I'd\nalso think doing some sort of summary table/materialized view for\ncount/sum operations would be a necessity at this point.\n\nI'm not sure that this is a good topic for the list, but in the interest\nof sharing info I'll ask, and if someone feels it warrants a private\nresponse, we can discuss off list. Would Bizgres be able to handle\ntables > 200GB or so, or is it still quite similar to Postgres (single\nthreaded/process issues per query type things..)? What about Bizgres\nMPP? And also, does switching from Postgres to Bizgres or Bizgres MPP\nrequire any application changes?\n\nThanks for all the help,\n\nBucky\n",
"msg_date": "Tue, 19 Sep 2006 09:58:20 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as expected)"
},
{
"msg_contents": "Hi, Luke,\n\nLuke Lonergan wrote:\n\n> Since PG's heap scan is single threaded, the seek rate is equivalent to a\n> single disk (even though RAID arrays may have many spindles), the typical\n> random seek rates are around 100-200 seeks per second from within the\n> backend. That means that as sequential scan performance increases, such as\n> happens when using large RAID arrays, the random_page_cost will range from\n> 50 to 300 linearly as the size of the RAID array increases.\n\nDo you think that adding some posix_fadvise() calls to the backend to\npre-fetch some blocks into the OS cache asynchroneously could improve\nthat situation?\n\nI could imagine that e. G. index bitmap scans could profit in the heap\nfetching stage by fadvise()ing the next few blocks.\n\nMaybe asynchroneous I/O could be used for the same benefit, but\nposix_fadvise is less() intrusive, and can easily be #define'd out on\nplatforms that don't support it.\n\nCombine this with the Linux Kernel I/O Scheduler patches (readahead\nimprovements) that were discussed here in summer...\n\nRegards,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 20 Sep 2006 10:09:04 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Markus,\n\nOn 9/20/06 1:09 AM, \"Markus Schaber\" <[email protected]> wrote:\n\n> Do you think that adding some posix_fadvise() calls to the backend to\n> pre-fetch some blocks into the OS cache asynchroneously could improve\n> that situation?\n\nNope - this requires true multi-threading of the I/O, there need to be\nmultiple seek operations running simultaneously. The current executor\nblocks on each page request, waiting for the I/O to happen before requesting\nthe next page. The OS can't predict what random page is to be requested\nnext.\n\nWe can implement multiple scanners (already present in MPP), or we could\nimplement AIO and fire off a number of simultaneous I/O requests for\nfulfillment.\n\n- Luke \n\n\n",
"msg_date": "Wed, 20 Sep 2006 10:03:58 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Hi, Luke,\n\nLuke Lonergan wrote:\n\n>> Do you think that adding some posix_fadvise() calls to the backend to\n>> pre-fetch some blocks into the OS cache asynchroneously could improve\n>> that situation?\n> \n> Nope - this requires true multi-threading of the I/O, there need to be\n> multiple seek operations running simultaneously. The current executor\n> blocks on each page request, waiting for the I/O to happen before requesting\n> the next page. The OS can't predict what random page is to be requested\n> next.\n\nI thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\nmeant for this purpose?\n\nMy idea was that the executor could posix_fadvise() the blocks it will\nneed in the near future, and later, when it actually issues the blocking\nread, the block is there already. This could even give speedups in the\nsingle-spindle case, as the I/O scheduler could already fetch the next\nblocks while the executor processes the current one.\n\nBut there must be some details in the executor that prevent this.\n\n> We can implement multiple scanners (already present in MPP), or we could\n> implement AIO and fire off a number of simultaneous I/O requests for\n> fulfillment.\n\nAIO is much more intrusive to implement, so I'd preferrably look\nwhether posix_fadvise() could improve the situation.\n\nThanks,\nMarkus\n",
"msg_date": "Wed, 20 Sep 2006 20:02:06 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "IMHO, AIO is the architecturally cleaner and more elegant solution.\n\nWe in fact have a project on the boards to do this but funding (as \nyet) has not been found.\n\nMy $.02,\nRon\n\n\nAt 02:02 PM 9/20/2006, Markus Schaber wrote:\n>Hi, Luke,\n>\n>Luke Lonergan wrote:\n>\n> >> Do you think that adding some posix_fadvise() calls to the backend to\n> >> pre-fetch some blocks into the OS cache asynchroneously could improve\n> >> that situation?\n> >\n> > Nope - this requires true multi-threading of the I/O, there need to be\n> > multiple seek operations running simultaneously. The current executor\n> > blocks on each page request, waiting for the I/O to happen before \n> requesting\n> > the next page. The OS can't predict what random page is to be requested\n> > next.\n>\n>I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n>meant for this purpose?\n>\n>My idea was that the executor could posix_fadvise() the blocks it will\n>need in the near future, and later, when it actually issues the blocking\n>read, the block is there already. This could even give speedups in the\n>single-spindle case, as the I/O scheduler could already fetch the next\n>blocks while the executor processes the current one.\n>\n>But there must be some details in the executor that prevent this.\n>\n> > We can implement multiple scanners (already present in MPP), or we could\n> > implement AIO and fire off a number of simultaneous I/O requests for\n> > fulfillment.\n>\n>AIO is much more intrusive to implement, so I'd preferrably look\n>whether posix_fadvise() could improve the situation.\n>\n>Thanks,\n>Markus\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n",
"msg_date": "Wed, 20 Sep 2006 15:35:11 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Markus,\n\nOn 9/20/06 11:02 AM, \"Markus Schaber\" <[email protected]> wrote:\n\n> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n> meant for this purpose?\n\nThis is a good idea - I wasn't aware that this was possible.\n\nWe'll do some testing and see if it works as advertised on Linux and\nSolaris.\n\n- Luke\n\n\n",
"msg_date": "Wed, 20 Sep 2006 17:23:31 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Hi, Luke,\n\nLuke Lonergan wrote:\n\n>> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n>> meant for this purpose?\n> \n> This is a good idea - I wasn't aware that this was possible.\n\nThis possibility was the reason for me to propose it. :-)\n\n> We'll do some testing and see if it works as advertised on Linux and\n> Solaris.\n\nFine, I'm looking forward to the results.\n\nAccording to my small test, it works at least on linux 2.6.17.4.\n\nBtw, posix_fadvise() could even give a small improvement for\nmulti-threaded backends, given that the I/O subsystem is smart enough to\ncope intelligently to cope with large bunches of outstanding requests.\n\nHTH,\nMarkus\n",
"msg_date": "Thu, 21 Sep 2006 09:31:12 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "> > Do you think that adding some posix_fadvise() calls to the backend\nto\n> > pre-fetch some blocks into the OS cache asynchroneously could\nimprove\n> > that situation?\n> \n> Nope - this requires true multi-threading of the I/O, there need to be\n> multiple seek operations running simultaneously. The current executor\n> blocks on each page request, waiting for the I/O to happen before\n> requesting\n> the next page. The OS can't predict what random page is to be\nrequested\n> next.\n> \n> We can implement multiple scanners (already present in MPP), or we\ncould\n> implement AIO and fire off a number of simultaneous I/O requests for\n> fulfillment.\n\nSo this might be a dumb question, but the above statements apply to the\ncluster (e.g. postmaster) as a whole, not per postgres\nprocess/transaction correct? So each transaction is blocked waiting for\nthe main postmaster to retrieve the data in the order it was requested\n(i.e. not multiple scanners/aio)?\n\nIn this case, the only way to take full advantage of larger hardware\nusing normal postgres would be to run multiple instances? (Which might\nnot be a bad idea since it would set your application up to be able to\ndeal with databases distributed on multiple servers...)\n\n- Bucky\n\n\n",
"msg_date": "Thu, 21 Sep 2006 15:13:55 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "> So this might be a dumb question, but the above statements apply to the\n> cluster (e.g. postmaster) as a whole, not per postgres\n> process/transaction correct? So each transaction is blocked waiting for\n> the main postmaster to retrieve the data in the order it was requested\n> (i.e. not multiple scanners/aio)?\n\nEach connection runs its own separate back-end process, so these\nstatements apply per PG connection (=process).\n",
"msg_date": "Thu, 21 Sep 2006 12:41:54 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Hi, Bucky,\n\nBucky Jordan wrote:\n\n>> We can implement multiple scanners (already present in MPP), or we\n> could\n>> implement AIO and fire off a number of simultaneous I/O requests for\n>> fulfillment.\n> \n> So this might be a dumb question, but the above statements apply to the\n> cluster (e.g. postmaster) as a whole, not per postgres\n> process/transaction correct? So each transaction is blocked waiting for\n> the main postmaster to retrieve the data in the order it was requested\n> (i.e. not multiple scanners/aio)?\n\nNo, that's a wrong assumption.\n\nIt applies per active backend. When connecting, the Postmaster forks a\nnew backend process. Each backend process has its own scanner and\nexecutor. The main postmaster is only for coordination (forking, config\nreload etc.), all the work is done in the forked per-connection backends.\n\nFurthermore, the PostgreSQL MVCC system ensures that readers are neither\never blocked nor blocking other backends. Writers can block each other\ndue to the ACID transaction semantics, however the MVCC limits that to a\nminimum.\n\n> In this case, the only way to take full advantage of larger hardware\n> using normal postgres would be to run multiple instances? (Which might\n> not be a bad idea since it would set your application up to be able to\n> deal with databases distributed on multiple servers...)\n\nTypical OLTP applications (Web UIs, Booking systems, etc.) have multiple\nconnections, and those run fully parallel.\n\nSo if your application is of this type, it will take full advantage of\nlarger hardware. In the list archive, you should find some links to\nbenchmarks that prove this statement, PostgreSQL scales linearly, up to\n8 CPUs and 32 \"hyperthreads\" in this benchmarks.\n\nOur discussion is about some different type of application, where you\nhave a single application issuing a single query at a time dealing with\na large amount (several gigs up to teras) of data.\n\nNow, when such a query is generating sequential disk access, the I/O\nscheduler of the underlying OS can easily recognize that pattern, and\nprefetch the data, thus giving the full speed benefit of the underlying\nRAID.\n\nThe discussed problem arises when such large queries generate random\n(non-continous) disk access (e. G. index scans). Here, the underlying\nRAID cannot effectively prefetch data as it does not know what the\napplication will need next. This effectively limits the speed to that of\na single disk, regardless of the details of the underlying RAID, as it\ncan only process a request at a time, and has to wait for the\napplication for the next one.\n\nNow, Bizgres MPP goes the way of having multiple threads per backend,\neach one processing a fraction of the data. So there are always several\noutstanding read requests that can be scheduled to the disks.\n\nMy proposal was to use posix_fadvise() in the single-threaded scanner,\nso it can tell the OS \"I will need those blocks in the near future\". So\nthe OS can pre-fetch those blocks into the cache, while PostgreSQL still\nprocesses the previous block of data.\n\nAnother proposal would be to use so-called asynchroneous I/O. This is\ndefinitely an interesting and promising idea, but needs much more\nchanges to the code, compared to posix_fadvise().\n\n\nI hope that this lengthy mail is enlightening, if not, don't hesitate to\nask.\n\nThanks for your patience,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. 
| Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
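A quick way to see the one-backend-per-connection model described above is to look at pg_stat_activity: each row is one backend process. This is only a sketch; on 8.1, current_query is populated only when stats_command_string is enabled:

SELECT procpid, usename, current_query, backend_start
  FROM pg_stat_activity;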
"msg_date": "Thu, 21 Sep 2006 21:54:06 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Markus,\n\nFirst, thanks- your email was very enlightining. But, it does bring up a\nfew additional questions, so thanks for your patience also- I've listed\nthem below.\n\n> It applies per active backend. When connecting, the Postmaster forks a\n> new backend process. Each backend process has its own scanner and\n> executor. The main postmaster is only for coordination (forking,\nconfig\n> reload etc.), all the work is done in the forked per-connection\nbackends.\n\nEach postgres process also uses shared memory (aka the buffer cache) so\nas to not fetch data that another process has already requested,\ncorrect?\n\n> Our discussion is about some different type of application, where you\n> have a single application issuing a single query at a time dealing\nwith\n> a large amount (several gigs up to teras) of data.\nCommonly these are referred to as OLAP applications, correct? Which is\nwhere I believe my application is more focused (it may be handling some\ntransactions in the future, but at the moment, it follows the \"load lots\nof data, then analyze it\" pattern). \n \n> The discussed problem arises when such large queries generate random\n> (non-continous) disk access (e. G. index scans). Here, the underlying\n> RAID cannot effectively prefetch data as it does not know what the\n> application will need next. This effectively limits the speed to that\nof\n> a single disk, regardless of the details of the underlying RAID, as it\n> can only process a request at a time, and has to wait for the\n> application for the next one.\nDoes this have anything to do with postgres indexes not storing data, as\nsome previous posts to this list have mentioned? (In otherwords, having\nthe index in memory doesn't help? Or are we talking about indexes that\nare too large to fit in RAM?)\n\nSo this issue would be only on a per query basis? Could it be alleviated\nsomewhat if I ran multiple smaller queries? For example, I want to\ncalculate a summary table on 500m records- fire off 5 queries that count\n100m records each and update the summary table, leaving MVCC to handle\nupdate contention?\n\nActually, now that I think about it- that would only work if the\nsections I mentioned above were on different disks right? So I would\nactually have to do table partitioning with tablespaces on different\nspindles to get that to be beneficial? (which is basically not feasible\nwith RAID, since I don't get to pick what disks the data goes on...)\n\nAre there any other workarounds for current postgres?\n\nThanks again,\n\nBucky\n",
"msg_date": "Thu, 21 Sep 2006 17:16:35 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Hi, Bucky,\n\nBucky Jordan wrote:\n\n> Each postgres process also uses shared memory (aka the buffer cache) so\n> as to not fetch data that another process has already requested,\n> correct?\n\nYes.\n\nAdditinally, the OS caches disk blocks. Most unixoid ones like Linux use\n(nearly) all unused memory for this block cache, I don't know about Windows.\n\n> Commonly these are referred to as OLAP applications, correct? Which is\n> where I believe my application is more focused (it may be handling some\n> transactions in the future, but at the moment, it follows the \"load lots\n> of data, then analyze it\" pattern). \n\nYes, most OLAP apps fall into this category. But I also think that most\nOLAP apps mainly generate sequential data access (sequential scans), for\nwhich the OS prefetching of data works fine.\n\nBtw, some weeks ago, there was a patch mentioned here that improves the\nlinux kernel I/O scheduler wr/t those prefetching capabilities.\n\n> Does this have anything to do with postgres indexes not storing data, as\n> some previous posts to this list have mentioned? (In otherwords, having\n> the index in memory doesn't help? Or are we talking about indexes that\n> are too large to fit in RAM?)\n\nYes, it has, but only for the cases where your query fetches only\ncolumns in that index. In case where you fetch other columns, PostgreSQL\nhas to access the Heap nevertheless to fetch those.\n\nThe overhead for checking outdated row versions (those that were updated\nor deleted, but not yet vacuumed) is zero, as those \"load bulk, then\nanalyze\" applications typically don't create invalid rows, so every row\nfetched from the heap is valid. This is very different in OLTP applications.\n\n> So this issue would be only on a per query basis? Could it be alleviated\n> somewhat if I ran multiple smaller queries? For example, I want to\n> calculate a summary table on 500m records- fire off 5 queries that count\n> 100m records each and update the summary table, leaving MVCC to handle\n> update contention?\n\nYes, you could do that, but only if you're CPU bound, and have a\nmulti-core machine. And you need table partitioning, as LIMIT/OFFSET is\nexpensive. Btw, the Bizgres people do exactly this under their hood, so\nit may be worth a look.\n\nIf you're I/O bound, and your query is a full table scan, or something\nelse that results in (nearly) sequential disk access, the OS prefetch\nalgorithm will work.\n\nYou can use some I/O monitoring tools to compare the actual speed the\ndata comes in when PostgreSQL does the sequential scan, and compare it\nto DD'ing the database table files. For simple aggregates like sum(),\nyou usually get near the \"raw\" speed, and the real bottlenecks are the\ndisk I/O rate, bad RAID implementations or PCI bus contention.\n\n> Actually, now that I think about it- that would only work if the\n> sections I mentioned above were on different disks right? So I would\n> actually have to do table partitioning with tablespaces on different\n> spindles to get that to be beneficial? (which is basically not feasible\n> with RAID, since I don't get to pick what disks the data goes on...)\n\nIf you really need that much throughput, you can always put the\ndifferent partitions on different RAIDs. But hardware gets very\nexpensive in those dimensions, and it may be better to partition the\ndata on different machines altogether. 
AFAIK, Bizgres MPP does exactly that.\n\n> Are there any other workarounds for current postgres?\n\nAre your questions of theoretical nature, or do you have a concrete\nproblem? In latter case, you could post your details here, and we'll see\nwhether we can help.\n\nBtw, I'm not related with Bizgres in any way. :-)\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Thu, 21 Sep 2006 23:42:56 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Bucky,\n\nOn 9/21/06 2:16 PM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> Does this have anything to do with postgres indexes not storing data, as\n> some previous posts to this list have mentioned? (In otherwords, having\n> the index in memory doesn't help? Or are we talking about indexes that\n> are too large to fit in RAM?)\n\nYes, if the index could be scanned without needing to scan the heap to\nsatisfy a query, that query would benefit from sequential access. This is\ntrue whether the index fits in RAM or not.\n \n> So this issue would be only on a per query basis? Could it be alleviated\n> somewhat if I ran multiple smaller queries? For example, I want to\n> calculate a summary table on 500m records- fire off 5 queries that count\n> 100m records each and update the summary table, leaving MVCC to handle\n> update contention?\n\nClever, functional and very painful way to do it, but yes, you would get 5\ndisks worth of seeking.\n\nMy goal is to provide for as many disks seeking at the same time as are\navailable to the RAID. Note that the Sun Appliance (X4500 based) has 11\ndisk drives available per CPU core. Later it will drop to 5-6 disks per\ncore with the introduction of quad core CPUs, which is more the norm for\nnow. Bizgres MPP will achieve one or two concurrent heap scanner per CPU\nfor a given query in the default configurations, so we're missing out on\nlots of potential speedup for index scans in many cases.\n\nWith both MPP and stock Postgres you get more seek rate as you add users,\nbut it would take 44 users to use all of the drives in random seeking for\nPostgres, where for MPP it would take more like 5.\n\n> Actually, now that I think about it- that would only work if the\n> sections I mentioned above were on different disks right? So I would\n> actually have to do table partitioning with tablespaces on different\n> spindles to get that to be beneficial? (which is basically not feasible\n> with RAID, since I don't get to pick what disks the data goes on...)\n\nOn average, for random seeking we can assume that RAID will distribute the\ndata evenly. The I/Os should balance out.\n\n- Luke\n\n\n",
"msg_date": "Thu, 21 Sep 2006 16:31:49 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "> >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n> >> meant for this purpose?\n> > \n> > This is a good idea - I wasn't aware that this was possible.\n> \n> This possibility was the reason for me to propose it. :-)\n\nposix_fadvise() features in the TODO list already; I'm not sure if any work\non it has been done for pg8.2.\n\nAnyway, I understand that POSIX_FADV_DONTNEED on a linux 2.6 kernel allows\npages to be discarded from memory earlier than usual. This is useful, since\nit means you can prevent your seqscan from nuking the OS cache.\n\nOf course you could argue the OS should be able to detect this, and prevent\nit occuring anyway. I don't know anything about linux's behaviour in this\narea.\n\n.Guy\n",
"msg_date": "Fri, 22 Sep 2006 14:52:09 +1200",
"msg_from": "Guy Thornley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Guy Thornley wrote:\n> > >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n> > >> meant for this purpose?\n> > > \n> > > This is a good idea - I wasn't aware that this was possible.\n> > \n> > This possibility was the reason for me to propose it. :-)\n> \n> posix_fadvise() features in the TODO list already; I'm not sure if any work\n> on it has been done for pg8.2.\n> \n> Anyway, I understand that POSIX_FADV_DONTNEED on a linux 2.6 kernel allows\n> pages to be discarded from memory earlier than usual. This is useful, since\n> it means you can prevent your seqscan from nuking the OS cache.\n> \n> Of course you could argue the OS should be able to detect this, and prevent\n> it occuring anyway. I don't know anything about linux's behaviour in this\n> area.\n\nWe tried posix_fadvise() during the 8.2 development cycle, but had\nproblems as outlined in a comment in xlog.c:\n\n /*\n * posix_fadvise is problematic on many platforms: on older x86 Linux\n * it just dumps core, and there are reports of problems on PPC platforms\n * as well. The following is therefore disabled for the time being.\n * We could consider some kind of configure test to see if it's safe to\n * use, but since we lack hard evidence that there's any useful performance\n * gain to be had, spending time on that seems unprofitable for now.\n */\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 21 Sep 2006 23:05:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "On Fri, Sep 22, 2006 at 02:52:09PM +1200, Guy Thornley wrote:\n> > >> I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly\n> > >> meant for this purpose?\n> > > This is a good idea - I wasn't aware that this was possible.\n> > This possibility was the reason for me to propose it. :-)\n> posix_fadvise() features in the TODO list already; I'm not sure if any work\n> on it has been done for pg8.2.\n> \n> Anyway, I understand that POSIX_FADV_DONTNEED on a linux 2.6 kernel allows\n> pages to be discarded from memory earlier than usual. This is useful, since\n> it means you can prevent your seqscan from nuking the OS cache.\n> \n> Of course you could argue the OS should be able to detect this, and prevent\n> it occuring anyway. I don't know anything about linux's behaviour in this\n> area.\n\nI recall either monitoring or participating in the discussion when this\ncall was added to Linux.\n\nI don't believe the kernel can auto-detect that you do not need a page\nany longer. It can only prioritize pages to keep when memory is fully\nin use and a new page must be loaded. This is often some sort of LRU\nscheme. If the page is truly useless, only the application can know.\n\nI'm not convinced that PostgreSQL can know this. The case where it is\nuseful is if a single process is sequentially scanning a large file\n(much larger than memory). As soon as it is more than one process,\nor if it is not a sequential scan, or if it is not a large file, this\ncall hurts more than it gains. Just because I'm done with the page does\nnot mean that *you* are done with the page.\n\nI'd advise against using this call unless it can be shown that the page\nwill not be used in the future, or at least, that the page is less useful\nthan all other pages currently in memory. This is what the call really means.\nIt means, \"There is no value to keeping this page in memory\".\n\nPerhaps certain PostgreSQL loads fit this pattern. None of my uses fit\nthis pattern, and I have trouble believing that a majority of PostgreSQL\nloads fits this pattern.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 21 Sep 2006 23:40:37 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Mark,\n\nOn 9/21/06 8:40 PM, \"[email protected]\" <[email protected]> wrote:\n\n> I'd advise against using this call unless it can be shown that the page\n> will not be used in the future, or at least, that the page is less useful\n> than all other pages currently in memory. This is what the call really means.\n> It means, \"There is no value to keeping this page in memory\".\n\nYes, it's a bit subtle.\n\nI think the topic is similar to \"cache bypass\", used in cache capable vector\nprocessors (Cray, Convex, Multiflow, etc) in the 90's. When you are\nscanning through something larger than the cache, it should be marked\n\"non-cacheable\" and bypass caching altogether. This avoids a copy, and\nkeeps the cache available for things that can benefit from it.\n\nWRT the PG buffer cache, the rule would have to be: \"if the heap scan is\ngoing to be larger than \"effective_cache_size\", then issue the\nposix_fadvise(BLOCK_NOT_NEEDED) call\". It doesn't sound very efficient to\ndo this in block/extent increments though, and it would possibly mess with\nsubsets of the block space that would be re-used for other queries.\n\n- Luke\n\n\n",
"msg_date": "Thu, 21 Sep 2006 20:46:41 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Hi, Guy,\n\nGuy Thornley wrote:\n\n> Of course you could argue the OS should be able to detect this, and prevent\n> it occuring anyway. I don't know anything about linux's behaviour in this\n> area.\n\nYes, one can argue that way.\n\nBut a generic Algorithm in the OS can never be as smart as the\napplication which has more informations about semantics and algorithms.\nEverything else would need a crystal ball device :-)\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Fri, 22 Sep 2006 11:13:01 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "On Thu, Sep 21, 2006 at 08:46:41PM -0700, Luke Lonergan wrote:\n> Mark,\n> \n> On 9/21/06 8:40 PM, \"[email protected]\" <[email protected]> wrote:\n> \n> > I'd advise against using this call unless it can be shown that the page\n> > will not be used in the future, or at least, that the page is less useful\n> > than all other pages currently in memory. This is what the call really means.\n> > It means, \"There is no value to keeping this page in memory\".\n> \n> Yes, it's a bit subtle.\n> \n> I think the topic is similar to \"cache bypass\", used in cache capable vector\n> processors (Cray, Convex, Multiflow, etc) in the 90's. When you are\n> scanning through something larger than the cache, it should be marked\n> \"non-cacheable\" and bypass caching altogether. This avoids a copy, and\n> keeps the cache available for things that can benefit from it.\n> \n> WRT the PG buffer cache, the rule would have to be: \"if the heap scan is\n> going to be larger than \"effective_cache_size\", then issue the\n> posix_fadvise(BLOCK_NOT_NEEDED) call\". It doesn't sound very efficient to\n> do this in block/extent increments though, and it would possibly mess with\n> subsets of the block space that would be re-used for other queries.\n\nAnother issue is that if you start two large seqscans on the same table\nat about the same time, right now you should only be issuing one set of\nreads for both requests, because one of them will just pull the blocks\nback out of cache. If we weren't caching then each query would have to\nphysically read (which would be horrid).\n\nThere's been talk of adding code that would have a seqscan detect if\nanother seqscan is happening on the table at the same time, and if it\nis, to start it's seqscan wherever the other seqscan is currently\nrunning. That would probably ensure that we weren't reading from the\ntable in 2 different places, even if we weren't caching.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 22 Sep 2006 09:01:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "On Thu, Sep 21, 2006 at 11:05:39PM -0400, Bruce Momjian wrote:\n> We tried posix_fadvise() during the 8.2 development cycle, but had\n> problems as outlined in a comment in xlog.c:\n> \n> /*\n> * posix_fadvise is problematic on many platforms: on older x86 Linux\n> * it just dumps core, and there are reports of problems on PPC platforms\n> * as well. The following is therefore disabled for the time being.\n> * We could consider some kind of configure test to see if it's safe to\n> * use, but since we lack hard evidence that there's any useful performance\n> * gain to be had, spending time on that seems unprofitable for now.\n> */\n\nIn case it's not clear, that's a call for someone to do some performance\ntesting. :)\n\nBruce, you happen to have a URL for a patch to put fadvise in?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 22 Sep 2006 09:02:29 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Luke Lonergan wrote:\n> \n> I think the topic is similar to \"cache bypass\", used in cache capable vector\n> processors (Cray, Convex, Multiflow, etc) in the 90's. When you are\n> scanning through something larger than the cache, it should be marked\n> \"non-cacheable\" and bypass caching altogether. This avoids a copy, and\n> keeps the cache available for things that can benefit from it.\n\nAnd 'course some file systems do this automatically when they\ndetect a sequential scan[1] though it can have unexpected (to some)\nnegative side effects[2]. For file systems that support freebehind\nas a configurable parameter, it might be easier to experiment with\nthe idea there.\n\n[1] http://www.ediaudit.com/doc_sol10/Solaris_10_Doc/common/SUNWaadm/reloc/sun_docs/C/solaris_10/SUNWaadm/SOLTUNEPARAMREF/p18.html\n[2] http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6207772\n\n\n",
"msg_date": "Sun, 24 Sep 2006 06:29:50 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
},
{
"msg_contents": "Jim,\n\nOn 9/22/06 7:01 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> There's been talk of adding code that would have a seqscan detect if\n> another seqscan is happening on the table at the same time, and if it\n> is, to start it's seqscan wherever the other seqscan is currently\n> running. That would probably ensure that we weren't reading from the\n> table in 2 different places, even if we weren't caching.\n\nRight, aka \"SyncScan\"\n\nThe optimization you point out that we miss when bypassing cache is a pretty\nunlikely event in real world apps, though it makes poorly designed\nbenchmarks go really fast. It's much more likely that the second seqscan\nwill start after the block cache is exhausted, which will cause actuator\nthrashing (depending on the readahead that the OS uses). SyncScan fixes\nthat.\n\n- Luke\n\n\n",
"msg_date": "Mon, 25 Sep 2006 07:23:58 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large tables (was: RAID 0 not as fast as"
}
] |
[
{
"msg_contents": "Hi,\n\n Is there anyway we can optimize this sql ? it is doing full table\nscan on listing and address table . Postgres version 8.0.2\n\nThanks!\nPallav.\n\n\nexplain analyze\nselect listing0_.listingid as col_0_0_, \ngetmaxdate(listing0_.lastupdate, max(addressval2_.createdate)) as col_1_0_\nfrom listing.listing listing0_\nleft outer join listing.address listingadd1_\non listing0_.fkbestaddressid=listingadd1_.addressid\nleft outer join listing.addressvaluation addressval2_\non listingadd1_.addressid=addressval2_.fkaddressid\nwhere listing0_.lastupdate>'2006-09-15 08:31:26.927'\nand listing0_.lastupdate<=current_timestamp\nor addressval2_.createdate>'2006-09-15 08:31:26.927' and\naddressval2_.createdate<=current_timestamp\ngroup by listing0_.listingid , listing0_.lastupdate\norder by getmaxdate(listing0_.lastupdate, max(addressval2_.createdate))\nasc limit 10;\n\n\nLimit (cost=2399501.49..2399501.51 rows=10 width=20) (actual time=414298.076..414298.174 rows=10 loops=1)\n -> Sort (cost=2399501.49..2410707.32 rows=4482333 width=20) (actual time=414298.068..414298.098 rows=10 loops=1)\n Sort Key: getmaxdate(listing0_.lastupdate, max(addressval2_.createdate))\n -> GroupAggregate (cost=1784490.47..1851725.47 rows=4482333 width=20) (actual time=414212.926..414284.927 rows=2559 loops=1)\n -> Sort (cost=1784490.47..1795696.31 rows=4482333 width=20) (actual time=414174.678..414183.536 rows=2563 loops=1)\n Sort Key: listing0_.listingid, listing0_.lastupdate\n -> Merge Right Join (cost=1113947.32..1236714.45 rows=4482333 width=20) (actual time=273257.256..414163.920 rows=2563 loops=1)\n Merge Cond: (\"outer\".fkaddressid = \"inner\".addressid)\n Filter: (((\"inner\".lastupdate > '2006-09-15 08:31:26.927'::timestamp without time zone) AND (\"inner\".lastupdate <= ('now'::text)::timestamp(6) with time zone)) OR ((\"outer\".createdate > '2006-09-15 08:31:26.927'::timestamp without time zone) AND (\"outer\".createdate <= ('now'::text)::timestamp(6) with time zone)))\n -> Index Scan using idx_addressvaluation_fkaddressid on addressvaluation addressval2_ (cost=0.00..79769.55 rows=947056 width=12) (actual time=0.120..108240.633 rows=960834 loops=1)\n -> Sort (cost=1113947.32..1125153.15 rows=4482333 width=16) (actual time=256884.646..275823.217 rows=5669719 loops=1)\n Sort Key: listingadd1_.addressid\n -> Hash Left Join (cost=228115.38..570557.39 rows=4482333 width=16) (actual time=93874.356..205054.946 rows=4490963 loops=1)\n Hash Cond: (\"outer\".fkbestaddressid = \"inner\".addressid)\n -> Seq Scan on listing listing0_ (cost=0.00..112111.33 rows=4482333 width=16) (actual time=0.026..25398.685 rows=4490963 loops=1)\n -> Hash (cost=183333.70..183333.70 rows=6990270 width=4) (actual time=93873.659..93873.659 rows=0 loops=1)\n -> Seq Scan on address listingadd1_ (cost=0.00..183333.70 rows=6990270 width=4) (actual time=13.256..69441.056 rows=6990606 loops=1)\n\n",
"msg_date": "Fri, 15 Sep 2006 11:09:35 -0400",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize SQL"
},
{
"msg_contents": "Pallav Kalva <[email protected]> writes:\n> select listing0_.listingid as col_0_0_, \n> getmaxdate(listing0_.lastupdate, max(addressval2_.createdate)) as col_1_0_\n> from listing.listing listing0_\n> left outer join listing.address listingadd1_\n> on listing0_.fkbestaddressid=listingadd1_.addressid\n> left outer join listing.addressvaluation addressval2_\n> on listingadd1_.addressid=addressval2_.fkaddressid\n> where listing0_.lastupdate>'2006-09-15 08:31:26.927'\n> and listing0_.lastupdate<=current_timestamp\n> or addressval2_.createdate>'2006-09-15 08:31:26.927' and\n> addressval2_.createdate<=current_timestamp\n> group by listing0_.listingid , listing0_.lastupdate\n> order by getmaxdate(listing0_.lastupdate, max(addressval2_.createdate))\n> asc limit 10;\n\nIf that WHERE logic is actually what you need, then getting this query\nto run quickly seems pretty hopeless. The database must form the full\nouter join result: it cannot discard any listing0_ rows, even if they\nhave lastupdate outside the given range, because they might join to\naddressval2_ rows within the given createdate range. And conversely\nit can't discard any addressval2_ rows early. Is there any chance\nthat you wanted AND not OR there?\n\nOne thing that might help a bit is to change the join order:\n\nfrom listing.listing listing0_\nleft outer join listing.addressvaluation addressval2_\non listing0_.fkbestaddressid=addressval2_.fkaddressid\nleft outer join listing.address listingadd1_\non listing0_.fkbestaddressid=listingadd1_.addressid\n\nso that at least the WHERE clause can be applied before having joined to\nlistingadd1_. The semantics of your ON clauses are probably wrong anyway\n--- did you think twice about what happens if there's no matching\nlistingadd1_ entry?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Sep 2006 11:53:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SQL "
},
{
"msg_contents": "On 15-9-2006 17:53 Tom Lane wrote:\n> If that WHERE logic is actually what you need, then getting this query\n> to run quickly seems pretty hopeless. The database must form the full\n> outer join result: it cannot discard any listing0_ rows, even if they\n> have lastupdate outside the given range, because they might join to\n> addressval2_ rows within the given createdate range. And conversely\n> it can't discard any addressval2_ rows early. Is there any chance\n> that you wanted AND not OR there?\n\nCouldn't it also help to do something like this?\n\nSELECT ..., (SELECT MAX(createdate) FROM addressval ...)\nFROM listing l\n LEFT JOIN address ...\nWHERE l.id IN (SELECT id FROM listing WHERE lastupdate ...\n UNION\n SELECT id FROM listing JOIN addressval a ON ... WHERE \na.createdate ...)\n\n\nIts not pretty, but looking at the explain only a small amount of \nrecords match both clauses. So this should allow the use of indexes for \nboth the createdate-clause and the lastupdate-clause.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 15 Sep 2006 18:22:55 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SQL"
}
] |
[
{
"msg_contents": "Greetings:\n\nI'm running 8.1.4, and have noticed major differences in execution time \nfor plpgsql functions running queries that differ only in use of an \narray such as:\n\n\nslower_function( vals integer[] )\n\t[query] WHERE id = ANY vals;\n\n\nfaster_function( vals integer[] )\n\tvals_text := array_to_string( vals, ',' )\n\tEXECUTE '[query] WHERE id IN (' || vals_text || ')';\n\n\nIn general, there are about 10 integers in the lookup set on average and \n50 max.\n\nWhat are the advantages or disadvantages of using arrays in this \nsituation? The = ANY array method makes plpgsql development cleaner, \nbut seems to really lack performance in certain cases. What do you \nrecommend as the preferred method?\n\nThanks for your comments.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Fri, 15 Sep 2006 15:12:07 -0400",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of IN (...) vs. = ANY array[...]"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> What are the advantages or disadvantages of using arrays in this \n> situation? The = ANY array method makes plpgsql development cleaner, \n> but seems to really lack performance in certain cases.\n\nIn existing releases, the form with IN (list-of-scalar-constants)\ncan be optimized into indexscan(s), but = ANY (array) isn't.\n\n8.2 will treat them equivalently (in fact, it converts IN (...) to\n= ANY (ARRAY[...]) !). So depending on your time horizon, you might\nwish to stick with whichever is cleaner for your calling code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Sep 2006 16:19:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of IN (...) vs. = ANY array[...] "
}
] |
[
{
"msg_contents": "Hi listers,\nI wanted to try PG partitioning (aka constraint exclusion) with two levels .\nI am using PG 8.1.3 on RHEL4U2,\n\nMy setup:\n\nCREATE TABLE part (\n id1 int not null,\n id2 int not null,\n id3 int not null,\n filler varchar(200)\n );\n\n--- level 1 partitions on id1 column only\ncreate table part_id1_0_10 ( CHECK ( id1>= 0 and id1<=10) ) INHERITS (part);\ncreate table part_id1_11_20 ( CHECK ( id1>=11 and id1<=20) ) INHERITS (part);\n\n--- level2 partitions\n-- subpartitions for parent partition1\ncreate table part_id1_0_10__id2_0_10 ( CHECK ( id2>= 0 and id2<=10) ) INHERITS(part_id1_0_10);\ncreate table part_id1_0_10__id2_11_20 ( CHECK ( id2>= 11 and id2<=20) ) INHERITS(part_id1_0_10);\n\n-- subpartitions for parent partition2\ncreate table part_id1_11_20__id2_0_10 ( CHECK ( id2>= 0 and id2<=10) ) INHERITS(part_id1_11_20);\ncreate table part_id1_11_20__id2_11_20 ( CHECK ( id2>= 11 and id2<=20) ) INHERITS(part_id1_11_20);\n\nI have created indexes on all tables.\nMy Problem is that I don't see partiotion elimination feature (Parameer constraint_exclusion is ON):\n\npgpool=# EXPLAIN ANALYZE select * from part where id1 = 3 and id2 = 5;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n-----------------------------\n Result (cost=0.00..957.04 rows=5 width=130) (actual time=1.606..9.216 rows=483 loops=1)\n -> Append (cost=0.00..957.04 rows=5 width=130) (actual time=1.602..7.910 rows=483 loops=1)\n -> Seq Scan on part (cost=0.00..24.85 rows=1 width=130) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((id1 = 3) AND (id2 = 5))\n -> Bitmap Heap Scan on part_id1_0_10 part (cost=1.02..9.50 rows=1 width=130) (actual time=0.014..0.014 rows=0\nloops=1)\n Recheck Cond: (id1 = 3)\n Filter: (id2 = 5)\n -> Bitmap Index Scan on idx_part_id1_0_10 (cost=0.00..1.02 rows=5 width=0) (actual time=0.010..0.010\nrows=0 loops=1)\n Index Cond: (id1 = 3)\n -> Bitmap Heap Scan on part_id1_11_20 part (cost=2.89..436.30 rows=1 width=130) (actual time=0.025..0.025\nrows=0 loops=1)\n Recheck Cond: (id1 = 3)\n Filter: (id2 = 5)\n -> Bitmap Index Scan on idx_part_id1_11_20 (cost=0.00..2.89 rows=254 width=0) (actual time=0.021..0.021\nrows=0 loops=1)\n Index Cond: (id1 = 3)\n -> Bitmap Heap Scan on part_id1_0_10__id2_0_10 part (cost=2.52..255.56 rows=1 width=130) (actual\ntime=1.554..6.526 rows=483 loops=1)\n Recheck Cond: (id2 = 5)\n Filter: (id1 = 3)\n -> Bitmap Index Scan on idx_part_id1_0_10__id2_0_10 (cost=0.00..2.52 rows=148 width=0) (actual\ntime=1.410..1.410 rows=5242 loops=1)\n Index Cond: (id2 = 5)\n -> Bitmap Heap Scan on part_id1_0_10__id2_11_20 part (cost=2.47..230.82 rows=1 width=130) (actual\ntime=0.034..0.034 rows=0 loops=1)\n Recheck Cond: (id2 = 5)\n Filter: (id1 = 3)\n -> Bitmap Index Scan on idx_part_id1_0_10__id2_11_20 (cost=0.00..2.47 rows=134 width=0) (actual\ntime=0.030..0.030 rows=0 loops=1)\n Index Cond: (id2 = 5)\n Total runtime: 9.950 ms\n(25 rows)\n\nWhy PG is searching in part_id1_11_20 table, for example ? 
From the check contraint it is pretty\nClear that in this table there are not records with ids =3 ??\n\npgpool=# \\d+ part_id1_11_20\n Table \"public.part_id1_11_20\"\n Column | Type | Modifiers | Description\n--------+------------------------+-----------+-------------\n id1 | integer | not null |\n id2 | integer | not null |\n id3 | integer | not null |\n filler | character varying(200) | |\nIndexes:\n \"idx_part_id1_11_20\" btree (id1)\nCheck constraints:\n \"part_id1_11_20_id1_check\" CHECK (id1 >= 11 AND id1 <= 20)\nInherits: part\nHas OIDs: no\n\n\nBest Regards.\nMilen \n",
"msg_date": "Mon, 18 Sep 2006 00:12:33 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition elimination problem"
},
{
"msg_contents": "\"Milen Kulev\" <[email protected]> writes:\n> My Problem is that I don't see partiotion elimination feature (Parameer =\n> constraint_exclusion is ON):\n\nYour example works as expected for me. You *sure* you have\nconstraint_exclusion turned on?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Sep 2006 19:13:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition elimination problem "
},
{
"msg_contents": "Hi Tom,\nYou are right, of course :\n\n\npgpool=# set constraint_exclusion = on ;\nSET\npgpool=# explain analyze select * from part where id1=3 and id2=5 ;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n-----------------------------\n Result (cost=0.00..289.92 rows=3 width=130) (actual time=3.604..27.839 rows=483 loops=1)\n -> Append (cost=0.00..289.92 rows=3 width=130) (actual time=3.600..22.550 rows=483 loops=1)\n -> Seq Scan on part (cost=0.00..24.85 rows=1 width=130) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((id1 = 3) AND (id2 = 5))\n -> Bitmap Heap Scan on part_id1_0_10 part (cost=1.02..9.50 rows=1 width=130) (actual time=0.014..0.014 rows=0\nloops=1)\n Recheck Cond: (id1 = 3)\n Filter: (id2 = 5)\n -> Bitmap Index Scan on idx_part_id1_0_10 (cost=0.00..1.02 rows=5 width=0) (actual time=0.009..0.009\nrows=0 loops=1)\n Index Cond: (id1 = 3)\n -> Bitmap Heap Scan on part_id1_0_10__id2_0_10 part (cost=2.52..255.56 rows=1 width=130) (actual\ntime=3.578..20.377 rows=483 loops=1)\n Recheck Cond: (id2 = 5)\n Filter: (id1 = 3)\n -> Bitmap Index Scan on idx_part_id1_0_10__id2_0_10 (cost=0.00..2.52 rows=148 width=0) (actual\ntime=3.460..3.460 rows=5242 loops=1)\n Index Cond: (id2 = 5)\n Total runtime: 30.576 ms\n\n\nNow the execution plan looks good. \nAnd now I have another problem -> constraint_exclusion is on in the postgresql.conf file.\nBUT in my psql session I see something different ;(. Only after setting this parameter explicitely in the session, it\nworks.\nWhat I have done wrong ?\n \n \npgpool=# show constraint_exclusion ;\n constraint_exclusion\n----------------------\n off\n(1 row)\n \npgpool=# set constraint_exclusion = on ;\nSET\npgpool=# show constraint_exclusion ;\n constraint_exclusion\n----------------------\n on\n(1 row)\n\n \nBest Regards. \nMilen \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Monday, September 18, 2006 1:14 AM\nTo: Milen Kulev\nCc: [email protected]\nSubject: Re: [PERFORM] Partition elimination problem\n\n\n\"Milen Kulev\" <[email protected]> writes:\n> My Problem is that I don't see partiotion elimination feature\n> (Parameer = constraint_exclusion is ON):\n\nYour example works as expected for me. 
You *sure* you have constraint_exclusion turned on?\n\n regards, tom lane\n",
"msg_date": "Mon, 18 Sep 2006 08:07:51 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition elimination problem "
},
{
"msg_contents": ".... And sorry for the hassle.\nI was running the db cluster with .... Tthw wrong(old) postgresql.conf ;(\n \nBest Regrads.\nMilen \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, September 18, 2006 1:14 AM\nTo: Milen Kulev\nCc: [email protected]\nSubject: Re: [PERFORM] Partition elimination problem \n\n\n\"Milen Kulev\" <[email protected]> writes:\n> My Problem is that I don't see partiotion elimination feature \n> (Parameer = constraint_exclusion is ON):\n\nYour example works as expected for me. You *sure* you have constraint_exclusion turned on?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Sep 2006 08:09:55 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition elimination problem -> Solved"
}
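Since the root cause turned out to be an old postgresql.conf, a quick psql-level sanity check can confirm which file the server actually read and whether the live value changed after a reload; a sketch, assuming a release recent enough (the 8.0/8.1 era or later) to expose config_file and pg_reload_conf():

    -- which configuration file did the running postmaster load?
    SHOW config_file;

    -- what value is actually in effect for this session?
    SHOW constraint_exclusion;

    -- after editing postgresql.conf, ask the server to re-read it and check again
    SELECT pg_reload_conf();
    SHOW constraint_exclusion;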
] |
[
{
"msg_contents": "That query is generated by hibernate, right?\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Pallav\nKalva\nSent: den 15 september 2006 17:10\nTo: [email protected]\nSubject: [PERFORM] Optimize SQL\n\nHi,\n\n Is there anyway we can optimize this sql ? it is doing full table\nscan on listing and address table . Postgres version 8.0.2\n\nThanks!\nPallav.\n\n\nexplain analyze\nselect listing0_.listingid as col_0_0_, getmaxdate(listing0_.lastupdate,\nmax(addressval2_.createdate)) as col_1_0_ from listing.listing listing0_\nleft outer join listing.address listingadd1_ on\nlisting0_.fkbestaddressid=listingadd1_.addressid\nleft outer join listing.addressvaluation addressval2_ on\nlistingadd1_.addressid=addressval2_.fkaddressid\nwhere listing0_.lastupdate>'2006-09-15 08:31:26.927'\nand listing0_.lastupdate<=current_timestamp\nor addressval2_.createdate>'2006-09-15 08:31:26.927' and\naddressval2_.createdate<=current_timestamp\ngroup by listing0_.listingid , listing0_.lastupdate order by\ngetmaxdate(listing0_.lastupdate, max(addressval2_.createdate)) asc limit\n10;\n\n\nLimit (cost=2399501.49..2399501.51 rows=10 width=20) (actual\ntime=414298.076..414298.174 rows=10 loops=1)\n -> Sort (cost=2399501.49..2410707.32 rows=4482333 width=20) (actual\ntime=414298.068..414298.098 rows=10 loops=1)\n Sort Key: getmaxdate(listing0_.lastupdate,\nmax(addressval2_.createdate))\n -> GroupAggregate (cost=1784490.47..1851725.47 rows=4482333\nwidth=20) (actual time=414212.926..414284.927 rows=2559 loops=1)\n -> Sort (cost=1784490.47..1795696.31 rows=4482333\nwidth=20) (actual time=414174.678..414183.536 rows=2563 loops=1)\n Sort Key: listing0_.listingid, listing0_.lastupdate\n -> Merge Right Join (cost=1113947.32..1236714.45\nrows=4482333 width=20) (actual time=273257.256..414163.920 rows=2563\nloops=1)\n Merge Cond: (\"outer\".fkaddressid =\n\"inner\".addressid)\n Filter: (((\"inner\".lastupdate > '2006-09-15\n08:31:26.927'::timestamp without time zone) AND (\"inner\".lastupdate <=\n('now'::text)::timestamp(6) with time zone)) OR ((\"outer\".createdate >\n'2006-09-15 08:31:26.927'::timestamp without time zone) AND\n(\"outer\".createdate <= ('now'::text)::timestamp(6) with time zone)))\n -> Index Scan using\nidx_addressvaluation_fkaddressid on addressvaluation addressval2_\n(cost=0.00..79769.55 rows=947056 width=12) (actual\ntime=0.120..108240.633 rows=960834 loops=1)\n -> Sort (cost=1113947.32..1125153.15\nrows=4482333 width=16) (actual time=256884.646..275823.217 rows=5669719\nloops=1)\n Sort Key: listingadd1_.addressid\n -> Hash Left Join\n(cost=228115.38..570557.39 rows=4482333 width=16) (actual\ntime=93874.356..205054.946 rows=4490963 loops=1)\n Hash Cond:\n(\"outer\".fkbestaddressid = \"inner\".addressid)\n -> Seq Scan on listing listing0_\n(cost=0.00..112111.33 rows=4482333 width=16) (actual\ntime=0.026..25398.685 rows=4490963 loops=1)\n -> Hash\n(cost=183333.70..183333.70 rows=6990270 width=4) (actual\ntime=93873.659..93873.659 rows=0 loops=1)\n -> Seq Scan on address\nlistingadd1_ (cost=0.00..183333.70 rows=6990270 width=4) (actual\ntime=13.256..69441.056 rows=6990606 loops=1)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Mon, 18 Sep 2006 09:38:10 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize SQL"
}
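An untested idea, not from the thread: because the OR mixes a condition on listing.lastupdate with one on addressvaluation.createdate, no single index can satisfy it, which is one reason for the sequential scans above. Splitting the OR into a UNION sometimes lets each branch use its own index (assuming indexes on those two date columns exist or are added); the getmaxdate()/max() aggregation and the LIMIT would still have to be layered on top of this candidate list:

    CREATE INDEX idx_listing_lastupdate ON listing.listing (lastupdate);
    CREATE INDEX idx_addressvaluation_createdate ON listing.addressvaluation (createdate);

    -- candidate listings changed directly ...
    SELECT l.listingid
      FROM listing.listing l
     WHERE l.lastupdate > '2006-09-15 08:31:26.927'
       AND l.lastupdate <= current_timestamp
    UNION
    -- ... plus listings whose address valuation changed
    SELECT l.listingid
      FROM listing.listing l
      JOIN listing.address a            ON l.fkbestaddressid = a.addressid
      JOIN listing.addressvaluation av  ON a.addressid = av.fkaddressid
     WHERE av.createdate > '2006-09-15 08:31:26.927'
       AND av.createdate <= current_timestamp;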
] |
[
{
"msg_contents": "I'm having a problem with a simple query, that finds children of a node, \nusing a materialized path to the node. The query:\n\nselect n1.id\nfrom nodes n1, nodes n2\nwhere n1.path like n2.path || '%'\nand n2.id = 14;\n\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..120256.56 rows=17517 width=4) (actual \ntime=0.901..953.485 rows=7 loops=1)\n Join Filter: ((\"inner\".path)::text ~~ ((\"outer\".path)::text || \n'%'::text))\n -> Index Scan using nodes_id on nodes n2 (cost=0.00..35.08 rows=11 \nwidth=34) (actual time=0.050..0.059 rows=1 loops=1)\n Index Cond: (id = 14)\n -> Seq Scan on nodes n1 (cost=0.00..6151.89 rows=318489 width=38) \n(actual time=0.010..479.479 rows=318489 loops=1)\n Total runtime: 953.551 ms\n(6 rows)\n\nI've tried re-writing the query, which results in a different plan:\n\nselect id\nfrom nodes\nwhere path like (\n select path\n from nodes\n where id = 14\n limit 1\n) || '%';\n\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on nodes (cost=3.19..7747.52 rows=1592 width=4) (actual \ntime=0.230..226.311 rows=7 loops=1)\n Filter: ((path)::text ~~ (($0)::text || '%'::text))\n InitPlan\n -> Limit (cost=0.00..3.19 rows=1 width=34) (actual \ntime=0.018..0.019 rows=1 loops=1)\n -> Index Scan using nodes_id on nodes (cost=0.00..35.08 \nrows=11 width=34) (actual time=0.016..0.016 rows=1 loops=1)\n Index Cond: (id = 14)\n Total runtime: 226.381 ms\n(7 rows)\n\nWhile the plan looks a little better, the estimated rows are woefully \ninaccurate for some reason, resulting in a seq scan on nodes.\nIf I perform the nested select in the second query separately, then use \nthe result in the outer select, it's extremely fast:\n\ntest=# select path from nodes where id = 14;\n path \n--------\n /3/13/\n(1 row)\n\nTime: 0.555 ms\n\ntest=# select id from nodes where path like '/3/13/%';\nid\n---------\n 14\n 169012\n 15\n 16\n 17\n 169219\n 169220\n(7 rows)\n\nTime: 1.062 ms\n\nI've vacuum full analyzed. PG version is 8.1.4\n\nThe nodes table is as follows:\n\ntest=# \\d nodes\n Table \"public.nodes\"\n Column | Type | Modifiers\n--------+-------------------------+-----------\n id | integer | not null\n path | character varying(2000) | not null\n depth | integer | not null\nIndexes:\n \"nodes_pkey\" PRIMARY KEY, btree (id, path)\n \"nodes_id\" btree (id)\n \"nodes_path\" btree (path)\n\ntest# select count(*) from nodes;\n count \n--------\n 318489\n\nIs there a way to perform this efficiently in one query ?\n",
"msg_date": "Tue, 19 Sep 2006 09:48:10 +1000",
"msg_from": "Marc McIntyre <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE query problem"
},
{
"msg_contents": "Marc McIntyre <[email protected]> writes:\n> ... Is there a way to perform this efficiently in one query ?\n\nNo, because you're hoping for an indexscan optimization of a LIKE\nquery, and that can only happen if the pattern is a plan-time constant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Sep 2006 22:50:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query problem "
},
{
"msg_contents": "Thanks Tom,\n\nIs that documented somewhere? I can't seem to see any mention of it in \nthe docs.\n\nTom Lane wrote:\n> Marc McIntyre <[email protected]> writes:\n> \n>> ... Is there a way to perform this efficiently in one query ?\n>> \n>\n> No, because you're hoping for an indexscan optimization of a LIKE\n> query, and that can only happen if the pattern is a plan-time constant.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n\n",
"msg_date": "Tue, 19 Sep 2006 13:16:48 +1000",
"msg_from": "Marc McIntyre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIKE query problem"
}
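When the prefix has to be fetched from another row at run time, one possible workaround is to build the pattern inside a plpgsql function and run it with EXECUTE, so that by the time the statement is planned the pattern is a literal; a hypothetical helper (not from the thread), with the usual caveat that LIKE 'prefix%' can only use a btree index in the C locale or with a *_pattern_ops index:

    -- optional, for non-C locales (an assumption, not part of the original schema)
    CREATE INDEX nodes_path_pattern ON nodes (path varchar_pattern_ops);

    CREATE OR REPLACE FUNCTION child_ids(p_id int) RETURNS SETOF int AS $$
    DECLARE
        v_path varchar;
        r record;
    BEGIN
        SELECT path INTO v_path FROM nodes WHERE id = p_id LIMIT 1;
        -- EXECUTE plans the query with the pattern as a constant
        FOR r IN EXECUTE 'SELECT id FROM nodes WHERE path LIKE '
                         || quote_literal(v_path || '%')
        LOOP
            RETURN NEXT r.id;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- usage: SELECT * FROM child_ids(14);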
] |
[
{
"msg_contents": "I've just fired off a \"DELETE FROM table\" command (i.e. unfiltered \nDELETE) on a trivially small table but with many foreign key references \n(on similar-sized tables), and I'm waiting for it to finish. It's been \n10 minutes now, which seems very excessive for a table of 9000 rows on a \n3 GHz desktop machine.\n\n'top' says it's all spent in USER time, and there's a ~~500KB/s write \nrate going on. Just before this DELETE, I've deleted data from a larger \ntable (50000 rows) using the same method and it finished in couple of \nseconds - maybe it's a PostgreSQL bug?\n\nMy question is: assuming it's not a bug, how to optimize DELETEs? \nIncreasing work_mem maybe?\n\n(I'm using PostgreSQL 8.1.4 on FreeBSD 6- amd64)\n\n(I know about TRUNCATE; I need those foreign key references to cascade)\n",
"msg_date": "Tue, 19 Sep 2006 15:22:34 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing DELETE"
},
{
"msg_contents": "On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote:\n> I've just fired off a \"DELETE FROM table\" command (i.e. unfiltered \n> DELETE) on a trivially small table but with many foreign key references \n> (on similar-sized tables), and I'm waiting for it to finish. It's been \n> 10 minutes now, which seems very excessive for a table of 9000 rows on a \n> 3 GHz desktop machine.\n\nI would guess that a few of those referenced tables are missing indexes\non the referenced column.\n\n> 'top' says it's all spent in USER time, and there's a ~~500KB/s write \n> rate going on. Just before this DELETE, I've deleted data from a larger \n> table (50000 rows) using the same method and it finished in couple of \n> seconds - maybe it's a PostgreSQL bug?\n> \n> My question is: assuming it's not a bug, how to optimize DELETEs? \n> Increasing work_mem maybe?\n> \n> (I'm using PostgreSQL 8.1.4 on FreeBSD 6- amd64)\n> \n> (I know about TRUNCATE; I need those foreign key references to cascade)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n",
"msg_date": "Tue, 19 Sep 2006 09:53:07 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing DELETE"
},
{
"msg_contents": "> I've just fired off a \"DELETE FROM table\" command (i.e. unfiltered \n> DELETE) on a trivially small table but with many foreign key references \n> (on similar-sized tables), and I'm waiting for it to finish. It's been \n> 10 minutes now, which seems very excessive for a table of 9000 rows on a \n> 3 GHz desktop machine.\n\nIf you have missing indexes on the child tables foreign keys, that might\nbe a cause of slow delete. The cascading delete must look up the to be\ndeleted rows in all child tables, which will do sequential scans if you\ndon't have proper indexes.\n\nTry to do an explain analyze for deleting one row, that should also show\nyou the time spent in triggers, which might clue you in what's taking so\nlong.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Tue, 19 Sep 2006 16:15:29 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing DELETE"
},
{
"msg_contents": "You do not have indexes on all of the columns which are linked by\nforeign key constraints.\n\nFor example, let's say that I had a \"scientist\" table with a single\ncolumn \"scientist_name\" and another table \"discovery\" which had\n\"scientist_name\" as a column with a foreign key constraint to the\n\"scientist\" table.\n\nIf the system were to try to delete a row from the scientist table, then\nit would need to scan the discovery table for any row which referenced\nthat scientist_name.\n\nIf there is an index on the scientist_name column in the discovery\ntable, this is a fast operation. In your case however, there most\nlikely isn't an index on that column, so it needs to do a full table\nscan of the discovery table for each row deleted from the scientist\ntable.\n\nIf the discovery table has 100,000 rows, and there are 100 scientists,\nthen deleting those 100 scientists would require scanning 100,000 * 100\n= 10M records, so this sort of thing can quickly become a very expensive\noperation.\n\nBecause of this potential for truly atrocious update/delete behavior,\nsome database systems (SQL Server at least, and IIRC Oracle as well)\neither automatically create the index on discovery.scientist_name when\nthe foreign key constraint is created, or refuse to create the foreign\nkey constraint if there isn't already an index.\n\nPG doesn't force you to have an index, which can be desirable for\nperformance reasons in some situations if you know what you're doing,\nbut allows you to royally shoot yourself in the foot on deletes/updates\nto the parent table if you're not careful.\n\nIf you have a lot of constraints and want to track down which one is\nunindexed, then doing an EXPLAIN ANALYZE of deleting a single row from\nthe parent table will tell you how long each of the referential\nintegrity checks takes, so you can figure out which indexes are missing.\n\n-- Mark Lewis\n\nOn Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote:\n> I've just fired off a \"DELETE FROM table\" command (i.e. unfiltered \n> DELETE) on a trivially small table but with many foreign key references \n> (on similar-sized tables), and I'm waiting for it to finish. It's been \n> 10 minutes now, which seems very excessive for a table of 9000 rows on a \n> 3 GHz desktop machine.\n> \n> 'top' says it's all spent in USER time, and there's a ~~500KB/s write \n> rate going on. Just before this DELETE, I've deleted data from a larger \n> table (50000 rows) using the same method and it finished in couple of \n> seconds - maybe it's a PostgreSQL bug?\n> \n> My question is: assuming it's not a bug, how to optimize DELETEs? \n> Increasing work_mem maybe?\n> \n> (I'm using PostgreSQL 8.1.4 on FreeBSD 6- amd64)\n> \n> (I know about TRUNCATE; I need those foreign key references to cascade)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Tue, 19 Sep 2006 07:17:25 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing DELETE"
},
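A short sketch of both checks described above, sticking with the hypothetical scientist/discovery example rather than the original poster's schema:

    -- index the referencing column so the cascaded delete can use an index scan
    CREATE INDEX discovery_scientist_name_idx ON discovery (scientist_name);

    -- EXPLAIN ANALYZE of a single-row delete reports the time spent in each RI trigger,
    -- which points at any child table that still lacks an index; wrap it in a
    -- transaction so the row is not really removed
    BEGIN;
    EXPLAIN ANALYZE DELETE FROM scientist WHERE scientist_name = 'Lovelace';
    ROLLBACK;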
{
"msg_contents": "Rod Taylor wrote:\n> On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote:\n>> I've just fired off a \"DELETE FROM table\" command (i.e. unfiltered \n>> DELETE) on a trivially small table but with many foreign key references \n>> (on similar-sized tables), and I'm waiting for it to finish. It's been \n>> 10 minutes now, which seems very excessive for a table of 9000 rows on a \n>> 3 GHz desktop machine.\n> \n> I would guess that a few of those referenced tables are missing indexes\n> on the referenced column.\n\nYes, it was a pilot error :(\n\nAmong the small and properly indexed referencing tables there was a\nseldom queried but huge log table.\n\n",
"msg_date": "Tue, 19 Sep 2006 22:39:42 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing DELETE"
}
] |
[
{
"msg_contents": "Hello Lister, \nI am curios whether I can emulate the Oracle pipelined functions functionality in PG too (using RETURN NEXT ). For more\ninformation and examples about Oracle pipelined functions see:\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n\nI have used pipeline functions in DWH enviromnent with success and would like \nTo use similar concept in PG too.\n\nAny help, examples , links and shared experiences would be greately appreciated.\n\nBest Regards.\nMilen \n\n",
"msg_date": "Tue, 19 Sep 2006 22:43:33 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pipelined functions in Postgres"
},
{
"msg_contents": "I think pipelined functions are code you can pretend is a database table.\n\nFor example you can do it like this in Oracle:\n\nselect * from PLSQL_FUNCTION;\n\nYou can achieve something similar in PostgreSQL using RETURN SETOF functions\nlike this:\n\nCREATE OR REPLACE FUNCTION test_pipe (int)\n RETURNS SETOF RECORD AS\n$$\nDECLARE\n v_rec RECORD;\nBEGIN\n FOR temp_rec IN (SELECT col FROM table where col > 10)\n LOOP\n RETURN NEXT v_rec;\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n\nThis function can be called like this:\n\nSELECT * FROM test_pipe(10) AS tbl (col int);\n\nHope this helps...\n\nThanks,\n-- \nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\n\nOn 9/20/06, Milen Kulev <[email protected]> wrote:\n>\n> Hello Lister,\n> I am curios whether I can emulate the Oracle pipelined functions\n> functionality in PG too (using RETURN NEXT ). For more\n> information and examples about Oracle pipelined functions see:\n>\n> http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n>\n> I have used pipeline functions in DWH enviromnent with success and would\n> like\n> To use similar concept in PG too.\n>\n> Any help, examples , links and shared experiences would be greately\n> appreciated.\n>\n> Best Regards.\n> Milen\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nI think pipelined functions are code you can pretend is a database table. \nFor example you can do it like this in Oracle:\n\nselect * from PLSQL_FUNCTION;You can achieve something similar in PostgreSQL using RETURN SETOF functions like this:\nCREATE OR REPLACE FUNCTION test_pipe (int)\n RETURNS SETOF RECORD AS\n$$\nDECLARE\n v_rec RECORD;\nBEGIN\n FOR temp_rec IN (SELECT col FROM table where col > 10)\n LOOP\n RETURN NEXT v_rec;\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;This function can be called like this:\n\nSELECT * FROM test_pipe(10) AS tbl (col int); Hope this helps...Thanks,-- Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 9/20/06, Milen Kulev <[email protected]> wrote:\nHello Lister,I am curios whether I can emulate the Oracle pipelined functions functionality in PG too (using RETURN NEXT ). For moreinformation and examples about Oracle pipelined functions see:\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109I have used pipeline functions in DWH enviromnent with success and would likeTo use similar concept in PG too.\nAny help, examples , links and shared experiences would be greately appreciated.Best Regards.Milen---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Wed, 20 Sep 2006 02:05:17 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pipelined functions in Postgres"
},
{
"msg_contents": "Hi Milen,\n\nPipelined function is a code that acts like a database table.\n\nInorder to use this functionality in postgres you would need to write the\nfunction like this\n\nCREATE OR REPLACE FUNCTION get_test_data (numeric)\n RETURNS SETOF RECORD AS\n$$\nDECLARE\n temp_rec RECORD;\nBEGIN\n FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1)\n LOOP\n RETURN NEXT temp_rec;\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n\nnow inorder to call this function you would write the code as follows\n\nSELECT * FROM get_test_data(1000) AS t1 (emp_name VARCHAR);\n\n\nRegards\nTalha Amjad\n\n\n\nOn 9/19/06, Milen Kulev <[email protected]> wrote:\n>\n> Hello Lister,\n> I am curios whether I can emulate the Oracle pipelined functions\n> functionality in PG too (using RETURN NEXT ). For more\n> information and examples about Oracle pipelined functions see:\n>\n> http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n>\n> I have used pipeline functions in DWH enviromnent with success and would\n> like\n> To use similar concept in PG too.\n>\n> Any help, examples , links and shared experiences would be greately\n> appreciated.\n>\n> Best Regards.\n> Milen\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nHi Milen,\n \nPipelined function is a code that acts like a database table.\n \nInorder to use this functionality in postgres you would need to write the function like this\n \nCREATE OR REPLACE FUNCTION get_test_data (numeric) RETURNS SETOF RECORD AS$$DECLARE temp_rec RECORD;BEGIN FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1) LOOP\n RETURN NEXT temp_rec; END LOOP; RETURN;END;$$ LANGUAGE plpgsql; \nnow inorder to call this function you would write the code as follows\n \nSELECT * FROM get_test_data(1000) AS t1 (emp_name VARCHAR); \n \nRegards\nTalha Amjad\n \nOn 9/19/06, Milen Kulev <[email protected]> wrote:\nHello Lister,I am curios whether I can emulate the Oracle pipelined functions functionality in PG too (using RETURN NEXT ). For more\ninformation and examples about Oracle pipelined functions see:http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\nI have used pipeline functions in DWH enviromnent with success and would likeTo use similar concept in PG too.Any help, examples , links and shared experiences would be greately appreciated.\nBest Regards.Milen---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \[email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Tue, 19 Sep 2006 14:07:58 -0700",
"msg_from": "\"Talha Khan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pipelined functions in Postgres"
},
{
"msg_contents": "Hello Shoaib,\nI know the SETOF funcitons. I want to simulate (somehow) producer/consumer relationship with SETOF(pipelined)\nfunctions. The first (producer )function generates records (just like your test_pipe function), and the second\nfunction consumers the records , produced by the first function. The second function can be rows/records producer for\nanother consumer functions e.g. it should looks like(or similar)\nselect * from consumer_function( producer_function(param1, param2, ...));\n \nWhat I want to achieve is to impelement some ETL logic in consumer_functions (they could be chained, of course).\nThe main idea is to read source DWH tables once (in producer_function, for example), and to process the rowsets \nin the consumer functions. I want to avoid writing to intermediate tables while performing ETL processing .\nIs this possible with SETOF functions ? \n \nBest Regards\nMilen \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Shoaib Mir\nSent: Tuesday, September 19, 2006 11:05 PM\nTo: Milen Kulev\nCc: [email protected]\nSubject: Re: [PERFORM] Pipelined functions in Postgres\n\n\nI think pipelined functions are code you can pretend is a database table. \n\nFor example you can do it like this in Oracle:\n\nselect * from PLSQL_FUNCTION;\n\nYou can achieve something similar in PostgreSQL using RETURN SETOF functions like this:\n\nCREATE OR REPLACE FUNCTION test_pipe (int)\n RETURNS SETOF RECORD AS\n$$\nDECLARE\n v_rec RECORD;\nBEGIN\n FOR temp_rec IN (SELECT col FROM table where col > 10)\n LOOP\n RETURN NEXT v_rec;\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n\nThis function can be called like this:\n\nSELECT * FROM test_pipe(10) AS tbl (col int);\n \nHope this helps...\n\nThanks,\n-- \nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\n\n\nOn 9/20/06, Milen Kulev <[email protected]> wrote: \n\nHello Lister,\nI am curios whether I can emulate the Oracle pipelined functions functionality in PG too (using RETURN NEXT ). For more\ninformation and examples about Oracle pipelined functions see:\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n\nI have used pipeline functions in DWH enviromnent with success and would like\nTo use similar concept in PG too. \n\nAny help, examples , links and shared experiences would be greately appreciated.\n\nBest Regards.\nMilen\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate \n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\n\n\n\n\nNachricht\n\n\nHello \nShoaib,\nI know \nthe SETOF funcitons. I want to simulate (somehow) producer/consumer \nrelationship with SETOF(pipelined) functions. The first \n(producer )function generates records (just like your test_pipe function), \nand the second function consumers the records , produced by the first \nfunction. The second function can be rows/records producer for another consumer \nfunctions e.g. it should looks like(or similar)\nselect \n* from consumer_function( producer_function(param1, param2, \n...));\n \nWhat I \nwant to achieve is to impelement some ETL logic \nin consumer_functions (they could be chained, of \ncourse).\nThe \nmain idea is to read source DWH tables once (in producer_function, \nfor example), and to process the rowsets \nin the \nconsumer functions. 
I want to avoid writing to intermediate tables \nwhile performing ETL processing .\nIs \nthis possible with SETOF functions \n? \n \nBest \nRegards\nMilen \n\n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Shoaib \n MirSent: Tuesday, September 19, 2006 11:05 PMTo: Milen \n KulevCc: [email protected]: Re: \n [PERFORM] Pipelined functions in PostgresI think \n pipelined functions are code you can pretend is a database table. For \n example you can do it like this in Oracle:select * from PLSQL_FUNCTION;You can \n achieve something similar in PostgreSQL using RETURN SETOF functions like \n this:CREATE OR REPLACE FUNCTION \n test_pipe (int) RETURNS SETOF RECORD AS$$DECLARE \n v_rec RECORD;BEGIN FOR temp_rec IN (SELECT col FROM \n table where col > 10) LOOP \n RETURN NEXT v_rec; END LOOP; \n RETURN;END;$$ LANGUAGE plpgsql;This function \n can be called like this:SELECT * FROM \n test_pipe(10) AS tbl (col int); Hope this \n helps...Thanks,-- Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 9/20/06, Milen \n Kulev <[email protected]> \n wrote:\nHello \n Lister,I am curios whether I can emulate the Oracle pipelined functions \n functionality in PG too (using RETURN NEXT ). For moreinformation and \n examples about Oracle pipelined functions see:http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109I \n have used pipeline functions in DWH enviromnent with \n success and would likeTo use similar concept in PG too. Any \n help, examples , links and shared experiences would be greately \n appreciated.Best \n Regards.Milen---------------------------(end of \n broadcast)---------------------------TIP 1: if posting/reading through \n Usenet, please send an appropriate \n subscribe-nomail command to [email protected] so that \n your message can get through to the \n mailing list cleanly",
"msg_date": "Tue, 19 Sep 2006 23:22:15 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pipelined functions in Postgres"
},
{
"msg_contents": "Talha, \ndo you know how much memory is consumed by the SETOF function ?\nWhat happens with memory consumption of the function if \nSELECT ename FROM emp WHERE sal > $1\n returns 10 mio rows ? \nI suppose that memory for the RECORD structure is immediately reused by the next record.\n \nRegards, Milen \n \n\n-----Original Message-----\nFrom: Talha Khan [mailto:[email protected]] \nSent: Tuesday, September 19, 2006 11:08 PM\nTo: Milen Kulev\nCc: [email protected]\nSubject: Re: [PERFORM] Pipelined functions in Postgres\n\n\nHi Milen,\n \nPipelined function is a code that acts like a database table.\n \nInorder to use this functionality in postgres you would need to write the function like this\n \nCREATE OR REPLACE FUNCTION get_test_data (numeric)\n RETURNS SETOF RECORD AS\n$$\nDECLARE\n temp_rec RECORD;\nBEGIN\n FOR temp_rec IN (SELECT ename FROM emp WHERE sal > $1)\n LOOP\n RETURN NEXT temp_rec;\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n \nnow inorder to call this function you would write the code as follows\n \nSELECT * FROM get_test_data(1000) AS t1 (emp_name VARCHAR);\n \n \nRegards\nTalha Amjad\n\n\n \nOn 9/19/06, Milen Kulev <[email protected]> wrote: \n\nHello Lister,\nI am curios whether I can emulate the Oracle pipelined functions functionality in PG too (using RETURN NEXT ). For more \ninformation and examples about Oracle pipelined functions see:\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n<http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n> \n\nI have used pipeline functions in DWH enviromnent with success and would like\nTo use similar concept in PG too.\n\nAny help, examples , links and shared experiences would be greately appreciated.\n\nBest Regards.\nMilen\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\n\nNachricht\n\n\nTalha, \n\ndo you \nknow how much memory is consumed by the SETOF function \n?\nWhat \nhappens with memory consumption of the function if \nSELECT \nename FROM emp WHERE sal > $1\n returns 10 mio \nrows ? \nI suppose \nthat memory for the RECORD structure is immediately reused by the \nnext record.\n \nRegards, \nMilen \n \n\n\n-----Original Message-----From: Talha Khan \n [mailto:[email protected]] Sent: Tuesday, September 19, 2006 \n 11:08 PMTo: Milen KulevCc: \n [email protected]: Re: [PERFORM] Pipelined \n functions in Postgres\nHi Milen,\n \nPipelined function is a code that acts like a database table.\n \nInorder to use this functionality in postgres you would need to write the \n function like this\n \nCREATE OR REPLACE FUNCTION get_test_data (numeric) \n RETURNS SETOF RECORD AS$$DECLARE temp_rec \n RECORD;BEGIN FOR temp_rec IN (SELECT ename FROM \n emp WHERE sal > $1) LOOP \n RETURN NEXT temp_rec; END LOOP; \n RETURN;END;$$ LANGUAGE plpgsql; \nnow inorder to call this function you would write the code as \n follows\n \nSELECT * FROM get_test_data(1000) AS t1 (emp_name \n VARCHAR); \n \nRegards\nTalha Amjad\n \nOn 9/19/06, Milen \n Kulev <[email protected]> \n wrote:\nHello \n Lister,I am curios whether I can emulate the Oracle pipelined functions \n functionality in PG too (using RETURN NEXT ). 
For more information and \n examples about Oracle pipelined functions see:http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109 \n I have used pipeline functions in DWH \n enviromnent with success and would likeTo use similar concept \n in PG too.Any help, examples , links and shared \n experiences would be greately appreciated.Best \n Regards.Milen---------------------------(end of \n broadcast)---------------------------TIP 1: if posting/reading through \n Usenet, please send an \n appropriate subscribe-nomail command \n to [email protected] so \n that your message can get through to \n the mailing list cleanly",
"msg_date": "Tue, 19 Sep 2006 23:29:07 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pipelined functions in Postgres"
},
{
"msg_contents": "I dont think so that will be possible using SETOF function ...\n\nYou might have to partition the current query and this way can distribute\nthe full load of the query if there is too much data invovled.\n\nThanks,\n-- \nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 9/20/06, Milen Kulev <[email protected]> wrote:\n>\n> Hello Shoaib,\n> I know the SETOF funcitons. I want to simulate (somehow)\n> producer/consumer relationship with SETOF(pipelined) functions. The first\n> (producer )function generates records (just like your test_pipe function),\n> and the second function consumers the records , produced by the first\n> function. The second function can be rows/records producer for another\n> consumer functions e.g. it should looks like(or similar)\n> select * from consumer_function( producer_function(param1, param2, ...));\n>\n> What I want to achieve is to impelement some ETL logic\n> in consumer_functions (they could be chained, of course).\n> The main idea is to read source DWH tables once (in producer_function,\n> for example), and to process the rowsets\n> in the consumer functions. I want to avoid writing to intermediate tables\n> while performing ETL processing .\n> Is this possible with SETOF functions ?\n>\n> Best Regards\n> Milen\n>\n> -----Original Message-----\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Shoaib Mir\n> *Sent:* Tuesday, September 19, 2006 11:05 PM\n> *To:* Milen Kulev\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Pipelined functions in Postgres\n>\n> I think pipelined functions are code you can pretend is a database table.\n>\n> For example you can do it like this in Oracle:\n>\n> select * from PLSQL_FUNCTION;\n>\n> You can achieve something similar in PostgreSQL using RETURN SETOF\n> functions like this:\n>\n> CREATE OR REPLACE FUNCTION test_pipe (int)\n> RETURNS SETOF RECORD AS\n> $$\n> DECLARE\n> v_rec RECORD;\n> BEGIN\n> FOR temp_rec IN (SELECT col FROM table where col > 10)\n> LOOP\n> RETURN NEXT v_rec;\n> END LOOP;\n> RETURN;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> This function can be called like this:\n>\n> SELECT * FROM test_pipe(10) AS tbl (col int);\n>\n> Hope this helps...\n>\n> Thanks,\n> --\n> Shoaib Mir\n> EnterpriseDB (www.enterprisedb.com)\n>\n>\n> On 9/20/06, Milen Kulev <[email protected]> wrote:\n> >\n> > Hello Lister,\n> > I am curios whether I can emulate the Oracle pipelined functions\n> > functionality in PG too (using RETURN NEXT ). 
For more\n> > information and examples about Oracle pipelined functions see:\n> >\n> > http://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109\n> >\n> > I have used pipeline functions in DWH enviromnent with success and\n> > would like\n> > To use similar concept in PG too.\n> >\n> > Any help, examples , links and shared experiences would be greately\n> > appreciated.\n> >\n> > Best Regards.\n> > Milen\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n>\n>\n>\n\nI dont think so that will be possible using SETOF function ...You might have to partition the current query and this way can distribute the full load of the query if there is too much data invovled.Thanks,\n-- Shoaib MirEnterpriseDB (www.enterprisedb.com)On 9/20/06, Milen Kulev <\[email protected]> wrote:\n\nHello \nShoaib,\nI know \nthe SETOF funcitons. I want to simulate (somehow) producer/consumer \nrelationship with SETOF(pipelined) functions. The first \n(producer )function generates records (just like your test_pipe function), \nand the second function consumers the records , produced by the first \nfunction. The second function can be rows/records producer for another consumer \nfunctions e.g. it should looks like(or similar)\nselect \n* from consumer_function( producer_function(param1, param2, \n...));\n \nWhat I \nwant to achieve is to impelement some ETL logic \nin consumer_functions (they could be chained, of \ncourse).\nThe \nmain idea is to read source DWH tables once (in producer_function, \nfor example), and to process the rowsets \nin the \nconsumer functions. I want to avoid writing to intermediate tables \nwhile performing ETL processing .\nIs \nthis possible with SETOF functions \n? \n \nBest \nRegards\nMilen \n\n\n\n-----Original Message-----From:\[email protected] \n [mailto:[email protected]] On Behalf Of Shoaib \n MirSent: Tuesday, September 19, 2006 11:05 PMTo: Milen \n KulevCc: [email protected]: Re: \n [PERFORM] Pipelined functions in PostgresI think \n pipelined functions are code you can pretend is a database table. For \n example you can do it like this in Oracle:select * from PLSQL_FUNCTION;You can \n achieve something similar in PostgreSQL using RETURN SETOF functions like \n this:CREATE OR REPLACE FUNCTION \n test_pipe (int) RETURNS SETOF RECORD AS$$\nDECLARE \n v_rec RECORD;BEGIN FOR temp_rec IN (SELECT col FROM \n table where col > 10) LOOP \n RETURN NEXT v_rec; END LOOP; \n RETURN;END;$$ LANGUAGE plpgsql;This function \n can be called like this:SELECT * FROM \n test_pipe(10) AS tbl (col int); Hope this \n helps...Thanks,-- Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 9/20/06, Milen \n Kulev <[email protected]> \n wrote:\nHello \n Lister,I am curios whether I can emulate the Oracle pipelined functions \n functionality in PG too (using RETURN NEXT ). For moreinformation and \n examples about Oracle pipelined functions see:\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:8127757633768425921::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:4447489221109I \n have used pipeline functions in DWH enviromnent with \n success and would likeTo use similar concept in PG too. 
Any \n help, examples , links and shared experiences would be greately \n appreciated.Best \n Regards.Milen---------------------------(end of \n broadcast)---------------------------TIP 1: if posting/reading through \n Usenet, please send an appropriate \n subscribe-nomail command to [email protected] so that \n your message can get through to the \n mailing list cleanly",
"msg_date": "Wed, 20 Sep 2006 03:04:28 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pipelined functions in Postgres"
},
{
"msg_contents": "On Tue, 2006-09-19 at 23:22 +0200, Milen Kulev wrote:\n> Hello Shoaib,\n> I know the SETOF funcitons. I want to simulate (somehow)\n> producer/consumer relationship with SETOF(pipelined) functions. The\n> first (producer )function generates records (just like your test_pipe\n> function), and the second function consumers the records , produced by\n> the first function. The second function can be rows/records producer\n> for another consumer functions e.g. it should looks like(or similar)\n> select * from consumer_function( producer_function(param1,\n> param2, ...));\n> \n> What I want to achieve is to impelement some ETL logic\n> in consumer_functions (they could be chained, of course).\n> The main idea is to read source DWH tables once (in\n> producer_function, for example), and to process the rowsets \n> in the consumer functions. I want to avoid writing to intermediate\n> tables while performing ETL processing .\n> Is this possible with SETOF functions ? \n> \n\nFunctions cannot take a relation as a parameter.\n\nWhy not create a single function that does what you need it to do? You\ncan write such a function in the language of your choice, including C,\nperl, PL/pgSQL, among others. That gives you a lot of power to do what\nyou need to do in a single pass, without passing the results on to other\nfunctions.\n\nIf you provide an example of what you need to be able to do maybe\nsomeone on this list knows a way to do it with one function call.\n\nAlso, I'll point out that what you want to do is very similar to using\ntypical relational constructs. Consider whether sub-selects or\naggregates in conjunction with set-returning functions can achieve what\nyou want. PostgreSQL is smart enough to only read the big table once if\npossible.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 19 Sep 2006 16:54:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pipelined functions in Postgres"
}
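Although a function cannot take a relation as a parameter, a consumer function can still loop over a producer function's result set, which gives a rough (if not truly pipelined) equivalent of the chaining asked about; a sketch with a made-up table and transformation logic, noting that plpgsql RETURN NEXT accumulates the whole result set (spilling to disk beyond work_mem) before the caller sees any rows, so very large sets are not streamed the way Oracle pipelined functions stream them:

    -- hypothetical source table, purely illustrative
    CREATE TABLE source_table (id int, some_column text);

    -- producer: emits rows from the source
    CREATE OR REPLACE FUNCTION produce_rows() RETURNS SETOF source_table AS $$
    DECLARE
        r source_table%ROWTYPE;
    BEGIN
        FOR r IN SELECT * FROM source_table LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- consumer: transforms the producer's rows and passes them on
    CREATE OR REPLACE FUNCTION consume_rows() RETURNS SETOF source_table AS $$
    DECLARE
        r source_table%ROWTYPE;
    BEGIN
        FOR r IN SELECT * FROM produce_rows() LOOP
            r.some_column := upper(r.some_column);  -- stand-in for real ETL logic
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- usage: SELECT * FROM consume_rows();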
] |
[
{
"msg_contents": "Hi all,\n\nI was searching tips to speed up/reduce load on a Pg8 app.\nThank you for all your suggestions on the matter.\nThread is archived here:\n\nhttp://www.mail-archive.com/[email protected]/msg18342.html\n\nAfter intensive application profiling and database workload analysis,\nI managed to reduce CPU load with application-level changes.\n\nFor database overload in presence of many concurrent\ntransactions, I found that just doing an \"ANALYZE\" on sensible\nrelations makes the situation better.\n\nI scheduled a cron job every hour or so that runs an analyze on the\n4/5 most intensive relations and sleeps 30 seconds between every\nanalyze.\n\nThis has optimized db response times when many clients run together.\nI wanted to report this, maybe it can be helpful for others\nout there... :-)\n\n-- \nCosimo\n\n",
"msg_date": "Wed, 20 Sep 2006 11:09:23 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update on high concurrency OLTP application and Postgres 8 tuning"
},
{
"msg_contents": "On Wed, Sep 20, 2006 at 11:09:23AM +0200, Cosimo Streppone wrote:\n> \n> I scheduled a cron job every hour or so that runs an analyze on the\n> 4/5 most intensive relations and sleeps 30 seconds between every\n> analyze.\n> \n> This has optimized db response times when many clients run together.\n> I wanted to report this, maybe it can be helpful for others\n> out there... :-)\n\nThis suggests to me that your statistics need a lot of updating. You\n_might_ find that setting the statistics to a higher number on some\ncolumns of some of your tables will allow you to analyse less\nfrequently. That's a good thing just because ANALYSE will impose an\nI/O load.\n\nA \n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n",
"msg_date": "Wed, 20 Sep 2006 07:07:31 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update on high concurrency OLTP application and Postgres 8 tuning"
},
{
"msg_contents": "Andrew wrote:\n\n> On Wed, Sep 20, 2006 at 11:09:23AM +0200, Cosimo Streppone wrote:\n>> I scheduled a cron job every hour or so that runs an analyze on the\n>> 4/5 most intensive relations and sleeps 30 seconds between every\n>> analyze.\n>\n> This suggests to me that your statistics need a lot of updating.\n\nAgreed.\n\n> You _might_ find that setting the statistics to a higher number on some\n> columns of some of your tables will allow you to analyse less\n> frequently.\n\nAt the moment, my rule of thumb is to check out the ANALYZE VERBOSE\nmessages to see if all table pages are being scanned.\n\n INFO: \"mytable\": scanned xxx of yyy pages, containing ...\n\nIf xxx = yyy, then I keep statistics at the current level.\nWhen xxx is way less than yyy, I increase the numbers a bit\nand retry.\n\nIt's probably primitive, but it seems to work well.\n\n > [...] ANALYSE will impose an I/O load.\n\nIn my case, analyze execution doesn't impact performance\nin any noticeable way. YMMV of course.\n\n-- \nCosimo\n\n",
"msg_date": "Wed, 20 Sep 2006 14:22:30 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update on high concurrency OLTP application and Postgres"
},
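For reference, the statistics target being discussed is a per-column setting and only takes effect at the next ANALYZE; a small sketch on the hypothetical mytable/mycolumn from the message above:

    -- sample this column more heavily than default_statistics_target allows
    ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 200;

    -- re-analyze and check in the VERBOSE output how many pages were scanned
    ANALYZE VERBOSE mytable;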
{
"msg_contents": "Christian Storm wrote:\n\n>>At the moment, my rule of thumb is to check out the ANALYZE VERBOSE\n>>messages to see if all table pages are being scanned.\n>>\n>> INFO: \"mytable\": scanned xxx of yyy pages, containing ...\n>>\n>>If xxx = yyy, then I keep statistics at the current level.\n>>When xxx is way less than yyy, I increase the numbers a bit\n>>and retry.\n>>\n>>It's probably primitive, but it seems to work well.\n >\n> What heuristic do you use to up the statistics for such a table?\n\nNo heuristics, just try and see.\nFor tables of ~ 10k pages, I set statistics to 100/200.\nFor ~ 100k pages, I set them to 500 or more.\nI don't know the exact relation.\n\n> Once you've changed it, what metric do you use to\n > see if it helps or was effective?\n\nI rerun an analyze and see the results... :-)\nIf you mean checking the usefulness, I can see it only\nunder heavy load, if particular db queries run in the order\nof a few milliseconds.\n\nIf I see normal queries that take longer and longer, or\nthey even appear in the server's log (> 500 ms), then\nI know an analyze is needed, or statistics should be set higher.\n\n-- \nCosimo\n\n\n",
"msg_date": "Fri, 22 Sep 2006 22:48:16 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update on high concurrency OLTP application and Postgres"
},
{
"msg_contents": "Have you ever done any testing to see if just setting \ndefault_statistics_target to 500 has a negative impact on the system?\n\nOn Sep 22, 2006, at 4:48 PM, Cosimo Streppone wrote:\n\n> Christian Storm wrote:\n>\n>>> At the moment, my rule of thumb is to check out the ANALYZE VERBOSE\n>>> messages to see if all table pages are being scanned.\n>>>\n>>> INFO: \"mytable\": scanned xxx of yyy pages, containing ...\n>>>\n>>> If xxx = yyy, then I keep statistics at the current level.\n>>> When xxx is way less than yyy, I increase the numbers a bit\n>>> and retry.\n>>>\n>>> It's probably primitive, but it seems to work well.\n> >\n>> What heuristic do you use to up the statistics for such a table?\n>\n> No heuristics, just try and see.\n> For tables of ~ 10k pages, I set statistics to 100/200.\n> For ~ 100k pages, I set them to 500 or more.\n> I don't know the exact relation.\n>\n>> Once you've changed it, what metric do you use to\n> > see if it helps or was effective?\n>\n> I rerun an analyze and see the results... :-)\n> If you mean checking the usefulness, I can see it only\n> under heavy load, if particular db queries run in the order\n> of a few milliseconds.\n>\n> If I see normal queries that take longer and longer, or\n> they even appear in the server's log (> 500 ms), then\n> I know an analyze is needed, or statistics should be set higher.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 26 Sep 2006 23:14:15 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update on high concurrency OLTP application and Postgres"
}
] |
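On the question of simply raising default_statistics_target across the board: a hedged way to gauge the cost is to time ANALYZE and a representative query at both settings within one session (the table and query below are placeholders, not from the thread).

\timing
ANALYZE mytable;                          -- at the current target
SET default_statistics_target = 500;      -- session-local change only
ANALYZE mytable;                          -- compare the ANALYZE time
EXPLAIN ANALYZE SELECT count(*) FROM mytable WHERE customer_id = 42;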
[
{
"msg_contents": "Hi,\n\nI am running bechmark test in a 50 GB postgresql database.\nI have the postgresql.conf with all parameters by default.\nIn this configuration the database is very, very slow.\n\nCould you please tell which is the best configuration?\n\nMy system:\nPentium D 3.0Ghz\nRAM: 1GB\nHD: 150GB SATA\n\nThanks in advance,\nNuno\n",
"msg_date": "Wed, 20 Sep 2006 16:28:08 +0100",
"msg_from": "\"Nuno Alves\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "running benchmark test on a 50GB database"
},
{
"msg_contents": "I would start by reading this web page:\n\nhttp://powerpostgresql.com/PerfList\n\nThere are probably some other web pages out there with similar information,\nor you can check the mailing list archives for a lot of info. If those\nplaces don't help, then you should try to indentify what queries are slow,\npost an EXPLAIN ANALYZE of the slow queries along with the relvent schema\ninfo (i.e. table definitions and indexes).\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Nuno Alves\n> Sent: Wednesday, September 20, 2006 10:28 AM\n> To: [email protected]\n> Subject: [PERFORM] running benchmark test on a 50GB database\n> \n> \n> Hi,\n> \n> I am running bechmark test in a 50 GB postgresql database.\n> I have the postgresql.conf with all parameters by default.\n> In this configuration the database is very, very slow.\n> \n> Could you please tell which is the best configuration?\n> \n> My system:\n> Pentium D 3.0Ghz\n> RAM: 1GB\n> HD: 150GB SATA\n> \n> Thanks in advance,\n> Nuno\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n",
"msg_date": "Wed, 20 Sep 2006 10:43:45 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: running benchmark test on a 50GB database"
},
{
"msg_contents": "\n> I am running bechmark test in a 50 GB postgresql database.\n> I have the postgresql.conf with all parameters by default.\n> In this configuration the database is very, very slow.\n> \n> Could you please tell which is the best configuration?\n> \n> My system:\n> Pentium D 3.0Ghz\n> RAM: 1GB\n> HD: 150GB SATA\n\nWe don't know what your database looks like, what the\nqueries are you're running, what \"very, very\nslow\" means for you and what version of PostgreSQL\non what OS this is :/\n\nThe two links are a good starting point to tuning your DB:\nhttp://www.postgresql.org/docs/8.1/static/performance-tips.html\nhttp://www.powerpostgresql.com/PerfList/\n\n\nBye, Chris.\n\n\n",
"msg_date": "Wed, 20 Sep 2006 17:47:41 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: running benchmark test on a 50GB database"
},
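For a machine like the one described (1GB RAM, a single SATA disk), a rough starting point in the spirit of those guides might look like the lines below; the numbers are illustrative only and assume an 8.1-era server where shared_buffers and effective_cache_size are counted in 8KB pages and work_mem in KB, so they would still need testing against the real workload.

# postgresql.conf -- illustrative values for ~1GB RAM, not a recommendation
shared_buffers = 25000           # ~200MB of shared buffer cache
effective_cache_size = 65536     # ~512MB, a guess at what the OS cache will hold
work_mem = 8192                  # 8MB per sort/hash operation
checkpoint_segments = 16         # fewer, larger checkpoints under bulk writes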
{
"msg_contents": "On Wed, Sep 20, 2006 at 05:47:41PM +0200, Chris Mair wrote:\n> \n> > I am running bechmark test in a 50 GB postgresql database.\n> > I have the postgresql.conf with all parameters by default.\n> > In this configuration the database is very, very slow.\n> > \n> > Could you please tell which is the best configuration?\n> > \n> > My system:\n> > Pentium D 3.0Ghz\n> > RAM: 1GB\n> > HD: 150GB SATA\n> \n> We don't know what your database looks like, what the\n> queries are you're running, what \"very, very\n> slow\" means for you and what version of PostgreSQL\n> on what OS this is :/\n> \n> The two links are a good starting point to tuning your DB:\n> http://www.postgresql.org/docs/8.1/static/performance-tips.html\n> http://www.powerpostgresql.com/PerfList/\n\nAlso, 1G is kinda light on memory for a 50G database.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 20 Sep 2006 17:05:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: running benchmark test on a 50GB database"
}
] |
[
{
"msg_contents": "Hi\n\nAfter upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux\nRH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).\nI've applied the following parameters to postgres.conf:\n\nmax_connections = 500\nshared_buffers = 3000\nwork_mem = 100000\neffective_cache_size = 3000000000\n\nMost queries still perform slower than with MySQL. \nIs there anything else that can be tweaked or is this a limitation of PG or the benchmark?\n\nThanks.\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 21 Sep 2006 07:52:44 -0700 (PDT)",
"msg_from": "yoav x <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and sql-bench"
},
{
"msg_contents": "On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:\n> Hi\n> \n> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux\n> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).\n> I've applied the following parameters to postgres.conf:\n> \n> max_connections = 500\n> shared_buffers = 3000\n> work_mem = 100000\n> effective_cache_size = 3000000000\n> \n> Most queries still perform slower than with MySQL. \n> Is there anything else that can be tweaked or is this a limitation of PG or the benchmark?\n\nAs mentioned by others, you are using a benchmark that is slanted\ntowards MySQL. \n\n",
"msg_date": "Thu, 21 Sep 2006 11:03:12 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
{
"msg_contents": "Not to offend, but since most of us are PG users, we're not all that\nfamiliar with what the different tests in MySQL's sql-bench benchmark\ndo. So you won't get very far by saying \"PG is slow on benchmark X, can\nI make it faster?\", because that doesn't include any of the information\nwe need in order to help.\n\nSpecifics would be nice, including at least the following:\n\n1. Which specific test case(s) would you like to try to make faster?\nWhat do the table schema look like, including indexes and constraints?\n\n2. What strategy did you settle on for handling VACUUM and ANALYZE\nduring the test? Have you confirmed that you aren't suffering from\ntable bloat?\n\n3. What are the actual results you got from the PG run in question?\n\n4. What is the size of the data set referenced in the test run?\n\n-- Mark Lewis\n\nOn Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:\n> Hi\n> \n> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux\n> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).\n> I've applied the following parameters to postgres.conf:\n> \n> max_connections = 500\n> shared_buffers = 3000\n> work_mem = 100000\n> effective_cache_size = 3000000000\n> \n> Most queries still perform slower than with MySQL. \n> Is there anything else that can be tweaked or is this a limitation of PG or the benchmark?\n> \n> Thanks.\n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around \n> http://mail.yahoo.com \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n",
"msg_date": "Thu, 21 Sep 2006 08:05:18 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
{
"msg_contents": "Hi.\n\nDo you compare apples to apples? InnoDB tables to PostgreSQL? Are all\nneeded indexes available? Are you sure about that? What about fsync?\nDoes the benchmark insert a lot of rows? Have you tested placing the\nWAL on a separate disk? Is PostgreSQL logging more stuff?\n\nAnother thing: have you analyzed the tables? Have you tested higher\nshared_buffers?\n\nAnd the last thing: there are lies, damn lies and benchmarks. What\ndoes a benchmark, which might be optimized for one DB, help you with\nyour own db workload?\n\nThere are soooo many things that can go wrong with a benchmark if you\ndon't have real knowledge on how to optimize both DBMS that it is just\nworthless to use it anyway if you don't have the knowledge ...\n\nPostgreSQL outperforms MySQL in our environment in EVERY situation\nneeded by the application. So, does the benchmark represent your work\nload? Does the benchmark result say anything for your own situation?\nOr is this all for the sake of running a benchmark?\n\ncug\n\n\nOn 9/21/06, yoav x <[email protected]> wrote:\n> Hi\n>\n> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux\n> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).\n> I've applied the following parameters to postgres.conf:\n>\n> max_connections = 500\n> shared_buffers = 3000\n> work_mem = 100000\n> effective_cache_size = 3000000000\n>\n> Most queries still perform slower than with MySQL.\n> Is there anything else that can be tweaked or is this a limitation of PG or the benchmark?\n>\n> Thanks.\n\n-- \nPostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\nhttp://www.bignerdranch.com/news/2006-08-21.shtml\n",
"msg_date": "Thu, 21 Sep 2006 17:07:56 +0200",
"msg_from": "\"Guido Neitzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
{
"msg_contents": "yoav x <[email protected]> writes:\n> I've applied the following parameters to postgres.conf:\n\n> max_connections = 500\n> shared_buffers = 3000\n> work_mem = 100000\n> effective_cache_size = 3000000000\n\nPlease see my earlier reply --- you ignored at least\ncheckpoint_segments, which is critical, and perhaps other things.\n\nDon't forget also that testing mysql/myisam against fsync = on\nis inherently unfair.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Sep 2006 11:12:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench "
},
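To make the hint above concrete, a brief sketch of the kind of settings involved, with illustrative values only: checkpoint_segments limits how much WAL accumulates between checkpoints (the default of 3 forces very frequent checkpoints under a write-heavy benchmark), and turning fsync off is the only way to make the durability guarantees roughly comparable to MyISAM's, at the price of crash safety.

# postgresql.conf -- for a benchmark run only, never for production data
checkpoint_segments = 32
wal_buffers = 64
fsync = off          # comparable to MyISAM's non-durable writes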
{
"msg_contents": "On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:\n> Hi\n> \n> After upgrading DBI and DBD::Pg, this benchmark still picks MySQL as the winner (at least on Linux\n> RH3 on a Dell 1875 server with 2 hyperthreaded 3.6GHz CPUs and 4GB RAM).\n> I've applied the following parameters to postgres.conf:\n> \n> max_connections = 500\n> shared_buffers = 3000\n\nThat's a low setting. 3000*8192 = 24MB. This should probably be closer\nto 25% total memory, or 1GB, or 131072 shared buffers (however, that's\njust a rule of thumb, there may be a better setting).\n\n> work_mem = 100000\n> effective_cache_size = 3000000000\n\nThat is a very high setting. effective_cache_size is measured in disk\npages, so if you want 3GB the correct setting is 393216.\n\n> \n> Most queries still perform slower than with MySQL. \n> Is there anything else that can be tweaked or is this a limitation of PG or the benchmark?\n> \n\nAs others have pointed out, sql-bench may not be a realistic benchmark.\nThe best way to examine performance is against real work.\n\nAlso, consider that relational databases were not developed to increase\nperformance. Things like filesystems are inherently \"faster\" because\nthey do less. However, relational databases make development of systems\nof many applications easier to develop, and also make it easier to make\na well-performing application. If the benchmark isn't testing anything\nthat a filesystem can't do, then either:\n(a) Your application could probably make better use of a relational\ndatabase; or\n(b) The benchmark doesn't represent your application's needs.\n\nRegards,\n\tJeff Davis\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Thu, 21 Sep 2006 09:45:56 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
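To make the unit arithmetic above concrete, a small worked example (the figures are illustrative for a 4GB machine): both settings count 8KB units, so a byte figure is divided by 8192.

-- shared_buffers:       ~1GB -> 1073741824 / 8192 = 131072 buffers
-- effective_cache_size: ~3GB -> 3221225472 / 8192 = 393216 pages
SHOW shared_buffers;
SHOW effective_cache_size;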
{
"msg_contents": "On Thu, Sep 21, 2006 at 11:12:45AM -0400, Tom Lane wrote:\n> yoav x <[email protected]> writes:\n> > I've applied the following parameters to postgres.conf:\n> \n> > max_connections = 500\n> > shared_buffers = 3000\n> > work_mem = 100000\n> > effective_cache_size = 3000000000\n \nYou just told the database that you have 23G of storage.\neffective_cache_size is measured in blocks, which are normally 8K.\n\n> Please see my earlier reply --- you ignored at least\n> checkpoint_segments, which is critical, and perhaps other things.\n> \n> Don't forget also that testing mysql/myisam against fsync = on\n> is inherently unfair.\n\nEven with fsync = off, there's still a non-trivial amount of overhead\nbrought on by MVCC that's missing in myisam. If you don't care about\nconcurrency or ACIDity, but performance is critical (the case that the\nMySQL benchmark favors), then PostgreSQL probably isn't for you.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 21 Sep 2006 16:49:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
{
"msg_contents": "On 21-9-2006 23:49 Jim C. Nasby wrote:\n> Even with fsync = off, there's still a non-trivial amount of overhead\n> brought on by MVCC that's missing in myisam. If you don't care about\n> concurrency or ACIDity, but performance is critical (the case that the\n> MySQL benchmark favors), then PostgreSQL probably isn't for you.\n\nThat depends on the required scalability (both in number of cpu's and in \nnumber of concurrent clients). In our benchmarks MySQL is beaten by \nPostgreSQL in a read-mostly environment with queries that are designed \nfor MySQL, but slightly adjusted to work on PostgreSQL (for MySQL 5.0 \nand 5.1, about the same adjustments where needed).\nBut for very low amounts of concurrent users, MySQL outperforms PostgreSQL.\n\nHave a look here:\nhttp://tweakers.net/reviews/646/10\nand here:\nhttp://tweakers.net/reviews/638/4\n\nAs you can see both MySQL 5.0 and 4.1 start much higher for a few \nclients, but when you add more clients or more cpu's, MySQL scales less \ngood and even starts dropping performance and soon is far behind \ncompared to PostgreSQL.\n\nSo for a web-application, PostgreSQL may be much better, since generally \nthe only situation where you need maximum performance, is when you have \nto service a lot of concurrent visitors.\nBut if you benchmark only with a single thread or do benchmarks that are \nno where near a real-life environment, it may show very different \nresults of course.\n\nBest regards,\n\nArjen van der Meijden\n",
"msg_date": "Fri, 22 Sep 2006 00:26:15 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench"
},
{
"msg_contents": "Hi\n\nI am not comparing Postgres to MyISAM (obviously it is not a very fair comparison) and we do need\nACID, so all comparison are made against InnoDB (which now supports MVCC as well). I will try\nagain with the suggestions posted here.\n\nThanks.\n\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> yoav x <[email protected]> writes:\n> > I've applied the following parameters to postgres.conf:\n> \n> > max_connections = 500\n> > shared_buffers = 3000\n> > work_mem = 100000\n> > effective_cache_size = 3000000000\n> \n> Please see my earlier reply --- you ignored at least\n> checkpoint_segments, which is critical, and perhaps other things.\n> \n> Don't forget also that testing mysql/myisam against fsync = on\n> is inherently unfair.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Mon, 25 Sep 2006 07:58:17 -0700 (PDT)",
"msg_from": "yoav x <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and sql-bench "
},
{
"msg_contents": "On Sep 25, 2006, at 10:58 AM, yoav x wrote:\n> I am not comparing Postgres to MyISAM (obviously it is not a very \n> fair comparison) and we do need\n> ACID, so all comparison are made against InnoDB (which now supports \n> MVCC as well). I will try\n> again with the suggestions posted here.\n\nMake sure that you're not inadvertently disabling ACIDity in MySQL/ \nInnoDB; some options/performance tweaks will do that last I looked.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 26 Sep 2006 23:28:41 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and sql-bench "
}
] |
[
{
"msg_contents": "A colleague pointed me to this site tomorrow:\n\nhttp://tweakers.net/reviews/642/13\n\nI can't read the language, so can't get a grip on what exactly the \n\"benchmark\" was about.\n\nTheir diagrams show `Request per seconds'. What should that mean? How \nmany connections PG accepted per second? So they measured the OS fork \nperformance? Should that value be of any interrest? Anyone with heavy \nOLTP workload will use persistent connections or a connection pool in front.\n\nDo they mean TPS? That woulnd't make much sense in a CPU benchmark, as \nOLTP workload is typically limited by the disc subsystem.\n\nCan someone enlighten me what this site is about?\n\n\n-- \nRegards,\nHannes Dorbath\n",
"msg_date": "Fri, 22 Sep 2006 10:32:02 +0200",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "Hello Hannes,\n\nThe text above the pictures on page 13. Translated in my crappy english.\n\nThe confrontation between the Opteron and Woodcrest was inevitable in \nthis article, but who can add 1 and 1 should have known from the \nprevious two pages that it doesn't look that good for AMD . Under loads \nof 25 till 100 simultaneous visitors, the Xeon performs 24% better with \nMSQL 4.1.20, 30% better in MySQL 5.0.20a and 37% better in PostgreSQL \n8.2-dev. In short, the Socket F Opteron doesn't stand a chance, although \nthe Woodcrest scales better and has such a high startpoint with one \ncore, there is no chance of beating it. We can imagine that the Opteron \nwith more memory and production hardware, would be a few % faster, but \nthe difference with the Woodcrest is that high that we have a hard time \nbelieving that the complete picture would change that much.\n\n\nRegards,\nNick\n\nHannes Dorbath wrote:\n> A colleague pointed me to this site tomorrow:\n>\n> http://tweakers.net/reviews/642/13\n>\n> I can't read the language, so can't get a grip on what exactly the \n> \"benchmark\" was about.\n>\n> Their diagrams show `Request per seconds'. What should that mean? How \n> many connections PG accepted per second? So they measured the OS fork \n> performance? Should that value be of any interrest? Anyone with heavy \n> OLTP workload will use persistent connections or a connection pool in \n> front.\n>\n> Do they mean TPS? That woulnd't make much sense in a CPU benchmark, as \n> OLTP workload is typically limited by the disc subsystem.\n>\n> Can someone enlighten me what this site is about?\n>\n>\n",
"msg_date": "Fri, 22 Sep 2006 10:58:06 +0200",
"msg_from": "nicky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "Try the translation ;)\n\nhttp://tweakers.net/reviews/646/13\n\nOn 22-9-2006 10:32 Hannes Dorbath wrote:\n> A colleague pointed me to this site tomorrow:\n> \n> http://tweakers.net/reviews/642/13\n> \n> I can't read the language, so can't get a grip on what exactly the \n> \"benchmark\" was about.\n> \n> Their diagrams show `Request per seconds'. What should that mean? How \n> many connections PG accepted per second? So they measured the OS fork \n> performance? Should that value be of any interrest? Anyone with heavy \n> OLTP workload will use persistent connections or a connection pool in \n> front.\n> \n> Do they mean TPS? That woulnd't make much sense in a CPU benchmark, as \n> OLTP workload is typically limited by the disc subsystem.\n> \n> Can someone enlighten me what this site is about?\n> \n> \n",
"msg_date": "Fri, 22 Sep 2006 10:59:27 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "On Sep 22, 2006, at 4:58 AM, nicky wrote:\n\n> till 100 simultaneous visitors, the Xeon performs 24% better with \n> MSQL 4.1.20, 30% better in MySQL 5.0.20a and 37% better in \n> PostgreSQL 8.2-dev. In short, the Socket F Opteron doesn't stand a \n> chance, although the Woodcrest scales better and has such a high \n> startpoint with one core, there is no chance of beating it. We\n\nso you think AMD is just sitting around twiddling their thumbs and \nsaying \"well, time to give up since Intel is faster today\". no. \nthere will be back-and forth between these two vendors to our \nbenefit. I would expect next-gen AMD chips to be faster than the \nintels. If not, then perhaps they *should* give up :-)",
"msg_date": "Fri, 22 Sep 2006 16:34:04 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "On 22-9-2006 22:34 Vivek Khera wrote:\n> so you think AMD is just sitting around twiddling their thumbs and \n> saying \"well, time to give up since Intel is faster today\". no. there \n> will be back-and forth between these two vendors to our benefit. I \n> would expect next-gen AMD chips to be faster than the intels. If not, \n> then perhaps they *should* give up :-)\n\nPlease read the english translation of that article I gave earlier \ntoday. Than you can see the set-up and that its a bit childish to quote \n\"benchmark\" as you did in the title of this thread.\nAll the answers in your initial mail are answered in the article, and as \nsaid, there is an english translation of the dutch article you posted.\n\nWhat you conclude from that translation is not the conclusion of the \narticle, just that AMD has *no* answer at this time and won't have for \nat least somewhere in 2007 when their K8L will hit the market.\nBut the K8L is not likely to be as much faster as the Opteron was to the \nfirst Xeon's, if at all faster...\n\nIf you're an AMD-fan, by all means, buy their products, those processors \nare indeed fast and you can build decent servers with them. But don't \nrule out Intel, just because with previous processors they were the \nslower player ;)\n\nBest regards,\n\nArjen van der Meijden\n",
"msg_date": "Fri, 22 Sep 2006 23:50:47 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "On Fri, Sep 22, 2006 at 11:50:47PM +0200, Arjen van der Meijden wrote:\n> If you're an AMD-fan, by all means, buy their products, those processors \n> are indeed fast and you can build decent servers with them. But don't \n> rule out Intel, just because with previous processors they were the \n> slower player ;)\n\nYep. From what I understand, Intel is 8 to 10 times the size of AMD.\n\nIt's somewhat amazing that AMD even competes, and excellent for us, the\nconsumer, that they compete well, ensuring that we get very fast\ncomputers, for amazingly low prices.\n\nBut Intel isn't crashing down any time soon. Perhaps they became a little\nlazy, and made a few mistakes. AMD is forcing them to clean up.\n\nMay the competition continue... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Fri, 22 Sep 2006 18:36:17 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "I find the benchmark much more interesting in comparing PostgreSQL to\nMySQL than Intel to AMD. It might be as biased as other \"benchmarks\"\nbut it shows clearly something that a lot of PostgreSQL user always\nthought: MySQL gives up on concurrency ... it just doesn't scale well.\n\ncug\n\n\nOn 9/23/06, [email protected] <[email protected]> wrote:\n> Yep. From what I understand, Intel is 8 to 10 times the size of AMD.\n>\n> It's somewhat amazing that AMD even competes, and excellent for us, the\n> consumer, that they compete well, ensuring that we get very fast\n> computers, for amazingly low prices.\n>\n> But Intel isn't crashing down any time soon. Perhaps they became a little\n> lazy, and made a few mistakes. AMD is forcing them to clean up.\n>\n> May the competition continue... :-)\n>\n> Cheers,\n> mark\n\n\n\n-- \nPostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\nhttp://www.bignerdranch.com/news/2006-08-21.shtml\n",
"msg_date": "Sat, 23 Sep 2006 15:00:31 +0200",
"msg_from": "\"Guido Neitzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "\nOn 23-Sep-06, at 9:00 AM, Guido Neitzer wrote:\n\n> I find the benchmark much more interesting in comparing PostgreSQL to\n> MySQL than Intel to AMD. It might be as biased as other \"benchmarks\"\n> but it shows clearly something that a lot of PostgreSQL user always\n> thought: MySQL gives up on concurrency ... it just doesn't scale well.\n>\n> cug\n>\nBefore you get too carried away with this benchmark, you should \nreview the previous comments on this thread.\nNot that I don't agree, but lets put things in perspective.\n\n1) The database fits entirely in memory, so this is really only \ntesting CPU, not I/O which should be taken into account IMO\n2) The machines were not \"equal\" The AMD boxes did not have as much ram.\n\n\nDAVE\n>\n> On 9/23/06, [email protected] <[email protected]> wrote:\n>> Yep. From what I understand, Intel is 8 to 10 times the size of AMD.\n>>\n>> It's somewhat amazing that AMD even competes, and excellent for \n>> us, the\n>> consumer, that they compete well, ensuring that we get very fast\n>> computers, for amazingly low prices.\n>>\n>> But Intel isn't crashing down any time soon. Perhaps they became a \n>> little\n>> lazy, and made a few mistakes. AMD is forcing them to clean up.\n>>\n>> May the competition continue... :-)\n>>\n>> Cheers,\n>> mark\n>\n>\n>\n> -- \n> PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\n> http://www.bignerdranch.com/news/2006-08-21.shtml\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n",
"msg_date": "Sat, 23 Sep 2006 09:16:50 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "On 9/23/06, Dave Cramer <[email protected]> wrote:\n\n> 1) The database fits entirely in memory, so this is really only\n> testing CPU, not I/O which should be taken into account IMO\n\nI don't think this really is a reason that MySQL broke down on ten or\nmore concurrent connections. The RAM might be, but I don't think so\ntoo in this case as it represents exactly what we have seen in similar\ntests. MySQL performs quite well on easy queries and not so much\nconcurrency. We don't have that case very often in my company ... we\nhave at least ten to twenty connections to the db performing\nstatements. And we have some fairly complex statements running very\noften.\n\nNevertheless - a benchmark is a benchmark. Nothing else. We prefer\nPostgreSQL for other reasons then higher performance (which it has for\nlots of situations).\n\ncug\n\n-- \nPostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\nhttp://www.bignerdranch.com/news/2006-08-21.shtml\n",
"msg_date": "Sat, 23 Sep 2006 15:49:53 +0200",
"msg_from": "\"Guido Neitzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
},
{
"msg_contents": "\nOn 23-Sep-06, at 9:49 AM, Guido Neitzer wrote:\n\n> On 9/23/06, Dave Cramer <[email protected]> wrote:\n>\n>> 1) The database fits entirely in memory, so this is really only\n>> testing CPU, not I/O which should be taken into account IMO\n>\n> I don't think this really is a reason that MySQL broke down on ten or\n> more concurrent connections. The RAM might be, but I don't think so\n> too in this case as it represents exactly what we have seen in similar\n> tests. MySQL performs quite well on easy queries and not so much\n> concurrency. We don't have that case very often in my company ... we\n> have at least ten to twenty connections to the db performing\n> statements. And we have some fairly complex statements running very\n> often.\n>\n> Nevertheless - a benchmark is a benchmark. Nothing else. We prefer\n> PostgreSQL for other reasons then higher performance (which it has for\n> lots of situations).\n\nI should make myself clear. I like the results of the benchmark. But \nI wanted to keep things in perspective.\n\nDave\n>\n> cug\n>\n> -- \n> PostgreSQL Bootcamp, Big Nerd Ranch Europe, Nov 2006\n> http://www.bignerdranch.com/news/2006-08-21.shtml\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sat, 23 Sep 2006 10:19:34 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs. Xeon \"benchmark\""
}
] |
[
{
"msg_contents": "The query expain analyze looks like this:\n\nclick-counter=# explain analyze select count(*) as count,\nto_char(date_trunc('day',c.datestamp),'DD-Mon') as day from impression c,\nurl u, handle h where c.url_id=u.url_id and c.handle_id=h.handle_id and\nh.handle like '10000.19%' group by date_trunc('day',c.datestamp) order by\ndate_trunc('day',c.datestamp);\n\nQUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=530282.76..530283.04 rows=113 width=8) (actual time=\n191887.059..191887.131 rows=114 loops=1)\n Sort Key: date_trunc('day'::text, c.datestamp)\n -> HashAggregate (cost=530276.65..530278.91 rows=113 width=8) (actual\ntime=191886.081..191886.509 rows=114 loops=1)\n -> Hash Join (cost=128.41..518482.04 rows=2358921 width=8)\n(actual time=17353.281..190568.890 rows=625212 loops=1)\n Hash Cond: (\"outer\".handle_id = \"inner\".handle_id)\n -> Merge Join (cost=0.00..444641.52 rows=5896746 width=12)\n(actual time=34.582..183154.561 rows=5896746 loops=1)\n Merge Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Index Scan using url_pkey on url u (cost=\n0.00..106821.10 rows=692556 width=8) (actual\ntime=0.078..83432.380rows=692646 loops=1)\n -> Index Scan using impression_url_i on impression c\n(cost=0.00..262546.95 rows=5896746 width=16) (actual\ntime=34.473..86701.410rows=5896746 loops=1)\n -> Hash (cost=123.13..123.13 rows=2115 width=4) (actual\ntime=40.159..40.159 rows=2706 loops=1)\n -> Bitmap Heap Scan on handle h\n(cost=24.69..123.13rows=2115 width=4) (actual time=\n20.362..36.819 rows=2706 loops=1)\n Filter: (handle ~~ '10000.19%'::text)\n -> Bitmap Index Scan on handles_i (cost=\n0.00..24.69 rows=2115 width=0) (actual time=20.264..20.264 rows=2706\nloops=1)\n Index Cond: ((handle >= '10000.19'::text)\nAND (handle < '10000.1:'::text))\n Total runtime: 191901.868 ms\n\n(looks like it sped up a bit the second time I did it)\n\nWhen I query relpages for the tables involved:\n\nclick-counter=# select relpages from pg_class where relname='impression';\n relpages\n----------\n 56869\n(1 row)\n\nclick-counter=# select relpages from pg_class where relname='url';\n relpages\n----------\n 66027\n(1 row)\n\nclick-counter=# select relpages from pg_class where relname='handle';\n relpages\n----------\n 72\n(1 row)\n\nclick-counter=#\n\nthey only total 122968.\n\nHome come the query statistics showed that 229066 blocks where read given\nthat all the blocks in all the tables put together only total 122968?\n\nLOG: QUERY STATISTICS\nDETAIL: ! system usage stats:\n ! 218.630786 elapsed 24.160000 user 13.930000 system sec\n ! [261.000000 user 85.610000 sys total]\n ! 0/0 [0/0] filesystem blocks in/out\n ! 65/47 [20176/99752] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 0/0 [0/0] voluntary/involuntary context switches\n ! buffer usage stats:\n ! Shared blocks: 229066 read, 2 written, buffer\nhit rate = 55.61%\n ! Local blocks: 0 read, 0 written, buffer\nhit rate = 0.00%\n ! 
Direct blocks: 0 read, 0 written\n\n\nAlex.\n",
"msg_date": "Fri, 22 Sep 2006 11:34:58 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Confusion and Questions about blocks read"
},
{
"msg_contents": "\"Alex Turner\" <[email protected]> writes:\n> Home come the query statistics showed that 229066 blocks where read given\n> that all the blocks in all the tables put together only total 122968?\n\nYou forgot to count the indexes. Also, the use of indexscans in the\nmergejoins probably causes multiple re-reads of some table blocks,\ndepending on just what the physical ordering of the rows is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Sep 2006 12:24:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Confusion and Questions about blocks read "
},
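A sketch of how the indexes can be included when adding up relpages (the table name follows the thread; the query is just one way to write it):

-- Pages used by a table together with all of its indexes
SELECT c.relname, c.relkind, c.relpages
FROM pg_class c
WHERE c.relname = 'impression'
   OR c.oid IN (SELECT indexrelid FROM pg_index
                WHERE indrelid = 'impression'::regclass);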
{
"msg_contents": "ahh.... good point\n\nThanks\n\nOn 9/22/06, Tom Lane <[email protected]> wrote:\n>\n> \"Alex Turner\" <[email protected]> writes:\n> > Home come the query statistics showed that 229066 blocks where read\n> given\n> > that all the blocks in all the tables put together only total 122968?\n>\n> You forgot to count the indexes. Also, the use of indexscans in the\n> mergejoins probably causes multiple re-reads of some table blocks,\n> depending on just what the physical ordering of the rows is.\n>\n> regards, tom lane\n>\n\nahh.... good pointThanksOn 9/22/06, Tom Lane <[email protected]> wrote:\n\"Alex Turner\" <[email protected]> writes:> Home come the query statistics showed that 229066 blocks where read given> that all the blocks in all the tables put together only total 122968?\nYou forgot to count the indexes. Also, the use of indexscans in themergejoins probably causes multiple re-reads of some table blocks,depending on just what the physical ordering of the rows is. regards, tom lane",
"msg_date": "Fri, 22 Sep 2006 13:13:41 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Confusion and Questions about blocks read"
},
{
"msg_contents": "Ok - so I have another mystery:\n\nI insert virtually the same rows into two different tables:\n\ntrend=# insert into fish select 2, nextval('result_entry_order_seq'),\nproperty_id from property;\nINSERT 0 59913\ntrend=# insert into result_entry select 0,\nnextval('result_entry_order_seq'), property_id from property;\nINSERT 0 59913\ntrend=#\n\nbut the stats show one as having written 20x as many blocks:\n\nLOG: statement: insert into fish select 2,\nnextval('result_entry_order_seq'), property_id from property;\nLOG: QUERY STATISTICS\nDETAIL: ! system usage stats:\n ! 2.098067 elapsed 0.807877 user 1.098833 system sec\n ! [23.875370 user 27.789775 sys total]\n ! 0/0 [0/0] filesystem blocks in/out\n ! 0/1 [5/62269] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 72/6 [18464/1126] voluntary/involuntary context switches\n ! buffer usage stats:\n ! Shared blocks: 79106 read, 420 written, buffer\nhit rate = 79.39%\n ! Local blocks: 0 read, 0 written, buffer\nhit rate = 0.00%\n ! Direct blocks: 0 read, 0 written\n\nLOG: statement: insert into result_entry select 0,\nnextval('result_entry_order_seq'), property_id from property;\nLOG: QUERY STATISTICS\nDETAIL: ! system usage stats:\n! 16.963729 elapsed 3.533463 user 1.706740 system sec\n! [27.408833 user 29.497515 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/1186 [5/63455] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 59/139 [18525/1265] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 100744 read, 7352 written, buffer hit rate\n= 89.71%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\n\nI understand the read blocks difference, the second had to check indexes\nmatching the foreign keys.\n\n\nThe table definitions are given below:\n\ntrend=# \\d fish\n Table \"public.fish\"\n Column | Type | Modifiers\n--------------------+---------+-----------\n result_id | bigint |\n result_entry_order | bigint |\n property_id | integer |\nIndexes:\n \"fish_pkey\" UNIQUE, btree (result_id, result_entry_order)\n\ntrend=# \\d result_Entry\n Table \"public.result_entry\"\n Column | Type | Modifiers\n--------------------+---------+-----------\n result_id | bigint |\n result_entry_order | bigint |\n property_id | integer |\nIndexes:\n \"fish_pkey\" UNIQUE, btree (result_id, result_entry_order)\n\nThe explain analyzes are kind of interesting:\n\ntrend=# explain analyze insert into fish select 2,\nnextval('result_entry_order_seq'), property_id from property;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on property (cost=0.00..79295.70 rows=59913 width=8) (actual\ntime=0.275..1478.681 rows=59913 loops=1)\n Total runtime: 2178.600 ms\n(2 rows)\n\ntrend=# explain analyze insert into result_entry select 0,\nnextval('result_entry_order_seq'), property_id from property;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on property (cost=0.00..79295.70 rows=59913 width=8) (actual\ntime=0.118..1473.352 rows=59913 loops=1)\n Trigger for constraint result_entry_result_fk: time=2037.351 calls=59913\n Trigger for constraint result_entry_property_fk: time=8622.260 calls=59913\n Total runtime: 12959.716 ms\n(4 rows)\n\n\nI don't understand the time for the FK check given the size of the tables\nthey are checking 
against (and I understand it's the indexes, not the tables\nthat the actualy check is made):\n\ntrend=# select count(*) from result_cache;\n count\n-------\n 8\n(1 row)\n\ntrend=#\n\n\ntrend=# select count(*) from property;\n count\n-------\n 59913\n(1 row)\n\ntrend=#\n\nThe database was just re-indexed, and no changes beyond this insert were\nmade in that time and result_entry has recently been vacuumed.\n\nAny insight would be greatly appreciated\n\nAlex\n\n\n\nOn 9/22/06, Alex Turner <[email protected]> wrote:\n>\n> ahh.... good point\n>\n> Thanks\n>\n> On 9/22/06, Tom Lane <[email protected]> wrote:\n> >\n> > \"Alex Turner\" <[email protected]> writes:\n> > > Home come the query statistics showed that 229066 blocks where read\n> > given\n> > > that all the blocks in all the tables put together only total 122968?\n> >\n> > You forgot to count the indexes. Also, the use of indexscans in the\n> > mergejoins probably causes multiple re-reads of some table blocks,\n> > depending on just what the physical ordering of the rows is.\n> >\n> > regards, tom lane\n> >\n>\n>\n",
"msg_date": "Fri, 22 Sep 2006 17:38:23 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Confusion and Questions about blocks read"
},
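One quick check when foreign-key trigger time looks out of proportion to the size of the referenced tables is whether the referencing and referenced columns are covered by indexes at all; a minimal way to list what exists (the table names follow the thread, and the query itself is only a sketch, not a diagnosis):

-- Indexes currently defined on the tables involved in the FK checks
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('result_entry', 'result_cache', 'property');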
{
"msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n> \"Alex Turner\" <[email protected]> writes:\n>> Home come the query statistics showed that 229066 blocks where read given\n>> that all the blocks in all the tables put together only total 122968?\n> \n> You forgot to count the indexes. Also, the use of indexscans in the\n> mergejoins probably causes multiple re-reads of some table blocks,\n> depending on just what the physical ordering of the rows is.\n\nAs far as I understand, Index Bitmap Scans improve this behaviour, by\nensuring that every table block is read only once.\n\nBtw, would it be feasible to enhance normal index scans by looking at\nall rows in the current table block whether they meet the query\ncriteria, fetch them all, and blacklist the block for further revisiting\nduring the same index scan?\n\nI think that, for non-sorted cases, this could improve index scans a\nlittle, but I don't know whether it's worth the effort, given that\nbitmap indidex scans exist.\n\nThanks,\nMarkus\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Sat, 23 Sep 2006 14:19:42 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Confusion and Questions about blocks read"
},
{
"msg_contents": "On Sep 23, 2006, at 8:19 AM, Markus Schaber wrote:\n> Btw, would it be feasible to enhance normal index scans by looking at\n> all rows in the current table block whether they meet the query\n> criteria, fetch them all, and blacklist the block for further \n> revisiting\n> during the same index scan?\n>\n> I think that, for non-sorted cases, this could improve index scans a\n> little, but I don't know whether it's worth the effort, given that\n> bitmap indidex scans exist.\n\nThe trade-off is you'd burn a lot more CPU on those pages. What might \nbe interesting would be collapsing bitmap scan data down to a page \nlevel when certain conditions were met, such as if you're getting a \nsignificant number of hits for a given page. There's probably other \ncriteria that could be used as well. One issue would be considering \nthe effects of other bitmap index operations; if you're ANDing a \nbunch of scans together, you're likely to have far fewer tuples per \npage coming out the backside, which means you probably wouldn't want \nto burn the extra CPU to do full page scans.\n\nBTW, I remember discussion at some point about ordering the results \nof a bitmap scan by page/tuple ID, which would essentially do what \nyou're talking about. I don't know if it actually happened or not, \nthough.\n\nIf this is something that interests you, I recommend taking a look at \nthe code; it's generally not too hard to read through thanks to all \nthe comments.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n\n\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 26 Sep 2006 23:08:35 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Confusion and Questions about blocks read"
}
] |
[
{
"msg_contents": "Hi all,\n\nI still have an dual dual-core opteron box with a 3Ware 9550SX-12 sitting \nhere and I need to start getting it ready for production. I also have to \nsend back one processor since we were mistakenly sent two. Before I do \nthat, I would like to record some stats for posterity and post to the list \nso that others can see how this particular hardware performs.\n\nIt looks to be more than adequate for our needs...\n\nWhat are the standard benchmarks that people here use for comparison \npurposes? I know all benchmarks are flawed in some way, but I'd at least \nlike to measure with the same tools that folks here generally use to get a \nballpark figure.\n\nThanks,\n\nCharles\n",
"msg_date": "Fri, 22 Sep 2006 13:14:16 -0400 (EDT)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": true,
"msg_subject": "recommended benchmarks"
},
{
"msg_contents": "On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:\n> Hi all,\n> \n> I still have an dual dual-core opteron box with a 3Ware 9550SX-12 sitting \n> here and I need to start getting it ready for production. I also have to \n> send back one processor since we were mistakenly sent two. Before I do \n> that, I would like to record some stats for posterity and post to the list \n> so that others can see how this particular hardware performs.\n> \n> It looks to be more than adequate for our needs...\n> \n> What are the standard benchmarks that people here use for comparison \n> purposes? I know all benchmarks are flawed in some way, but I'd at least \n> like to measure with the same tools that folks here generally use to get a \n> ballpark figure.\n\nCheck out the OSDL stuff.\n\nhttp://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/\n\nBrad.\n\n",
"msg_date": "Fri, 22 Sep 2006 15:08:05 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recommended benchmarks"
},
{
"msg_contents": "> On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:\n> > Hi all,\n> >\n> > I still have an dual dual-core opteron box with a 3Ware 9550SX-12\n> sitting\n> > here and I need to start getting it ready for production. I also\nhave\n> to\n> > send back one processor since we were mistakenly sent two. Before I\ndo\n> > that, I would like to record some stats for posterity and post to\nthe\n> list\n> > so that others can see how this particular hardware performs.\n> >\n> > It looks to be more than adequate for our needs...\n> >\n> > What are the standard benchmarks that people here use for comparison\n> > purposes? I know all benchmarks are flawed in some way, but I'd at\n> least\n> > like to measure with the same tools that folks here generally use to\nget\n> a\n> > ballpark figure.\n> \n> Check out the OSDL stuff.\n> \n>\nhttp://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_sui\nte\n> /\n> \n> Brad.\n> \n\nLet me know what tests you end up using and how difficult they are to\nsetup/run- I have a dell 2950 (2 dual core woodcrest) that I could\nprobably run the same tests on. I'm looking into DBT2 (OLTP, similar to\nTPC-C) to start with, then probably DBT-3 since it's more OLAP style\n(and more like the application I'll be dealing with). \n\nWhat specific hardware are you testing? (CPU, RAM, raid setup, etc?)\n\n- Bucky\n",
"msg_date": "Fri, 22 Sep 2006 17:18:24 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recommended benchmarks"
},
{
"msg_contents": "If the real world applications you'll be running on the box are Java\n(or use lots of prepared statements and no stored procedures)... try\nBenchmarkSQL from pgFoundry. Its extremely easy to setup and use.\nLike the DBT2, it's an oltp benchmark that is similar to the tpc-c.\n\n--Denis Lussier\n http://www.enterprisedb.com\n\nOn 9/22/06, Bucky Jordan <[email protected]> wrote:\n> > On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:\n> > > Hi all,\n> > >\n> > > I still have an dual dual-core opteron box with a 3Ware 9550SX-12\n> > sitting\n> > > here and I need to start getting it ready for production. I also\n> have\n> > to\n> > > send back one processor since we were mistakenly sent two. Before I\n> do\n> > > that, I would like to record some stats for posterity and post to\n> the\n> > list\n> > > so that others can see how this particular hardware performs.\n> > >\n> > > It looks to be more than adequate for our needs...\n> > >\n> > > What are the standard benchmarks that people here use for comparison\n> > > purposes? I know all benchmarks are flawed in some way, but I'd at\n> > least\n> > > like to measure with the same tools that folks here generally use to\n> get\n> > a\n> > > ballpark figure.\n> >\n> > Check out the OSDL stuff.\n> >\n> >\n> http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_sui\n> te\n> > /\n> >\n> > Brad.\n> >\n>\n> Let me know what tests you end up using and how difficult they are to\n> setup/run- I have a dell 2950 (2 dual core woodcrest) that I could\n> probably run the same tests on. I'm looking into DBT2 (OLTP, similar to\n> TPC-C) to start with, then probably DBT-3 since it's more OLAP style\n> (and more like the application I'll be dealing with).\n>\n> What specific hardware are you testing? (CPU, RAM, raid setup, etc?)\n>\n> - Bucky\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Sat, 23 Sep 2006 07:52:48 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recommended benchmarks"
}
] |
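Alongside BenchmarkSQL and the OSDL DBT suites mentioned above, the pgbench tool shipped in contrib is probably the quickest way to get a ballpark, repeatable number; a minimal example run (the scale factor and client counts are arbitrary choices):

pgbench -i -s 100 benchdb      # initialize a TPC-B-like dataset, roughly 1.5GB at scale 100
pgbench -c 10 -t 1000 benchdb  # 10 concurrent clients, 1000 transactions each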
[
{
"msg_contents": "Hi all, I'm having some confusion with the 7.4 query planner.\n\nI have two identical queries, whereby the passed (varchar) parameter\nappears to be the deciding factor between a sequential or an index scan.\n\n\nIE, This query:\n\nexplain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\nAS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\na1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'p1' AND a2.STEP_ID =\n1 );\nNOTICE: QUERY PLAN:\n\nUnique (cost=1175.88..1175.88 rows=1 width=16)\n -> Sort (cost=1175.88..1175.88 rows=1 width=16)\n -> Nested Loop (cost=0.00..1175.87 rows=1 width=16)\n -> Index Scan using idx_9 on os_currentstep a1\n(cost=0.00..1172.45 rows=1 width=8)\n -> Index Scan using idx_8 on os_currentstep a2\n(cost=0.00..3.41 rows=1 width=8)\n\nHowever, this query:\n\nexplain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\nAS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\na1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'GIL' AND a2.STEP_ID =\n1 );\nNOTICE: QUERY PLAN:\n\nUnique (cost=3110.22..3110.22 rows=1 width=16)\n -> Sort (cost=3110.22..3110.22 rows=1 width=16)\n -> Nested Loop (cost=0.00..3110.21 rows=1 width=16)\n -> Seq Scan on os_currentstep a1 (cost=0.00..3106.78\nrows=1 width=8)\n -> Index Scan using idx_8 on os_currentstep a2\n(cost=0.00..3.41 rows=1 width=8)\n\n\nThoughts about why changing OWNER from 'p1' to 'GIL' would go from an\nIndex Scan to a Sequential?\n\n[There is an index on os_currentstep, and it was vacuum analyze'd\nrecently.]\n\nRunning version 7.4 (working on upgrading to 8.0 soon). Thanks!\n\n--\nAnthony\n\n",
"msg_date": "Fri, 22 Sep 2006 17:59:16 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is it choosing a different plan?"
},
{
"msg_contents": "I thought this was related to the TYPE (ie, I could cast it using\nsomething like: attr1=1::int8). However, I tried a few more values, and\nthe query planner is confusing me.\n\nWith these values, in the owner, I get a Seq Scan:\n\t'GIL', '1122', '2305'\n\nWith these values, in the owner, I get an Index Scan:\n\t'p1', 'p2', '2300', '8088', 'CHANGEINVENTION'\n\nThe os_currentstep table has about 119,700 rows in it -- and I can't do\ntoo much to actually change the query, since it's coming from something\nof a 'black box' application.\n\nThoughts?\n\n--\nAnthony\n\nOn Fri, 2006-09-22 at 17:59 -0500, Anthony Presley wrote:\n> Hi all, I'm having some confusion with the 7.4 query planner.\n> \n> I have two identical queries, whereby the passed (varchar) parameter\n> appears to be the deciding factor between a sequential or an index scan.\n> \n> \n> IE, This query:\n> \n> explain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\n> AS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\n> a1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'p1' AND a2.STEP_ID =\n> 1 );\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=1175.88..1175.88 rows=1 width=16)\n> -> Sort (cost=1175.88..1175.88 rows=1 width=16)\n> -> Nested Loop (cost=0.00..1175.87 rows=1 width=16)\n> -> Index Scan using idx_9 on os_currentstep a1\n> (cost=0.00..1172.45 rows=1 width=8)\n> -> Index Scan using idx_8 on os_currentstep a2\n> (cost=0.00..3.41 rows=1 width=8)\n> \n> However, this query:\n> \n> explain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\n> AS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\n> a1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'GIL' AND a2.STEP_ID =\n> 1 );\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=3110.22..3110.22 rows=1 width=16)\n> -> Sort (cost=3110.22..3110.22 rows=1 width=16)\n> -> Nested Loop (cost=0.00..3110.21 rows=1 width=16)\n> -> Seq Scan on os_currentstep a1 (cost=0.00..3106.78\n> rows=1 width=8)\n> -> Index Scan using idx_8 on os_currentstep a2\n> (cost=0.00..3.41 rows=1 width=8)\n> \n> \n> Thoughts about why changing OWNER from 'p1' to 'GIL' would go from an\n> Index Scan to a Sequential?\n> \n> [There is an index on os_currentstep, and it was vacuum analyze'd\n> recently.]\n> \n> Running version 7.4 (working on upgrading to 8.0 soon). Thanks!\n> \n> --\n> Anthony\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Fri, 22 Sep 2006 18:58:53 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is it choosing a different plan?"
},
{
"msg_contents": "Doh!\n\nBad kharma. I apologize. Too late, and not enuf caffeine. I posted\nhere because this query is taking 2+ minutes on a production machine,\nand under 4 seconds on a development machine.\n\nFor posterity sakes .... the seq scan is because of the distribution of\nthose values. GIL is in about 1/2 of the records. The others are very\ncommon. Cheaper to do a Sequential than to do an index. The other\nvalues are present in only a few spotted cases (1 to 3000), and the\nindex is better.\n\nAlso helps when the production machine has all of its indexes in place\nto actually do the reading.\n\nSorry to be a bother!\n\n--\nAnthony\n\nOn Fri, 2006-09-22 at 18:58 -0500, Anthony Presley wrote:\n> I thought this was related to the TYPE (ie, I could cast it using\n> something like: attr1=1::int8). However, I tried a few more values, and\n> the query planner is confusing me.\n> \n> With these values, in the owner, I get a Seq Scan:\n> 'GIL', '1122', '2305'\n> \n> With these values, in the owner, I get an Index Scan:\n> 'p1', 'p2', '2300', '8088', 'CHANGEINVENTION'\n> \n> The os_currentstep table has about 119,700 rows in it -- and I can't do\n> too much to actually change the query, since it's coming from something\n> of a 'black box' application.\n> \n> Thoughts?\n> \n> --\n> Anthony\n> \n> On Fri, 2006-09-22 at 17:59 -0500, Anthony Presley wrote:\n> > Hi all, I'm having some confusion with the 7.4 query planner.\n> >\n> > I have two identical queries, whereby the passed (varchar) parameter\n> > appears to be the deciding factor between a sequential or an index scan.\n> >\n> >\n> > IE, This query:\n> >\n> > explain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\n> > AS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\n> > a1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'p1' AND a2.STEP_ID =\n> > 1 );\n> > NOTICE: QUERY PLAN:\n> >\n> > Unique (cost=1175.88..1175.88 rows=1 width=16)\n> > -> Sort (cost=1175.88..1175.88 rows=1 width=16)\n> > -> Nested Loop (cost=0.00..1175.87 rows=1 width=16)\n> > -> Index Scan using idx_9 on os_currentstep a1\n> > (cost=0.00..1172.45 rows=1 width=8)\n> > -> Index Scan using idx_8 on os_currentstep a2\n> > (cost=0.00..3.41 rows=1 width=8)\n> >\n> > However, this query:\n> >\n> > explain SELECT DISTINCT (a1.ENTRY_ID) AS retrieved FROM OS_CURRENTSTEP\n> > AS a1 , OS_CURRENTSTEP AS a2 WHERE a1.ENTRY_ID = a1.ENTRY_ID AND\n> > a1.ENTRY_ID = a2.ENTRY_ID AND ( a1.OWNER = 'GIL' AND a2.STEP_ID =\n> > 1 );\n> > NOTICE: QUERY PLAN:\n> >\n> > Unique (cost=3110.22..3110.22 rows=1 width=16)\n> > -> Sort (cost=3110.22..3110.22 rows=1 width=16)\n> > -> Nested Loop (cost=0.00..3110.21 rows=1 width=16)\n> > -> Seq Scan on os_currentstep a1 (cost=0.00..3106.78\n> > rows=1 width=8)\n> > -> Index Scan using idx_8 on os_currentstep a2\n> > (cost=0.00..3.41 rows=1 width=8)\n> >\n> >\n> > Thoughts about why changing OWNER from 'p1' to 'GIL' would go from an\n> > Index Scan to a Sequential?\n> >\n> > [There is an index on os_currentstep, and it was vacuum analyze'd\n> > recently.]\n> >\n> > Running version 7.4 (working on upgrading to 8.0 soon). Thanks!\n> >\n> > --\n> > Anthony\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 22 Sep 2006 19:25:24 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is it choosing a different plan?"
}
] |
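[Editor's note, not part of the archived thread above: the index-vs-seqscan flip discussed in this thread comes down to how common each OWNER value is. A minimal, illustrative way to see the distribution the planner is working from is to query pg_stats; the table and column names are taken from the thread, the rest is generic and works on 7.4 and later.]

    -- what the planner believes about the OWNER column
    SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'os_currentstep'
      AND attname = 'owner';

    -- refresh the estimates if they look stale
    ANALYZE os_currentstep;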
[
{
"msg_contents": "I've got this query with an IN clause:\n\nselect count(*),public.album.gid,public.album.name,public.album.id \nfrom public.album,public.albumjoin,public.puid,public.puidjoin where \nalbumjoin.album = public.album.id and public.puidjoin.track = \npublic.albumjoin.track and public.puid.id = public.puidjoin.puid and \npublic.puid.puid\nIN (select umdb.puid.name from umdb.puid,umdb.node where umdb.puid.id \n= umdb.node.puid and umdb.node.dir=5886)\n group by gid,name,public.album.id having count(*) >= 6 order by \ncount(*) desc;\n\nIt gives me a rather expensive plan:\n\n Sort (cost=35729.07..35729.75 rows=272 width=69)\n Sort Key: count(*)\n -> HashAggregate (cost=35713.31..35718.07 rows=272 width=69)\n Filter: (count(*) >= 6)\n -> Nested Loop (cost=51.67..35709.91 rows=272 width=69)\n -> Nested Loop (cost=51.67..34216.30 rows=272 width=4)\n -> Nested Loop (cost=51.67..33338.04 rows=272 \nwidth=4)\n -> Hash IN Join (cost=51.67..31794.72 \nrows=218 width=4)\n Hash Cond: ((\"outer\".puid)::text = \n\"inner\".name)\n -> Seq Scan on puid \n(cost=0.00..23495.21 rows=1099421 width=44)\n -> Hash (cost=51.63..51.63 \nrows=15 width=40)\n -> Nested Loop \n(cost=0.00..51.63 rows=15 width=40)\n -> Index Scan using \nnode_dir on node (cost=0.00..3.22 rows=16 width=4)\n Index Cond: (dir \n= 5886)\n -> Index Scan using \npuid_pkey on puid (cost=0.00..3.01 rows=1 width=44)\n Index Cond: \n(puid.id = \"outer\".puid)\n -> Index Scan using puidjoin_puidtrack \non puidjoin (cost=0.00..7.05 rows=2 width=8)\n Index Cond: (\"outer\".id = \npuidjoin.puid)\n -> Index Scan using albumjoin_trackindex on \nalbumjoin (cost=0.00..3.22 rows=1 width=8)\n Index Cond: (\"outer\".track = \nalbumjoin.track)\n -> Index Scan using album_pkey on album \n(cost=0.00..5.48 rows=1 width=69)\n Index Cond: (\"outer\".album = album.id)\n\nIf I'm reading this right, it looks like it's expensive because it's \ndoing a sequential scan on public.puid.puid to satisfy the IN clause. \n(Although why it's doing that I'm not sure, given that there's a \nrecently analyzed index on public.puid.puid.) 
Interestingly, if I \nreplace that IN subselect with the 15 values it will return, my plan \nimproves by two orders of magnitude:\n\n Sort (cost=235.53..235.56 rows=12 width=69)\n Sort Key: count(*)\n -> HashAggregate (cost=235.11..235.32 rows=12 width=69)\n Filter: (count(*) >= 6)\n -> Nested Loop (cost=20.03..234.96 rows=12 width=69)\n -> Nested Loop (cost=20.03..169.06 rows=12 width=4)\n -> Nested Loop (cost=20.03..130.32 rows=12 \nwidth=4)\n -> Bitmap Heap Scan on puid \n(cost=20.03..59.52 rows=10 width=4)\n Recheck Cond: ((puid = \n'f68dcf86-992c-2e4a-21fb-2fc8c56edfeb'::bpchar) OR (puid = \n'7716dbcf-56ab-623b-ab33-3b2e67a0727c'::bpchar) OR (puid = \n'724d6a39-0d15-a296-2dd2-127c34f13809'::bpchar) OR (puid = \n'02f0cd9f-9fa5-abda-06cd-5dbb13826243'::bpchar) OR (puid = '165d5bea- \nb21f-9302-b991-0927f491787b'::bpchar) OR (puid = '4223dbc8-85af-a92e- \nb63d-72a726475e2c'::bpchar) OR (puid = '2d43ef9a- \nc7ee-2425-7fac-8b937cbed178'::bpchar) OR (puid = '9ff81c2f-04b7- \ncf5d-705f-7b944a5ae093'::bpchar) OR (puid = 'deaddddd-dfaf-18dd-6d4d- \nc483e8ba60f7'::bpchar) OR (puid = '20939b69- \nff98-770a-1444-3b0e9892712f'::bpchar))\n -> BitmapOr (cost=20.03..20.03 \nrows=10 width=0)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'f68dcf86-992c-2e4a-21fb-2fc8c56edfeb'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'7716dbcf-56ab-623b-ab33-3b2e67a0727c'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'724d6a39-0d15-a296-2dd2-127c34f13809'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'02f0cd9f-9fa5-abda-06cd-5dbb13826243'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'165d5bea-b21f-9302-b991-0927f491787b'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'4223dbc8-85af-a92e-b63d-72a726475e2c'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'2d43ef9a-c7ee-2425-7fac-8b937cbed178'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'9ff81c2f-04b7-cf5d-705f-7b944a5ae093'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'deaddddd-dfaf-18dd-6d4d-c483e8ba60f7'::bpchar)\n -> Bitmap Index Scan on \npuid_puidindex (cost=0.00..2.00 rows=1 width=0)\n Index Cond: (puid = \n'20939b69-ff98-770a-1444-3b0e9892712f'::bpchar)\n -> Index Scan using puidjoin_puidtrack \non puidjoin (cost=0.00..7.05 rows=2 width=8)\n Index Cond: (\"outer\".id = \npuidjoin.puid)\n -> Index Scan using albumjoin_trackindex on \nalbumjoin (cost=0.00..3.22 rows=1 width=8)\n Index Cond: (\"outer\".track = \nalbumjoin.track)\n -> Index Scan using album_pkey on album \n(cost=0.00..5.48 rows=1 width=69)\n Index Cond: (\"outer\".album = album.id)\n\nI guess my question is: if postgres is (correctly) estimating that \nonly 15 rows will come out of the subselect, and it knows it can \nchoose a much better plan with bitmap index scans, should it be able \nto choose the bitmap plan over the sequential scan? Or should I run \nthe subselect myself and then rewrite my query to push in the \nconstant values?\n",
"msg_date": "Sun, 24 Sep 2006 10:12:23 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "IN not handled very well?"
},
{
"msg_contents": "Ben <[email protected]> writes:\n> -> Hash IN Join (cost=51.67..31794.72 \n> rows=218 width=4)\n> Hash Cond: ((\"outer\".puid)::text = \n> \"inner\".name)\n> -> Seq Scan on puid \n> (cost=0.00..23495.21 rows=1099421 width=44)\n\n> -> Bitmap Heap Scan on puid \n> (cost=20.03..59.52 rows=10 width=4)\n> Recheck Cond: ((puid = \n> 'f68dcf86-992c-2e4a-21fb-2fc8c56edfeb'::bpchar) OR (puid = \n> '7716dbcf-56ab-623b-ab33-3b2e67a0727c'::bpchar) OR (puid = \n\n\nApparently you've got a datatype mismatch: name is text while puid is\nchar(N). The comparisons to name can't be converted into indexscans\non puid because the semantics aren't the same for text and char\ncomparisons.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Sep 2006 13:57:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN not handled very well? "
},
{
"msg_contents": "Ah, so I do. Thanks, that helps an awful lot.\n\nBut the plan is still twice as expensive as when I put in the static \nvalues. Is it just unreasonable to expect the planner to see that \nthere aren't many rows in the subselect, so to use the bitmap scans \nafter all?\n\nOn Sep 24, 2006, at 10:57 AM, Tom Lane wrote:\n\n> Ben <[email protected]> writes:\n>> -> Hash IN Join (cost=51.67..31794.72\n>> rows=218 width=4)\n>> Hash Cond: ((\"outer\".puid)::text =\n>> \"inner\".name)\n>> -> Seq Scan on puid\n>> (cost=0.00..23495.21 rows=1099421 width=44)\n>\n>> -> Bitmap Heap Scan on puid\n>> (cost=20.03..59.52 rows=10 width=4)\n>> Recheck Cond: ((puid =\n>> 'f68dcf86-992c-2e4a-21fb-2fc8c56edfeb'::bpchar) OR (puid =\n>> '7716dbcf-56ab-623b-ab33-3b2e67a0727c'::bpchar) OR (puid =\n>\n>\n> Apparently you've got a datatype mismatch: name is text while puid is\n> char(N). The comparisons to name can't be converted into indexscans\n> on puid because the semantics aren't the same for text and char\n> comparisons.\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 24 Sep 2006 11:12:25 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IN not handled very well? "
},
{
"msg_contents": "On Sep 24, 2006, at 2:12 PM, Ben wrote:\n> Ah, so I do. Thanks, that helps an awful lot.\n>\n> But the plan is still twice as expensive as when I put in the \n> static values. Is it just unreasonable to expect the planner to see \n> that there aren't many rows in the subselect, so to use the bitmap \n> scans after all?\n\nBased on your initial post, it probably should know that it's only \ngetting 15 rows (since it did in your initial plan), so it's unclear \nwhy it's not choosing the bitmap scan.\n\nCan you post the results of EXPLAIN ANALYZE?\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 26 Sep 2006 23:21:11 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN not handled very well? "
}
] |
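[Editor's note, not part of the archived thread above: Tom's diagnosis is a type mismatch, text on one side and char(n) on the other, which keeps the IN subselect from using the index on public.puid.puid. One possible fix, sketched here with the names from the thread, is to cast the subselect's output to the char type so the planner can consider an index or bitmap plan; char(36) is only a guess at the declared length, so adjust it to the real definition.]

    SELECT count(*), a.gid, a.name, a.id
    FROM public.album a
    JOIN public.albumjoin aj ON aj.album = a.id
    JOIN public.puidjoin  pj ON pj.track = aj.track
    JOIN public.puid      p  ON p.id = pj.puid
    WHERE p.puid IN (
            -- cast so the comparison is char = char and can use puid_puidindex
            SELECT up.name::char(36)
            FROM umdb.puid up
            JOIN umdb.node n ON up.id = n.puid
            WHERE n.dir = 5886)
    GROUP BY a.gid, a.name, a.id
    HAVING count(*) >= 6
    ORDER BY count(*) DESC;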
[
{
"msg_contents": "Hello!\n\nI got two AMD Opteron 885 processors (2.6ghz) and 8 gig of memory.\nHarddrives are 4 scsi disks in 10 raid.\n\nI'm running gentoo, and the kernel finds and uses all of my 2 (4) cpu's.\n\nHow can i actually verify that my PostgreSQL (or that my OS) actually gives\neach new query a fresh idle CPU) all of my CPU's?\n\nKjell Tore.\n\n-- \n\nSocial Engineering Specialist\n- Because there's no patch for Human Stupidity\n\nHello!I got two AMD Opteron 885 processors (2.6ghz) and 8 gig of memory.Harddrives are 4 scsi disks in 10 raid.I'm running gentoo, and the kernel finds and uses all of my 2 (4) cpu's.How can i actually verify that my PostgreSQL (or that my OS) actually gives each new query a fresh idle CPU) all of my CPU's?\nKjell Tore.-- Social Engineering Specialist- Because there's no patch for Human Stupidity",
"msg_date": "Mon, 25 Sep 2006 10:30:33 +0200",
"msg_from": "\"Kjell Tore Fossbakk\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multi-processor question"
},
{
"msg_contents": "Hi, Kjell Tore,\n\nKjell Tore Fossbakk wrote:\n\n> I got two AMD Opteron 885 processors (2.6ghz) and 8 gig of memory.\n> Harddrives are 4 scsi disks in 10 raid.\n> \n> I'm running gentoo, and the kernel finds and uses all of my 2 (4) cpu's.\n> \n> How can i actually verify that my PostgreSQL (or that my OS) actually\n> gives each new query a fresh idle CPU) all of my CPU's?\n\nOn unixoid systems, use top to display individual CPU loads, they should\nbe balanced, if you issue the same type of queries in parallel.[1] On\nWindows, the Task Manager should be able to display individual CPU load\ngraphs.\n\nNote, however, that if you issue different kinds of queries in parallel,\nit is well possible that some CPUs have 100% load (on CPU-intensive\nqueries), and the other CPUs have low load (processing the other, I/O\nintensive queries.\n\nBtw, if your queries need a long time, but CPU load is low, than it is\nvery likely that you're I/O bound, either at the disks in the Server, or\nat the network connections to the clients.\n\nHTH,\nMarkus\n[1] You might need some command line options or keys, e. G. on some\ndebian boxes over here, one has to press \"1\" to switch a running top to\nmulti-cpu mode, pressing \"1\" again switches back to accumulation.\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Mon, 25 Sep 2006 10:47:40 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-processor question"
}
] |
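[Editor's note, not part of the archived thread above: top with the "1" key (or Task Manager) is the direct answer given in the thread. As a complement, the sketch below shows a rough way to confirm from inside the database that several backends really are busy at once; it assumes a 2006-era release (8.0/8.1) with stats_command_string enabled, where idle sessions report '<IDLE>'.]

    -- each busy backend is a separate process the OS can put on any free CPU
    SELECT count(*) AS busy_backends
    FROM pg_stat_activity
    WHERE current_query NOT LIKE '<IDLE>%';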
[
{
"msg_contents": "Our application has a number of inserters posting rows of network\nstatistics into a database. This is occuring continously. The\nfollowing is an example of a stats table (simplified but maintains key\nconcepts).\n \n \nCREATE TABLE stats \n(\n logtime timestamptz,\n key int,\n stat1 bigint,\n stat2 bigint,\n stat3 bigint,\n PRIMARY KEY (key,logtime)\n);\nCREATE INDEX x ON stats(logtime);\n \nThere are on the order of 1M unique values for \"key\" and a new row for\neach key value will be inserted say every 15 minutes. These rows are\ndivided up between a number of different inserting elements, but that\nisn't relevant.\n \nThe problem is, the insert pattern has low correlation with the\n(key,logtime) index. In this case, would need >1M blocks in my\nshared_buffer space to prevent a read-modify-write type of pattern\nhappening during the inserts (given a large enough database).\n \nWondering about lowering the BLKSZ value so that the total working set\nof blocks required can be maintained in my shared buffers. Our database\nonly has 8G of memory and likely need to reduce BLKSZ to 512....\n \nAny comment on other affects or gotchas with lowering the size of BLKSZ?\nCurrently, our database is thrashing its cache of blocks we we're\ngetting only ~100 inserts/second, every insert results in a\nevict-read-modify operation.\n \n \nIdeally, like to keep the entire working set of blocks in memory across\ninsert periods so that the i/o looks more like write full blocks....\n \nThanks\nMarc\n \n \n\n\n\n\n\nOur application has \na number of inserters posting rows of network statistics into a database. \nThis is occuring continously. The following is an example of a stats table \n(simplified but maintains key concepts).\n \n \nCREATE TABLE stats \n\n(\n logtime timestamptz,\n key \nint,\n stat1 \nbigint,\n stat2 \nbigint,\n stat3 \nbigint,\n PRIMARY KEY \n(key,logtime)\n);\nCREATE INDEX x ON \nstats(logtime);\n \nThere are on the \norder of 1M unique values for \"key\" and a new row for each key value will be \ninserted say every 15 minutes. These rows are divided up between a number \nof different inserting elements, but that isn't relevant.\n \nThe problem is, the \ninsert pattern has low correlation with the (key,logtime) index. In \nthis case, would need >1M blocks in my shared_buffer space to prevent a \nread-modify-write type of pattern happening during the inserts (given a large \nenough database).\n \nWondering about \nlowering the BLKSZ value so that the total working set of blocks required can be \nmaintained in my shared buffers. Our database only has 8G of memory and \nlikely need to reduce BLKSZ to 512....\n \nAny comment on other \naffects or gotchas with lowering the size of BLKSZ? Currently, our \ndatabase is thrashing its cache of blocks we we're getting only ~100 \ninserts/second, every insert results in a evict-read-modify \noperation.\n \n \nIdeally, like to \nkeep the entire working set of blocks in memory across insert periods so that \nthe i/o looks more like write full blocks....\n \nThanks\nMarc",
"msg_date": "Mon, 25 Sep 2006 16:09:33 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Decreasing BLKSZ"
},
{
"msg_contents": "Hi, Marc,\n\nMarc Morin wrote:\n\n> The problem is, the insert pattern has low correlation with the\n> (key,logtime) index. In this case, would need >1M blocks in my\n> shared_buffer space to prevent a read-modify-write type of pattern\n> happening during the inserts (given a large enough database).\n\nWould it be possible to change the primary key to (logtime,key)? This\ncould help keeping the \"working window\" small.\n\nSecondly, the real working set is smaller, as the rows are all inserted\nat the end of the table, filling each page until it's full, so only the\nlast pages are accessed. There's no relation between the index order,\nand the order of data on disk, unless you CLUSTER.\n\n> Any comment on other affects or gotchas with lowering the size of\n> BLKSZ? Currently, our database is thrashing its cache of blocks we\n> we're getting only ~100 inserts/second, every insert results in a\n> evict-read-modify operation.\n\nI'm not shure that's the correct diagnosis.\n\nDo you have one transaction per insert? Every transaction means a forced\nsync to the disk, so you won't get more than about 100-200 commits per\nsecond, depending on your actual disk rotation speed.\n\nTo improve concurrency of the \"numer of inserters\" running in parallel,\ntry to tweak the config variables commit_delay and commit_sibling, so\nyou get a higher overall throughput at cost of an increased delay per\nconnection, and increase the number of inserters. Using sensible\ntweaking, the throughput should scale nearly linear with the number of\nbackens. :-)\n\nIf feasible for your application, you can also bundle several log\nentries into a single transaction. If you're CPU bound, you can use COPY\ninstead of INSERT or (if you can wait for 8.2) the new multi-row INSERT\nto further improve performance, but I doubt that you're CPU bound.\n\nThe only way to \"really\" get over the sync limit is to have (at least)\nthe WAL on a battery backed ram / SSD media that has no \"spinning disk\"\nphysical limit, or abandon crash safety by turning fsync off.\n\nThanks,\nMarkus.\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Mon, 25 Sep 2006 23:11:08 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ"
},
{
"msg_contents": "> Would it be possible to change the primary key to \n> (logtime,key)? This could help keeping the \"working window\" small.\n\nNo, the application accessing the data wants all the rows between start\nand end time for a particular key value. \n\n> \n> Secondly, the real working set is smaller, as the rows are \n> all inserted at the end of the table, filling each page until \n> it's full, so only the last pages are accessed. There's no \n> relation between the index order, and the order of data on \n> disk, unless you CLUSTER.\n\nI'd theorizing that my problem is in updating the index itself and not\nthe heap. Insert order\nRefers to the order by which the applications are inserting the rows and\nas such, the order by\nWhich the index is being updated. This in turn, is causing the b-tree\nto be traverse. Problem\nIs the working set of blocks at the bottom of the btree is too big for\nmy cache.\n\n> \n> > Any comment on other affects or gotchas with lowering the size of \n> > BLKSZ? Currently, our database is thrashing its cache of blocks we \n> > we're getting only ~100 inserts/second, every insert results in a \n> > evict-read-modify operation.\n> \n> I'm not shure that's the correct diagnosis.\n> \n> Do you have one transaction per insert? Every transaction \n> means a forced sync to the disk, so you won't get more than \n> about 100-200 commits per second, depending on your actual \n> disk rotation speed.\n\nNo, an insert consists of roughly 10,000+ rows per transaction block. \n\n> \n> To improve concurrency of the \"numer of inserters\" running in \n> parallel, try to tweak the config variables commit_delay and \n> commit_sibling, so you get a higher overall throughput at \n> cost of an increased delay per connection, and increase the \n> number of inserters. Using sensible tweaking, the throughput \n> should scale nearly linear with the number of backens. :-)\n\nI don't think this will help us here due to large transactions already.\n\n> \n> If feasible for your application, you can also bundle several \n> log entries into a single transaction. If you're CPU bound, \n> you can use COPY instead of INSERT or (if you can wait for \n> 8.2) the new multi-row INSERT to further improve performance, \n> but I doubt that you're CPU bound.\n\n> \n> The only way to \"really\" get over the sync limit is to have \n> (at least) the WAL on a battery backed ram / SSD media that \n> has no \"spinning disk\"\n> physical limit, or abandon crash safety by turning fsync off.\n\nAgain, problem is not with WAL writing, already on it's own raid1 disk\npair. The \nI/O pattern we see is about 1-2% load on WAL and 100% load on the array\nholding the indexes and tables. Throughput is very low, something like\n150k-200K bytes/second of real rows being deposited on the disk.\n\nThe disks are busy seeking all over the disk platter to fetch a block,\nadd a single row, then seek to another spot and write back a previously\ndirty buffer....\n\n> \n> Thanks,\n> Markus.\n> --\n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n> \n> Fight against software patents in Europe! www.ffii.org \n> www.nosoftwarepatents.org\n> \n",
"msg_date": "Mon, 25 Sep 2006 17:54:10 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Decreasing BLKSZ"
},
{
"msg_contents": "\"Marc Morin\" <[email protected]> writes:\n> No, an insert consists of roughly 10,000+ rows per transaction block. \n\nPerhaps it would help to pre-sort these rows by key?\n\nLike Markus, I'm pretty suspicious of lowering BLCKSZ ... you can try it\nbut it's likely to prove counterproductive (more btree index levels,\nmore rows requiring toasting, a tighter limit on what rows will fit at\nall). I doubt I'd try to make it lower than a couple K in any case.\n\nThe bottom line here is likely to be \"you need more RAM\" :-(\n\nI wonder whether there is a way to use table partitioning to make the\ninsert pattern more localized? We'd need to know a lot more about your\ninsertion patterns to guess how, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Sep 2006 18:27:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ "
},
{
"msg_contents": "I'm not sure if decreasing BLKSZ is the way to go. It would allow you\nto have more smaller blocks in memory, but the actual coverage of the\nindex would remain the same; if only 33% of the index fits in memory\nwith the 8K BLKSZ then only 33% would fit in memory with a 4k BLKSZ. I\ncan see where you're going if the tree nodes for all 15 million key\nentries fit in memory as well as the most recent nodes for the logtime\nnodes lower down in the index; basically trying to make sure that the\n\"right\" 33% of the index is in memory. \n\nBut it seems like it might be more useful to have two indexes, one on\nlogtime and one on key. Inserts into the logtime index would be\ncorrelated with your insert order and as such be cache-friendly so\nthat's not an issue. The index on just the key column would be at least\nas small as the active subset of a combined index, so performance should\nbe at least as good as you could possibly achieve by reducing BLKSIZE.\n\nPG 8.1 is smart enough to use a bitmap index scan to combine the two\nindexes at query time; if that gives you adequate performance then it\nwould be simpler than reducing BLKSIZE.\n\n-- Mark Lewis\n\nOn Mon, 2006-09-25 at 17:54 -0400, Marc Morin wrote:\n> > Would it be possible to change the primary key to \n> > (logtime,key)? This could help keeping the \"working window\" small.\n> \n> No, the application accessing the data wants all the rows between start\n> and end time for a particular key value. \n> \n> > \n> > Secondly, the real working set is smaller, as the rows are \n> > all inserted at the end of the table, filling each page until \n> > it's full, so only the last pages are accessed. There's no \n> > relation between the index order, and the order of data on \n> > disk, unless you CLUSTER.\n> \n> I'd theorizing that my problem is in updating the index itself and not\n> the heap. Insert order\n> Refers to the order by which the applications are inserting the rows and\n> as such, the order by\n> Which the index is being updated. This in turn, is causing the b-tree\n> to be traverse. Problem\n> Is the working set of blocks at the bottom of the btree is too big for\n> my cache.\n> \n> > \n> > > Any comment on other affects or gotchas with lowering the size of \n> > > BLKSZ? Currently, our database is thrashing its cache of blocks we \n> > > we're getting only ~100 inserts/second, every insert results in a \n> > > evict-read-modify operation.\n> > \n> > I'm not shure that's the correct diagnosis.\n> > \n> > Do you have one transaction per insert? Every transaction \n> > means a forced sync to the disk, so you won't get more than \n> > about 100-200 commits per second, depending on your actual \n> > disk rotation speed.\n> \n> No, an insert consists of roughly 10,000+ rows per transaction block. \n> \n> > \n> > To improve concurrency of the \"numer of inserters\" running in \n> > parallel, try to tweak the config variables commit_delay and \n> > commit_sibling, so you get a higher overall throughput at \n> > cost of an increased delay per connection, and increase the \n> > number of inserters. Using sensible tweaking, the throughput \n> > should scale nearly linear with the number of backens. :-)\n> \n> I don't think this will help us here due to large transactions already.\n> \n> > \n> > If feasible for your application, you can also bundle several \n> > log entries into a single transaction. 
If you're CPU bound, \n> > you can use COPY instead of INSERT or (if you can wait for \n> > 8.2) the new multi-row INSERT to further improve performance, \n> > but I doubt that you're CPU bound.\n> \n> > \n> > The only way to \"really\" get over the sync limit is to have \n> > (at least) the WAL on a battery backed ram / SSD media that \n> > has no \"spinning disk\"\n> > physical limit, or abandon crash safety by turning fsync off.\n> \n> Again, problem is not with WAL writing, already on it's own raid1 disk\n> pair. The \n> I/O pattern we see is about 1-2% load on WAL and 100% load on the array\n> holding the indexes and tables. Throughput is very low, something like\n> 150k-200K bytes/second of real rows being deposited on the disk.\n> \n> The disks are busy seeking all over the disk platter to fetch a block,\n> add a single row, then seek to another spot and write back a previously\n> dirty buffer....\n> \n> > \n> > Thanks,\n> > Markus.\n> > --\n> > Markus Schaber | Logical Tracking&Tracing International AG\n> > Dipl. Inf. | Software Development GIS\n> > \n> > Fight against software patents in Europe! www.ffii.org \n> > www.nosoftwarepatents.org\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Mon, 25 Sep 2006 15:28:24 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ"
},
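[Editor's note, not part of the archived thread above: a minimal sketch of the two-single-column-index layout suggested in the preceding message, reusing the table from the start of the thread. Note that this drops the uniqueness the original PRIMARY KEY (key, logtime) enforced, and whether 8.1 actually chooses a bitmap AND depends on the data, so treat it as an illustration rather than a guarantee.]

    CREATE TABLE stats (
        logtime timestamptz,
        key     int,
        stat1   bigint,
        stat2   bigint,
        stat3   bigint
    );
    -- insert-friendly: new rows only touch the right-hand edge of this index
    CREATE INDEX stats_logtime_idx ON stats (logtime);
    -- far smaller working set than the composite (key, logtime) index
    CREATE INDEX stats_key_idx ON stats (key);

    -- typical drill-down; the planner can bitmap-AND the two indexes
    SELECT * FROM stats
    WHERE key = 42
      AND logtime BETWEEN '2006-09-01' AND '2006-09-25';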
{
"msg_contents": " \n> \n> The bottom line here is likely to be \"you need more RAM\" :-(\n\nYup. Just trying to get a handle on what I can do if I need more than\n16G\nOf ram... That's as much as I can put on the installed based of\nservers.... 100s of them.\n\n> \n> I wonder whether there is a way to use table partitioning to \n> make the insert pattern more localized? We'd need to know a \n> lot more about your insertion patterns to guess how, though.\n> \n> \t\t\tregards, tom lane\n\nWe're doing partitioning as well.....\n> \n",
"msg_date": "Mon, 25 Sep 2006 20:29:37 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Decreasing BLKSZ "
},
{
"msg_contents": "> > The bottom line here is likely to be \"you need more RAM\" :-(\n> \n> Yup. Just trying to get a handle on what I can do if I need more than\n> 16G\n> Of ram... That's as much as I can put on the installed based of\n> servers.... 100s of them.\n> \n> >\n> > I wonder whether there is a way to use table partitioning to\n> > make the insert pattern more localized? We'd need to know a\n> > lot more about your insertion patterns to guess how, though.\n> >\n> > \t\t\tregards, tom lane\n> \n> We're doing partitioning as well.....\n> >\nI'm guessing that you basically have a data collection application that\nsends in lots of records, and a reporting application that wants\nsummaries of the data? So, if I understand the problem correctly, you\ndon't have enough ram (or may not in the future) to index the data as it\ncomes in. \n\nNot sure how much you can change the design, but what about either\nupdating a summary table(s) as the records come in (trigger, part of the\ntransaction, or do it in the application) or, index periodically? In\notherwords, load a partition (say a day's worth) then index that\npartition all at once. If you're doing real-time analysis that might not\nwork so well though, but the summary tables should. \n\nI assume the application generates unique records on its own due to the\ntimestamp, so this isn't really about checking for constraint\nviolations? If so, you can probably do away with the index on the tables\nthat you're running the inserts on.\n\n- Bucky\n",
"msg_date": "Tue, 26 Sep 2006 17:25:40 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ "
},
{
"msg_contents": "Yes, that is our application. We have implemented both scenarios...\n\n1- partitions loaded without indexes on them.. And build index \"when\npartition is full\". Slow to drill down into incomplete partitions.\n2- paritions with index as loaded. Slow, on insert (problem mentioned)\nbut good to drill down....\n\nSo, I'd like my cake and eat it too... :-)\n\nI'd like to have my indexes built as rows are inserted into the\npartition so help with the drill down...\n\n> -----Original Message-----\n> From: Bucky Jordan [mailto:[email protected]] \n> Sent: Tuesday, September 26, 2006 5:26 PM\n> To: Marc Morin; Tom Lane\n> Cc: Markus Schaber; [email protected]\n> Subject: RE: [PERFORM] Decreasing BLKSZ \n> \n> > > The bottom line here is likely to be \"you need more RAM\" :-(\n> > \n> > Yup. Just trying to get a handle on what I can do if I \n> need more than \n> > 16G Of ram... That's as much as I can put on the installed based of \n> > servers.... 100s of them.\n> > \n> > >\n> > > I wonder whether there is a way to use table partitioning to make \n> > > the insert pattern more localized? We'd need to know a lot more \n> > > about your insertion patterns to guess how, though.\n> > >\n> > > \t\t\tregards, tom lane\n> > \n> > We're doing partitioning as well.....\n> > >\n> I'm guessing that you basically have a data collection \n> application that sends in lots of records, and a reporting \n> application that wants summaries of the data? So, if I \n> understand the problem correctly, you don't have enough ram \n> (or may not in the future) to index the data as it comes in. \n> \n> Not sure how much you can change the design, but what about \n> either updating a summary table(s) as the records come in \n> (trigger, part of the transaction, or do it in the \n> application) or, index periodically? In otherwords, load a \n> partition (say a day's worth) then index that partition all \n> at once. If you're doing real-time analysis that might not \n> work so well though, but the summary tables should. \n> \n> I assume the application generates unique records on its own \n> due to the timestamp, so this isn't really about checking for \n> constraint violations? If so, you can probably do away with \n> the index on the tables that you're running the inserts on.\n> \n> - Bucky\n> \n",
"msg_date": "Tue, 26 Sep 2006 17:36:04 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Decreasing BLKSZ "
},
{
"msg_contents": "> \n> So, I'd like my cake and eat it too... :-)\n> \n> I'd like to have my indexes built as rows are inserted into the\n> partition so help with the drill down...\n> \nSo you want to drill down so fine grained that summary tables don't do\nmuch good? Keep in mind, even if you roll up only two records, that's\nhalf as many you have to process (be it for drill down or index). \n\nI've seen applications that have a log table with no indexes/constraints\nand lots of records being inserted, then they only report on very fine\ngrained summary tables. Drill downs still work pretty well, but if you\nget audited and want to see that specific action, well, you're in for a\nbit of a wait, but hopefully that doesn't happen too often.\n\nIf that's the case (summary tables won't work), I'd be very curious how\nyou manage to get your cake and eat it too :) \n\n- Bucky\n",
"msg_date": "Tue, 26 Sep 2006 18:14:07 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ "
},
{
"msg_contents": "Hi, Marc,\n\nMarc Morin wrote:\n\n>> I wonder whether there is a way to use table partitioning to \n>> make the insert pattern more localized? We'd need to know a \n>> lot more about your insertion patterns to guess how, though.\n> \n> We're doing partitioning as well.....\n\nAnd is constraint exclusion set up properly, and have you verified that\nit works?\n\nHTH,\nMarkus\n",
"msg_date": "Wed, 27 Sep 2006 00:39:09 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ"
},
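[Editor's note, not part of the archived thread above: a hedged sketch of what 'partitioning with constraint exclusion set up properly' looks like on 8.1, using the stats table from this thread; the daily boundary and the object names are invented for illustration.]

    -- one child table per day; the CHECK constraint is what lets the
    -- planner skip partitions that cannot match the query
    CREATE TABLE stats_20060925 (
        CHECK (logtime >= '2006-09-25' AND logtime < '2006-09-26')
    ) INHERITS (stats);
    CREATE INDEX stats_20060925_key_logtime ON stats_20060925 (key, logtime);

    -- off by default in 8.1; must be on for the CHECK constraints to prune
    SET constraint_exclusion = on;

    SELECT * FROM stats
    WHERE key = 42
      AND logtime >= '2006-09-25' AND logtime < '2006-09-26';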
{
"msg_contents": "On Sep 26, 2006, at 5:36 PM, Marc Morin wrote:\n> 1- partitions loaded without indexes on them.. And build index \"when\n> partition is full\". Slow to drill down into incomplete partitions.\n> 2- paritions with index as loaded. Slow, on insert (problem \n> mentioned)\n> but good to drill down....\n\nHow big are your partitions? The number of rows in your active \npartition will determine how large your indexes are (and probably \nmore importantly, how many levels there are), which will definitely \naffect your timing. So, you might have better luck with a smaller \npartition size.\n\nI'd definitely try someone else's suggestion of making the PK \nlogtime, key (assuming that you need to enforce uniqueness) and \nhaving an extra index on just key. If you don't need to enforce \nuniqueness, just have one index on key and one on logtime. Or if your \npartitions are small enough, don't even create the logtime index \nuntil the partition isn't being inserted into anymore.\n\nIf the number of key values is pretty fixed, it'd be an interesting \nexperiment to try partitioning on that, perhaps even with one key per \npartition (which would allow you to drop the key from the tables \nentirely, ie:\n\nCREATE TABLE stats_1 (logtime PRIMARY KEY, stat1, stat2, stat3);\nCREATE TABLE stats_2 ...\n\nCREATE VIEW stats AS\nSELECT 1 AS key, * FROM stats_1\nUNION ALL SELECT 2, * FROM stats_2\n...\n\nI wouldn't put too much work into that as no real effort's been \nexpended to optimize for that case (especially the resulting monster \nUNION ALL), but you might get lucky.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 26 Sep 2006 23:42:16 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decreasing BLKSZ "
}
] |
[
{
"msg_contents": "I have some odd cases here joining two tables - the planner insists on\nMerge Join, but Nested Loop is really faster - and that makes sense,\nsince I'm selecting just a small partition of the data available. All\nplanner constants seems to be set at the default values, the only way to\nget a shift towards Nested Loops seems to be to raise the constants. I\nbelieve our memory is big enough to hold the indices, and that the\neffective_cache_size is set to a sane value (but how to verify that,\nanyway?).\n\nWhat causes the nested loops to be estimated so costly - or is it the\nmerge joins that are estimated too cheaply? Should I raise all the\nplanner cost constants, or only one of them?\n\nHere are some sample explains:\n\n\nprod=> explain analyze select * from ticket join users on users_id=users.id where ticket.created>'2006-09-25 17:00';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..67664.15 rows=10977 width=675) (actual time=0.038..202.877 rows=10627 loops=1)\n -> Index Scan using ticket_on_created on ticket (cost=0.00..11665.94 rows=10977 width=80) (actual time=0.014..35.571 rows=10627 loops=1)\n Index Cond: (created > '2006-09-25 17:00:00'::timestamp without time zone)\n -> Index Scan using users_pkey on users (cost=0.00..5.00 rows=1 width=595) (actual time=0.007..0.008 rows=1 loops=10627)\n Index Cond: (\"outer\".users_id = users.id)\n Total runtime: 216.612 ms\n(6 rows)\n\nprod=> explain analyze select * from ticket join users on users_id=users.id where ticket.created>'2006-09-25 16:00';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=12844.93..68580.37 rows=11401 width=675) (actual time=106.631..1712.458 rows=11554 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".users_id)\n -> Index Scan using users_pkey on users (cost=0.00..54107.38 rows=174508 width=595) (actual time=0.041..1215.221 rows=174599 loops=1)\n -> Sort (cost=12844.93..12873.43 rows=11401 width=80) (actual time=105.753..123.905 rows=11554 loops=1)\n Sort Key: ticket.users_id\n -> Index Scan using ticket_on_created on ticket (cost=0.00..12076.68 rows=11401 width=80) (actual time=0.074..65.297 rows=11554 loops=1)\n Index Cond: (created > '2006-09-25 16:00:00'::timestamp without time zone)\n Total runtime: 1732.452 ms\n(8 rows)\n",
"msg_date": "Tue, 26 Sep 2006 21:35:53 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merge Join vs Nested Loop"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> What causes the nested loops to be estimated so costly - or is it the\n> merge joins that are estimated too cheaply? Should I raise all the\n> planner cost constants, or only one of them?\n\nIf your tables are small enough to fit (mostly) in memory, then the\nplanner tends to overestimate the cost of a nestloop because it fails to\naccount for cacheing effects across multiple scans of the inner table.\nThis is addressed in 8.2, but in earlier versions about all you can do\nis reduce random_page_cost, and a sane setting of that (ie not less than\n1.0) may not be enough to push the cost estimates where you want them.\nStill, reducing random_page_cost ought to be your first recourse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Sep 2006 18:09:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join vs Nested Loop "
},
{
"msg_contents": "[Tom Lane - Tue at 06:09:56PM -0400]\n> If your tables are small enough to fit (mostly) in memory, then the\n> planner tends to overestimate the cost of a nestloop because it fails to\n> account for cacheing effects across multiple scans of the inner table.\n> This is addressed in 8.2, but in earlier versions about all you can do\n> is reduce random_page_cost, and a sane setting of that (ie not less than\n> 1.0) may not be enough to push the cost estimates where you want them.\n> Still, reducing random_page_cost ought to be your first recourse.\n\nThank you. Reducing the random page hit cost did reduce the nested loop\ncost significantly, sadly the merge join costs where reduced even\nfurther, causing the planner to favor those even more than before.\nSetting the effective_cache_size really low solved the issue, but I\nbelieve we rather want to have a high effective_cache_size.\n\nEventually, setting the effective_cache_size to near-0, and setting\nrandom_page_cost to 1 could maybe be a desperate measure. Another one\nis to turn off merge/hash joins and seq scans. It could be a worthwhile\nexperiment if nothing else :-)\n\nThe bulk of our database is historical data that most often is not\ntouched at all, though one never knows for sure until the queries have\nrun all through - so table partitioning is not an option, it seems like.\nMy general idea is that nested loops would cause the most recent data\nand most important part of the indexes to stay in the OS cache. Does\nthis make sense from an experts point of view? :-)\n\n",
"msg_date": "Wed, 27 Sep 2006 11:48:03 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "On Wed, 2006-09-27 at 11:48 +0200, Tobias Brox wrote:\n> [Tom Lane - Tue at 06:09:56PM -0400]\n> > If your tables are small enough to fit (mostly) in memory, then the\n> > planner tends to overestimate the cost of a nestloop because it fails to\n> > account for cacheing effects across multiple scans of the inner table.\n> > This is addressed in 8.2, but in earlier versions about all you can do\n> > is reduce random_page_cost, and a sane setting of that (ie not less than\n> > 1.0) may not be enough to push the cost estimates where you want them.\n> > Still, reducing random_page_cost ought to be your first recourse.\n> \n> Thank you. Reducing the random page hit cost did reduce the nested loop\n> cost significantly, sadly the merge join costs where reduced even\n> further, causing the planner to favor those even more than before.\n> Setting the effective_cache_size really low solved the issue, but I\n> believe we rather want to have a high effective_cache_size.\n> \n> Eventually, setting the effective_cache_size to near-0, and setting\n> random_page_cost to 1 could maybe be a desperate measure. Another one\n> is to turn off merge/hash joins and seq scans. It could be a worthwhile\n> experiment if nothing else :-)\n> \n> The bulk of our database is historical data that most often is not\n> touched at all, though one never knows for sure until the queries have\n> run all through - so table partitioning is not an option, it seems like.\n> My general idea is that nested loops would cause the most recent data\n> and most important part of the indexes to stay in the OS cache. Does\n> this make sense from an experts point of view? :-)\n\nHave you tried chaning the cpu_* cost options to see how they affect\nmerge versus nested loop?\n",
"msg_date": "Wed, 27 Sep 2006 09:58:30 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "[Scott Marlowe - Wed at 09:58:30AM -0500]\n> Have you tried chaning the cpu_* cost options to see how they affect\n> merge versus nested loop?\n\nAs said in the original post, increasing any of them shifts the planner\ntowards nested loops instead of merge_join. I didn't check which one of\nthe cost constants made the most impact.\n",
"msg_date": "Wed, 27 Sep 2006 17:05:12 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "On Wed, 2006-09-27 at 17:05 +0200, Tobias Brox wrote:\n> [Scott Marlowe - Wed at 09:58:30AM -0500]\n> > Have you tried chaning the cpu_* cost options to see how they affect\n> > merge versus nested loop?\n> \n> As said in the original post, increasing any of them shifts the planner\n> towards nested loops instead of merge_join. I didn't check which one of\n> the cost constants made the most impact.\n\nSo, by decreasing them, you should move away from nested loops then,\nright? Has that not worked for some reason?\n",
"msg_date": "Wed, 27 Sep 2006 10:19:24 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "[Scott Marlowe - Wed at 10:19:24AM -0500]\n> So, by decreasing them, you should move away from nested loops then,\n> right? Has that not worked for some reason?\n\nI want to move to nested loops, they are empirically faster in many of\nour queries, and that makes sense since we've got quite big tables and\nmost of the queries only touch a small partition of the data.\n\nI've identified that moving any of the cost constants (including\nrandom_page_cost) upwards gives me the right result, but I'm still wary\nif this is the right thing to do. Even if so, what constants should I\ntarget first? I could of course try to analyze a bit what constants\ngive the biggest impact. Then again, we have many more queries hitting\nthe database than the few I'm doing research into (and those I'm doing\nresearch into is even very simplified versions of the real queries).\n",
"msg_date": "Wed, 27 Sep 2006 17:26:55 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "On Wed, 2006-09-27 at 10:26, Tobias Brox wrote:\n> [Scott Marlowe - Wed at 10:19:24AM -0500]\n> > So, by decreasing them, you should move away from nested loops then,\n> > right? Has that not worked for some reason?\n> \n> I want to move to nested loops, they are empirically faster in many of\n> our queries, and that makes sense since we've got quite big tables and\n> most of the queries only touch a small partition of the data.\n> \n> I've identified that moving any of the cost constants (including\n> random_page_cost) upwards gives me the right result, but I'm still wary\n> if this is the right thing to do. Even if so, what constants should I\n> target first? I could of course try to analyze a bit what constants\n> give the biggest impact. Then again, we have many more queries hitting\n> the database than the few I'm doing research into (and those I'm doing\n> research into is even very simplified versions of the real queries).\n\nAhh, the other direction then. I would think it's safer to nudge these\na bit than to drop random page cost to 1 or set effective_cache_size to\n1000 etc...\n\nBut I'm sure you should test the other queries and / or keep an eye on\nyour database while running to make sure those changes don't impact\nother users.\n",
"msg_date": "Wed, 27 Sep 2006 10:30:59 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "[Scott Marlowe - Wed at 10:31:35AM -0500]\n> And remember, you can always change any of those settings in session for\n> just this one query to force the planner to make the right decision.\n\nsure ... I could identify the most problematic queries, and hack up the\nsoftware application to modify the config settings for those exact\nqueries ... but it's a very ugly solution. :-) Particularly if Tom Lane\nis correct saying the preferance of merge join instead of nested loop is\nindeed a bug.\n",
"msg_date": "Wed, 27 Sep 2006 17:36:19 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join vs Nested Loop"
},
{
"msg_contents": "I found a way to survive yet some more weeks :-)\n\nOne of the queries we've had most problems with today is principially\nsomething like:\n\n select A.*,sum(B.*) from A join B where A.created>x and ... order by\n A.created desc limit 32 group by A.*\n\nThere is by average two rows in B for every row in A.\nNote the 'limit 32'-part. I rewrote the query to:\n\n select A.*,(select sum(B.*) from B ...) where A.created>x and ...\n order by A.created desc limit 32;\n\nAnd voila, the planner found out it needed just some few rows from A,\nand execution time was cutted from 1-2 minutes down to 20 ms. :-)\n\nI've also started thinking a bit harder about table partitioning, if we\nadd some more redundancy both to the queries and the database, it may\nhelp us drastically reduce the real expenses of some of the merge\njoins...\n\n",
"msg_date": "Wed, 27 Sep 2006 21:01:45 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join vs Nested Loop"
}
] |
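[Editor's note, not part of the archived thread above: a concrete, hypothetical rendering of the rewrite described in the last message; the table and column names (a, b, b.a_id, b.amount) are invented. Moving the aggregate into a correlated subselect lets the LIMIT stop the scan of a.created early instead of aggregating the whole join first.]

    -- before: join plus GROUP BY; far more rows are aggregated than the
    -- LIMIT will ever return
    SELECT a.id, a.created, sum(b.amount) AS total
    FROM a JOIN b ON b.a_id = a.id
    WHERE a.created > '2006-09-01'
    GROUP BY a.id, a.created
    ORDER BY a.created DESC
    LIMIT 32;

    -- after: correlated subselect; only the 32 newest rows of a are read
    -- (via an index on a.created), then each one is summed individually
    SELECT a.id, a.created,
           (SELECT sum(b.amount) FROM b WHERE b.a_id = a.id) AS total
    FROM a
    WHERE a.created > '2006-09-01'
    ORDER BY a.created DESC
    LIMIT 32;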
[
{
"msg_contents": "Hi,\n\nI have the following query which has been running very slowly and after a\nlot of testing/trial and error I found an execution plan that ran the query\nin a fraction of the time (and then lost the statistics that produced it).\nWhat I wish to know is how to force the query to use the faster execution\nplan.\n\nQuery:\nSELECT count(*) as count FROM \n( \n\tSELECT *\n\t\tFROM transaction t, merchant m\n\t\tWHERE t.merchant_id = m.id \n\t\t\tAND m.id = 198\n\t\t\tAND t.transaction_date >= '20050101'\n\t\t\tAND t.transaction_date <= '20060925'\n\t\t\tAND credit_card_no LIKE '1111%111'\n\n\tUNION ALL\n\tSELECT *\n\t\tFROM transaction t, merchant m\n\t\tWHERE t.merchant_id = m.id\n\t\t\tAND m.parent_merchant_id = 198\n\t\t\tAND t.transaction_date >= '20050101'\n\t\t\tAND t.transaction_date <= '20060925'\n\t\t\tAND credit_card_no LIKE '1111%111'\n) AS foobar\n\nDesired Execution Plan:\nAggregate (cost=97377.90..97377.90 rows=1 width=0)\n -> Subquery Scan foobar (cost=0.00..97377.86 rows=16 width=0)\n -> Append (cost=0.00..97377.70 rows=16 width=636)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..10304.81 rows=3\nwidth=636)\n -> Nested Loop (cost=0.00..10304.78 rows=3 width=636)\n -> Index Scan using pk_merchant on merchant m\n(cost=0.00..5.11 rows=1 width=282)\n Index Cond: (id = 198)\n -> Index Scan using ix_transaction_merchant_id on\n\"transaction\" t (cost=0.00..10299.64 rows=3 width=354)\n Index Cond: (198 = merchant_id)\n Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\n -> Subquery Scan \"*SELECT* 2\" (cost=13.86..87072.89 rows=13\nwidth=636)\n -> Hash Join (cost=13.86..87072.76 rows=13 width=636)\n Hash Cond: (\"outer\".merchant_id = \"inner\".id)\n -> Seq Scan on \"transaction\" t\n(cost=0.00..87052.65 rows=1223 width=354)\n Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\n -> Hash (cost=13.85..13.85 rows=4 width=282)\n -> Index Scan using\nix_merchant_parent_merchant_id on merchant m (cost=0.00..13.85 rows=4\nwidth=282)\n Index Cond: (parent_merchant_id = 198)\n\nUndesired Execution Plan:\nAggregate (cost=88228.82..88228.82 rows=1 width=0)\n -> Subquery Scan foobar (cost=0.00..88228.73 rows=35 width=0)\n -> Append (cost=0.00..88228.38 rows=35 width=631)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1137.61 rows=1\nwidth=631)\n -> Nested Loop (cost=0.00..1137.60 rows=1 width=631)\n -> Index Scan using ix_transaction_merchant_id on\n\"transaction\" t (cost=0.00..1132.47 rows=1 width=349)\n Index Cond: (198 = merchant_id)\n Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\n -> Index Scan using pk_merchant on merchant m\n(cost=0.00..5.11 rows=1 width=282)\n Index Cond: (id = 198)\n -> Subquery Scan \"*SELECT* 2\" (cost=20.90..87090.77 rows=34\nwidth=631)\n -> Hash Join (cost=20.90..87090.43 rows=34 width=631)\n Hash Cond: (\"outer\".merchant_id = \"inner\".id)\n -> Seq Scan on \"transaction\" t\n(cost=0.00..87061.04 rows=1632 width=349)\n Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\n -> Hash (cost=20.88..20.88 rows=8 width=282)\n -> Seq Scan on merchant m\n(cost=0.00..20.88 rows=8 width=282)\n Filter: (parent_merchant_id = 198)\n\n\n\nThanks for any help/ideas\n\n\nTim\n\n",
"msg_date": "Wed, 27 Sep 2006 16:10:11 +0930",
"msg_from": "\"Tim Truman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing the use of particular execution plans"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Truman\n> \n> Hi,\n> \n> I have the following query which has been running very slowly \n> and after a\n> lot of testing/trial and error I found an execution plan that \n> ran the query\n> in a fraction of the time (and then lost the statistics that \n> produced it).\n> What I wish to know is how to force the query to use the \n> faster execution\n> plan.\n\nIt would be a bit easier to diagnose the problem if you posted EXPLAIN\nANALYZE rather than just EXPLAIN. The two plans you posted looked very\nsimilar except for the order of the nested loop in subquery 1 and an index\nscan rather than a seq scan in subquery 2. \n\nMy guess would be that the order of the nested loop is determined mostly by\nestimates of matching rows. If you ran an EXPLAIN ANALYZE you could tell if\nthe planner is estimating correctly. If it is not, you could try increasing\nyour statistics target and running ANALYZE. \n\nTo make the planner prefer an index scan over a seq scan, I would first\ncheck the statistics again, and then you can try setting enable_seqscan to\nfalse (enable_seqscan is meant more for testing than production) or, you\ncould try reducing random_page_cost, but you should test that against a\nrange of queries before putting it in production.\n\nDave\n\n",
"msg_date": "Wed, 27 Sep 2006 10:51:26 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing the use of particular execution plans"
},
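A rough illustration of the knobs mentioned above, using the table and column names from the query in this thread (the statistics target of 500 and the random_page_cost value are arbitrary figures to experiment with, not recommendations):

    -- raise the statistics targets for the filtered columns, then re-analyze
    ALTER TABLE "transaction" ALTER COLUMN transaction_date SET STATISTICS 500;
    ALTER TABLE "transaction" ALTER COLUMN credit_card_no SET STATISTICS 500;
    ANALYZE "transaction";

    -- for testing only: see whether the index plan wins once seqscans are discouraged
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT ...;   -- the query from this thread
    SET enable_seqscan = on;

    -- or try a cheaper random_page_cost in a single session
    SET random_page_cost = 2;

SET only affects the current session, so these can be tried without touching postgresql.conf.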
{
"msg_contents": "On Wed, Sep 27, 2006 at 10:51:26AM -0500, Dave Dutcher wrote:\n> To make the planner prefer an index scan over a seq scan, I would first\n> check the statistics again, and then you can try setting enable_seqscan to\n> false (enable_seqscan is meant more for testing than production) or, you\n> could try reducing random_page_cost, but you should test that against a\n> range of queries before putting it in production.\n\nIndex scans are also pretty picky about correlation. If you have really\nlow correlation you don't want to index scan, but I think our current\nestimates make it too eager to switch to a seqscan.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 27 Sep 2006 15:13:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing the use of particular execution plans"
},
{
"msg_contents": "Here is an \"explain analyze\" for the query that performs slowly, I hope this\nhelps unfortunately I can't reproduce the version of the query that ran\nquickly and therefore can't provide and 'explain analyze' for it.\n\n\"Aggregate (cost=88256.32..88256.32 rows=1 width=0) (actual\ntime=55829.000..55829.000 rows=1 loops=1)\"\n\" -> Subquery Scan foobar (cost=0.00..88256.23 rows=35 width=0) (actual\ntime=19235.000..55829.000 rows=24 loops=1)\"\n\" -> Append (cost=0.00..88255.88 rows=35 width=631) (actual\ntime=19235.000..55829.000 rows=24 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1165.12 rows=1\nwidth=631) (actual time=16.000..16.000 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..1165.11 rows=1 width=631)\n(actual time=16.000..16.000 rows=0 loops=1)\"\n\" -> Index Scan using ix_transaction_merchant_id\non \"transaction\" t (cost=0.00..1159.98 rows=1 width=349) (actual\ntime=16.000..16.000 rows=0 loops=1)\"\n\" Index Cond: (198 = merchant_id)\"\n\" Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\"\n\" -> Index Scan using pk_merchant on merchant m\n(cost=0.00..5.11 rows=1 width=282) (never executed)\"\n\" Index Cond: (id = 198)\"\n\" -> Subquery Scan \"*SELECT* 2\" (cost=20.90..87090.76 rows=34\nwidth=631) (actual time=19219.000..55813.000 rows=24 loops=1)\"\n\" -> Hash Join (cost=20.90..87090.42 rows=34 width=631)\n(actual time=19219.000..55813.000 rows=24 loops=1)\"\n\" Hash Cond: (\"outer\".merchant_id = \"inner\".id)\"\n\" -> Seq Scan on \"transaction\" t\n(cost=0.00..87061.04 rows=1630 width=349) (actual time=234.000..55797.000\nrows=200 loops=1)\"\n\" Filter: ((transaction_date >=\n'2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n((credit_card_no)::text ~~ '4564%549'::text))\"\n\" -> Hash (cost=20.88..20.88 rows=8 width=282)\n(actual time=16.000..16.000 rows=0 loops=1)\"\n\" -> Seq Scan on merchant m\n(cost=0.00..20.88 rows=8 width=282) (actual time=0.000..16.000 rows=7\nloops=1)\"\n\" Filter: (parent_merchant_id = 198)\"\n\"Total runtime: 55829.000 ms\"\n\nOnce again any help much appreciated.\n\nTim\n\n-----Original Message-----\nFrom: Dave Dutcher [mailto:[email protected]] \nSent: Thursday, 28 September 2006 1:21 AM\nTo: 'Tim Truman'; [email protected]\nSubject: RE: [PERFORM] Forcing the use of particular execution plans\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Truman\n> \n> Hi,\n> \n> I have the following query which has been running very slowly \n> and after a\n> lot of testing/trial and error I found an execution plan that \n> ran the query\n> in a fraction of the time (and then lost the statistics that \n> produced it).\n> What I wish to know is how to force the query to use the \n> faster execution\n> plan.\n\nIt would be a bit easier to diagnose the problem if you posted EXPLAIN\nANALYZE rather than just EXPLAIN. The two plans you posted looked very\nsimilar except for the order of the nested loop in subquery 1 and an index\nscan rather than a seq scan in subquery 2. \n\nMy guess would be that the order of the nested loop is determined mostly by\nestimates of matching rows. If you ran an EXPLAIN ANALYZE you could tell if\nthe planner is estimating correctly. If it is not, you could try increasing\nyour statistics target and running ANALYZE. 
\n\nTo make the planner prefer an index scan over a seq scan, I would first\ncheck the statistics again, and then you can try setting enable_seqscan to\nfalse (enable_seqscan is meant more for testing than production) or, you\ncould try reducing random_page_cost, but you should test that against a\nrange of queries before putting it in production.\n\nDave\n",
"msg_date": "Tue, 3 Oct 2006 13:29:37 +0930",
"msg_from": "\"Tim Truman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing the use of particular execution plans"
},
{
"msg_contents": "\"Tim Truman\" <[email protected]> writes:\n> Here is an \"explain analyze\" for the query that performs slowly,\n\nThis shows that the planner is exactly correct in thinking that all\nthe runtime is going into the seqscan on transaction:\n\n> \"Aggregate (cost=88256.32..88256.32 rows=1 width=0) (actual\n> time=55829.000..55829.000 rows=1 loops=1)\"\n> ...\n> \" -> Seq Scan on \"transaction\" t\n> (cost=0.00..87061.04 rows=1630 width=349) (actual time=234.000..55797.000\n> rows=200 loops=1)\"\n> \" Filter: ((transaction_date >=\n> '2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n> ((credit_card_no)::text ~~ '4564%549'::text))\"\n\nSince that component of the plan was identical in your two original\nplans (\"desired\" and \"undesired\") it seems pretty clear that you have\nnot correctly identified what your problem is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Oct 2006 00:19:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing the use of particular execution plans "
},
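Given Tom's point that essentially all of the runtime goes into the filter-driven scan of "transaction" in the second branch (the parent-merchant one, which cannot use ix_transaction_merchant_id), one thing worth trying, although it is not something suggested in the thread itself, is an index that matches those filter columns. The index names below are made up, and text_pattern_ops is only needed when the database locale is not C:

    -- the credit_card_no pattern is anchored at the front, so a btree prefix index can help
    CREATE INDEX ix_transaction_ccno ON "transaction" (credit_card_no text_pattern_ops);

    -- or cover the date-range predicate instead
    CREATE INDEX ix_transaction_date ON "transaction" (transaction_date);

    ANALYZE "transaction";

Whether the planner actually picks either index depends on how selective the predicates turn out to be, so compare EXPLAIN ANALYZE output before and after.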
{
"msg_contents": "\nThanks Tom\nThe time difference did distract me from the issue. Switching Seq Scan to\noff reduced the runtime greatly, so I am now adjusting the\neffective_cache_size, random_page_cost settings to favor indexes over Seq\nScans.\n\nRegards,\nTim\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, 3 October 2006 1:50 PM\nTo: Tim Truman\nCc: 'Dave Dutcher'; [email protected]\nSubject: Re: [PERFORM] Forcing the use of particular execution plans \n\n\"Tim Truman\" <[email protected]> writes:\n> Here is an \"explain analyze\" for the query that performs slowly,\n\nThis shows that the planner is exactly correct in thinking that all\nthe runtime is going into the seqscan on transaction:\n\n> \"Aggregate (cost=88256.32..88256.32 rows=1 width=0) (actual\n> time=55829.000..55829.000 rows=1 loops=1)\"\n> ...\n> \" -> Seq Scan on \"transaction\" t\n> (cost=0.00..87061.04 rows=1630 width=349) (actual time=234.000..55797.000\n> rows=200 loops=1)\"\n> \" Filter: ((transaction_date >=\n> '2005-01-01'::date) AND (transaction_date <= '2006-09-25'::date) AND\n> ((credit_card_no)::text ~~ '4564%549'::text))\"\n\nSince that component of the plan was identical in your two original\nplans (\"desired\" and \"undesired\") it seems pretty clear that you have\nnot correctly identified what your problem is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 3 Oct 2006 16:21:01 +0930",
"msg_from": "\"Tim Truman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing the use of particular execution plans "
},
{
"msg_contents": "Jim C. Nasby wrote:\n> \n> Index scans are also pretty picky about correlation. If you have really\n> low correlation you don't want to index scan,\n\nI'm still don't think \"correlation\" is the right metric\nat all for making this decision.\n\nIf you have a list of addresses clustered by \"zip\"\nthe \"correlation\" of State, City, County, etc will all be zero (since\nthe zip codes don't match the alphabetical order of state or city names)\nbut index scans are still big wins because the data for any given\nstate or city will be packed on the same few pages - and in fact\nthe pages could be read mostly sequentially.\n\n> but I think our current\n> estimates make it too eager to switch to a seqscan.\n",
"msg_date": "Tue, 03 Oct 2006 17:10:04 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing the use of particular execution plans"
},
{
"msg_contents": "Adding -performance back in.\n\nOn Tue, Oct 03, 2006 at 05:10:04PM -0700, Ron Mayer wrote:\n> Jim C. Nasby wrote:\n> > \n> > Index scans are also pretty picky about correlation. If you have really\n> > low correlation you don't want to index scan,\n> \n> I'm still don't think \"correlation\" is the right metric\n> at all for making this decision.\n> \n> If you have a list of addresses clustered by \"zip\"\n> the \"correlation\" of State, City, County, etc will all be zero (since\n> the zip codes don't match the alphabetical order of state or city names)\n> but index scans are still big wins because the data for any given\n> state or city will be packed on the same few pages - and in fact\n> the pages could be read mostly sequentially.\n \nThat's a good point that I don't think has been considered before. I\nthink correlation is still somewhat important, but what's probably far\nmore important is data localization.\n\nOne possible way to calculate this would be to note the location of\nevery tuple with a given value in the heap. Calculate the geometric mean\nof those locations (I think you could essentially average all the\nctids), and divide that by the average distance of each tuple from that\nmean (or maybe the reciprocal of that would be more logical).\n\nObviously we don't want to scan the whole table to do that, but there\nshould be some way to do it via sampling as well.\n\nOr perhaps someone knows of a research paper with real data on how to do\nthis instead of hand-waving. :)\n\n> > but I think our current\n> > estimates make it too eager to switch to a seqscan.\n-- \nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n",
"msg_date": "Tue, 3 Oct 2006 19:55:45 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing the use of particular execution plans"
}
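A crude way to eyeball the kind of locality Ron describes, without any planner support, is to count how many distinct heap pages actually hold the rows for one value. The table and column below are just the address/zip example from this exchange, and the ctid-to-text cast is assumed to be available (on older releases textin(tidout(ctid)) may be needed instead):

    SELECT count(DISTINCT split_part(ctid::text, ',', 1)) AS heap_pages,
           count(*)                                       AS matching_rows
    FROM   addresses
    WHERE  city = 'Springfield';

If matching_rows is large but heap_pages is small, the rows are tightly packed even though the column's correlation statistic may be close to zero.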
] |
[
{
"msg_contents": "Tim Truman wrote:\n> Query:\n> SELECT count(*) as count FROM \n> ( \n> \tSELECT *\n> \t\tFROM transaction t, merchant m\n> \t\tWHERE t.merchant_id = m.id \n> \t\t\tAND m.id = 198\n> \t\t\tAND t.transaction_date >= '20050101'\n> \t\t\tAND t.transaction_date <= '20060925'\n> \t\t\tAND credit_card_no LIKE '1111%111'\n> \n> \tUNION ALL\n> \tSELECT *\n> \t\tFROM transaction t, merchant m\n> \t\tWHERE t.merchant_id = m.id\n> \t\t\tAND m.parent_merchant_id = 198\n> \t\t\tAND t.transaction_date >= '20050101'\n> \t\t\tAND t.transaction_date <= '20060925'\n> \t\t\tAND credit_card_no LIKE '1111%111'\n> ) AS foobar\n> \n\nActually, I think the best course of action is to rewrite the query to a \nfaster alternative. What you can try is:\nSELECT SUM(count) AS count FROM\n(\n\tSELECT count(*) AS count\n\t\tFROM transaction t, merchant m\n\t\tWHERE t.merchant_id = m.id\n\t\t\tAND m.id = 198\n\t\t\tAND t.transaction_date >= '20050101'\n\t\t\tAND t.transaction_date <= '20060925'\n\t\t\tAND credit_card_no LIKE '1111%111'\n\n\tUNION ALL\n\tSELECT count(*) AS count\n\t\tFROM transaction t, merchant m\n\t\tWHERE t.merchant_id = m.id\n\t\t\tAND m.parent_merchant_id = 198\n\t\t\tAND t.transaction_date >= '20050101'\n\t\t\tAND t.transaction_date <= '20060925'\n\t\t\tAND credit_card_no LIKE '1111%111'\n) AS foobar;\n\n\nThe next optimization is to merge the 2 subqueries into one. If you \nschema is such that m.id can not be the same as m.parent_merchant_id I \nthink your query can be reduced to:\nSELECT count(*) AS count\n\tFROM transaction t, merchant m\n\tWHERE t.merchant_id = m.id\n\t\tAND\n\t\t(\n\t\t\tm.id = 198\n\t\t\tOR\n\t\t\tm.parent_merchant_id = 198\n\t\t)\n\t\tAND t.transaction_date >= '20050101'\n\t\tAND t.transaction_date <= '20060925'\n\t\tAND credit_card_no LIKE '1111%111'\n\n\nIf m.id can be the same as m.parent_merchant_id you need something like:\nSELECT SUM(\n\tCASE WHEN m.id = m.parent_merchant_id THEN 2 ELSE 1 END\n\t) AS count\n\tFROM transaction t, merchant m\n\tWHERE t.merchant_id = m.id\n\t\tAND\n\t\t(\n\t\t\tm.id = 198\n\t\t\tOR\n\t\t\tm.parent_merchant_id = 198\n\t\t)\n\t\tAND t.transaction_date >= '20050101'\n\t\tAND t.transaction_date <= '20060925'\n\t\tAND credit_card_no LIKE '1111%111'\n\nJochem\n",
"msg_date": "Wed, 27 Sep 2006 14:12:18 +0200",
"msg_from": "Jochem van Dieten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing the use of particular execution plans"
}
] |
[
{
"msg_contents": "Hello,\n\nwe are running a 7.3 postgres db with only a big table (avg \n500.000records) and 7 indexes for a search engine.\nwe have 2 of this databases and we can switch from one to another.\nLast week we decided to give a try to 8.1 on one of them and everything \nwent fine, db is faster (about 2 or 3 times in our case) and the server \nload is higher - which should mean that faster response time is achieved \nby taking a better use of the server.\n\nWe also activated the autovacuum feature to give it a try and that's \nwere our problems started.\nI left the standard autovacuum configuration just to wait and see, pg \ndecided to start a vacuum on the table just midday when users were \nlaunching search queries on the table and server load reached a very \nhigh value so that in a couple of minutes the db was unusable\n\nWith pg7.3 we use to vacuum the db night time, mostly because the insert \nand updates in this table is made in a batch way: a single task that \nputs 100.000 records in the db in 10/20minutes, so the best time to \nactually vacuum the db would be after this batch.\n\nI have read that autovacuum cannot check to see pg load before launching \nvacuum but is there any patch about it? that would sort out the problem \nin a good and simple way.\nOtherwise, which kind of set of parameters I should put in autovacuum \nconfiguration? I am stuck because in our case the table gets mostly read \nand if I set up things as to vacuum the table after a specific amount of \ninsert/updates, I cannot foresee whether this could happen during \ndaytime when server is under high load.\nHow can I configure the vacuum to run after the daily batch insert/update?\n\nAny help appreciated\nThank you very much\nEdoardo\n\n\n",
"msg_date": "Wed, 27 Sep 2006 18:08:30 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "On Wednesday 27 September 2006 09:08, Edoardo Ceccarelli <[email protected]> \nwrote:\n>\n> How can I configure the vacuum to run after the daily batch\n> insert/update?\n>\n\nIf you really only want it to run then, you should disable autovacuum and \ncontinue to run the vacuum manually.\n\nYou might also investigate the vacuum cost delay options, which will make \nvacuum take longer but will have less of an impact on your database while \nrunning.\n\n\n-- \n\"If a nation values anything more than freedom, it will lose its freedom;\nand the irony of it is that if it is comfort or money that it values more,\nit will lose that too.\" -- Somerset Maugham, Author\n\n",
"msg_date": "Wed, 27 Sep 2006 09:13:22 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
},
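For reference, the cost-based delay Alan mentions is driven by a handful of settings; something along these lines throttles a scheduled VACUUM so it trickles along instead of saturating the disks (the numbers are only illustrative starting points):

    -- per session, before a scripted nightly vacuum
    SET vacuum_cost_delay = 20;    -- ms to sleep each time the cost limit is reached
    SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers the sleep
    VACUUM VERBOSE ANALYZE;

The same parameters can go in postgresql.conf instead, and autovacuum = off turns the daemon back off entirely if the manual schedule turns out to be the better fit.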
{
"msg_contents": "[Edoardo Ceccarelli - Wed at 06:08:30PM +0200]\n> We also activated the autovacuum feature to give it a try and that's \n> were our problems started.\n(...)\n> How can I configure the vacuum to run after the daily batch insert/update?\n\nI think you shouldn't use autovacuum in your case.\n\nWe haven't dared testing out autovacuum yet even though we probably\nshould, so we're running vacuum at fixed times of the day. We have a\nvery simple script to do this, the most important part of it reads:\n\necho \"vacuum verbose analyze;\" | psql $DB_NAME > $logdir/$filename 2>&1\n\n",
"msg_date": "Wed, 27 Sep 2006 18:13:29 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:\n> How can I configure the vacuum to run after the daily batch insert/update?\n\nCheck out this:\nhttp://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html\n\nBy inserting the right row you can disable autovacuum to vacuum your big\ntables, and then you can schedule vacuum nightly for those just as\nbefore. There's still a benefit in that you don't need to care about\nvacuuming the rest of the tables, which will be done just in time.\n\nCheers,\nCsaba.\n\n",
"msg_date": "Wed, 27 Sep 2006 18:13:44 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
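A concrete example of the per-table override Csaba points to, assuming the 8.1 pg_autovacuum layout and a placeholder table name of bigtable (-1 in the threshold columns means "fall back to the global default"):

    INSERT INTO pg_autovacuum
           (vacrelid, enabled,
            vac_base_thresh, vac_scale_factor,
            anl_base_thresh, anl_scale_factor,
            vac_cost_delay, vac_cost_limit)
    VALUES ('bigtable'::regclass, false, -1, -1, -1, -1, -1, -1);

With enabled set to false, autovacuum skips that table entirely and the existing nightly VACUUM job keeps handling it, while the other tables still get vacuumed automatically.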
{
"msg_contents": "In response to Edoardo Ceccarelli <[email protected]>:\n\n> Hello,\n> \n> we are running a 7.3 postgres db with only a big table (avg \n> 500.000records) and 7 indexes for a search engine.\n> we have 2 of this databases and we can switch from one to another.\n> Last week we decided to give a try to 8.1 on one of them and everything \n> went fine, db is faster (about 2 or 3 times in our case) and the server \n> load is higher - which should mean that faster response time is achieved \n> by taking a better use of the server.\n> \n> We also activated the autovacuum feature to give it a try and that's \n> were our problems started.\n> I left the standard autovacuum configuration just to wait and see, pg \n> decided to start a vacuum on the table just midday when users were \n> launching search queries on the table and server load reached a very \n> high value so that in a couple of minutes the db was unusable\n> \n> With pg7.3 we use to vacuum the db night time, mostly because the insert \n> and updates in this table is made in a batch way: a single task that \n> puts 100.000 records in the db in 10/20minutes, so the best time to \n> actually vacuum the db would be after this batch.\n> \n> I have read that autovacuum cannot check to see pg load before launching \n> vacuum but is there any patch about it? that would sort out the problem \n> in a good and simple way.\n> Otherwise, which kind of set of parameters I should put in autovacuum \n> configuration? I am stuck because in our case the table gets mostly read \n> and if I set up things as to vacuum the table after a specific amount of \n> insert/updates, I cannot foresee whether this could happen during \n> daytime when server is under high load.\n> How can I configure the vacuum to run after the daily batch insert/update?\n\nIt doesn't sound as if your setup is a good match for autovacuum. You\nmight be better off going back to the cron vacuums. That's the\nbeauty of Postgres -- it gives you the choice.\n\nIf you want to continue with autovac, you may want to experiment with\nvacuum_cost_delay and associated parameters, which can lessen the\nimpact of vacuuming.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Wed, 27 Sep 2006 12:14:54 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:\n> \n> I have read that autovacuum cannot check to see pg load before\n> launching \n> vacuum but is there any patch about it? that would sort out the\n> problem \n> in a good and simple way. \n\nIn some cases the solution to high load is to vacuum the tables being\nhit the heaviest -- meaning that simply checking machine load isn't\nenough to make that decision.\n\nIn fact, that high load problem is exactly why autovacuum was created in\nthe first place.\n-- \n\n",
"msg_date": "Wed, 27 Sep 2006 12:31:52 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "Bill Moran wrote:\n> In response to Edoardo Ceccarelli <[email protected]>:\n>>\n>> I have read that autovacuum cannot check to see pg load before launching \n>> vacuum but is there any patch about it? that would sort out the problem \n>> in a good and simple way.\n>> Otherwise, which kind of set of parameters I should put in autovacuum \n>> configuration? I am stuck because in our case the table gets mostly read \n>> and if I set up things as to vacuum the table after a specific amount of \n>> insert/updates, I cannot foresee whether this could happen during \n>> daytime when server is under high load.\n>> How can I configure the vacuum to run after the daily batch insert/update?\n>> \n>\n> It doesn't sound as if your setup is a good match for autovacuum. You\n> might be better off going back to the cron vacuums. That's the\n> beauty of Postgres -- it gives you the choice.\n>\n> If you want to continue with autovac, you may want to experiment with\n> vacuum_cost_delay and associated parameters, which can lessen the\n> impact of vacuuming.\n>\n> \nThe db is constantly monitored during high peak so that we can switch to \na backup pg7.3 database that is being vacuumed every night.\nThis is giving me the opportunity to try it so I tried this:\n\nvacuum_cost_delay = 200\nvacuum_cost_page_hit = 5\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\nvacuum_cost_limit = 100\n\nI know these values affect the normal vacuum process but apparently this \nmeans setting\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n # vacuum_cost_delay\n\nand\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovac, -1 means use\n # vacuum_cost_limit\n\n\nfor the rest of them I am currently trying the deafults:\n\n#autovacuum_naptime = 60 # time between autovacuum runs, \nin secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before \nvacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before \nanalyze\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before vacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before \nanalyze\n\nDoes anybody know which process is actually AUTO-vacuum-ing the db?\nSo that I can check when is running...\n\n\n\n\n\n\n\n\n\n\n\n\nBill Moran wrote:\n\nIn response to Edoardo Ceccarelli <[email protected]>:\n\n\n\nI have read that autovacuum cannot check to see pg load before launching \nvacuum but is there any patch about it? that would sort out the problem \nin a good and simple way.\nOtherwise, which kind of set of parameters I should put in autovacuum \nconfiguration? I am stuck because in our case the table gets mostly read \nand if I set up things as to vacuum the table after a specific amount of \ninsert/updates, I cannot foresee whether this could happen during \ndaytime when server is under high load.\nHow can I configure the vacuum to run after the daily batch insert/update?\n \n\n\nIt doesn't sound as if your setup is a good match for autovacuum. You\nmight be better off going back to the cron vacuums. 
That's the\nbeauty of Postgres -- it gives you the choice.\n\nIf you want to continue with autovac, you may want to experiment with\nvacuum_cost_delay and associated parameters, which can lessen the\nimpact of vacuuming.\n\n \n\nThe db is constantly monitored during high peak so that we can switch\nto a backup pg7.3 database that is being vacuumed every night.\nThis is giving me the opportunity to try it so I tried this:\n\nvacuum_cost_delay = 200\nvacuum_cost_page_hit = 5\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\nvacuum_cost_limit = 100\n\nI know these values affect the normal vacuum process but apparently\nthis means setting\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n # vacuum_cost_delay\n\nand \n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovac, -1 means use\n # vacuum_cost_limit\n\n\nfor the rest of them I am currently trying the deafults:\n\n#autovacuum_naptime = 60 # time between autovacuum runs,\nin secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\nvacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before\nanalyze\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\nvacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\nanalyze\n\nDoes anybody know which process is actually AUTO-vacuum-ing the db? \nSo that I can check when is running...",
"msg_date": "Wed, 27 Sep 2006 18:40:22 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
},
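On 8.1 the autovacuum daemon runs as its own backend started by the postmaster, so the simplest way to catch it in the act is from the operating system; the exact process title varies a little by platform, but it is typically something like the following (hypothetical output):

    $ ps auxww | grep -i autovacuum
    postgres  4321  ...  postgres: autovacuum process   mydb

Raising log_min_messages toward the debug levels also makes the server log show when an autovacuum round starts and which database it is looking at.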
{
"msg_contents": "Rod Taylor wrote:\n> On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:\n> \n>> I have read that autovacuum cannot check to see pg load before\n>> launching \n>> vacuum but is there any patch about it? that would sort out the\n>> problem \n>> in a good and simple way. \n>> \n>\n> In some cases the solution to high load is to vacuum the tables being\n> hit the heaviest -- meaning that simply checking machine load isn't\n> enough to make that decision.\n>\n> In fact, that high load problem is exactly why autovacuum was created in\n> the first place.\n> \nTrue,\nbut autovacuum could check load -before- and -during- it's execution and \nit could adjust himself automatically to perform more or less \naggressively depending on the difference between those two values.\nMaybe with a parameter like: maximum-autovacuum-load=0.2\nthat would mean: \"never load the machine more than 20% for the autovacuum\"\n\n...another thing is, how could autovacuum check for machine load, this \nis something I cannot imagine right now...\n\n\n\n\n\n\n\nRod Taylor wrote:\n\nOn Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:\n \n\nI have read that autovacuum cannot check to see pg load before\nlaunching \nvacuum but is there any patch about it? that would sort out the\nproblem \nin a good and simple way. \n \n\n\nIn some cases the solution to high load is to vacuum the tables being\nhit the heaviest -- meaning that simply checking machine load isn't\nenough to make that decision.\n\nIn fact, that high load problem is exactly why autovacuum was created in\nthe first place.\n \n\nTrue, \nbut autovacuum could check load -before- and -during- it's execution\nand it could adjust himself automatically to perform more or less\naggressively depending on the difference between those two values.\nMaybe with a parameter like: maximum-autovacuum-load=0.2\nthat would mean: \"never load the machine more than 20% for the\nautovacuum\"\n\n...another thing is, how could autovacuum check for machine load, this\nis something I cannot imagine right now...",
"msg_date": "Wed, 27 Sep 2006 18:49:23 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "[Edoardo Ceccarelli - Wed at 06:49:23PM +0200]\n> ...another thing is, how could autovacuum check for machine load, this \n> is something I cannot imagine right now...\n\nOne solution I made for our application, is to check the\npg_stats_activity view. It requires some config to get the stats\navailable in that view, though. When the application is to start a\nlow-priority transaction, it will first do:\n\n select count(*) from pg_stat_activity where current_query not like\n '<IDL%' and query_start+?<now();\n\nif the returned value is high, the application will sleep a bit and try\nagain later.\n\n",
"msg_date": "Wed, 27 Sep 2006 18:53:36 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
},
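Spelled out with concrete values (the five-second threshold is arbitrary here, and '<IDLE>' is how idle backends normally show up; the original uses a bind parameter for the interval), the check looks roughly like:

    SELECT count(*)
    FROM   pg_stat_activity
    WHERE  current_query NOT LIKE '<IDLE>%'
      AND  query_start + interval '5 seconds' < now();

This relies on stats_command_string being turned on so that current_query is populated, and the counting user needs enough privilege to see other sessions' queries.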
{
"msg_contents": "Csaba Nagy wrote:\n> On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:\n> \n>> How can I configure the vacuum to run after the daily batch insert/update?\n>> \n>\n> Check out this:\n> http://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html\n>\n> By inserting the right row you can disable autovacuum to vacuum your big\n> tables, and then you can schedule vacuum nightly for those just as\n> before. There's still a benefit in that you don't need to care about\n> vacuuming the rest of the tables, which will be done just in time.\n\nIn addition autovacuum respects the work of manual or cron based \nvacuums, so if you issue a vacuum right after a daily batch insert / \nupdate, autovacuum won't repeat the work of that manual vacuum.\n\n\n",
"msg_date": "Wed, 27 Sep 2006 14:33:10 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "In response to Edoardo Ceccarelli <[email protected]>:\n\n> Rod Taylor wrote:\n> > On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:\n> > \n> >> I have read that autovacuum cannot check to see pg load before\n> >> launching \n> >> vacuum but is there any patch about it? that would sort out the\n> >> problem \n> >> in a good and simple way. \n> >> \n> >\n> > In some cases the solution to high load is to vacuum the tables being\n> > hit the heaviest -- meaning that simply checking machine load isn't\n> > enough to make that decision.\n> >\n> > In fact, that high load problem is exactly why autovacuum was created in\n> > the first place.\n> > \n> True,\n> but autovacuum could check load -before- and -during- it's execution and \n> it could adjust himself automatically to perform more or less \n> aggressively depending on the difference between those two values.\n> Maybe with a parameter like: maximum-autovacuum-load=0.2\n> that would mean: \"never load the machine more than 20% for the autovacuum\"\n\nThis is pretty non-trivial. How do you define 20% load? 20% of the\nCPU? Does that mean that it's OK for autovac to use 3% cpu and 100% of\nyour IO? Ok, so we need to calculate an average of IO and CPU -- which\ndisks? If your WAL logs are on one disk, and you've used tablespaces\nto spread the rest of your DB across different partitions, it can\nbe pretty difficult to determine which IO parameters you want to take\ninto consideration.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Wed, 27 Sep 2006 16:27:56 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
},
{
"msg_contents": ">> True,\n>> but autovacuum could check load -before- and -during- it's execution and \n>> it could adjust himself automatically to perform more or less \n>> aggressively depending on the difference between those two values.\n>> Maybe with a parameter like: maximum-autovacuum-load=0.2\n>> that would mean: \"never load the machine more than 20% for the autovacuum\"\n>> \n>\n> This is pretty non-trivial. How do you define 20% load? 20% of the\n> CPU? Does that mean that it's OK for autovac to use 3% cpu and 100% of\n> your IO? Ok, so we need to calculate an average of IO and CPU -- which\n> disks? If your WAL logs are on one disk, and you've used tablespaces\n> to spread the rest of your DB across different partitions, it can\n> be pretty difficult to determine which IO parameters you want to take\n> into consideration.\n>\n> \nAs I said before, it could be done, the main requirement is to find a \nway for pg to check for a value of the system load; of course it has to \nbe an average value between disk and cpu, of course the daemon would \nhave to collect sample of this values continuously, and of course \neverything would be better if the server it's only running Postgres\nStill I think you are right, it wouldn't suit exactly every situations \nbut it could be an \"emergency\" feature:\nWhat happened to me was very clear:\nserver running pg8 under heavy load, cpu's were 90% idle as usual.\nAt some point the vacuum started, the server reached 50 of overall load \nand cpu's were 1% idle\nI think any test can detect such a situation, regardles if load it's \nmore I/O based or CPU based\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTrue,\nbut autovacuum could check load -before- and -during- it's execution and \nit could adjust himself automatically to perform more or less \naggressively depending on the difference between those two values.\nMaybe with a parameter like: maximum-autovacuum-load=0.2\nthat would mean: \"never load the machine more than 20% for the autovacuum\"\n \n\n\nThis is pretty non-trivial. How do you define 20% load? 20% of the\nCPU? Does that mean that it's OK for autovac to use 3% cpu and 100% of\nyour IO? Ok, so we need to calculate an average of IO and CPU -- which\ndisks? If your WAL logs are on one disk, and you've used tablespaces\nto spread the rest of your DB across different partitions, it can\nbe pretty difficult to determine which IO parameters you want to take\ninto consideration.\n\n \n\nAs I said before, it could be done, the main requirement is to find a\nway for pg to check for a value of the system load; of course it has to\nbe an average value between disk and cpu, of course the daemon would\nhave to collect sample of this values continuously, and of course\neverything would be better if the server it's only running Postgres\nStill I think you are right, it wouldn't suit exactly every situations\nbut it could be an \"emergency\" feature:\nWhat happened to me was very clear: \nserver running pg8 under heavy load, cpu's were 90% idle as usual.\nAt some point the vacuum started, the server reached 50 of overall load\nand cpu's were 1% idle\nI think any test can detect such a situation, regardles if load it's\nmore I/O based or CPU based",
"msg_date": "Thu, 28 Sep 2006 10:44:05 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum on a -mostly- r/o table"
}
] |
[
{
"msg_contents": "List,\n\nI posted a little about this a while back to the general list, but never\nreally got any where with it so I'll try again, this time with a little\nmore detail and hopefully someone can send me in the right direction.\n\nHere is the problem, I have a procedure that is called 100k times a day.\n Most of the time it's screaming fast, but other times it takes a few\nseconds. When it does lag my machine can get behind which causes other\nproblems, so I'm trying to figure out why there is such a huge delta in\nperformance with this proc.\n\nThe proc is pretty large (due to the number of vars) so I will summarize\nit here:\n\n==========================================================================\nCREATE acctmessage( <lots of accounting columns> )RETURNS void AS $$\nBEGIN\nINSERT into tmpaccounting_tab ( ... ) values ( ... );\n\nIF _acctType = 'start' THEN\n BEGIN\n INSERT into radutmp_tab ( ... ) valuse ( ... );\n EXCEPTION WHEN UNIQUE_VIOLATION THEN\n NULL;\n END;\nELSIF _acctType = 'stop' THEN\n UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName =\n_userName;\n IF (NOT FOUND) THEN\n INSERT into radutmp_tab ( ... ) values ( ... );\n END IF;\n\nEND IF;\nEND;\n$$\nLANGUAGE plpgsql;\n==========================================================================\n\nSo in a nutshell, if I get an accounting record put it in the\ntmpaccounting_tab and then insert or update the radutmp_tab based on\nwhat kind of record it is. If for some reason the message is a start\nmessage and a duplicate, drop it, and if the message is a stop message\nand we don't have the start then insert it.\n\nThe tmpaccounting_tab table doesn't have any indexes and gets flushed to\nthe accounting_tab table nightly so it should have very good insert\nperformance as the table is kept small (compared to accounting_tab) and\ndoesn't have index overhead. The radutmp_tab is also kept small as\ncompleted sessions are flushed to another table nightly, but I do keep\nan index on sessionId and userName so the update isn't slow.\n\nNow that you have the layout, the problem: I log whenever a query takes\nmore than 250ms and have logged this query:\n\nduration: 3549.229 ms statement: select acctMessage( 'stop',\n'username', 'address', 'time', 'session', 'port', 'address', 'bytes',\n'bytes', 0, 0, 1, 'reason', '', '', '', 'proto', 'service', 'info')\n\nBut when I go do an explain analyze it is very fast:\n\n QUERY PLAN\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.03 rows=1 width=0) (actual time=6.812..6.813\nrows=1 loops=1)\n Total runtime: 6.888 ms\n\nSo the question is why on a relatively simple proc and I getting a query\nperformance delta between 3549ms and 7ms?\n\nHere are some values from my postgres.conf to look at:\n\nshared_buffers = 60000 # min 16 or max_connections*2,\n8KB each\ntemp_buffers = 5000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 131072 # min 64, size in KB\nmaintenance_work_mem = 262144 # min 1024, size in KB\nmax_stack_depth = 2048 # min 100, size in KB\neffective_cache_size = 65536 # typically 8KB each\n\n\nThanks for any help you can give,\nschu\n\n",
"msg_date": "Wed, 27 Sep 2006 10:37:22 -0800",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with inconsistant query performance."
},
{
"msg_contents": "Periodically taking longer is probably a case of some other process in\nthe database holding a lock you need, or otherwise bogging the system\ndown, especially if you're always running acctmessage from the same\nconnection (because the query plans shouldn't be changing then). I'd\nsuggest looking at what else is happening at the same time.\n\nAlso, it's more efficient to operate on chunks of data rather than one\nrow at a time whenever possible. If you have to log each row\nindividually, consider simply logging them into a table, and then\nperiodically pulling data out of that table to do additional processing\non it.\n\nBTW, your detection of duplicates/row existance has a race condition.\nTake a look at example 36-1 at\nhttp://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\nfor a better way to handle it.\n\nOn Wed, Sep 27, 2006 at 10:37:22AM -0800, Matthew Schumacher wrote:\n> List,\n> \n> I posted a little about this a while back to the general list, but never\n> really got any where with it so I'll try again, this time with a little\n> more detail and hopefully someone can send me in the right direction.\n> \n> Here is the problem, I have a procedure that is called 100k times a day.\n> Most of the time it's screaming fast, but other times it takes a few\n> seconds. When it does lag my machine can get behind which causes other\n> problems, so I'm trying to figure out why there is such a huge delta in\n> performance with this proc.\n> \n> The proc is pretty large (due to the number of vars) so I will summarize\n> it here:\n> \n> ==========================================================================\n> CREATE acctmessage( <lots of accounting columns> )RETURNS void AS $$\n> BEGIN\n> INSERT into tmpaccounting_tab ( ... ) values ( ... );\n> \n> IF _acctType = 'start' THEN\n> BEGIN\n> INSERT into radutmp_tab ( ... ) valuse ( ... );\n> EXCEPTION WHEN UNIQUE_VIOLATION THEN\n> NULL;\n> END;\n> ELSIF _acctType = 'stop' THEN\n> UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName =\n> _userName;\n> IF (NOT FOUND) THEN\n> INSERT into radutmp_tab ( ... ) values ( ... );\n> END IF;\n> \n> END IF;\n> END;\n> $$\n> LANGUAGE plpgsql;\n> ==========================================================================\n> \n> So in a nutshell, if I get an accounting record put it in the\n> tmpaccounting_tab and then insert or update the radutmp_tab based on\n> what kind of record it is. If for some reason the message is a start\n> message and a duplicate, drop it, and if the message is a stop message\n> and we don't have the start then insert it.\n> \n> The tmpaccounting_tab table doesn't have any indexes and gets flushed to\n> the accounting_tab table nightly so it should have very good insert\n> performance as the table is kept small (compared to accounting_tab) and\n> doesn't have index overhead. 
The radutmp_tab is also kept small as\n> completed sessions are flushed to another table nightly, but I do keep\n> an index on sessionId and userName so the update isn't slow.\n> \n> Now that you have the layout, the problem: I log whenever a query takes\n> more than 250ms and have logged this query:\n> \n> duration: 3549.229 ms statement: select acctMessage( 'stop',\n> 'username', 'address', 'time', 'session', 'port', 'address', 'bytes',\n> 'bytes', 0, 0, 1, 'reason', '', '', '', 'proto', 'service', 'info')\n> \n> But when I go do an explain analyze it is very fast:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.03 rows=1 width=0) (actual time=6.812..6.813\n> rows=1 loops=1)\n> Total runtime: 6.888 ms\n> \n> So the question is why on a relatively simple proc and I getting a query\n> performance delta between 3549ms and 7ms?\n> \n> Here are some values from my postgres.conf to look at:\n> \n> shared_buffers = 60000 # min 16 or max_connections*2,\n> 8KB each\n> temp_buffers = 5000 # min 100, 8KB each\n> #max_prepared_transactions = 5 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 131072 # min 64, size in KB\n> maintenance_work_mem = 262144 # min 1024, size in KB\n> max_stack_depth = 2048 # min 100, size in KB\n> effective_cache_size = 65536 # typically 8KB each\n> \n> \n> Thanks for any help you can give,\n> schu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 27 Sep 2006 15:31:55 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
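The pattern in that documentation example, adapted to the names already used in this thread (only a sketch, with the column lists elided exactly as in the original post; it assumes a unique constraint on sessionId/userName, since without one the exception never fires):

    LOOP
        UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName = _userName;
        IF FOUND THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO radutmp_tab ( ... ) VALUES ( ... );
            RETURN;
        EXCEPTION WHEN UNIQUE_VIOLATION THEN
            NULL;  -- another session inserted the row first; loop back and retry the UPDATE
        END;
    END LOOP;

The loop guarantees that one of the two statements eventually succeeds even when two sessions race on the same session id.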
{
"msg_contents": "Jim,\n\nThanks for the help. I went and looked at that example and I don't see\nhow it's different than the \"INSERT into radutmp_tab\" I'm already doing.\n Both raise an exception, the only difference is that I'm not doing\nanything with it. Perhaps you are talking about the \"IF (NOT FOUND)\" I\nput after the \"UPDATE radutmp_tab\". Should this be an EXCEPTION\ninstead? Also I don't know how this could cause a race condition. As\nfar as I understand each proc is run in it's own transaction, and the\ncode in the proc is run serially. Can you explain more why this could\ncase a race?\n\nThanks,\nschu\n\n\n\nJim C. Nasby wrote:\n> Periodically taking longer is probably a case of some other process in\n> the database holding a lock you need, or otherwise bogging the system\n> down, especially if you're always running acctmessage from the same\n> connection (because the query plans shouldn't be changing then). I'd\n> suggest looking at what else is happening at the same time.\n> \n> Also, it's more efficient to operate on chunks of data rather than one\n> row at a time whenever possible. If you have to log each row\n> individually, consider simply logging them into a table, and then\n> periodically pulling data out of that table to do additional processing\n> on it.\n> \n> BTW, your detection of duplicates/row existance has a race condition.\n> Take a look at example 36-1 at\n> http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n> for a better way to handle it.\n\n>> ==========================================================================\n>> CREATE acctmessage( <lots of accounting columns> )RETURNS void AS $$\n>> BEGIN\n>> INSERT into tmpaccounting_tab ( ... ) values ( ... );\n>>\n>> IF _acctType = 'start' THEN\n>> BEGIN\n>> INSERT into radutmp_tab ( ... ) valuse ( ... );\n>> EXCEPTION WHEN UNIQUE_VIOLATION THEN\n>> NULL;\n>> END;\n>> ELSIF _acctType = 'stop' THEN\n>> UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName =\n>> _userName;\n>> IF (NOT FOUND) THEN\n>> INSERT into radutmp_tab ( ... ) values ( ... );\n>> END IF;\n>>\n>> END IF;\n>> END;\n>> $$\n>> LANGUAGE plpgsql;\n>> ==========================================================================\n",
"msg_date": "Wed, 27 Sep 2006 13:33:09 -0800",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with inconsistant query performance."
},
{
"msg_contents": "On Wed, Sep 27, 2006 at 01:33:09PM -0800, Matthew Schumacher wrote:\n> Jim,\n> \n> Thanks for the help. I went and looked at that example and I don't see\n> how it's different than the \"INSERT into radutmp_tab\" I'm already doing.\n> Both raise an exception, the only difference is that I'm not doing\n> anything with it. Perhaps you are talking about the \"IF (NOT FOUND)\" I\n> put after the \"UPDATE radutmp_tab\". Should this be an EXCEPTION\n> instead? Also I don't know how this could cause a race condition. As\n> far as I understand each proc is run in it's own transaction, and the\n> code in the proc is run serially. Can you explain more why this could\n> case a race?\n \nIt can cause a race if another process could be performing those same\ninserts or updates at the same time.\n\nI know the UPDATE case can certainly cause a race. 2 connections try to\nupdate, both hit NOT FOUND, both try to insert... only one will get to\ncommit.\n\nI think that the UNIQUE_VIOLATION case should be safe, since a second\ninserter should block if there's another insert that's waiting to\ncommit.\n\nDELETEs are something else to think about for both cases.\n\nIf you're certain that only one process will be performing DML on those\ntables at any given time, then what you have is safe. But if that's the\ncase, I'm thinking you should be able to group things into chunks, which\nshould be more efficient.\n\n> Thanks,\n> schu\n> \n> \n> \n> Jim C. Nasby wrote:\n> > Periodically taking longer is probably a case of some other process in\n> > the database holding a lock you need, or otherwise bogging the system\n> > down, especially if you're always running acctmessage from the same\n> > connection (because the query plans shouldn't be changing then). I'd\n> > suggest looking at what else is happening at the same time.\n> > \n> > Also, it's more efficient to operate on chunks of data rather than one\n> > row at a time whenever possible. If you have to log each row\n> > individually, consider simply logging them into a table, and then\n> > periodically pulling data out of that table to do additional processing\n> > on it.\n> > \n> > BTW, your detection of duplicates/row existance has a race condition.\n> > Take a look at example 36-1 at\n> > http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n> > for a better way to handle it.\n> \n> >> ==========================================================================\n> >> CREATE acctmessage( <lots of accounting columns> )RETURNS void AS $$\n> >> BEGIN\n> >> INSERT into tmpaccounting_tab ( ... ) values ( ... );\n> >>\n> >> IF _acctType = 'start' THEN\n> >> BEGIN\n> >> INSERT into radutmp_tab ( ... ) valuse ( ... );\n> >> EXCEPTION WHEN UNIQUE_VIOLATION THEN\n> >> NULL;\n> >> END;\n> >> ELSIF _acctType = 'stop' THEN\n> >> UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId AND userName =\n> >> _userName;\n> >> IF (NOT FOUND) THEN\n> >> INSERT into radutmp_tab ( ... ) values ( ... );\n> >> END IF;\n> >>\n> >> END IF;\n> >> END;\n> >> $$\n> >> LANGUAGE plpgsql;\n> >> ==========================================================================\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 27 Sep 2006 16:42:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
{
"msg_contents": "Jim C. Nasby wrote:\n> \n> It can cause a race if another process could be performing those same\n> inserts or updates at the same time.\n\nThere are inserts and updates running all of the time, but never the\nsame data. I'm not sure how I can get around this since the queries are\ncoming from my radius system which is not able to queue this stuff up\nbecause it waits for a successful query before returning an OK packet\nback to the client.\n\n> \n> I know the UPDATE case can certainly cause a race. 2 connections try to\n> update, both hit NOT FOUND, both try to insert... only one will get to\n> commit.\n\nWhy is that? Doesn't the first update lock the row causing the second\none to wait, then the second one stomps on the row allowing both to\ncommit? I must be confused....\n\n> \n> I think that the UNIQUE_VIOLATION case should be safe, since a second\n> inserter should block if there's another insert that's waiting to\n> commit.\n\nAre you saying that inserts inside of an EXCEPTION block, but normal\ninserts don't?\n\n> \n> DELETEs are something else to think about for both cases.\n\nI only do one delete and that is every night when I move the data to the\nprimary table and remove that days worth of data from the tmp table.\nThis is done at non-peak times with a vacuum, so I think I'm good here.\n\n> \n> If you're certain that only one process will be performing DML on those\n> tables at any given time, then what you have is safe. But if that's the\n> case, I'm thinking you should be able to group things into chunks, which\n> should be more efficient.\n\nYea, I wish I could, but I really need to do one at a time because of\nhow radius waits for a successful query before telling the access server\nall is well. If the query fails, the access server won't get the 'OK'\npacket and will send the data to the secondary radius system where it\ngets queued.\n\nDo you know of a way to see what is going on with the locking system\nother than \"select * from pg_locks\"? I can't ever seem to catch the\nsystem when queries start to lag.\n\nThanks again,\nschu\n",
"msg_date": "Wed, 27 Sep 2006 14:17:23 -0800",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with inconsistant query performance."
},
{
"msg_contents": "On Wed, Sep 27, 2006 at 02:17:23PM -0800, Matthew Schumacher wrote:\n> Jim C. Nasby wrote:\n> > \n> > It can cause a race if another process could be performing those same\n> > inserts or updates at the same time.\n> \n> There are inserts and updates running all of the time, but never the\n> same data. I'm not sure how I can get around this since the queries are\n> coming from my radius system which is not able to queue this stuff up\n> because it waits for a successful query before returning an OK packet\n> back to the client.\n> \n> > \n> > I know the UPDATE case can certainly cause a race. 2 connections try to\n> > update, both hit NOT FOUND, both try to insert... only one will get to\n> > commit.\n> \n> Why is that? Doesn't the first update lock the row causing the second\n> one to wait, then the second one stomps on the row allowing both to\n> commit? I must be confused....\n\nWhat if there's no row to update?\n\nProcess A Process B\nUPDATE .. NOT FOUND\n UPDATE .. NOT FOUND\n INSERT\nINSERT blocks\n COMMIT\nUNIQUE_VIOLATION\n\nThat's assuming that there's a unique index. If there isn't one, you'd\nget duplicate records.\n\n> > I think that the UNIQUE_VIOLATION case should be safe, since a second\n> > inserter should block if there's another insert that's waiting to\n> > commit.\n> \n> Are you saying that inserts inside of an EXCEPTION block, but normal\n> inserts don't?\n\nNo... if there's a unique index, a second INSERT attempting to create a\nduplicate record will block until the first INSERT etiher commits or\nrollsback.\n\n> > DELETEs are something else to think about for both cases.\n> \n> I only do one delete and that is every night when I move the data to the\n> primary table and remove that days worth of data from the tmp table.\n> This is done at non-peak times with a vacuum, so I think I'm good here.\n\nExcept that you might still have someone fire off that function while\nthe delete's running, or vice-versa. So there could be a race condition\n(I haven't thought enough about what race conditions that could cause).\n\n> > If you're certain that only one process will be performing DML on those\n> > tables at any given time, then what you have is safe. But if that's the\n> > case, I'm thinking you should be able to group things into chunks, which\n> > should be more efficient.\n> \n> Yea, I wish I could, but I really need to do one at a time because of\n> how radius waits for a successful query before telling the access server\n> all is well. If the query fails, the access server won't get the 'OK'\n> packet and will send the data to the secondary radius system where it\n> gets queued.\n \nIn that case, the key is to do the absolute smallest amount of work\npossible as part of that transaction. Ideally, you would only insert a\nrecord into a queue table somewhere, and then periodically process\nrecords out of that table in batches.\n\n> Do you know of a way to see what is going on with the locking system\n> other than \"select * from pg_locks\"? I can't ever seem to catch the\n> system when queries start to lag.\n\nNo. Your best bet is to open two psql sessions and step through things\nin different combinations (make sure and do this in transactions).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 27 Sep 2006 17:28:57 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
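For reference, a minimal sketch of the retry-loop pattern being described here, with hypothetical table and column names (my_table, key, value) and a unique index assumed on key; this is not code from the thread, just an illustration of how the UPDATE-then-INSERT race can be closed.

    CREATE OR REPLACE FUNCTION merge_row(k integer, v text) RETURNS void AS $$
    BEGIN
        LOOP
            -- try the UPDATE first
            UPDATE my_table SET value = v WHERE key = k;
            IF found THEN
                RETURN;
            END IF;
            -- no row yet: try to INSERT; if another session beat us to it,
            -- the unique index raises unique_violation and we loop back
            BEGIN
                INSERT INTO my_table (key, value) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- lost the race; loop back and retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Whichever session commits its INSERT first wins; the loser falls back to the UPDATE instead of erroring out.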
{
"msg_contents": "> So the question is why on a relatively simple proc and I getting a query\n> performance delta between 3549ms and 7ms?\n\nWhat version of PG is it?\n\nI had such problems in a pseudo-realtime app I use here with Postgres, and\nthey went away when I moved to 8.1 (from 7.4). I guess it is better shared\nbuffer management code (don`t You see a\nbig_query_searching_through_half_the_db just before You get this slow\ninsert? ) .\n\nGreetings\nMarcin\n\n",
"msg_date": "Thu, 28 Sep 2006 14:42:32 +0200",
"msg_from": "\"Marcin Mank\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
{
"msg_contents": "Marcin Mank wrote:\n>> So the question is why on a relatively simple proc and I getting a query\n>> performance delta between 3549ms and 7ms?\n> \n> What version of PG is it?\n> \n> I had such problems in a pseudo-realtime app I use here with Postgres, and\n> they went away when I moved to 8.1 (from 7.4). I guess it is better shared\n> buffer management code (don`t You see a\n> big_query_searching_through_half_the_db just before You get this slow\n> insert? ) .\n> \n> Greetings\n> Marcin\n> \n\nMarcin,\n\nIt is 8.1.4, and there is searching being done on the radutmp_tab all of\nthe time. It is relatively small though, only a couple of thousand\nrows. The tmpaccounting_tab table which also gets inserts is only used\nfor inserting, then one large query every night to flush the data to an\nindexed table.\n\nWhat I really need is a way to profile my proc when it runs slow so that\nI can resolve which of the queries is really slow. Anyone with an idea\non how to do this?\n\nschu\n",
"msg_date": "Thu, 28 Sep 2006 07:18:37 -0800",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with inconsistant query performance."
},
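One low-tech way to narrow down which statement inside the procedure is slow, sketched with a made-up function and stand-in queries (not code from the thread): wrap the interesting statements in RAISE NOTICE calls that print timeofday(), which keeps advancing inside a transaction, unlike now().

    CREATE OR REPLACE FUNCTION timing_demo() RETURNS void AS $$
    BEGIN
        RAISE NOTICE 'start:  %', timeofday();
        PERFORM count(*) FROM pg_class;        -- stand-in for the first query
        RAISE NOTICE 'step 1: %', timeofday();
        PERFORM count(*) FROM pg_attribute;    -- stand-in for the second query
        RAISE NOTICE 'step 2: %', timeofday();
    END;
    $$ LANGUAGE plpgsql;

The notices show up in psql and, depending on log settings, in the server log, so the slow step can be spotted without changing what the function does.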
{
"msg_contents": "In response to Matthew Schumacher <[email protected]>:\n> \n> What I really need is a way to profile my proc when it runs slow so that\n> I can resolve which of the queries is really slow. Anyone with an idea\n> on how to do this?\n\nYou could turn on statement logging and duration logging. This would\ngive you a record of when things run and how long they take. A little\nwork analyzing should show you which queries are running when your\nfavorite query slows down.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Thu, 28 Sep 2006 11:28:43 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
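For anyone trying this, the relevant postgresql.conf settings look roughly like the following (example values only; a configuration reload is enough to pick them up):

    # log every statement, or 'mod' for just the data-modifying ones
    log_statement = 'all'
    # additionally log any statement that runs longer than 200 ms
    log_min_duration_statement = 200
    # a timestamp and PID in the prefix make correlating lines easier
    log_line_prefix = '%t [%p] '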
{
"msg_contents": "On Thu, Sep 28, 2006 at 11:28:43AM -0400, Bill Moran wrote:\n> In response to Matthew Schumacher <[email protected]>:\n> > \n> > What I really need is a way to profile my proc when it runs slow so that\n> > I can resolve which of the queries is really slow. Anyone with an idea\n> > on how to do this?\n> \n> You could turn on statement logging and duration logging. This would\n> give you a record of when things run and how long they take. A little\n> work analyzing should show you which queries are running when your\n> favorite query slows down.\n\nBy default, that doesn't help you debug what's happening inside a\nfunction, because you only get the call to the function. I don't know if\nyou can increase verbosity to combat that.\n\nSomething else to consider is that gettimeofday() on some platforms is\npainfully slow, which could completely skew all your numbers.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 11:15:49 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
},
{
"msg_contents": "In response to \"Jim C. Nasby\" <[email protected]>:\n\n> On Thu, Sep 28, 2006 at 11:28:43AM -0400, Bill Moran wrote:\n> > In response to Matthew Schumacher <[email protected]>:\n> > > \n> > > What I really need is a way to profile my proc when it runs slow so that\n> > > I can resolve which of the queries is really slow. Anyone with an idea\n> > > on how to do this?\n> > \n> > You could turn on statement logging and duration logging. This would\n> > give you a record of when things run and how long they take. A little\n> > work analyzing should show you which queries are running when your\n> > favorite query slows down.\n> \n> By default, that doesn't help you debug what's happening inside a\n> function, because you only get the call to the function. I don't know if\n> you can increase verbosity to combat that.\n\nRight, but my point was that he believes another query is interfering\nwhen the target query is slow. Turning on those logging statements\nwill:\na) Allow him to identify times when the query is slow.\nb) Identify other queries that are running at the same time.\n\nIf he sees a pattern (i.e. My query is always slow if query X5 is\nrunning at the same time) he'll have a good lead into further research.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Thu, 28 Sep 2006 13:40:24 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with inconsistant query performance."
}
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 2658\nLogged by: Graham Davis\nEmail address: [email protected]\nPostgreSQL version: 8.1.4\nOperating system: Linux\nDescription: Query not using index\nDetails: \n\nI know that in version 8 you guys added support so that aggregate functions\ncan take advantage of indexes. However, I have a simple query that is not\ntaking advantage of an index where I believe it should.\n\nI have a large table full of GPS positions. I want to query the table for\nthe most recent location of each asset (an asset is essentially a vehicle). \nThe ts column is the timestamp, so I am using this to figure out the most\nrecent position. I use the following query to do it:\n\nSELECT assetid, max(ts) AS ts \nFROM asset_positions \nGROUP BY assetid;\n\nI have an index on (ts), another index on (assetid) and a multikey index on\n(assetid, ts). I know the assetid index is pointless since the multikey one\ntakes its place, but I put it there while testing just to make sure. The\nANALYZE EXPLAIN for this query is:\n\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------------------\n HashAggregate (cost=125423.96..125424.21 rows=20 width=12) (actual\ntime=39693.995..39694.036 rows=20 loops=1)\n -> Seq Scan on asset_positions (cost=0.00..116654.64 rows=1753864\nwidth=12) (actual time=20002.362..34724.896 rows=1738693 loops=1)\n Total runtime: 39694.245 ms\n(3 rows)\n\nYou can see it is doing a sequential scan on the table when it should be\nusing the (assetid, ts) index, or at the very least the (ts) index. This\nquery takes about 40 seconds to complete with a table of 1.7 million rows. \nI tested running the query without the group by as follows:\n\nSELECT max(ts) AS ts\nFROM asset_positions;\n\nThis query DOES use the (ts) index and takes less than 1 ms to complete. So\nI'm not sure why my initial query is not using one of the indexes. I have\nto use the GROUP BY in my query so that I get the max ts of EACH asset. \n\nI've tried restructuring my query so that it will use an index, but nothing\nseems to work. I tried this syntax for example:\n\nSELECT DISTINCT ON (assetid) assetid, ts\nFROM asset_positions \nORDER BY assetid, ts DESC;\n\nIt still does a sequential scan and takes 40+ seconds to complete. If I am\nmissing something here, please let me know, but I believe this is a bug that\nneeds addressing. If it is not a bug (and there just simply isn't support\nfor this with multikey indexes yet), please let me know so I can either try\nrestructuring the coding I am working on, or move on for now. The\ndocumentation does not mention anything about this, but I know from reading\na list of changes in version 8 that this sort of support was added for\naggregate functions. If you need more information, please let me know,\nthanks in advance.\n",
"msg_date": "Wed, 27 Sep 2006 20:56:32 GMT",
"msg_from": "\"Graham Davis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #2658: Query not using index"
},
{
"msg_contents": "This shouldn't have been submitted to the bugs list, as it isn't a bug.\nThe best spot for this kind of question is the performance list so I am\ncopying it there and redirecting followups there.\n\nOn Wed, Sep 27, 2006 at 20:56:32 +0000,\n Graham Davis <[email protected]> wrote:\n> \n> SELECT assetid, max(ts) AS ts \n> FROM asset_positions \n> GROUP BY assetid;\n> \n> I have an index on (ts), another index on (assetid) and a multikey index on\n> (assetid, ts). I know the assetid index is pointless since the multikey one\n> takes its place, but I put it there while testing just to make sure. The\n> ANALYZE EXPLAIN for this query is:\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -------------------------------------------------------------\n> HashAggregate (cost=125423.96..125424.21 rows=20 width=12) (actual\n> time=39693.995..39694.036 rows=20 loops=1)\n> -> Seq Scan on asset_positions (cost=0.00..116654.64 rows=1753864\n> width=12) (actual time=20002.362..34724.896 rows=1738693 loops=1)\n> Total runtime: 39694.245 ms\n> (3 rows)\n> \n> You can see it is doing a sequential scan on the table when it should be\n> using the (assetid, ts) index, or at the very least the (ts) index. This\n> query takes about 40 seconds to complete with a table of 1.7 million rows. \n> I tested running the query without the group by as follows:\n\n> SELECT DISTINCT ON (assetid) assetid, ts\n> FROM asset_positions \n> ORDER BY assetid, ts DESC;\n\nThis is almost what you want to do to get an alternative plan. But you\nneed to ORDER BY assetid DESC, ts DESC to make use of the multicolumn\nindex. If you really need the other output order, reverse it in your\napplication or use the above as a subselect in another query that orders\nby assetid ASC.\n",
"msg_date": "Mon, 2 Oct 2006 19:01:47 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
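Spelled out, that suggestion looks something like the sketch below (not from the original message); whether the planner actually turns it into a backward scan of the (assetid, ts) index still has to be verified with EXPLAIN ANALYZE.

    SELECT assetid, ts
    FROM (
        SELECT DISTINCT ON (assetid) assetid, ts
        FROM asset_positions
        ORDER BY assetid DESC, ts DESC
    ) AS latest
    ORDER BY assetid;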
{
"msg_contents": "Hi,\n\nAdding DESC to both columns in the SORT BY did not make the query use \nthe multikey index. So both\n\nSELECT DISTINCT ON (assetid) assetid, ts\nFROM asset_positions \nORDER BY assetid, ts DESC;\n\nand\n\nSELECT DISTINCT ON (assetid) assetid, ts\nFROM asset_positions \nORDER BY assetid DESC, ts DESC;\n\nuse the same query plans and both do sequential scans without using either the (assetid, ts) or (ts) indexes. Any other ideas on how to make this query use an index? Thanks,\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n\n\n>On Wed, Sep 27, 2006 at 20:56:32 +0000,\n> Graham Davis <[email protected]> wrote:\n> \n>\n>>SELECT assetid, max(ts) AS ts \n>>FROM asset_positions \n>>GROUP BY assetid;\n>>\n>>I have an index on (ts), another index on (assetid) and a multikey index on\n>>(assetid, ts). I know the assetid index is pointless since the multikey one\n>>takes its place, but I put it there while testing just to make sure. The\n>>ANALYZE EXPLAIN for this query is:\n>>\n>> QUERY PLAN\n>>----------------------------------------------------------------------------\n>>-------------------------------------------------------------\n>> HashAggregate (cost=125423.96..125424.21 rows=20 width=12) (actual\n>>time=39693.995..39694.036 rows=20 loops=1)\n>> -> Seq Scan on asset_positions (cost=0.00..116654.64 rows=1753864\n>>width=12) (actual time=20002.362..34724.896 rows=1738693 loops=1)\n>> Total runtime: 39694.245 ms\n>>(3 rows)\n>>\n>>You can see it is doing a sequential scan on the table when it should be\n>>using the (assetid, ts) index, or at the very least the (ts) index. This\n>>query takes about 40 seconds to complete with a table of 1.7 million rows. \n>>I tested running the query without the group by as follows:\n>> \n>>\n>\n> \n>\n>>SELECT DISTINCT ON (assetid) assetid, ts\n>>FROM asset_positions \n>>ORDER BY assetid, ts DESC;\n>> \n>>\n>\n>This is almost what you want to do to get an alternative plan. But you\n>need to ORDER BY assetid DESC, ts DESC to make use of the multicolumn\n>index. If you really need the other output order, reverse it in your\n>application or use the above as a subselect in another query that orders\n>by assetid ASC.\n> \n>\n\n\n",
"msg_date": "Tue, 03 Oct 2006 11:20:49 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "[email protected] (Graham Davis) writes:\n> Adding DESC to both columns in the SORT BY did not make the query use\n> the multikey index. So both\n>\n> SELECT DISTINCT ON (assetid) assetid, ts\n> FROM asset_positions ORDER BY assetid, ts DESC;\n>\n> and\n>\n> SELECT DISTINCT ON (assetid) assetid, ts\n> FROM asset_positions ORDER BY assetid DESC, ts DESC;\n>\n> use the same query plans and both do sequential scans without using\n> either the (assetid, ts) or (ts) indexes. Any other ideas on how to\n> make this query use an index? Thanks,\n\nWhy do you want to worsen performance by forcing the use of an index?\n\nYou are reading through the entire table, after all, and doing so via\na sequential scan is normally the fastest way to do that. An index\nscan would only be more efficient if you don't have enough space in\nmemory to store all assetid values.\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/emacs.html\nExpect the unexpected.\n-- The Hitchhiker's Guide to the Galaxy, page 7023\n",
"msg_date": "Tue, 03 Oct 2006 15:02:32 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "The asset_positions table has about 1.7 million rows, and this query \ntakes over 40 seconds to do a sequential scan. Initially I was trying \nto get the original query:\n\nSELECT assetid, max(ts) AS ts \nFROM asset_positions \nGROUP BY assetid;\n\nto use the multikey index since I read that PostgreSQL 8 added support \nfor aggregates to use indexes. However, the GROUP BY was causing the query\nplan to not use any index (removing the GROUP by allowed the query to \nuse the ts index and it took only 50 ms to run). Since I need the query \nto find the max time\nfor EACH asset, I can't just drop the GROUP BY from my query. So I was \ntrying some alternate ways of writing the query (as described in the \nbelow email) to\nforce the use of one of these indexes.\n\n40 seconds is much too slow for this query to run and I'm assuming that \nthe use of an index will make it much faster (as seen when I removed the \nGROUP BY clause). Any tips?\n\nGraham.\n\n\nChris Browne wrote:\n\n>[email protected] (Graham Davis) writes:\n> \n>\n>>Adding DESC to both columns in the SORT BY did not make the query use\n>>the multikey index. So both\n>>\n>>SELECT DISTINCT ON (assetid) assetid, ts\n>>FROM asset_positions ORDER BY assetid, ts DESC;\n>>\n>>and\n>>\n>>SELECT DISTINCT ON (assetid) assetid, ts\n>>FROM asset_positions ORDER BY assetid DESC, ts DESC;\n>>\n>>use the same query plans and both do sequential scans without using\n>>either the (assetid, ts) or (ts) indexes. Any other ideas on how to\n>>make this query use an index? Thanks,\n>> \n>>\n>\n>Why do you want to worsen performance by forcing the use of an index?\n>\n>You are reading through the entire table, after all, and doing so via\n>a sequential scan is normally the fastest way to do that. An index\n>scan would only be more efficient if you don't have enough space in\n>memory to store all assetid values.\n> \n>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Tue, 03 Oct 2006 12:10:49 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "Also, the multikey index of (assetid, ts) would already be sorted and \nthat is why using such an index in this case is\nfaster than doing a sequential scan that does the sorting afterwards.\n\nGraham.\n\n\nChris Browne wrote:\n\n>[email protected] (Graham Davis) writes:\n> \n>\n>>Adding DESC to both columns in the SORT BY did not make the query use\n>>the multikey index. So both\n>>\n>>SELECT DISTINCT ON (assetid) assetid, ts\n>>FROM asset_positions ORDER BY assetid, ts DESC;\n>>\n>>and\n>>\n>>SELECT DISTINCT ON (assetid) assetid, ts\n>>FROM asset_positions ORDER BY assetid DESC, ts DESC;\n>>\n>>use the same query plans and both do sequential scans without using\n>>either the (assetid, ts) or (ts) indexes. Any other ideas on how to\n>>make this query use an index? Thanks,\n>> \n>>\n>\n>Why do you want to worsen performance by forcing the use of an index?\n>\n>You are reading through the entire table, after all, and doing so via\n>a sequential scan is normally the fastest way to do that. An index\n>scan would only be more efficient if you don't have enough space in\n>memory to store all assetid values.\n> \n>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Tue, 03 Oct 2006 12:13:43 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "[email protected] (Graham Davis) writes:\n> 40 seconds is much too slow for this query to run and I'm assuming\n> that the use of an index will make it much faster (as seen when I\n> removed the GROUP BY clause). Any tips?\n\nAssumptions are dangerous things.\n\nAn aggregate like this has *got to* scan the entire table, and given\nthat that is the case, an index scan is NOT optimal; a seq scan is.\n\nAn index scan is just going to be slower.\n-- \nlet name=\"cbbrowne\" and tld=\"linuxdatabases.info\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/linux.html\n\"The computer is the ultimate polluter: its feces are\nindistinguishable from the food it produces.\" -- Alan J. Perlis\n",
"msg_date": "Tue, 03 Oct 2006 15:18:36 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "How come an aggreate like that has to use a sequential scan? I know \nthat PostgreSQL use to have to do a sequential scan for all aggregates, \nbut there was support added to version 8 so that aggregates would take \nadvantage of indexes. This is why\n\nSELECT max(ts) AS ts\nFROM asset_positions;\n\nUses an index on the ts column and only takes 50 milliseconds. When I \nadded the group by it would not use a multikey index or any other \nindex. Is there just no support for aggregates to use multikey \nindexes? Sorry to be so pushy, but I just want to make sure I \nunderstand why the above query can use an index and the following can't:\n\nSELECT assetid, max(ts) AS ts\nFROM asset_positions\nGROUP BY assetid;\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n\n\nChris Browne wrote:\n\n>[email protected] (Graham Davis) writes:\n> \n>\n>>40 seconds is much too slow for this query to run and I'm assuming\n>>that the use of an index will make it much faster (as seen when I\n>>removed the GROUP BY clause). Any tips?\n>> \n>>\n>\n>Assumptions are dangerous things.\n>\n>An aggregate like this has *got to* scan the entire table, and given\n>that that is the case, an index scan is NOT optimal; a seq scan is.\n>\n>An index scan is just going to be slower.\n> \n>\n\n\n",
"msg_date": "Tue, 03 Oct 2006 13:32:24 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "Graham Davis <[email protected]> writes:\n> How come an aggreate like that has to use a sequential scan? I know \n> that PostgreSQL use to have to do a sequential scan for all aggregates, \n> but there was support added to version 8 so that aggregates would take \n> advantage of indexes.\n\nNot in a GROUP BY context, only for the simple case. Per the comment in\nplanagg.c:\n\n\t * We don't handle GROUP BY, because our current implementations of\n\t * grouping require looking at all the rows anyway, and so there's not\n\t * much point in optimizing MIN/MAX.\n\nThe problem is that using an index to obtain the maximum value of ts for\na given value of assetid is not the same thing as finding out what all\nthe distinct values of assetid are.\n\nThis could possibly be improved but it would take a considerable amount\nmore work. It's definitely not in the category of \"bug fix\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Oct 2006 16:48:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index "
},
{
"msg_contents": "On Tue, Oct 03, 2006 at 12:13:43 -0700,\n Graham Davis <[email protected]> wrote:\n> Also, the multikey index of (assetid, ts) would already be sorted and \n> that is why using such an index in this case is\n> faster than doing a sequential scan that does the sorting afterwards.\n\nThat isn't necessarily true. The sequentional scan and sort will need a lot\nfewer disk seeks and could run faster than using an index scan that has\nthe disk drives doing seeks for every tuple (in the worst case, where\nthe on disk order of tuples doesn't match the order in the index).\n\nIf your server is caching most of the blocks than the index scan might\ngive better results. You might try disabling sequentional scans to\ntry to coerce the other plan and see what results you get. If it is\nsubstantially faster the other way, then you might want to look at lowering\nthe random page cost factor. However, since this can affect other queries\nyou need to be careful that you don't speed up one query at the expense\nof a lot of other queries.\n",
"msg_date": "Tue, 3 Oct 2006 15:48:11 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
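The experiment described above can be run in a single session without touching the server configuration; a sketch (the setting reverts when it is set back or the session ends):

    -- steer the planner away from the sequential scan, for comparison only
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT assetid, max(ts) AS ts
    FROM asset_positions
    GROUP BY assetid;
    SET enable_seqscan = on;

If the index plan turns out to be clearly faster, the gentler permanent knob is random_page_cost (default 4.0), which can be lowered cautiously in postgresql.conf with the caveats mentioned above.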
{
"msg_contents": "Thanks Tom, that explains it and makes sense. I guess I will have to \naccept this query taking 40 seconds, unless I can figure out another way \nto write it so it can use indexes. If there are any more syntax \nsuggestions, please pass them on. Thanks for the help everyone.\n\nGraham.\n\n\nTom Lane wrote:\n\n>Graham Davis <[email protected]> writes:\n> \n>\n>>How come an aggreate like that has to use a sequential scan? I know \n>>that PostgreSQL use to have to do a sequential scan for all aggregates, \n>>but there was support added to version 8 so that aggregates would take \n>>advantage of indexes.\n>> \n>>\n>\n>Not in a GROUP BY context, only for the simple case. Per the comment in\n>planagg.c:\n>\n>\t * We don't handle GROUP BY, because our current implementations of\n>\t * grouping require looking at all the rows anyway, and so there's not\n>\t * much point in optimizing MIN/MAX.\n>\n>The problem is that using an index to obtain the maximum value of ts for\n>a given value of assetid is not the same thing as finding out what all\n>the distinct values of assetid are.\n>\n>This could possibly be improved but it would take a considerable amount\n>more work. It's definitely not in the category of \"bug fix\".\n>\n>\t\t\tregards, tom lane\n> \n>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Tue, 03 Oct 2006 13:52:28 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "Have you looked into a materialized view sort of approach? You could\ncreate a table which had assetid as a primary key, and max_ts as a\ncolumn. Then use triggers to keep that table up to date as rows are\nadded/updated/removed from the main table.\n\nThis approach would only make sense if there were far fewer distinct\nassetid values than rows in the main table, and would get slow if you\ncommonly delete rows from the main table or decrease the value for ts in\nthe row with the highest ts for a given assetid.\n\n-- Mark Lewis\n\nOn Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:\n> Thanks Tom, that explains it and makes sense. I guess I will have to \n> accept this query taking 40 seconds, unless I can figure out another way \n> to write it so it can use indexes. If there are any more syntax \n> suggestions, please pass them on. Thanks for the help everyone.\n> \n> Graham.\n> \n> \n> Tom Lane wrote:\n> \n> >Graham Davis <[email protected]> writes:\n> > \n> >\n> >>How come an aggreate like that has to use a sequential scan? I know \n> >>that PostgreSQL use to have to do a sequential scan for all aggregates, \n> >>but there was support added to version 8 so that aggregates would take \n> >>advantage of indexes.\n> >> \n> >>\n> >\n> >Not in a GROUP BY context, only for the simple case. Per the comment in\n> >planagg.c:\n> >\n> >\t * We don't handle GROUP BY, because our current implementations of\n> >\t * grouping require looking at all the rows anyway, and so there's not\n> >\t * much point in optimizing MIN/MAX.\n> >\n> >The problem is that using an index to obtain the maximum value of ts for\n> >a given value of assetid is not the same thing as finding out what all\n> >the distinct values of assetid are.\n> >\n> >This could possibly be improved but it would take a considerable amount\n> >more work. It's definitely not in the category of \"bug fix\".\n> >\n> >\t\t\tregards, tom lane\n> > \n> >\n> \n> \n",
"msg_date": "Tue, 03 Oct 2006 14:06:52 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "The \"summary table\" approach maintained by triggers is something we are \nconsidering, but it becomes a bit more complicated to implement. \nCurrently we have groups of new positions coming in every few seconds or \nless. They are not guaranteed to be in order. So for instance, a group \nof positions from today could come in and be inserted, then a group of \npositions that got lost from yesterday could come in and be inserted \nafterwards. \n\nThis means the triggers would have to do some sort of logic to figure \nout if the newly inserted position is actually the most recent by \ntimestamp. If positions are ever deleted or updated, the same sort of \nquery that is currently running slow will need to be executed in order \nto get the new most recent position. So there is the possibility that \nnew positions can be inserted faster than the triggers can calculate \nand maintain the summary table. There are some other complications \nwith maintaining such a summary table in our system too, but I won't get \ninto those.\n\nRight now I'm just trying to see if I can get the query itself running \nfaster, which would be the easiest solution for now.\n\nGraham.\n\n\nMark Lewis wrote:\n\n>Have you looked into a materialized view sort of approach? You could\n>create a table which had assetid as a primary key, and max_ts as a\n>column. Then use triggers to keep that table up to date as rows are\n>added/updated/removed from the main table.\n>\n>This approach would only make sense if there were far fewer distinct\n>assetid values than rows in the main table, and would get slow if you\n>commonly delete rows from the main table or decrease the value for ts in\n>the row with the highest ts for a given assetid.\n>\n>-- Mark Lewis\n>\n>On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:\n> \n>\n>>Thanks Tom, that explains it and makes sense. I guess I will have to \n>>accept this query taking 40 seconds, unless I can figure out another way \n>>to write it so it can use indexes. If there are any more syntax \n>>suggestions, please pass them on. Thanks for the help everyone.\n>>\n>>Graham.\n>>\n>>\n>>Tom Lane wrote:\n>>\n>> \n>>\n>>>Graham Davis <[email protected]> writes:\n>>> \n>>>\n>>> \n>>>\n>>>>How come an aggreate like that has to use a sequential scan? I know \n>>>>that PostgreSQL use to have to do a sequential scan for all aggregates, \n>>>>but there was support added to version 8 so that aggregates would take \n>>>>advantage of indexes.\n>>>> \n>>>>\n>>>> \n>>>>\n>>>Not in a GROUP BY context, only for the simple case. Per the comment in\n>>>planagg.c:\n>>>\n>>>\t * We don't handle GROUP BY, because our current implementations of\n>>>\t * grouping require looking at all the rows anyway, and so there's not\n>>>\t * much point in optimizing MIN/MAX.\n>>>\n>>>The problem is that using an index to obtain the maximum value of ts for\n>>>a given value of assetid is not the same thing as finding out what all\n>>>the distinct values of assetid are.\n>>>\n>>>This could possibly be improved but it would take a considerable amount\n>>>more work. It's definitely not in the category of \"bug fix\".\n>>>\n>>>\t\t\tregards, tom lane\n>>> \n>>>\n>>> \n>>>\n>> \n>>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Tue, 03 Oct 2006 14:23:26 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
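To make the discussion concrete, here is a rough sketch of what such a summary table and trigger could look like. All names and the timestamp type are made up, only INSERT is handled (UPDATE and DELETE on the positions table would need the extra handling discussed above), and the max_ts < NEW.ts guard is what deals with batches arriving out of order.

    CREATE TABLE asset_latest (
        assetid integer PRIMARY KEY,
        max_ts  timestamp NOT NULL
    );

    CREATE OR REPLACE FUNCTION asset_latest_upd() RETURNS trigger AS $$
    BEGIN
        -- only ever move the summary forward in time; older rows are ignored
        UPDATE asset_latest
           SET max_ts = NEW.ts
         WHERE assetid = NEW.assetid
           AND max_ts < NEW.ts;
        IF NOT found THEN
            BEGIN
                INSERT INTO asset_latest (assetid, max_ts)
                VALUES (NEW.assetid, NEW.ts);
            EXCEPTION WHEN unique_violation THEN
                -- the assetid appeared concurrently, or NEW.ts was older;
                -- retry the conditional UPDATE in case our row is newer
                UPDATE asset_latest
                   SET max_ts = NEW.ts
                 WHERE assetid = NEW.assetid
                   AND max_ts < NEW.ts;
            END;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER asset_positions_latest
    AFTER INSERT ON asset_positions
    FOR EACH ROW EXECUTE PROCEDURE asset_latest_upd();

With something like this in place, the "latest position per asset" lookup becomes a plain read of asset_latest instead of the 40-second aggregate; whether the per-row trigger overhead is acceptable at the stated insert rates is the open question raised above.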
{
"msg_contents": "Hmmm. How many distinct assetids are there?\n-- Mark Lewis\n\nOn Tue, 2006-10-03 at 14:23 -0700, Graham Davis wrote:\n> The \"summary table\" approach maintained by triggers is something we are \n> considering, but it becomes a bit more complicated to implement. \n> Currently we have groups of new positions coming in every few seconds or \n> less. They are not guaranteed to be in order. So for instance, a group \n> of positions from today could come in and be inserted, then a group of \n> positions that got lost from yesterday could come in and be inserted \n> afterwards. \n> \n> This means the triggers would have to do some sort of logic to figure \n> out if the newly inserted position is actually the most recent by \n> timestamp. If positions are ever deleted or updated, the same sort of \n> query that is currently running slow will need to be executed in order \n> to get the new most recent position. So there is the possibility that \n> new positions can be inserted faster than the triggers can calculate \n> and maintain the summary table. There are some other complications \n> with maintaining such a summary table in our system too, but I won't get \n> into those.\n> \n> Right now I'm just trying to see if I can get the query itself running \n> faster, which would be the easiest solution for now.\n> \n> Graham.\n> \n> \n> Mark Lewis wrote:\n> \n> >Have you looked into a materialized view sort of approach? You could\n> >create a table which had assetid as a primary key, and max_ts as a\n> >column. Then use triggers to keep that table up to date as rows are\n> >added/updated/removed from the main table.\n> >\n> >This approach would only make sense if there were far fewer distinct\n> >assetid values than rows in the main table, and would get slow if you\n> >commonly delete rows from the main table or decrease the value for ts in\n> >the row with the highest ts for a given assetid.\n> >\n> >-- Mark Lewis\n> >\n> >On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:\n> > \n> >\n> >>Thanks Tom, that explains it and makes sense. I guess I will have to \n> >>accept this query taking 40 seconds, unless I can figure out another way \n> >>to write it so it can use indexes. If there are any more syntax \n> >>suggestions, please pass them on. Thanks for the help everyone.\n> >>\n> >>Graham.\n> >>\n> >>\n> >>Tom Lane wrote:\n> >>\n> >> \n> >>\n> >>>Graham Davis <[email protected]> writes:\n> >>> \n> >>>\n> >>> \n> >>>\n> >>>>How come an aggreate like that has to use a sequential scan? I know \n> >>>>that PostgreSQL use to have to do a sequential scan for all aggregates, \n> >>>>but there was support added to version 8 so that aggregates would take \n> >>>>advantage of indexes.\n> >>>> \n> >>>>\n> >>>> \n> >>>>\n> >>>Not in a GROUP BY context, only for the simple case. Per the comment in\n> >>>planagg.c:\n> >>>\n> >>>\t * We don't handle GROUP BY, because our current implementations of\n> >>>\t * grouping require looking at all the rows anyway, and so there's not\n> >>>\t * much point in optimizing MIN/MAX.\n> >>>\n> >>>The problem is that using an index to obtain the maximum value of ts for\n> >>>a given value of assetid is not the same thing as finding out what all\n> >>>the distinct values of assetid are.\n> >>>\n> >>>This could possibly be improved but it would take a considerable amount\n> >>>more work. It's definitely not in the category of \"bug fix\".\n> >>>\n> >>>\t\t\tregards, tom lane\n> >>> \n> >>>\n> >>> \n> >>>\n> >> \n> >>\n> \n> \n",
"msg_date": "Tue, 03 Oct 2006 14:34:27 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "Not many. It fluctuates, but there are usually only ever a few hundred \nat most. Each assetid has multi-millions of positions though.\n\nMark Lewis wrote:\n\n>Hmmm. How many distinct assetids are there?\n>-- Mark Lewis\n>\n>On Tue, 2006-10-03 at 14:23 -0700, Graham Davis wrote:\n> \n>\n>>The \"summary table\" approach maintained by triggers is something we are \n>>considering, but it becomes a bit more complicated to implement. \n>>Currently we have groups of new positions coming in every few seconds or \n>>less. They are not guaranteed to be in order. So for instance, a group \n>>of positions from today could come in and be inserted, then a group of \n>>positions that got lost from yesterday could come in and be inserted \n>>afterwards. \n>>\n>>This means the triggers would have to do some sort of logic to figure \n>>out if the newly inserted position is actually the most recent by \n>>timestamp. If positions are ever deleted or updated, the same sort of \n>>query that is currently running slow will need to be executed in order \n>>to get the new most recent position. So there is the possibility that \n>>new positions can be inserted faster than the triggers can calculate \n>>and maintain the summary table. There are some other complications \n>>with maintaining such a summary table in our system too, but I won't get \n>>into those.\n>>\n>>Right now I'm just trying to see if I can get the query itself running \n>>faster, which would be the easiest solution for now.\n>>\n>>Graham.\n>>\n>>\n>>Mark Lewis wrote:\n>>\n>> \n>>\n>>>Have you looked into a materialized view sort of approach? You could\n>>>create a table which had assetid as a primary key, and max_ts as a\n>>>column. Then use triggers to keep that table up to date as rows are\n>>>added/updated/removed from the main table.\n>>>\n>>>This approach would only make sense if there were far fewer distinct\n>>>assetid values than rows in the main table, and would get slow if you\n>>>commonly delete rows from the main table or decrease the value for ts in\n>>>the row with the highest ts for a given assetid.\n>>>\n>>>-- Mark Lewis\n>>>\n>>>On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:\n>>> \n>>>\n>>> \n>>>\n>>>>Thanks Tom, that explains it and makes sense. I guess I will have to \n>>>>accept this query taking 40 seconds, unless I can figure out another way \n>>>>to write it so it can use indexes. If there are any more syntax \n>>>>suggestions, please pass them on. Thanks for the help everyone.\n>>>>\n>>>>Graham.\n>>>>\n>>>>\n>>>>Tom Lane wrote:\n>>>>\n>>>> \n>>>>\n>>>> \n>>>>\n>>>>>Graham Davis <[email protected]> writes:\n>>>>>\n>>>>>\n>>>>> \n>>>>>\n>>>>> \n>>>>>\n>>>>>>How come an aggreate like that has to use a sequential scan? I know \n>>>>>>that PostgreSQL use to have to do a sequential scan for all aggregates, \n>>>>>>but there was support added to version 8 so that aggregates would take \n>>>>>>advantage of indexes.\n>>>>>> \n>>>>>>\n>>>>>> \n>>>>>>\n>>>>>> \n>>>>>>\n>>>>>Not in a GROUP BY context, only for the simple case. 
Per the comment in\n>>>>>planagg.c:\n>>>>>\n>>>>>\t * We don't handle GROUP BY, because our current implementations of\n>>>>>\t * grouping require looking at all the rows anyway, and so there's not\n>>>>>\t * much point in optimizing MIN/MAX.\n>>>>>\n>>>>>The problem is that using an index to obtain the maximum value of ts for\n>>>>>a given value of assetid is not the same thing as finding out what all\n>>>>>the distinct values of assetid are.\n>>>>>\n>>>>>This could possibly be improved but it would take a considerable amount\n>>>>>more work. It's definitely not in the category of \"bug fix\".\n>>>>>\n>>>>>\t\t\tregards, tom lane\n>>>>>\n>>>>>\n>>>>> \n>>>>>\n>>>>> \n>>>>>\n>>>> \n>>>>\n>>>> \n>>>>\n>> \n>>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Tue, 03 Oct 2006 14:35:45 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": " Hi,\n\n I wonder how PostgreSQL caches the SQL query results. For example ;\n\n * does postgres cache query result in memory that done by session A \n?\n * does session B use these results ?\n\nBest Regards\n\nAdnan DURSUN\n\n",
"msg_date": "Wed, 4 Oct 2006 00:49:12 +0300",
"msg_from": "\"Adnan DURSUN\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL Caching"
},
{
"msg_contents": "A few hundred is quite a lot for the next proposal and it's kind of an\nugly one, but might as well throw the idea out since you never know.\n\nHave you considered creating one partial index per assetid? Something\nalong the lines of \"CREATE INDEX asset_index_N ON asset_positions(ts)\nWHERE assetid=N\"? I'd guess that the planner probably wouldn't be smart\nenough to use the partial indexes unless you issued a separate query for\neach assetid, but each one of those queries should be really fast. Of\ncourse, this is all assuming that PG knows how to use partial indexes to\nsatisfy MAX queries; I'm not sure if it does.\n\n-- Mark Lewis\n\nOn Tue, 2006-10-03 at 14:35 -0700, Graham Davis wrote:\n> Not many. It fluctuates, but there are usually only ever a few hundred \n> at most. Each assetid has multi-millions of positions though.\n> \n> Mark Lewis wrote:\n> \n> >Hmmm. How many distinct assetids are there?\n> >-- Mark Lewis\n> >\n> >On Tue, 2006-10-03 at 14:23 -0700, Graham Davis wrote:\n> > \n> >\n> >>The \"summary table\" approach maintained by triggers is something we are \n> >>considering, but it becomes a bit more complicated to implement. \n> >>Currently we have groups of new positions coming in every few seconds or \n> >>less. They are not guaranteed to be in order. So for instance, a group \n> >>of positions from today could come in and be inserted, then a group of \n> >>positions that got lost from yesterday could come in and be inserted \n> >>afterwards. \n> >>\n> >>This means the triggers would have to do some sort of logic to figure \n> >>out if the newly inserted position is actually the most recent by \n> >>timestamp. If positions are ever deleted or updated, the same sort of \n> >>query that is currently running slow will need to be executed in order \n> >>to get the new most recent position. So there is the possibility that \n> >>new positions can be inserted faster than the triggers can calculate \n> >>and maintain the summary table. There are some other complications \n> >>with maintaining such a summary table in our system too, but I won't get \n> >>into those.\n> >>\n> >>Right now I'm just trying to see if I can get the query itself running \n> >>faster, which would be the easiest solution for now.\n> >>\n> >>Graham.\n> >>\n> >>\n> >>Mark Lewis wrote:\n> >>\n> >> \n> >>\n> >>>Have you looked into a materialized view sort of approach? You could\n> >>>create a table which had assetid as a primary key, and max_ts as a\n> >>>column. Then use triggers to keep that table up to date as rows are\n> >>>added/updated/removed from the main table.\n> >>>\n> >>>This approach would only make sense if there were far fewer distinct\n> >>>assetid values than rows in the main table, and would get slow if you\n> >>>commonly delete rows from the main table or decrease the value for ts in\n> >>>the row with the highest ts for a given assetid.\n> >>>\n> >>>-- Mark Lewis\n> >>>\n> >>>On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:\n> >>> \n> >>>\n> >>> \n> >>>\n> >>>>Thanks Tom, that explains it and makes sense. I guess I will have to \n> >>>>accept this query taking 40 seconds, unless I can figure out another way \n> >>>>to write it so it can use indexes. If there are any more syntax \n> >>>>suggestions, please pass them on. 
Thanks for the help everyone.\n> >>>>\n> >>>>Graham.\n> >>>>\n> >>>>\n> >>>>Tom Lane wrote:\n> >>>>\n> >>>> \n> >>>>\n> >>>> \n> >>>>\n> >>>>>Graham Davis <[email protected]> writes:\n> >>>>>\n> >>>>>\n> >>>>> \n> >>>>>\n> >>>>> \n> >>>>>\n> >>>>>>How come an aggreate like that has to use a sequential scan? I know \n> >>>>>>that PostgreSQL use to have to do a sequential scan for all aggregates, \n> >>>>>>but there was support added to version 8 so that aggregates would take \n> >>>>>>advantage of indexes.\n> >>>>>> \n> >>>>>>\n> >>>>>> \n> >>>>>>\n> >>>>>> \n> >>>>>>\n> >>>>>Not in a GROUP BY context, only for the simple case. Per the comment in\n> >>>>>planagg.c:\n> >>>>>\n> >>>>>\t * We don't handle GROUP BY, because our current implementations of\n> >>>>>\t * grouping require looking at all the rows anyway, and so there's not\n> >>>>>\t * much point in optimizing MIN/MAX.\n> >>>>>\n> >>>>>The problem is that using an index to obtain the maximum value of ts for\n> >>>>>a given value of assetid is not the same thing as finding out what all\n> >>>>>the distinct values of assetid are.\n> >>>>>\n> >>>>>This could possibly be improved but it would take a considerable amount\n> >>>>>more work. It's definitely not in the category of \"bug fix\".\n> >>>>>\n> >>>>>\t\t\tregards, tom lane\n> >>>>>\n> >>>>>\n> >>>>> \n> >>>>>\n> >>>>> \n> >>>>>\n> >>>> \n> >>>>\n> >>>> \n> >>>>\n> >> \n> >>\n> \n> \n",
"msg_date": "Tue, 03 Oct 2006 14:54:17 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index"
},
{
"msg_contents": "\nLike many descent RDBMS, Postgresql server allocates its own shared\nmemory area where data is cached in. When receiving a query request,\nPostgres engine checks first its shared memory buffers, if not found,\nthe engine performs disk I/Os to retrieve data from PostgreSQL data\nfiles and place it in the shared buffer area before serving it back to\nthe client. Blocks in the shared buffers are shared by other sessions\nand can therefore be possibly accessed by other sessions. Postgresql\nshared buffers can be allocated by setting the postgresql.conf parameter\nnamely, shared_buffers.\n\nSincerely,\n\n--\n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adnan\nDURSUN\nSent: Tuesday, October 03, 2006 2:49 PM\nTo: [email protected]\nSubject: [PERFORM] PostgreSQL Caching\n\n Hi,\n\n I wonder how PostgreSQL caches the SQL query results. For example ;\n\n * does postgres cache query result in memory that done by\nsession A \n?\n * does session B use these results ?\n\nBest Regards\n\nAdnan DURSUN\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n",
"msg_date": "Tue, 3 Oct 2006 15:11:20 -0700",
"msg_from": "\"Tomeh, Husam\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
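For reference, the setting can be inspected from any session and is changed in postgresql.conf (a server restart is required for it to take effect). A small sketch with an example value only; on 8.1 shared_buffers is a count of 8 kB buffers rather than a memory string:

    SHOW shared_buffers;
    -- postgresql.conf (restart required):
    --   shared_buffers = 50000       -- 50000 x 8 kB is roughly 400 MB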
{
"msg_contents": "Mark Lewis <[email protected]> writes:\n> Have you considered creating one partial index per assetid? Something\n> along the lines of \"CREATE INDEX asset_index_N ON asset_positions(ts)\n> WHERE assetid=N\"? I'd guess that the planner probably wouldn't be smart\n> enough to use the partial indexes unless you issued a separate query for\n> each assetid, but each one of those queries should be really fast.\n\nActually, a single index on (assetid, ts) is sufficient to handle\n\n\tselect max(ts) from asset_positions where assetid = constant\n\nThe problem is to know what values of \"constant\" to issue the query for,\nand this idea doesn't seem to help with that.\n\nIf Graham is willing to assume that the set of assetids changes slowly,\nperhaps he could keep a summary table that contains all the valid\nassetids (or maybe there already is such a table? is assetid a foreign\nkey?) and do\n\n\tselect pk.assetid,\n (select max(ts) from asset_positions where assetid = pk.assetid)\n\tfrom other_table pk;\n\nI'm pretty sure the subselect would be planned the way he wants.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Oct 2006 18:39:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2658: Query not using index "
},
{
"msg_contents": "\n Thanks,\n\n I wonder these ;\n\n * When any session updates the data that allready in shared buffer, \ndoes Postgres sychronize the data both disk and shared buffers area \nimmediately ?\n * Does postgres cache SQL execution plan analyze results in memory \nto use for other sessions ? For example ;\n When session A execute \"SELECT * FROM tab WHERE col1 = val1 AND col2 \n= val2\", does postgres save the parser/optimizer result in memory in order\n to use by other session to prevent duplicate execution of parser \nand optimizer so therefore get time ?. Because an execution plan is created \nbefore..\n\nSincenerly\n\nAdnan DURSUN\n\n----- Original Message ----- \nFrom: \"Tomeh, Husam\" <[email protected]>\nTo: \"Adnan DURSUN\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, October 04, 2006 1:11 AM\nSubject: Re: [PERFORM] PostgreSQL Caching\n\n\n\nLike many descent RDBMS, Postgresql server allocates its own shared\nmemory area where data is cached in. When receiving a query request,\nPostgres engine checks first its shared memory buffers, if not found,\nthe engine performs disk I/Os to retrieve data from PostgreSQL data\nfiles and place it in the shared buffer area before serving it back to\nthe client. Blocks in the shared buffers are shared by other sessions\nand can therefore be possibly accessed by other sessions. Postgresql\nshared buffers can be allocated by setting the postgresql.conf parameter\nnamely, shared_buffers.\n\nSincerely,\n\n--\n Husam\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adnan\nDURSUN\nSent: Tuesday, October 03, 2006 2:49 PM\nTo: [email protected]\nSubject: [PERFORM] PostgreSQL Caching\n\n Hi,\n\n I wonder how PostgreSQL caches the SQL query results. For example ;\n\n * does postgres cache query result in memory that done by\nsession A\n?\n * does session B use these results ?\n\nBest Regards\n\nAdnan DURSUN\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n**********************************************************************\nThis message contains confidential information intended only for the use of \nthe addressee(s) named above and may contain information that is legally \nprivileged. If you are not the addressee, or the person responsible for \ndelivering it to the addressee, you are hereby notified that reading, \ndisseminating, distributing or copying this message is strictly prohibited. \nIf you have received this message by mistake, please immediately notify us \nby replying to the message and delete the original message immediately \nthereafter.\n\nThank you.\n\n FADLD Tag\n**********************************************************************\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 4 Oct 2006 02:53:20 +0300",
"msg_from": "\"Adnan DURSUN\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
{
"msg_contents": " \n>> * When any session updates the data that already in shared\nbuffer, \n>>does Postgres synchronize the data both disk and shared buffers area \n>> immediately ?\n\nNot necessarily true. When a block is modified in the shared buffers,\nthe modified block is written to the Postgres WAL log. A periodic DB\ncheckpoint is performed to flush the modified blocks in the shared\nbuffers to the data files.\n\n>> * Does postgres cache SQL execution plan analyze results in memory \n>> to use for other sessions ? For example ;\n>> When session A execute \"SELECT * FROM tab WHERE col1 = val1\nAND col2 \n>> = val2\", does postgres save the parser/optimizer result in memory in\norder\n>> to use by other session to prevent duplicate execution of\nparser \n>> and optimizer so therefore get time ?. Because an execution plan is\ncreated \n>> before..\n\nQuery plans are not stored in the shared buffers and therefore can not\nbe re-used by other sessions. They're only cached by the connection on a\nsession level.\n\nSincerely,\n\n--\n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adnan\nDURSUN\nSent: Tuesday, October 03, 2006 4:53 PM\nTo: [email protected]\nSubject: Re: [PERFORM] PostgreSQL Caching\n\n\n Thanks,\n\n I wonder these ;\n\n * When any session updates the data that allready in shared\nbuffer, \ndoes Postgres sychronize the data both disk and shared buffers area \nimmediately ?\n * Does postgres cache SQL execution plan analyze results in\nmemory \nto use for other sessions ? For example ;\n When session A execute \"SELECT * FROM tab WHERE col1 = val1 AND\ncol2 \n= val2\", does postgres save the parser/optimizer result in memory in\norder\n to use by other session to prevent duplicate execution of\nparser \nand optimizer so therefore get time ?. Because an execution plan is\ncreated \nbefore..\n\nSincenerly\n\nAdnan DURSUN\n\n----- Original Message ----- \nFrom: \"Tomeh, Husam\" <[email protected]>\nTo: \"Adnan DURSUN\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, October 04, 2006 1:11 AM\nSubject: Re: [PERFORM] PostgreSQL Caching\n\n\n\nLike many descent RDBMS, Postgresql server allocates its own shared\nmemory area where data is cached in. When receiving a query request,\nPostgres engine checks first its shared memory buffers, if not found,\nthe engine performs disk I/Os to retrieve data from PostgreSQL data\nfiles and place it in the shared buffer area before serving it back to\nthe client. Blocks in the shared buffers are shared by other sessions\nand can therefore be possibly accessed by other sessions. Postgresql\nshared buffers can be allocated by setting the postgresql.conf parameter\nnamely, shared_buffers.\n\nSincerely,\n\n--\n Husam\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adnan\nDURSUN\nSent: Tuesday, October 03, 2006 2:49 PM\nTo: [email protected]\nSubject: [PERFORM] PostgreSQL Caching\n\n Hi,\n\n I wonder how PostgreSQL caches the SQL query results. 
For example ;\n\n * does postgres cache query result in memory that done by\nsession A\n?\n * does session B use these results ?\n\nBest Regards\n\nAdnan DURSUN\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n**********************************************************************\nThis message contains confidential information intended only for the use\nof \nthe addressee(s) named above and may contain information that is legally\n\nprivileged. If you are not the addressee, or the person responsible for\n\ndelivering it to the addressee, you are hereby notified that reading, \ndisseminating, distributing or copying this message is strictly\nprohibited. \nIf you have received this message by mistake, please immediately notify\nus \nby replying to the message and delete the original message immediately \nthereafter.\n\nThank you.\n\n FADLD Tag\n**********************************************************************\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Tue, 3 Oct 2006 18:29:26 -0700",
"msg_from": "\"Tomeh, Husam\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
{
"msg_contents": "----- Original Message ----- \nFrom: \"Tomeh, Husam\" <[email protected]>\nTo: \"Adnan DURSUN\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, October 04, 2006 4:29 AM\nSubject: RE: [PERFORM] PostgreSQL Caching\n\n\n>Query plans are not stored in the shared buffers and therefore can not\n>be re-used by other sessions. They're only cached by the connection on a\n>session level.\n\n Ok. i see. thanks..So that means that a stored object execution plan \nsaved before is destroyed from memory after it was altered or dropped by any \nsession. Is that true ?\n And last one :-)\n i want to be can read an execution plan when i look at it. \nSo, is there any doc about how it should be read ?\n\nSincenerly !\n\nAdnan DURSUN \n\n",
"msg_date": "Wed, 4 Oct 2006 05:24:19 +0300",
"msg_from": "\"Adnan DURSUN\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Adnan DURSUN\n> i want to be can read an execution plan when \n> i look at it. \n> So, is there any doc about how it should be read ?\n\n\nYou are asking how to read the output from EXPLAIN? This page is a good\nplace to start:\n\nhttp://www.postgresql.org/docs/8.1/interactive/performance-tips.html \n\n\n\n",
"msg_date": "Wed, 4 Oct 2006 07:38:00 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
{
"msg_contents": "On Wed, 2006-10-04 at 07:38 -0500, Dave Dutcher wrote:\n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf Of \n> > Adnan DURSUN\n> > i want to be can read an execution plan when \n> > i look at it. \n> > So, is there any doc about how it should be read ?\n> \n> \n> You are asking how to read the output from EXPLAIN? This page is a good\n> place to start:\n> \n> http://www.postgresql.org/docs/8.1/interactive/performance-tips.html \n\nRobert Treat's Explaining Explain presentation from OSCON is also very\ngood:\n\nhttp://redivi.com/~bob/oscon2005_pgsql_pdf/OSCON_Explaining_Explain_Public.pdf#search=%22%22explaining%20explain%22%22\n\nBrad.\n\n",
"msg_date": "Wed, 04 Oct 2006 09:47:05 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
},
{
"msg_contents": "On Tue, 2006-10-03 at 18:29 -0700, Tomeh, Husam wrote:\n> >> * When any session updates the data that already in shared\n> buffer, \n> >>does Postgres synchronize the data both disk and shared buffers area \n> >> immediately ?\n> \n> Not necessarily true. When a block is modified in the shared buffers,\n> the modified block is written to the Postgres WAL log. A periodic DB\n> checkpoint is performed to flush the modified blocks in the shared\n> buffers to the data files.\n\nPostgres 8.0 and beyond have a process called bgwriter that continually\nflushes dirty buffers to disk, to minimize the work that needs to be\ndone at checkpoint time.\n\n",
"msg_date": "Wed, 04 Oct 2006 09:52:26 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Caching"
}
] |
[
{
"msg_contents": "I have a query which really should be lightning fast (limit 1 from\nindex), but which isn't. I've checked the pg_locks table, there are no\nlocks on the table. The database is not under heavy load at the moment,\nbut the query seems to draw CPU power. I checked the pg_locks view, but\nfound nothing locking the table. It's a queue-like table, lots of rows\nbeeing added and removed to the queue. The queue is currently empty.\nHave a look:\n\nNBET=> vacuum verbose analyze my_queue;\nINFO: vacuuming \"public.my_queue\"\nINFO: index \"my_queue_pkey\" now contains 34058 row\nversions in 390 pages\nDETAIL: 288 index pages have been deleted, 285 are current\nly reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"my_queue\": found 0 removable, 34058 nonremovable row versions in 185 pages\nDETAIL: 34058 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.my_queue\"\nINFO: \"my_queue\": scanned 185 of 185 pages, containing 0 live rows and 34058 dead rows; 0 rows in sample, 0 estimated total rows\nVACUUM\nNBET=> explain analyze select bet_id from my_queue order by bet_id limit 1;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.04 rows=1 width=4) (actual time=402.525..402.525 rows=0 loops=1)\n -> Index Scan using my_queue_pkey on stats_bet_queue (cost=0.00..1314.71 rows=34058 width=4) (actual time=402.518..402.518 rows=0 loops=1)\n Total runtime: 402.560 ms\n(3 rows)\n\nNBET=> select count(*) from my_queue;\n count\n-------\n 0\n(1 row)\n\nIt really seems like some transaction is still viewing the queue, since\nit found 38k of non-removable rows ... but how do I find the pid of the\ntransaction viewing the queue? As said, the pg_locks didn't give me any\nhints ...\n\n",
"msg_date": "Thu, 28 Sep 2006 08:56:31 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow queue-like empty table"
},
{
"msg_contents": "[Tobias Brox - Thu at 08:56:31AM +0200]\n> It really seems like some transaction is still viewing the queue, since\n> it found 38k of non-removable rows ... but how do I find the pid of the\n> transaction viewing the queue? As said, the pg_locks didn't give me any\n> hints ...\n\nDropping the table and recreating it solved the immediate problem, but\nthere must be some better solution than that? :-)\n",
"msg_date": "Thu, 28 Sep 2006 09:36:36 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queue-like empty table"
},
{
"msg_contents": "On Thu, 2006-09-28 at 09:36, Tobias Brox wrote:\n> [Tobias Brox - Thu at 08:56:31AM +0200]\n> > It really seems like some transaction is still viewing the queue, since\n> > it found 38k of non-removable rows ... but how do I find the pid of the\n> > transaction viewing the queue? As said, the pg_locks didn't give me any\n> > hints ...\n\nThe open transaction doesn't have to have any locks on your queue table\nto prevent vacuuming dead rows. It's mere existence is enough... MVCC\nmeans that a still running transaction could still see those dead rows,\nand so VACUUM can't remove them until there's no transaction which\nstarted before they were deleted.\n\nSo long running transactions are your enemy when it comes to high\ninsert/delete rate queue tables.\n\nSo you should check for \"idle in transaction\" sessions, those are bad...\nor any other long running transaction.\n\n\n> Dropping the table and recreating it solved the immediate problem, but\n> there must be some better solution than that? :-)\n\nIf you must have long running transactions on your system (like\nvacuuming another big table - that also qualifies as a long running\ntransaction, though this is fixed in 8.2), then you could use CLUSTER\n(see the docs), which is currently not MVCC conforming and deletes all\nthe dead space regardless if any other running transaction can see it or\nnot. This is only acceptable if you're application handles the queue\ntable independently, not mixed in complex transactions. And the CLUSTER\ncommand takes an exclusive lock on the table, so it won't work for e.g.\nduring a pg_dump, it would keep the queue table locked exclusively for\nthe whole duration of the pg_dump (it won't be able to actually get the\nlock, but it will prevent any other activity on it, as it looks like in\nprogress exclusive lock requests block any new shared lock request).\n\nHTH,\nCsaba.\n\n\n",
"msg_date": "Thu, 28 Sep 2006 10:45:35 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queue-like empty table"
},
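As a rough sketch of how to act on that advice on an 8.1-era server (this assumes stats_command_string is enabled so that pg_stat_activity shows query text; the queue table and index names are taken from the original post):

SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY query_start;

-- last-resort cleanup of the queue table; takes an exclusive lock
CLUSTER my_queue_pkey ON my_queue;

The first query lists the sessions holding transactions open; the CLUSTER statement uses the index-first syntax appropriate to the 8.1 series discussed here.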
{
"msg_contents": "On Thu, Sep 28, 2006 at 08:56:31AM +0200, Tobias Brox wrote:\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"my_queue\": found 0 removable, 34058 nonremovable row versions in 185 pages\n ^^^^^^^\n\nYou have a lot of dead rows that can't be removed. You must have a\nlot of other transactions in process. Note that nobody needs to be\n_looking_ at those rows for them to be unremovable. The transactions\njust have to be old enough.\n\n\n> -------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.04 rows=1 width=4) (actual time=402.525..402.525 rows=0 loops=1)\n> -> Index Scan using my_queue_pkey on stats_bet_queue (cost=0.00..1314.71 rows=34058 width=4) (actual time=402.518..402.518 rows=0 loops=1)\n\nI'm amazed this does an indexscan on an empty table. \n\nIf this table is \"hot\", my bet is that you have attempted to optimise\nin an area that actually isn't an optimisation under PostgreSQL. \nThat is, if you're putting data in there, a daemon is constantly\ndeleting from it, but all your other transactions depend on knowing\nthe value of the \"unprocessed queue\", the design just doesn't work\nunder PostgreSQL. It turns out to be impossible to keep the table\nvacuumed well enough for high performance.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n",
"msg_date": "Thu, 28 Sep 2006 16:17:10 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queue-like empty table"
},
{
"msg_contents": "[Csaba Nagy - Thu at 10:45:35AM +0200]\n> So you should check for \"idle in transaction\" sessions, those are bad...\n> or any other long running transaction.\n\nThank you (and others) for pointing this out, you certainly set us on\nthe right track. We did have some few unclosed transactions;\ntransactions not beeing ended by \"rollback\" or \"commit\". We've been\nfixing this, beating up the programmers responsible and continued\nmonitoring.\n\nI don't think it's only due to those queue-like tables, we've really\nseen a significant improvement on the graphs showing load and cpu usage\non the database server after we killed all the \"idle in transaction\". I\ncan safely relax still some weeks before I need to do more optimization\nwork :-)\n\n(oh, btw, we didn't really beat up the programmers ... too big\ngeographical distances ;-)\n",
"msg_date": "Wed, 4 Oct 2006 12:59:10 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queue-like empty table"
},
{
"msg_contents": "On Oct 4, 2006, at 5:59 AM, Tobias Brox wrote:\n> [Csaba Nagy - Thu at 10:45:35AM +0200]\n>> So you should check for \"idle in transaction\" sessions, those are \n>> bad...\n>> or any other long running transaction.\n>\n> Thank you (and others) for pointing this out, you certainly set us on\n> the right track. We did have some few unclosed transactions;\n> transactions not beeing ended by \"rollback\" or \"commit\". We've been\n> fixing this, beating up the programmers responsible and continued\n> monitoring.\n>\n> I don't think it's only due to those queue-like tables, we've really\n> seen a significant improvement on the graphs showing load and cpu \n> usage\n> on the database server after we killed all the \"idle in \n> transaction\". I\n> can safely relax still some weeks before I need to do more \n> optimization\n> work :-)\n\nLeaving transactions open for a long time is murder on pretty much \nany database. It's about one of the worst programming mistakes you \ncan make (from a performance standpoint). Further, mishandling \ntransaction close is a great way to lose data:\n\nBEGIN;\n...useful work\n--COMMIT should have happened here\n...more work\n...ERROR!\nROLLBACK;\n\nYou just lost that useful work.\n\n> (oh, btw, we didn't really beat up the programmers ... too big\n> geographical distances ;-)\n\nThis warrants a plane ticket. Seriously. If your app programmers \naren't versed in transaction management, you should probably be \ndefining a database API that allows the use of autocommit.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Thu, 5 Oct 2006 22:42:24 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queue-like empty table"
}
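To make that failure mode concrete, a small sketch (the table names are invented): commit each unit of work as soon as it is complete, or fence the risky step with a savepoint so an error cannot drag the useful work down with it.

BEGIN;
INSERT INTO payments (order_id, amount) VALUES (42, 100.00);  -- the useful work
COMMIT;  -- persist it right away

BEGIN;
INSERT INTO audit_log (note) VALUES ('recalculating totals');
SAVEPOINT before_risky_step;
UPDATE order_totals SET total = total / 0;  -- this step fails
ROLLBACK TO SAVEPOINT before_risky_step;    -- undoes only the failed step
COMMIT;  -- the audit row still commits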
] |
[
{
"msg_contents": "I am a software developer who is acting in a (temporary) dba role for a \nproject. I had recommended PostgreSQL be brought in to replace the proposed \nMySQL DB - I chose PostgreSQL because of its reputation as a more stable \nsolution than MySQL.\n\nAt this early stage in the project, we are initializing our portal's \ndatabase with millions of rows of imported data in over 50 different \nflattened tables; each table's structure is unique to the data provider. \nThis requires a pretty complex import program, because the data must be \nmatched semantically, not literally. Even with all of the expression \nmatching and fuzzy logic in the code,our performance statistics show that \nthe program spends over 75% of its time in SQL queries looking for matching \nand/or duplicate data.\n\nThe import is slow - and degrades as the tables grow. With even more \nmillions of rows in dozens of import tables to come, the imports will take \nforever. My ability to analyse the queries is limited; because of the nature \nof the import process, the SQL queries are mutable, every imported row can \nchange the structure of a SQL query as the program adds and subtracts search \nconditions to the SQL command text before execution. The import program is \nscripted in Tcl. An attempt to convert our queries to prepared queries \n(curiousy) did not bring any performance improvements, and we converted back \nto simplify the code.\n\nWe urgently need a major performance improvement. We are running the \nPostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, dual core \n3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what type) disc \nsubsystem. Sorry about the long intro, but here are my questions:\n\n1) Are we paying any big penalties by running Windows vs LINUX (or any other \nOS)?\n\n2) Has the debate over PostgreSQL and Xeon processors been settled? Is this \na factor?\n\n3) Are there any easy-to-use performance analysis/optimisation tools that we \ncan use? I am dreaming of one that could point out problems and suggest \nand.or effect solutions.\n\n4) Can anyone recommend any commercial PostgreSQL service providers that may \nbe able to swiftly come in and assist us with our performance issues?\n\nBelow, please find what I believe are the configuration settings of interest \nin our system\n\nAny help and advice will be much appreciated. TIA,\n\nCarlo\n\nmax_connections = 100\nshared_buffers = 50000\nwork_mem = 32768\nmaintenance_work_mem = 32768\ncheckpoint_segments = 128\neffective_cache_size = 10000\nrandom_page_cost = 3\nstats_start_collector = on\nstats_command_string = on\nstats_row_level = on\nautovacuum = on\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 28 Sep 2006 12:44:10 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performace Optimization for Dummies"
},
{
"msg_contents": "\n> The import is slow - and degrades as the tables grow. With even more \n> millions of rows in dozens of import tables to come, the imports will take \n> forever. My ability to analyse the queries is limited; because of the nature \n> of the import process, the SQL queries are mutable, every imported row can \n> change the structure of a SQL query as the program adds and subtracts search \n> conditions to the SQL command text before execution. The import program is \n> scripted in Tcl. An attempt to convert our queries to prepared queries \n> (curiousy) did not bring any performance improvements, and we converted back \n> to simplify the code.\n\nHow are you loading the tables? Copy? Insert?\n\n> \n> We urgently need a major performance improvement. We are running the \n> PostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, dual core \n> 3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what type) disc \n> subsystem. Sorry about the long intro, but here are my questions:\n> \n> 1) Are we paying any big penalties by running Windows vs LINUX (or any other \n> OS)?\n\nYes. Linux or FreeBSD is going to stomp Win32 for PostgreSQL performance.\n\n> \n> 2) Has the debate over PostgreSQL and Xeon processors been settled? Is this \n> a factor?\n\nDepends. PostgreSQL is much better with the Xeon in general, but are you\nrunning woodcrest based CPUs or the older models?\n\n> \n> 3) Are there any easy-to-use performance analysis/optimisation tools that we \n> can use? I am dreaming of one that could point out problems and suggest \n> and.or effect solutions.\n\nI don't know about Windows, but *nix has a number of tools available\ndirectly at the operating system level to help you determine various\nbottlenecks.\n\n> \n> 4) Can anyone recommend any commercial PostgreSQL service providers that may \n> be able to swiftly come in and assist us with our performance issues?\n\nhttp://www.commandprompt.com/ (disclaimer, I am an employee)\n\n> \n> Below, please find what I believe are the configuration settings of interest \n> in our system\n> \n> Any help and advice will be much appreciated. TIA,\n> \n> Carlo\n> \n> max_connections = 100\n> shared_buffers = 50000\n\nThis could probably be higher.\n\n> work_mem = 32768\n\nDepending on what you are doing, this is could be to low or to high.\n\n> maintenance_work_mem = 32768\n> checkpoint_segments = 128\n> effective_cache_size = 10000\n\nThis coudl probably be higher.\n\n> random_page_cost = 3\n> stats_start_collector = on\n> stats_command_string = on\n> stats_row_level = on\n> autovacuum = on\n\nStats are a hit... you need to determine if you actually need them.\n\nJoshua D. Drake\n\n\n\n> \n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 28 Sep 2006 10:11:31 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On 9/28/06, Carlo Stonebanks <[email protected]> wrote:\n> We urgently need a major performance improvement. We are running the\n> PostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, dual core\n> 3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what type) disc\n> subsystem. Sorry about the long intro, but here are my questions:\n\nare you using the 'copy' interface?\n\n> 1) Are we paying any big penalties by running Windows vs LINUX (or any other\n> OS)?\n\nthats a tough question. my gut says that windows will not scale as\nwell as recent linux kernels in high load environments.\n\n> 2) Has the debate over PostgreSQL and Xeon processors been settled? Is this\n> a factor?\n\nhearing good things about the woodcrest. pre-woodcrest xeon (dempsey\ndown) is outclassed by the opteron.\n\n> Below, please find what I believe are the configuration settings of interest\n> in our system\n\n1. can probably run fsync=off during the import\n2. if import is single proecess, consider temporary bump to memory for\nindex creation. or, since you have four cores consider having four\nprocesses import the data somehow.\n3. turn off stats collector, stats_command_string, stats_row_level,\nand autovacuum during import.\n\nmerlin\n\n> Any help and advice will be much appreciated. TIA,\n>\n> Carlo\n>\n> max_connections = 100\n> shared_buffers = 50000\n> work_mem = 32768\n> maintenance_work_mem = 32768\n> checkpoint_segments = 128\n> effective_cache_size = 10000\n> random_page_cost = 3\n> stats_start_collector = on\n> stats_command_string = on\n> stats_row_level = on\n> autovacuum = on\n>\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Thu, 28 Sep 2006 13:17:53 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
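A hedged illustration of what those import-time settings might look like in postgresql.conf (the values are only examples, and fsync = off is safe only if you are prepared to re-run the load after a crash; restore your normal settings once the import finishes):

fsync = off                      # import window only
stats_start_collector = off
stats_command_string = off
stats_row_level = off
autovacuum = off                 # run VACUUM ANALYZE manually after the load
maintenance_work_mem = 524288    # kB; speeds up index builds at the end of the load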
{
"msg_contents": "> How are you loading the tables? Copy? Insert?\n\nOnce the data is transformed, it is inserted. I don't have stats, but the \nprograms visual feedback does not spend a lot of time on the \"inserting \ndata\" message. Then again, if there is an asynchronous component to an \ninsert, perhaps I am not seeing how slow an insert really is until I query \nthe table.\n\n> Yes. Linux or FreeBSD is going to stomp Win32 for PostgreSQL performance.\n\nDon't suppose you'd care to hazard a guess on what sort of scale we're \ntalking about? Are we talking about 10%? 100% I know this is a hard one to \njudge, My impression was that the *NIX improvements were with concurrent \nuse and right now, I am obsessing on this single-threaded issue.\n\n> Depends. PostgreSQL is much better with the Xeon in general, but are you\n> running woodcrest based CPUs or the older models?\n\nWeren't those released in July? This server is a few months older, so I \nguess not. But maybe? Does Dell have the ability to install CPUs from the \nfuture like Cyberdyne does? ;-)\n\n> I don't know about Windows, but *nix has a number of tools available\n> directly at the operating system level to help you determine various\n> bottlenecks.\n\nAre we talking about I/O operations? I was thinking of SQL query analysis. \nThe stuff I read here about query analysis is pretty intruiging, but to \nsomeone unfamiliar with this type of query analysis it all looks quite \nuncertain to me. I mean, I read the threads and it all looks like a lot of \ntrying ot figure out how to cajole PostgreSQL to do what you want, rather \nthan telling it: HEY I CREATED THAT INDEX FOR A REASON, USE IT!\n\nI know this may be non-dba sophistication on my part, but I would like a \ntool that would make this whole process a little less arcane. I'm not the \nGandalf type.\n\n>> 4) Can anyone recommend any commercial PostgreSQL service providers that \n>> may\n>> be able to swiftly come in and assist us with our performance issues?\n>\n> http://www.commandprompt.com/ (disclaimer, I am an employee)\n\nVery much appreciated.\n\n>> max_connections = 100\n>> shared_buffers = 50000\n>\n This could probably be higher.\n\nOk, good start...\n\n>\n>> work_mem = 32768\n>\n> Depending on what you are doing, this is could be to low or to high.\n\nIs this like \"You could be too fat or too thin\"? Aren't you impressed with \nthe fact that I managed to pick the one number that was not right for \nanything?\n\n>\n>> maintenance_work_mem = 32768\n>> checkpoint_segments = 128\n>> effective_cache_size = 10000\n>\n> This coudl probably be higher.\n\n... noted...\n\n>\n>> random_page_cost = 3\n>> stats_start_collector = on\n>> stats_command_string = on\n>> stats_row_level = on\n>> autovacuum = on\n>\n> Stats are a hit... you need to determine if you actually need them.\n\nUnfortunately, this is the only way I know of of getting the query string to \nappear in the PostgreSQL server status display. While trying to figure out \nwhat is slowing things down, having that is really helpful. I also imagined \nthat this sort of thing would be a performance hit when you are getting lots \nof small, concurrent queries. In my case, we have queries which are taking \naround a second to perform outer joins. They aren't competing with any other \nrequests as the site is not running, we are just running one app to seed the \ndata.\n\n\n",
"msg_date": "Thu, 28 Sep 2006 13:47:44 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> are you using the 'copy' interface?\n\nStraightforward inserts - the import data has to transformed, normalised and \nde-duped by the import program. I imagine the copy interface is for more \nstraightforward data importing. These are - buy necessity - single row \ninserts.\n\n> thats a tough question. my gut says that windows will not scale as\n> well as recent linux kernels in high load environments.\n\nBut not in the case of a single import program trying to seed a massive \ndatabase?\n\n> hearing good things about the woodcrest. pre-woodcrest xeon (dempsey\n> down) is outclassed by the opteron.\n\nNeed to find a way to deterimine the Xeon type. The server was bought in \nearly 2006, and it looks like woodcrest was form July.\n\n> 1. can probably run fsync=off during the import\n> 2. if import is single proecess, consider temporary bump to memory for\n> index creation. or, since you have four cores consider having four\n> processes import the data somehow.\n> 3. turn off stats collector, stats_command_string, stats_row_level,\n> and autovacuum during import.\n\nVery helpful, thanks.\n\nCarlo \n\n\n",
"msg_date": "Thu, 28 Sep 2006 13:53:22 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "\nOn Sep 28, 2006, at 10:53 AM, Carlo Stonebanks wrote:\n\n>> are you using the 'copy' interface?\n>\n> Straightforward inserts - the import data has to transformed, \n> normalised and\n> de-duped by the import program. I imagine the copy interface is for \n> more\n> straightforward data importing. These are - buy necessity - single row\n> inserts.\n\nAre you wrapping all this in a transaction?\n\nYou're doing some dynamically generated selects as part of the\n\"de-duping\" process? They're probably the expensive bit. What\ndo those queries tend to look like?\n\nAre you analysing the table periodically? If not, then you might\nhave statistics based on an empty table, or default statistics, which\nmight cause the planner to choose bad plans for those selects.\n\nTalking of which, are there indexes on the table? Normally you\nwouldn't have indexes in place during a bulk import, but if you're\ndoing selects as part of the data load process then you'd be forcing\nsequential scans for every query, which would explain why it gets\nslower as the table gets bigger.\n\nCheers,\n Steve\n\n",
"msg_date": "Thu, 28 Sep 2006 11:24:59 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:[email protected]] On Behalf Of Carlo\nStonebanks\n> Subject: [PERFORM] Performace Optimization for Dummies\n> \n> At this early stage in the project, we are initializing our portal's \n> database with millions of rows of imported data in over 50 different \n> flattened tables; each table's structure is unique to the \n> data provider. \n> This requires a pretty complex import program, because the \n> data must be \n> matched semantically, not literally. Even with all of the expression \n> matching and fuzzy logic in the code,our performance \n> statistics show that \n> the program spends over 75% of its time in SQL queries \n> looking for matching \n> and/or duplicate data.\n> \n> The import is slow - and degrades as the tables grow. \n\nSo your program first transforms the data and then inserts it? And it is\nthe transforming process which is running select statements that is slow?\nIf that is the case you could use duration logging to find the slow select\nstatement, and then you could post an EXPLAIN ANALYZE of the select. \n\nOne question off the top of my head is are you using regular expressions for\nyour fuzzy logic if so do your indexes have the right operator classes?\n(see http://www.postgresql.org/docs/8.1/static/indexes-opclass.html)\n\nDave\n\n",
"msg_date": "Thu, 28 Sep 2006 13:26:49 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
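For the duration-logging suggestion, a minimal example: log_min_duration_statement takes a threshold in milliseconds and can be set in postgresql.conf or, by a superuser, per session; statements slower than the threshold are then written to the server log together with their runtime.

-- log every statement that takes longer than 250 ms in this session
SET log_min_duration_statement = 250;

A value of 0 logs every statement and -1 disables the feature.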
{
"msg_contents": "On Thu, Sep 28, 2006 at 10:11:31AM -0700, Joshua D. Drake wrote:\n> > 4) Can anyone recommend any commercial PostgreSQL service providers that may \n> > be able to swiftly come in and assist us with our performance issues?\n> \n> http://www.commandprompt.com/ (disclaimer, I am an employee)\n \nYou forgot us. :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 13:34:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On Thu, Sep 28, 2006 at 01:47:44PM -0400, Carlo Stonebanks wrote:\n> > How are you loading the tables? Copy? Insert?\n> \n> Once the data is transformed, it is inserted. I don't have stats, but the \n> programs visual feedback does not spend a lot of time on the \"inserting \n> data\" message. Then again, if there is an asynchronous component to an \n> insert, perhaps I am not seeing how slow an insert really is until I query \n> the table.\n \nWell, individual inserts are slow, especially if they're not wrapped up\nin a transaction. And you also mentioned checking for dupes. I suspect\nthat you're not going to find any huge gains in tuning the database...\nit sounds like the application (as in: how it's using the database) is\nwhat needs help.\n\n> >> work_mem = 32768\n> >\n> > Depending on what you are doing, this is could be to low or to high.\n> \n> Is this like \"You could be too fat or too thin\"? Aren't you impressed with \n> the fact that I managed to pick the one number that was not right for \n> anything?\n\nFor what you're doing, it's probably fine where it is... but while\nyou're in the single-thread case, you can safely make that pretty big\n(like 1000000).\n\n> >\n> >> maintenance_work_mem = 32768\n> >> checkpoint_segments = 128\n> >> effective_cache_size = 10000\n> >\n> > This coudl probably be higher.\n\nI'd suggest setting it to about 3G, or 375000.\n> >\n> >> random_page_cost = 3\n> >> stats_start_collector = on\n> >> stats_command_string = on\n> >> stats_row_level = on\n> >> autovacuum = on\n> >\n> > Stats are a hit... you need to determine if you actually need them.\n> \n> Unfortunately, this is the only way I know of of getting the query string to \n> appear in the PostgreSQL server status display. While trying to figure out \n> what is slowing things down, having that is really helpful. I also imagined \n> that this sort of thing would be a performance hit when you are getting lots \n> of small, concurrent queries. In my case, we have queries which are taking \n> around a second to perform outer joins. They aren't competing with any other \n> requests as the site is not running, we are just running one app to seed the \n> data.\n\nstats_command_string can extract a huge penalty pre-8.2, on the order of\n30%. I'd turn it off unless you *really* need it. Command logging (ie:\nlog_min_duration_statement) is much less of a burden.\n\nThe fact that you're doing outer joins while loading data really makes\nme suspect that the application needs to be changed for any real\nbenefits to be had. But you should still look at what EXPLAIN ANALYZE is\nshowing you on those queries; you might be able to find some gains\nthere.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 13:44:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On Thu, Sep 28, 2006 at 01:53:22PM -0400, Carlo Stonebanks wrote:\n> > are you using the 'copy' interface?\n> \n> Straightforward inserts - the import data has to transformed, normalised and \n> de-duped by the import program. I imagine the copy interface is for more \n> straightforward data importing. These are - buy necessity - single row \n> inserts.\n\nBTW, stuff like de-duping is something you really want the database -\nnot an external program - to be doing. Think about loading the data into\na temporary table and then working on it from there.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 13:45:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
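A rough illustration of that staging-table approach (all table and column names here are invented, since the real schema is not shown): bulk-load the raw rows with COPY, knock out exact duplicates in one set-based statement, and only then merge the survivors.

CREATE TEMP TABLE staging_provider (LIKE provider);

COPY staging_provider FROM '/tmp/provider_batch.csv' WITH CSV;

-- remove rows that already exist on an exact key match, in a single pass
DELETE FROM staging_provider
 WHERE EXISTS (SELECT 1
                 FROM provider p
                WHERE p.last_name  = staging_provider.last_name
                  AND p.license_no = staging_provider.license_no);

INSERT INTO provider SELECT * FROM staging_provider;

The fuzzy matching still has to happen somewhere, but every comparison that can be expressed as a join or EXISTS runs far faster as one statement than as tens of thousands of individual selects.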
{
"msg_contents": "Carlo Stonebanks wrote:\n>> are you using the 'copy' interface?\n> \n> Straightforward inserts - the import data has to transformed, normalised and \n> de-duped by the import program. I imagine the copy interface is for more \n> straightforward data importing. These are - buy necessity - single row \n> inserts.\n> \n\nI know this is an answer to a question you didn't ask, but here it is. I\nwas once doing stuff where I processed log files and had to do many\nlookups to normalize the data before insertion.\n\nI started out doing everything in SQL and using postgresql tables and it\ntook a little over 24 hours to process 24 hours worth of data. Like you,\nit was single process, many lookups.\n\nI found a better way. I rewrote it (once in c#, again in python) and\nused hashtables/dictionaries instead of tables for the lookup data. For\nexample, I'd start by loading the data into hash tables (yes, this took\na *lot* of ram) then for each row I did something like:\n 1. is it in the hash table?\n 1. If not, insert it into the db\n 1. Insert it into the hashtable\n 2. Get the lookup field out of the hash table\n 3. Output normalized data\n\nThis allow me to create text files containing the data in COPY format\nwhich can then be inserted into the database at dramatically increased\nspeeds.\n\nMy first version in C# (mono) cut the time down to 6 hours for 24 hours\nworth of data. I tweaked the algorithms and rewrote it in Python and got\nit down to 45 min. (Python can't take all the credit for the performance\nboost, I used an improved technique that could have been done in C# as\nwell) This time included the time needed to do the copy and update the\nindexes.\n\nI created a version that also used gdb databases instead of hash tables.\nIt increased the time from 45 min to a little over an hour but decreased\nthe memory usage to something like 45MB (vs dozens or hundreds of MB per\nhashtable)\n-- \nMatthew Nuzum\nnewz2000 on freenode\n",
"msg_date": "Thu, 28 Sep 2006 13:47:26 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "\n> Are you wrapping all this in a transaction?\n\nYes, the transactions can typically wrap 1 to 10 single-table, single-row \ninserts and updates.\n\n\n> You're doing some dynamically generated selects as part of the\n> \"de-duping\" process? They're probably the expensive bit. What\n> do those queries tend to look like?\n\nWithout a doubt, this is the expensive bit.\n\n> Are you analysing the table periodically? If not, then you might\n> have statistics based on an empty table, or default statistics, which\n> might cause the planner to choose bad plans for those selects.\n\nNow there's something I didn't know - I thought that analysis and planning \nwas done with every select, and the performance benefit of prepared \nstatements was to plan-once, execute many. I can easily put in a periodic \nanalyse statement. I obviously missed how to use analyze properluy, I \nthought it was just human-readable output - do I understand correctly, that \nit can be used to get the SQL server to revaluate its plan based on newer \nstatistics - even on non-prepared queries?\n\n> Talking of which, are there indexes on the table? Normally you\n> wouldn't have indexes in place during a bulk import, but if you're\n> doing selects as part of the data load process then you'd be forcing\n> sequential scans for every query, which would explain why it gets\n> slower as the table gets bigger.\n\nThere are indexes for every obvious \"where this = that\" clauses. I don't \nbelieve that they will work for ilike expressions.\n\n>\n> Cheers,\n> Steve\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Thu, 28 Sep 2006 15:10:49 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> So your program first transforms the data and then inserts it? And it is\n> the transforming process which is running select statements that is slow?\n\nThere are cross-referencing and deduplication processes. Does this person \nhave an office at this exact address? In a similarily named building in the \nsame zip code? City? What is the similarity of the building or enterprise \nnames? Is there a person with a similar name with the same type of \nprofessional license nearby? We basically look for the statistical \nlikelyhood that they already exist to decide whether to update their data, \nor insert a new data element.\n\nThese are all extremely soft queries and require left outer joins with all \nof the related tables that would contain this data (the left outer join \ntells us whether the related element satisfied the search condition). As I \nmentioned, as the data comes in, we examine what we have to work with and \nmodify the tables and columns we can check - which is what I meant by \" the \nSQL queries are mutable, every imported row can change the structure of a \nSQL query as the program adds and subtracts search conditions to the SQL \ncommand text before execution.\"\n\n> If that is the case you could use duration logging to find the slow select\n> statement, and then you could post an EXPLAIN ANALYZE of the select.\n\nI'm pretty sure I know who the culprit is, and - like I said, it comes from \na section of code that creates a mutable statement. However, everyone is \nbeing so helpful and I should post this data as soon as I can.\n\n> One question off the top of my head is are you using regular expressions \n> for\n> your fuzzy logic if so do your indexes have the right operator classes?\n> (see http://www.postgresql.org/docs/8.1/static/indexes-opclass.html)\n\nI am using regular expressions and fuzzy logic, but mostly on the client \nside (I have a Tcl implementation of levenshtein, for example). I don't \nthink you can use indexes on functions such as levenshtein, because it \nrequires a parameter only available at execution time. The link you sent me \nwas very interesting - I will definitely reconsider my query logic if I can \noptimise regular expression searches on the server. Thanks!\n\nCarlo \n\n\n",
"msg_date": "Thu, 28 Sep 2006 15:47:15 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On 9/28/06, Carlo Stonebanks <[email protected]> wrote:\n> > are you using the 'copy' interface?\n>\n> Straightforward inserts - the import data has to transformed, normalised and\n> de-duped by the import program. I imagine the copy interface is for more\n> straightforward data importing. These are - buy necessity - single row\n> inserts.\n\nright. see comments below.\n\n> > thats a tough question. my gut says that windows will not scale as\n> > well as recent linux kernels in high load environments.\n>\n> But not in the case of a single import program trying to seed a massive\n> database?\n\nprobably not.\n\n> > hearing good things about the woodcrest. pre-woodcrest xeon (dempsey\n> > down) is outclassed by the opteron.\n>\n> Need to find a way to deterimine the Xeon type. The server was bought in\n> early 2006, and it looks like woodcrest was form July.\n\nok, there are better chips out there but again this is not something\nyou would really notice outside of high load environements.\n\n> > 1. can probably run fsync=off during the import\n> > 2. if import is single proecess, consider temporary bump to memory for\n> > index creation. or, since you have four cores consider having four\n> > processes import the data somehow.\n> > 3. turn off stats collector, stats_command_string, stats_row_level,\n> > and autovacuum during import.\n\nby the way, stats_command_string is a known performance killer that\niirc was improved in 8.2. just fyi.\n\nI would suggest at least consideration of retooling your import as\nfollows...it might be a fun project to learn some postgresql\ninternals. I'm assuming you are doing some script preprocessing in a\nlanguage like perl:\n\nbulk load denomalized tables into scratch tables into the postgresql\ndatabase. create indexes appropriate to the nomalization process\nremembering you can index on virtually any expression in postgresql\n(including regex substitution).\n\nuse sql to process the data. if tables are too large to handle with\nmonolithic queries, use cursors and/or functions to handle the\nconversion. now you can keep track of progress using pl/pgsql raise\nnotice command for example.\n\nmerlin\n",
"msg_date": "Thu, 28 Sep 2006 16:06:56 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
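As a sketch of the function-with-progress idea (the staging table name and the per-row work are placeholders, not the poster's actual code):

CREATE OR REPLACE FUNCTION normalize_providers() RETURNS integer AS $$
DECLARE
    rec  record;
    done integer := 0;
BEGIN
    FOR rec IN SELECT * FROM staging_provider LOOP
        -- matching / normalization logic for one row would go here
        done := done + 1;
        IF done % 10000 = 0 THEN
            RAISE NOTICE '% rows processed', done;
        END IF;
    END LOOP;
    RETURN done;
END;
$$ LANGUAGE plpgsql;

SELECT normalize_providers();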
{
"msg_contents": "The deduplication process requires so many programmed procedures that it \nruns on the client. Most of the de-dupe lookups are not \"straight\" lookups, \nbut calculated ones emplying fuzzy logic. This is because we cannot dictate \nthe format of our input data and must deduplicate with what we get.\n\nThis was one of the reasons why I went with PostgreSQL in the first place, \nbecause of the server-side programming options. However, I saw incredible \nperformance hits when running processes on the server and I partially \nabandoned the idea (some custom-buiilt name-comparison functions still run \non the server).\n\nI am using Tcl on both the server and the client. I'm not a fan of Tcl, but \nit appears to be quite well implemented and feature-rich in PostgreSQL. I \nfind PL/pgsql awkward - even compared to Tcl. (After all, I'm just a \nprogrammer... we do tend to be a little limited.)\n\nThe import program actually runs on the server box as a db client and \ninvolves about 3000 lines of code (and it will certainly grow steadily as we \nadd compatability with more import formats). Could a process involving that \nmuch logic run on the db server, and would there really be a benefit?\n\nCarlo\n\n\n\"\"Jim C. Nasby\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Thu, Sep 28, 2006 at 01:53:22PM -0400, Carlo Stonebanks wrote:\n>> > are you using the 'copy' interface?\n>>\n>> Straightforward inserts - the import data has to transformed, normalised \n>> and\n>> de-duped by the import program. I imagine the copy interface is for more\n>> straightforward data importing. These are - buy necessity - single row\n>> inserts.\n>\n> BTW, stuff like de-duping is something you really want the database -\n> not an external program - to be doing. Think about loading the data into\n> a temporary table and then working on it from there.\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Thu, 28 Sep 2006 16:15:03 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On 9/28/06, Carlo Stonebanks <[email protected]> wrote:\n> The deduplication process requires so many programmed procedures that it\n> runs on the client. Most of the de-dupe lookups are not \"straight\" lookups,\n> but calculated ones emplying fuzzy logic. This is because we cannot dictate\n> the format of our input data and must deduplicate with what we get.\n>\n> This was one of the reasons why I went with PostgreSQL in the first place,\n> because of the server-side programming options. However, I saw incredible\n> performance hits when running processes on the server and I partially\n> abandoned the idea (some custom-buiilt name-comparison functions still run\n> on the server).\n\nimo, the key to high performance big data movements in postgresql is\nmastering sql and pl/pgsql, especially the latter. once you get good\nat it, your net time of copy+plpgsql is going to be less than\ninsert+tcl.\n\nmerlin\n",
"msg_date": "Thu, 28 Sep 2006 16:55:57 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "\nOn Sep 28, 2006, at 12:10 PM, Carlo Stonebanks wrote:\n\n>\n>> Are you wrapping all this in a transaction?\n>\n> Yes, the transactions can typically wrap 1 to 10 single-table, \n> single-row\n> inserts and updates.\n>\n>\n>> You're doing some dynamically generated selects as part of the\n>> \"de-duping\" process? They're probably the expensive bit. What\n>> do those queries tend to look like?\n>\n> Without a doubt, this is the expensive bit.\n\nIf you could give some samples of those queries here I suspect\npeople could be a lot more helpful with some optimisations, or\nat least pinpoint where the performance issues are likely to be.\n\n>\n>> Are you analysing the table periodically? If not, then you might\n>> have statistics based on an empty table, or default statistics, which\n>> might cause the planner to choose bad plans for those selects.\n>\n> Now there's something I didn't know - I thought that analysis and \n> planning\n> was done with every select, and the performance benefit of prepared\n> statements was to plan-once, execute many. I can easily put in a \n> periodic\n> analyse statement. I obviously missed how to use analyze properluy, I\n> thought it was just human-readable output - do I understand \n> correctly, that\n> it can be used to get the SQL server to revaluate its plan based on \n> newer\n> statistics - even on non-prepared queries?\n\nI think you're confusing \"explain\" and \"analyze\". \"Explain\" gives you\nhuman readable output as to what the planner decided to do with the\nquery you give it.\n\n\"Analyze\" samples the data in tables and stores the statistical \ndistribution\nof the data, and estimates of table size and that sort of thing for the\nplanner to use to decide on a good query plan. You need to run\nanalyze when the statistics or size of a table has changed \nsignificantly,\nso as to give the planner the best chance of choosing an appropriate\nplan.\n\nIf you're not running analyze occasionally then the planner will be\nworking on default stats or empty table stats and will tend to avoid\nindexes. I don't know whether autovacuum will also analyze tables\nfor you automagically, but it would be a good idea to analyze the table\nevery so often, especially early on in the load - as the stats \ngathered for\na small table will likely give painful performance once the table has\ngrown a lot.\n\n>\n>> Talking of which, are there indexes on the table? Normally you\n>> wouldn't have indexes in place during a bulk import, but if you're\n>> doing selects as part of the data load process then you'd be forcing\n>> sequential scans for every query, which would explain why it gets\n>> slower as the table gets bigger.\n>\n> There are indexes for every obvious \"where this = that\" clauses. I \n> don't\n> believe that they will work for ilike expressions.\n\nIf you're doing a lot of \"where foo ilike 'bar%'\" queries, with the \npattern\nanchored to the left you might want to look at using a functional index\non lower(foo) and rewriting the query to look like \"where lower(foo) \nlike\nlower('bar%')\".\n\nSimilarly if you have many queries where the pattern is anchored\nat the right of the string then a functional index on the reverse of the\nstring can be useful.\n\nCheers,\n Steve\n\n",
"msg_date": "Thu, 28 Sep 2006 14:04:21 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
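A concrete version of that suggestion, using a made-up table: index the expression, then phrase the query against the same expression. The pattern_ops operator class is what lets a left-anchored LIKE use the index when the database is not in the C locale; note also that 8.1 has no built-in reverse() function, so the right-anchored trick needs a user-defined function first.

CREATE INDEX provider_name_lower_idx
    ON provider (lower(name) text_pattern_ops);

-- left-anchored, case-insensitive search that can use the index
SELECT * FROM provider WHERE lower(name) LIKE 'bar%';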
{
"msg_contents": "Lots of great info here, I will see what applies to my situation. However, I \ndon't see bulk inserts of the tables working, because all of the tables need \nto be refreshed as values to deduplicate and match will change with every \nrow added. In order for this to work, i would have to write queries against \nthe hash tables. This is where something like MySQL's in-memory tables would \nhave come in handy...\n\nWhat is GDB?\n\nCarlo\n\n\"Matthew Nuzum\" <[email protected]> wrote in message \nnews:[email protected]...\n> Carlo Stonebanks wrote:\n>>> are you using the 'copy' interface?\n>>\n>> Straightforward inserts - the import data has to transformed, normalised \n>> and\n>> de-duped by the import program. I imagine the copy interface is for more\n>> straightforward data importing. These are - buy necessity - single row\n>> inserts.\n>>\n>\n> I know this is an answer to a question you didn't ask, but here it is. I\n> was once doing stuff where I processed log files and had to do many\n> lookups to normalize the data before insertion.\n>\n> I started out doing everything in SQL and using postgresql tables and it\n> took a little over 24 hours to process 24 hours worth of data. Like you,\n> it was single process, many lookups.\n>\n> I found a better way. I rewrote it (once in c#, again in python) and\n> used hashtables/dictionaries instead of tables for the lookup data. For\n> example, I'd start by loading the data into hash tables (yes, this took\n> a *lot* of ram) then for each row I did something like:\n> 1. is it in the hash table?\n> 1. If not, insert it into the db\n> 1. Insert it into the hashtable\n> 2. Get the lookup field out of the hash table\n> 3. Output normalized data\n>\n> This allow me to create text files containing the data in COPY format\n> which can then be inserted into the database at dramatically increased\n> speeds.\n>\n> My first version in C# (mono) cut the time down to 6 hours for 24 hours\n> worth of data. I tweaked the algorithms and rewrote it in Python and got\n> it down to 45 min. (Python can't take all the credit for the performance\n> boost, I used an improved technique that could have been done in C# as\n> well) This time included the time needed to do the copy and update the\n> indexes.\n>\n> I created a version that also used gdb databases instead of hash tables.\n> It increased the time from 45 min to a little over an hour but decreased\n> the memory usage to something like 45MB (vs dozens or hundreds of MB per\n> hashtable)\n> -- \n> Matthew Nuzum\n> newz2000 on freenode\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Thu, 28 Sep 2006 17:04:31 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On Thu, Sep 28, 2006 at 02:04:21PM -0700, Steve Atkins wrote:\n> I think you're confusing \"explain\" and \"analyze\". \"Explain\" gives you\n> human readable output as to what the planner decided to do with the\n> query you give it.\n \nDon't forget about EXPLAIN ANALYZE, which is related to EXPLAIN but has\nnothing to do with the ANALYZE command.\n\n> indexes. I don't know whether autovacuum will also analyze tables\n> for you automagically, but it would be a good idea to analyze the table\n\nIt does.\n\n> >>Talking of which, are there indexes on the table? Normally you\n> >>wouldn't have indexes in place during a bulk import, but if you're\n> >>doing selects as part of the data load process then you'd be forcing\n> >>sequential scans for every query, which would explain why it gets\n> >>slower as the table gets bigger.\n> >\n> >There are indexes for every obvious \"where this = that\" clauses. I \n> >don't\n> >believe that they will work for ilike expressions.\n> \n> If you're doing a lot of \"where foo ilike 'bar%'\" queries, with the \n> pattern\n> anchored to the left you might want to look at using a functional index\n> on lower(foo) and rewriting the query to look like \"where lower(foo) \n> like\n> lower('bar%')\".\n> \n> Similarly if you have many queries where the pattern is anchored\n> at the right of the string then a functional index on the reverse of the\n> string can be useful.\n\ntsearch might prove helpful... I'm not sure how it handles substrings.\n\nSomething else to consider... databases love doing bulk operations. It\nmight be useful to load prospective data into a temporary table, and\nthen do as many operations as you can locally (ie: within the database)\non that table, hopefully eleminating as many candidate rows as possible\nalong the way.\n\nI also suspect that running multiple merge processes at once would help.\nRight now, your workload looks something like this:\n\nclient sends query database is idle\nclient is idle database runs query\nclient gets query back database is idle\n\nOversimplification, but you get the point. There's a lot of time spent\nwaiting on each side. If the import code is running on the server, you\nshould probably run one import process per CPU. If it's on an external\nserver, 2 per CPU would probably be better (and that might be faster\nthan running local on the server at that point).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 18:52:43 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> Lots of great info here, I will see what applies to my situation. However, I \n> don't see bulk inserts of the tables working, because all of the tables need \n> to be refreshed as values to deduplicate and match will change with every \n> row added. In order for this to work, i would have to write queries against \n> the hash tables. This is where something like MySQL's in-memory tables would \n> have come in handy...\n> \n> What is GDB?\n> \n> Carlo\n\nSorry, meant GDBM (disk based hash/lookup table).\n\nWith Postgres if your tables fit into RAM then they are in-memory as\nlong as they're actively being used.\n\nHashtables and GDBM, as far as I know, are only useful for key->value\nlookups. However, for this they are *fast*. If you can figure out a way\nto make them work I'll bet things speed up.\n-- \nMatthew Nuzum\nnewz2000 on freenode\n",
"msg_date": "Thu, 28 Sep 2006 22:08:37 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> Something else to consider... databases love doing bulk operations. It\n> might be useful to load prospective data into a temporary table, and\n> then do as many operations as you can locally (ie: within the database)\n> on that table, hopefully eleminating as many candidate rows as possible\n> along the way.\n\nI wish this would work... it was definitely something I considered early on \nin the project. Even thinking of explaining why it won't work is giving me a \nheadache...\n\nBut I sure wish it would. \n\n\n",
"msg_date": "Fri, 29 Sep 2006 00:30:23 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> imo, the key to high performance big data movements in postgresql is\n> mastering sql and pl/pgsql, especially the latter. once you get good\n> at it, your net time of copy+plpgsql is going to be less than\n> insert+tcl.\n\nIf this implies bulk inserts, I'm afraid I have to consider something else. \nAny data that has been imported and dedpulicated has to be placed back into \nthe database so that it can be available for the next imported row (there \nare currently 16 tables affected, and more to come). If I was to cache all \ninserts into a seperate resource, then I would have to search 32 tables - \nthe local pending resources, as well as the data still in the system. I am \nnot even mentioning that imports do not just insert rows, they could just \nrows, adding their own complexity. \n\n\n",
"msg_date": "Fri, 29 Sep 2006 00:37:37 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> by the way, stats_command_string is a known performance killer that\n> iirc was improved in 8.2. just fyi.\n\nThis is a handy fact, I will get on this right away.\n\n> bulk load denomalized tables into scratch tables into the postgresql\n> database. create indexes appropriate to the nomalization process\n> remembering you can index on virtually any expression in postgresql\n> (including regex substitution).\n\n> use sql to process the data. if tables are too large to handle with\n> monolithic queries, use cursors and/or functions to handle the\n> conversion. now you can keep track of progress using pl/pgsql raise\n> notice command for example.\n\nFor reasons I've exlained elsewhere, the import process is not well suited \nto breaking up the data into smaller segments. However, I'm interested in \nwhat can be indexed. I am used to the idea that indexing only applies to \nexpressions that allows the data to be sorted, and then binary searches can \nbe performed on the sorted list. For example, I can see how you can create \nan index to support:\n\nwhere foo like 'bar%'\n\nBut is there any way to create an index expression that will help with:\n\nwhere foo like '%bar%'?\n\nI don't see it - but then again, I'm ready to be surprised!\n\nCarlo \n\n\n",
"msg_date": "Fri, 29 Sep 2006 00:46:54 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> Don't forget about EXPLAIN ANALYZE, which is related to EXPLAIN but has\n> nothing to do with the ANALYZE command.\n\nAh, hence my confusion. Thanks for the clarification... I never knew about \nANALYZE as a seperate command. \n\n\n",
"msg_date": "Fri, 29 Sep 2006 00:49:31 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> But is there any way to create an index expression that will help with:\n> where foo like '%bar%'?\n\nIf you are concerned about that, what you are probably really looking\nfor is full-text indexing. See contrib/tsearch2 for our current best\nanswer to that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 00:49:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies "
},
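A rough sketch of the old contrib/tsearch2 usage on 8.1 (the module must be installed into the database first; table and column names are invented):

ALTER TABLE provider ADD COLUMN name_fti tsvector;
UPDATE provider SET name_fti = to_tsvector('default', name);
CREATE INDEX provider_name_fti_idx ON provider USING gist (name_fti);

SELECT * FROM provider
 WHERE name_fti @@ to_tsquery('default', 'acme & clinic');

A trigger (the module provides one) would normally keep the tsvector column current as rows are inserted or updated.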
{
"msg_contents": ">> indexes. I don't know whether autovacuum will also analyze tables\n>> for you automagically, but it would be a good idea to analyze the table\n>\n> It does.\n\nSo, I have checked my log and I see an autovacuum running once every minute \non our various databases being hosted on the server - once every minute!\n\n From what I can see, autovacuum is hitting the db's in question about once \nevery five minutes. Does this imply an ANALYZE is being done automatically \nthat would meet the requirements we are talking about here? Is there any \nbenefit ot explicitly performing an ANALYZE?\n\n(Or does this go hand-in-and with turning off autovacuum...?)\n\n\n",
"msg_date": "Fri, 29 Sep 2006 01:01:51 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> We urgently need a major performance improvement. We are running the\n> PostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, \n> dual core\n> 3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what \n> type) disc\n> subsystem. Sorry about the long intro, but here are my questions:\n\nOthers have already drilled down to the way you do the inserts and \nstatistics etc., so I'll just point out:\n\nAre you fully utilizing all the 4 cores you have? Could you parallelize \nthe loading process, if you're currently running just one client? Are \nyou I/O bound or CPU bound?\n\n-- \nHeikki Linnakangas\nEnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 29 Sep 2006 10:12:43 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On Thu, 2006-09-28 at 12:44 -0400, Carlo Stonebanks wrote:\n\n> At this early stage in the project, we are initializing our portal's \n> database with millions of rows of imported data in over 50 different \n> flattened tables; each table's structure is unique to the data provider. \n> This requires a pretty complex import program, because the data must be \n> matched semantically, not literally. Even with all of the expression \n> matching and fuzzy logic in the code,our performance statistics show that \n> the program spends over 75% of its time in SQL queries looking for matching \n> and/or duplicate data.\n\nMy experience with that type of load process is that doing this\nrow-by-row is a very expensive approach and your results bear that out.\n\nIt is often better to write each step as an SQL statement that operates\non a set of rows at one time. The lookup operations become merge joins\nrather than individual SQL Selects via an index, so increase the\nefficiency of the lookup process by using bulk optimisations and\ncompletely avoiding any program/server call traffic. Data can move from\nstep to step by using Insert Selects into temporary tables, as Jim has\nalready suggested.\n\nThe SQL set approach is different to the idea of simply moving the code\nserver-side by dropping it in a function. That helps with the net\ntraffic but has other issues also. You don't need to use esoteric\nin-memory thingies if you use the more efficient join types already\navailable when you do set based operations (i.e. join all rows at once\nin one big SQL statement).\n\nYou can also improve performance by ordering your checks so that the\nones most likely to fail happen first.\n\nTrying to achieve a high level of data quality in one large project is\nnot often possible. Focus on the most critical areas of checking and get\nthat working first with acceptable performance, then layer on additional\nchecks while tuning. The complexity of the load programs you have also\nmeans they are susceptible to introducing data quality problems rather\nthan removing them, so an incremental approach will also aid debugging\nof the load suite. Dynamic SQL programs are particularly susceptible to\nthis kind of bug because you can't eyeball the code.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 29 Sep 2006 11:58:27 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Hi, Carlo,\n\nCarlo Stonebanks wrote:\n\n> We urgently need a major performance improvement.\n\nDid you think about putting the whole data into PostgreSQL using COPY in\na nearly unprocessed manner, index it properly, and then use SQL and\nstored functions to transform the data inside the database to the\ndesired result?\n\nWe're using this way for some 3rd-party databases we have to process\nin-house.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Fri, 29 Sep 2006 14:19:51 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Hi, Carlo,\n\nCarlo Stonebanks wrote:\n\n> From what I can see, autovacuum is hitting the db's in question about once \n> every five minutes. Does this imply an ANALYZE is being done automatically \n> that would meet the requirements we are talking about here? Is there any \n> benefit ot explicitly performing an ANALYZE?\n\nAutovacuum looks at the modification statistics (they count how much\nmodifications happened on the table), and decides whether it's time to\nVACUUM (reclaim empty space) and ANALYZE (update column value\ndistributions) the table.\n\nThe exact thresholds for Autovacuum to kick in are configurable, see the\ndocs.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Fri, 29 Sep 2006 14:25:33 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "In response to \"Carlo Stonebanks\" <[email protected]>:\n\n> >> indexes. I don't know whether autovacuum will also analyze tables\n> >> for you automagically, but it would be a good idea to analyze the table\n> >\n> > It does.\n> \n> So, I have checked my log and I see an autovacuum running once every minute \n> on our various databases being hosted on the server - once every minute!\n> \n> From what I can see, autovacuum is hitting the db's in question about once \n> every five minutes. Does this imply an ANALYZE is being done automatically \n> that would meet the requirements we are talking about here? Is there any \n> benefit ot explicitly performing an ANALYZE?\n> \n> (Or does this go hand-in-and with turning off autovacuum...?)\n\nIt's only checking to see if vacuum/analyze needs done every 5 minutes.\nIt may or may not do any actual work at that time, based on how much\nthe tables have changed. See:\nhttp://www.postgresql.org/docs/8.1/interactive/maintenance.html#AUTOVACUUM\n\nThis is a case, during your bulk loads, where autovacuum might actually\nhurt you. How many records are you inserting/updating in 5 minutes?\nYou may be exceeding autovacuum's ability to keep things clean.\n\nI can't say for sure, but I would suspect that you'd be better off not\nusing autovacuum until after the initial data loads are done. My\nguess is that you'll get better performance if you disable autovac and\nwrite manual vacuum/analyze into your load scripts. Exactly how often\nto have your script do it is something that will require testing to\nfigure out, but probably starting with every 100 or so, then adjust\nit up and down and see what works best.\n\nExplicitly performing a vacuum or analyze can be very beneficial,\nespecially if you know what kind of changes your creating in the data.\n(Now that I think of it, there's no reason to disable autovac, as it\nwill notice if you've just manually vacuumed a table and not do it\nagain.) If you know that you're radically changing the kind of data\nin a table, manually running analyze is a good idea. If you know that\nyou're creating a lot of dead tuples, manually vacuuming is a good\nidea. Especially during a big data load where these changes might be\ntaking place faster than autovac notices.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Fri, 29 Sep 2006 08:58:02 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "On 9/29/06, Carlo Stonebanks <[email protected]> wrote:\n> For reasons I've exlained elsewhere, the import process is not well suited\n> to breaking up the data into smaller segments. However, I'm interested in\n> what can be indexed. I am used to the idea that indexing only applies to\n> expressions that allows the data to be sorted, and then binary searches can\n> be performed on the sorted list. For example, I can see how you can create\n> an index to support:\n>\n> where foo like 'bar%'\n>\n> But is there any way to create an index expression that will help with:\n>\n> where foo like '%bar%'?\n>\n> I don't see it - but then again, I'm ready to be surprised!\n\nusing standard (btree) index, you can create an index on any constant\nexpression. so, you can create in index that matches '%bar%, but if\nyou also want to match '%bat%', you need another index. there are\nother exotic methods like t_search and gist approach which may or may\nnot be suitable.\n\nregarding your import process, you came to this list and asked for\nadvice on how to fix your particular problem. tweaking\npostgresql.conf, etc will get you incremental gains but are unlikely\nto have a huge impact. as i understand it, your best shot at\nimprovement using current process is to:\n1. fork your import somhow to get all 4 cores running\n2. write the code that actually does the insert in C and use the\nparameterized prepared statement.\n\nhowever, your general approach has been 'please give me advice, but\nonly the advice that i want'. if you really want to fix your problem,\ngive more specific details about your import and open the door to\nimprovements in your methodology which i suspect is not optimal. you\nconcluded that client side coding was the way to go, but here you are\nasking how to make it work. if you want help (and there are some\nextremely smart people here who may give you world class advice for\nfree), you need to lay your cards on the table and be willing to\nconsider alternative solutions. you may find that a few properly\nconsructed queries will knock out 75% of your code and running time.\n\nremember often the key to optimization is choosing the right algorithm\n\nmerlin\n",
"msg_date": "Fri, 29 Sep 2006 10:39:20 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "I have loaded three of the four cores by running three different versions of \nthe import program to import three different segments of the table to \nimport. The server jumps to 75% usage, with three postgresql processes \neating up 25% each., the actual client itself taking up just a few ticks.\n\n\"Heikki Linnakangas\" <[email protected]> wrote in message \nnews:[email protected]...\n> Carlo Stonebanks wrote:\n>> We urgently need a major performance improvement. We are running the\n>> PostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, dual \n>> core\n>> 3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what type) \n>> disc\n>> subsystem. Sorry about the long intro, but here are my questions:\n>\n> Others have already drilled down to the way you do the inserts and \n> statistics etc., so I'll just point out:\n>\n> Are you fully utilizing all the 4 cores you have? Could you parallelize \n> the loading process, if you're currently running just one client? Are you \n> I/O bound or CPU bound?\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Mon, 2 Oct 2006 22:43:46 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> My experience with that type of load process is that doing this\n> row-by-row is a very expensive approach and your results bear that out.\n\nI expected this, and had warned the client before the project started that \nthis is exactly where SQL underperforms.\n\n> It is often better to write each step as an SQL statement that operates\n> on a set of rows at one time.\n\nThe problem with this approach is that every row of data is dependent on the \nprevious row's data being validated and imported. e.g.\n\nImport Row 1:\nJohn Q Smith\nFoobar Corp\n123 Main St,\nBigtown, MD 12345-6789\n\nImport Row 2:\nJohn Quincy Smith\nFuzzyLoginc Inc\n123 Main St, Suite 301\nBigtown, MD 12345-6789\n\nImport Row 3:\nBobby Jones\nFoobar Corp\n123 Main Strett Suite 300,\nBigtown, MD 12345\n\nEvery row must be imported into the table so that the next row may see the \ndata and consider it when assigning ID's to the name, company and address. \n(all data must be normalised) How can this be done using set logic?\n\n> You can also improve performance by ordering your checks so that the\n> ones most likely to fail happen first.\n\nAlready done - I believe the problem is definitely in the navigational \naccess model. What I am doing now makes perfect sense as far as the logic of \nthe process goes - any other developer will read it and understand what is \ngoing on. At 3000 lines of code, this will be tedious, but understandable. \nBut SQL hates it.\n\n> Trying to achieve a high level of data quality in one large project is\n> not often possible. Focus on the most critical areas of checking and get\n> that working first with acceptable performance, then layer on additional\n> checks while tuning. The complexity of the load programs you have also\n> means they are susceptible to introducing data quality problems rather\n> than removing them, so an incremental approach will also aid debugging\n> of the load suite.\n\nI couldn't agree more.\n\nCarlo\n\n\n",
"msg_date": "Mon, 2 Oct 2006 23:01:04 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> 1. fork your import somhow to get all 4 cores running\n\nThis is already happening, albeit only 3. No improvement - it appears we \nhave taken the same problem, and divided it by 3. Same projected completion \ntime. this is really curious, to say the least.\n\n> 2. write the code that actually does the insert in C and use the\n> parameterized prepared statement.\n\nI had already tried the paremetrised prepare statement; I had mentioned that \nI was surprised that it had no effect. No one here seemed surprised, or at \nleast didn't think of commenting on it.\n\n> however, your general approach has been 'please give me advice, but\n> only the advice that i want'.\n\nI'm sorry I don't understand - I had actually originally come asking four \nquestions asking for recommendations and opinions on hardware, O/S and \ncommercial support. I did also ask for comments on my config setup.\n\n\n",
"msg_date": "Mon, 2 Oct 2006 23:01:50 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> Did you think about putting the whole data into PostgreSQL using COPY in\n> a nearly unprocessed manner, index it properly, and then use SQL and\n> stored functions to transform the data inside the database to the\n> desired result?\n\nThis is actually what we are doing. The slowness is on the row-by-row \ntransformation. Every row reqauires that all the inserts and updates of the \npvious row be committed - that's why we have problems figuring out how to \nuse this using SQL set logic.\n\nCarlo \n\n\n",
"msg_date": "Mon, 2 Oct 2006 23:03:31 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Hi, Carlo,\n\nCarlo Stonebanks wrote:\n>> Did you think about putting the whole data into PostgreSQL using COPY in\n>> a nearly unprocessed manner, index it properly, and then use SQL and\n>> stored functions to transform the data inside the database to the\n>> desired result?\n> \n> This is actually what we are doing. The slowness is on the row-by-row \n> transformation. Every row reqauires that all the inserts and updates of the \n> pvious row be committed - that's why we have problems figuring out how to \n> use this using SQL set logic.\n\nMaybe \"group by\", \"order by\", \"distinct on\" and hand-written functions\nand aggregates (like first() or best()) may help.\n\nYou could combine all relevant columns into an user-defined compund\ntype, then group by entity, and have a self-defined aggregate generate\nthe accumulated tuple for each entity.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Tue, 03 Oct 2006 10:15:27 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "Hi, Carlo,\n\nCarlo Stonebanks wrote:\n\n>> Trying to achieve a high level of data quality in one large project is\n>> not often possible. Focus on the most critical areas of checking and get\n>> that working first with acceptable performance, then layer on additional\n>> checks while tuning. The complexity of the load programs you have also\n>> means they are susceptible to introducing data quality problems rather\n>> than removing them, so an incremental approach will also aid debugging\n>> of the load suite.\n> \n> I couldn't agree more.\n\nI still think that using a PL in the backend might be more performant\nthan having an external client, alone being the SPI interface more\nefficient compared to the network serialization for external applications.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Tue, 03 Oct 2006 10:20:45 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> I still think that using a PL in the backend might be more performant\n> than having an external client, alone being the SPI interface more\n> efficient compared to the network serialization for external applications.\n\nI would actually love for this to work better, as this is technology that I \nwould like to develop in general - I see db servers with strong server-side \nprogramming languages as being able to operate as application servers, with \nthe enterprises business logic centralised on the server.\n\nThe import routine that I wrote will actually work on the server as well - \nit will detect the presence of the spi_ calls, and replace the pg_* calls \nwith spi_* calls. So, you see this WAS my intention.\n\nHowever, the last time I tried to run something that complex from the db \nserver, it ran quite slowly compared to from a client. This may have had \nsomething to do with the client that I used to call the stored procedure - I \nthought that perhaps the client created an implicit transaction around my \nSQL statement to allow a rollback, and all of the updates and inserts got \nbacked up in a massive transaction queue that took forever to commit.\n\nCarlo \n\n\n",
"msg_date": "Tue, 3 Oct 2006 05:32:24 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
},
{
"msg_contents": "> Maybe \"group by\", \"order by\", \"distinct on\" and hand-written functions\n> and aggregates (like first() or best()) may help.\n\nWe use these - we have lexical analysis functions which assign a rating to \neach row in a set, and the likelyhood that the data is a match, and then we \nsort our results.\n\nI thought this would be the cause of the slowdowns - and it is, but a very \nsmall part of it. I have identified the problem code, and the problems are \nwithin some very simple joins. I have posted the code under a related topic \nheader. I obviously have a few things to learn about optimising SQL joins.\n\nCarlo\n\n>\n> You could combine all relevant columns into an user-defined compund\n> type, then group by entity, and have a self-defined aggregate generate\n> the accumulated tuple for each entity.\n>\n> Markus\n> -- \n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in Europe! www.ffii.org\n> www.nosoftwarepatents.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Tue, 3 Oct 2006 05:36:23 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performace Optimization for Dummies"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have a Java application using hiber nate that connects to PostgreSQl 8.1.4.\n\nSo, looking forward the log files I got the following error:\n\n2006-09-28 09:24:25 LOG: unexpected EOF on client connection\n2006-09-28 09:26:06 LOG: unexpected EOF on client connection\n2006-09-28 09:48:24 LOG: unexpected EOF on client connection\n2006-09-28 13:41:14 LOG: unexpected EOF on client connection\n2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t\n2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n\n2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t\n2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n\nCould anyone tell me what is it and how to solve the problem. \n\nI have a Win2000 box machine with Postgre 8.1.4, Hiber Nate and Java application all on the same box.\n\nThanks in advice.\n \t\nLeandro Guimarães dos Santos\t\nEmail: [email protected]\t\n\n",
"msg_date": "Thu, 28 Sep 2006 15:49:36 -0300",
"msg_from": "=?iso-8859-1?Q?Leandro_Guimar=E3es_dos_Santos?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Performace Optimization for Dummies"
}
] |
[
{
"msg_contents": "[email protected]\nBcc: [email protected]\nSubject: Re: RES: [PERFORM] Performace Optimization for Dummies\nReply-To: \nIn-Reply-To: <EC1DBC210AF6B54DB7DAF8714C9743B70530DF54@mesctx03mtzvp.contax-br.contax.root>\nX-Operating-System: FreeBSD 6.0-RELEASE-p4 amd64\nX-Distributed: Join the Effort! http://www.distributed.net\n\nPlease start a new thread instead of replying to an existing one - it\nscrews up a lot of people's mail readers.\n\nSince this is likely a java issue, I'm moving this to pgsql-jdbc, though\nthis does make me wonder if you have some kind of network issues. Are\nyou using a firewall? What address are you connecting to; is it\n'localhost'/127.0.0.1 or something else?\n\nOn Thu, Sep 28, 2006 at 03:49:36PM -0300, Leandro Guimar?es dos Santos wrote:\n> Hi All,\n> \n> I have a Java application using hiber nate that connects to PostgreSQl 8.1.4.\n> \n> So, looking forward the log files I got the following error:\n> \n> 2006-09-28 09:24:25 LOG: unexpected EOF on client connection\n> 2006-09-28 09:26:06 LOG: unexpected EOF on client connection\n> 2006-09-28 09:48:24 LOG: unexpected EOF on client connection\n> 2006-09-28 13:41:14 LOG: unexpected EOF on client connection\n> 2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t\n> 2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n> \n> 2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n> 2006-09-28 13:59:29 LOG: could not receive data from client: No connection could be made because the target machine actively refused it.\t\n> 2006-09-28 13:59:29 LOG: unexpected EOF on client connection\n> \n> Could anyone tell me what is it and how to solve the problem. \n> \n> I have a Win2000 box machine with Postgre 8.1.4, Hiber Nate and Java application all on the same box.\n> \n> Thanks in advice.\n> \t\n> Leandro Guimar?es dos Santos\t\n> Email: [email protected]\t\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 13:54:21 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Honorable members of the list,\n\n\nI would like to share with you a side effect that I discovered today on\nour postgresql 8.1 server.\nWe ve been running this instance with PITR for now 2 months without any\nproblems.\nThe wal's are copied to a remote machine with the pg_archive_command and\nlocally to some other directory.\nFor some independant reasons we made the remote machine unreachable for\nsome hours. The pg_archive_command returned as expected a failure value.\n\nNow to what puzzles me:\nthe load on the box that normally is kept between 0.7 and 1.5 did\nsuddenly rise to 4.5 -5.5 and the processes responsiveness got bad.\nThe dir pg_xlog has plenty of space to keep several day of wal's.\nthere was no unfinished backup's or whatever that could have apparently\nslowed the machine that much.\n\nSo the question is: is there a correlation between not getting the wal's\narchived and this \"massive\" load growth?\nIn my understanding, as the pgl engine has nothing more to do with the\nfilled up log except just to make sure it's archived correctly ther\nshould not be any significant load increase for this reason. Looking at\nthe logs the engine tried approx. every 3 minutes to archive the wal's.\nIs this behaviour expected, If it is then is it reasonnable to burden\nthe engine that is already in a inexpected situation with some IMHO\nunecessary load increase.\n\nyour thougths are welcome\n\nCedric\n\n\n",
"msg_date": "Thu, 28 Sep 2006 21:41:38 +0200",
"msg_from": "Cedric Boudin <[email protected]>",
"msg_from_op": true,
"msg_subject": "archive wal's failure and load increase."
},
{
"msg_contents": "Cedric Boudin <[email protected]> writes:\n> So the question is: is there a correlation between not getting the wal's\n> archived and this \"massive\" load growth?\n\nShouldn't be. Do you want to force the condition again and try to see\n*where* the cycles are going? \"High load factor\" alone is a singularly\nuseless report. Also, how many unarchived WAL files were there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Sep 2006 17:28:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase. "
},
{
"msg_contents": "On Thu, 2006-09-28 at 21:41 +0200, Cedric Boudin wrote:\n\n> I would like to share with you a side effect that I discovered today on\n> our postgresql 8.1 server.\n> We ve been running this instance with PITR for now 2 months without any\n> problems.\n> The wal's are copied to a remote machine with the pg_archive_command and\n> locally to some other directory.\n> For some independant reasons we made the remote machine unreachable for\n> some hours. The pg_archive_command returned as expected a failure value.\n> \n> Now to what puzzles me:\n> the load on the box that normally is kept between 0.7 and 1.5 did\n> suddenly rise to 4.5 -5.5 and the processes responsiveness got bad.\n> The dir pg_xlog has plenty of space to keep several day of wal's.\n> there was no unfinished backup's or whatever that could have apparently\n> slowed the machine that much.\n> \n> So the question is: is there a correlation between not getting the wal's\n> archived and this \"massive\" load growth?\n> In my understanding, as the pgl engine has nothing more to do with the\n> filled up log except just to make sure it's archived correctly ther\n> should not be any significant load increase for this reason. Looking at\n> the logs the engine tried approx. every 3 minutes to archive the wal's.\n> Is this behaviour expected, If it is then is it reasonnable to burden\n> the engine that is already in a inexpected situation with some IMHO\n> unecessary load increase.\n\narchiver will attempt to run archive_command 3 times before it fails.\nSuccess or failure should be visible in the logs. archiver will try this\na *minimum* of every 60 seconds, so if there is a delay of 3 minutes\nthen I'm guessing the archive_command itself has some kind of timeout on\nit before failure. That should be investigated.\n\nIf archive_command succeeds then archiver will process all outstanding\nfiles. If it fails then it stops trying - it doesn't retry *every*\noutstanding file, so the retries themselves do not grow in cost as the\nnumber of outstanding files increases. So IMHO the archiver itself is\nnot the source of any issues.\n\nThere is one negative effect from having outstanding archived files:\nEvery time we switch xlogs we would normally reuse an existing file.\nWhen those files are locked because of pending archive operations we are\nunable to do that, so must create a new xlog file, zero it and fsync -\nlook at xlog.c:XLogFileInit(). While that occurs all WAL write\noperations will be halted and the log jam that results probably slows\nthe server down somewhat, since we peform those actions with\nWALWriteLock held.\n\nWe could improve that situation by \n1. (server change) notifying bgwriter that we have an archiver failure\nsituation and allow new xlogs to be created as a background task. We\ndiscussed putting PreallocXlogFiles() in bgwriter once before, but I\nthink last time we discussed that idea it was rejected, IIRC.\n\n2. (server or manual change) preallocating more xlog files\n\n3. (user change) enhancing the archive_command script so that it begins\nreusing files once archiving has been disabled for a certain length of\ntime/size of xlog directory. You can do this by having a script try the\narchive operation and if it fails (and has been failing) then return a\n\"success\" message to the server to allow it reuse files. 
That means you\nstart dropping WAL data and hence would prevent a recovery from going\npast the point you started dropping files - I'd never do that, but some\nhave argued previously that might be desirable.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 29 Sep 2006 11:58:33 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase."
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> We discussed putting PreallocXlogFiles() in bgwriter once before, but I\n> think last time we discussed that idea it was rejected, IIRC.\n\nWe already do that: it's called a checkpoint. If the rate of WAL\ngeneration was more than checkpoint_segments per checkpoint_timeout,\nthen indeed there would be a problem with foreground processes having to\nmanufacture WAL segment files for themselves, but it would be a bursty\nthing (ie, problem goes away after a checkpoint, then comes back).\n\nIt's a good thought but I don't think the theory holds water for\nexplaining Cedric's problem, unless there was *also* some effect\npreventing checkpoints from completing ... which would be a much more\nserious problem than the archiver failing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 10:29:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase. "
},
{
"msg_contents": "On Fri, 2006-09-29 at 10:29 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > We discussed putting PreallocXlogFiles() in bgwriter once before, but I\n> > think last time we discussed that idea it was rejected, IIRC.\n> \n> We already do that: it's called a checkpoint. \n\nYes, but not enough.\n\nPreallocXlogFiles() adds only a *single* xlog file, sometimes.\n\nOn a busy system, that would be used up too quickly to make a\ndifference. After that the effect of adding new files would continue as\nsuggested.\n\nIf it did add more than one... it might work better for this case.\n\n> It's a good thought but I don't think the theory holds water for\n> explaining Cedric's problem, unless there was *also* some effect\n> preventing checkpoints from completing ... which would be a much more\n> serious problem than the archiver failing.\n\nStill the best explanation for me.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 29 Sep 2006 15:53:40 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase."
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> PreallocXlogFiles() adds only a *single* xlog file, sometimes.\n\nHm, you are right. I wonder why it's so unaggressive ... perhaps\nbecause under normal circumstances we soon settle into a steady\nstate where each checkpoint recycles the right number of files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 11:55:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase. "
},
{
"msg_contents": "On Fri, 2006-09-29 at 11:55 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > PreallocXlogFiles() adds only a *single* xlog file, sometimes.\n> \n> Hm, you are right. I wonder why it's so unaggressive ... perhaps\n> because under normal circumstances we soon settle into a steady\n> state where each checkpoint recycles the right number of files.\n\nThat is normally the case, yes. But only for people that have correctly\njudged (or massively overestimated) what checkpoint_segments should be\nset at.\n\nCurrently, when we don't have enough we add one, maybe. When we have too\nmany we truncate right back to checkpoint_segments as quickly as\npossible.\n\nSeems like we should try and automate that completely for 8.3:\n- calculate the number required by keeping a running average which\nignores a single peak value, yet takes 5 consistently high values as the\nnew average\n- add more segments with increasing aggressiveness 1,1,2,3,5,8 segments\nat a time when required \n- handle out-of-space errors fairly gracefully by waking up the\narchiver, complaining to the logs and then eventually preventing\ntransactions from writing to logs rather than taking server down\n- shrink back more slowly by halving the difference between the\noverlimit and the typical value\n- get rid of checkpoint_segments GUC\n\nThat should handle peaks caused by data loads, archiving interruptions\nor other peak loadings.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 02 Oct 2006 17:25:13 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: archive wal's failure and load increase."
}
] |
[
{
"msg_contents": "Hey guys, I've got a query that is inherently expensive, because it has to \ndo some joins against some large tables. But it's currently *very* \nexpensive (at least for a web app), and I've been struggling in vain all \nday to knock the cost down. Annoyingly, the least costly version I've come \nup with remains my first attempt, and is the most straight-forward:\n\nexplain select\n \tdistinct public.album.id\nfrom\n \tpublic.album,public.albumjoin,public.track,umdb.node\nwhere\n \tnode.dir=2811\n \tand albumjoin.album = public.album.id\n \tand public.albumjoin.track = public.track.id\n \tand levenshtein(substring(public.track.name for 75),\n \t\tsubstring(node.file for 75)) <= 10\n \tand public.album.id in\n \t\t(select album from albumjoin group by album having count(*) between 15 and 25) \ngroup by public.album.id\nhaving count(*) >= 5;\n\n\n Unique (cost=991430.53..1013711.74 rows=425772 width=4)\n -> GroupAggregate (cost=991430.53..1012647.31 rows=425772 width=4)\n Filter: (count(*) >= 5)\n -> Sort (cost=991430.53..996373.93 rows=1977360 width=4)\n Sort Key: album.id\n -> Nested Loop (cost=513549.06..737866.68 rows=1977360 width=4)\n Join Filter: (levenshtein(\"substring\"((\"inner\".name)::text, 1, 75), \"substring\"(\"outer\".file, 1, 75)) <= 10)\n -> Index Scan using node_dir on node (cost=0.00..3.22 rows=16 width=40)\n Index Cond: (dir = 2811)\n -> Materialize (cost=513549.06..520153.61 rows=370755 width=25)\n -> Hash Join (cost=271464.72..510281.31 rows=370755 width=25)\n Hash Cond: (\"outer\".id = \"inner\".track)\n -> Seq Scan on track (cost=0.00..127872.69 rows=5111469 width=25)\n -> Hash (cost=268726.83..268726.83 rows=370755 width=8)\n -> Hash Join (cost=150840.51..268726.83 rows=370755 width=8)\n Hash Cond: (\"outer\".album = \"inner\".id)\n -> Seq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=8)\n -> Hash (cost=150763.24..150763.24 rows=30908 width=8)\n -> Hash Join (cost=127951.57..150763.24 rows=30908 width=8)\n Hash Cond: (\"outer\".id = \"inner\".album)\n -> Seq Scan on album (cost=0.00..12922.72 rows=425772 width=4)\n -> Hash (cost=127874.30..127874.30 rows=30908 width=4)\n -> HashAggregate (cost=126947.06..127565.22 rows=30908 width=4)\n Filter: ((count(*) >= 15) AND (count(*) <= 25))\n -> Seq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=4)\n\n\nI've tried adding a length(public.track.name) index and filtering \npublic.track to those rows where length(name) is within a few characters \nof node.file, but that actually makes the plan more expensive.\n\nIs there any hope to make things much cheaper? Unfortunately, I can't \nfilter out anything from the album or albumjoin tables.\n",
"msg_date": "Thu, 28 Sep 2006 15:18:56 -0700 (PDT)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "any hope for my big query?"
},
{
"msg_contents": "Instead of the IN, see if this is better:\n\nAND (SELECT count(*) FROM albumjoin aj WHERE aj.album = album.id)\nBETWEEN 15 AND 25.\n\n From a design standpoint, it probably makes sense to have a track_count\nfield in the album table that is kept up-to-date by triggers on\nalbumjoin.\n\nAnd some nits. :)\n\nI find it's a lot easier to call all id fields by the object name, ie:\nalbum_id, track_id, etc. Lets you do things like:\n\nFROM album a\n JOIN albumjoin aj USING album_id\n JOIN track t USING track_id\n\nUnless you've got a lot of tables in a query, table aliases (t, aj, and\na above) are your friend. :)\n\nFace it... camelCase just doesn't work worth anything in databases...\nunderscoresmakeitmucheasiertoreadthings. :)\n\nOn Thu, Sep 28, 2006 at 03:18:56PM -0700, Ben wrote:\n> Hey guys, I've got a query that is inherently expensive, because it has to \n> do some joins against some large tables. But it's currently *very* \n> expensive (at least for a web app), and I've been struggling in vain all \n> day to knock the cost down. Annoyingly, the least costly version I've come \n> up with remains my first attempt, and is the most straight-forward:\n> \n> explain select\n> \tdistinct public.album.id\n> from\n> \tpublic.album,public.albumjoin,public.track,umdb.node\n> where\n> \tnode.dir=2811\n> \tand albumjoin.album = public.album.id\n> \tand public.albumjoin.track = public.track.id\n> \tand levenshtein(substring(public.track.name for 75),\n> \t\tsubstring(node.file for 75)) <= 10\n> \tand public.album.id in\n> \t\t(select album from albumjoin group by album having count(*) \n> \t\tbetween 15 and 25) group by public.album.id\n> having count(*) >= 5;\n> \n> \n> Unique (cost=991430.53..1013711.74 rows=425772 width=4)\n> -> GroupAggregate (cost=991430.53..1012647.31 rows=425772 width=4)\n> Filter: (count(*) >= 5)\n> -> Sort (cost=991430.53..996373.93 rows=1977360 width=4)\n> Sort Key: album.id\n> -> Nested Loop (cost=513549.06..737866.68 rows=1977360 \n> width=4)\n> Join Filter: \n> (levenshtein(\"substring\"((\"inner\".name)::text, 1, 75), \n> \"substring\"(\"outer\".file, 1, 75)) <= 10)\n> -> Index Scan using node_dir on node \n> (cost=0.00..3.22 rows=16 width=40)\n> Index Cond: (dir = 2811)\n> -> Materialize (cost=513549.06..520153.61 \n> rows=370755 width=25)\n> -> Hash Join (cost=271464.72..510281.31 \n> rows=370755 width=25)\n> Hash Cond: (\"outer\".id = \"inner\".track)\n> -> Seq Scan on track \n> (cost=0.00..127872.69 rows=5111469 \n> width=25)\n> -> Hash (cost=268726.83..268726.83 \n> rows=370755 width=8)\n> -> Hash Join \n> (cost=150840.51..268726.83 \n> rows=370755 width=8)\n> Hash Cond: (\"outer\".album = \n> \"inner\".id)\n> -> Seq Scan on albumjoin \n> (cost=0.00..88642.18 \n> rows=5107318 width=8)\n> -> Hash \n> (cost=150763.24..150763.24 \n> rows=30908 width=8)\n> -> Hash Join \n> (cost=127951.57..150763.24 rows=30908 width=8)\n> Hash Cond: \n> (\"outer\".id = \n> \"inner\".album)\n> -> Seq Scan on \n> album \n> (cost=0.00..12922.72 rows=425772 width=4)\n> -> Hash \n> (cost=127874.30..127874.30 rows=30908 width=4)\n> -> \n> HashAggregate (cost=126947.06..127565.22 rows=30908 width=4)\n> Filter: ((count(*) >= 15) AND (count(*) <= 25))\n> -> \n> Seq \n> Scan \n> on \n> albumjoin (cost=0.00..88642.18 rows=5107318 width=4)\n> \n> \n> I've tried adding a length(public.track.name) index and filtering \n> public.track to those rows where length(name) is within a few characters \n> of node.file, but that actually makes the plan more expensive.\n> \n> Is there any hope 
to make things much cheaper? Unfortunately, I can't \n> filter out anything from the album or albumjoin tables.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 28 Sep 2006 19:09:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "You have 2 seqscans on albumjoin table, you first make a simple join:\n\n...and albumjoin.album = public.album.id ...\n\nthat generates the first\n-> Seq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=8)\nand then you group values from same table counting them with\n\n... (select album from albumjoin group by album having count(*) between \n15 and 25) ...\n\nthat generates the second\n\nSeq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=4)\n\ngiven the complexity of the query, maybe you could create an \nintermediate table with only one seqscan and use that one in final query \nbut I don't know if that's possible with the db structure you have\n\n\nCan I ask what exactly is albumjoin table? is it a n-n relation?\n\n>\n> explain select\n> distinct public.album.id\n> from\n> public.album,public.albumjoin,public.track,umdb.node\n> where\n> node.dir=2811\n> and albumjoin.album = public.album.id\n> and public.albumjoin.track = public.track.id\n> and levenshtein(substring(public.track.name for 75),\n> substring(node.file for 75)) <= 10\n> and public.album.id in\n> (select album from albumjoin group by album having count(*) \n> between 15 and 25) group by public.album.id\n> having count(*) >= 5;\n>\n>\n> Unique (cost=991430.53..1013711.74 rows=425772 width=4)\n> -> GroupAggregate (cost=991430.53..1012647.31 rows=425772 width=4)\n> Filter: (count(*) >= 5)\n> -> Sort (cost=991430.53..996373.93 rows=1977360 width=4)\n> Sort Key: album.id\n> -> Nested Loop (cost=513549.06..737866.68 \n> rows=1977360 width=4)\n> Join Filter: \n> (levenshtein(\"substring\"((\"inner\".name)::text, 1, 75), \n> \"substring\"(\"outer\".file, 1, 75)) <= 10)\n> -> Index Scan using node_dir on node \n> (cost=0.00..3.22 rows=16 width=40)\n> Index Cond: (dir = 2811)\n> -> Materialize (cost=513549.06..520153.61 \n> rows=370755 width=25)\n> -> Hash Join (cost=271464.72..510281.31 \n> rows=370755 width=25)\n> Hash Cond: (\"outer\".id = \"inner\".track)\n> -> Seq Scan on track \n> (cost=0.00..127872.69 rows=5111469 width=25)\n> -> Hash (cost=268726.83..268726.83 \n> rows=370755 width=8)\n> -> Hash Join \n> (cost=150840.51..268726.83 rows=370755 width=8)\n> Hash Cond: (\"outer\".album \n> = \"inner\".id)\n> -> Seq Scan on \n> albumjoin (cost=0.00..88642.18 rows=5107318 width=8)\n> -> Hash \n> (cost=150763.24..150763.24 rows=30908 width=8)\n> -> Hash Join \n> (cost=127951.57..150763.24 rows=30908 width=8)\n> Hash Cond: \n> (\"outer\".id = \"inner\".album)\n> -> Seq Scan \n> on album (cost=0.00..12922.72 rows=425772 width=4)\n> -> Hash \n> (cost=127874.30..127874.30 rows=30908 width=4)\n> -> \n> HashAggregate (cost=126947.06..127565.22 rows=30908 width=4)\n> \n> Filter: ((count(*) >= 15) AND (count(*) <= 25))\n> \n> -> Seq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=4)\n>\n>\n> I've tried adding a length(public.track.name) index and filtering \n> public.track to those rows where length(name) is within a few \n> characters of node.file, but that actually makes the plan more expensive.\n>\n> Is there any hope to make things much cheaper? Unfortunately, I can't \n> filter out anything from the album or albumjoin tables.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Fri, 29 Sep 2006 11:19:19 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "On Thu, 2006-09-28 at 15:18 -0700, Ben wrote:\n\n> \tdistinct public.album.id\n\n> group by public.album.id\n\nYou can remove the distinct clause for starters...\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 29 Sep 2006 12:01:23 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "There's no join criteria for umdb.node... is that really what you want?\n\nOn Thu, Sep 28, 2006 at 03:18:56PM -0700, Ben wrote:\n> Hey guys, I've got a query that is inherently expensive, because it has to \n> do some joins against some large tables. But it's currently *very* \n> expensive (at least for a web app), and I've been struggling in vain all \n> day to knock the cost down. Annoyingly, the least costly version I've come \n> up with remains my first attempt, and is the most straight-forward:\n> \n> explain select\n> \tdistinct public.album.id\n> from\n> \tpublic.album,public.albumjoin,public.track,umdb.node\n> where\n> \tnode.dir=2811\n> \tand albumjoin.album = public.album.id\n> \tand public.albumjoin.track = public.track.id\n> \tand levenshtein(substring(public.track.name for 75),\n> \t\tsubstring(node.file for 75)) <= 10\n> \tand public.album.id in\n> \t\t(select album from albumjoin group by album having count(*) \n> \t\tbetween 15 and 25) group by public.album.id\n> having count(*) >= 5;\n> \n> \n> Unique (cost=991430.53..1013711.74 rows=425772 width=4)\n> -> GroupAggregate (cost=991430.53..1012647.31 rows=425772 width=4)\n> Filter: (count(*) >= 5)\n> -> Sort (cost=991430.53..996373.93 rows=1977360 width=4)\n> Sort Key: album.id\n> -> Nested Loop (cost=513549.06..737866.68 rows=1977360 \n> width=4)\n> Join Filter: \n> (levenshtein(\"substring\"((\"inner\".name)::text, 1, 75), \n> \"substring\"(\"outer\".file, 1, 75)) <= 10)\n> -> Index Scan using node_dir on node \n> (cost=0.00..3.22 rows=16 width=40)\n> Index Cond: (dir = 2811)\n> -> Materialize (cost=513549.06..520153.61 \n> rows=370755 width=25)\n> -> Hash Join (cost=271464.72..510281.31 \n> rows=370755 width=25)\n> Hash Cond: (\"outer\".id = \"inner\".track)\n> -> Seq Scan on track \n> (cost=0.00..127872.69 rows=5111469 \n> width=25)\n> -> Hash (cost=268726.83..268726.83 \n> rows=370755 width=8)\n> -> Hash Join \n> (cost=150840.51..268726.83 \n> rows=370755 width=8)\n> Hash Cond: (\"outer\".album = \n> \"inner\".id)\n> -> Seq Scan on albumjoin \n> (cost=0.00..88642.18 \n> rows=5107318 width=8)\n> -> Hash \n> (cost=150763.24..150763.24 \n> rows=30908 width=8)\n> -> Hash Join \n> (cost=127951.57..150763.24 rows=30908 width=8)\n> Hash Cond: \n> (\"outer\".id = \n> \"inner\".album)\n> -> Seq Scan on \n> album \n> (cost=0.00..12922.72 rows=425772 width=4)\n> -> Hash \n> (cost=127874.30..127874.30 rows=30908 width=4)\n> -> \n> HashAggregate (cost=126947.06..127565.22 rows=30908 width=4)\n> Filter: ((count(*) >= 15) AND (count(*) <= 25))\n> -> \n> Seq \n> Scan \n> on \n> albumjoin (cost=0.00..88642.18 rows=5107318 width=4)\n> \n> \n> I've tried adding a length(public.track.name) index and filtering \n> public.track to those rows where length(name) is within a few characters \n> of node.file, but that actually makes the plan more expensive.\n> \n> Is there any hope to make things much cheaper? Unfortunately, I can't \n> filter out anything from the album or albumjoin tables.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 29 Sep 2006 10:35:57 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "On Thursday 28 September 2006 17:18, Ben wrote:\n\n> explain select\n> \tdistinct public.album.id\n> from\n> \tpublic.album,public.albumjoin,public.track,umdb.node\n> where\n> \tnode.dir=2811\n> \tand albumjoin.album = public.album.id\n> \tand public.albumjoin.track = public.track.id\n> \tand levenshtein(substring(public.track.name for 75),\n> \t\tsubstring(node.file for 75)) <= 10\n> \tand public.album.id in\n> \t\t(select album from albumjoin group by album having count(*) between \n15 and 25) \n> group by public.album.id\n> having count(*) >= 5;\n\nIf I'm reading this right, you want all the albums with 15-25 entries in \nalbum join having 5 or more tracks that are (soundex type) similar to \nother nodes. Knowing that, you can also try something like this:\n\nselect a.album\n from (select album,track from albumjoin group by album having count(1) \nbetween 15 and 25) a\n join public.track t on (a.track = t.id)\n join umdb.node n on (levenshtein(substring(t.name for 75), \nsubstring(n.file for 75)) < 9)\n where n.dir = 2811\n group by a.album\n having count(1) > 4;\n\nThis removes two of your tables, since you were only interested in \nalbums with 15-25 albumjoins, and weren't actually using any album data \nother than the ID, which albumjoin supplies. Your subselect is now an \nintegral part of the whole query, being treated like a temp table that \nonly supplies album IDs with 15-25 albumjoins. From there, add on the \ntrack information, and use that to restrict the matching nodes. Your \nexplain should be better with the above.\n\nJust remember with the levenshtein in there, you're forcing a sequence \nscan on the node table. Depending on how big that table is, you may \nnot be able to truly optimize this.\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n",
"msg_date": "Mon, 2 Oct 2006 10:32:35 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "On Fri, 29 Sep 2006, Jim C. Nasby wrote:\n\n> There's no join criteria for umdb.node... is that really what you want?\n>\n\nUnfortunately, yes, it is.\n\nI've taken in all of everybody's helpful advice (thanks!) and reworked \nthings a little, and now I'm left with this expensive nugget:\n\nselect aj.album from\n(select seconds-1 as a,seconds+1 as b from node where node.dir = 6223) n\njoin public.track t\non (t.length between n.a*1000 and n.b*1000)\njoin public.albumjoin aj\non (aj.track = t.id)\njoin (select id from public.albummeta am where tracks between 3 and 7) lam\non (lam.id = aj.album)\ngroup by aj.album having count(*) >= 4;\n\n...which comes out to be:\n\n HashAggregate (cost=904444.69..904909.99 rows=31020 width=4)\n Filter: (count(*) >= 4)\n -> Nested Loop (cost=428434.81..897905.17 rows=1307904 width=4)\n Join Filter: ((\"inner\".length >= ((\"outer\".seconds - 1) * 1000)) AND (\"inner\".length <= ((\"outer\".seconds + 1) * 1000)))\n -> Index Scan using node_dir on node (cost=0.00..3.46 rows=17 width=4)\n Index Cond: (dir = 6223)\n -> Materialize (cost=428434.81..438740.01 rows=692420 width=8)\n -> Hash Join (cost=210370.58..424361.39 rows=692420 width=8)\n Hash Cond: (\"outer\".id = \"inner\".track)\n -> Seq Scan on track t (cost=0.00..128028.41 rows=5123841 width=8)\n -> Hash (cost=205258.53..205258.53 rows=692420 width=8)\n -> Hash Join (cost=6939.10..205258.53 rows=692420 width=8)\n Hash Cond: (\"outer\".album = \"inner\".id)\n -> Seq Scan on albumjoin aj (cost=0.00..88918.41 rows=5123841 width=8)\n -> Hash (cost=6794.51..6794.51 rows=57834 width=4)\n -> Bitmap Heap Scan on albummeta am (cost=557.00..6794.51 rows=57834 width=4)\n Recheck Cond: ((tracks >= 3) AND (tracks <= 7))\n -> Bitmap Index Scan on albummeta_tracks_index (cost=0.00..557.00 rows=57834 width=0)\n Index Cond: ((tracks >= 3) AND (tracks <= 7))\n(19 rows)\n\n\nI'm surprised (though probably just because I'm ignorant) that it would \nhave so much sequential scanning in there. For instance, because n is \ngoing to have at most a couple dozen rows, it seems that instead of \nscanning all of public.track, it should be able to convert my \"t.length \nbetween a and b\" clause to some between statements or'd together. Or at \nleast, it would be nice if the planner could do that. :)\n\n",
"msg_date": "Wed, 4 Oct 2006 14:40:47 -0700 (PDT)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: any hope for my big query?"
},
{
"msg_contents": "On Oct 4, 2006, at 4:40 PM, Ben wrote:\n> I'm surprised (though probably just because I'm ignorant) that it \n> would have so much sequential scanning in there. For instance, \n> because n is going to have at most a couple dozen rows, it seems \n> that instead of scanning all of public.track, it should be able to \n> convert my \"t.length between a and b\" clause to some between \n> statements or'd together. Or at least, it would be nice if the \n> planner could do that.\n\nThat would require the planner having that knowledge at plan-time, \nwhich it can't without actually querying the database. One thing that \nmight work wonders is performing the n query ahead of time and then \nsticking it in an array... that might speed things up.\n\nWorst case, you could run the n query, and then run the rest of the \nquery for each row of n you get back.\n\nBetter yet... send us a patch that allows the planner to look into \nwhat a subselect will return to us. ;)\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Thu, 5 Oct 2006 23:02:57 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any hope for my big query?"
}
] |
[
{
"msg_contents": "Hi,\nI am working on datamigration from older version of informix to postgres 8.1\n\nI need to increase performance on postgres, since informix(older version,older\nhardware, little bigger DB data)\nis 4-5 times faster than postgres8.1 (new hardware, less DB data)\n\nMy readings from Internet lead to me below configs but not making faster. I am \ndoing this first time and hoped to get help from forum here.\n\nI(we) am running 4GB ram running FC5(64bit), postgresql 8.1\n\nMy configs are\n-------------------\nkernel.shmmax = 1048470784\nkernel.shmall = 16382356\n-------------------\nshared_buffers = 32768\nwork_mem = 16384\neffective_cache_size = 200000\nrandom_page_cost = 3\n-------------------\n\nIf I run the query below with informix, it gives cost=107.\nwith postgres with additional indexes it gives cost=407, before the additional\nindexes it was even much slower\n------------------------------------------------------\ndevelopment=# explain SELECT count (distinct invC.inv_id) as cnt FROM\ninv_categories invC, inv_milestones invM, milestoneDef mDef, inv_milestones\ninvM2, milestoneDef mDef2 WHERE category_id = 1 AND invC.inv_id = invM.inv_id\nAND mDef.id = invM.milestone_id AND mDef2.id = invM2.milestone_id AND\ninvM2.inv_id = invC.inv_id AND (mDef.description LIKE '7020%' OR\nmDef.description LIKE '7520%') AND invM.dateDue <= CURRENT_DATE AND\n(mDef2.description LIKE '7021%' OR mDef2.description LIKE '7521%') AND\ninvM2.dateDue >= CURRENT_DATE;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=407.37..407.38 rows=1 width=4)\n -> Nested Loop (cost=2.06..407.37 rows=1 width=4)\n -> Nested Loop (cost=2.06..398.21 rows=3 width=8)\n -> Nested Loop (cost=2.06..379.57 rows=1 width=8)\n -> Nested Loop (cost=2.06..367.36 rows=4 width=12)\n -> Bitmap Heap Scan on inv_categories invc \n(cost=2.06..32.29 rows=18 width=4)\n Recheck Cond: (category_id = 1)\n -> Bitmap Index Scan on az_test2 \n(cost=0.00..2.06 rows=18 width=0)\n Index Cond: (category_id = 1)\n -> Index Scan using az_invm_invid on inv_milestones\ninvm2 (cost=0.00..18.60 rows=1 width=8)\n Index Cond: (invm2.inv_id = \"outer\".inv_id)\n Filter: (datedue >= ('now'::text)::date)\n -> Index Scan using milestonedef_pkey on milestonedef\nmdef2 (cost=0.00..3.04 rows=1 width=4)\n Index Cond: (mdef2.id = \"outer\".milestone_id)\n Filter: ((description ~~ '7021%'::citext) OR\n(description ~~ '7521%'::citext))\n -> Index Scan using az_invm_invid on inv_milestones invm \n(cost=0.00..18.60 rows=3 width=8)\n Index Cond: (\"outer\".inv_id = invm.inv_id)\n Filter: (datedue <= ('now'::text)::date)\n -> Index Scan using milestonedef_pkey on milestonedef mdef \n(cost=0.00..3.04 rows=1 width=4)\n Index Cond: (mdef.id = \"outer\".milestone_id)\n Filter: ((description ~~ '7020%'::citext) OR (description ~~\n'7520%'::citext))\n(21 rows)\n\n------------------------------------------------------\n\nThanks for help.\n\n-------------------------------------------------\nThis mail sent through IMP: www.resolution.com\n",
"msg_date": "Fri, 29 Sep 2006 13:12:45 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "how to optimize postgres 8.1"
},
{
"msg_contents": "[email protected] writes:\n> I need to increase performance on postgres,\n\nWell, for starters, have you ANALYZEd your tables? That EXPLAIN output\nlooks suspiciously like default estimates. Then post EXPLAIN ANALYZE\n(not just EXPLAIN) results for your problem query.\n\n> If I run the query below with informix, it gives cost=107.\n> with postgres with additional indexes it gives cost=407,\n\nThat comparison is meaningless --- I know of no reason to think that\ninformix measures cost estimates on the same scale we do. It'd be\ninteresting to see what query plan they use, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 14:07:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to optimize postgres 8.1 "
},
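A minimal sketch of the two steps Tom asks for, using the table names from the query above (the SELECT is abbreviated to the core join, not the full five-way query):

-- Refresh planner statistics (per table, or plain ANALYZE for the whole database):
ANALYZE inv_categories;
ANALYZE inv_milestones;
ANALYZE milestonedef;

-- Then capture real timings and row counts, not just estimates:
EXPLAIN ANALYZE
SELECT count(distinct invC.inv_id) AS cnt
FROM inv_categories invC
JOIN inv_milestones invM ON invM.inv_id = invC.inv_id
JOIN milestonedef mDef ON mDef.id = invM.milestone_id
WHERE invC.category_id = 1;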
{
"msg_contents": "Hi, Gurkan,\n\[email protected] wrote:\n\n> If I run the query below with informix, it gives cost=107.\n> with postgres with additional indexes it gives cost=407, before the additional\n> indexes it was even much slower\n\nWhat are your real timing measurements, in a produciton-like setup in a\nproduction-like load? That's the only kind of \"benchmarking\" that will\ngive you an useful comparison.\n\nYou cannot compare anything else.\n\nEspecially, you cannot compare those \"artificial\" cost estimator values,\nas they are likely to be defined differently for PostgreSQL and Informix.\n\nFor PostgreSQL, they are relative values to the cost of reading a page\nas part of a sequential scan. And those values are tunable - fiddle with\nthe random_page_cost and cpu_*_cost values in the postgresql.conf, and\nyou will see very different values compared to the 407 you see now, even\nif the query plan stays equal.\n\nDo you look up the definition of cost for Informix? Have you made shure\nthat they're comparable?\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Sat, 30 Sep 2006 11:50:33 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to optimize postgres 8.1"
}
] |
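A sketch of the kind of experiment Markus is pointing at: the same plan can carry a very different cost figure once the cost parameters change, which is why the bare number cannot be compared across systems. The SELECT is again an abbreviated form of the query in this thread.

-- Session-local changes only; RESET puts the configured defaults back.
SET random_page_cost = 2;
SET cpu_tuple_cost = 0.02;

EXPLAIN
SELECT count(distinct invC.inv_id) AS cnt
FROM inv_categories invC
JOIN inv_milestones invM ON invM.inv_id = invC.inv_id
WHERE invC.category_id = 1;

RESET random_page_cost;
RESET cpu_tuple_cost;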
[
{
"msg_contents": "I'm experiencing a problem with our postgres database. Queries that \nnormally take seconds suddenly start taking hours, if they complete at \nall. \n\nThis isn't a vacuuming or analyzing problem- I've been on this list long \nenough that they were my first response, and besides it doesn't happen \nwith just a single query. Turning off autovaccum (and switching to a \nweekend vaccuum) seems to have reduced the frequency of the problem, but \ndid not eliminate it. Besides, I've seen this problem with copy \nstatements, which shouldn't be that susceptable to problems with these.\n\nNor is it a problem with normal database locking- when it happens, I've \nbeen poking around in pg_locks, and nothing seems wrong (all locks have \nbeen granted, no obvious deadlocks). \n\nRestarting the database seems to help occassionally, but not always.\n\nThis is happening both in production, where the database is held on an \niscsi partition on an EMC, and in development, where the database is \nheld on a single 7200 RPM SATA drive. Both are Opteron-based HP 145 \nservers running Centos (aka Redhat) in 64-bit mode.\n\nWhat I'm looking for is pointers as to what to do next- what can I do to \ntrack the problem down. Any help would be appreciated. Thank you.\n\nThe output of pg_config:\n-bash-3.00$ /usr/local/pgsql/bin/pg_config\nBINDIR = /usr/local/pgsql/bin\nDOCDIR = /usr/local/pgsql/doc\nINCLUDEDIR = /usr/local/pgsql/include\nPKGINCLUDEDIR = /usr/local/pgsql/include\nINCLUDEDIR-SERVER = /usr/local/pgsql/include/server\nLIBDIR = /usr/local/pgsql/lib\nPKGLIBDIR = /usr/local/pgsql/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql/man\nSHAREDIR = /usr/local/pgsql/share\nSYSCONFDIR = /usr/local/pgsql/etc\nPGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--with-perl' '--with-python' '--with-openssl' '--with-pam' \n'--enable-thread-safety'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline \n-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,/usr/local/pgsql/lib\nLDFLAGS_SL =\nLIBS = -lpgport -lpam -lssl -lcrypto -lz -lreadline -ltermcap -lcrypt \n-lresolv -lnsl -ldl -lm -lbsd\nVERSION = PostgreSQL 8.1.4\n-bash-3.00$\n\n",
"msg_date": "Fri, 29 Sep 2006 15:24:14 -0400",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres locking up?"
},
{
"msg_contents": "On Fri, Sep 29, 2006 at 03:24:14PM -0400, Brian Hurt wrote:\n> I'm experiencing a problem with our postgres database. Queries that \n> normally take seconds suddenly start taking hours, if they complete at \n> all. \n\nThe first thing I'd do is EXPLAIN and EXPLAIN ANALYSE on the queries\nin question.\n\nThe next thing I'd look for is OS-level performance problems.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n",
"msg_date": "Fri, 29 Sep 2006 15:30:06 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres locking up?"
},
{
"msg_contents": "Brian Hurt <[email protected]> writes:\n> I'm experiencing a problem with our postgres database. Queries that \n> normally take seconds suddenly start taking hours, if they complete at \n> all. \n\nAre they waiting? Consuming CPU? Consuming I/O? top and vmstat will\nhelp you find out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 16:58:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres locking up? "
}
] |
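Alongside top and vmstat, pg_stat_activity shows the same picture from inside the database. A minimal sketch for 8.1 (current_query is only populated when stats_command_string is enabled):

SELECT procpid,
       usename,
       now() - query_start AS running_for,
       current_query
FROM pg_stat_activity
ORDER BY query_start;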
[
{
"msg_contents": "Hi,\n\nHow do I optimize postgres8.1?\n\nI have 'vacuum full analyze'\n\nI have posted output of 'explain analyze select ..'\n\nI have created some indexes\n\nI am running Mixed-Mode server,4GB ram running FC5(64bit), postgresql 8.1 AND\nMy configs are;(Are these good number?)\n-------------------\nkernel.shmmax = 1048470784\nkernel.shmall = 16382356\n-------------------\nshared_buffers = 32768\nwork_mem = 16384\neffective_cache_size = 200000\nrandom_page_cost = 3\n-------------------\n\n-----------------------------------------------------------------------------------\ndevelopment=# explain ANALYZE SELECT count (distinct invC.inv_id) as cnt FROM\ninv_categories invC, inv_milestones invM, milestoneDef mDef, inv_milestones\ninvM2, milestoneDef mDef2 WHERE category_id = 1 AND invC.inv_id = invM.inv_id\nAND mDef.id = invM.milestone_id AND mDef2.id = invM2.milestone_id AND\ninvM2.inv_id = invC.inv_id AND (mDef.description LIKE '7020%' OR\nmDef.description LIKE '7520%') AND invM.dateDue <= CURRENT_DATE AND\n(mDef2.description LIKE '7021%' OR mDef2.description LIKE '7521%') AND\ninvM2.dateDue >= CURRENT_DATE;\n \nQUERY PLAN\n-----------------------------------------------------------------------------------\n Aggregate (cost=499.93..499.94 rows=1 width=4) (actual time=8.152..8.154\nrows=1 loops=1)\n -> Nested Loop (cost=65.26..499.92 rows=1 width=4) (actual\ntime=1.762..8.065 rows=13 loops=1)\n -> Nested Loop (cost=65.26..487.75 rows=4 width=8) (actual\ntime=1.637..7.380 rows=38 loops=1)\n -> Nested Loop (cost=65.26..467.71 rows=1 width=8) (actual\ntime=1.614..5.732 rows=13 loops=1)\n -> Nested Loop (cost=65.26..455.53 rows=4 width=12)\n(actual time=1.557..5.427 rows=13 loops=1)\n -> Bitmap Heap Scan on inv_categories invc \n(cost=65.26..95.48 rows=18 width=4) (actual time=1.497..1.624 rows=44 loops=1)\n Recheck Cond: (category_id = 1)\n -> Bitmap Index Scan on az_invcat_ifx1 \n(cost=0.00..65.26 rows=18 width=0) (actual time=1.482..1.482 rows=44 loops=1)\n Index Cond: (category_id = 1)\n -> Index Scan using az_invm_invid on inv_milestones\ninvm2 (cost=0.00..19.99 rows=1 width=8) (actual time=0.069..0.080 rows=0 loops=44)\n Index Cond: (invm2.inv_id = \"outer\".inv_id)\n Filter: (datedue >= ('now'::text)::date)\n -> Index Scan using milestonedef_pkey on milestonedef\nmdef2 (cost=0.00..3.03 rows=1 width=4) (actual time=0.012..0.014 rows=1 loops=13)\n Index Cond: (mdef2.id = \"outer\".milestone_id)\n Filter: ((description ~~ '7021%'::citext) OR\n(description ~~ '7521%'::citext))\n -> Index Scan using az_invm_invid on inv_milestones invm \n(cost=0.00..19.99 rows=4 width=8) (actual time=0.023..0.110 rows=3 loops=13)\n Index Cond: (\"outer\".inv_id = invm.inv_id)\n Filter: (datedue <= ('now'::text)::date)\n -> Index Scan using milestonedef_pkey on milestonedef mdef \n(cost=0.00..3.03 rows=1 width=4) (actual time=0.011..0.012 rows=0 loops=38)\n Index Cond: (mdef.id = \"outer\".milestone_id)\n Filter: ((description ~~ '7020%'::citext) OR (description ~~\n'7520%'::citext))\n Total runtime: 8.466 ms\n(22 rows)\n-----------------------------------------------------------------------------------\n\nthanks for help.\n\n-------------------------------------------------\nThis mail sent through IMP: www.resolution.com\n",
"msg_date": "Fri, 29 Sep 2006 17:36:12 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "cont. how to optimize postgres 8.1 "
},
{
"msg_contents": "[email protected] writes:\n> How do I optimize postgres8.1?\n\n8 msec for a five-way join doesn't sound out of line to me. What were\nyou expecting?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2006 17:58:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cont. how to optimize postgres 8.1 "
}
] |
[
{
"msg_contents": "\nBrian Hurt <[email protected]> wrote:\n\n> I'm experiencing a problem with our postgres database. Queries that\n> normally take seconds suddenly start taking hours, if they complete at\n> all.\n\nDo you have any long running transactions? I have noticed that with Postgres\n8.1.x, a long running transaction combined with other transactions over a long\nenough time period can very predictably lead to this type of behavior\n\nOne simple way to see if you have any long running transactions is to look for\nPIDs that are \"idle in transaction\" for long periods of time.\n\nrobert\n\n",
"msg_date": "Fri, 29 Sep 2006 19:53:46 -0500",
"msg_from": "Robert Becker Cope <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres locking up?"
}
] |
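A minimal sketch of the check Robert suggests, again against 8.1's pg_stat_activity; query_start is the start of the backend's last statement, so the age shown is only approximate:

SELECT procpid,
       usename,
       now() - query_start AS idle_for
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;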
[
{
"msg_contents": "Hi,\n\n\nI am trying to vaccum one of the table using the following command:\n\nVACUUM FULL ANALYZE VERBOSE table_name;\n\n\n\nBut for some reason the table vaccuming is not going ahead. Can you guys let\nme know what the problem is.\n\nRegards,\nNimesh.\n\nHi,\n \n \nI am trying to vaccum one of the table using the following command:\n \nVACUUM FULL ANALYZE VERBOSE table_name;\n \n \n \nBut for some reason the table vaccuming is not going ahead. Can you guys let me know what the problem is.\n \nRegards,\nNimesh.",
"msg_date": "Sat, 30 Sep 2006 14:55:54 +0530",
"msg_from": "\"Nimesh Satam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table not getting vaccumed."
},
{
"msg_contents": "\"Nimesh Satam\" <[email protected]> writes:\n> I am trying to vaccum one of the table using the following command:\n> VACUUM FULL ANALYZE VERBOSE table_name;\n> But for some reason the table vaccuming is not going ahead.\n\nVACUUM FULL requires exclusive lock on the table, so it's probably\nwaiting for some open transaction that has a reader's or writer's\nlock on it. Look in pg_stat_activity and pg_locks to find out more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Sep 2006 12:45:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table not getting vaccumed. "
},
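A minimal sketch of the lookup Tom suggests: list every lock on the table, granted or not, and who holds it. 'table_name' is the same placeholder used in the original post.

SELECT l.pid,
       l.mode,
       l.granted,
       a.usename,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'table_name'::regclass;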
{
"msg_contents": "On Sat, Sep 30, 2006 at 02:55:54PM +0530, Nimesh Satam wrote:\n> I am trying to vaccum one of the table using the following command:\n> \n> VACUUM FULL ANALYZE VERBOSE table_name;\n\nAre you sure you want to do a vacuum full? Normally, that shouldn't be\nrequired.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 2 Oct 2006 14:08:32 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table not getting vaccumed."
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to determine if there is a way to improve the performance \nwhen selecting data from the information_schema.columns view.\n\nWe use data from this view to inform our application information on the \ncolumns on a table and is used when data is selected from a table.\n\nBelow is the output from EXPLAIN ANALYSE:\n\nsmf=> explain analyse select column_name, column_default, is_nullable, \ndata_type, character_maximum_length, numeric_precision, \nnumeric_precision_radix, \nsmf-> numeric_scale, udt_name from information_schema.columns where \ntable_name = 't_fph_tdrdw' order by ordinal_position;\n \nQUERY \nPLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5228.55..5228.64 rows=38 width=449) (actual \ntime=567.434..567.467 rows=47 loops=1)\n Sort Key: (a.attnum)::information_schema.cardinal_number\n -> Hash Join (cost=5071.47..5227.55 rows=38 width=449) (actual \ntime=547.207..567.113 rows=47 loops=1)\n Hash Cond: (\"outer\".oid = \"inner\".atttypid)\n -> Hash Left Join (cost=79.27..173.95 rows=1169 width=310) \n(actual time=8.036..17.515 rows=1170 loops=1)\n Hash Cond: (\"outer\".typbasetype = \"inner\".oid)\n Join Filter: (\"outer\".typtype = 'd'::\"char\")\n -> Hash Join (cost=1.06..75.29 rows=1169 width=176) \n(actual time=0.046..6.960 rows=1170 loops=1)\n Hash Cond: (\"outer\".typnamespace = \"inner\".oid)\n -> Seq Scan on pg_type t (cost=0.00..56.69 \nrows=1169 width=116) (actual time=0.006..3.868 rows=1170 loops=1)\n -> Hash (cost=1.05..1.05 rows=5 width=68) (actual \ntime=0.025..0.025 rows=5 loops=1)\n -> Seq Scan on pg_namespace nt \n(cost=0.00..1.05 rows=5 width=68) (actual time=0.003..0.013 rows=5 loops=1)\n -> Hash (cost=75.29..75.29 rows=1169 width=138) (actual \ntime=7.983..7.983 rows=1170 loops=1)\n -> Hash Join (cost=1.06..75.29 rows=1169 \nwidth=138) (actual time=0.036..5.620 rows=1170 loops=1)\n Hash Cond: (\"outer\".typnamespace = \"inner\".oid)\n -> Seq Scan on pg_type bt (cost=0.00..56.69 \nrows=1169 width=78) (actual time=0.003..2.493 rows=1170 loops=1)\n -> Hash (cost=1.05..1.05 rows=5 width=68) \n(actual time=0.022..0.022 rows=5 loops=1)\n -> Seq Scan on pg_namespace nbt \n(cost=0.00..1.05 rows=5 width=68) (actual time=0.003..0.012 rows=5 loops=1)\n -> Hash (cost=4992.11..4992.11 rows=38 width=143) (actual \ntime=536.532..536.532 rows=47 loops=1)\n -> Merge Join (cost=4722.45..4992.11 rows=38 width=143) \n(actual time=535.940..536.287 rows=47 loops=1)\n Merge Cond: (\"outer\".attrelid = \"inner\".oid)\n -> Merge Left Join (cost=4527.17..4730.67 \nrows=26238 width=143) (actual time=481.392..520.627 rows=10508 loops=1)\n Merge Cond: ((\"outer\".attrelid = \n\"inner\".adrelid) AND (\"outer\".attnum = \"inner\".adnum))\n -> Sort (cost=4471.90..4537.50 rows=26238 \nwidth=107) (actual time=481.345..497.647 rows=10508 loops=1)\n Sort Key: a.attrelid, a.attnum\n -> Seq Scan on pg_attribute a \n(cost=0.00..1474.20 rows=26238 width=107) (actual time=0.007..92.444 \nrows=26792 loops=1)\n Filter: ((attnum > 0) AND (NOT \nattisdropped))\n -> Sort (cost=55.27..57.22 rows=780 \nwidth=38) (actual time=0.035..0.035 rows=0 loops=1)\n Sort Key: ad.adrelid, ad.adnum\n -> Seq 
Scan on pg_attrdef ad \n(cost=0.00..17.80 rows=780 width=38) (actual time=0.003..0.003 rows=0 \nloops=1)\n -> Sort (cost=195.27..195.28 rows=3 width=8) \n(actual time=3.900..3.938 rows=1 loops=1)\n Sort Key: c.oid\n -> Hash Join (cost=194.12..195.25 rows=3 \nwidth=8) (actual time=3.889..3.892 rows=1 loops=1)\n Hash Cond: (\"outer\".oid = \n\"inner\".relnamespace)\n -> Seq Scan on pg_namespace nc \n(cost=0.00..1.05 rows=5 width=4) (actual time=0.007..0.016 rows=5 loops=1)\n -> Hash (cost=194.11..194.11 rows=3 \nwidth=12) (actual time=3.826..3.826 rows=1 loops=1)\n -> Seq Scan on pg_class c \n(cost=0.00..194.11 rows=3 width=12) (actual time=2.504..3.818 rows=1 \nloops=1)\n Filter: (((relkind = \n'r'::\"char\") OR (relkind = 'v'::\"char\")) AND (pg_has_role(relowner, \n'MEMBER'::text) OR has_table_privilege(oid, 'SELECT'::text) OR \nhas_table_privilege(oid, 'INSERT'::text) OR has_table_privilege(oid, \n'UPDATE'::text) OR has_table_privilege(oid, 'REFERENCES'::text)) AND \n(((relname)::information_schema.sql_identifier)::text = \n't_fph_tdrdw'::text))\n Total runtime: 568.211 ms\n(39 rows)\n\nsmf=>\n\n\nIf I create a table from this view \"create table \nmy_information_schema_columns as select * from \ninformation_schema.columns;\", naturally the performance is greatly improved.\n\nsmf=> explain analyse select column_name, column_default, is_nullable, \ndata_type, character_maximum_length, numeric_precision, \nnumeric_precision_radix,\nsmf-> numeric_scale, udt_name from my_information_schema_columns where \ntable_name = 't_fph_tdrdw' order by ordinal_position;\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=605.75..605.81 rows=24 width=180) (actual \ntime=39.878..39.914 rows=47 loops=1)\n Sort Key: ordinal_position\n -> Seq Scan on my_information_schema_columns (cost=0.00..605.20 \nrows=24 width=180) (actual time=16.280..39.651 rows=47 loops=1)\n Filter: ((table_name)::text = 't_fph_tdrdw'::text)\n Total runtime: 40.049 ms\n(5 rows)\n\nsmf=>\n\nAnd if I add a index \"create index my_information_schema_columns_index \non my_information_schema_columns (table_name);\" , it is improved even more.\n\nsmf=> explain analyse select column_name, column_default, is_nullable, \ndata_type, character_maximum_length, numeric_precision, \nnumeric_precision_radix,\nsmf-> numeric_scale, udt_name from my_information_schema_columns where \ntable_name = 't_fph_tdrdw' order by \nordinal_position; \n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=294.18..294.48 rows=119 width=180) (actual \ntime=0.520..0.558 rows=47 loops=1)\n Sort Key: ordinal_position\n -> Bitmap Heap Scan on my_information_schema_columns \n(cost=2.42..290.08 rows=119 width=180) (actual time=0.169..0.296 rows=47 \nloops=1)\n Recheck Cond: ((table_name)::text = 't_fph_tdrdw'::text)\n -> Bitmap Index Scan on my_information_schema_columns_index \n(cost=0.00..2.42 rows=119 width=0) (actual time=0.149..0.149 rows=47 \nloops=1)\n Index Cond: ((table_name)::text = 't_fph_tdrdw'::text)\n Total runtime: 0.691 ms\n(7 rows)\n\nsmf=>\n\nIf a table is created from the information_schema.columns view, then we \nhave the problem of keeping the table up to date.\n\nAny hints, rtfm's (locations please), where to look, etc, will be \nappreciated.\n\nRegards\nSteve Martin\n\n-- \n \\\\|// From near to far,\n @ @ from 
here to there,\n ---oOOo-(_)-oOOo--- funny things are everywhere. (Dr. Seuss)\n\n\n\n",
"msg_date": "Mon, 02 Oct 2006 15:26:52 +1300",
"msg_from": "Steve Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "selecting data from information_schema.columns performance."
},
{
"msg_contents": "Steve Martin <[email protected]> writes:\n> I am trying to determine if there is a way to improve the performance \n> when selecting data from the information_schema.columns view.\n\nIn my experience, there isn't any single one of the information_schema\nviews whose performance doesn't suck :-(. Somebody should work on that\nsometime. I haven't looked closely enough to determine where the\nbottlenecks are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 01 Oct 2006 23:01:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: selecting data from information_schema.columns performance. "
},
{
"msg_contents": "On Sun, Oct 01, 2006 at 11:01:19PM -0400, Tom Lane wrote:\n> Steve Martin <[email protected]> writes:\n> > I am trying to determine if there is a way to improve the performance \n> > when selecting data from the information_schema.columns view.\n> \n> In my experience, there isn't any single one of the information_schema\n> views whose performance doesn't suck :-(. Somebody should work on that\n> sometime. I haven't looked closely enough to determine where the\n> bottlenecks are.\n\nLooking at the newsysviews stuff should prove enlightening... \nAndrewSN spent a lot of time making sure those views are very\nperformant.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 2 Oct 2006 14:11:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: selecting data from information_schema.columns performance."
},
{
"msg_contents": "Hi\n\nThanks for you replies.\n\nRegarding, newsysviews, what is the current state, I have had a quick \nlook at the pgFoundry site and the last updates were 9 months ago.\n\nThe most efficient way in the short term I can find to improve \nperformance for our application is to create a table from \ninformation_schema.columns and update it when tables a created or \ndeleted, or columns added or removed. E.g.\n\n=> create table my_information_schema_columns as select * from \ninformation_schema.columns;\n=> create index my_information_schema_columns_index \non my_information_schema_columns (table_name);\n\nUpdate table with the following statements:\n \nWhen tables or columns are added:\n=> insert into my_information_schema_columns select * from \ninformation_schema.columns\n-> except select * from my_information_schema_columns;\n\nWhen tables are removed, does not work for column changes:\n=> delete from my_information_schema_columns\n-> where table_name = (select table_name from my_information_schema_columns\n-> except select table_name from information_schema.columns);\n\nFor column changes a script will need to be created, the following \nreturns the rows to be deleted. (Any alternative ideas?)\n=> select table_name, column_name, ordinal_position from \nmy_information_schema_columns\n-> except select table_name, column_name, ordinal_position from \ninformation_schema.columns;\n\n\nMy problem now is how to get the update statements to be executed when a \ntable is created or dropped, or columns are added or removed. For our \napplication, this is not very often. My understanding is that triggers \ncannot be created for system tables, therefore the updates cannot be \ntriggered when pg_tables is modified. Also how to detect column changes \nis a problem.\n\nDetecting when a table has been added is relatively easy and can be \nperformed by our application, e.g. check my_information_schema_columns, \nif it does not exist, check information_schema.columns, if exist, run \nupdate statements.\n\nA simple method would be to run a cron job to do the updates, but I \nwould like to try to be a bit more intelligent about when the update \nstatements are executed.\n\nRegards\nSteve Martin\n\n\nJim C. Nasby wrote:\n\n>On Sun, Oct 01, 2006 at 11:01:19PM -0400, Tom Lane wrote:\n> \n>\n>>Steve Martin <[email protected]> writes:\n>> \n>>\n>>>I am trying to determine if there is a way to improve the performance \n>>>when selecting data from the information_schema.columns view.\n>>> \n>>>\n>>In my experience, there isn't any single one of the information_schema\n>>views whose performance doesn't suck :-(. Somebody should work on that\n>>sometime. I haven't looked closely enough to determine where the\n>>bottlenecks are.\n>> \n>>\n>\n>Looking at the newsysviews stuff should prove enlightening... \n>AndrewSN spent a lot of time making sure those views are very\n>performant.\n> \n>\n\n-- \n \\\\|// From near to far,\n @ @ from here to there,\n ---oOOo-(_)-oOOo--- funny things are everywhere. (Dr. Seuss)\n\n\n",
"msg_date": "Tue, 03 Oct 2006 12:31:13 +1300",
"msg_from": "Steve Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: selecting data from information_schema.columns"
},
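One simple way to package the refresh Steve describes for a cron job is to rebuild the cache table in one transaction instead of diffing it. A sketch, using the names from his mails; whether a full rebuild is cheap enough depends on catalog size:

CREATE OR REPLACE FUNCTION refresh_my_information_schema_columns()
RETURNS void AS $$
  DELETE FROM my_information_schema_columns;
  INSERT INTO my_information_schema_columns
  SELECT * FROM information_schema.columns;
$$ LANGUAGE sql;

-- invoked from cron, e.g.:
--   psql -d smf -c "SELECT refresh_my_information_schema_columns();"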
{
"msg_contents": "On Oct 2, 2006, at 7:31 PM, Steve Martin wrote:\n> Regarding, newsysviews, what is the current state, I have had a \n> quick look at the pgFoundry site and the last updates were 9 months \n> ago.\n\nWell, the system catalogs don't change terribly often, so it's not \nlike a lot needs to be done. We'd hoped to get them into core, but \nthat didn't pan out. Theoretically, we should be making the views \nlook more like information_schema, but no one's gotten to it yet.\n\n> The most efficient way in the short term I can find to improve \n> performance for our application is to create a table from \n> information_schema.columns and update it when tables a created or \n> deleted, or columns added or removed. E.g.\n\nWell, there's nothing that says you have to use information_schema. \nYou can always query the catalog tables directly. Even if you don't \nwant to use newsysviews as-is, the code there should be very helpful \nfor doing that.\n\nThere is no ability to put triggers on DDL, so the best you could do \nwith your caching table is to just periodically update it.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Mon, 2 Oct 2006 21:49:49 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: selecting data from information_schema.columns"
}
] |
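A rough sketch of what "query the catalog tables directly" can look like for the column lookup earlier in this thread. It returns the raw catalog type names rather than information_schema's SQL-standard ones, and it leaves out column defaults (pg_attrdef), so it is an approximation rather than a drop-in replacement:

SELECT a.attname                             AS column_name,
       format_type(a.atttypid, a.atttypmod)  AS data_type,
       NOT a.attnotnull                      AS is_nullable,
       a.attnum                              AS ordinal_position
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 't_fph_tdrdw'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;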
[
{
"msg_contents": "Hi, to all!\n\nRecently i try increasing the memory values of shared buffers on one\nIBM xseries 255 (Quad XEON 2.8, 8 GB RAM, 2 disk SCSI 36 GB(Raid 1), 1\nStorage.\n\nI try change these shared memory values to use 25% of memory ram (2048\nMB) and effective_cache_size to 50% (4096 MB) of memory. All this\nsettings to 220 Max Connections.\n\nWhere I start up the cluster very messages of configurations errors on\nshared_memmory and SHMMAX look up. I try change the values of\nshared_memory, max_connections and effective_cache_size and large the\nsize of SHMALL and SHMMAX to use 4294967296 (4096 MB) but the cluster\ndon't start.\n\nOnly with 15% of value on shared memory i can start up this cluster.\nIn my tests the maximum value who i can put is 1.9 GB, more of this\nthe cluster don't start.\n\nCan anybody help me and explicate if exist one limit to memory on 32\nbits Architecture.\n\nAnybody was experience with tuning servers with this configurations\nand increasing ?\n\nthanks to all.\n\n\n\n-- \nMarcelo Costa\n",
"msg_date": "Mon, 2 Oct 2006 12:49:55 -0300",
"msg_from": "\"Marcelo Costa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How much memory in 32 bits Architecture to Shared Buffers is Possible"
},
{
"msg_contents": "Marcelo Costa wrote:\n> Hi, to all!\n> \n> Recently i try increasing the memory values of shared buffers on one\n> IBM xseries 255 (Quad XEON 2.8, 8 GB RAM, 2 disk SCSI 36 GB(Raid 1), 1\n> Storage.\n\nYou haven't specified your OS so I am going to assume Linux.\n\n> Where I start up the cluster very messages of configurations errors on\n> shared_memmory and SHMMAX look up. I try change the values of\n> shared_memory, max_connections and effective_cache_size and large the\n> size of SHMALL and SHMMAX to use 4294967296 (4096 MB) but the cluster\n> don't start.\n\nYou have to edit your sysctl.conf see:\n\nhttp://www.postgresql.org/docs/8.1/static/runtime.html\n\nI *think* (I would have to double check) the limit for shared memory on\nlinux 32bit is 2 gig. Possibly 2 gig per CPU I don't recall. I run all\n64bit now.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 02 Oct 2006 09:26:07 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How much memory in 32 bits Architecture to Shared Buffers"
},
{
"msg_contents": "Yes, my system is DEBIAN SARGE 3.0\n\nthanks,\n\nMarcelo\n\n2006/10/2, Joshua D. Drake <[email protected]>:\n>\n> Marcelo Costa wrote:\n> > Hi, to all!\n> >\n> > Recently i try increasing the memory values of shared buffers on one\n> > IBM xseries 255 (Quad XEON 2.8, 8 GB RAM, 2 disk SCSI 36 GB(Raid 1), 1\n> > Storage.\n>\n> You haven't specified your OS so I am going to assume Linux.\n>\n> > Where I start up the cluster very messages of configurations errors on\n> > shared_memmory and SHMMAX look up. I try change the values of\n> > shared_memory, max_connections and effective_cache_size and large the\n> > size of SHMALL and SHMMAX to use 4294967296 (4096 MB) but the cluster\n> > don't start.\n>\n> You have to edit your sysctl.conf see:\n>\n> http://www.postgresql.org/docs/8.1/static/runtime.html\n>\n> I *think* (I would have to double check) the limit for shared memory on\n> linux 32bit is 2 gig. Possibly 2 gig per CPU I don't recall. I run all\n> 64bit now.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>\n> --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n>\n>\n\n\n-- \nMarcelo Costa\n\nYes, my system is DEBIAN SARGE 3.0thanks,Marcelo2006/10/2, Joshua D. Drake <[email protected]>:\nMarcelo Costa wrote:> Hi, to all!>> Recently i try increasing the memory values of shared buffers on one> IBM xseries 255 (Quad XEON 2.8, 8 GB RAM, 2 disk SCSI 36 GB(Raid 1), 1> Storage.\nYou haven't specified your OS so I am going to assume Linux.> Where I start up the cluster very messages of configurations errors on> shared_memmory and SHMMAX look up. I try change the values of\n> shared_memory, max_connections and effective_cache_size and large the> size of SHMALL and SHMMAX to use 4294967296 (4096 MB) but the cluster> don't start.You have to edit your sysctl.conf see:\nhttp://www.postgresql.org/docs/8.1/static/runtime.htmlI *think* (I would have to double check) the limit for shared memory onlinux 32bit is 2 gig. Possibly 2 gig per CPU I don't recall. I run all\n64bit now.Sincerely,Joshua D. Drake-- === The PostgreSQL Company: Command Prompt, Inc. ===Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/-- Marcelo Costa",
"msg_date": "Mon, 2 Oct 2006 13:31:51 -0300",
"msg_from": "\"Marcelo Costa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How much memory in 32 bits Architecture to Shared Buffers is\n\tPossible"
}
] |
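A rough sanity check from inside the database, assuming the default 8 kB block size: how many bytes do the configured shared_buffers alone ask for? The total shared memory request is somewhat larger than this, so SHMMAX has to leave headroom above the number shown.

SELECT name,
       setting,
       setting::bigint * 8192 AS approx_bytes
FROM pg_settings
WHERE name = 'shared_buffers';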
[
{
"msg_contents": "Some very helpful people had asked that I post the troublesome code that was\ngenerated by my import program.\n\nI installed a SQL log feature in my import program. I have\nposted samples of the SQL statements that cause the biggest delays.\n\nThanks for all of your help.\n\nCarlo\n\n----------\nSample 1:\nThis one is very expensive on my system.\n----------\nselect\nf.facility_id,\nprovider_practice_id\nfrom\nmdx_core.provider_practice as pp\njoin mdx_core.facility as f\non f.facility_id = pp.facility_id\njoin mdx_core.facility_address as fa\non fa.facility_id = pp.facility_id\njoin mdx_core.address as a\non a.address_id = fa.address_id\nwhere\npp.provider_id = 1411311\nand f.facility_type_code != 'P'\nand (\npp.facility_address_id is not null\nand a.state_code = 'NY'\nand '10001-2382' = a.postal_code||'%'\nand a.city = 'New York'\n) or (\nf.default_state_code = 'NY'\nand '10001-2382' like f.default_postal_code||'%'\nand f.default_city = 'New York'\n)\nlimit 1\n\nLimit (cost=3899.18..32935.21 rows=1 width=8)\n -> Hash Join (cost=3899.18..91007.27 rows=3 width=8)\n Hash Cond: (\"outer\".address_id = \"inner\".address_id)\n Join Filter: (((\"outer\".provider_id = 1411311) AND \n(\"outer\".facility_type_code <> 'P'::bpchar) AND (\"outer\".facility_address_id \nIS NOT NULL) AND ((\"inner\".state_code)::text = 'NY'::text) AND \n('10001-2382'::text = ((\"inner\".postal_code)::text || '%'::text)) AND \n((\"inner\".city)::text = 'New York'::text)) OR ((\"outer\".default_state_code = \n'NY'::bpchar) AND ('10001-2382'::text ~~ \n((\"outer\".default_postal_code)::text || '%'::text)) AND \n((\"outer\".default_city)::text = 'New York'::text)))\n -> Merge Join (cost=0.00..50589.20 rows=695598 width=57)\n Merge Cond: (\"outer\".facility_id = \"inner\".facility_id)\n -> Merge Join (cost=0.00..16873.90 rows=128268 width=49)\n Merge Cond: (\"outer\".facility_id = \"inner\".facility_id)\n -> Index Scan using facility_pkey on facility f \n(cost=0.00..13590.18 rows=162525 width=41)\n -> Index Scan using facility_address_facility_idx on \nfacility_address fa (cost=0.00..4254.46 rows=128268 width=8)\n -> Index Scan using provider_practice_facility_idx on \nprovider_practice pp (cost=0.00..28718.27 rows=452129 width=16)\n -> Hash (cost=3650.54..3650.54 rows=99454 width=36)\n -> Seq Scan on address a (cost=0.00..3650.54 rows=99454 \nwidth=36)\n\n----------\nSample 2:\nThis one includes a call to a custom function which performs lexical \ncomparisons\nand returns a rating on the likelihood that the company names refer to the \nsame\nfacility. 
Replacing the code:\n mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as \ncomp\nwith\n 1 as comp\n-- to avoid the function call only shaved a fragment off the execution time, \nwhich leads me to believe my problem is in the SQL structure itself.\n\n----------\nselect\nmdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as \ncomp,\nfacil.*\nfrom (\nselect\nf.facility_id,\nfa.facility_address_id,\na.address_id,\nf.facility_type_code,\nf.name,\na.address,\na.city,\na.state_code,\na.postal_code,\na.country_code\nfrom\nmdx_core.facility as f\njoin mdx_core.facility_address as fa\non fa.facility_id = f.facility_id\njoin mdx_core.address as a\non a.address_id = fa.address_id\nwhere\nfacility_address_id is not null\nand a.country_code = 'US'\nand a.state_code = 'IL'\nand '60640-5759' like a.postal_code||'%'\nunion select\nf.facility_id,\nnull as facility_address_id,\nnull as address_id,\nf.facility_type_code,\nf.name,\nnull as address,\nf.default_city as city,\nf.default_state_code as state_code,\nf.default_postal_code as postal_code,\nf.default_country_code as country_code\nfrom\nmdx_core.facility as f\nleft outer join mdx_core.facility_address as fa\non fa.facility_id = f.facility_id\nwhere\nfacility_address_id is null\nand f.default_country_code = 'US'\nand '60640-5759' like f.default_postal_code||'%'\n) as facil\norder by comp\n\nSort (cost=20595.92..20598.01 rows=834 width=236)\n Sort Key: mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 \nInc'::text, (name)::text)\n -> Subquery Scan facil (cost=20522.10..20555.46 rows=834 width=236)\n -> Unique (cost=20522.10..20545.03 rows=834 width=103)\n -> Sort (cost=20522.10..20524.18 rows=834 width=103)\n Sort Key: facility_id, facility_address_id, address_id, \nfacility_type_code, name, address, city, state_code, postal_code, \ncountry_code\n -> Append (cost=4645.12..20481.63 rows=834 width=103)\n -> Nested Loop (cost=4645.12..8381.36 rows=21 \nwidth=103)\n -> Hash Join (cost=4645.12..8301.35 \nrows=21 width=72)\n Hash Cond: (\"outer\".address_id = \n\"inner\".address_id)\n -> Seq Scan on facility_address fa \n(cost=0.00..3014.68 rows=128268 width=12)\n Filter: (facility_address_id IS \nNOT NULL)\n -> Hash (cost=4645.08..4645.08 \nrows=16 width=64)\n -> Seq Scan on address a \n(cost=0.00..4645.08 rows=16 width=64)\n Filter: ((country_code = \n'US'::bpchar) AND ((state_code)::text = 'IL'::text) AND ('60640-5759'::text \n~~ ((postal_code)::text || '%'::text)))\n -> Index Scan using facility_pkey on \nfacility f (cost=0.00..3.80 rows=1 width=35)\n Index Cond: (\"outer\".facility_id = \nf.facility_id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..12100.07 rows=813 width=73)\n -> Nested Loop Left Join \n(cost=0.00..12091.94 rows=813 width=73)\n Filter: (\"inner\".facility_address_id \nIS NULL)\n -> Seq Scan on facility f \n(cost=0.00..8829.19 rows=813 width=73)\n Filter: ((default_country_code = \n'US'::bpchar) AND ('60640-5759'::text ~~ ((default_postal_code)::text || \n'%'::text)))\n -> Index Scan using \nfacility_address_facility_idx on facility_address fa (cost=0.00..3.99 \nrows=2 width=8)\n Index Cond: (fa.facility_id = \n\"outer\".facility_id)\n\n\n",
"msg_date": "Tue, 3 Oct 2006 04:33:01 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/3/06, Carlo Stonebanks <[email protected]> wrote:\n> Some very helpful people had asked that I post the troublesome code that was\n> generated by my import program.\n>\n> I installed a SQL log feature in my import program. I have\n> posted samples of the SQL statements that cause the biggest delays.\n\nexplain analyze is more helpful because it prints the times.\n\nsample 1, couple questions:\nwhat is the purpose of limit 1?\nif you break up the 'or' which checks facility and address into two\nseparate queries, are the two queries total times more, less, or same\nas the large query.\n\nmerlin\n",
"msg_date": "Tue, 3 Oct 2006 11:04:05 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 3 Oct 2006, at 16:04, Merlin Moncure wrote:\n\n> On 10/3/06, Carlo Stonebanks <[email protected]> wrote:\n>> Some very helpful people had asked that I post the troublesome \n>> code that was\n>> generated by my import program.\n>>\n>> I installed a SQL log feature in my import program. I have\n>> posted samples of the SQL statements that cause the biggest delays.\n>\n> explain analyze is more helpful because it prints the times.\n\nYou can always use the \\timing flag in psql ;)\n\nl1_historical=# \\timing\nTiming is on.\nl1_historical=# select 1;\n?column?\n----------\n 1\n(1 row)\n\nTime: 4.717 ms\n\n\n\n\n",
"msg_date": "Tue, 3 Oct 2006 16:16:48 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "> explain analyze is more helpful because it prints the times.\n\nSorry, this runs in-line in my code, and I didn't want to slow the \nalready-slow program with explain analyze. I have run it outside of the code \nin its own query. The new results are below.\n\n> sample 1, couple questions:\n> what is the purpose of limit 1?\n\nI don't need to know the results, I just need to know if any data which \nmeets this criteria exists.\n\n> if you break up the 'or' which checks facility and address into two\n> separate queries, are the two queries total times more, less, or same\n> as the large query.\n\nThey are much less; I had assumed that SQL would use lazy evaluation in this \ncase, not bothering to perform one half of the OR condition if the other \nhalf But the single query is much heavier than the two seperate ones.\n\nCarlo\n\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\nselect\nf.facility_id,\nprovider_practice_id\nfrom\nmdx_core.provider_practice as pp\njoin mdx_core.facility as f\non f.facility_id = pp.facility_id\njoin mdx_core.facility_address as fa\non fa.facility_id = pp.facility_id\njoin mdx_core.address as a\non a.address_id = fa.address_id\nwhere\npp.provider_id = 1411311\nand f.facility_type_code != 'P'\nand (\npp.facility_address_id is not null\nand a.state_code = 'NY'\nand '10001-2382' = a.postal_code||'%'\nand a.city = 'New York'\n) or (\nf.default_state_code = 'NY'\nand '10001-2382' like f.default_postal_code||'%'\nand f.default_city = 'New York'\n)\nlimit 1\n\n\"Limit (cost=3899.18..22561.46 rows=1 width=8) (actual \ntime=9410.970..9410.970 rows=0 loops=1)\"\n\" -> Hash Join (cost=3899.18..97210.58 rows=5 width=8) (actual \ntime=9410.966..9410.966 rows=0 loops=1)\"\n\" Hash Cond: (\"outer\".address_id = \"inner\".address_id)\"\n\" Join Filter: (((\"outer\".provider_id = 1411311) AND \n(\"outer\".facility_type_code <> 'P'::bpchar) AND (\"outer\".facility_address_id \nIS NOT NULL) AND ((\"inner\".state_code)::text = 'NY'::text) AND \n('10001-2382'::text = ((\"inner\".postal_code)::text || '%' (..)\"\n\" -> Merge Join (cost=0.00..51234.97 rows=801456 width=57) (actual \ntime=0.314..6690.241 rows=685198 loops=1)\"\n\" Merge Cond: (\"outer\".facility_id = \"inner\".facility_id)\"\n\" -> Merge Join (cost=0.00..15799.46 rows=128268 width=49) \n(actual time=0.197..1637.553 rows=128268 loops=1)\"\n\" Merge Cond: (\"outer\".facility_id = \n\"inner\".facility_id)\"\n\" -> Index Scan using facility_pkey on facility f \n(cost=0.00..13247.94 rows=176864 width=41) (actual time=0.145..591.219 \nrows=126624 loops=1)\"\n\" -> Index Scan using facility_address_facility_idx on \nfacility_address fa (cost=0.00..4245.12 rows=128268 width=8) (actual \ntime=0.041..384.632 rows=128268 loops=1)\"\n\" -> Index Scan using provider_practice_facility_idx on \nprovider_practice pp (cost=0.00..30346.89 rows=489069 width=16) (actual \ntime=0.111..3031.675 rows=708714 loops=1)\"\n\" -> Hash (cost=3650.54..3650.54 rows=99454 width=36) (actual \ntime=478.509..478.509 rows=99454 loops=1)\"\n\" -> Seq Scan on address a (cost=0.00..3650.54 rows=99454 \nwidth=36) (actual time=0.033..251.203 rows=99454 loops=1)\"\n\"Total runtime: 9412.654 ms\"\n\n----------\nSample 2:\nThis one includes a call to a custom function which performs lexical\ncomparisons\nand returns a rating on the likelihood that the company names refer to the\nsame\nfacility. 
Replacing the code:\n mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as\ncomp\nwith\n 1 as comp\n-- to avoid the function call only shaved a fragment off the execution time,\nwhich leads me to believe my problem is in the SQL structure itself.\n----------\nselect\nmdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as\ncomp,\nfacil.*\nfrom (\nselect\nf.facility_id,\nfa.facility_address_id,\na.address_id,\nf.facility_type_code,\nf.name,\na.address,\na.city,\na.state_code,\na.postal_code,\na.country_code\nfrom\nmdx_core.facility as f\njoin mdx_core.facility_address as fa\non fa.facility_id = f.facility_id\njoin mdx_core.address as a\non a.address_id = fa.address_id\nwhere\nfacility_address_id is not null\nand a.country_code = 'US'\nand a.state_code = 'IL'\nand '60640-5759' like a.postal_code||'%'\nunion select\nf.facility_id,\nnull as facility_address_id,\nnull as address_id,\nf.facility_type_code,\nf.name,\nnull as address,\nf.default_city as city,\nf.default_state_code as state_code,\nf.default_postal_code as postal_code,\nf.default_country_code as country_code\nfrom\nmdx_core.facility as f\nleft outer join mdx_core.facility_address as fa\non fa.facility_id = f.facility_id\nwhere\nfacility_address_id is null\nand f.default_country_code = 'US'\nand '60640-5759' like f.default_postal_code||'%'\n) as facil\norder by comp\n\n\"Sort (cost=21565.50..21567.78 rows=909 width=236) (actual \ntime=1622.448..1622.456 rows=12 loops=1)\"\n\" Sort Key: mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 \nInc'::text, (name)::text)\"\n\" -> Subquery Scan facil (cost=21484.47..21520.83 rows=909 width=236) \n(actual time=1173.103..1622.134 rows=12 loops=1)\"\n\" -> Unique (cost=21484.47..21509.47 rows=909 width=103) (actual \ntime=829.747..829.840 rows=12 loops=1)\"\n\" -> Sort (cost=21484.47..21486.75 rows=909 width=103) \n(actual time=829.744..829.761 rows=12 loops=1)\"\n\" Sort Key: facility_id, facility_address_id, address_id, \nfacility_type_code, name, address, city, state_code, postal_code, \ncountry_code\"\n\" -> Append (cost=4645.12..21439.81 rows=909 width=103) \n(actual time=146.952..829.517 rows=12 loops=1)\"\n\" -> Nested Loop (cost=4645.12..8380.19 rows=22 \nwidth=103) (actual time=146.949..510.824 rows=12 loops=1)\"\n\" -> Hash Join (cost=4645.12..8301.36 \nrows=22 width=72) (actual time=146.912..510.430 rows=12 loops=1)\"\n\" Hash Cond: (\"outer\".address_id = \n\"inner\".address_id)\"\n\" -> Seq Scan on facility_address fa \n(cost=0.00..3014.68 rows=128268 width=12) (actual time=0.007..238.228 \nrows=128268 loops=1)\"\n\" Filter: (facility_address_id IS \nNOT NULL)\"\n\" -> Hash (cost=4645.08..4645.08 \nrows=17 width=64) (actual time=131.827..131.827 rows=3 loops=1)\"\n\" -> Seq Scan on address a \n(cost=0.00..4645.08 rows=17 width=64) (actual time=3.555..131.797 rows=3 \nloops=1)\"\n\" Filter: ((country_code = \n'US'::bpchar) AND ((state_code)::text = 'IL'::text) AND ('60640-5759'::text \n~~ ((postal_code)::text || '%'::text)))\"\n\" -> Index Scan using facility_pkey on \nfacility f (cost=0.00..3.57 rows=1 width=35) (actual time=0.021..0.023 \nrows=1 loops=12)\"\n\" Index Cond: (\"outer\".facility_id = \nf.facility_id)\"\n\" -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..13059.40 rows=887 width=73) (actual time=318.669..318.669 rows=0 \nloops=1)\"\n\" -> Nested Loop Left Join \n(cost=0.00..13050.53 rows=887 width=73) (actual time=318.664..318.664 rows=0 \nloops=1)\"\n\" Filter: (\"inner\".facility_address_id \nIS NULL)\"\n\" -> Seq Scan on 
facility f \n(cost=0.00..9438.13 rows=887 width=73) (actual time=4.468..318.364 rows=10 \nloops=1)\"\n\" Filter: ((default_country_code \n= 'US'::bpchar) AND ('60640-5759'::text ~~ ((default_postal_code)::text || \n'%'::text)))\"\n\" -> Index Scan using \nfacility_address_facility_idx on facility_address fa (cost=0.00..4.05 \nrows=2 width=8) (actual time=0.018..0.022 rows=1 loops=10)\"\n\" Index Cond: (fa.facility_id = \n\"outer\".facility_id)\"\n\n\n",
"msg_date": "Tue, 3 Oct 2006 12:58:30 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "Please ignore sample 1 - now that I have the logging feature, I can see that \nmy query generator algorithm made an error.\n\nThe SQL of concern is now script 2. \n\n\n",
"msg_date": "Tue, 3 Oct 2006 14:07:22 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/3/06, Carlo Stonebanks <[email protected]> wrote:\n> Please ignore sample 1 - now that I have the logging feature, I can see that\n> my query generator algorithm made an error.\n\ncan you do explain analyze on the two select queries on either side of\nthe union separatly? the subquery is correctly written and unlikely\nto be a problem (in fact, good style imo). so lets have a look at\nboth sides of facil query and see where the problem is.\n\nmerlin\n",
"msg_date": "Tue, 3 Oct 2006 15:21:06 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "Hi, Alex,\n\nAlex Stapleton wrote:\n\n>> explain analyze is more helpful because it prints the times.\n> \n> You can always use the \\timing flag in psql ;)\n\nHave you ever tried EXPLAIN ANALYZE?\n\n\\timing gives you one total timing, but EXPLAIN ANALYZE gives you\ntimings for sub-plans, including real row counts etc.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 04 Oct 2006 14:11:35 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "> can you do explain analyze on the two select queries on either side of\n> the union separatly? the subquery is correctly written and unlikely\n> to be a problem (in fact, good style imo). so lets have a look at\n> both sides of facil query and see where the problem is.\n\nSorry for the delay, the server was down yesterday and couldn't get \nanything.\n\nI have modified the sub-queries a little, trying to get the index scans to \nfire - all the tables involved here are large enough to benefit from index \nscans over sequential scans. I am mystified as to why PART 1 is giving me:\n\n \"Seq Scan on facility_address fa (cost=0.00..3014.68 rows=128268 width=12) \n(actual time=0.007..99.033 rows=128268 loops=1)\"\n\nwhich I assume is for the:\n\n\"join mdx_core.facility_address as fa on fa.facility_id = f.facility_id\"\n\nThen again, I am not sure how to read the EXPLAIN ANALYSE performance \nnumbers.\n\nThe other part of the UNION (PART 2) I have also modified, I think it's \nworking nicely. Let me know if I'm mistaken on thinking that!\n\nThe one remaining problem is that the UNION of these two sub-queries has a \ncolumn which is a call to a custom TCL function that does a lexical analysis \non the results, ranking the result names by their proximity to the imported \nname. his definitely eats up the performance and I hope that my decision to \ncall this function on the results of the union (assuming union deletes \nredundent rows) is the correct one.\n\nThanks!\n\nCarlo\n\n\n/* PART 1.\n The redundant expression \"facility_address_id is NULL\" was removed because\n only an OUTER join would have made this meaningful. We use only INNER \njoins in this sub-query\n Both facility_address and address have seq scans, even though there is an \nindex for\n facility_address(facility_id( and an index for address( country_code, \npostal_code, address).\n The \"like\" operator appears to be making things expensive. 
This is used \nbecause we have to take\n into account that perhaps the import row is using the 5-number US ZIP, \nnot the 9-number USZIP+4\n standard (although this is not the case in this sample).\n/*\nexplain analyse select\n f.facility_id,\n fa.facility_address_id,\n a.address_id,\n f.facility_type_code,\n f.name,\n a.address,\n a.city,\n a.state_code,\n a.postal_code,\n a.country_code\n from\n mdx_core.facility as f\n join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\n join mdx_core.address as a\n on a.address_id = fa.address_id\n where\n a.country_code = 'US'\n and a.state_code = 'IL'\n and a.postal_code like '60640-5759'||'%'\n order by facility_id\n\n\"Sort (cost=6392.50..6392.50 rows=1 width=103) (actual \ntime=189.133..189.139 rows=12 loops=1)\"\n\" Sort Key: f.facility_id\"\n\" -> Nested Loop (cost=2732.88..6392.49 rows=1 width=103) (actual \ntime=14.006..188.967 rows=12 loops=1)\"\n\" -> Hash Join (cost=2732.88..6388.91 rows=1 width=72) (actual \ntime=13.979..188.748 rows=12 loops=1)\"\n\" Hash Cond: (\"outer\".address_id = \"inner\".address_id)\"\n\" -> Seq Scan on facility_address fa (cost=0.00..3014.68 \nrows=128268 width=12) (actual time=0.004..98.867 rows=128268 loops=1)\"\n\" -> Hash (cost=2732.88..2732.88 rows=1 width=64) (actual \ntime=6.430..6.430 rows=3 loops=1)\"\n\" -> Bitmap Heap Scan on address a (cost=62.07..2732.88 \nrows=1 width=64) (actual time=2.459..6.417 rows=3 loops=1)\"\n\" Recheck Cond: ((country_code = 'US'::bpchar) AND \n((state_code)::text = 'IL'::text))\"\n\" Filter: ((postal_code)::text ~~ \n'60640-5759%'::text)\"\n\" -> Bitmap Index Scan on \naddress_country_state_postal_code_address_idx (cost=0.00..62.07 rows=3846 \nwidth=0) (actual time=1.813..1.813 rows=3554 loops=1)\"\n\" Index Cond: ((country_code = 'US'::bpchar) \nAND ((state_code)::text = 'IL'::text))\"\n\" -> Index Scan using facility_pkey on facility f (cost=0.00..3.56 \nrows=1 width=35) (actual time=0.012..0.013 rows=1 loops=12)\"\n\" Index Cond: (\"outer\".facility_id = f.facility_id)\"\n\"Total runtime: 189.362 ms\"\n\n/* PART 2 - can you see anything that could work faster? */\n\nexplain analyse select\n f.facility_id,\n null as facility_address_id,\n null as address_id,\n f.facility_type_code,\n f.name,\n null as address,\n f.default_city as city,\n f.default_state_code as state_code,\n f.default_postal_code as postal_code,\n f.default_country_code as country_code\n from\n mdx_core.facility as f\n left outer join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\n where\n fa.facility_address_id is null\n and f.default_country_code = 'US'\n and f.default_state_code = 'IL'\n and '60640-5759' like f.default_postal_code||'%'\n\n\"Nested Loop Left Join (cost=0.00..6042.41 rows=32 width=73) (actual \ntime=14.923..14.923 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Index Scan using facility_country_state_postal_code_idx on facility f \n(cost=0.00..5914.69 rows=32 width=73) (actual time=10.118..14.773 rows=10 \nloops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n(default_state_code = 'IL'::bpchar))\"\n\" Filter: ('60640-5759'::text ~~ ((default_postal_code)::text || \n'%'::text))\"\n\" -> Index Scan using facility_address_facility_idx on facility_address fa \n(cost=0.00..3.97 rows=2 width=8) (actual time=0.009..0.011 rows=1 loops=10)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 15.034 ms\"\n\n\n",
"msg_date": "Wed, 4 Oct 2006 14:22:48 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/4/06, Carlo Stonebanks <[email protected]> wrote:\n> > can you do explain analyze on the two select queries on either side of\n> > the union separatly? the subquery is correctly written and unlikely\n> > to be a problem (in fact, good style imo). so lets have a look at\n> > both sides of facil query and see where the problem is.\n>\n> Sorry for the delay, the server was down yesterday and couldn't get\n> anything.\n>\n> I have modified the sub-queries a little, trying to get the index scans to\n> fire - all the tables involved here are large enough to benefit from index\n> scans over sequential scans. I am mystified as to why PART 1 is giving me:\n>\n\n> \"Seq Scan on facility_address fa (cost=0.00..3014.68 rows=128268 width=12)\n> (actual time=0.007..99.033 rows=128268 loops=1)\"\n\nnot sure on this, lets go back to that.\n\n> into account that perhaps the import row is using the 5-number US ZIP,\n> not the 9-number USZIP+4\n\n\n> where\n> a.country_code = 'US'\n> and a.state_code = 'IL'\n> and a.postal_code like '60640-5759'||'%'\n> order by facility_id\n\n1. create a small function, sql preferred which truncates the zip code\nto 5 digits or reduces to so called 'fuzzy' matching criteria. lets\ncall it zip_trunc(text) and make it immutable which it is. write this\nin sql, not tcl if possible (trust me).\n\ncreate index address_idx on address(country_code, state_code,\nzip_trunc(postal_code));\n\nrewrite above where clause as\n\nwhere (a.country_code, a.state_code, zip_trunc(postal_code)) = ('US',\n'IL', zip_trunc('60640-5759'));\n\ntry it out, then lets see how it goes and then we can take a look at\nany seqscan issues.\n\nmerlin\n",
"msg_date": "Wed, 4 Oct 2006 17:07:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
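A sketch of the function and expression index Merlin describes; the body below simply keeps the first five characters, which covers the 5-digit US ZIP case but may need refining for other postal code formats:

CREATE OR REPLACE FUNCTION mdx_core.zip_trunc(text) RETURNS text AS $$
  SELECT substr($1, 1, 5);
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX address_country_state_zip_trunc_idx
  ON mdx_core.address (country_code, state_code, mdx_core.zip_trunc(postal_code));

ANALYZE mdx_core.address;

-- and the rewritten predicate:
--   WHERE (a.country_code, a.state_code, mdx_core.zip_trunc(a.postal_code))
--       = ('US', 'IL', mdx_core.zip_trunc('60640-5759'))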
{
"msg_contents": "Hi Merlin,\n\nHere are the results. The query returned more rows (65 vs 12) because of the \nvague postal_code.\n\nIn reality, we would have to modify the postal_code logic to take advantage \nof full zip codes when they were avalable, not unconditionally truncate \nthem.\n\nCarlo\n\nexplain analyze select\n f.facility_id,\n fa.facility_address_id,\n a.address_id,\n f.facility_type_code,\n f.name,\n a.address,\n a.city,\n a.state_code,\n a.postal_code,\n a.country_code\n from\n mdx_core.facility as f\n join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\n join mdx_core.address as a\n on a.address_id = fa.address_id\n where\n (a.country_code, a.state_code, mdx_core.zip_trunc(a.postal_code)) = \n('US', 'IL', mdx_core.zip_trunc('60640-5759'))\n order by facility_id\n\n\"Sort (cost=6474.78..6474.84 rows=25 width=103) (actual \ntime=217.279..217.311 rows=65 loops=1)\"\n\" Sort Key: f.facility_id\"\n\" -> Nested Loop (cost=2728.54..6474.20 rows=25 width=103) (actual \ntime=35.828..217.059 rows=65 loops=1)\"\n\" -> Hash Join (cost=2728.54..6384.81 rows=25 width=72) (actual \ntime=35.801..216.117 rows=65 loops=1)\"\n\" Hash Cond: (\"outer\".address_id = \"inner\".address_id)\"\n\" -> Seq Scan on facility_address fa (cost=0.00..3014.68 \nrows=128268 width=12) (actual time=0.007..99.072 rows=128268 loops=1)\"\n\" -> Hash (cost=2728.50..2728.50 rows=19 width=64) (actual \ntime=33.618..33.618 rows=39 loops=1)\"\n\" -> Bitmap Heap Scan on address a (cost=48.07..2728.50 \nrows=19 width=64) (actual time=2.569..33.491 rows=39 loops=1)\"\n\" Recheck Cond: ((country_code = 'US'::bpchar) AND \n((state_code)::text = 'IL'::text))\"\n\" Filter: (mdx_core.zip_trunc(postal_code) = \n'60640'::text)\"\n\" -> Bitmap Index Scan on \naddress_country_state_zip_trunc_idx (cost=0.00..48.07 rows=3846 width=0) \n(actual time=1.783..1.783 rows=3554 loops=1)\"\n\" Index Cond: ((country_code = 'US'::bpchar) \nAND ((state_code)::text = 'IL'::text))\"\n\" -> Index Scan using facility_pkey on facility f (cost=0.00..3.56 \nrows=1 width=35) (actual time=0.009..0.010 rows=1 loops=65)\"\n\" Index Cond: (\"outer\".facility_id = f.facility_id)\"\n\"Total runtime: 217.520 ms\"\n\n\n\n\"\"Merlin Moncure\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On 10/4/06, Carlo Stonebanks <[email protected]> wrote:\n>> > can you do explain analyze on the two select queries on either side of\n>> > the union separatly? the subquery is correctly written and unlikely\n>> > to be a problem (in fact, good style imo). so lets have a look at\n>> > both sides of facil query and see where the problem is.\n>>\n>> Sorry for the delay, the server was down yesterday and couldn't get\n>> anything.\n>>\n>> I have modified the sub-queries a little, trying to get the index scans \n>> to\n>> fire - all the tables involved here are large enough to benefit from \n>> index\n>> scans over sequential scans. I am mystified as to why PART 1 is giving \n>> me:\n>>\n>\n>> \"Seq Scan on facility_address fa (cost=0.00..3014.68 rows=128268 \n>> width=12)\n>> (actual time=0.007..99.033 rows=128268 loops=1)\"\n>\n> not sure on this, lets go back to that.\n>\n>> into account that perhaps the import row is using the 5-number US ZIP,\n>> not the 9-number USZIP+4\n>\n>\n>> where\n>> a.country_code = 'US'\n>> and a.state_code = 'IL'\n>> and a.postal_code like '60640-5759'||'%'\n>> order by facility_id\n>\n> 1. 
create a small function, sql preferred which truncates the zip code\n> to 5 digits or reduces to so called 'fuzzy' matching criteria. lets\n> call it zip_trunc(text) and make it immutable which it is. write this\n> in sql, not tcl if possible (trust me).\n>\n> create index address_idx on address(country_code, state_code,\n> zip_trunc(postal_code));\n>\n> rewrite above where clause as\n>\n> where (a.country_code, a.state_code, zip_trunc(postal_code)) = ('US',\n> 'IL', zip_trunc('60640-5759'));\n>\n> try it out, then lets see how it goes and then we can take a look at\n> any seqscan issues.\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Wed, 4 Oct 2006 18:41:40 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
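For reference, the thread never shows the body of mdx_core.zip_trunc. A minimal sketch of the immutable SQL function and expression index described in the quoted advice, assuming the function simply keeps the first five characters of the postal code (the real definition may differ):

create or replace function mdx_core.zip_trunc(text) returns text as $$
    -- assumed behaviour: keep only the first five characters of the postal code
    select substr($1, 1, 5);
$$ language sql immutable;

create index address_country_state_zip_trunc_idx
    on mdx_core.address (country_code, state_code, mdx_core.zip_trunc(postal_code));

The function must be immutable for the expression index to be allowed, which is why Merlin stresses that point.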
{
"msg_contents": "On 10/5/06, Carlo Stonebanks <[email protected]> wrote:\n> Hi Merlin,\n>\n> Here are the results. The query returned more rows (65 vs 12) because of the\n> vague postal_code.\n\nright. interestingly, the index didn't work properly anyways.\nregardless, this is easily solvable but it looks like we might be\nlooking in the wrong place. do we have an multi-column index on\nfacility_address(facility_id, address_id)? did you run analyze?\n\n> In reality, we would have to modify the postal_code logic to take advantage\n> of full zip codes when they were avalable, not unconditionally truncate\n> them.\n>\n> Carlo\n>\n> explain analyze select\n> f.facility_id,\n> fa.facility_address_id,\n> a.address_id,\n> f.facility_type_code,\n> f.name,\n> a.address,\n> a.city,\n> a.state_code,\n> a.postal_code,\n> a.country_code\n> from\n> mdx_core.facility as f\n> join mdx_core.facility_address as fa\n> on fa.facility_id = f.facility_id\n> join mdx_core.address as a\n> on a.address_id = fa.address_id\n> where\n> (a.country_code, a.state_code, mdx_core.zip_trunc(a.postal_code)) =\n> ('US', 'IL', mdx_core.zip_trunc('60640-5759'))\n> order by facility_id\n>\n> \"Sort (cost=6474.78..6474.84 rows=25 width=103) (actual\n> time=217.279..217.311 rows=65 loops=1)\"\n> \" Sort Key: f.facility_id\"\n> \" -> Nested Loop (cost=2728.54..6474.20 rows=25 width=103) (actual\n> time=35.828..217.059 rows=65 loops=1)\"\n> \" -> Hash Join (cost=2728.54..6384.81 rows=25 width=72) (actual\n> time=35.801..216.117 rows=65 loops=1)\"\n> \" Hash Cond: (\"outer\".address_id = \"inner\".address_id)\"\n> \" -> Seq Scan on facility_address fa (cost=0.00..3014.68\n> rows=128268 width=12) (actual time=0.007..99.072 rows=128268 loops=1)\"\n> \" -> Hash (cost=2728.50..2728.50 rows=19 width=64) (actual\n> time=33.618..33.618 rows=39 loops=1)\"\n> \" -> Bitmap Heap Scan on address a (cost=48.07..2728.50\n> rows=19 width=64) (actual time=2.569..33.491 rows=39 loops=1)\"\n> \" Recheck Cond: ((country_code = 'US'::bpchar) AND\n> ((state_code)::text = 'IL'::text))\"\n> \" Filter: (mdx_core.zip_trunc(postal_code) =\n> '60640'::text)\"\n> \" -> Bitmap Index Scan on\n> address_country_state_zip_trunc_idx (cost=0.00..48.07 rows=3846 width=0)\n> (actual time=1.783..1.783 rows=3554 loops=1)\"\n> \" Index Cond: ((country_code = 'US'::bpchar)\n> AND ((state_code)::text = 'IL'::text))\"\n> \" -> Index Scan using facility_pkey on facility f (cost=0.00..3.56\n> rows=1 width=35) (actual time=0.009..0.010 rows=1 loops=65)\"\n> \" Index Cond: (\"outer\".facility_id = f.facility_id)\"\n> \"Total runtime: 217.520 ms\"\n",
"msg_date": "Thu, 5 Oct 2006 11:56:23 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "> do we have an multi-column index on\n> facility_address(facility_id, address_id)? did you run analyze?\n\nThere is an index on facility_address on facility_id.\n\nI didn't create an index on facility_address.address_id because I expected \njoins to go in the other direction (from facility_address to address).\nNor did I create a multi-column index on facility_id, address_id because I \nhad yet to come up with a query that required that.\n\nHowever, I still have a lot to learn about how SQL chooses its indexes, how \nmulti-column indexes are used, and when to use them (other than the \nobvious - i.e. sort orders or relational expressions which request those \ncolumns in one search expression)\n\nAnalyse is actually run every time a page of imported data loads into the \nclient program. This is currently set at 500 rows.\n\nCarlo\n\n>> explain analyze select\n>> f.facility_id,\n>> fa.facility_address_id,\n>> a.address_id,\n>> f.facility_type_code,\n>> f.name,\n>> a.address,\n>> a.city,\n>> a.state_code,\n>> a.postal_code,\n>> a.country_code\n>> from\n>> mdx_core.facility as f\n>> join mdx_core.facility_address as fa\n>> on fa.facility_id = f.facility_id\n>> join mdx_core.address as a\n>> on a.address_id = fa.address_id\n>> where\n>> (a.country_code, a.state_code, mdx_core.zip_trunc(a.postal_code)) =\n>> ('US', 'IL', mdx_core.zip_trunc('60640-5759'))\n>> order by facility_id\n>>\n>> \"Sort (cost=6474.78..6474.84 rows=25 width=103) (actual\n>> time=217.279..217.311 rows=65 loops=1)\"\n>> \" Sort Key: f.facility_id\"\n>> \" -> Nested Loop (cost=2728.54..6474.20 rows=25 width=103) (actual\n>> time=35.828..217.059 rows=65 loops=1)\"\n>> \" -> Hash Join (cost=2728.54..6384.81 rows=25 width=72) (actual\n>> time=35.801..216.117 rows=65 loops=1)\"\n>> \" Hash Cond: (\"outer\".address_id = \"inner\".address_id)\"\n>> \" -> Seq Scan on facility_address fa (cost=0.00..3014.68\n>> rows=128268 width=12) (actual time=0.007..99.072 rows=128268 loops=1)\"\n>> \" -> Hash (cost=2728.50..2728.50 rows=19 width=64) (actual\n>> time=33.618..33.618 rows=39 loops=1)\"\n>> \" -> Bitmap Heap Scan on address a \n>> (cost=48.07..2728.50\n>> rows=19 width=64) (actual time=2.569..33.491 rows=39 loops=1)\"\n>> \" Recheck Cond: ((country_code = 'US'::bpchar) \n>> AND\n>> ((state_code)::text = 'IL'::text))\"\n>> \" Filter: (mdx_core.zip_trunc(postal_code) =\n>> '60640'::text)\"\n>> \" -> Bitmap Index Scan on\n>> address_country_state_zip_trunc_idx (cost=0.00..48.07 rows=3846 width=0)\n>> (actual time=1.783..1.783 rows=3554 loops=1)\"\n>> \" Index Cond: ((country_code = \n>> 'US'::bpchar)\n>> AND ((state_code)::text = 'IL'::text))\"\n>> \" -> Index Scan using facility_pkey on facility f \n>> (cost=0.00..3.56\n>> rows=1 width=35) (actual time=0.009..0.010 rows=1 loops=65)\"\n>> \" Index Cond: (\"outer\".facility_id = f.facility_id)\"\n>> \"Total runtime: 217.520 ms\"\n\n\n",
"msg_date": "Thu, 5 Oct 2006 03:33:30 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> I didn't create an index on facility_address.address_id because I expected \n> joins to go in the other direction (from facility_address to address).\n\nWell, that's your problem right there ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Oct 2006 09:24:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL "
},
{
"msg_contents": "On 10/5/06, Carlo Stonebanks <[email protected]> wrote:\n> > do we have an multi-column index on\n> > facility_address(facility_id, address_id)? did you run analyze?\n>\n> There is an index on facility_address on facility_id.\n>\n> I didn't create an index on facility_address.address_id because I expected\n> joins to go in the other direction (from facility_address to address).\n> Nor did I create a multi-column index on facility_id, address_id because I\n> had yet to come up with a query that required that.\n\nright. well, since you are filtering on address, I would consider\nadded an index on address_id or a multi column on address_id,\nfacility_id (in addition to facility_id). also, I'd consider removing\nall the explicit joins like this:\n\nexplain analyze select\n f.facility_id,\n fa.facility_address_id,\n a.address_id,\n f.facility_type_code,\n f.name,\n a.address,\n a.city,\n a.state_code,\n a.postal_code,\n a.country_code\n from\n mdx_core.facility f,\n mdx_core.facility_address fa,\n mdx_core.address a\n where\n fa.facility_id = f.facility_id and\n a.address_id = fa.address_id and\n a.country_code = 'US' and\n a.state_code = 'IL' and\n a.postal_code like '60640-5759'||'%'\n order by facility_id;\n\nyet another way to write that where clause is:\n\n (fa_address_id, fa.facility_id) = (a.address_id, f.facility_id) and\n a.country_code = 'US' and\n a.state_code = 'IL' and\n a.postal_code like '60640-5759'||'%'\n order by facility_id;\n\nI personally only use explicit joins when doing outer joins and even\nthem push them out as far as possible.\n\nI like the row constructor style better because it shows the key\nrelationships more clearly. I don't think it makes a difference in\nexecution (go ahead and try it). If you do make a multi column key on\nfacility_address, though, make sure to put they key fields in left to\nright order in the row constructor. Try adding a multi key on\naddress_id and facility_id and run it this way. In a proper design\nyou would have a primary key on these fields but with imported data\nyou obviously have to make compromises :).\n\n> However, I still have a lot to learn about how SQL chooses its indexes, how\n> multi-column indexes are used, and when to use them (other than the\n> obvious - i.e. sort orders or relational expressions which request those\n> columns in one search expression)\n\nwell, it's kind of black magic but if the database is properly laid\nout the function usually follows form pretty well.\n\n> Analyse is actually run every time a page of imported data loads into the\n> client program. This is currently set at 500 rows.\n\nok.\n\nmerlin\n",
"msg_date": "Thu, 5 Oct 2006 09:30:45 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
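A minimal sketch of the extra index Merlin suggests here; the index name is invented for illustration:

create index facility_address_address_facility_idx
    on mdx_core.facility_address (address_id, facility_id);

analyze mdx_core.facility_address;

With an index whose leading column is address_id, the planner can reach facility_address from the address side of the join instead of sequentially scanning it.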
{
"msg_contents": "Just to clarify: if I expect to join two tables that I expect to benfit from \nindexed scans, I should create indexes on the joined columns on BOTH sides?\n\nCarlo\n\n\n\"Tom Lane\" <[email protected]> wrote in message \nnews:[email protected]...\n> \"Carlo Stonebanks\" <[email protected]> writes:\n>> I didn't create an index on facility_address.address_id because I \n>> expected\n>> joins to go in the other direction (from facility_address to address).\n>\n> Well, that's your problem right there ...\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Thu, 5 Oct 2006 12:29:42 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> Just to clarify: if I expect to join two tables that I expect to benfit from \n> indexed scans, I should create indexes on the joined columns on BOTH sides?\n\nWell, it all depends on the queries you plan to issue ... but for the\nparticular query shown here, the lack of that index is the performance\nbottleneck.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Oct 2006 12:35:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL "
},
{
"msg_contents": "Oh you hate explicit joins too? I started in Oracle and was dismayed to find \nout what the SQL standard was. I especially miss the simplicity of += outer \njoins.\n\nI'll try adding the address_id index to facility_address and see what I get!\n\nCarlo\n\n\n\"\"Merlin Moncure\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On 10/5/06, Carlo Stonebanks <[email protected]> wrote:\n>> > do we have an multi-column index on\n>> > facility_address(facility_id, address_id)? did you run analyze?\n>>\n>> There is an index on facility_address on facility_id.\n>>\n>> I didn't create an index on facility_address.address_id because I \n>> expected\n>> joins to go in the other direction (from facility_address to address).\n>> Nor did I create a multi-column index on facility_id, address_id because \n>> I\n>> had yet to come up with a query that required that.\n>\n> right. well, since you are filtering on address, I would consider\n> added an index on address_id or a multi column on address_id,\n> facility_id (in addition to facility_id). also, I'd consider removing\n> all the explicit joins like this:\n>\n> explain analyze select\n> f.facility_id,\n> fa.facility_address_id,\n> a.address_id,\n> f.facility_type_code,\n> f.name,\n> a.address,\n> a.city,\n> a.state_code,\n> a.postal_code,\n> a.country_code\n> from\n> mdx_core.facility f,\n> mdx_core.facility_address fa,\n> mdx_core.address a\n> where\n> fa.facility_id = f.facility_id and\n> a.address_id = fa.address_id and\n> a.country_code = 'US' and\n> a.state_code = 'IL' and\n> a.postal_code like '60640-5759'||'%'\n> order by facility_id;\n>\n> yet another way to write that where clause is:\n>\n> (fa_address_id, fa.facility_id) = (a.address_id, f.facility_id) and\n> a.country_code = 'US' and\n> a.state_code = 'IL' and\n> a.postal_code like '60640-5759'||'%'\n> order by facility_id;\n>\n> I personally only use explicit joins when doing outer joins and even\n> them push them out as far as possible.\n>\n> I like the row constructor style better because it shows the key\n> relationships more clearly. I don't think it makes a difference in\n> execution (go ahead and try it). If you do make a multi column key on\n> facility_address, though, make sure to put they key fields in left to\n> right order in the row constructor. Try adding a multi key on\n> address_id and facility_id and run it this way. In a proper design\n> you would have a primary key on these fields but with imported data\n> you obviously have to make compromises :).\n>\n>> However, I still have a lot to learn about how SQL chooses its indexes, \n>> how\n>> multi-column indexes are used, and when to use them (other than the\n>> obvious - i.e. sort orders or relational expressions which request those\n>> columns in one search expression)\n>\n> well, it's kind of black magic but if the database is properly laid\n> out the function usually follows form pretty well.\n>\n>> Analyse is actually run every time a page of imported data loads into the\n>> client program. This is currently set at 500 rows.\n>\n> ok.\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Thu, 5 Oct 2006 13:46:59 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "This didn't work right away, but DID work after running a VACUUM FULL. In \nother words, i was still stuck with a sequential scan until after the \nvacuum.\n\nI turned autovacuum off in order to help with the import, but was perfoming \nan ANALYZE with every 500 rows imported.\n\nWith autovacuum off for imports, how frequently should I VACUUM?\n\n\n\n\"\"Merlin Moncure\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On 10/5/06, Carlo Stonebanks <[email protected]> wrote:\n>> > do we have an multi-column index on\n>> > facility_address(facility_id, address_id)? did you run analyze?\n>>\n>> There is an index on facility_address on facility_id.\n>>\n>> I didn't create an index on facility_address.address_id because I \n>> expected\n>> joins to go in the other direction (from facility_address to address).\n>> Nor did I create a multi-column index on facility_id, address_id because \n>> I\n>> had yet to come up with a query that required that.\n>\n> right. well, since you are filtering on address, I would consider\n> added an index on address_id or a multi column on address_id,\n> facility_id (in addition to facility_id). also, I'd consider removing\n> all the explicit joins like this:\n>\n> explain analyze select\n> f.facility_id,\n> fa.facility_address_id,\n> a.address_id,\n> f.facility_type_code,\n> f.name,\n> a.address,\n> a.city,\n> a.state_code,\n> a.postal_code,\n> a.country_code\n> from\n> mdx_core.facility f,\n> mdx_core.facility_address fa,\n> mdx_core.address a\n> where\n> fa.facility_id = f.facility_id and\n> a.address_id = fa.address_id and\n> a.country_code = 'US' and\n> a.state_code = 'IL' and\n> a.postal_code like '60640-5759'||'%'\n> order by facility_id;\n>\n> yet another way to write that where clause is:\n>\n> (fa_address_id, fa.facility_id) = (a.address_id, f.facility_id) and\n> a.country_code = 'US' and\n> a.state_code = 'IL' and\n> a.postal_code like '60640-5759'||'%'\n> order by facility_id;\n>\n> I personally only use explicit joins when doing outer joins and even\n> them push them out as far as possible.\n>\n> I like the row constructor style better because it shows the key\n> relationships more clearly. I don't think it makes a difference in\n> execution (go ahead and try it). If you do make a multi column key on\n> facility_address, though, make sure to put they key fields in left to\n> right order in the row constructor. Try adding a multi key on\n> address_id and facility_id and run it this way. In a proper design\n> you would have a primary key on these fields but with imported data\n> you obviously have to make compromises :).\n>\n>> However, I still have a lot to learn about how SQL chooses its indexes, \n>> how\n>> multi-column indexes are used, and when to use them (other than the\n>> obvious - i.e. sort orders or relational expressions which request those\n>> columns in one search expression)\n>\n> well, it's kind of black magic but if the database is properly laid\n> out the function usually follows form pretty well.\n>\n>> Analyse is actually run every time a page of imported data loads into the\n>> client program. This is currently set at 500 rows.\n>\n> ok.\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Fri, 6 Oct 2006 12:44:29 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:\n> This didn't work right away, but DID work after running a VACUUM FULL. In \n> other words, i was still stuck with a sequential scan until after the \n> vacuum.\n> \n> I turned autovacuum off in order to help with the import, but was perfoming \n> an ANALYZE with every 500 rows imported.\n> \n> With autovacuum off for imports, how frequently should I VACUUM?\n\nBasically once the query planner stops using seq scans is usually good\nenough, although sometimes there's a bit of a period where it'll be\nusing nested loops and then switch to merge etc...\n\nEvery 500 is probably a bit much. After the first few thousand rows,\nrun an analyze, and after about 5 to 10 thousand another analyze and you\nshould be set.\n",
"msg_date": "Fri, 06 Oct 2006 11:53:43 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/6/06, Scott Marlowe <[email protected]> wrote:\n> On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:\n> > This didn't work right away, but DID work after running a VACUUM FULL. In\n> > other words, i was still stuck with a sequential scan until after the\n> > vacuum.\n> >\n> > I turned autovacuum off in order to help with the import, but was perfoming\n> > an ANALYZE with every 500 rows imported.\n\nhow did you determine that it is done every 500 rows? this is the\ndefault autovacuum paramater. if you followed my earlier\nrecommendations, you are aware that autovacuum (which also analyzes)\nis not running during bulk inserts, right?\n\nimo, best way to do big data import/conversion is to:\n1. turn off all extra features, like stats, logs, etc\n2. use copy interface to load data into scratch tables with probably\nall text fields\n3. analyze (just once)\n4. use big queries to transform, normalize, etc\n5. drop scratch tables\n6. set up postgresql.conf for production use, fsync, stats, etc\n\nimportant feature of analyze is to tell the planner approx. how big\nthe tables are.\n\nmerlin\n",
"msg_date": "Fri, 6 Oct 2006 14:53:35 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
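A rough sketch of the scratch-table flow outlined above; the scratch table, its columns, and the file path are invented, and it assumes facility_id is filled in by a sequence default:

-- 2. load the raw data as text into a scratch table
create table scratch_facility (name text, city text, state_code text, postal_code text);
copy scratch_facility from '/tmp/facility_import.txt';

-- 3. analyze once so the planner knows roughly how big the table is
analyze scratch_facility;

-- 4. transform/normalize with one big set-based query
insert into mdx_core.facility
    (name, default_city, default_state_code, default_postal_code, default_country_code)
select name, city, state_code, postal_code, 'US'
from scratch_facility;

-- 5. drop the scratch table when done
drop table scratch_facility;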
{
"msg_contents": "> how did you determine that it is done every 500 rows? this is the\n\nThe import program pages the import table - it is currently set at 500 rows \nper page. With each page, I run an ANALYZE.\n\n> default autovacuum paramater. if you followed my earlier\n> recommendations, you are aware that autovacuum (which also analyzes)\n> is not running during bulk inserts, right?\n\nIt's intuitivly obvious, but I can't do bulk inserts. It's just not the \nnature of what we are doing with the data.\n\n> imo, best way to do big data import/conversion is to:\n> 1. turn off all extra features, like stats, logs, etc\n\ndone\n\n> 2. use copy interface to load data into scratch tables with probably\n> all text fields\n\ndone\n\n> 3. analyze (just once)\n\nI think this doesn't apply in our case, because we aren't doing bulk \ninserts.\n\n> 4. use big queries to transform, normalize, etc\n\nThis is currently being done programmatically. The nature of what we're \ndoing is suited for imperitive, navigational logic rather than declarative, \ndata set logic; just the opposite of what SQL likes, I know! If there's some \nway to replace thousands of lines of analysis and decision trees with \nultrafast queries - great...\n\n> important feature of analyze is to tell the planner approx. how big\n> the tables are.\n\nBut the tables grow as the process progresses - would you not want the \nserver to re-evaluate its strategy periodically?\n\nCarlo\n\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n\n\n",
"msg_date": "Fri, 6 Oct 2006 16:45:07 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:\n> I personally only use explicit joins when doing outer joins and even\n> them push them out as far as possible.\n\nI used to be like that too, until I actually started using join syntax.\nI now find it's *way* easier to identify what the join conditions are,\nand to seperate them from the rest of the where clause. It also makes it\npretty much impossible to mess up a join clause and get a cartesian\nproduct.\n\nIf you are going to put the join clauses in the WHERE clause, at least\nput a space between the join stuff and the rest of the WHERE clause.\n\nIn any case, this is nothing but a matter of taste in this case, unless\nyou set join_collapse_limit to less than 3 (or maybe 4).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 8 Oct 2006 15:34:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
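For what it's worth, the setting Jim mentions can be checked per session; at its default value a three-table join is planned the same way whether it is written with explicit JOINs or in the WHERE clause:

show join_collapse_limit;  -- normally 8 by default; only a value below the number of joined tables constrains the join order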
{
"msg_contents": "On Fri, Oct 06, 2006 at 02:53:35PM -0400, Merlin Moncure wrote:\n> On 10/6/06, Scott Marlowe <[email protected]> wrote:\n> >On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:\n> >> This didn't work right away, but DID work after running a VACUUM FULL. In\n> >> other words, i was still stuck with a sequential scan until after the\n> >> vacuum.\n> >>\n> >> I turned autovacuum off in order to help with the import, but was \n> >perfoming\n> >> an ANALYZE with every 500 rows imported.\n> \n> how did you determine that it is done every 500 rows? this is the\n> default autovacuum paramater. if you followed my earlier\n\nNote that that parameter doesn't mean you'll get an analyze every 500\nrows.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 8 Oct 2006 15:36:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
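To illustrate the point, autovacuum's decision to analyze is driven by a base threshold plus a fraction of the table size, not by a flat row count; a rough sketch (these parameter names exist in 8.1 and later, defaults vary by version):

-- approximate rule: analyze when rows changed > threshold + scale_factor * reltuples
show autovacuum_analyze_threshold;
show autovacuum_analyze_scale_factor;
-- e.g. with a threshold of 500 and a scale factor of 0.2, an 800,000-row table
-- is only re-analyzed after roughly 160,500 row changes, not every 500 rows.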
{
"msg_contents": "On 10/6/06, Carlo Stonebanks <[email protected]> wrote:\n> > how did you determine that it is done every 500 rows? this is the\n>\n> The import program pages the import table - it is currently set at 500 rows\n> per page. With each page, I run an ANALYZE.\n\nright, i just wanted to make sure of something (you are doing it\nproperly). really, analyze only needs to be run when tables go up an\norder of mangitude in size or so, or a little bit less...like when the\ntable grows 50% or so.\n\n> > default autovacuum paramater. if you followed my earlier\n> > recommendations, you are aware that autovacuum (which also analyzes)\n> > is not running during bulk inserts, right?\n\n> It's intuitivly obvious, but I can't do bulk inserts. It's just not the\n> nature of what we are doing with the data.\n\nright.\n\n> This is currently being done programmatically. The nature of what we're\n> doing is suited for imperitive, navigational logic rather than declarative,\n> data set logic; just the opposite of what SQL likes, I know! If there's some\n> way to replace thousands of lines of analysis and decision trees with\n> ultrafast queries - great...\n>\n> > important feature of analyze is to tell the planner approx. how big\n> > the tables are.\n>\n> But the tables grow as the process progresses - would you not want the\n> server to re-evaluate its strategy periodically?\n\nyes, but it makes the most difference when the tables are small so as\nto keep the planner from doing seqscans as they grow.\n\nwell it looks like you are on the right track, hopefully the process\nruns in an acceptable amount of time at this point.\n\nmerlin\n",
"msg_date": "Mon, 9 Oct 2006 09:49:46 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/8/06, Jim C. Nasby <[email protected]> wrote:\n> On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:\n> > I personally only use explicit joins when doing outer joins and even\n> > them push them out as far as possible.\n>\n> I used to be like that too, until I actually started using join syntax.\n> I now find it's *way* easier to identify what the join conditions are,\n> and to seperate them from the rest of the where clause. It also makes it\n> pretty much impossible to mess up a join clause and get a cartesian\n> product.\n>\n> If you are going to put the join clauses in the WHERE clause, at least\n> put a space between the join stuff and the rest of the WHERE clause.\n\nI use the row constructor to define key relationships for non trivial\nqueries i.e.\nselect foo.*, bar.* from foo f, bar b\n where (f.a, f.b, f.c) = (b.a, b.b, b.c) -- etc\n\nI am a really big fan of the row constructor, especially since we can\ndo proper key ordering in 8.2.\n\nby convention I do relating first, filtering second. for really\ncomplex queries I will inline comment each line of the where clause:\n\nwhere\n (p.a) = (pd.b) and -- match part to part description\n pd.type != 'A' -- not using archived parts\n\nas to unwanted cartesian products, I test all prodution queries in the\nshell first. The really complex ones are somewhat trial and error\nprocess after all these years :)\n\nbeing something of a mathematical guy, I love sql for its (mostly)\nfunctional nature but hate the grammar. reminds me a little bit too\nmuch of cobol. the join syntax is just too much for me, although with\nleft/right/natural joins there is no other way, and I very much agree\nwith Carlo wrt oracle's nonstandard join syntax being more elegant.\n\nmerlin\n",
"msg_date": "Mon, 9 Oct 2006 10:19:13 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "Hi Merlin,\n\nWell, I'm back. first of all, thanks for your dogged determination to help \nme out - it is much appreciated. I owe you a beer or twelve.\n\nThe import has been running for a week. The import program got faster as I \ntuned things. I capture the dynamic SQL statements generated by the app, as \nwell as an accompanying EXPLAIN - and put it out to an XML file. I turned \noff seq scan in the config, and ran a trial import. I knew that with seq \nscan off that if I saw a seq scan in my log, it's because there were no \nindexes available to satisfy the query - I adjusted accordingly and this \nworked really well.\n\nWhen the import runs against an empty or small db, it's blisteringly fast \n(considering that it's a heauristically based process). This proved that it \nwasn't the app or the SQL connection that was slow. Once again, though, as \nthe data db grows, it slows down. Now it's crawling again. All of the \nqueries appear to be fine, taking advantage of the indexes. There is ONE \nquery, though, that seems to be the troublemaker - the same one I had \nbrought up before. I believe that it is one sub-query that is causing the \nproblem, taking what appears to be 500 to 1000+ms to run every time. (See \nbelow).\n\nCuriously, it's using index scans, and it really looks like a simple query \nto me. I am completely baffled. The two tables in question have about 800K \nrows each - not exactly an incredible number. The EXPLAIN is simple, but the \nperformance is dreadful. All the other queries run much faster than this - \ndoes ANYTHING about this query strike you as odd?\n\nCarlo\n\n/*\nFind all facilities that do not have full address information\nbut do have default location information that indicates\nits the facilitiy's US zip code.\nNULL values cast as columns are placeholders to allow\nthis sub-query to be unioned with another subquery\nthat contains full address data\n*/\nselect\n f.facility_id,\n null as facility_address_id,\n null as address_id,\n f.facility_type_code,\n f.name,\n null as address,\n f.default_city as city,\n f.default_state_code as state_code,\n f.default_postal_code as postal_code,\n f.default_country_code as country_code,\n null as parsed_unit\nfrom\n mdx_core.facility as f\nleft outer join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\nwhere\n facility_address_id is null\n and f.default_country_code = 'US'\n and (f.default_postal_code = '14224-1945' or f.default_postal_code = \n'14224')\n\n\"Nested Loop Left Join (cost=22966.70..23594.84 rows=93 width=71) (actual \ntime=662.075..662.075 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Bitmap Heap Scan on facility f (cost=22966.70..23231.79 rows=93 \nwidth=71) (actual time=661.907..661.929 rows=7 loops=1)\"\n\" Recheck Cond: (((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text)) OR \n((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text = \n'14224'::text)))\"\n\" -> BitmapOr (cost=22966.70..22966.70 rows=93 width=0) (actual \ntime=661.891..661.891 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on \nfacility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47 \nwidth=0) (actual time=374.284..374.284 rows=7 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text))\"\n\" -> Bitmap Index Scan on \nfacility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47 \nwidth=0) (actual time=287.599..287.599 rows=0 
loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224'::text))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx \non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \ntime=0.014..0.016 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 662.203 ms\"\n> \n\n\n",
"msg_date": "Sun, 15 Oct 2006 17:46:13 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> Curiously, it's using index scans, and it really looks like a simple query \n> to me. I am completely baffled. The two tables in question have about 800K \n> rows each - not exactly an incredible number. The EXPLAIN is simple, but the \n> performance is dreadful. All the other queries run much faster than this - \n> does ANYTHING about this query strike you as odd?\n\nLots of dead rows perhaps? The EXPLAIN estimates look a bit out of line\n--- 11483 cost units to fetch 47 index entries is an order or two of\nmagnitude higher than it ought to be. The real time also seems to be\nconcentrated in that index scan. What are the physical sizes of the\ntable and index? (VACUUM VERBOSE output for the facility table might\ntell something.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2006 18:27:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL "
},
{
"msg_contents": "Hey Tom, thanks for jumping in. Nothing on TV on a Sunday afternoon? ;-) \nAppreciate teh input.\n\nHere is vacuum verbose output for both the tables in question.\n\nCarlo\n\n\nINFO: vacuuming \"mdx_core.facility\"\nINFO: index \"facility_pkey\" now contains 832399 row versions in 3179 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.09s/0.04u sec elapsed 0.21 sec.\nINFO: index \"facility_country_state_city_idx\" now contains 832444 row \nversions in 6630 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.15s/0.07u sec elapsed 43.81 sec.\nINFO: index \"facility_country_state_postal_code_idx\" now contains 832499 \nrow versions in 6658 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.23s/0.07u sec elapsed 0.37 sec.\nINFO: \"facility\": found 0 removable, 832398 nonremovable row versions in \n15029 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.67s/0.32u sec elapsed 44.71 sec.\nINFO: vacuuming \"pg_toast.pg_toast_58570311\"\nINFO: index \"pg_toast_58570311_index\" now contains 0 row versions in 1 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_58570311\": found 0 removable, 0 nonremovable row versions \nin 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nQuery returned successfully with no result in 44875 ms.\n\nINFO: vacuuming \"mdx_core.facility_address\"\nINFO: index \"facility_address_pkey\" now contains 772770 row versions in \n2951 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.10s/0.04u sec elapsed 9.73 sec.\nINFO: index \"facility_address_address_idx\" now contains 772771 row versions \nin 2750 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.04u sec elapsed 0.34 sec.\nINFO: index \"facility_address_facility_address_address_type_idx\" now \ncontains 772773 row versions in 3154 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.04u sec elapsed 0.06 sec.\nINFO: \"facility_address\": found 0 removable, 772747 nonremovable row \nversions in 7969 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.39s/0.18u sec elapsed 10.70 sec.\n\nQuery returned successfully with no result in 10765 ms.\n\n\n\n\n\"Tom Lane\" <[email protected]> wrote in message \nnews:[email protected]...\n> \"Carlo Stonebanks\" <[email protected]> writes:\n>> Curiously, it's using index scans, and it really looks like a simple \n>> query\n>> to me. I am completely baffled. The two tables in question have about \n>> 800K\n>> rows each - not exactly an incredible number. The EXPLAIN is simple, but \n>> the\n>> performance is dreadful. All the other queries run much faster than \n>> this -\n>> does ANYTHING about this query strike you as odd?\n>\n> Lots of dead rows perhaps? The EXPLAIN estimates look a bit out of line\n> --- 11483 cost units to fetch 47 index entries is an order or two of\n> magnitude higher than it ought to be. The real time also seems to be\n> concentrated in that index scan. What are the physical sizes of the\n> table and index? 
(VACUUM VERBOSE output for the facility table might\n> tell something.)\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Sun, 15 Oct 2006 18:48:28 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On 10/15/06, Carlo Stonebanks <[email protected]> wrote:\n> Hi Merlin,\n>\n> Well, I'm back. first of all, thanks for your dogged determination to help\n> me out - it is much appreciated. I owe you a beer or twelve.\n>\n> The import has been running for a week. The import program got faster as I\n> tuned things. I capture the dynamic SQL statements generated by the app, as\n> well as an accompanying EXPLAIN - and put it out to an XML file. I turned\n> off seq scan in the config, and ran a trial import. I knew that with seq\n> scan off that if I saw a seq scan in my log, it's because there were no\n> indexes available to satisfy the query - I adjusted accordingly and this\n> worked really well.\n>\n> When the import runs against an empty or small db, it's blisteringly fast\n> (considering that it's a heauristically based process). This proved that it\n> wasn't the app or the SQL connection that was slow. Once again, though, as\n> the data db grows, it slows down. Now it's crawling again. All of the\n> queries appear to be fine, taking advantage of the indexes. There is ONE\n> query, though, that seems to be the troublemaker - the same one I had\n> brought up before. I believe that it is one sub-query that is causing the\n> problem, taking what appears to be 500 to 1000+ms to run every time. (See\n> below).\n>\n> Curiously, it's using index scans, and it really looks like a simple query\n> to me. I am completely baffled. The two tables in question have about 800K\n> rows each - not exactly an incredible number. The EXPLAIN is simple, but the\n> performance is dreadful. All the other queries run much faster than this -\n> does ANYTHING about this query strike you as odd?\n\n\nCan you try temporarily disabling bitmap scans and see what comes up?\n\nmerlin\n",
"msg_date": "Mon, 16 Oct 2006 09:38:33 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
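The setting Merlin is asking about can be toggled for the current session only; a minimal sketch:

set enable_bitmapscan = off;
-- rerun the query under explain analyze, then restore the default
reset enable_bitmapscan;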
{
"msg_contents": "> Can you try temporarily disabling bitmap scans and see what comes up?\n\nWell, that's slowing everything down. I've got a couple of results, below\n\n1) Bitmap scan off, but seq scan enabled.\n2) Bitmap scan and seq scan off\n3) Bitmap scan back on, seq scan back on, and a new index created\n4) VACUUM VERBOSE on the tables involved\n5) Original SQL with original EXPLAIN to show the code that started this.\n\nCarlo\n\n1) Bitmap scan off, but seq scan enabled. It created a suprisingly expensive \nseq scan.\n\n\"Nested Loop Left Join (cost=0.00..34572.43 rows=109 width=71) (actual \ntime=1536.827..1536.827 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Seq Scan on facility f (cost=0.00..34146.91 rows=109 width=71) \n(actual time=621.100..1536.606 rows=7 loops=1)\"\n\" Filter: ((default_country_code = 'US'::bpchar) AND \n(((default_postal_code)::text = '14224-1945'::text) OR \n((default_postal_code)::text = '14224'::text)))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx \non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \ntime=0.020..0.023 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 1536.957 ms\"\n\n2) So I turned both bitmap scan and seq scan off - now we get index scans, \nthe performance is suprisingly horrible:\n\n\"Nested Loop Left Join (cost=0.00..39286.55 rows=109 width=71) (actual \ntime=3598.462..3598.462 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Index Scan using facility_pkey on facility f (cost=0.00..38861.03 \nrows=109 width=71) (actual time=1500.690..3598.201 rows=7 loops=1)\"\n\" Filter: ((default_country_code = 'US'::bpchar) AND \n(((default_postal_code)::text = '14224-1945'::text) OR \n((default_postal_code)::text = '14224'::text)))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx \non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \ntime=0.024..0.027 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 3598.600 ms\"\n\n3) So I turned bitmap scan back on, seq scan back on, and created an index \nto EXPLICITLY to satisfy this condition. Iintuitivly, I thought that \ncombinations of other indexes should have satisfied the optimizer, but \nfigured better overkill than nothing. I thought this would solve it - but \nno. 
We is using a BRAND NEW INDEX which is unlikely to be corrupt so \nexpensive?\n\n\"Nested Loop Left Join (cost=25300.96..26043.67 rows=110 width=71) (actual \ntime=1339.216..1339.216 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Bitmap Heap Scan on facility f (cost=25300.96..25614.42 rows=110 \nwidth=71) (actual time=1339.043..1339.066 rows=7 loops=1)\"\n\" Recheck Cond: (((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text)) OR \n((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text = \n'14224'::text)))\"\n\" -> BitmapOr (cost=25300.96..25300.96 rows=110 width=0) (actual \ntime=1339.027..1339.027 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on \nfacility_facility_country_state_postal_code_idx (cost=0.00..12650.48 \nrows=55 width=0) (actual time=796.146..796.146 rows=7 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text))\"\n\" -> Bitmap Index Scan on \nfacility_facility_country_state_postal_code_idx (cost=0.00..12650.48 \nrows=55 width=0) (actual time=542.873..542.873 rows=0 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224'::text))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx \non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \ntime=0.014..0.016 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 1339.354 ms\"\n\n4) VACUUM VERBOSE on the tables involved. Note how much more painful in \nelapsed time it is to vacuum facility vs facility_address, even though the \nnumber of rows is comparable:\n\nINFO: vacuuming \"mdx_core.facility\"\nINFO: index \"facility_pkey\" now contains 964123 row versions in 3682 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.03u sec elapsed 0.18 sec.\nINFO: index \"facility_country_state_city_idx\" now contains 964188 row \nversions in 7664 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.25s/0.17u sec elapsed 84.14 sec.\nINFO: index \"facility_country_state_postal_code_idx\" now contains 964412 \nrow versions in 7689 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.42s/0.10u sec elapsed 137.12 sec.\nINFO: index \"facility_facility_country_state_city_idx\" now contains 964493 \nrow versions in 6420 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.17s/0.09u sec elapsed 2.23 sec.\nINFO: index \"facility_facility_country_state_postal_code_idx\" now contains \n964494 row versions in 6895 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.01u sec elapsed 0.95 sec.\nINFO: \"facility\": found 0 removable, 964123 nonremovable row versions in \n17398 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.90s/0.57u sec elapsed 224.80 sec.\nINFO: vacuuming \"pg_toast.pg_toast_58570311\"\nINFO: index \"pg_toast_58570311_index\" now contains 0 row versions in 1 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"pg_toast_58570311\": found 0 removable, 0 nonremovable row versions \nin 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\n\nQuery 
returned successfully with no result in 224903 ms.\n\nINFO: vacuuming \"mdx_core.facility_address\"\nINFO: index \"facility_address_pkey\" now contains 893157 row versions in \n3411 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.17s/0.04u sec elapsed 11.10 sec.\nINFO: index \"facility_address_address_idx\" now contains 893157 row versions \nin 3164 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.04u sec elapsed 0.61 sec.\nINFO: index \"facility_address_facility_address_address_type_idx\" now \ncontains 893157 row versions in 3797 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.07 sec.\nINFO: \"facility_address\": found 0 removable, 893139 nonremovable row \nversions in 9210 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.26s/0.15u sec elapsed 12.14 sec.\n\nQuery returned successfully with no result in 12297 ms.\n\n\n\n\n\n5) Here is the original query, plus original explain analyze. Number of rows \nhave increased since this was run, so the costs are lower, but still \nsignificant:\n\n/*\nFind all facilities that do not have full address information\nbut do have default location information that indicates\nits the facilitiy's US zip code.\nNULL values cast as columns are placeholders to allow\nthis sub-query to be unioned with another subquery\nthat contains full address data\n*/\nselect\n f.facility_id,\n null as facility_address_id,\n null as address_id,\n f.facility_type_code,\n f.name,\n null as address,\n f.default_city as city,\n f.default_state_code as state_code,\n f.default_postal_code as postal_code,\n f.default_country_code as country_code,\n null as parsed_unit\nfrom\n mdx_core.facility as f\nleft outer join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\nwhere\n facility_address_id is null\n and f.default_country_code = 'US'\n and (f.default_postal_code = '14224-1945' or f.default_postal_code =\n'14224')\n\n\"Nested Loop Left Join (cost=22966.70..23594.84 rows=93 width=71) (actual\ntime=662.075..662.075 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Bitmap Heap Scan on facility f (cost=22966.70..23231.79 rows=93\nwidth=71) (actual time=661.907..661.929 rows=7 loops=1)\"\n\" Recheck Cond: (((default_country_code = 'US'::bpchar) AND\n((default_postal_code)::text = '14224-1945'::text)) OR\n((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =\n'14224'::text)))\"\n\" -> BitmapOr (cost=22966.70..22966.70 rows=93 width=0) (actual\ntime=661.891..661.891 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on\nfacility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47\nwidth=0) (actual time=374.284..374.284 rows=7 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND\n((default_postal_code)::text = '14224-1945'::text))\"\n\" -> Bitmap Index Scan on\nfacility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47\nwidth=0) (actual time=287.599..287.599 rows=0 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND\n((default_postal_code)::text = '14224'::text))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx\non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual\ntime=0.014..0.016 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 662.203 ms\"\n>\n\n\n\n",
"msg_date": "Mon, 16 Oct 2006 13:33:28 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "I think there's 2 things that would help this case. First, partition on\ncountry. You can either do this on a table level or on an index level\nby putting where clauses on the indexes (index method would be the\nfastest one to test, since it's just new indexes). That should shrink\nthe size of that index noticably.\n\nThe other thing is to try and get the planner to not double-scan the\nindex. If you add the following, I think it will scan the index once for\nthe LIKE, and then just filter whatever it finds to match the other\nconditions.\n\n and f.default_postal_code LIKE '14224%'\n\nOn Mon, Oct 16, 2006 at 01:33:28PM -0400, Carlo Stonebanks wrote:\n> INFO: vacuuming \"mdx_core.facility\"\n> INFO: index \"facility_pkey\" now contains 964123 row versions in 3682 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.03s/0.03u sec elapsed 0.18 sec.\n> INFO: index \"facility_country_state_city_idx\" now contains 964188 row \n> versions in 7664 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.25s/0.17u sec elapsed 84.14 sec.\n> INFO: index \"facility_country_state_postal_code_idx\" now contains 964412 \n> row versions in 7689 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.42s/0.10u sec elapsed 137.12 sec.\n> INFO: index \"facility_facility_country_state_city_idx\" now contains 964493 \n> row versions in 6420 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.17s/0.09u sec elapsed 2.23 sec.\n> INFO: index \"facility_facility_country_state_postal_code_idx\" now contains \n> 964494 row versions in 6895 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.01s/0.01u sec elapsed 0.95 sec.\n> INFO: \"facility\": found 0 removable, 964123 nonremovable row versions in \n> 17398 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.90s/0.57u sec elapsed 224.80 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_58570311\"\n> INFO: index \"pg_toast_58570311_index\" now contains 0 row versions in 1 \n> pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.01 sec.\n> INFO: \"pg_toast_58570311\": found 0 removable, 0 nonremovable row versions \n> in 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.01 sec.\n> \n> Query returned successfully with no result in 224903 ms.\n> \n> INFO: vacuuming \"mdx_core.facility_address\"\n> INFO: index \"facility_address_pkey\" now contains 893157 row versions in \n> 3411 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.17s/0.04u sec elapsed 11.10 sec.\n> INFO: index \"facility_address_address_idx\" now contains 893157 row versions \n> in 3164 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.07s/0.04u sec elapsed 0.61 sec.\n> INFO: index \"facility_address_facility_address_address_type_idx\" now \n> contains 893157 row versions in 3797 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.01s/0.00u sec elapsed 0.07 sec.\n> INFO: \"facility_address\": found 0 removable, 893139 nonremovable row \n> versions in 9210 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 
0.26s/0.15u sec elapsed 12.14 sec.\n> \n> Query returned successfully with no result in 12297 ms.\n> \n> \n> \n> \n> \n> 5) Here is the original query, plus original explain analyze. Number of rows \n> have increased since this was run, so the costs are lower, but still \n> significant:\n> \n> /*\n> Find all facilities that do not have full address information\n> but do have default location information that indicates\n> its the facilitiy's US zip code.\n> NULL values cast as columns are placeholders to allow\n> this sub-query to be unioned with another subquery\n> that contains full address data\n> */\n> select\n> f.facility_id,\n> null as facility_address_id,\n> null as address_id,\n> f.facility_type_code,\n> f.name,\n> null as address,\n> f.default_city as city,\n> f.default_state_code as state_code,\n> f.default_postal_code as postal_code,\n> f.default_country_code as country_code,\n> null as parsed_unit\n> from\n> mdx_core.facility as f\n> left outer join mdx_core.facility_address as fa\n> on fa.facility_id = f.facility_id\n> where\n> facility_address_id is null\n> and f.default_country_code = 'US'\n> and (f.default_postal_code = '14224-1945' or f.default_postal_code =\n> '14224')\n> \n> \"Nested Loop Left Join (cost=22966.70..23594.84 rows=93 width=71) (actual\n> time=662.075..662.075 rows=0 loops=1)\"\n> \" Filter: (\"inner\".facility_address_id IS NULL)\"\n> \" -> Bitmap Heap Scan on facility f (cost=22966.70..23231.79 rows=93\n> width=71) (actual time=661.907..661.929 rows=7 loops=1)\"\n> \" Recheck Cond: (((default_country_code = 'US'::bpchar) AND\n> ((default_postal_code)::text = '14224-1945'::text)) OR\n> ((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =\n> '14224'::text)))\"\n> \" -> BitmapOr (cost=22966.70..22966.70 rows=93 width=0) (actual\n> time=661.891..661.891 rows=0 loops=1)\"\n> \" -> Bitmap Index Scan on\n> facility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47\n> width=0) (actual time=374.284..374.284 rows=7 loops=1)\"\n> \" Index Cond: ((default_country_code = 'US'::bpchar) AND\n> ((default_postal_code)::text = '14224-1945'::text))\"\n> \" -> Bitmap Index Scan on\n> facility_country_state_postal_code_idx (cost=0.00..11483.35 rows=47\n> width=0) (actual time=287.599..287.599 rows=0 loops=1)\"\n> \" Index Cond: ((default_country_code = 'US'::bpchar) AND\n> ((default_postal_code)::text = '14224'::text))\"\n> \" -> Index Scan using facility_address_facility_address_address_type_idx\n> on facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual\n> time=0.014..0.016 rows=1 loops=7)\"\n> \" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n> \"Total runtime: 662.203 ms\"\n> >\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 16 Oct 2006 15:45:51 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
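A minimal sketch of the two suggestions above, i.e. a partial index limited to US rows plus a prefix LIKE that a single index scan can satisfy; the index name is invented and text_pattern_ops is assumed to be needed because of a non-C locale:

create index facility_us_postal_code_idx
    on mdx_core.facility (default_postal_code text_pattern_ops)
    where default_country_code = 'US';

-- then, as suggested, filter with a prefix match:
--   and f.default_postal_code like '14224%'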
{
"msg_contents": "On 10/15/06, Carlo Stonebanks <[email protected]> wrote:\n> that contains full address data\n> */\n> select\n> f.facility_id,\n> null as facility_address_id,\n> null as address_id,\n> f.facility_type_code,\n> f.name,\n> null as address,\n> f.default_city as city,\n> f.default_state_code as state_code,\n> f.default_postal_code as postal_code,\n> f.default_country_code as country_code,\n> null as parsed_unit\n> from\n> mdx_core.facility as f\n> left outer join mdx_core.facility_address as fa\n> on fa.facility_id = f.facility_id\n> where\n> facility_address_id is null\n> and f.default_country_code = 'US'\n> and (f.default_postal_code = '14224-1945' or f.default_postal_code =\n> '14224')\n\nwhat is the facility_address_id is null all about? remove it since you\nhardcode it to true in select.\n\nyou have a two part part key on facility(country code, postal code), right?\n\nmerlin\n",
"msg_date": "Mon, 16 Oct 2006 17:07:11 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
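A minimal sketch of the two-part key Merlin is asking about, using the column names that appear in the query above. The index name is invented here, and the column order is only a guess that should be checked against the real workload:

-- Hypothetical name; a composite key on (country code, postal code) lets
-- the planner answer both equality tests with a single index scan.
CREATE INDEX facility_country_postal_idx
    ON mdx_core.facility (default_country_code, default_postal_code);

-- Refresh statistics so the planner takes the new index into account.
ANALYZE mdx_core.facility;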
{
"msg_contents": "> what is the facility_address_id is null all about? remove it since you\n> hardcode it to true in select.\n\nThe facility_address_id is null statement is necessary, as this is a \nsub-query from a union clause and I want to optimise the query with the \noriginal logic intact. The value is not hard coded to true but rather to \nnull. Admittedly, it's redundant but I put it there to make sure that I \nmatched up the columns from the other select in the union clause.\n\n> you have a two part part key on facility(country code, postal code), \n> right?\n\nThe indexes and constrains are below. If you see redundancy, this was from \nvain attempts to please the optimiser gods.\n\nCarlo\n\nALTER TABLE mdx_core.facility\n ADD CONSTRAINT facility_pkey PRIMARY KEY(facility_id);\n\nCREATE INDEX facility_country_state_city_idx\n ON mdx_core.facility\n USING btree\n (default_country_code, default_state_code, lower(default_city::text));\n\nCREATE INDEX facility_country_state_postal_code_idx\n ON mdx_core.facility\n USING btree\n (default_country_code, default_state_code, default_postal_code);\n\nCREATE INDEX facility_facility_country_state_city_idx\n ON mdx_core.facility\n USING btree\n (facility_id, default_country_code, default_state_code, \nlower(default_city::text));\n\nCREATE INDEX facility_facility_country_state_postal_code_idx\n ON mdx_core.facility\n USING btree\n (facility_id, default_country_code, default_state_code, \ndefault_postal_code);\n\n\n\"\"Merlin Moncure\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On 10/15/06, Carlo Stonebanks <[email protected]> wrote:\n>> that contains full address data\n>> */\n>> select\n>> f.facility_id,\n>> null as facility_address_id,\n>> null as address_id,\n>> f.facility_type_code,\n>> f.name,\n>> null as address,\n>> f.default_city as city,\n>> f.default_state_code as state_code,\n>> f.default_postal_code as postal_code,\n>> f.default_country_code as country_code,\n>> null as parsed_unit\n>> from\n>> mdx_core.facility as f\n>> left outer join mdx_core.facility_address as fa\n>> on fa.facility_id = f.facility_id\n>> where\n>> facility_address_id is null\n>> and f.default_country_code = 'US'\n>> and (f.default_postal_code = '14224-1945' or f.default_postal_code =\n>> '14224')\n>\n> what is the facility_address_id is null all about? remove it since you\n> hardcode it to true in select.\n>\n> you have a two part part key on facility(country code, postal code), \n> right?\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Mon, 16 Oct 2006 17:37:41 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": ">I think there's 2 things that would help this case. First, partition on\n> country. You can either do this on a table level or on an index level\n> by putting where clauses on the indexes (index method would be the\n> fastest one to test, since it's just new indexes). That should shrink\n> the size of that index noticably.\n\nI'm afraid I don't quite understand this, or how to 'partition' this at a \ntable level. Right now, the table consists of ONLY US addresses, so I don't \nknow if I would expect a performance improvement in changing the table or \nthe indexes as the indexes would not reduce anything.>\n> The other thing is to try and get the planner to not double-scan the\n> index. If you add the following, I think it will scan the index once for\n> the LIKE, and then just filter whatever it finds to match the other\n> conditions.\n>\n> and f.default_postal_code LIKE '14224%'\n\nI did try this - nothing signoificant came from the results (see below)\n\nthanks,\n\nCarlo\n\nexplain analyze select\n f.facility_id,\n null as facility_address_id,\n null as address_id,\n f.facility_type_code,\n f.name,\n null as address,\n f.default_city as city,\n f.default_state_code as state_code,\n f.default_postal_code as postal_code,\n f.default_country_code as country_code,\n null as parsed_unit\nfrom\n mdx_core.facility as f\nleft outer join mdx_core.facility_address as fa\n on fa.facility_id = f.facility_id\nwhere\n facility_address_id is null\n and f.default_country_code = 'US'\n and f.default_postal_code like '14224%'\n and (f.default_postal_code = '14224-1945' or f.default_postal_code = \n'14224')\n\n\"Nested Loop Left Join (cost=26155.38..26481.58 rows=1 width=71) (actual \ntime=554.138..554.138 rows=0 loops=1)\"\n\" Filter: (\"inner\".facility_address_id IS NULL)\"\n\" -> Bitmap Heap Scan on facility f (cost=26155.38..26477.68 rows=1 \nwidth=71) (actual time=554.005..554.025 rows=7 loops=1)\"\n\" Recheck Cond: (((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text)) OR \n((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text = \n'14224'::text)))\"\n\" Filter: ((default_postal_code)::text ~~ '14224%'::text)\"\n\" -> BitmapOr (cost=26155.38..26155.38 rows=113 width=0) (actual \ntime=553.983..553.983 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on \nfacility_facility_country_state_postal_code_idx (cost=0.00..13077.69 \nrows=57 width=0) (actual time=313.156..313.156 rows=7 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224-1945'::text))\"\n\" -> Bitmap Index Scan on \nfacility_facility_country_state_postal_code_idx (cost=0.00..13077.69 \nrows=57 width=0) (actual time=240.819..240.819 rows=0 loops=1)\"\n\" Index Cond: ((default_country_code = 'US'::bpchar) AND \n((default_postal_code)::text = '14224'::text))\"\n\" -> Index Scan using facility_address_facility_address_address_type_idx \non facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \ntime=0.010..0.012 rows=1 loops=7)\"\n\" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n\"Total runtime: 554.243 ms\"\n\n\n",
"msg_date": "Mon, 16 Oct 2006 17:56:54 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
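For reference, a sketch of the "where clause on the index" idea quoted above, against the same table; the index name is made up. Note that the planner only considers a partial index like this when the query repeats the matching default_country_code = 'US' condition:

-- Partial index: only US rows are indexed, so the country column can be
-- left out of the key and the index stays considerably smaller.
CREATE INDEX facility_us_postal_code_idx
    ON mdx_core.facility (default_postal_code)
    WHERE default_country_code = 'US';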
{
"msg_contents": "On Monday 16 October 2006 16:37, Carlo Stonebanks wrote:\n\n> The facility_address_id is null statement is necessary, as this is a \n> sub-query from a union clause and I want to optimise the query with\n> the original logic intact. The value is not hard coded to true but\n> rather to null.\n\nHeh, you neglect to mention that this query is discovering faculty who \ndo *not* have an address entry, which makes the \"is null\" a major \nnecessity. With that, how did a \"not exists (blabla faculty_address \nblabla)\" subquery to get the same effect treat you? How about an \"IN \n(blabla LIMIT 1)\" ?\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n",
"msg_date": "Mon, 16 Oct 2006 17:28:42 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
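A hedged sketch of the NOT EXISTS form Shaun is suggesting, written against the tables from earlier in the thread and untested here; the select list is abbreviated to the key column:

SELECT f.facility_id  -- plus the other columns from the original select list
FROM mdx_core.facility AS f
WHERE f.default_country_code = 'US'
  AND (f.default_postal_code = '14224-1945' OR f.default_postal_code = '14224')
  AND NOT EXISTS (
        SELECT 1
        FROM mdx_core.facility_address AS fa
        WHERE fa.facility_id = f.facility_id
      );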
{
"msg_contents": "Sorry, I didn'tpoint it out because an earlier post included the query with \ndocumentation - that post got lost... or at least *I* can't see it.\n\nThe other half of the union renders the facilities that DO have addresses, \nand because of the performance problem (which I have finally sorted out by \ncreating indexes which are more explicit - my oversight, really!)\n\nThe original query was a slightly more complex outer join, which I then \ndecomposed to an explicit union with two halves - one half handling the \nexplicit \"facility_address_id is null\" portion, the other half handling the \n\"is not null\" portion (implicitly because of the normal join between \nfacility and facility_address).\n\nI hadn't considered the \"not exists\" option - it's obvious when you look at \nthe sub-query by itself, but didn't strike me before I broke it out of the \nunion and you mentioned it. I was just under th eimpression that getting \nthis sub-query to work would have produced the most clear, straightforward \nANALYZE results.\n\nCarlo\n\n\"Shaun Thomas\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Monday 16 October 2006 16:37, Carlo Stonebanks wrote:\n>\n>> The facility_address_id is null statement is necessary, as this is a\n>> sub-query from a union clause and I want to optimise the query with\n>> the original logic intact. The value is not hard coded to true but\n>> rather to null.\n>\n> Heh, you neglect to mention that this query is discovering faculty who\n> do *not* have an address entry, which makes the \"is null\" a major\n> necessity. With that, how did a \"not exists (blabla faculty_address\n> blabla)\" subquery to get the same effect treat you? How about an \"IN\n> (blabla LIMIT 1)\" ?\n>\n> -- \n>\n> Shaun Thomas\n> Database Administrator\n>\n> Leapfrog Online\n> 807 Greenwood Street\n> Evanston, IL 60201\n> Tel. 847-440-8253\n> Fax. 847-570-5750\n> www.leapfrogonline.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Tue, 17 Oct 2006 02:39:24 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "> you have a two part part key on facility(country code, postal code), \n> right?\n\nWell, I'm glad you pointed it out, because I THOUGhT I had created it, but \napparently I haven't -- I only noticed that it was missing after I listed \nall the other indexes. Looks like this query is one of the victims of a db \nstructure corruption I suffered when transferring the schema over from \ndevelopment into production.\n\n(Well, that's my excuse and I'm sticking to it!)\n\nThanks for all the help - I've reduced the execution time to 1/10 of its \noriginal time.\n\nCarlo \n\n\n",
"msg_date": "Tue, 17 Oct 2006 02:43:57 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
{
"msg_contents": "On Mon, Oct 16, 2006 at 05:56:54PM -0400, Carlo Stonebanks wrote:\n> >I think there's 2 things that would help this case. First, partition on\n> > country. You can either do this on a table level or on an index level\n> > by putting where clauses on the indexes (index method would be the\n> > fastest one to test, since it's just new indexes). That should shrink\n> > the size of that index noticably.\n> \n> I'm afraid I don't quite understand this, or how to 'partition' this at a \n> table level. Right now, the table consists of ONLY US addresses, so I don't \n> know if I would expect a performance improvement in changing the table or \n> the indexes as the indexes would not reduce anything.>\n\nIt will help because you can then drop country_code from the index,\nmaking it smaller.\n\n> > The other thing is to try and get the planner to not double-scan the\n> > index. If you add the following, I think it will scan the index once for\n> > the LIKE, and then just filter whatever it finds to match the other\n> > conditions.\n> >\n> > and f.default_postal_code LIKE '14224%'\n \nHrm... well, first step would be to drop the explicit postal code tests\njust to validate that it's faster to do the LIKE than it is to do the\ntwo explicit tests. If that proves to be the case, you can wrap that in\na subquery, and put the final where clause in the outer part of the\nquery. You'll probably have to use the OFFSET 0 hack, too.\n\n> I did try this - nothing signoificant came from the results (see below)\n> \n> thanks,\n> \n> Carlo\n> \n> explain analyze select\n> f.facility_id,\n> null as facility_address_id,\n> null as address_id,\n> f.facility_type_code,\n> f.name,\n> null as address,\n> f.default_city as city,\n> f.default_state_code as state_code,\n> f.default_postal_code as postal_code,\n> f.default_country_code as country_code,\n> null as parsed_unit\n> from\n> mdx_core.facility as f\n> left outer join mdx_core.facility_address as fa\n> on fa.facility_id = f.facility_id\n> where\n> facility_address_id is null\n> and f.default_country_code = 'US'\n> and f.default_postal_code like '14224%'\n> and (f.default_postal_code = '14224-1945' or f.default_postal_code = \n> '14224')\n> \n> \"Nested Loop Left Join (cost=26155.38..26481.58 rows=1 width=71) (actual \n> time=554.138..554.138 rows=0 loops=1)\"\n> \" Filter: (\"inner\".facility_address_id IS NULL)\"\n> \" -> Bitmap Heap Scan on facility f (cost=26155.38..26477.68 rows=1 \n> width=71) (actual time=554.005..554.025 rows=7 loops=1)\"\n> \" Recheck Cond: (((default_country_code = 'US'::bpchar) AND \n> ((default_postal_code)::text = '14224-1945'::text)) OR \n> ((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text = \n> '14224'::text)))\"\n> \" Filter: ((default_postal_code)::text ~~ '14224%'::text)\"\n> \" -> BitmapOr (cost=26155.38..26155.38 rows=113 width=0) (actual \n> time=553.983..553.983 rows=0 loops=1)\"\n> \" -> Bitmap Index Scan on \n> facility_facility_country_state_postal_code_idx (cost=0.00..13077.69 \n> rows=57 width=0) (actual time=313.156..313.156 rows=7 loops=1)\"\n> \" Index Cond: ((default_country_code = 'US'::bpchar) AND \n> ((default_postal_code)::text = '14224-1945'::text))\"\n> \" -> Bitmap Index Scan on \n> facility_facility_country_state_postal_code_idx (cost=0.00..13077.69 \n> rows=57 width=0) (actual time=240.819..240.819 rows=0 loops=1)\"\n> \" Index Cond: ((default_country_code = 'US'::bpchar) AND \n> ((default_postal_code)::text = '14224'::text))\"\n> \" -> Index Scan using 
facility_address_facility_address_address_type_idx \n> on facility_address fa (cost=0.00..3.89 rows=1 width=8) (actual \n> time=0.010..0.012 rows=1 loops=7)\"\n> \" Index Cond: (fa.facility_id = \"outer\".facility_id)\"\n> \"Total runtime: 554.243 ms\"\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 14:01:18 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Optimization for Dummies 2 - the SQL"
},
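A rough illustration of the subquery-plus-OFFSET 0 shape Jim describes; in PostgreSQL of this vintage OFFSET 0 acts as an optimization fence, so the LIKE is evaluated first and the exact matches are filtered afterwards. The column list is abbreviated and the anti-join against facility_address is left out, so this is a sketch rather than a drop-in replacement:

SELECT *
FROM (
    SELECT f.facility_id, f.default_postal_code   -- plus the other columns
    FROM mdx_core.facility AS f
    WHERE f.default_country_code = 'US'
      AND f.default_postal_code LIKE '14224%'
    OFFSET 0   -- keeps the planner from flattening the subquery
) AS sub
WHERE sub.default_postal_code IN ('14224-1945', '14224');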
{
"msg_contents": "I have a question about index growth.\n\nThe way I understand it, dead tuples in indexes were not reclaimed by \nVACUUM commands in the past. However, I've read in a few forum posts \nthat this was changed somewhere between 7.4 and 8.0.\n\nI'm having an issue where my GIST indexes are growing quite large, and \nrunning a VACUUM doesn't appear to remove the dead tuples. For example, \nif I check out the size an index before running any VACUUM :\n\nselect pg_relation_size('asset_positions_position_idx');\n pg_relation_size\n------------------\n 11624448\n(1 row)\n\nThe size is about 11Mb. If I run a VACUUM command in verbose, I see \nthis about the index:\n\nINFO: index \"asset_positions_position_idx\" now contains 4373 row \nversions in 68 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.16 sec.\n\nWhen I run the same command to find the size after the VACUUM, it hasn't \nchanged. However, if I drop and then recreate this index, the size \nbecomes much smaller (almost half the size):\n\ndrop index asset_positions_position_idx;\nDROP INDEX\n\nCREATE INDEX asset_positions_position_idx ON asset_positions USING GIST \n(position GIST_GEOMETRY_OPS);\nCREATE INDEX\n\nselect pg_relation_size('asset_positions_position_idx');\n pg_relation_size\n------------------\n 6225920\n(1 row)\n\nIs there something I am missing here, or is the reclaiming of dead \ntuples for these indexes just not working when I run a VACUUM? Is it \nsuppose to work?\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Wed, 18 Oct 2006 15:20:19 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "index growth problem"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:\n> I have a question about index growth.\n> \n> The way I understand it, dead tuples in indexes were not reclaimed by \n> VACUUM commands in the past. However, I've read in a few forum posts \n> that this was changed somewhere between 7.4 and 8.0.\n \nThere was a change to indexes that made vacuum more effective; I don't\nremember the details off-hand.\n\n> I'm having an issue where my GIST indexes are growing quite large, and \n> running a VACUUM doesn't appear to remove the dead tuples. For example, \n> if I check out the size an index before running any VACUUM :\n> \n> select pg_relation_size('asset_positions_position_idx');\n> pg_relation_size\n> ------------------\n> 11624448\n> (1 row)\n> \n> The size is about 11Mb. If I run a VACUUM command in verbose, I see \n> this about the index:\n> \n> INFO: index \"asset_positions_position_idx\" now contains 4373 row \n> versions in 68 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.16 sec.\n> \n> When I run the same command to find the size after the VACUUM, it hasn't \n> changed. However, if I drop and then recreate this index, the size \n> becomes much smaller (almost half the size):\n> \n> drop index asset_positions_position_idx;\n> DROP INDEX\n> \n> CREATE INDEX asset_positions_position_idx ON asset_positions USING GIST \n> (position GIST_GEOMETRY_OPS);\n> CREATE INDEX\n> \n> select pg_relation_size('asset_positions_position_idx');\n> pg_relation_size\n> ------------------\n> 6225920\n> (1 row)\n> \n> Is there something I am missing here, or is the reclaiming of dead \n> tuples for these indexes just not working when I run a VACUUM? Is it \n> suppose to work?\n\nThat's not really a useful test to see if VACUUM is working. VACUUM can\nonly trim space off the end of a relation (index or table), where by\n'end' I mean the end of the last file for that relation on the\nfilesystem. This means it's pretty rare for VACUUM to actually shrink\nfiles on-disk for tables. This can be even more difficult for indexes (I\nthink it's virtually impossible to shrink a B-tree index file).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 17:33:08 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index growth problem"
},
{
"msg_contents": "So I guess any changes that were made to make VACUUM and FSM include \nindexes\ndoes not remove the necessity to reindex (as long as we don't want index \nsizes to bloat and grow larger than they need be).\nIs that correct?\n\nGraham.\n\n\nJim C. Nasby wrote:\n\n>On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:\n> \n>\n>>I have a question about index growth.\n>>\n>>The way I understand it, dead tuples in indexes were not reclaimed by \n>>VACUUM commands in the past. However, I've read in a few forum posts \n>>that this was changed somewhere between 7.4 and 8.0.\n>> \n>>\n> \n>There was a change to indexes that made vacuum more effective; I don't\n>remember the details off-hand.\n>\n> \n>\n>>I'm having an issue where my GIST indexes are growing quite large, and \n>>running a VACUUM doesn't appear to remove the dead tuples. For example, \n>>if I check out the size an index before running any VACUUM :\n>>\n>>select pg_relation_size('asset_positions_position_idx');\n>>pg_relation_size\n>>------------------\n>> 11624448\n>>(1 row)\n>>\n>>The size is about 11Mb. If I run a VACUUM command in verbose, I see \n>>this about the index:\n>>\n>>INFO: index \"asset_positions_position_idx\" now contains 4373 row \n>>versions in 68 pages\n>>DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n>>CPU 0.00s/0.00u sec elapsed 0.16 sec.\n>>\n>>When I run the same command to find the size after the VACUUM, it hasn't \n>>changed. However, if I drop and then recreate this index, the size \n>>becomes much smaller (almost half the size):\n>>\n>>drop index asset_positions_position_idx;\n>>DROP INDEX\n>>\n>>CREATE INDEX asset_positions_position_idx ON asset_positions USING GIST \n>>(position GIST_GEOMETRY_OPS);\n>>CREATE INDEX\n>>\n>>select pg_relation_size('asset_positions_position_idx');\n>>pg_relation_size\n>>------------------\n>> 6225920\n>>(1 row)\n>>\n>>Is there something I am missing here, or is the reclaiming of dead \n>>tuples for these indexes just not working when I run a VACUUM? Is it \n>>suppose to work?\n>> \n>>\n>\n>That's not really a useful test to see if VACUUM is working. VACUUM can\n>only trim space off the end of a relation (index or table), where by\n>'end' I mean the end of the last file for that relation on the\n>filesystem. This means it's pretty rare for VACUUM to actually shrink\n>files on-disk for tables. This can be even more difficult for indexes (I\n>think it's virtually impossible to shrink a B-tree index file).\n> \n>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Wed, 18 Oct 2006 15:39:56 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index growth problem"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 03:39:56PM -0700, Graham Davis wrote:\n> So I guess any changes that were made to make VACUUM and FSM include \n> indexes\n> does not remove the necessity to reindex (as long as we don't want index \n> sizes to bloat and grow larger than they need be).\n> Is that correct?\n\nNot in recent releases, no. Remember that any index on a field that gets\nupdate activity will naturally have some amount of empty space due to\npage splits, but this is normal (and actually desireable). So you can't\njust compare index size before and after a REINDEX and assume\nsomething's wrong if REINDEX shrinks the index; that gain is artificial.\n\nSo long as you are vacuuming frequently enough and keep the free space\nmap large enough, there shouldn't be any need to REINDEX.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 17:51:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index growth problem"
},
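The FSM headroom Jim refers to can be checked from psql; a database-wide VACUUM VERBOSE prints, in its final lines, how many page slots are needed versus the configured limits. The values reported are simply whatever the server is configured with:

SHOW max_fsm_pages;
SHOW max_fsm_relations;

-- The last lines of a database-wide verbose vacuum report how many page
-- slots are in use and how many are required to track all free space.
VACUUM VERBOSE;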
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:\n>> When I run the same command to find the size after the VACUUM, it hasn't \n>> changed.\n\n> That's not really a useful test to see if VACUUM is working. VACUUM can\n> only trim space off the end of a relation (index or table), where by\n> 'end' I mean the end of the last file for that relation on the\n> filesystem. This means it's pretty rare for VACUUM to actually shrink\n> files on-disk for tables. This can be even more difficult for indexes (I\n> think it's virtually impossible to shrink a B-tree index file).\n\nRight; IIRC, a plain VACUUM doesn't even try to shorten the physical\nindex file, because of locking considerations. The important question\nis whether space gets recycled properly for re-use within the index.\nIf the index continues to grow over time, then you might have a problem\nwith insufficient FSM space (or not vacuuming often enough).\n\nIt might be worth pointing out that VACUUM isn't intended to try to\nreduce the disk file to the shortest possible length --- the assumption\nis that you are doing vacuuming on a regular basis and so the file\nlength should converge to a \"steady state\", wherein the internal free\nspace runs out about the time you do another VACUUM and reclaim some\nmore space for re-use. There's not really any point in being more\naggressive than that; we'd just create additional disk I/O when the\nfilesystem releases and later reassigns space to the file.\n\nOf course, this argument fails in the scenario where you make a large\nand permanent reduction in the amount of data in a table. There are\nvarious hacks you can use to clean up in that case --- use TRUNCATE not\nDELETE if you can, or consider using CLUSTER (not VACUUM FULL). Some\nvariants of ALTER TABLE will get rid of internal free space, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2006 19:00:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index growth problem "
}
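As a concrete, hedged illustration of the clean-up options Tom lists, with purely hypothetical table and index names:

-- One-off rewrite of a table whose contents shrank permanently: CLUSTER
-- rewrites the heap in index order and rebuilds the indexes, handing the
-- freed space back to the filesystem.  It takes an exclusive lock.
CLUSTER my_table_pkey ON my_table;

-- If every row is being discarded, TRUNCATE is far cheaper than DELETE.
TRUNCATE TABLE my_table;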
] |
[
{
"msg_contents": "Hi List !\n\nI have a performance problem, but I am not sure whether it really\nis a problem or not.\nI am running a fresh install of PostgreSQL 8.1.4 on Windows2000.\nThe server is a bi-opteron with 2GB of RAM. The PostgreSQL's data\nfolder is on a RAID-0 array of 2 SATA WD Raptor drives (10.000\nrpm, 8MB cache).\n\nI have a very simple table, with only ~500 rows :\nCREATE TABLE table1\n(\n gid int4 NOT NULL DEFAULT 0,\n field1 varchar(45) NOT NULL,\n field2 int2 NOT NULL DEFAULT 1,\n field3 int2 NOT NULL DEFAULT 0,\n field4 int2 NOT NULL DEFAULT 1,\n field5 int4 NOT NULL DEFAULT -1,\n field6 int4,\n field7 int4,\n field8 int4,\n field9 int2 DEFAULT 1,\n CONSTRAINT table1_pkey PRIMARY KEY (gid)\n)\nWITHOUT OIDS;\n\nThe problem is that simple select queries with the primary key in the \nWHERE statement take very long to run.\nFor example, this query returns only 7 rows and takes about 1\nsecond to run !\nSELECT * FROM table1 WHERE gid in (33,110,65,84,92,94,13,7,68,41);\n\nEXPLAIN ANALYZE SELECT * FROM table1 WHERE gid in\n(33,110,65,84,92,94,13,7,68,41);\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on table1 (cost=0.00..23.69 rows=10 width=35) (actual\ntime=0.023..0.734 rows=7 loops=1)\n Filter: ((gid = 33) OR (gid = 110) OR (gid = 65) OR (gid = 84)\nOR (gid = 92) OR (gid = 94) OR (gid = 13) OR (gid = 7) OR (gid =\n68) OR (gid = 41))\n Total runtime: 0.801 ms\n(3 rows)\n\nI have run \"VACUUM FULL\" on this table many times... I don't know\nwhat to try next !\nWhat is wrong here (because I hope that something is wrong) ?\nThanks a lot for your help !\n\nRegards\n--\nArnaud\n\n",
"msg_date": "Tue, 03 Oct 2006 13:25:10 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance on very simple query ?"
},
{
"msg_contents": "On Tue, Oct 03, 2006 at 01:25:10PM +0200, Arnaud Lesauvage wrote:\n> For example, this query returns only 7 rows and takes about 1\n> second to run !\n>\n> [...]\n>\n> Total runtime: 0.801 ms\n\n0.801 ms is _far_ under a second... Where do you have the latter timing from?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 3 Oct 2006 14:00:27 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n>> Total runtime: 0.801 ms\n> \n> 0.801 ms is _far_ under a second... Where do you have the latter timing from?\n\nI fell stupid...\nSorry for the useless message...\n\n\n\n---->[]\n\n",
"msg_date": "Tue, 03 Oct 2006 14:04:22 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "On Oct 3, 2006, at 13:25 , Arnaud Lesauvage wrote:\n\n> The problem is that simple select queries with the primary key in \n> the WHERE statement take very long to run.\n> For example, this query returns only 7 rows and takes about 1\n> second to run !\n> SELECT * FROM table1 WHERE gid in (33,110,65,84,92,94,13,7,68,41);\n\nThis is a very small table, but generally speaking, such queries \nbenefit from an index; eg.,\n\n create index table1_gid on table1 (gid);\n\nNote that PostgreSQL may still perform a sequential scan if it thinks \nthis has a lower cost, eg. for small tables that span just a few pages.\n\n> I have run \"VACUUM FULL\" on this table many times... I don't know\n> what to try next !\n\nPostgreSQL's query planner relies on table statistics to perform \ncertain optimizations; make sure you run \"analyze table1\".\n\nAlexander.\n",
"msg_date": "Tue, 3 Oct 2006 14:08:03 +0200",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "[Arnaud Lesauvage - Tue at 01:25:10PM +0200]\n> I have a performance problem, but I am not sure whether it really\n> is a problem or not.\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on table1 (cost=0.00..23.69 rows=10 width=35) (actual\n> time=0.023..0.734 rows=7 loops=1)\n> Filter: ((gid = 33) OR (gid = 110) OR (gid = 65) OR (gid = 84)\n> OR (gid = 92) OR (gid = 94) OR (gid = 13) OR (gid = 7) OR (gid =\n> 68) OR (gid = 41))\n> Total runtime: 0.801 ms\n> (3 rows)\n> \n> I have run \"VACUUM FULL\" on this table many times... I don't know\n> what to try next !\n> What is wrong here (because I hope that something is wrong) ?\n> Thanks a lot for your help !\n\nDid you try \"analyze\" as well? It's weird it's using seq scan, since\nyou have a primary key it's supposed to have an index ... though 500\nrows is little.\n\nI just checked up our own production database, takes 0.08 ms to fetch a\nrow by ID from one of our tables containing 176k with rows.\n",
"msg_date": "Tue, 3 Oct 2006 14:10:04 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "[Tobias Brox - Tue at 02:10:04PM +0200]\n> Did you try \"analyze\" as well? It's weird it's using seq scan, since\n> you have a primary key it's supposed to have an index ... though 500\n> rows is little.\n> \n> I just checked up our own production database, takes 0.08 ms to fetch a\n> row by ID from one of our tables containing 176k with rows.\n\nOh, the gid is not primary key. I guess I should also apologize for\nadding noise here :-)\n\nMake an index here! :-)\n",
"msg_date": "Tue, 3 Oct 2006 14:11:09 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Tobias Brox wrote:\n> Oh, the gid is not primary key. I guess I should also apologize for\n> adding noise here :-)\n\nYes, it is a primary key, but I am the noise maker here ! ;-)\n",
"msg_date": "Tue, 03 Oct 2006 14:13:59 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "[Arnaud Lesauvage - Tue at 02:13:59PM +0200]\n> Tobias Brox wrote:\n> >Oh, the gid is not primary key. I guess I should also apologize for\n> >adding noise here :-)\n> \n> Yes, it is a primary key, but I am the noise maker here ! ;-)\n\nOh - it is. How can you have a default value on a primary key? Will it\nuse the index if you do \"analyze\"? Is there an index on the table at\nall, do you get it up if you ask for a description of the table (\\d\ntablename)? \n",
"msg_date": "Tue, 3 Oct 2006 14:17:09 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Arnaud Lesauvage - Tue at 02:13:59PM +0200]\n>> Tobias Brox wrote:\n>> >Oh, the gid is not primary key. I guess I should also apologize for\n>> >adding noise here :-)\n>> \n>> Yes, it is a primary key, but I am the noise maker here ! ;-)\n> \n> Oh - it is. How can you have a default value on a primary key? \n\nGood question, but I am not the DB designer in that case.\n\n > Will it\n> use the index if you do \"analyze\"? Is there an index on the table at\n> all, do you get it up if you ask for a description of the table (\\d\n> tablename)? \n\nIn this case (a simplified version of the real case), the pkey is the \nonly index. It is used if I only as for one row (WHERE gid=33).\n",
"msg_date": "Tue, 03 Oct 2006 14:25:28 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Tobias Brox <tobias 'at' nordicbet.com> writes:\n\n> Oh - it is. How can you have a default value on a primary key? Will it\n\nyou can but it is useless :)\n\nfoo=# create table bar (uid int primary key default 0, baz text);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"bar_pkey\" for table \"bar\"\nCREATE TABLE\nfoo=# insert into bar (baz) values ('');\nINSERT 217426996 1\nfoo=# insert into bar (baz) values ('');\nERROR: duplicate key violates unique constraint \"bar_pkey\"\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "03 Oct 2006 14:27:40 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Arnaud Lesauvage <[email protected]> writes:\n> Seq Scan on table1 (cost=0.00..23.69 rows=10 width=35) (actual\n> time=0.023..0.734 rows=7 loops=1)\n> Filter: ((gid = 33) OR (gid = 110) OR (gid = 65) OR (gid = 84)\n> OR (gid = 92) OR (gid = 94) OR (gid = 13) OR (gid = 7) OR (gid =\n> 68) OR (gid = 41))\n> Total runtime: 0.801 ms\n\nThis will start using the index as soon as the table gets big enough to\nmake it worthwhile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Oct 2006 10:35:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ? "
},
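Tom's point is easy to demonstrate on a scratch table: inflate the row count, re-analyze, and the same IN query flips from a sequential scan to an index scan. The table below is hypothetical and only mimics table1's key column:

CREATE TABLE table1_big (
    gid    int4 PRIMARY KEY,
    field1 varchar(45) NOT NULL
);

INSERT INTO table1_big
SELECT i, 'row ' || i
FROM generate_series(1, 500000) AS g(i);

ANALYZE table1_big;

-- With half a million rows the planner should now pick the primary key index.
EXPLAIN ANALYZE
SELECT * FROM table1_big WHERE gid IN (33,110,65,84,92,94,13,7,68,41);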
{
"msg_contents": "On October 3, 2006 04:25 am, Arnaud Lesauvage wrote:\n> Hi List !\n>\n> I have a performance problem, but I am not sure whether it really\n> is a problem or not.\n> I am running a fresh install of PostgreSQL 8.1.4 on Windows2000.\n> The server is a bi-opteron with 2GB of RAM. The PostgreSQL's data\n> folder is on a RAID-0 array of 2 SATA WD Raptor drives (10.000\n> rpm, 8MB cache).\n>\n> I have a very simple table, with only ~500 rows :\n> CREATE TABLE table1\n> (\n> gid int4 NOT NULL DEFAULT 0,\n> field1 varchar(45) NOT NULL,\n> field2 int2 NOT NULL DEFAULT 1,\n> field3 int2 NOT NULL DEFAULT 0,\n> field4 int2 NOT NULL DEFAULT 1,\n> field5 int4 NOT NULL DEFAULT -1,\n> field6 int4,\n> field7 int4,\n> field8 int4,\n> field9 int2 DEFAULT 1,\n> CONSTRAINT table1_pkey PRIMARY KEY (gid)\n> )\n> WITHOUT OIDS;\n>\n> The problem is that simple select queries with the primary key in the\n> WHERE statement take very long to run.\n> For example, this query returns only 7 rows and takes about 1\n> second to run !\n\nAccording to your explain analyze, it's taking 0.8 of a milisecond (less than \n1 1000th of a second) so I can't see how this can possibly be speed up.\n\n> SELECT * FROM table1 WHERE gid in (33,110,65,84,92,94,13,7,68,41);\n>\n> EXPLAIN ANALYZE SELECT * FROM table1 WHERE gid in\n> (33,110,65,84,92,94,13,7,68,41);\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>---------------------------------------------------------------------------\n> Seq Scan on table1 (cost=0.00..23.69 rows=10 width=35) (actual\n> time=0.023..0.734 rows=7 loops=1)\n> Filter: ((gid = 33) OR (gid = 110) OR (gid = 65) OR (gid = 84)\n> OR (gid = 92) OR (gid = 94) OR (gid = 13) OR (gid = 7) OR (gid =\n> 68) OR (gid = 41))\n> Total runtime: 0.801 ms\n> (3 rows)\n>\n> I have run \"VACUUM FULL\" on this table many times... I don't know\n> what to try next !\n> What is wrong here (because I hope that something is wrong) ?\n> Thanks a lot for your help !\n>\n> Regards\n> --\n> Arnaud\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n-- \nDarcy Buskermolen\nCommand Prompt, Inc.\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997\nhttp://www.commandprompt.com/\n",
"msg_date": "Tue, 3 Oct 2006 08:10:23 -0700",
"msg_from": "Darcy Buskermolen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "On October 3, 2006 05:08 am, Alexander Staubo wrote:\n> On Oct 3, 2006, at 13:25 , Arnaud Lesauvage wrote:\n> > The problem is that simple select queries with the primary key in\n> > the WHERE statement take very long to run.\n> > For example, this query returns only 7 rows and takes about 1\n> > second to run !\n> > SELECT * FROM table1 WHERE gid in (33,110,65,84,92,94,13,7,68,41);\n>\n> This is a very small table, but generally speaking, such queries\n> benefit from an index; eg.,\n>\n> create index table1_gid on table1 (gid);\n\ngid is is a PRIMARY KEY, so it will already have an index in place.\n>\n> Note that PostgreSQL may still perform a sequential scan if it thinks\n> this has a lower cost, eg. for small tables that span just a few pages.\n>\n> > I have run \"VACUUM FULL\" on this table many times... I don't know\n> > what to try next !\n>\n> PostgreSQL's query planner relies on table statistics to perform\n> certain optimizations; make sure you run \"analyze table1\".\n>\n> Alexander.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \nDarcy Buskermolen\nCommand Prompt, Inc.\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997\nhttp://www.commandprompt.com/\n",
"msg_date": "Tue, 3 Oct 2006 08:35:05 -0700",
"msg_from": "Darcy Buskermolen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
},
{
"msg_contents": "Hi, Tobias,\n\nTobias Brox wrote:\n\n> How can you have a default value on a primary key?\n\nJust declare the column with both a default value and a primary key\nconstraint.\n\nIt makes sense when the default value is calculated instead of a\nconstant, by calling a function that generates the key.\n\nIn fact, the SERIAL type does nothing but defining a sequence, and then\nuse nextval('sequencename') as default.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 04 Oct 2006 19:10:48 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on very simple query ?"
}
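Spelled out, the equivalence Markus describes looks roughly like this (the names are only examples):

-- Approximately what "id SERIAL PRIMARY KEY" expands to:
CREATE SEQUENCE foo_id_seq;

CREATE TABLE foo (
    id  int4 PRIMARY KEY DEFAULT nextval('foo_id_seq'),
    val text
);

-- Rows inserted without an explicit id pick up the generated value.
INSERT INTO foo (val) VALUES ('first');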
] |
[
{
"msg_contents": "I have a simple case, selecting on a LIKE where clause over a single\ncolumn that has an index on it. On windows it uses the index - on\nlinux it does not. I have exactly the same scema and data in each,\nand I have run the necessary analyze commands on both.\n\nWindows is running 8.1.4\nLinux is running from RPM postgresql-server-8.1.4-1.FC5.1\n\nThere are 1 million rows in the table - a number I would expect to\nlower the score of a sequential scan for the planner. There is an\nindex on 'c_number'.\n\nOn windows I get this:\n\norderstest=# explain analyze select * from t_order where c_number like '0001%';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_order_c_number on t_order (cost=0.00..26.53\nrows=928 width=43) (actual time=0.029..2.857 rows=1000 loops=1)\n Index Cond: (((c_number)::text >= '0001'::character varying) AND\n((c_number)::text < '0002'::character varying))\n Filter: ((c_number)::text ~~ '0001%'::text)\n Total runtime: 4.572 ms\n(4 rows)\n\nGreat - the index is used, and the query is lightning fast.\n\nOn Linux I get this:\n\norderstest=# explain analyze select c_number from t_order where\nc_number like '0001%';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Seq Scan on t_order (cost=0.00..20835.00 rows=983 width=11) (actual\ntime=1.364..1195.064 rows=1000 loops=1)\n Filter: ((c_number)::text ~~ '0001%'::text)\n Total runtime: 1197.312 ms\n(3 rows)\n\nI just can't use this level of performance in my application.\n\nOn my linux box, the only way I can get it to use the index is to use\nthe = operator. If I use anything else, a seq scan is used.\n\nDisabling sequence scans in the config has no effect. It still does\nnot use the index for anything other than an = comparison.\n\nHere is a dump of the table description:\n\norderstest=# \\d t_order;\n Table \"public.t_order\"\n Column | Type | Modifiers\n-----------------------+------------------------+-----------\n id | bigint | not null\n c_number | character varying(255) |\n customer_id | bigint |\n origincountry_id | bigint |\n destinationcountry_id | bigint |\nIndexes:\n \"t_order_pkey\" PRIMARY KEY, btree (id)\n \"t_order_c_number\" btree (c_number)\n \"zzzz_3\" btree (destinationcountry_id)\n \"zzzz_4\" btree (origincountry_id)\n \"zzzz_5\" btree (customer_id)\nForeign-key constraints:\n \"fk9efdd3a33dbb666c\" FOREIGN KEY (destinationcountry_id)\nREFERENCES go_country(id)\n \"fk9efdd3a37d3dd384\" FOREIGN KEY (origincountry_id) REFERENCES\ngo_country(id)\n \"fk9efdd3a38654c9d3\" FOREIGN KEY (customer_id) REFERENCES t_party(id)\n\nThat dump is exactly the same on both machines.\n\nThe only major difference between the hardware is that the windows\nmachine has 2gb RAM and a setting of 10000 shared memory pages,\nwhereas the linux machine has 756Mb RAM and a setting of 3000 shared\nmemory pages (max. shared memory allocation of 32Mb). I can't see any\nother differences in configuration.\n\nDisk throughput on both is reasonable (40Mb/second buffered reads)\n\nCan anyone explain the difference in the planner behaviour on the two\nsystems, using what appears to be the same version of postgres?\n\n-- \nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 08:48:03 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "simple case using index on windows but not on linux"
},
{
"msg_contents": "simon godden wrote:\n> The only major difference between the hardware is that the windows\n> machine has 2gb RAM and a setting of 10000 shared memory pages,\n> whereas the linux machine has 756Mb RAM and a setting of 3000 shared\n> memory pages (max. shared memory allocation of 32Mb). I can't see any\n> other differences in configuration.\nYou can increase the max shared memory size if you have root access. See\n\nhttp://www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SYSVIPC-PARAMETERS\n\nScroll down for Linux-specific instructions.\n\n-- \nHeikki Linnakangas\nEnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 04 Oct 2006 10:11:44 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "(Sending again because I forgot to reply to all)\n\nOn 10/4/06, Heikki Linnakangas <[email protected]> wrote:\n> You can increase the max shared memory size if you have root access. See\n>\n> http://www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SYSVIPC-PARAMETERS\n>\n> Scroll down for Linux-specific instructions.\n\nThanks for the link.\n\nAre you saying that the shared memory size is the issue here? Please\ncan you explain how it would cause a seq scan rather than an index\nscan.\n\nI would like to understand the issue before making changes.\n\n--\nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 10:19:01 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "simon godden wrote:\n> (Sending again because I forgot to reply to all)\n> \n> On 10/4/06, Heikki Linnakangas <[email protected]> wrote:\n>> You can increase the max shared memory size if you have root access. See\n>>\n>> http://www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SYSVIPC-PARAMETERS \n>>\n>>\n>> Scroll down for Linux-specific instructions.\n> \n> Thanks for the link.\n> \n> Are you saying that the shared memory size is the issue here? Please\n> can you explain how it would cause a seq scan rather than an index\n> scan.\n> \n> I would like to understand the issue before making changes.\n\nIt *might* be shared-memory settings. It's almost certainly something to \ndo with setup. If you have the same data and the same query and can \nreliably produce different results then something else must be different.\n\nIf you look at the explain output from both, PG knows the seq-scan is \ngoing to be expensive (cost=20835) so the Linux box either\n1. Doesn't have the index (and you say it does, so it's not this).\n2. Thinks the index will be even more expensive.\n3. Can't use the index at all.\n\nIssue \"set enable_seqscan=false\" and then run your explain analyse. If \nyour query uses the index, what is the estimated cost? If the estimated \ncost is larger than a seq-scan that would indicate your configuration \nsettings are badly out-of-range.\n\nIf the index isn't used, then we have problem #3. I think this is what \nyou are actually seeing. Your locale is something other than \"C\" and PG \ndoesn't know how to use like with indexes. Read up on operator classes \nor change your locale.\nhttp://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 04 Oct 2006 10:36:05 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
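If problem #3 is what is happening, the operator-class fix Richard links to looks something like this; the index name is invented, and Dave suggests essentially the same thing later in the thread:

-- Extra index using the pattern-matching operator class, so that
-- LIKE 'prefix%' can use an index even under a non-C locale.
CREATE INDEX t_order_c_number_pattern
    ON t_order (c_number varchar_pattern_ops);

ANALYZE t_order;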
{
"msg_contents": "On 10/4/06, Richard Huxton <[email protected]> wrote:\n>\n> Issue \"set enable_seqscan=false\" and then run your explain analyse. If\n> your query uses the index, what is the estimated cost? If the estimated\n> cost is larger than a seq-scan that would indicate your configuration\n> settings are badly out-of-range.\n\nI did that and it still used seq-scan.\n\n>\n> If the index isn't used, then we have problem #3. I think this is what\n> you are actually seeing. Your locale is something other than \"C\" and PG\n> doesn't know how to use like with indexes. Read up on operator classes\n> or change your locale.\n> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n>\n\nAha - that sounds like it - this is the output from locale\n\nLANG=en_US.UTF-8\nLC_CTYPE=\"en_US.UTF-8\"\nLC_NUMERIC=\"en_US.UTF-8\"\nLC_TIME=\"en_US.UTF-8\"\nLC_COLLATE=\"en_US.UTF-8\"\nLC_MONETARY=\"en_US.UTF-8\"\nLC_MESSAGES=\"en_US.UTF-8\"\nLC_PAPER=\"en_US.UTF-8\"\nLC_NAME=\"en_US.UTF-8\"\nLC_ADDRESS=\"en_US.UTF-8\"\nLC_TELEPHONE=\"en_US.UTF-8\"\nLC_MEASUREMENT=\"en_US.UTF-8\"\nLC_IDENTIFICATION=\"en_US.UTF-8\"\nLC_ALL=\n\nI guess it cannot determine the collating sequence?\n\nI'm not too familiar with unix locale issues - does this output match\nyour problem description?\n\nCan you explain how to change my locale to 'C'? (I'm quite happy for\nyou to tell me to RTFM, as I know this is not a linux user mailing\nlist :)\n\n-- \nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 10:40:46 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "simon godden wrote:\n>> If the index isn't used, then we have problem #3. I think this is what\n>> you are actually seeing. Your locale is something other than \"C\" and PG\n>> doesn't know how to use like with indexes. Read up on operator classes\n>> or change your locale.\n>> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n> \n> Aha - that sounds like it - this is the output from locale\n> \n> LANG=en_US.UTF-8\n> LC_CTYPE=\"en_US.UTF-8\"\n..\n> I guess it cannot determine the collating sequence?\n\nIt can, but isn't sure that it can rely on LIKE 'A%' being the same as \n >= 'A' and < 'B' (not always true). Re-creating the index with the \nright opclass will tell it this is the case.\n\n> I'm not too familiar with unix locale issues - does this output match\n> your problem description?\n\nOK - quick intro to locales. Create a file /tmp/sortthis containing the \nfollowing:\n---begin file---\nBBB\nCCC\nAAA\nA CAT\nA DOG\nACAT\n---end file---\nNow run \"sort /tmp/sortthis\". You'll probably see spaces get ignored. \nNow run \"LANG=C sort /tmp/sortthis\". You'll probably see a traditional \nASCII (\"C\") sort. If not try LC_COLLATE rather than LANG.\n\n> Can you explain how to change my locale to 'C'? (I'm quite happy for\n> you to tell me to RTFM, as I know this is not a linux user mailing\n> list :)\n\nYou'll want to dump your databases and re-run initdb with a locale of \n\"C\" (or no locale). See:\n http://www.postgresql.org/docs/8.1/static/app-initdb.html\n\nThat will mean all sorting will be on ASCII value. The problem is that \nthe database picks up the operating-system's default locale when you \ninstall it from package. Not always what you want, but then until you \nunderstand the implications you can't really decide one way or the other.\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 04 Oct 2006 11:39:16 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "simon godden wrote:\n> I did that, e.g. initdb --locale=C, re-created all my data and have\n> exactly the same problem.\n> \n> I have two indexes, one with no options, and one with the varchar\n> operator options.\n> \n> So the situation now is:\n> If I do a like query it uses the index with the varchar options;\n> If I do a = query, it uses the index with no options;\n> If I do a < or > or any other operator, it reverts back to a seq-scan!\n> \n> I am on FC5 - any further ideas? Did I need to do anything specific\n> about collating sequence? I thought that the --locale=C would set\n> that for all options.\n\n From psql, a \"show all\" command will list all your config settings and \nlet you check the lc_xxx values are correct.\n\nMake sure you've analysed the database after restoring, otherwise it \nwill have bad stats available.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 04 Oct 2006 14:47:33 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
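For completeness, the relevant settings can also be read individually instead of hunting through "show all"; the values reported are whatever the cluster was initdb'd with:

SHOW lc_collate;        -- collation order used when sorting and comparing text
SHOW lc_ctype;          -- character classification
SHOW server_encoding;   -- on-disk encoding of this database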
{
"msg_contents": "On 10/4/06, Richard Huxton <[email protected]> wrote:\n> simon godden wrote:\n>\n> From psql, a \"show all\" command will list all your config settings and\n> let you check the lc_xxx values are correct.\n\nlc_collate is C, as are all the other lc settings.\n\nI have run the analyze commands.\n\nStill the same.\n\n-- \nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 14:47:43 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "simon godden wrote:\n> On 10/4/06, Richard Huxton <[email protected]> wrote:\n>> simon godden wrote:\n>>\n>> From psql, a \"show all\" command will list all your config settings and\n>> let you check the lc_xxx values are correct.\n> \n> lc_collate is C, as are all the other lc settings.\n> \n> I have run the analyze commands.\n> \n> Still the same.\n\nCan you post EXPLAIN ANALYSE for the LIKE and <> queries that should be \nusing the index? With enable_seqscan on and off please.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 04 Oct 2006 15:07:03 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> \n> lc_collate is C, as are all the other lc settings.\n> \n> I have run the analyze commands.\n> \n> Still the same.\n\n\nThat is strange. I figured it had to be related to the locale and the LIKE\noperator. I'm not an expert on these locale issues, but I'd be curious to\nsee if it would start using an index if you added an index like this:\n\nCREATE INDEX test_index ON t_order (c_number varchar_pattern_ops);\n\nDave\n\n",
"msg_date": "Wed, 4 Oct 2006 09:12:07 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "> Can you post EXPLAIN ANALYSE for the LIKE and <> queries that should be\n> using the index? With enable_seqscan on and off please.\n>\n\nOK - I don't know what happened, but now my linux installation is\nbehaving like the windows one. I honestly don't know what changed,\nwhich I know doesn't help people determine the cause of my issue....\n\nBut I still have a problem with > and <, on both environments.\n\nNow, both LIKE and = are using the index with no options on it.\n\nBut the other operators are not.\n\nFirstly, with enable_seqscan on:\n\norderstest=# explain analyze select c_number from t_order where\nc_number like '00001%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_order_c_number on t_order (cost=0.00..3.01 rows=1\nwidth=11) (actual time=0.167..0.610 rows=100 loops=1)\n Index Cond: (((c_number)::text >= '00001'::character varying) AND\n((c_number)::text < '00002'::character varying))\n Filter: ((c_number)::text ~~ '00001%'::text)\n Total runtime: 0.921 ms\n(4 rows)\n\norderstest=# explain analyze select c_number from t_order where\nc_number > '0001';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on t_order (cost=0.00..18312.50 rows=878359 width=11)\n(actual time=1.102..4364.704 rows=878000 loops=1)\n Filter: ((c_number)::text > '0001'::text)\n Total runtime: 6431.968 ms\n(3 rows)\n\nAnd now with enable_seqscan off:\n\norderstest=# explain analyze select c_number from t_order where\nc_number like '00001%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_order_c_number on t_order (cost=0.00..3.01 rows=1\nwidth=11) (actual time=0.245..0.674 rows=100 loops=1)\n Index Cond: (((c_number)::text >= '00001'::character varying) AND\n((c_number)::text < '00002'::character varying))\n Filter: ((c_number)::text ~~ '00001%'::text)\n Total runtime: 0.971 ms\n(4 rows)\n\n(Just the same)\n\norderstest=# explain analyze select c_number from t_order where\nc_number > '0001';\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_order_c_number on t_order (cost=0.00..22087.31\nrows=878912 width=11) (actual time=0.230..3504.909 rows=878000\nloops=1)\n Index Cond: ((c_number)::text > '0001'::text)\n Total runtime: 5425.931 ms\n(3 rows)\n\n(Now using the index but getting awful performance out of it - how's that?)\n\nThe difference seems to be whether it is treating the index condition\nas 'character varying' or 'text'.\n\nBasically, can I do > < >= <= on a varchar without causing a seq-scan?\n\n-- \nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 15:46:12 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple case using index on windows but not on linux"
},
{
"msg_contents": "I think I am being stupid now.\n\nThe > query was returning so many rows (87% of the rows in the table)\nthat a seq-scan was of course the best way.\n\nSorry - all is now working and the problem was the locale issue.\n\nThanks so much for your help everyone.\n\n-- \nSimon Godden\n",
"msg_date": "Wed, 4 Oct 2006 15:48:52 +0100",
"msg_from": "\"simon godden\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple case using index on windows but not on linux"
}
] 
[
{
"msg_contents": "To be a bit constructive, could it be an idea to add unsubscribe\ninformation as one of the standard tailer tips? Then unsubscribe info\nwouldn't appear in every mail, but often enough for people considering\nto unsubscribe. To be totally non-constructive, let me add a bit to the\nnoise below:\n\n[Bruno]\n> > If you really can't figure out how to unsubscribe from a list, you should\n> > contact the list owner, not the list. The list members can't unsubscribe you\n> > (and it isn't their job to) and the owner may not be subscribed to the\n> > list. \n\nIf he can't find out how to unsubscribe from the list, how can he be\nexpected to figure out the owner address?\n\n[Joshua]\n> It is ridiculous that this community expects people to read email\n> headers to figure out how to unsubscribe from our lists.\n\nI always check the headers when I want to unsubscribe from any mailing\nlist, and I think most people on this list have above average knowledge\nof such technical details. Of course, on a list with this many\nrecepients there will always be some exceptions ...\n\n",
"msg_date": "Wed, 4 Oct 2006 17:44:38 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unsubscribe"
},
{
"msg_contents": "\n> [Joshua]\n>> It is ridiculous that this community expects people to read email\n>> headers to figure out how to unsubscribe from our lists.\n> \n> I always check the headers when I want to unsubscribe from any mailing\n> list, and I think most people on this list have above average knowledge\n> of such technical details. Of course, on a list with this many\n> recepients there will always be some exceptions ...\n\nI would consider myself above average knowledge of such technical\ndetails and I didn't know the list information was in the headers until\nrecently (the last time all of this came up).\n\nNow, I of course did know that there were headers, and I can use them to\ndiagnose problems but I was unaware of an RFC that explicitly stated how\nthe headers were supposed to be sent for mailing lists.\n\nHowever, that is besides the point. It is still ridiculous to expect\nanyone to read the headers just to unsubscribe from a list.\n\nIf we didn't want to add it for each list we could just add a link here:\n\nhttp://www.postgresql.org/community/lists/subscribe\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 04 Oct 2006 08:54:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribe"
},
{
"msg_contents": "> If we didn't want to add it for each list we could just add a link here:\n> \n> http://www.postgresql.org/community/lists/subscribe\n\n+1\n\nWhen I want to unsubscribe from a list (very rare in my case, I don't\nsubscribe in the first place if I'm not sure I want to get it), I start\nby looking where I subscribed... so the above suggestion might work\nquite well even for lazy subscribers, they'll have their unsubscription\ninfo right where they started the subscription process, no more\nsearching needed.\n\nCheers,\nCsaba\n\n\n",
"msg_date": "Wed, 04 Oct 2006 18:02:53 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribe"
},
{
"msg_contents": "On Wed, 2006-10-04 at 18:02, Csaba Nagy wrote:\n> > If we didn't want to add it for each list we could just add a link here:\n> > \n> > http://www.postgresql.org/community/lists/subscribe\n\nOK, now that I had a second look on that page, it does contain\nunsubscription info... but it's well hidden for the fugitive look... the\ncaption is a big \"Subscribe to Lists\", you wouldn't think at a first\nglance think that the form is actually used to unsubscribe too, would\nyou ?\n\nSo maybe it's just that the text should be more explicit about what it\nactually does...\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Wed, 04 Oct 2006 18:35:49 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribe"
},
{
"msg_contents": "On Oct 4, 2006, at 10:54 AM, Joshua D. Drake wrote:\n>> [Joshua]\n>>> It is ridiculous that this community expects people to read email\n>>> headers to figure out how to unsubscribe from our lists.\n>>\n>> I always check the headers when I want to unsubscribe from any \n>> mailing\n>> list, and I think most people on this list have above average \n>> knowledge\n>> of such technical details. Of course, on a list with this many\n>> recepients there will always be some exceptions ...\n>\n> I would consider myself above average knowledge of such technical\n> details and I didn't know the list information was in the headers \n> until\n> recently (the last time all of this came up).\n>\n> Now, I of course did know that there were headers, and I can use \n> them to\n> diagnose problems but I was unaware of an RFC that explicitly \n> stated how\n> the headers were supposed to be sent for mailing lists.\n>\n> However, that is besides the point. It is still ridiculous to expect\n> anyone to read the headers just to unsubscribe from a list.\n>\n> If we didn't want to add it for each list we could just add a link \n> here:\n>\n> http://www.postgresql.org/community/lists/subscribe\n\nAn even better option would be to switch to a list manager that \nactively traps these emails, such as mailman.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Thu, 5 Oct 2006 22:49:37 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribe"
},
{
"msg_contents": "On Oct 4, 2006, at 11:35 AM, Csaba Nagy wrote:\n> On Wed, 2006-10-04 at 18:02, Csaba Nagy wrote:\n>>> If we didn't want to add it for each list we could just add a \n>>> link here:\n>>>\n>>> http://www.postgresql.org/community/lists/subscribe\n>\n> OK, now that I had a second look on that page, it does contain\n> unsubscription info... but it's well hidden for the fugitive \n> look... the\n> caption is a big \"Subscribe to Lists\", you wouldn't think at a first\n> glance think that the form is actually used to unsubscribe too, would\n> you ?\n>\n> So maybe it's just that the text should be more explicit about what it\n> actually does...\n\nBetter yet, have an unsubscribe page...\n\nPersonally, I'm tempted to get creative with procmail, and post a \nrecipe that others can use to help enlighten those that post \nunsubscribe messages to the list... :>\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Thu, 5 Oct 2006 22:52:47 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribe"
}
] 
[
{
"msg_contents": "I'm having an interesting (perhaps anomalous) variability in UPDATE \nperformance on a table in my database, and wanted to see if there was \nany interest in looking further before I destroy the evidence and move on.\n\nThe table, VOTER, contains 3,090,013 rows and each row is about 120 \nbytes wide. It's loaded via a batch process in one shot, and the \nload is followed by an VACUUM FULL ANALYZE. Its structure is shown \nat the bottom of the message.\n\nIf I run the statement:\n\n(1): UPDATE voter SET gender = 'U';\n\non the table in this condition, the query effectively never ends -- \nI've allowed it to run for 12-14 hours before giving up. The plan \nfor that statement is:\n\nSeq Scan on voter (cost=0.00..145117.38 rows=3127738 width=120)\n\nHowever, if I do the following:\n\n(2): CREATE TABLE voter_copy AS SELECT * FROM voter;\n(3): UPDATE voter_copy SET gender = 'U';\n\nthe query is much faster --\n\nSeq Scan on voter_copy (cost=0.00..96231.35 rows=3090635 width=120) \n(actual time=108.056..43203.696 rows=3090013 loops=1)\nTotal runtime: 117315.731 ms\n\nWhen (1) is running, the machine is very nearly idle, with no \npostmaster taking more than 1 or 2 % of the CPU. When (3) is \nrunning, about 60% CPU utilization occurs.\n\nThe same behavior occurs if the table is dumped and reloaded.\n\nMy environment is Windows XP SP2 and I'm on Postgresql 8.1.4 \ninstalled via the msi installer. Hardware is an Athlon 2000+ \n1.67ghx, with 1G RAM. The database is hosted on a Seagate Barracuda \n7200.10 connected via a FastTrak 4300 without any RAID \nconfiguration. dd shows a write speed of 39 MB/s and read speed of \n44 MB/s. The server configuration deviates from the default in these \nstatements:\n\nfsync = off\nshared_buffers = 25000\nwork_mem = 50000\nmaintenance_work_mem = 100000\n\nCREATE TABLE voter\n(\n voter_id int4,\n sos_voter_id varchar(20),\n household_id int4,\n first_name varchar(40),\n middle_name varchar(40),\n last_name varchar(40),\n name_suffix varchar(10),\n phone_number varchar(10),\n bad_phone_no bool,\n registration_date date,\n birth_year int4,\n gender char(1),\n pri_ind char(1),\n gen_1992_primary_party char(1),\n council_votes int2,\n primary_votes int2,\n council_primary_votes int2,\n special_votes int2,\n presidential_votes int2,\n votes int2,\n absentee_votes int2,\n last_voted_date date,\n first_voted_date date,\n rating char(1),\n score float4,\n general_votes int2\n)\nWITHOUT OIDS;\n\n",
"msg_date": "Wed, 04 Oct 2006 12:54:30 -0500",
"msg_from": "Steve Peterson <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPDATE becomes mired / win32"
},
{
"msg_contents": "> The table, VOTER, contains 3,090,013 rows and each row is about 120 bytes \n> wide. It's loaded via a batch process in one shot, and the load is \n> followed by an VACUUM FULL ANALYZE. Its structure is shown at the bottom \n> of the message.\n\n\nif the table wasn't empty before and has indices defined, try a \"REINDEX \nTABLE VOTER\" before running the update. i had a similar case where an often \nupdated table was vacuumed regurarly, but the indices grew and grew and \ngrew. in my case the table - even when empty and analyze full'ed was 1.2gb \naccording to pgadmin due to (outdated) indices. a reindex fixed all my \nperformance issues.\n\n- thomas \n\n\n",
"msg_date": "Wed, 4 Oct 2006 20:13:33 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE becomes mired / win32"
},
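A quick sketch of how to check for the kind of index bloat described above, using the size functions available in 8.1 and later (voter is the table from the original post):

-- heap size versus everything else (indexes and toast) for the table
SELECT pg_size_pretty(pg_relation_size('voter')) AS heap_size,
       pg_size_pretty(pg_total_relation_size('voter')
                      - pg_relation_size('voter')) AS index_and_toast_size;

-- rebuild the indexes if they are far larger than the data justifies
REINDEX TABLE voter;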
{
"msg_contents": "Steve Peterson <[email protected]> writes:\n> If I run the statement:\n> (1): UPDATE voter SET gender = 'U';\n> on the table in this condition, the query effectively never ends -- \n> I've allowed it to run for 12-14 hours before giving up.\n> ...\n> When (1) is running, the machine is very nearly idle, with no \n> postmaster taking more than 1 or 2 % of the CPU.\n\nIs the disk busy? If neither CPU nor I/O are saturated, then it's a\ngood bet that the UPDATE isn't actually running at all, but is waiting\nfor a lock somewhere. Have you looked into pg_locks to check for a\nconflicting lock?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Oct 2006 16:28:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE becomes mired / win32 "
},
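A minimal sketch of the lock check suggested here, listing any lock requests that have not been granted (columns are from the standard pg_locks view):

SELECT locktype, relation::regclass AS relation, pid, mode, granted
FROM pg_locks
WHERE NOT granted;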
{
"msg_contents": "Both commands seem to be saturating the disk. There's nothing else \nrunning in the database, and all of the locks have 't' in the granted \ncolumn, which I'm assuming means they're not blocked.\n\nAccording to the statistics, the original table has 889 mb and \nindexes of 911mb, whereas the copy has 1021 mb and no space for indexes.\n\nSteve\n\nAt 03:28 PM 10/4/2006, Tom Lane wrote:\n>Steve Peterson <[email protected]> writes:\n> > If I run the statement:\n> > (1): UPDATE voter SET gender = 'U';\n> > on the table in this condition, the query effectively never ends --\n> > I've allowed it to run for 12-14 hours before giving up.\n> > ...\n> > When (1) is running, the machine is very nearly idle, with no\n> > postmaster taking more than 1 or 2 % of the CPU.\n>\n>Is the disk busy? If neither CPU nor I/O are saturated, then it's a\n>good bet that the UPDATE isn't actually running at all, but is waiting\n>for a lock somewhere. Have you looked into pg_locks to check for a\n>conflicting lock?\n>\n> regards, tom lane\n\n\n",
"msg_date": "Wed, 04 Oct 2006 23:56:11 -0500",
"msg_from": "Steve Peterson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATE becomes mired / win32 "
},
{
"msg_contents": "I'm pretty sure that the table was empty before doing the load, but I \ngave this a shot. It didn't have an impact on the results.\n\nThe behavior also persists across a dump/reload of the table into a \nnew install on a different machine. IIRC dump/reload rebuilds \nindexes from scratch.\n\nSteve\n\nAt 01:13 PM 10/4/2006, [email protected] wrote:\n>>The table, VOTER, contains 3,090,013 rows and each row is about 120 \n>>bytes wide. It's loaded via a batch process in one shot, and the \n>>load is followed by an VACUUM FULL ANALYZE. Its structure is shown \n>>at the bottom of the message.\n>\n>\n>if the table wasn't empty before and has indices defined, try a \n>\"REINDEX TABLE VOTER\" before running the update. i had a similar \n>case where an often updated table was vacuumed regurarly, but the \n>indices grew and grew and grew. in my case the table - even when \n>empty and analyze full'ed was 1.2gb according to pgadmin due to \n>(outdated) indices. a reindex fixed all my performance issues.\n>\n>- thomas\n>\n\n\n",
"msg_date": "Thu, 05 Oct 2006 00:06:49 -0500",
"msg_from": "Steve Peterson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATE becomes mired / win32"
},
{
"msg_contents": "Steve Peterson <[email protected]> writes:\n> The behavior also persists across a dump/reload of the table into a \n> new install on a different machine. IIRC dump/reload rebuilds \n> indexes from scratch.\n\nHm. There must be something you're not telling us that accounts for\nthe difference between the original table and the copied version ---\nforeign keys linking from other tables, perhaps?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Oct 2006 09:19:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE becomes mired / win32 "
}
] 
[
{
"msg_contents": "Look at this:\n\nNBET=> explain select * from account_transaction where users_id=123456 order by created desc limit 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..27.40 rows=10 width=213)\n -> Index Scan Backward using account_transaction_on_user_and_timestamp on account_transaction (cost=0.00..1189.19 rows=434 width=213)\n Index Cond: (users_id = 123456)\n(3 rows)\n\nNBET=> explain select * from account_transaction where users_id=123456 order by created desc, id desc limit 10;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1114.02..1114.04 rows=10 width=213)\n -> Sort (cost=1114.02..1115.10 rows=434 width=213)\n Sort Key: created, id\n -> Index Scan using account_transaction_by_users_id on account_transaction (cost=0.00..1095.01 rows=434 width=213)\n Index Cond: (users_id = 123456)\n(5 rows)\n\nIn case the explains doesn't explain themself good enough: we have a\ntransaction table with ID (primary key, serial), created (a timestamp)\nand a users_id. Some of the users have generated thousands of\ntransactions, and the above query is a simplified version of the query\nused to show the users their last transactions. Since we have a large\nuser base hammering our servers with this request, the speed is\nsignificant.\n\nWe have indices on the users_id field and the (users_id, created)-tuple.\n\nThe timestamp is set by the application and has a resolution of 1 second\n- so there may easily be several transactions sharing the same\ntimestamp, but this is an exception not the rule. I suppose the\ndevelopers needed to add the ID to the sort list to come around a bug,\nbut still prefering to have the primary sorting by created to be able to\nuse the index. One workaround here is to order only by id desc and\ncreate a new index on (users_id, id) - but I really don't like adding\nmore indices to the transaction table.\n\n",
"msg_date": "Wed, 4 Oct 2006 20:12:43 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multi-key index not beeing used - bug?"
},
{
"msg_contents": "Thanks Tobias. The difference here though, is that in terms of your \ndatabase I am doing a query to select the most recent transaction for \nEACH user at once, not just for one user. If I do a similar query to \nyours to get the last transaction for a single user, my query is fast \nlike yours. It's when I'm doing a query to get the results for all \nusers at once that it is slow. If you try a query to get the most \nrecent transaction of all useres at once you will run into the same \nproblem I am having.\n\nGraham.\n\n\nTobias Brox wrote:\n\n>Look at this:\n>\n>NBET=> explain select * from account_transaction where users_id=123456 order by created desc limit 10;\n> QUERY PLAN\n>-------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..27.40 rows=10 width=213)\n> -> Index Scan Backward using account_transaction_on_user_and_timestamp on account_transaction (cost=0.00..1189.19 rows=434 width=213)\n> Index Cond: (users_id = 123456)\n>(3 rows)\n>\n>NBET=> explain select * from account_transaction where users_id=123456 order by created desc, id desc limit 10;\n> QUERY PLAN\n>------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1114.02..1114.04 rows=10 width=213)\n> -> Sort (cost=1114.02..1115.10 rows=434 width=213)\n> Sort Key: created, id\n> -> Index Scan using account_transaction_by_users_id on account_transaction (cost=0.00..1095.01 rows=434 width=213)\n> Index Cond: (users_id = 123456)\n>(5 rows)\n>\n>In case the explains doesn't explain themself good enough: we have a\n>transaction table with ID (primary key, serial), created (a timestamp)\n>and a users_id. Some of the users have generated thousands of\n>transactions, and the above query is a simplified version of the query\n>used to show the users their last transactions. Since we have a large\n>user base hammering our servers with this request, the speed is\n>significant.\n>\n>We have indices on the users_id field and the (users_id, created)-tuple.\n>\n>The timestamp is set by the application and has a resolution of 1 second\n>- so there may easily be several transactions sharing the same\n>timestamp, but this is an exception not the rule. I suppose the\n>developers needed to add the ID to the sort list to come around a bug,\n>but still prefering to have the primary sorting by created to be able to\n>use the index. One workaround here is to order only by id desc and\n>create a new index on (users_id, id) - but I really don't like adding\n>more indices to the transaction table.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n> \n>\n\n\n-- \nGraham Davis\nRefractions Research Inc.\[email protected]\n\n",
"msg_date": "Wed, 04 Oct 2006 11:22:51 -0700",
"msg_from": "Graham Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-key index not beeing used - bug?"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> NBET=> explain select * from account_transaction where users_id=123456 order by created desc, id desc limit 10;\n\n> We have indices on the users_id field and the (users_id, created)-tuple.\n\nNeither of those indexes can provide the sort order the query is asking\nfor.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Oct 2006 16:33:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-key index not beeing used - bug? "
},
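For reference, a sketch of an index that can provide the requested ordering directly; the table and columns are the ones from the thread, and the index name is made up:

CREATE INDEX account_transaction_user_created_id
    ON account_transaction (users_id, created, id);

EXPLAIN
SELECT * FROM account_transaction
 WHERE users_id = 123456
 ORDER BY created DESC, id DESC
 LIMIT 10;

With the equality condition on users_id, a backward scan of this three-column index returns rows already ordered by created DESC, id DESC, so no separate sort step is needed.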
{
"msg_contents": "[Tom Lane - Wed at 04:33:54PM -0400]\n> > We have indices on the users_id field and the (users_id, created)-tuple.\n> \n> Neither of those indexes can provide the sort order the query is asking\n> for.\n\nAh; that's understandable - the planner have two options, to do a index\ntraversion without any extra sorting, or to take out everything and then\nsort. What I'd like postgres to do is to traverse the index and do some\nsorting for every unique value of created. Maybe such a feature can be\nfound in future releases - like Postgres 56.3? ;-)\n",
"msg_date": "Wed, 4 Oct 2006 22:41:48 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multi-key index not beeing used - bug?"
}
] 
[
{
"msg_contents": "I've got another query I'm trying to optimize:\n\nselect aj.album from\npublic.track t\njoin public.albumjoin aj\non (aj.track = t.id)\njoin (select id from public.albummeta am where tracks between 10 and \n14) lam\non (lam.id = aj.album)\nwhere (t.name % '01New OrderEvil Dust' or t.name % '04OrbitalOpen Mind')\ngroup by aj.album having count(*) >= 9.6;\n\nThis gives an expensive (but still reasonable) plan of:\n\nHashAggregate (cost=76523.64..76602.25 rows=4492 width=4)\n Filter: ((count(*))::numeric >= 9.6)\n -> Hash Join (cost=63109.73..76501.18 rows=4492 width=4)\n Hash Cond: (\"outer\".id = \"inner\".album)\n -> Bitmap Heap Scan on albummeta am \n(cost=1810.10..9995.34 rows=187683 width=4)\n Recheck Cond: ((tracks >= 10) AND (tracks <= 14))\n -> Bitmap Index Scan on albummeta_tracks_index \n(cost=0.00..1810.10 rows=187683 width=0)\n Index Cond: ((tracks >= 10) AND (tracks <= 14))\n -> Hash (cost=61274.03..61274.03 rows=10243 width=4)\n -> Nested Loop (cost=163.87..61274.03 rows=10243 \nwidth=4)\n -> Bitmap Heap Scan on track t \n(cost=163.87..28551.33 rows=10243 width=4)\n Recheck Cond: (((name)::text % '01New \nOrderEvil Dust'::text) OR ((name)::text % '04OrbitalOpen Mind'::text))\n -> BitmapOr (cost=163.87..163.87 \nrows=10248 width=0)\n -> Bitmap Index Scan on \ntrack_name_trgm_idx (cost=0.00..81.93 rows=5124 width=0)\n Index Cond: ((name)::text % \n'01New OrderEvil Dust'::text)\n -> Bitmap Index Scan on \ntrack_name_trgm_idx (cost=0.00..81.93 rows=5124 width=0)\n Index Cond: ((name)::text % \n'04OrbitalOpen Mind'::text)\n -> Index Scan using albumjoin_trackindex on \nalbumjoin aj (cost=0.00..3.18 rows=1 width=8)\n Index Cond: (aj.track = \"outer\".id)\n(19 rows)\n\nUnfortunately, when I modify this example to use a more typical \nnumber of trigram searches or'd together (anywhere from 10 to 20), \nthe planner thinks the bitmap heap scan on track t will return a lot \nof rows, and so reverts to doing a sequential scan of albumjoin for \nthe next table join. That would make sense.... IF there were a lot of \nrows returned by the bitmap index scans. But here is where the \nplanner gets it really wrong, if I'm reading it right.\n\nIt seems to think both my index scans will return 5124 rows, when, in \nreality, it's a lot less:\n\nselect count(*) from public.track where name % '01New OrderEvil Dust';\ncount\n-------\n 20\n(1 row)\n\nselect count(*) from public.track where name % '04OrbitalOpen Mind';\ncount\n-------\n 123\n(1 row)\n\n\nHow can I get the planner to not expect so many rows to be returned? \nA possibly related question is: because pg_tgrm lets me set the \nmatching threshold of the % operator, how does that affect the planner?\n",
"msg_date": "Wed, 4 Oct 2006 18:35:15 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_trgm indexes giving bad estimations?"
},
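A small sketch for experimenting with the pg_trgm matching threshold mentioned above; show_limit, set_limit and similarity are standard pg_trgm functions, and track is the table from the query. As the reply below notes, changing the threshold changes which rows % returns, but the planner's row estimate does not take it into account:

SELECT show_limit();      -- current % threshold (0.3 by default)
SELECT set_limit(0.5);    -- raise it for stricter matching

SELECT name, similarity(name, '04OrbitalOpen Mind') AS sim
FROM track
WHERE name % '04OrbitalOpen Mind'
ORDER BY sim DESC;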
{
"msg_contents": "Ben <[email protected]> writes:\n> How can I get the planner to not expect so many rows to be returned? \n\nWrite an estimation function for the pg_trgm operator(s). (Send in a\npatch if you do!) I see that % is using \"contsel\" which is only a stub,\nand would likely be wrong for % even if it weren't.\n\n> A possibly related question is: because pg_tgrm lets me set the \n> matching threshold of the % operator, how does that affect the planner?\n\nIt hasn't a clue about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Oct 2006 21:54:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm indexes giving bad estimations? "
},
{
"msg_contents": "Now that I have a little time to work on this again, I've thought about it \nand it seems that an easy and somewhat accurate cop-out to do this is to \nuse whatever the selectivity function would be for the like operator, \nmultiplied by a scalar that pg_tgrm should already have access to.\n\nUnfortunately, it's not at all clear to me from reading \nhttp://www.postgresql.org/docs/8.1/interactive/xoper-optimization.html#AEN33077\nhow like impliments selectivity. Any pointers on where to look?\n\nOn Wed, 4 Oct 2006, Tom Lane wrote:\n\n> Ben <[email protected]> writes:\n>> How can I get the planner to not expect so many rows to be returned?\n>\n> Write an estimation function for the pg_trgm operator(s). (Send in a\n> patch if you do!) I see that % is using \"contsel\" which is only a stub,\n> and would likely be wrong for % even if it weren't.\n>\n>> A possibly related question is: because pg_tgrm lets me set the\n>> matching threshold of the % operator, how does that affect the planner?\n>\n> It hasn't a clue about that.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n",
"msg_date": "Tue, 31 Oct 2006 21:41:38 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_trgm indexes giving bad estimations? "
},
{
"msg_contents": "Ben <[email protected]> writes:\n> Unfortunately, it's not at all clear to me from reading \n> http://www.postgresql.org/docs/8.1/interactive/xoper-optimization.html#AEN33077\n> how like impliments selectivity. Any pointers on where to look?\n\nlikesel() and subsidiary functions in src/backend/utils/adt/selfuncs.c\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Nov 2006 01:01:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm indexes giving bad estimations? "
}
] 
[
{
"msg_contents": "I have two tables, SAMPLE and HITLIST that when joined, generate a monsterous sort.\n\n HITLIST_ROWS has about 48,000 rows\n SAMPLE has about 16 million rows\n\n The joined column is indexed in SAMPLE\n HITLIST_ROWS is a scratch table which is used a few times then discarded.\n HITLIST_ROWS has no indexes at all\n\nThere are two plans below. The first is before an ANALYZE HITLIST_ROWS, and it's horrible -- it looks to me like it's sorting the 16 million rows of the SEARCH table. Then I run ANALYZE HITLIST_ROWS, and the plan is pretty decent.\n\nFirst question: HITLIST_ROWS so small, I don't understand why the lack of ANALYZE should cause SAMPLE's contents to be sorted.\n\nSecond question: Even though ANALYZE brings it down from 26 minutes to 47 seconds, a huge improvement, it still seems slow to me. Its going at roughly 1 row per millisecond -- are my expectations too high? This is a small-ish Dell computer (Xeon), 4 GB memory, with a four-disk SATA software RAID0 (bandwidth limited to about 130 MB/sec due to PCI cards). Other joins of a similar size seem much faster.\n\nIt looks like I'll need to do an ANALYZE every time I modify HITLIST_ROWS, which seems like a waste because HITLIST_ROWS is rarely used more than once or twice before being truncated and rebuilt with new content. (HITLIST_ROWS can't be an actual temporary table, though, because it's a web application and each access is from a new connection.)\n\nThis is Postgres 8.0.3. (We're upgrading soon.)\n\nThanks,\nCraig\n\n\n\nexplain analyze select t.SAMPLE_ID from SAMPLE t, HITLIST_ROWS ph where t.VERSION_ID = ph.ObjectID);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=4782.35..1063809.82 rows=613226 width=4) (actual time=174.212..1593886.582 rows=176294 loops=1)\n Merge Cond: (\"outer\".version_id = \"inner\".objectid)\n -> Index Scan using i_sample_version_id on sample t (cost=0.00..1008713.68 rows=16446157 width=8) (actual time=0.111..1571911.208 rows=16446157 loops=1)\n -> Sort (cost=4782.35..4910.39 rows=51216 width=4) (actual time=173.669..389.496 rows=176329 loops=1)\n Sort Key: ph.objectid\n -> Seq Scan on hitlist_rows_378593 ph (cost=0.00..776.16 rows=51216 width=4) (actual time=0.015..90.059 rows=48834 loops=1)\n Total runtime: 1594093.725 ms\n(7 rows)\n\nchmoogle2=> analyze HITLIST_ROWS;\nANALYZE\nchmoogle2=> explain analyze select t.SAMPLE_ID from SAMPLE t, HITLIST_ROWS ph where t.VERSION_ID = ph.ObjectID;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=874.43..457976.83 rows=584705 width=4) (actual time=302.792..47796.719 rows=176294 loops=1)\n Hash Cond: (\"outer\".version_id = \"inner\".objectid)\n -> Seq Scan on sample t (cost=0.00..369024.57 rows=16446157 width=8) (actual time=46.344..26752.343 rows=16446157 loops=1)\n -> Hash (cost=752.34..752.34 rows=48834 width=4) (actual time=149.548..149.548 rows=0 loops=1)\n -> Seq Scan on hitlist_rows_378593 ph (cost=0.00..752.34 rows=48834 width=4) (actual time=0.048..80.721 rows=48834 loops=1)\n Total runtime: 47988.572 ms\n(6 rows)\n",
"msg_date": "Fri, 06 Oct 2006 23:34:31 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple join optimized badly?"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> There are two plans below. The first is before an ANALYZE HITLIST_ROWS, and it's horrible -- it looks to me like it's sorting the 16 million rows of the SEARCH table. Then I run ANALYZE HITLIST_ROWS, and the plan is pretty decent.\n\nIt would be interesting to look at the before-ANALYZE cost estimate for\nthe hash join, which you could get by setting enable_mergejoin off (you\nmight have to turn off enable_nestloop too). I recall though that\nthere's a fudge factor in costsize.c that penalizes hashing on a column\nthat no statistics are available for. The reason for this is the\npossibility that the column has only a small number of distinct values,\nwhich would make a hash join very inefficient (in the worst case all\nthe values might end up in the same hash bucket, making it no better\nthan a nestloop). Once you've done ANALYZE it plugs in a real estimate\ninstead, and evidently the cost estimate drops enough to make hashjoin\nthe winner.\n\nYou might be able to persuade it to use a hashjoin anyway by increasing\nwork_mem enough, but on the whole my advice is to do the ANALYZE after\nyou load up the temp table. The planner really can't be expected to be\nvery intelligent when it has no stats.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 Oct 2006 11:51:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
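Two sketches following the advice above, with table and column names taken from the thread. First, the recommended loading pattern, analyzing the scratch table right after it is refilled:

TRUNCATE hitlist_rows;
-- ... repopulate hitlist_rows here ...
ANALYZE hitlist_rows;   -- refresh statistics before running the join
SELECT t.sample_id
FROM sample t
JOIN hitlist_rows ph ON t.version_id = ph.objectid;

Second, a way to see the before-ANALYZE hash join estimate by disabling the competing plan types for one session:

SET enable_mergejoin = off;
SET enable_nestloop = off;
EXPLAIN SELECT t.sample_id FROM sample t, hitlist_rows ph WHERE t.version_id = ph.objectid;
RESET enable_mergejoin;
RESET enable_nestloop;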
{
"msg_contents": "Wouldn't PG supporting simple optmizer hints get around this kinda\nproblem? Seems to me that at least one customer posting per week\nwould be solved via the use of simple hints.\n\nIf the community is interested... EnterpriseDB has added support for\na few different simple types of hints (optimize for speed, optimize\nfor first rows, use particular indexes) for our upcoming 8.2 version.\n We are glad to submit them into the community process if there is any\nchance they will eventually be accepted for 8.3.\n\nI don't think there is an ANSI standrd for hints, but, that doesn't\nmean they are not occosaionally extrenmely useful. All hints are\neffectively harmless/helpful suggestions, the planner is free to\nignore them if they are not feasible.\n\n--Denis Lussier\n Founder\n http://www.enterprisedb.com\n\nOn 10/7/06, Tom Lane <[email protected]> wrote:\n> \"Craig A. James\" <[email protected]> writes:\n> > There are two plans below. The first is before an ANALYZE HITLIST_ROWS, and it's horrible -- it looks to me like it's sorting the 16 million rows of the SEARCH table. Then I run ANALYZE HITLIST_ROWS, and the plan is pretty decent.\n>\n> It would be interesting to look at the before-ANALYZE cost estimate for\n> the hash join, which you could get by setting enable_mergejoin off (you\n> might have to turn off enable_nestloop too). I recall though that\n> there's a fudge factor in costsize.c that penalizes hashing on a column\n> that no statistics are available for. The reason for this is the\n> possibility that the column has only a small number of distinct values,\n> which would make a hash join very inefficient (in the worst case all\n> the values might end up in the same hash bucket, making it no better\n> than a nestloop). Once you've done ANALYZE it plugs in a real estimate\n> instead, and evidently the cost estimate drops enough to make hashjoin\n> the winner.\n>\n> You might be able to persuade it to use a hashjoin anyway by increasing\n> work_mem enough, but on the whole my advice is to do the ANALYZE after\n> you load up the temp table. The planner really can't be expected to be\n> very intelligent when it has no stats.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Sat, 7 Oct 2006 21:50:14 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Oct 7, 2006, at 8:50 PM, Denis Lussier wrote:\n> Wouldn't PG supporting simple optmizer hints get around this kinda\n> problem? Seems to me that at least one customer posting per week\n> would be solved via the use of simple hints.\n>\n> If the community is interested... EnterpriseDB has added support for\n> a few different simple types of hints (optimize for speed, optimize\n> for first rows, use particular indexes) for our upcoming 8.2 version.\n> We are glad to submit them into the community process if there is any\n> chance they will eventually be accepted for 8.3.\n\n+1 (and I'd be voting that way regardless of where my paycheck comes \nfrom) While it's important that we continue to improve the planner, \nit's simply not possible to build one that's smart enough to handle \nevery single situation.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Sun, 8 Oct 2006 12:17:09 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Denis,\n\n> Wouldn't PG supporting simple optmizer hints get around this kinda\n> problem? Seems to me that at least one customer posting per week\n> would be solved via the use of simple hints.\n\n... and add 100 other problems. Hints are used because the DBA thinks that \nthey are smarter than the optimizer; 99% of the time, they are wrong. \nJust try manually optimizing a complex query, you'll see -- with three \njoin types, several scan types, aggregates, bitmaps, internal and external \nsorts, and the ability to collapse subqueries it's significantly more than \na human can figure out accurately. \n\nGiven the availability of hints, the newbie DBA will attempt to use them \ninstead of fixing any of the underlying issues. Craig's post is a classic \nexample of that: what he really needs to do is ANALYZE HITLIST_ROWS after \npopulating it. If he had the option of hints, and was shortsighted (I'm \nnot assuming that Craig is shortsighted, but just for the sake of \nargument) he'd fix this with a hint and move on ... and then add another \nhint when he adds a another query which needs HITLIST_ROWS, and another. \nAnd then he'll find out that some change in his data (the sample table \ngrowing, for example) makes his hints obsolete and he has to go back and \nre-tune them all.\n\nAnd then ... it comes time to upgrade PostgreSQL. The hints which worked \nwell in version 8.0 won't necessarily work well in 8.2. In fact, many of \nthem may make queries disastrously slow. Ask any Oracle DBA, they'll \ntell you that upgrading hint is a major PITA, and why Oracle is getting \naway from Hints and has eliminated the rules-based optimizer.\n\nNow, if you were offering us a patch to auto-populate the statistics as a \ntable is loaded, I'd be all for that. But I, personally, would need a \nlot of convincing to believe that hints don't do more harm than good.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Sun, 8 Oct 2006 16:05:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Now, if you were offering us a patch to auto-populate the statistics as a \n> table is loaded, I'd be all for that.\n\nCuriously enough, I was just thinking about that after reading Craig's\npost. autovacuum will do this, sort of, if it's turned on --- but its\nreaction time is measured in minutes typically so that may not be good\nenough.\n\nAnother thing we've been beat up about in the past is that loading a\npg_dump script doesn't ANALYZE the data afterward...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 08 Oct 2006 19:16:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "> ... and add 100 other problems. Hints are used because the DBA thinks that \n> they are smarter than the optimizer; 99% of the time, they are wrong. \n> Just try manually optimizing a complex query, you'll see -- with three \n> join types, several scan types, aggregates, bitmaps, internal and external \n> sorts, and the ability to collapse subqueries it's significantly more than \n> a human can figure out accurately. \n\nSorry, this is just wrong, wrong, wrong.\n\nI've heard this from several PG developers every time hints have come up in my roughly eighteen months as a PG application developer. And in between every assertion that \"the application programmers aren't as smart as the optimizer\", there are a dozen or two examples where posters to this list are told to increase this setting, decrease that one, adjust these other two, and the end result is to get the plan that the application programmer -- AND the PG professionals -- knew was the right plan to start with.\n\nPeople are smarter than computers. Period.\n\nNow I'll agree that the majority, perhaps the great majority, of questions to this group should NOT be solved with hints. You're absolutely right that in most cases hints are a really bad idea. People will resort to hints when they should be learning better ways to craft SQL, and when they should have read the configuration guides.\n\nBut that doesn't alter the fact that many, perhaps most, complicated application will, sooner or later, run into a showstopper case where PG just optimizes wrong, and there's not a damned thing the app programmer can do about it.\n\nMy example, discussed previously in this forum, is a classic. I have a VERY expensive function (it's in the class of NP-complete problems, so there is no faster way to do it). There is no circumstance when my function should be used as a filter, and no circumstance when it should be done before a join. But PG has no way of knowing the cost of a function, and so the optimizer assigns the same cost to every function. Big disaster.\n\nThe result? I can't use my function in any WHERE clause that involves any other conditions or joins. Only by itself. PG will occasionally decide to use my function as a filter instead of doing the join or the other WHERE conditions first, and I'm dead.\n\nThe interesting thing is that PG works pretty well for me on big tables -- it does the join first, then applies my expensive functions. But with a SMALL (like 50K rows) table, it applies my function first, then does the join. A search that completes in 1 second on a 5,000,000 row database can take a minute or more on a 50,000 row database.\n\nInstead, I have to separate the WHERE terms into two SQL statements, and do the join myself. I do the first half of my query, suck it all into memory, do the second half, suck it into memory, build a hash table and join the two lists in memory, then take the joined results and apply my function to it.\n\nThis is not how a relational database should work. It shouldn't fall over dead just when a table's size SHRINKS beyond some threshold that causes the planner to switch to a poor plan.\n\nSince these tables are all in the same database, adjusting configuration parameters doesn't help me. And I suppose I could use SET to disable various plans, but how is that any different from a HINT feature?\n\nNow you might argue that function-cost needs to be added to the optimizer's arsenal of tricks. And I'd agree with you: That WOULD be a better solution than hints. 
But I need my problem solved TODAY, not next year. Hints can help solve problems NOW that can be brought to the PG team's attention later, and in the mean time let me get my application to work.\n\nSorry if I seem particularly hot under the collar on this one. I think you PG designers have created a wonderful product. It's not the lack of hints that bothers me, it's the \"You app developers are dumber than we are\" attitude. We're not. Some of us know what we're doing, and we need hints.\n\nIf it is just a matter of resources, that's fine. I understand that these things take time. But please don't keep dismissing the repeated and serious requests for this feature. It's important.\n\nThanks for listening.\nCraig\n",
"msg_date": "Sun, 08 Oct 2006 18:07:22 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
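For what it is worth, later PostgreSQL releases (8.3 and up) added the knob asked for here: a per-function planner cost. A hedged sketch with an invented function name and a trivial placeholder body, only to show the syntax:

-- declare the function as expensive so the planner applies it after cheaper conditions
CREATE OR REPLACE FUNCTION expensive_match(text, text) RETURNS boolean
    AS $$ SELECT $1 = $2 $$       -- placeholder; the real function is far costlier
    LANGUAGE sql IMMUTABLE
    COST 100000;

-- or adjust a function that already exists
ALTER FUNCTION expensive_match(text, text) COST 100000;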
{
"msg_contents": "Craig A. James wrote:\n>\n> \n> My example, discussed previously in this forum, is a classic. I have a \n> VERY expensive function (it's in the class of NP-complete problems, so \n> there is no faster way to do it). There is no circumstance when my \n> function should be used as a filter, and no circumstance when it should \n> be done before a join. But PG has no way of knowing the cost of a \n> function, and so the optimizer assigns the same cost to every function. \n> Big disaster.\n> \n> The result? I can't use my function in any WHERE clause that involves \n> any other conditions or joins. Only by itself. PG will occasionally \n> decide to use my function as a filter instead of doing the join or the \n> other WHERE conditions first, and I'm dead.\n> \n\nthis is an argument for cost-for-functions rather than hints AFAICS.\n\nIt seems to me that if (in addition to the function cost) we come up \nwith some efficient way of recording cross column statistics we would be \nwell on the way to silencing *most* of the demands for hints.\n\nWe would still be left with some of the really difficult problems - a \nmetric for \"locally correlated\" column distributions and a reliable \nstatistical algorithm for most common value sampling (or a different way \nof approaching this). These sound like interesting computer science or \nmathematics thesis topics, maybe we could try (again?) to get some \ninterest at that level?\n\nCheers\n\nMark\n\n\n",
"msg_date": "Mon, 09 Oct 2006 15:41:20 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Mark Kirkwood wrote:\n>> The result? I can't use my function in any WHERE clause that involves \n>> any other conditions or joins. Only by itself. PG will occasionally \n>> decide to use my function as a filter instead of doing the join or the \n>> other WHERE conditions first, and I'm dead.\n> \n> this is an argument for cost-for-functions rather than hints AFAICS.\n\nPerhaps you scanned past what I wrote a couple paragraphs farther down. I'm going to repeat it because it's the KEY POINT I'm trying to make:\n\nCraig James wrote:\n> Now you might argue that function-cost needs to be added to the \n> optimizer's arsenal of tricks. And I'd agree with you: That WOULD be a \n> better solution than hints. But I need my problem solved TODAY, not \n> next year. Hints can help solve problems NOW that can be brought to the \n> PG team's attention later, and in the mean time let me get my \n> application to work.\n\nCraig\n\n",
"msg_date": "Sun, 08 Oct 2006 19:46:00 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "[email protected] (Tom Lane) writes:\n> Another thing we've been beat up about in the past is that loading a\n> pg_dump script doesn't ANALYZE the data afterward...\n\nDo I misrecall, or were there not plans (circa 7.4...) to for pg_dump\nto have an option to do an ANALYZE at the end?\n\nI seem to remember some dispute as to whether the default should be to\ninclude the ANALYZE, with an option to suppress it, or the opposite...\n-- \n(reverse (concatenate 'string \"ofni.sesabatadxunil\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/wp.html\n\"You can measure a programmer's perspective by noting his attitude on\nthe continuing vitality of FORTRAN.\" -- Alan J. Perlis\n",
"msg_date": "Mon, 09 Oct 2006 02:58:26 +0000",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Craig A. James wrote:\n\n> \n> Perhaps you scanned past what I wrote a couple paragraphs farther down. \n> I'm going to repeat it because it's the KEY POINT I'm trying to make:\n> \n> Craig James wrote:\n>> Now you might argue that function-cost needs to be added to the \n>> optimizer's arsenal of tricks. And I'd agree with you: That WOULD be \n>> a better solution than hints. But I need my problem solved TODAY, not \n>> next year. Hints can help solve problems NOW that can be brought to \n>> the PG team's attention later, and in the mean time let me get my \n>> application to work.\n\nTrue enough - but (aside from the fact that hints might take just as \nlong to get into the development tree as cost-for-functions might take \nto write and put in...) there is a nasty side effect to adding hints - \nmost of the raw material for optimizer improvement disappears (and hence \noptimizer improvement stalls)- why? simply that everyone then hints \neverything - welcome to the mess that Oracle are in (and seem to be \ntrying to get out of recently)!\n\nI understand that it is frustrating to not have the feature you need now \n - but you could perhaps view it as a necessary part of the community \ndevelopment process - your need is the driver for optimizer improvement, \nand it can take time.\n\nNow ISTM that hints \"solve\" the problem by removing the need any further \noptimizer improvement at all - by making *you* the optimizer. This is \nbad for those of us in the DSS world, where most ad-hoc tools do not \nprovide the ability to add hints.\n\nCheers\n\nMark\n\n\n\n",
"msg_date": "Mon, 09 Oct 2006 16:38:34 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> True enough - but (aside from the fact that hints might take just as \n> long to get into the development tree as cost-for-functions might take \n> to write and put in...) there is a nasty side effect to adding hints - \n> most of the raw material for optimizer improvement disappears (and hence \n> optimizer improvement stalls)- why? simply that everyone then hints \n> everything - welcome to the mess that Oracle are in (and seem to be \n> trying to get out of recently)!\n\nAnd *that* is exactly the key point here. Sure, if we had unlimited\nmanpower we could afford to throw some at developing a hint language\nthat would be usable and not too likely to break at every PG revision.\nBut we do not have unlimited manpower. My opinion is that spending\nour development effort on hints will have a poor yield on investment\ncompared to spending similar effort on making the planner smarter.\n\nJosh's post points out some reasons why it's not that easy to get\nlong-term benefits from hints --- you could possibly address some of\nthose problems, but a hint language that responds to those criticisms\nwon't be trivial to design, implement, or maintain. See (many) past\ndiscussions for reasons why not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2006 00:43:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "Tom,\n\n> Josh's post points out some reasons why it's not that easy to get\n> long-term benefits from hints --- you could possibly address some of\n> those problems, but a hint language that responds to those criticisms\n> won't be trivial to design, implement, or maintain. See (many) past\n> discussions for reasons why not.\n\nWell, why don't we see what EDB can come up with? If it's not \"good enough\" \nwe'll just reject it. \n\nUnfortunately, EDB's solution is likely to be Oracle-based, which is liable to \nfall into the trap of \"not good enough.\"\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Mon, 9 Oct 2006 10:12:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Unfortunately, EDB's solution is likely to be Oracle-based, which is\n> liable to fall into the trap of \"not good enough.\"\n\nI'd be a bit worried about Oracle patents as well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2006 13:41:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "[email protected] (\"Craig A. James\") writes:\n> Mark Kirkwood wrote:\n>>> The result? I can't use my function in any WHERE clause that\n>>> involves any other conditions or joins. Only by itself. PG will\n>>> occasionally decide to use my function as a filter instead of doing\n>>> the join or the other WHERE conditions first, and I'm dead.\n>> this is an argument for cost-for-functions rather than hints AFAICS.\n>\n> Perhaps you scanned past what I wrote a couple paragraphs farther\n> down. I'm going to repeat it because it's the KEY POINT I'm trying\n> to make:\n>\n> Craig James wrote:\n>> Now you might argue that function-cost needs to be added to the\n>> optimizer's arsenal of tricks. And I'd agree with you: That WOULD\n>> be a better solution than hints. But I need my problem solved\n>> TODAY, not next year. Hints can help solve problems NOW that can be\n>> brought to the PG team's attention later, and in the mean time let\n>> me get my application to work.\n\nUnfortunately, that \"hint language\" also needs to mandate a temporal\nawareness of when hints were introduced so that it doesn't worsen\nthings down the road.\n\ne.g. - Suppose you upgrade to 8.4, where the query optimizer becomes\nsmart enough (perhaps combined with entirely new kinds of scan\nstrategies) to make certain of your hints obsolete and/or downright\nwrong. Those hints (well, *some* of them) ought to be ignored, right?\n\nThe trouble is that the \"hint language\" will be painfully large and\ncomplex. Its likely-nonstandard interaction with SQL will make query\nparsing worse.\n\nAll we really have, at this point, is a vague desire for a \"hint\nlanguage,\" as opposed to any clear direction as to what it should look\nlike, and how it needs to interact with other system components.\nThat's not nearly enough; there needs to be a clear design.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/advocacy.html\n'Typos in FINNEGANS WAKE? How could you tell?' -- Kim Stanley Robinson\n",
"msg_date": "Mon, 09 Oct 2006 18:07:29 +0000",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Sun, 2006-10-08 at 18:05, Josh Berkus wrote:\n\n> Now, if you were offering us a patch to auto-populate the statistics as a \n> table is loaded, I'd be all for that. But I, personally, would need a \n> lot of convincing to believe that hints don't do more harm than good.\n\nActually, I'd much rather have a log option, on by default, that spit\nout info messages when the planner made a guess that was off by a factor\nof 20 or 50 or so or more on a plan.\n\nI can remember to run stats, but finding slow queries that are slow\nbecause the plan was bad, that's the hard part.\n",
"msg_date": "Mon, 09 Oct 2006 15:30:59 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 06:07:29PM +0000, Chris Browne wrote:\n> [email protected] (\"Craig A. James\") writes:\n> > Mark Kirkwood wrote:\n> >>> The result? I can't use my function in any WHERE clause that\n> >>> involves any other conditions or joins. Only by itself. PG will\n> >>> occasionally decide to use my function as a filter instead of doing\n> >>> the join or the other WHERE conditions first, and I'm dead.\n> >> this is an argument for cost-for-functions rather than hints AFAICS.\n> >\n> > Perhaps you scanned past what I wrote a couple paragraphs farther\n> > down. I'm going to repeat it because it's the KEY POINT I'm trying\n> > to make:\n> >\n> > Craig James wrote:\n> >> Now you might argue that function-cost needs to be added to the\n> >> optimizer's arsenal of tricks. And I'd agree with you: That WOULD\n> >> be a better solution than hints. But I need my problem solved\n> >> TODAY, not next year. Hints can help solve problems NOW that can be\n> >> brought to the PG team's attention later, and in the mean time let\n> >> me get my application to work.\n> \n> Unfortunately, that \"hint language\" also needs to mandate a temporal\n> awareness of when hints were introduced so that it doesn't worsen\n> things down the road.\n> \n> e.g. - Suppose you upgrade to 8.4, where the query optimizer becomes\n> smart enough (perhaps combined with entirely new kinds of scan\n> strategies) to make certain of your hints obsolete and/or downright\n> wrong. Those hints (well, *some* of them) ought to be ignored, right?\n \nGreat, then you pull the hints back out of the application. They're a\nlast resort anyway; if you have more than a handful of them in your code\nyou really need to look at what you're doing.\n\n> The trouble is that the \"hint language\" will be painfully large and\n> complex. Its likely-nonstandard interaction with SQL will make query\n> parsing worse.\n> \n> All we really have, at this point, is a vague desire for a \"hint\n> language,\" as opposed to any clear direction as to what it should look\n> like, and how it needs to interact with other system components.\n> That's not nearly enough; there needs to be a clear design.\n\nI can agree to that, but we'll never get any progress so long as every\ntime hints are brought up the response is that they're evil and should\nnever be in the database. I'll also say that a very simple hinting\nlanguage (ie: allowing you to specify access method for a table, and\njoin methods) would go a huge way towards enabling app developers to get\nstuff done now while waiting for all these magical optimizer\nimprovements that have been talked about for years.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 9 Oct 2006 16:18:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I'll also say that a very simple hinting\n> language (ie: allowing you to specify access method for a table, and\n> join methods) would go a huge way towards enabling app developers to get\n> stuff done now while waiting for all these magical optimizer\n> improvements that have been talked about for years.\n\nBasically, the claim that it'll be both easy and useful is what I think\nis horsepucky ... let's see a detailed design if you think it's easy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2006 18:39:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> (snippage)... but we'll never get any progress so long as every\n> time hints are brought up the response is that they're evil and should\n> never be in the database. I'll also say that a very simple hinting\n> language (ie: allowing you to specify access method for a table, and\n> join methods) would go a huge way towards enabling app developers to get\n> stuff done now while waiting for all these magical optimizer\n> improvements that have been talked about for years.\n\nIt is possibly because some of us feel they are evil :-) (can't speak \nfor the *real* Pg developers, just my 2c here)\n\nAs for optimizer improvements well, yeah we all want those - but the \nbasic problem (as I think Tom stated) is the developer resources to do \nthem. As an aside this applies to hints as well - even if we have a \npatch to start off with - look at how much time bitmap indexes have been \nworked on to get them ready for release....\n\nPersonally I don't agree with the oft stated comment along the lines of \n\"we will never get the optimizer to the point where it does not need \nsome form of hinting\" as:\n\n1/ we don't know that to be a true statement, and\n2/ it is kind of admitting defeat on a very interesting problem, when in \nfact a great deal of progress has been made to date, obviously by people \nwho believe it is possible to build a \"start enough\" optimizer.\n\nbest wishes\n\nMark\n\n",
"msg_date": "Wed, 11 Oct 2006 13:34:53 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Mark Kirkwood wrote:\n\n> who believe it is possible to build a \"start enough\" optimizer.\n> \nThat's meant to read \"smart enough\" optimizer .. sorry.\n",
"msg_date": "Wed, 11 Oct 2006 13:36:39 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
}
] |
[
{
"msg_contents": "Jonah H. Harris wrote:\n> On Oct 08, 2006 07:05 PM, Josh Berkus <[email protected]> wrote:\n> > Hints are used because the DBA thinks that they are smarter than\n> > the optimizer; 99% of the time, they are wrong.\n> \n> That's a figure which I'm 100% sure cannot be backed up by fact.\n> \n> > Just try manually optimizing a complex query, you'll see -- with three\n> > join types, several scan types, aggregates, bitmaps, [blah blah blah]\n> > it's significantly more than a human can figure out accurately.\n> \n> Let me get this right... the optimizer is written by humans who know and\n> can calculate the proper query plan and generate code to do the same;\n> yet humans aren't smart enough to optimize the queries themselves? A bit\n> of circular reasoning here?\n\nI can do 100! on my computer, but can't do it in my head.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sun, 8 Oct 2006 22:12:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I can do 100! on my computer, but can't do it in my head.\n\nA poor example. 100! is a simple repetative calculation, something computers are very good at. Optimizing an SQL query is very difficult, and a completely different class of problem.\n\nThe fact is the PG team has done a remarkable job with the optimizer so far. I'm usually very happy with its plans. But humans still beat computers at many tasks, and there are unquestionably areas where the PG optimizer is not yet fully developed.\n\nWhen the optimizer reaches its limits, and you have to get your web site running, a HINT can be invaluable.\n\nI said something in a previous version of this topic, which I'll repeat here. The PG documentation for HINTs should be FILLED with STRONG ADMONITIONS to post the problematic queries here before resorting to hints.\n\nThere will always be fools who abuse hints. Too bad for them, but don't make the rest of us suffer for their folly.\n\nCraig\n\n",
"msg_date": "Sun, 08 Oct 2006 19:42:44 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
}
] |
[
{
"msg_contents": "I've recently moved to 8.1 and find that autovacuum doesn't seem to be\nworking, at least not the way I expected it to. I need the tuple count\nfor a table to be updated so indexes will be used when appropriate. I\nwas expecting the tuples count for a table to be updated after\nautovacuum ran. This doesn't seem to be the case. I added 511 records\nto a previously empty table and waited over an hour. Tuples for the\ntable (as per pgaccess) was 0. After I did a manual vacuum analyze it\nwent to 511. \n\n \n\nAm I missing something? Should there be something in the log file to\nindicate that autovacuum has run?\n\n \n\nI'm attaching my conf file.\n\n \n\nMedora Schauer\n\nFairfield Industries",
"msg_date": "Mon, 9 Oct 2006 08:38:55 -0500",
"msg_from": "\"Medora Schauer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum not working?"
},
{
"msg_contents": "In response to \"Medora Schauer\" <[email protected]>:\n\n> I've recently moved to 8.1 and find that autovacuum doesn't seem to be\n> working, at least not the way I expected it to. I need the tuple count\n> for a table to be updated so indexes will be used when appropriate. I\n> was expecting the tuples count for a table to be updated after\n> autovacuum ran. This doesn't seem to be the case. I added 511 records\n> to a previously empty table and waited over an hour. Tuples for the\n> table (as per pgaccess) was 0. After I did a manual vacuum analyze it\n> went to 511. \n\n From your attached config file:\n\n#autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Mon, 9 Oct 2006 10:11:28 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum not working?"
}
] |
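The autovacuum thread above turns on 8.1's threshold mechanism: autovacuum only issues a VACUUM or ANALYZE for a table once the number of changed tuples exceeds a base threshold plus a scale factor times the table's size, so a small table can sit below the fairly high 8.1 defaults for a long time. A minimal postgresql.conf sketch for 8.1-era autovacuum follows; the parameter names are the real 8.1 settings, but the values are illustrative and are not taken from the poster's attached config file:

autovacuum = on                          # enable the integrated autovacuum daemon
stats_start_collector = true             # 8.1 autovacuum requires the stats collector
stats_row_level = true                   # ...with row-level statistics enabled
autovacuum_analyze_threshold = 50        # base number of changed tuples before ANALYZE
autovacuum_analyze_scale_factor = 0.1    # plus 10% of the table's tuples
autovacuum_vacuum_threshold = 200        # base number of dead tuples before VACUUM
autovacuum_vacuum_scale_factor = 0.2     # plus 20% of the table's tuples

With thresholds in this range the 511-row table from the thread would be picked up on one of autovacuum's next passes; a manual VACUUM ANALYZE remains the immediate fix, as the original poster found.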