[ { "msg_contents": ">Another benefit of Pentium D over AMD X2, at least until AMD chooses\n>to switch, is that Pentium D supports DDR2, whereas AMD only supports\n>DDR. There are a lot of technical pros and cons to each - with claims\n>from AMD that DDR2 can be slower than DDR - but one claim that isn't\n>often made, but that helped me make my choice:\n>\n> 1) DDR2 supports higher transfer speeds. I'm using DDR2 5400 on\n> the Intel. I think I'm at 3200 or so on the AMD X2.\n>\n> 2) DDR2 is cheaper. I purchased 1 Gbyte DDR2 5400 for $147 CDN.\n> 1 Gbyte of DDR 3200 starts at around the same price, and\n> stretches into $200 - $300 CDN.\n>\nThere's a logical fallacy here that needs to be noted.\n\nTHROUGHPUT is better with DDR2 if and only if there is enough data to be fetched in a serial fashion from memory.\n\nLATENCY however is dependent on the base clock rate of the RAM involved.\nSo PC3200, 200MHz x2, is going to actually perform better than PC2-5400, 166MHz x4, for almost any memory access pattern except those that are highly sequential.\n\nIn fact, even PC2-6400, 200MHz x4, has a disadvantage compared to 200MHz x2 memory.\nThe minimum latency of the two types of memory in clock cycles is always going to be higher for the memory type that multiplies its base clock rate by the most.\n\nFor the mostly random memory access patterns that comprise many DB applications, the base latency of the RAM involved is going to matter more than the peak throughput AKA the bandwidth of that RAM.\n\nThe big message here is that despite engineering tricks and marketing claims, the base clock rate of the RAM you use matters.\n\nA minor point to be noted in addition here is that most DB servers under load are limited by their physical IO subsystem, their HDs, and not the speed of their RAM.\n\nAll of the above comments about the relative performance of different RAM types become insignificant when performance is gated by the HD subsystem. \n\n", "msg_date": "Tue, 25 Apr 2006 23:07:17 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" }, { "msg_contents": "On Tue, Apr 25, 2006 at 11:07:17PM -0400, Ron Peacetree wrote:\n> THROUGHPUT is better with DDR2 if and only if there is enough data\n> to be fetched in a serial fashion from memory.\n> LATENCY however is dependent on the base clock rate of the RAM\n> involved. So PC3200, 200MHz x2, is going to actually perform better\n> than PC2-5400, 166MHz x4, for almost any memory access pattern\n> except those that are highly sequential.\n\nI had forgotten about this. Still, it's not quite as simple as you say.\n\nDDR2 has increased latency, however, it has a greater upper limit,\nand when run at the same clock speed (200 Mhz for 200 Mhz), it is\nnot going to perform worse. Add in double the pre-fetching capability,\nand what you get is that most benchmarks show DDR2 5400 as being\nslightly faster than DDR 3200.\n\nAMD is switching to DDR2, and I believe that, even after making such a\nbig deal about latency, and why they wouldn't switch to DDR2, they are\nnow saying that their on-chip memory controller will be able to access\nDDR2 memory (when they support it soon) faster than Intel can, not\nhaving an on-chip memory controller.\n\nYou said that DB accesses are random. I'm not so sure. In PostgreSQL,\nare not the individual pages often scanned sequentially, especially\nbecause all records are variable length? 
You don't think PostgreSQL\nwill regularly read 32 bytes (8 bytes x 4) at a time, in sequence?\nWhether for table pages, or index pages - I'm not seeing why the\naccesses wouldn't be sequential. You believe PostgreSQL will access\nthe table pages and index pages randomly on a per-byte basis? What\nis the minimum PostgreSQL record size again? Isn't it 32 bytes or\nover? :-)\n\nI wish my systems were running the same OS, and I'd run a test for\nyou. Alas, I don't think comparing Windows to Linux would be valuable.\n\n> A minor point to be noted in addition here is that most DB servers\n> under load are limited by their physical IO subsystem, their HDs,\n> and not the speed of their RAM.\n\nIt seems like a pretty major point to me. :-)\n\nIt's why Opteron with RAID kicks ass over HyperTransport.\n\n> All of the above comments about the relative performance of\n> different RAM types become insignificant when performance is gated\n> by the HD subsystem.\n\nYes.\n\nLuckily - we don't all have Terrabyte databases... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 26 Apr 2006 02:48:53 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" }, { "msg_contents": "On Tue, Apr 25, 2006 at 11:07:17PM -0400, Ron Peacetree wrote:\n> A minor point to be noted in addition here is that most DB servers under load are limited by their physical IO subsystem, their HDs, and not the speed of their RAM.\n\nI think if that were the only consideration we wouldn't be seeing such a\ndramatic difference between AMD and Intel though. Even in a disk-bound\nserver, caching is going to have a tremendous impact, and that's\nessentially entirely bound by memory bandwith and latency.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 26 Apr 2006 17:16:31 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" }, { "msg_contents": "On Wed, Apr 26, 2006 at 02:48:53AM -0400, [email protected] wrote:\n> You said that DB accesses are random. I'm not so sure. In PostgreSQL,\n> are not the individual pages often scanned sequentially, especially\n> because all records are variable length? You don't think PostgreSQL\n> will regularly read 32 bytes (8 bytes x 4) at a time, in sequence?\n> Whether for table pages, or index pages - I'm not seeing why the\n> accesses wouldn't be sequential. You believe PostgreSQL will access\n> the table pages and index pages randomly on a per-byte basis? What\n> is the minimum PostgreSQL record size again? Isn't it 32 bytes or\n> over? :-)\n\nData within a page can absolutely be accessed randomly; it would be\nhorribly inefficient to slog through 8K of data every time you needed to\nfind a single row.\n\nThe header size of tuples is ~23 bytes, depending on your version of\nPostgreSQL, and data fields have to start on the proper alignment\n(generally 4 bytes). 
So essentially the smallest row you can get is 28\nbytes.\n\nI know that tuple headers are dealt with as a C structure, but I don't\nknow if that means accessing any of the header costs the same as\naccessing the whole thing. I don't know if PostgreSQL can access fields\nwithin tuples without having to scan through at least the first part of\npreceeding fields, though I suspect that it can access fixed-width\nfields that sit before any varlena fields directly (without scanning\nthrough the other fields).\n\nIf we ever got to the point of divorcing the in-memory tuple layout from\nthe table layout it'd be interesting to experiment with having all\nvarlena length info stored immediately after all fixed-width fields;\nthat could potentially make accessing varlena's randomly faster. Note\nthat null fields are indicated as such in the null bitmap, so I'm pretty\nsure that their in-tuple position doesn't matter much. Of course if you\nwant the definitive answer, Use The Source.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 26 Apr 2006 17:24:57 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" } ]
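To make the arithmetic at the end of this thread concrete, here is a minimal Python sketch of the minimum-row calculation. The 23-byte header, 4-byte alignment, and 4-byte smallest field are simply the figures quoted in the message above, not values checked against the PostgreSQL source, so treat the numbers as illustrative assumptions.

# Rough sketch of the minimum-tuple-size arithmetic from the message above.
# Header size and alignment are the values cited in the thread, not taken
# from the PostgreSQL source code.

def align_up(nbytes, boundary):
    """Round nbytes up to the next multiple of boundary."""
    return ((nbytes + boundary - 1) // boundary) * boundary

HEADER_BYTES = 23       # approximate tuple header size cited in the thread
ALIGNMENT = 4           # "generally 4 bytes" per the message
SMALLEST_FIELD = 4      # e.g. a single 4-byte integer column

min_row = align_up(HEADER_BYTES, ALIGNMENT) + SMALLEST_FIELD
print(min_row)          # prints 28, matching the "smallest row ... is 28 bytes" figure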
[ { "msg_contents": "Hello list,\n\nwhat is the quickest way of dumping a DB and restoring it? I have done a\n\n \"pg_dump -D database | split --line-bytes 1546m part\"\n\nRestoration as\n\n \"cat part* | psql database 2> errors 1>/dev/null\"\n\nall dumpfiles total about 17Gb. It has been running for 50ish hrs and up \nto about the fourth file (5-6 ish Gb) and this is on a raid 5 server.\n\nA while back I did something similar for a table with where I put all \nthe insert statements in one begin/end/commit block, this slowed down \nthe restoration process. Will the same problem [slow restoration] occur \nif there is no BEGIN and END block? I assume the reason for slow inserts \nin this instance is that it allows for rollback, if this is the case \ncan I turn this off?\n\nThanks in advance\nEric Lam\n", "msg_date": "Wed, 26 Apr 2006 17:14:41 +0930", "msg_from": "Eric Lam <[email protected]>", "msg_from_op": true, "msg_subject": "Slow restoration question" }, { "msg_contents": "Eric Lam <[email protected]> writes:\n> what is the quickest way of dumping a DB and restoring it? I have done a\n\n> \"pg_dump -D database | split --line-bytes 1546m part\"\n\nDon't use \"-D\" if you want fast restore ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Apr 2006 16:23:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question " }, { "msg_contents": "Tom Lane <[email protected]> schrieb:\n\n> Eric Lam <[email protected]> writes:\n> > what is the quickest way of dumping a DB and restoring it? I have done a\n> \n> > \"pg_dump -D database | split --line-bytes 1546m part\"\n> \n> Don't use \"-D\" if you want fast restore ...\n\nhehe, yes ;-)\n\nhttp://people.planetpostgresql.org/devrim/index.php?/archives/44-d-of-pg_dump.html\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 29 Apr 2006 22:40:53 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "Tom Lane wrote:\n\n>Eric Lam <[email protected]> writes:\n> \n>\n>>what is the quickest way of dumping a DB and restoring it? I have done a\n>> \n>>\n>\n> \n>\n>> \"pg_dump -D database | split --line-bytes 1546m part\"\n>> \n>>\n>\n>Don't use \"-D\" if you want fast restore ...\n>\n>\t\t\tregards, tom lane\n>\n> \n>\nthanks, I read that from the doco, the reason why I am using the -D \noption is because I was informed by previous people in the company that \nthey never got a 100% strike rate in database restoration without using \nthe -D or -d options. If I have enough space on the QA/staging machine \nI'll give the no options dump restoration a try.\n\nAnyone have any estimates the time differences between the -D, -d and \n[using no option].\n\nregards\nEric Lam\n", "msg_date": "Mon, 01 May 2006 09:07:25 +0930", "msg_from": "Eric Lam <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n> all dumpfiles total about 17Gb. It has been running for 50ish hrs and up \n> to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.\n\nRAID5 generally doesn't bode too well for performance; that could be\npart of the issue.\n-- \nJim C. 
Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 12:16:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "Everyone here always says that RAID 5 isn't good for Postgres. We \nhave an Apple Xserve RAID configured with RAID 5. We chose RAID 5 \nbecause Apple said their Xserve RAID was \"optimized\" for RAID 5. Not \nsure if we made the right decision though. They give an option for \nformatting as RAID 0+1. Is that the same as RAID 10 that everyone \ntalks about? Or is it the reverse?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 2, 2006, at 11:16 AM, Jim C. Nasby wrote:\n\n> On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n>> all dumpfiles total about 17Gb. It has been running for 50ish hrs \n>> and up\n>> to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.\n>\n> RAID5 generally doesn't bode too well for performance; that could be\n> part of the issue.\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n", "msg_date": "Tue, 2 May 2006 12:40:43 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "They are not equivalent. As I understand it, RAID 0+1 performs about\nthe same as RAID 10 when everything is working, but degrades much less\nnicely in the presence of a single failed drive, and is more likely to\nsuffer catastrophic data loss if multiple drives fail.\n\n-- Mark\n\nOn Tue, 2006-05-02 at 12:40 -0600, Brendan Duddridge wrote:\n> Everyone here always says that RAID 5 isn't good for Postgres. We \n> have an Apple Xserve RAID configured with RAID 5. We chose RAID 5 \n> because Apple said their Xserve RAID was \"optimized\" for RAID 5. Not \n> sure if we made the right decision though. They give an option for \n> formatting as RAID 0+1. Is that the same as RAID 10 that everyone \n> talks about? Or is it the reverse?\n> \n> Thanks,\n> \n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> \n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n> \n> http://www.clickspace.com\n> \n> On May 2, 2006, at 11:16 AM, Jim C. Nasby wrote:\n> \n> > On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n> >> all dumpfiles total about 17Gb. It has been running for 50ish hrs \n> >> and up\n> >> to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.\n> >\n> > RAID5 generally doesn't bode too well for performance; that could be\n> > part of the issue.\n> > -- \n> > Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> > Pervasive Software http://pervasive.com work: 512-231-6117\n> > vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Tue, 02 May 2006 11:49:15 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "RAID 10 is better than RAID 0+1. There is a lot of information on \nthe net about this, but here is the first one that popped up on \ngoogle for me.\n\nhttp://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html\n\nThe quick summary is that performance is about the same between the \ntwo, but RAID 10 gives better fault tolerance and rebuild \nperformance. I have seen docs for RAID cards that have confused \nthese two RAID levels. In addition, some cards claim to support RAID \n10, when they actually support RAID 0+1 or even RAID 0+1 with \nconcatenation (lame, some of the Dell PERCs have this).\n\nRAID 10 with 6 drives would stripe across 3 mirrored pairs. RAID 0+1 \nwith 6 drives is a mirror of two striped arrays (3 disks each). RAID \n0+1 (with concatenation) using 6 drives is a mirror of two volumes \n(kind of like JBOD) each consisting of 3 drives concatenated together \n(it's a cheap implementation, and it gives about the same performance \nas RAID 1 but with increased storage capacity and less fault \ntolerance). RAID 10 is better than RAID 5 (especially with 6 or less \ndisks) because you don't have the performance hit for parity (which \ndramatically affects rebuild performance and write performance) and \nyou get better fault tolerance (up to 3 disks can fail in a 6 disk \nRAID 10 and you can still be online, with RAID 5 you can only lose 1 \ndrive). All of this comes with a higher cost (more drives and higher \nend cards).\n\n-- Will Reese http://blog.rezra.com\n\n\nOn May 2, 2006, at 1:49 PM, Mark Lewis wrote:\n\n> They are not equivalent. As I understand it, RAID 0+1 performs about\n> the same as RAID 10 when everything is working, but degrades much less\n> nicely in the presence of a single failed drive, and is more likely to\n> suffer catastrophic data loss if multiple drives fail.\n>\n> -- Mark\n>\n> On Tue, 2006-05-02 at 12:40 -0600, Brendan Duddridge wrote:\n>> Everyone here always says that RAID 5 isn't good for Postgres. We\n>> have an Apple Xserve RAID configured with RAID 5. We chose RAID 5\n>> because Apple said their Xserve RAID was \"optimized\" for RAID 5. Not\n>> sure if we made the right decision though. They give an option for\n>> formatting as RAID 0+1. Is that the same as RAID 10 that everyone\n>> talks about? Or is it the reverse?\n>>\n>> Thanks,\n>>\n>> ____________________________________________________________________\n>> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>>\n>> ClickSpace Interactive Inc.\n>> Suite L100, 239 - 10th Ave. SE\n>> Calgary, AB T2G 0V9\n>>\n>> http://www.clickspace.com\n>>\n>> On May 2, 2006, at 11:16 AM, Jim C. Nasby wrote:\n>>\n>>> On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n>>>> all dumpfiles total about 17Gb. 
It has been running for 50ish hrs\n>>>> and up\n>>>> to about the fourth file (5-6 ish Gb) and this is on a raid 5 \n>>>> server.\n>>>\n>>> RAID5 generally doesn't bode too well for performance; that could be\n>>> part of the issue.\n>>> -- \n>>> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n>>> Pervasive Software http://pervasive.com work: 512-231-6117\n>>> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Tue, 2 May 2006 14:34:16 -0500", "msg_from": "Will Reese <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "BTW, you should be able to check to see what the controller is actually\ndoing by pulling one of the drives from a running array. If it only\nhammers 2 drives during the rebuild, it's RAID10. If it hammers all the\ndrives, it's 0+1.\n\nAs for Xserve raid, it is possible to eliminate most (or maybe even all)\nof the overhead associated with RAID5, depending on how tricky the\ncontroller wants to be. I believe many large storage appliances actually\nuse RAID5 internally, but they perform a bunch of 'magic' behind the\nscenes to get good performance from it. So, it is possible that the\nXServe RAID performs quite well on RAID5. If you provided the results\nfrom bonnie as well as info about the drives I suspect someone here\ncould tell you if you're getting close to RAID10 performance or not.\n\nOn Tue, May 02, 2006 at 02:34:16PM -0500, Will Reese wrote:\n> RAID 10 is better than RAID 0+1. There is a lot of information on \n> the net about this, but here is the first one that popped up on \n> google for me.\n> \n> http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html\n> \n> The quick summary is that performance is about the same between the \n> two, but RAID 10 gives better fault tolerance and rebuild \n> performance. I have seen docs for RAID cards that have confused \n> these two RAID levels. In addition, some cards claim to support RAID \n> 10, when they actually support RAID 0+1 or even RAID 0+1 with \n> concatenation (lame, some of the Dell PERCs have this).\n> \n> RAID 10 with 6 drives would stripe across 3 mirrored pairs. RAID 0+1 \n> with 6 drives is a mirror of two striped arrays (3 disks each). RAID \n> 0+1 (with concatenation) using 6 drives is a mirror of two volumes \n> (kind of like JBOD) each consisting of 3 drives concatenated together \n> (it's a cheap implementation, and it gives about the same performance \n> as RAID 1 but with increased storage capacity and less fault \n> tolerance). RAID 10 is better than RAID 5 (especially with 6 or less \n> disks) because you don't have the performance hit for parity (which \n> dramatically affects rebuild performance and write performance) and \n> you get better fault tolerance (up to 3 disks can fail in a 6 disk \n> RAID 10 and you can still be online, with RAID 5 you can only lose 1 \n> drive). 
All of this comes with a higher cost (more drives and higher \n> end cards).\n> \n> -- Will Reese http://blog.rezra.com\n> \n> \n> On May 2, 2006, at 1:49 PM, Mark Lewis wrote:\n> \n> >They are not equivalent. As I understand it, RAID 0+1 performs about\n> >the same as RAID 10 when everything is working, but degrades much less\n> >nicely in the presence of a single failed drive, and is more likely to\n> >suffer catastrophic data loss if multiple drives fail.\n> >\n> >-- Mark\n> >\n> >On Tue, 2006-05-02 at 12:40 -0600, Brendan Duddridge wrote:\n> >>Everyone here always says that RAID 5 isn't good for Postgres. We\n> >>have an Apple Xserve RAID configured with RAID 5. We chose RAID 5\n> >>because Apple said their Xserve RAID was \"optimized\" for RAID 5. Not\n> >>sure if we made the right decision though. They give an option for\n> >>formatting as RAID 0+1. Is that the same as RAID 10 that everyone\n> >>talks about? Or is it the reverse?\n> >>\n> >>Thanks,\n> >>\n> >>____________________________________________________________________\n> >>Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> >>\n> >>ClickSpace Interactive Inc.\n> >>Suite L100, 239 - 10th Ave. SE\n> >>Calgary, AB T2G 0V9\n> >>\n> >>http://www.clickspace.com\n> >>\n> >>On May 2, 2006, at 11:16 AM, Jim C. Nasby wrote:\n> >>\n> >>>On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n> >>>>all dumpfiles total about 17Gb. It has been running for 50ish hrs\n> >>>>and up\n> >>>>to about the fourth file (5-6 ish Gb) and this is on a raid 5 \n> >>>>server.\n> >>>\n> >>>RAID5 generally doesn't bode too well for performance; that could be\n> >>>part of the issue.\n> >>>-- \n> >>>Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> >>>Pervasive Software http://pervasive.com work: 512-231-6117\n> >>>vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> >>>\n> >>>---------------------------(end of\n> >>>broadcast)---------------------------\n> >>>TIP 4: Have you searched our list archives?\n> >>>\n> >>> http://archives.postgresql.org\n> >>>\n> >>\n> >>\n> >>\n> >>---------------------------(end of \n> >>broadcast)---------------------------\n> >>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> >> choose an index scan if your joining column's datatypes do not\n> >> match\n> >\n> >---------------------------(end of \n> >broadcast)---------------------------\n> >TIP 6: explain analyze is your friend\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 16:53:30 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "Hi Jim,\n\nThe output from bonnie on my boot drive is:\n\nFile './Bonnie.27964', size: 0\nWriting with putc()...done\nRewriting...done\nWriting intelligently...done\nReading with getc()...done\nReading intelligently...done\nSeeker 2...Seeker 1...Seeker 3...start 'em...done...done...done...\n -------Sequential Output-------- ---Sequential Input-- \n--Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- \n--Seeks---\nMachine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec % \nCPU /sec %CPU\n 0 36325 98.1 66207 22.9 60663 16.2 50553 99.9 710972 \n100.0 44659.8 191.3\n\n\nAnd the output from the RAID drive is:\n\nFile './Bonnie.27978', size: 0\nWriting with putc()...done\nRewriting...done\nWriting intelligently...done\nReading with getc()...done\nReading intelligently...done\nSeeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...\n -------Sequential Output-------- ---Sequential Input-- \n--Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- \n--Seeks---\nMachine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec % \nCPU /sec %CPU\n 0 40365 99.4 211625 61.4 212425 57.0 50740 99.9 730515 \n100.0 45897.9 190.1\n\n\nEach drive in the RAID 5 is a 400 GB serial ATA drive. I'm not sure \nthe manufacturer or the model number as it was all in a packaged box \nwhen we received it and I didn't check.\n\nDo these numbers seem decent enough for a Postgres database?\n\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 2, 2006, at 3:53 PM, Jim C. Nasby wrote:\n\n> BTW, you should be able to check to see what the controller is \n> actually\n> doing by pulling one of the drives from a running array. If it only\n> hammers 2 drives during the rebuild, it's RAID10. If it hammers all \n> the\n> drives, it's 0+1.\n>\n> As for Xserve raid, it is possible to eliminate most (or maybe even \n> all)\n> of the overhead associated with RAID5, depending on how tricky the\n> controller wants to be. I believe many large storage appliances \n> actually\n> use RAID5 internally, but they perform a bunch of 'magic' behind the\n> scenes to get good performance from it. So, it is possible that the\n> XServe RAID performs quite well on RAID5. If you provided the results\n> from bonnie as well as info about the drives I suspect someone here\n> could tell you if you're getting close to RAID10 performance or not.\n>\n> On Tue, May 02, 2006 at 02:34:16PM -0500, Will Reese wrote:\n>> RAID 10 is better than RAID 0+1. There is a lot of information on\n>> the net about this, but here is the first one that popped up on\n>> google for me.\n>>\n>> http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html\n>>\n>> The quick summary is that performance is about the same between the\n>> two, but RAID 10 gives better fault tolerance and rebuild\n>> performance. I have seen docs for RAID cards that have confused\n>> these two RAID levels. In addition, some cards claim to support RAID\n>> 10, when they actually support RAID 0+1 or even RAID 0+1 with\n>> concatenation (lame, some of the Dell PERCs have this).\n>>\n>> RAID 10 with 6 drives would stripe across 3 mirrored pairs. RAID 0+1\n>> with 6 drives is a mirror of two striped arrays (3 disks each). 
RAID\n>> 0+1 (with concatenation) using 6 drives is a mirror of two volumes\n>> (kind of like JBOD) each consisting of 3 drives concatenated together\n>> (it's a cheap implementation, and it gives about the same performance\n>> as RAID 1 but with increased storage capacity and less fault\n>> tolerance). RAID 10 is better than RAID 5 (especially with 6 or less\n>> disks) because you don't have the performance hit for parity (which\n>> dramatically affects rebuild performance and write performance) and\n>> you get better fault tolerance (up to 3 disks can fail in a 6 disk\n>> RAID 10 and you can still be online, with RAID 5 you can only lose 1\n>> drive). All of this comes with a higher cost (more drives and higher\n>> end cards).\n>>\n>> -- Will Reese http://blog.rezra.com\n>>\n>>\n>> On May 2, 2006, at 1:49 PM, Mark Lewis wrote:\n>>\n>>> They are not equivalent. As I understand it, RAID 0+1 performs \n>>> about\n>>> the same as RAID 10 when everything is working, but degrades much \n>>> less\n>>> nicely in the presence of a single failed drive, and is more \n>>> likely to\n>>> suffer catastrophic data loss if multiple drives fail.\n>>>\n>>> -- Mark\n>>>\n>>> On Tue, 2006-05-02 at 12:40 -0600, Brendan Duddridge wrote:\n>>>> Everyone here always says that RAID 5 isn't good for Postgres. We\n>>>> have an Apple Xserve RAID configured with RAID 5. We chose RAID 5\n>>>> because Apple said their Xserve RAID was \"optimized\" for RAID 5. \n>>>> Not\n>>>> sure if we made the right decision though. They give an option for\n>>>> formatting as RAID 0+1. Is that the same as RAID 10 that everyone\n>>>> talks about? Or is it the reverse?\n>>>>\n>>>> Thanks,\n>>>>\n>>>> ___________________________________________________________________ \n>>>> _\n>>>> Brendan Duddridge | CTO | 403-277-5591 x24 | \n>>>> [email protected]\n>>>>\n>>>> ClickSpace Interactive Inc.\n>>>> Suite L100, 239 - 10th Ave. SE\n>>>> Calgary, AB T2G 0V9\n>>>>\n>>>> http://www.clickspace.com\n>>>>\n>>>> On May 2, 2006, at 11:16 AM, Jim C. Nasby wrote:\n>>>>\n>>>>> On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:\n>>>>>> all dumpfiles total about 17Gb. It has been running for 50ish hrs\n>>>>>> and up\n>>>>>> to about the fourth file (5-6 ish Gb) and this is on a raid 5\n>>>>>> server.\n>>>>>\n>>>>> RAID5 generally doesn't bode too well for performance; that \n>>>>> could be\n>>>>> part of the issue.\n>>>>> -- \n>>>>> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n>>>>> Pervasive Software http://pervasive.com work: 512-231-6117\n>>>>> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>>>>>\n>>>>> ---------------------------(end of\n>>>>> broadcast)---------------------------\n>>>>> TIP 4: Have you searched our list archives?\n>>>>>\n>>>>> http://archives.postgresql.org\n>>>>>\n>>>>\n>>>>\n>>>>\n>>>> ---------------------------(end of\n>>>> broadcast)---------------------------\n>>>> TIP 9: In versions below 8.0, the planner will ignore your \n>>>> desire to\n>>>> choose an index scan if your joining column's datatypes do \n>>>> not\n>>>> match\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n> -- \n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n", "msg_date": "Tue, 2 May 2006 20:09:52 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:\n> -------Sequential Output-------- ---Sequential Input-- --Random--\n> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\n>Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec % CPU /sec %CPU\n> 0 40365 99.4 211625 61.4 212425 57.0 50740 99.9 730515 100.0 45897.9 190.1\n[snip]\n>Do these numbers seem decent enough for a Postgres database?\n\nThese numbers seem completely bogus, probably because bonnie is using a \nfile size smaller than memory and is reporting caching effects. (730MB/s \nisn't possible for a single external RAID unit with a pair of 2Gb/s \ninterfaces.) bonnie in general isn't particularly useful on modern \nlarge-ram systems, in my experience.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 08:18:49 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "\nOn May 3, 2006, at 8:18 AM, Michael Stone wrote:\n\n> On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:\n>> -------Sequential Output-------- ---Sequential \n>> Input-- --Random--\n>> -Per Char- --Block--- -Rewrite-- -Per Char- -- \n>> Block--- --Seeks---\n>> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec % \n>> CPU /sec %CPU\n>> 0 40365 99.4 211625 61.4 212425 57.0 50740 99.9 \n>> 730515 100.0 45897.9 190.1\n> [snip]\n>> Do these numbers seem decent enough for a Postgres database?\n>\n> These numbers seem completely bogus, probably because bonnie is \n> using a file size smaller than memory and is reporting caching \n> effects. (730MB/s isn't possible for a single external RAID unit \n> with a pair of 2Gb/s interfaces.) bonnie in general isn't \n> particularly useful on modern large-ram systems, in my experience.\n>\n\nBonnie++ is able to use very large datasets. It also tries to figure \nout hte size you want (2x ram) - the original bonnie is limited to 2GB.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 3 May 2006 09:19:52 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On May 3, 2006, at 9:19 AM, Jeff Trout wrote:\n\n> Bonnie++ is able to use very large datasets. It also tries to \n> figure out hte size you want (2x ram) - the original bonnie is \n> limited to 2GB.\n\nbut you have to be careful building bonnie++ since it has bad \nassumptions about which systems can do large files... 
eg, on FreeBSD \nit doesn't try large files unless you patch it appropriately (which \nthe freebsd port does for you).", "msg_date": "Wed, 3 May 2006 10:16:10 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "\nOn May 3, 2006, at 10:16 AM, Vivek Khera wrote:\n\n>\n> On May 3, 2006, at 9:19 AM, Jeff Trout wrote:\n>\n>> Bonnie++ is able to use very large datasets. It also tries to \n>> figure out hte size you want (2x ram) - the original bonnie is \n>> limited to 2GB.\n>\n> but you have to be careful building bonnie++ since it has bad \n> assumptions about which systems can do large files... eg, on \n> FreeBSD it doesn't try large files unless you patch it \n> appropriately (which the freebsd port does for you).\n>\n\nOn platforms it thinks can't use large files it uses multiple sets of \n2GB files. (Sort of like our beloved PG)\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 3 May 2006 10:46:51 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:\n>Bonnie++ is able to use very large datasets. It also tries to figure \n>out hte size you want (2x ram) - the original bonnie is limited to 2GB.\n\nYes, and once you get into large datasets like that the quality of the \ndata is fairly poor because the program can't really eliminate cache \neffects. IOW, it tries but (in my experience) doesn't succeed very well.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 11:59:39 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, 2006-05-03 at 10:59, Michael Stone wrote:\n> On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:\n> >Bonnie++ is able to use very large datasets. It also tries to figure \n> >out hte size you want (2x ram) - the original bonnie is limited to 2GB.\n> \n> Yes, and once you get into large datasets like that the quality of the \n> data is fairly poor because the program can't really eliminate cache \n> effects. IOW, it tries but (in my experience) doesn't succeed very well.\n\nI have often used the mem=xxx arguments to lilo when needing to limit\nthe amount of memory for testing purposes. Just google for limit memory\nand your bootloader to find the options.\n", "msg_date": "Wed, 03 May 2006 11:07:15 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:\n>I have often used the mem=xxx arguments to lilo when needing to limit\n>the amount of memory for testing purposes. Just google for limit memory\n>and your bootloader to find the options.\n\nOr, just don't worry about it. Even if you get bonnie to reflect real \nnumbers, so what? In general the goal is to optimize application \nperformance, not bonnie performance. 
A simple set of dd's is enough to \ngive you a rough idea of disk performance, beyond that you really need \nto see how your disk is performing with your actual workload.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 13:06:06 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 01:06:06PM -0400, Michael Stone wrote:\n> On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:\n> >I have often used the mem=xxx arguments to lilo when needing to limit\n> >the amount of memory for testing purposes. Just google for limit memory\n> >and your bootloader to find the options.\n> \n> Or, just don't worry about it. Even if you get bonnie to reflect real \n> numbers, so what? In general the goal is to optimize application \n> performance, not bonnie performance. A simple set of dd's is enough to \n> give you a rough idea of disk performance, beyond that you really need \n> to see how your disk is performing with your actual workload.\n\nWell, in this case the question was about random write access, which dd\nwon't show you.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 3 May 2006 13:08:21 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:\n>Well, in this case the question was about random write access, which dd\n>won't show you.\n\nThat's the kind of thing you need to measure against your workload.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 15:26:54 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, 2006-05-03 at 14:26, Michael Stone wrote:\n> On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:\n> >Well, in this case the question was about random write access, which dd\n> >won't show you.\n> \n> That's the kind of thing you need to measure against your workload.\n\nOf course, the final benchmarking should be your application.\n\nBut, supposed you're comparing 12 or so RAID controllers for a one week\nperiod, and you don't even have the app fully written yet, and because\nof time constraints, you'll need the server ready before the app is\ndone. You don't need perfection, but you need some idea how the array\nperforms. I maintain that both methodologies have their uses. \n\nNote that I'm referring to bonnie++ as was an earlier poster. It\ncertainly seems capable of giving you a good idea of how your hardware\nwill behave under load.\n", "msg_date": "Wed, 03 May 2006 14:40:15 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:\n>Note that I'm referring to bonnie++ as was an earlier poster. It\n>certainly seems capable of giving you a good idea of how your hardware\n>will behave under load.\n\nIME it give fairly useless results. YMMV. Definately the numbers posted \nbefore seem bogus. If you have some way to make those figures useful in \nyour circumstance, great. 
Too often I see people taking bonnie numbers \nat face value and then being surprised that don't relate at all to \nreal-world performance. If your experience differs, fine.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 16:53:08 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, 2006-05-03 at 15:53, Michael Stone wrote:\n> On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:\n> >Note that I'm referring to bonnie++ as was an earlier poster. It\n> >certainly seems capable of giving you a good idea of how your hardware\n> >will behave under load.\n> \n> IME it give fairly useless results. YMMV. Definately the numbers posted \n> before seem bogus. If you have some way to make those figures useful in \n> your circumstance, great. Too often I see people taking bonnie numbers \n> at face value and then being surprised that don't relate at all to \n> real-world performance. If your experience differs, fine.\n\nI think the real problem is that people use the older bonnie that can\nonly work with smaller datasets on a machine with all the memory\nenabled. This will, for certain, give meaningless numbers.\n\nOTOH, having used bonnie++ on a machine artificially limited to 256 to\n512 meg or ram or so, has given me some very useful numbers, especially\nif you set the data set size to be several gigabytes.\n\nKeep in mind, the numbers listed before likely WERE generated on a\nmachine with plenty of memory using the older bonnie, so those numbers\nshould be bogus.\n\nIf you've not tried bonnie++ on a limited memory machine, you really\nshould. It's a quite useful tool for a simple first pass to figure out\nwhich RAID and fs configurations should be tested more thoroughly.\n", "msg_date": "Wed, 03 May 2006 16:30:32 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" }, { "msg_contents": "On Wed, May 03, 2006 at 04:30:32PM -0500, Scott Marlowe wrote:\n>If you've not tried bonnie++ on a limited memory machine, you really\n>should.\n\nYes, I have. I also patched bonnie to handle large files and other such \nnifty things before bonnie++ was forked. Mostly I just didn't get much \nvalue out of all that, because at the end of theago day optimizing for \nbonnie just doesn't equate to optimizing for real-world workloads. \nAgain, if it's useful for your workload, great. \n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 18:00:30 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow restoration question" } ]
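The thread above ends on the point that a simple dd run only gives a rough sequential number, and that random access against something like your real workload is what actually matters. Below is a minimal, assumption-heavy Python sketch of a random 8K read probe in that spirit; the file path is a placeholder, the probe count is arbitrary, and (as discussed above for bonnie) the test file needs to be several times larger than RAM or the result mostly measures the page cache rather than the disks.

# Minimal random-read probe (Unix, Python 3). Illustrative only; it is not a
# substitute for testing the real application, and cache effects still apply
# unless the file is much larger than RAM.

import os, random, time

PATH = "/path/to/large/testfile"   # hypothetical path; make this file several times RAM size
BLOCK = 8192                       # PostgreSQL-sized 8K block
PROBES = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
start = time.time()
for _ in range(PROBES):
    offset = random.randrange(0, size - BLOCK)
    os.pread(fd, BLOCK, offset)    # read one 8K block at a random offset
elapsed = time.time() - start
os.close(fd)
print("avg random 8K read: %.2f ms" % (elapsed / PROBES * 1000))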
[ { "msg_contents": "I'm posting this to the entire performance list in the hopes that it will be generally useful.\n=r\n\n-----Original Message-----\n>From: [email protected]\n>Sent: Apr 26, 2006 3:25 AM\n>To: Ron Peacetree <[email protected]>\n>Subject: Re: [PERFORM] Large (8M) cache vs. dual-core CPUs\n>\n>Hi Ron:\n>\n>As a result of your post on the matter, I've been redoing some of my\n>online research on this subject, to see whether I do have one or more\n>things wrong.\n>\nI'm always in favor of independent investigation to find the truth. :-)\n\n\n>You say:\n>\n>> THROUGHPUT is better with DDR2 if and only if there is enough data\n>> to be fetched in a serial fashion from memory.\n>...\n>> So PC3200, 200MHz x2, is going to actually perform better than\n>> PC2-5400, 166MHz x4, for almost any memory access pattern except\n>> those that are highly sequential.\n>...\n>> For the mostly random memory access patterns that comprise many DB\n>> applications, the base latency of the RAM involved is going to\n>> matter more than the peak throughput AKA the bandwidth of that RAM.\n>\n>I'm trying to understand right now - why does DDR2 require data to be\n>fetched in a serial fashion, in order for it to maximize bandwidth?\n>\nSDR transfers data on either the rising or falling edge of its clock cycle.\n\nDDR transfers data on both the rising and falling edge of the base clock signal. If there is a contiguous chunk of 2+ datums to be transferred.\n\nDDR2 basically has a second clock that cycles at 2x the rate of the base clock and thus we get 4 data transfers per base clock cycle. If there is a contiguous chunk of 4+ datums to be transferred.\n\nNote also what happens when transferring the first datum after a lull period.\nFor purposes of example, let's pretend that we are talking about a base clock rate of 200MHz= 5ns.\n\nThe SDR still transfers data every 5ns no matter what.\nThe DDR transfers the 1st datum in 10ns and then assuming there are at least 2 sequential datums to be transferred will transfer the 2nd and subsequent sequential pieces of data every 2.5ns.\nThe DDR2 transfers the 1st datum in 20ns and then assuming there are at least 4 sequential datums to be transferred will transfer the 2nd and subsequent sequential pieces of data every 1.25ns.\n\nThus we can see that randomly accessing RAM degrades performance significantly for DDR and DDR2. We can also see that the conditions for optimal RAM performance become more restrictive as we go from SDR to DDR to DDR2.\nThe reason DDR2 with a low base clock rate excelled at tasks like streaming multimedia and stank at things like small transaction OLTP DB applications is now apparent.\n\nFactors like CPU prefetching and victim buffers can muddy this picture a bit.\nAlso, if the CPU's off die IO is slower than the RAM it is talking to, how fast that RAM is becomes unimportant.\n\nThe reason AMD is has held off from supporting DDR2 until now are:\n1. DDR is EOL. JEDEC is not ratifying any DDR faster than 200x2 while DDR2 standards as fast as 333x4 are likely to be ratified (note that Intel pretty much avoided DDR, leaving it to AMD, while DDR2 is Intel's main RAM technology. Guess who has more pull with JEDEC?)\n\n2. DDR and DDR2 RAM with equal base clock rates are finally available, removing the biggest performance difference between DDR and DDR2.\n\n3. Due to the larger demand for DDR2, more of it is produced. That in turn has resulted in larger supplies of DDR2 than DDR. 
Which in turn, especially when combined with the factors above, has resulted in lower prices for DDR2 than for DDR of the same or faster base clock rate by now.\n\nHope this is helpful,\nRon\n", "msg_date": "Wed, 26 Apr 2006 08:40:37 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" }, { "msg_contents": "\n>\n>The reason AMD is has held off from supporting DDR2 until now are:\n>1. DDR is EOL. JEDEC is not ratifying any DDR faster than 200x2 while DDR2 standards as fast as 333x4 are likely to be ratified (note that Intel pretty much avoided DDR, leaving it to AMD, while DDR2 is Intel's main RAM technology. Guess who has more pull with JEDEC?)\n>\n> \n>\nDDR2 is to RDRAM as C# is to Java\n\n;)\n\n\n", "msg_date": "Wed, 26 Apr 2006 08:02:23 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" } ]
[ { "msg_contents": "Mea Culpa. There is a mistake in my example for SDR vs DDR vs DDR2.\nThis is what I get for posting before my morning coffee.\n\nThe base latency for all of the memory types is that of the base clock rate; 200MHz= 5ns in my given examples.\n\nI double factored, making DDR and DDR2 worse than they actually are.\n\nAgain, my apologies.\nRon\n\n-----Original Message-----\n>From: Ron Peacetree <[email protected]>\n>Sent: Apr 26, 2006 8:40 AM\n>To: [email protected], [email protected]\n>Subject: Re: [PERFORM] Large (8M) cache vs. dual-core CPUs\n>\n>I'm posting this to the entire performance list in the hopes that it will be generally useful.\n>=r\n<snip>\n>\n>Note also what happens when transferring the first datum after a lull period.\n>For purposes of example, let's pretend that we are talking about a base clock rate of 200MHz= 5ns.\n>\n>The SDR still transfers data every 5ns no matter what.\n>The DDR transfers the 1st datum in 10ns and then assuming there are at least 2 sequential datums to be >transferred will transfer the 2nd and subsequent sequential pieces of data every 2.5ns.\n>The DDR2 transfers the 1st datum in 20ns and then assuming there are at least 4 sequential datums to be >transferred will transfer the 2nd and subsequent sequential pieces of data every 1.25ns.\n>\n=5= ns to first transfer in all 3 casess. Bad Ron. No Biscuit!\n\n>\n>Thus we can see that randomly accessing RAM degrades performance significantly for DDR and DDR2. We can >also see that the conditions for optimal RAM performance become more restrictive as we go from SDR to DDR to >DDR2.\n>The reason DDR2 with a low base clock rate excelled at tasks like streaming multimedia and stank at things like >small transaction OLTP DB applications is now apparent.\n>\n>Factors like CPU prefetching and victim buffers can muddy this picture a bit.\n>Also, if the CPU's off die IO is slower than the RAM it is talking to, how fast that RAM is becomes unimportant.\n>\nThese statements, and everything else I posted, are accurate.\n", "msg_date": "Wed, 26 Apr 2006 11:10:40 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (8M) cache vs. dual-core CPUs" } ]
[ { "msg_contents": "I was wondering if there were any performance issues with having a data\ndirectory that was an nfs mounted drive? Say like a SAN or NAS device? Has\nanyone done this before?\n\n\n\n\nRunning on an NFS Mounted Directory\n\n\nI was wondering if there were any performance issues with having a data directory that was an nfs mounted drive?  Say like a SAN or NAS device? Has anyone done this before?", "msg_date": "Wed, 26 Apr 2006 22:06:58 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Running on an NFS Mounted Directory" }, { "msg_contents": "On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:\n> I was wondering if there were any performance issues with having a data\n> directory that was an nfs mounted drive? Say like a SAN or NAS device? Has\n> anyone done this before?\n \nMy understanding is that NFS is pretty poor in performance in general,\nso I would expect it to be particularly bad for a DB. You might run\nsome (non-DB) performance tests to get a feel for how bad it might me.\n(Someone once told me that NFS topped out at around 12MB/s, but I don't\nknow if that's really true [they were trying to sell a competitive\nnetworked filesystem]).\n\nIn any event, you're at least limited by ethernet speeds, if not more.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Wed, 26 Apr 2006 19:35:42 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "We have gotten very good performance from netapp and postgres 7.4.11 .\n\nI was able to push about 100MB/s over gigE, but that was limited by \nour netapp.\n\nDAS will generally always be faster, but if for example you have 2 \ndisks vs. 100 NFS mounted ,NFS will be faster.\n\nNFS is very reliable and I would stay away from iscsi.\n\n\n\nRegards,\nDan Gorman\n\nOn Apr 26, 2006, at 7:35 PM, Steve Wampler wrote:\n\n> On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:\n>> I was wondering if there were any performance issues with having a \n>> data\n>> directory that was an nfs mounted drive? Say like a SAN or NAS \n>> device? Has\n>> anyone done this before?\n>\n> My understanding is that NFS is pretty poor in performance in general,\n> so I would expect it to be particularly bad for a DB. You might run\n> some (non-DB) performance tests to get a feel for how bad it might me.\n> (Someone once told me that NFS topped out at around 12MB/s, but I \n> don't\n> know if that's really true [they were trying to sell a competitive\n> networked filesystem]).\n>\n> In any event, you're at least limited by ethernet speeds, if not more.\n>\n> -- \n> Steve Wampler -- [email protected]\n> The gods that smiled on your birth are now laughing out loud.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Wed, 26 Apr 2006 21:43:26 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Wed, Apr 26, 2006 at 07:35:42PM -0700, Steve Wampler wrote:\n> On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:\n> > I was wondering if there were any performance issues with having a data\n> > directory that was an nfs mounted drive? Say like a SAN or NAS device? 
Has\n> > anyone done this before?\n> \n> My understanding is that NFS is pretty poor in performance in general,\n> so I would expect it to be particularly bad for a DB. You might run\n> some (non-DB) performance tests to get a feel for how bad it might me.\n> (Someone once told me that NFS topped out at around 12MB/s, but I don't\n> know if that's really true [they were trying to sell a competitive\n> networked filesystem]).\n> \n> In any event, you're at least limited by ethernet speeds, if not more.\n\nMore importantly, the latency involved will kill commit performance. If\nit doesn't then it's likely that fsync isn't being obeyed, which means 0\ndata integrity.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 26 Apr 2006 23:55:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "I am looking for the best solution to have a large amount of disk storage\nattached to my PostgreSQL 8.1 server. I was thinking of having a san or nas\nattached device be mounted by the pg server over nfs, hence the question\nabout nfs performance. What other options/protocols are there to get high\nperformance and data integrity while having the benefit of not having the\nphysical storage attached to the db server?\n\n\nOn 4/27/06 12:55 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> On Wed, Apr 26, 2006 at 07:35:42PM -0700, Steve Wampler wrote:\n>> On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:\n>>> I was wondering if there were any performance issues with having a data\n>>> directory that was an nfs mounted drive? Say like a SAN or NAS device? Has\n>>> anyone done this before?\n>> \n>> My understanding is that NFS is pretty poor in performance in general,\n>> so I would expect it to be particularly bad for a DB. You might run\n>> some (non-DB) performance tests to get a feel for how bad it might me.\n>> (Someone once told me that NFS topped out at around 12MB/s, but I don't\n>> know if that's really true [they were trying to sell a competitive\n>> networked filesystem]).\n>> \n>> In any event, you're at least limited by ethernet speeds, if not more.\n> \n> More importantly, the latency involved will kill commit performance. If\n> it doesn't then it's likely that fsync isn't being obeyed, which means 0\n> data integrity.\n\n\n", "msg_date": "Thu, 27 Apr 2006 08:38:55 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 08:38:55AM -0400, Ketema Harris wrote:\n>I am looking for the best solution to have a large amount of disk storage\n>attached to my PostgreSQL 8.1 server. \n\n>What other options/protocols are there to get high performance and data \n>integrity while having the benefit of not having the physical storage \n>attached to the db server?\n\nThese are two distinct requirements. Are both really requirements or is \none \"nice to have\"? The \"best\" solution for \"a large amount of disk \nstorage\" isn't \"not having the physical storage attached to the db \nserver\". If you use non-local storage it will be slower and more \nexpensive, quite likely by a large margin. 
There may be other advantages \nto doing so, but you haven't mentioned any of those as requirements.\n\nMike Stone\n", "msg_date": "Thu, 27 Apr 2006 08:44:57 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "OK. My thought process was that having non local storage as say a big raid\n5 san ( I am talking 5 TB with expansion capability up to 10 ) would allow\nme to have redundancy, expandability, and hopefully still retain decent\nperformance from the db. I also would hopefully then not have to do\nperiodic backups from the db server to some other type of storage. Is this\nnot a good idea? How bad of a performance hit are we talking about? Also,\nin regards to the commit data integrity, as far as the db is concerned once\nthe data is sent to the san or nas isn't it \"written\"? The storage may have\nthat write in cache, but from my reading and understanding of how these\nvarious storage devices work that is how they keep up performance. I would\nexpect my bottleneck if any to be the actual Ethernet transfer to the\nstorage, and I am going to try and compensate for that with a full gigabit\nbackbone.\n\n\nOn 4/27/06 8:44 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Thu, Apr 27, 2006 at 08:38:55AM -0400, Ketema Harris wrote:\n>> I am looking for the best solution to have a large amount of disk storage\n>> attached to my PostgreSQL 8.1 server.\n> \n>> What other options/protocols are there to get high performance and data\n>> integrity while having the benefit of not having the physical storage\n>> attached to the db server?\n> \n> These are two distinct requirements. Are both really requirements or is\n> one \"nice to have\"? The \"best\" solution for \"a large amount of disk\n> storage\" isn't \"not having the physical storage attached to the db\n> server\". If you use non-local storage it will be slower and more\n> expensive, quite likely by a large margin. There may be other advantages\n> to doing so, but you haven't mentioned any of those as requirements.\n> \n> Mike Stone\n\n\n", "msg_date": "Thu, 27 Apr 2006 08:57:51 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 08:57:51 -0400,\n Ketema Harris <[email protected]> wrote:\n> performance from the db. I also would hopefully then not have to do\n> periodic backups from the db server to some other type of storage. Is this\n> not a good idea? How bad of a performance hit are we talking about? Also,\n\nYou always need to do backups if you care about your data. What if someone\naccidental deletes a lot of data? What if someone blows up your data\ncenter (or there is a flood)?\n", "msg_date": "Thu, 27 Apr 2006 08:05:55 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "Yes, your right, I meant not have to do the backups from the db server\nitself. I can do that within the storage device now, by allocating space\nfor it, and letting the device copy the data files on some periodic basis.\n\n\nOn 4/27/06 9:05 AM, \"Bruno Wolff III\" <[email protected]> wrote:\n\n> On Thu, Apr 27, 2006 at 08:57:51 -0400,\n> Ketema Harris <[email protected]> wrote:\n>> performance from the db. I also would hopefully then not have to do\n>> periodic backups from the db server to some other type of storage. 
Is this\n>> not a good idea? How bad of a performance hit are we talking about? Also,\n> \n> You always need to do backups if you care about your data. What if someone\n> accidental deletes a lot of data? What if someone blows up your data\n> center (or there is a flood)?\n\n\n", "msg_date": "Thu, 27 Apr 2006 09:06:48 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 08:57:51AM -0400, Ketema Harris wrote:\n> OK. My thought process was that having non local storage as say a big raid\n> 5 san ( I am talking 5 TB with expansion capability up to 10 ) would allow\n> me to have redundancy, expandability, and hopefully still retain decent\n> performance from the db. I also would hopefully then not have to do\n> periodic backups from the db server to some other type of storage. Is this\n> not a good idea? How bad of a performance hit are we talking about? Also,\n> in regards to the commit data integrity, as far as the db is concerned once\n> the data is sent to the san or nas isn't it \"written\"? The storage may have\n> that write in cache, but from my reading and understanding of how these\n> various storage devices work that is how they keep up performance. I would\n> expect my bottleneck if any to be the actual Ethernet transfer to the\n> storage, and I am going to try and compensate for that with a full gigabit\n> backbone.\n\nWell, if you have to have both the best performance and remote attach\nstorage, I think you'll find that a fibre-channel SAN is still the king\nof the hill. 4Gb FC switches are common now, though finding a 4Gb\nHBA for your computer might be a trick. 2Gb HBAs are everywhere in\nFC land. That's a premium price solution, however, and I don't know\nanything about how well PG would perform with a FC SAN. We use our\nSAN for bulk science data and leave the PGDB on a separate machine\nwith local disk.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Thu, 27 Apr 2006 06:09:50 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 08:57:51AM -0400, Ketema Harris wrote:\n>OK. My thought process was that having non local storage as say a big raid\n>5 san ( I am talking 5 TB with expansion capability up to 10 ) \n\nThat's two disk trays for a cheap slow array. (Versus a more expensive \nsolution with more spindles and better seek performance.)\n\n>would allow\n>me to have redundancy, expandability, and hopefully still retain decent\n>performance from the db. I also would hopefully then not have to do\n>periodic backups from the db server to some other type of storage.\n\nNo, backups are completely unrelated to your storage type; you need them \neither way. On a SAN you can use a SAN backup solution to back multiple \nsystems up with a single backup unit without involving the host CPUs. \nThis is fairly useless if you aren't amortizing the cost over a large \nenvironment.\n\n>Is this not a good idea?\n\nIt really depends on what you're hoping to get. As described, it's not \nclear. (I don't know what you mean by \"redundancy, expandability\" or \n\"decent performance\".)\n\n>How bad of a performance hit are we talking about?\n\nWay too many factors for an easy answer. Consider the case of NAS vs \nSCSI direct attach storage. 
You're probably in that case comparing a \nsingle 125MB/s (peak) gigabit ethernet channel to (potentially several) \n320MB/s (peak) SCSI channels. With a high-end NAS you might get 120MB/s \noff that GBE. With a (more realistic) mid-range unit you're more likely \nto get 40-60MB/s. Getting 200MB/s off the SCSI channel isn't a stretch, \nand you can fairly easily stripe across multiple SCSI channels. (You can \nalso bond multiple GBEs, but then your cost & complexity start going way \nup, and you're never going to scale as well.) If you have an environment \nwhere you're doing a lot of sequential scans it isn't even a contest. \nYou can also substitute SATA for SCSI, etc.\n\nFor a FC SAN the peformance numbers are a lot better, but the costs & \ncomplexity are a lot higher. An iSCSI SAN is somewhere in the middle.\n\n>Also, in regards to the commit data integrity, as far as the db is \n>concerned once the data is sent to the san or nas isn't it \"written\"? \n>The storage may have that write in cache, but from my reading and \n>understanding of how these various storage devices work that is how \n>they keep up performance. \n\nDepends on the configuration, but yes, most should be able to report \nback a \"write\" once the data is in a non-volatile cache. You can do the \nsame with a direct-attached array and eliminate the latency inherent in \naccessing the remote storage.\n\n>I would expect my bottleneck if any to be the actual Ethernet transfer \n>to the storage, and I am going to try and compensate for that with a \n>full gigabit backbone.\n\nsee above.\n\nThe advantages of a NAS or SAN are in things you haven't really touched \non. Is the filesystem going to be accessed by several systems? Do you \nneed the ability to do snapshots? (You may be able to do this with \ndirect-attach also, but doing it on a remote storage device tends to be \nsimpler.) Do you want to share one big, expensive, reliable unit between \nmultiple systems? Will you be doing failover? (Note that failover \nrequires software to go with the hardware, and can be done in a \ndifferent way with local storage also.) In some environments the answers \nto those questions are yes, and the price premium & performance \nimplications are well worth it. For a single DB server the answer is \nalmost certainly \"no\". \n\nMike Stone\n", "msg_date": "Thu, 27 Apr 2006 09:24:54 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 09:06:48 -0400,\n Ketema Harris <[email protected]> wrote:\n> Yes, your right, I meant not have to do the backups from the db server\n> itself. I can do that within the storage device now, by allocating space\n> for it, and letting the device copy the data files on some periodic basis.\n\nOnly if the database server isn't running or your SAN provides a way to\nprovide a snapshot of the data at a particular instant in time.\n", "msg_date": "Thu, 27 Apr 2006 08:31:30 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "First, I appreciate all of your input.\n\n>No, backups are completely unrelated to your storage type; you need them\n> either way.\nPlease another post. 
I meant the storage would do the back ups.\n>redundancy, expandability\nWhat I mean by these stupid flavor words is:\nRedundancy : raid 5.\nExpandability : the ability to stick another drive in my array and get more\nstorage and not have to turn of the db.\n>Do you \n> need the ability to do snapshots?\nYes.\n>Do you want to share one big, expensive, reliable unit between\n> multiple systems? Will you be doing failover?\nYes, and Yes. Really on one other system, a phone system, but it is the\ncrux of my business and will be writing a lot of recorded phone calls. I am\nworking with a storage company now to set up the failover, I want the db and\nphone systems to never no if the storage switched over.\n\nYou have given me a lot to think about. The performance concerns me and I\nwill have to find some way to test. Perhaps spending a little less on the\nstorage system and more on the actual servers is the way to go? Then\nutilize some combination off pg_backup, and the archive_command directive\nwith a periodic script.\n\nThank You all. I will keep researching this and the more input the better.\nThank You.\n\nOn 4/27/06 9:24 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Thu, Apr 27, 2006 at 08:57:51AM -0400, Ketema Harris wrote:\n>> OK. My thought process was that having non local storage as say a big raid\n>> 5 san ( I am talking 5 TB with expansion capability up to 10 )\n> \n> That's two disk trays for a cheap slow array. (Versus a more expensive\n> solution with more spindles and better seek performance.)\n> \n>> would allow\n>> me to have redundancy, expandability, and hopefully still retain decent\n>> performance from the db. I also would hopefully then not have to do\n>> periodic backups from the db server to some other type of storage.\n> \n> No, backups are completely unrelated to your storage type; you need them\n> either way. On a SAN you can use a SAN backup solution to back multiple\n> systems up with a single backup unit without involving the host CPUs.\n> This is fairly useless if you aren't amortizing the cost over a large\n> environment.\n> \n>> Is this not a good idea?\n> \n> It really depends on what you're hoping to get. As described, it's not\n> clear. (I don't know what you mean by \"redundancy, expandability\" or\n> \"decent performance\".)\n> \n>> How bad of a performance hit are we talking about?\n> \n> Way too many factors for an easy answer. Consider the case of NAS vs\n> SCSI direct attach storage. You're probably in that case comparing a\n> single 125MB/s (peak) gigabit ethernet channel to (potentially several)\n> 320MB/s (peak) SCSI channels. With a high-end NAS you might get 120MB/s\n> off that GBE. With a (more realistic) mid-range unit you're more likely\n> to get 40-60MB/s. Getting 200MB/s off the SCSI channel isn't a stretch,\n> and you can fairly easily stripe across multiple SCSI channels. (You can\n> also bond multiple GBEs, but then your cost & complexity start going way\n> up, and you're never going to scale as well.) If you have an environment\n> where you're doing a lot of sequential scans it isn't even a contest.\n> You can also substitute SATA for SCSI, etc.\n> \n> For a FC SAN the peformance numbers are a lot better, but the costs &\n> complexity are a lot higher. 
An iSCSI SAN is somewhere in the middle.\n> \n>> Also, in regards to the commit data integrity, as far as the db is\n>> concerned once the data is sent to the san or nas isn't it \"written\"?\n>> The storage may have that write in cache, but from my reading and\n>> understanding of how these various storage devices work that is how\n>> they keep up performance.\n> \n> Depends on the configuration, but yes, most should be able to report\n> back a \"write\" once the data is in a non-volatile cache. You can do the\n> same with a direct-attached array and eliminate the latency inherent in\n> accessing the remote storage.\n> \n>> I would expect my bottleneck if any to be the actual Ethernet transfer\n>> to the storage, and I am going to try and compensate for that with a\n>> full gigabit backbone.\n> \n> see above.\n> \n> The advantages of a NAS or SAN are in things you haven't really touched\n> on. Is the filesystem going to be accessed by several systems? Do you\n> need the ability to do snapshots? (You may be able to do this with\n> direct-attach also, but doing it on a remote storage device tends to be\n> simpler.) Do you want to share one big, expensive, reliable unit between\n> multiple systems? Will you be doing failover? (Note that failover\n> requires software to go with the hardware, and can be done in a\n> different way with local storage also.) In some environments the answers\n> to those questions are yes, and the price premium & performance\n> implications are well worth it. For a single DB server the answer is\n> almost certainly \"no\".\n> \n> Mike Stone\n\n\n", "msg_date": "Thu, 27 Apr 2006 09:41:21 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "The SAN has the snapshot capability.\n\n\nOn 4/27/06 9:31 AM, \"Bruno Wolff III\" <[email protected]> wrote:\n\n> On Thu, Apr 27, 2006 at 09:06:48 -0400,\n> Ketema Harris <[email protected]> wrote:\n>> Yes, your right, I meant not have to do the backups from the db server\n>> itself. I can do that within the storage device now, by allocating space\n>> for it, and letting the device copy the data files on some periodic basis.\n> \n> Only if the database server isn't running or your SAN provides a way to\n> provide a snapshot of the data at a particular instant in time.\n\n\n", "msg_date": "Thu, 27 Apr 2006 09:42:10 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 09:41:21AM -0400, Ketema Harris wrote:\n>>No, backups are completely unrelated to your storage type; you need them\n>> either way.\n>Please another post. I meant the storage would do the back ups.\n\nWhich isn't a backup. Even expensive storage arrays can break or burn \ndown.\n\n>>redundancy, expandability\n>What I mean by these stupid flavor words is:\n>Redundancy : raid 5.\n\nYou can get that without external storage.\n\n>Expandability : the ability to stick another drive in my array and get more\n>storage and not have to turn of the db.\n\nYou can also get that without external storage assuming you choose a \nplatform with a volume manager.\n\n>>Do you \n>> need the ability to do snapshots?\n>Yes.\n\nIf that's a hard requirement you'll have to eat the cost & performance \nproblems of an external solution or choose a platform that will let you \ndo that with direct-attach storage. 
(Something with a volume manager.)\n\n>>Do you want to share one big, expensive, reliable unit between\n>> multiple systems? Will you be doing failover?\n>Yes, and Yes. Really on one other system, a phone system, but it is the\n>crux of my business and will be writing a lot of recorded phone calls. I am\n>working with a storage company now to set up the failover, I want the db and\n>phone systems to never no if the storage switched over.\n\nIf you actually have a couple of systems you're trying to fail over, a \nFC SAN may be a reasonable solution. Depending on your reliability \nrequirement you can have multiple interfaces & FC switches to get \nredundant paths and a much higher level of storage reliability than you \ncould get with direct attach storage. OTOH, if the DB server itself \nbreaks you're still out of luck. :) You might compare that sort of \nsolution with a solution that has redundant servers and implements the \nfailover in software instead of hardware.\n\nMike Stone\n", "msg_date": "Thu, 27 Apr 2006 10:04:19 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 10:04:19AM -0400, Michael Stone wrote:\n> >>redundancy, expandability\n> >What I mean by these stupid flavor words is:\n> >Redundancy : raid 5.\n> \n> You can get that without external storage.\n \nYes, but some dedicated storage devices actually provide good\nperformance with RAID5. Most simpler solutions give pretty abysmal write\nperformance.\n\n> >>Do you \n> >>need the ability to do snapshots?\n> >Yes.\n> \n> If that's a hard requirement you'll have to eat the cost & performance \n> problems of an external solution or choose a platform that will let you \n> do that with direct-attach storage. (Something with a volume manager.)\n \nI'm wondering if PITR would suffice. Or maybe even Slony.\n\n> >>Do you want to share one big, expensive, reliable unit between\n> >>multiple systems? Will you be doing failover?\n> >Yes, and Yes. Really on one other system, a phone system, but it is the\n> >crux of my business and will be writing a lot of recorded phone calls. I am\n> >working with a storage company now to set up the failover, I want the db \n> >and\n> >phone systems to never no if the storage switched over.\n> \n> If you actually have a couple of systems you're trying to fail over, a \n> FC SAN may be a reasonable solution. Depending on your reliability \n> requirement you can have multiple interfaces & FC switches to get \n> redundant paths and a much higher level of storage reliability than you \n> could get with direct attach storage. OTOH, if the DB server itself \n> breaks you're still out of luck. :) You might compare that sort of \n> solution with a solution that has redundant servers and implements the \n> failover in software instead of hardware.\n\nBTW, I know a company here in Austin that does capacity planning for\ncomplex systems like this; contact me off-list if you're interested in\ntalking to them.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 27 Apr 2006 12:50:16 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Thu, Apr 27, 2006 at 12:50:16PM -0500, Jim C. 
Nasby wrote:\n>Yes, but some dedicated storage devices actually provide good\n>performance with RAID5. Most simpler solutions give pretty abysmal write\n>performance.\n\ndedicated storage device != SAN != NAS. You can get good performance in \na dedicated direct-attach device without paying for the SAN/NAS \ninfrastructure if you don't need it; you don't have to go right from EMC \nto PERC with nothing in the middle.\n\nMike Stone\n", "msg_date": "Thu, 27 Apr 2006 13:57:06 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "So do NAS's\n\nDan\n\nOn Apr 27, 2006, at 6:42 AM, Ketema Harris wrote:\n\n> The SAN has the snapshot capability.\n>\n>\n> On 4/27/06 9:31 AM, \"Bruno Wolff III\" <[email protected]> wrote:\n>\n>> On Thu, Apr 27, 2006 at 09:06:48 -0400,\n>> Ketema Harris <[email protected]> wrote:\n>>> Yes, your right, I meant not have to do the backups from the db \n>>> server\n>>> itself. I can do that within the storage device now, by \n>>> allocating space\n>>> for it, and letting the device copy the data files on some \n>>> periodic basis.\n>>\n>> Only if the database server isn't running or your SAN provides a \n>> way to\n>> provide a snapshot of the data at a particular instant in time.\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n", "msg_date": "Thu, 27 Apr 2006 11:58:59 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" }, { "msg_contents": "On Wed, 26 Apr 2006 23:55:24 -0500, Jim C. Nasby wrote:\n> On Wed, Apr 26, 2006 at 07:35:42PM -0700, Steve Wampler wrote:\n>> On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:\n>> > I was wondering if there were any performance issues with having a data\n>> > directory that was an nfs mounted drive? Say like a SAN or NAS device? Has\n>> > anyone done this before?\n>> \n>> My understanding is that NFS is pretty poor in performance in general,\n\n NFS is not a good choice for several reasons. First, NFS takes\npriority in the system kernel, and will slow down all other\noperations. Your best choice, as pointed out by others, is a DAS\nsolutions. If you must use NFS, you should consider putting it on\na fast dedicated network by itself. \n\n", "msg_date": "Tue, 02 May 2006 14:58:51 GMT", "msg_from": "Fortuitous Technologies <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running on an NFS Mounted Directory" } ]
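A quick empirical check of the commit-latency point raised in this thread: with the data directory on NFS, a handful of single-row transactions timed from psql will show whether fsync is actually reaching stable storage. The probe below is a minimal sketch; the table is a throwaway invented for the test.

\timing
CREATE TABLE fsync_probe (i int);
INSERT INTO fsync_probe VALUES (1);
INSERT INTO fsync_probe VALUES (2);
INSERT INTO fsync_probe VALUES (3);
-- each INSERT is its own transaction, so with fsync on it has to reach
-- stable storage before returning; on a 7200 rpm disk that is roughly
-- 8 ms per statement at best. Consistently sub-millisecond timings on an
-- NFS/NAS mount suggest writes are being acknowledged from volatile cache,
-- i.e. fsync is not really being obeyed.
DROP TABLE fsync_probe;

Running the same probe against local disk gives a baseline for comparison.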
[ { "msg_contents": "Hi,.\n\nWe are new to Postgresql. I am appreciated if the following question can be\nanswered.\n\nOur application has a strict speed requirement for DB operation. Our tests\nshow that it takes about 10secs for the operation when setting fsync off,\nbut takes about 70 seconds when setting fsync ON (with other WAL related\nparametered tuned).\n\nWe have to looking at setting fsync OFF option for performance reason,\nour questions are\n\n a) if we set fsync OFF and anything (very low chance though) like OS\ncrash, loss of power, or hardware fault happened, can postgresql rolls back\nto the state that the last checkpoint was done ( but all the operations\nafter that is lost)\n\n b) Does this roll back to last checkpoint can ensure the database back to\nconsistent state?\n\n c) What is worst scenarios if setting fsync OFF in term of database\nsafety. We try to avoid to restore the database from nightly backup.\n\nWe view our application is not that data loss critical, say loss of five\nminutes of data and operation occasionally, but the database integrity and\nconsistency must be kept.\n\nCan we set fsync OFF for the performance benefit, have the risk of only 5\nminutes data loss or much worse?\n\nThanks in advance.\n\nRegards,\n\nGuoping\n\n", "msg_date": "Thu, 27 Apr 2006 16:31:23 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "how unsafe (or worst scenarios) when setting fsync OFF for postgresql" }, { "msg_contents": "On Thu, 2006-04-27 at 16:31 +1000, Guoping Zhang wrote:\n\n> We have to looking at setting fsync OFF option for performance reason,\n> our questions are\n> \n> a) if we set fsync OFF and anything (very low chance though) like OS\n> crash, loss of power, or hardware fault happened, can postgresql rolls back\n> to the state that the last checkpoint was done ( but all the operations\n> after that is lost)\n\nThere is no rollback, only a rollforward from the checkpoint.\n\n> b) Does this roll back to last checkpoint can ensure the database back to\n> consistent state?\n\nTherefore no consistent state guaranteed if some WAL is missing\n\n> c) What is worst scenarios if setting fsync OFF in term of database\n> safety. We try to avoid to restore the database from nightly backup.\n\nLosing some DDL changes, probably. You'd need to be wary of things like\nANALYZE, VACUUM etc, since these make catalog changes also.\n\n> We view our application is not that data loss critical, say loss of five\n> minutes of data and operation occasionally, but the database integrity and\n> consistency must be kept.\n> \n> Can we set fsync OFF for the performance benefit, have the risk of only 5\n> minutes data loss or much worse?\n\nThats up to you. \n\nfsync can be turned on and off, so you can make critical changes with\nfsync on, then continue with fsync off.\n\nThe risk and the decision, are yours. You are warned.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com/\n\n", "msg_date": "Thu, 27 Apr 2006 08:13:04 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync" }, { "msg_contents": "Guoping,\n\nOn 4/27/06, Guoping Zhang <[email protected]> wrote:\n> We have to looking at setting fsync OFF option for performance reason,\n\nDid you try the other wal sync methods (fdatasync in particular)? 
I\nsaw a few posts lately explaining how changing sync method can affect\nperformances in specific cases.\n\n--\nGuillaume\n", "msg_date": "Thu, 27 Apr 2006 11:26:18 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "\"Guoping Zhang\" <[email protected]> writes:\n> Our application has a strict speed requirement for DB operation. Our tests\n> show that it takes about 10secs for the operation when setting fsync off,\n> but takes about 70 seconds when setting fsync ON (with other WAL related\n> parametered tuned).\n\nI can't believe that a properly tuned application would have an fsync\npenalty that large. Are you performing that \"operation\" as several\nthousand small transactions, or some such? Try grouping the operations\ninto one (or at most a few) transactions. Also, what wal_buffers and\nwal_sync_method settings are you using, and have you experimented with\nalternatives? What sort of platform is this on? What PG version?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Apr 2006 10:53:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Thu, 2006-04-27 at 16:31 +1000, Guoping Zhang wrote:\n>> Can we set fsync OFF for the performance benefit, have the risk of only 5\n>> minutes data loss or much worse?\n\n> Thats up to you. \n\n> fsync can be turned on and off, so you can make critical changes with\n> fsync on, then continue with fsync off.\n\nI think it would be a mistake to assume that the behavior would be\nnice clean \"we only lost recent changes\". Things could get arbitrarily\nbadly corrupted if some writes make it to disk and some don't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Apr 2006 10:57:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync " }, { "msg_contents": "Hi, Tom,\n\nThanks for the reply.\n\na) The tests consists of ten thousands very small transactions, which are\nnot grouped, that is why so slow with compare to set fsync off.\nb) we are using Solaris 10 on a SUN Fire 240 SPARC machine with a latest\npostgresql release (8.1.3)\nc) wal_sync_method is set to 'open_datasync', which is fastest among the\nfour, right?\nd) wal_buffers set to 32\n\nLooks like, if we have to set fsync be true, we need to modify our\napplication.\n\nThanks and regards,\nGuoping\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tom Lane\nSent: 2006��4��28�� 0:53\nTo: [email protected]\nCc: [email protected]; Guoping Zhang (E-mail)\nSubject: Re: [PERFORM] how unsafe (or worst scenarios) when setting\nfsync OFF for postgresql\n\n\n\"Guoping Zhang\" <[email protected]> writes:\n> Our application has a strict speed requirement for DB operation. Our tests\n> show that it takes about 10secs for the operation when setting fsync off,\n> but takes about 70 seconds when setting fsync ON (with other WAL related\n> parametered tuned).\n\nI can't believe that a properly tuned application would have an fsync\npenalty that large. Are you performing that \"operation\" as several\nthousand small transactions, or some such? Try grouping the operations\ninto one (or at most a few) transactions. 
Also, what wal_buffers and\nwal_sync_method settings are you using, and have you experimented with\nalternatives? What sort of platform is this on? What PG version?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n", "msg_date": "Fri, 28 Apr 2006 14:43:26 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "\"Guoping Zhang\" <[email protected]> writes:\n> a) The tests consists of ten thousands very small transactions, which are\n> not grouped, that is why so slow with compare to set fsync off.\n\nYup.\n\n> c) wal_sync_method is set to 'open_datasync', which is fastest among the\n> four, right?\n\nWell, is it? You shouldn't assume that without testing.\n\n> Looks like, if we have to set fsync be true, we need to modify our\n> application.\n\nYes, you should definitely look into batching your operations into\nlarger transactions. On normal hardware you can't expect to commit\ntransactions faster than one per disk revolution (unless they're coming\nfrom multiple clients, where there's a hope of ganging several parallel\ncommits per revolution).\n\nOr buy a disk controller with battery-backed write cache and put your\nfaith in that cache surviving a machine crash. But don't turn off fsync\nif you care about your data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Apr 2006 00:56:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "Hi, Simon/tom,\n\nThanks for the reply.\n\nIt appears to me that we have to set fsync ON, as a badly corrupted database\nby any chance in production line\nwill lead a serious problem.\n\nHowever, when try the differnt 'wal_sync_method' setting, lead a quite\ndifferent operation time (open_datasync is best for speed).\n\nBut altering the commit_delay from 1 to 100000, I observed that there is no\ntime difference for the operation. Why is that? As our tests consists of\n10000 small transactions which completed in 66 seconds, that is, about 160\ntransactions per second. When commit_delay set to 100000 (i.e., 0.1 second),\nthat in theory, shall group around 16 transactions into one commit, but\nresult is same from the repeated test. Am I mistaken something here?\n\nCheers and Regards,\nGuoping\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 2006��4��28�� 0:58\nTo: Simon Riggs\nCc: [email protected]; [email protected]; Guoping\nZhang (E-mail)\nSubject: Re: [PERFORM] how unsafe (or worst scenarios) when setting\nfsync\n\n\nSimon Riggs <[email protected]> writes:\n> On Thu, 2006-04-27 at 16:31 +1000, Guoping Zhang wrote:\n>> Can we set fsync OFF for the performance benefit, have the risk of only 5\n>> minutes data loss or much worse?\n\n> Thats up to you.\n\n> fsync can be turned on and off, so you can make critical changes with\n> fsync on, then continue with fsync off.\n\nI think it would be a mistake to assume that the behavior would be\nnice clean \"we only lost recent changes\". 
Things could get arbitrarily\nbadly corrupted if some writes make it to disk and some don't.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 28 Apr 2006 15:01:17 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync " }, { "msg_contents": "\"Guoping Zhang\" <[email protected]> writes:\n> But altering the commit_delay from 1 to 100000, I observed that there is no\n> time difference for the operation. Why is that? As our tests consists of\n> 10000 small transactions which completed in 66 seconds, that is, about 160\n> transactions per second. When commit_delay set to 100000 (i.e., 0.1 second),\n> that in theory, shall group around 16 transactions into one commit, but\n> result is same from the repeated test. Am I mistaken something here?\n\ncommit_delay can only help if there are multiple clients issuing\ntransactions concurrently, so that there are multiple commits pending at\nthe same instant. If you are issuing one serial stream of transactions,\nit's useless.\n\nIf you do have multiple active clients, then we need to look more closely;\nbut your statement does not indicate that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Apr 2006 01:05:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync " }, { "msg_contents": "Hi, Guillaume,\n\nThanks for the reply.\n\nI am using wal_sync_methods be open_datasync, which appear much faster than\n'fdatasync'.\n\nRegards,\nGuoping\n\n-----Original Message-----\nFrom: Guillaume Smet [mailto:[email protected]]\nSent: 2006��4��27�� 19:26\nTo: [email protected]\nCc: [email protected]; Guoping Zhang (E-mail)\nSubject: Re: [PERFORM] how unsafe (or worst scenarios) when setting\nfsync OFF for postgresql\n\n\nGuoping,\n\nOn 4/27/06, Guoping Zhang <[email protected]> wrote:\n> We have to looking at setting fsync OFF option for performance reason,\n\nDid you try the other wal sync methods (fdatasync in particular)? I\nsaw a few posts lately explaining how changing sync method can affect\nperformances in specific cases.\n\n--\nGuillaume\n\n", "msg_date": "Fri, 28 Apr 2006 15:18:08 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "Hi, Tom\n\nMany thanks for quick replies and that helps a lot.\n\nJust in case, anyone out there can recommend a good but cost effective\nbattery-backed write cache SCSI for Solaris SPARC platform? How well does it\nwork with UFS or newer ZFS for solaris?\n\nCheers and regards,\nGuoping\n\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 2006��4��28�� 14:57\nTo: [email protected]\nCc: [email protected]; 'Guoping Zhang (E-mail)'\nSubject: Re: [PERFORM] how unsafe (or worst scenarios) when setting\nfsync OFF for postgresql\n\n\n\"Guoping Zhang\" <[email protected]> writes:\n> a) The tests consists of ten thousands very small transactions, which are\n> not grouped, that is why so slow with compare to set fsync off.\n\nYup.\n\n> c) wal_sync_method is set to 'open_datasync', which is fastest among the\n> four, right?\n\nWell, is it? You shouldn't assume that without testing.\n\n> Looks like, if we have to set fsync be true, we need to modify our\n> application.\n\nYes, you should definitely look into batching your operations into\nlarger transactions. 
On normal hardware you can't expect to commit\ntransactions faster than one per disk revolution (unless they're coming\nfrom multiple clients, where there's a hope of ganging several parallel\ncommits per revolution).\n\nOr buy a disk controller with battery-backed write cache and put your\nfaith in that cache surviving a machine crash. But don't turn off fsync\nif you care about your data.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 28 Apr 2006 15:58:06 +1000", "msg_from": "\"Guoping Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" }, { "msg_contents": "Hk, Guoping,\n\nGuoping Zhang wrote:\n\n> a) The tests consists of ten thousands very small transactions, which are\n> not grouped, that is why so slow with compare to set fsync off.\n\nIf those transactions are submitted by concurrent applications over\nseveral simulataneous connections, playing with commit_delay and\ncommit_siblins may improve your situation.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 28 Apr 2006 10:59:10 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync" } ]
[ { "msg_contents": "Get a SCSI controller with a battery backed cache, and mount the disks\nwith data=writeback (if you use ext3). If you loose power in the middle\nof a transaction, the battery will ensure that the write operation still\ncompletes. With asynch writing setup like this, fsync operations will\nreturn almost immidiately giving you performance close to that of\nrunning with fsync off.\n\nRegards,\nMikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Guoping\nZhang\nSent: den 27 april 2006 08:31\nTo: [email protected]\nCc: Guoping Zhang (E-mail)\nSubject: [PERFORM] how unsafe (or worst scenarios) when setting fsync\nOFF for postgresql\n\nHi,.\n\nWe are new to Postgresql. I am appreciated if the following question can\nbe answered.\n\nOur application has a strict speed requirement for DB operation. Our\ntests show that it takes about 10secs for the operation when setting\nfsync off, but takes about 70 seconds when setting fsync ON (with other\nWAL related parametered tuned).\n\nWe have to looking at setting fsync OFF option for performance reason,\nour questions are\n\n a) if we set fsync OFF and anything (very low chance though) like OS\ncrash, loss of power, or hardware fault happened, can postgresql rolls\nback to the state that the last checkpoint was done ( but all the\noperations after that is lost)\n\n b) Does this roll back to last checkpoint can ensure the database back\nto consistent state?\n\n c) What is worst scenarios if setting fsync OFF in term of database\nsafety. We try to avoid to restore the database from nightly backup.\n\nWe view our application is not that data loss critical, say loss of five\nminutes of data and operation occasionally, but the database integrity\nand consistency must be kept.\n\nCan we set fsync OFF for the performance benefit, have the risk of only\n5 minutes data loss or much worse?\n\nThanks in advance.\n\nRegards,\n\nGuoping\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 27 Apr 2006 09:42:54 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" } ]
[ { "msg_contents": "Hello,\nMany thanks for your suggestions.\nI will try them.\nThe last two queries almost did not use disk, but used 100% cpu.\nThe differences of performance are big.\nFirebird has something similiar to EXPLAIN. Please look below.\nIs there something really wrong with the postgresql configuration (at my\nprevious msg) that is causing this poor performance at these 2 queries?\nI tweaked until almost no disk was used, but now it is using 100% cpu and took\ntoo much time to complete.\nThanks.\nAndre Felipe Machado\n\nhttp://www.techforce.com.br\n\n\n\n\nSQL> set plan on;\nSQL> set stats on;\nSQL> update CADASTRO set IN_CADASTRO_MAIS_ATUAL = case when\nCADASTRO.ID_CADASTRO= (select max(CAD2.ID_CADASTRO) from CADASTRO CAD2 inner\njoin DECLARACAO DECL on (DECL.ID_DECLARACAO=CAD2.ID_DECLARACAO) where\nCAD2.ID_EMPRESA=CADASTRO.ID_EMPRESA and DECL.AM_REFERENCIA = (select\nmax(DEC2.AM_REFERENCIA) from DECLARACAO DEC2 where DEC2.IN_FOI_RETIFICADA=0\nand 1 in (select CAD3.ID_CADASTRO from CADASTRO CAD3 where\nCAD3.ID_DECLARACAO=DEC2.ID_DECLARACAO and CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA\n) )and DECL.IN_FOI_RETIFICADA=0 )then 1 else 0 end;\n\nPLAN (CAD3 INDEX (PK_CADASTRO_DESC))\nPLAN (DEC2 NATURAL)\nPLAN JOIN (DECL INDEX (IDX_DT_REFERENCIA),CAD2 INDEX (RDB$FOREIGN18))\nPLAN (CADASTRO NATURAL)\nCurrent memory = 911072\nDelta memory = 355620\nMax memory = 911072\nElapsed time= 1.89 sec\nCpu = 0.00 sec\nBuffers = 2048\nReads = 1210\nWrites = 14\nFetches = 310384\n\nSQL>\nSQL> update CADASTRO set IN_CADASTRO_MAIS_ATUAL = case when\nCADASTRO.ID_CADASTRO= (select max(CAD2.ID_CADASTRO) from CADASTRO CAD2 inner\njoin DECLARACAO DECL on (DECL.ID_DECLARACAO=CAD2.ID_DECLARACAO) where\nCAD2.ID_EMPRESA=CADASTRO.ID_EMPRESA and DECL.AM_REFERENCIA = (select\nmax(DEC2.AM_REFERENCIA) from DECLARACAO DEC2 where DEC2.IN_FOI_RETIFICADA=0\nand exists (select CAD3.ID_CADASTRO from CADASTRO CAD3 where\nCAD3.ID_DECLARACAO=DEC2.ID_DECLARACAO and CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA\n) )and DECL.IN_FOI_RETIFICADA=0 )then 1 else 0 end;\n\nPLAN (CAD3 INDEX (RDB$FOREIGN18))\nPLAN (DEC2 NATURAL)\nPLAN JOIN (DECL INDEX (IDX_DT_REFERENCIA),CAD2 INDEX (RDB$FOREIGN18))\nPLAN (CADASTRO NATURAL)\nCurrent memory = 938968\nDelta memory = 8756\nMax memory = 15418996\nElapsed time= 1.09 sec\nCpu = 0.00 sec\nBuffers = 2048\nReads = 0\nWrites = 0\nFetches = 301007\n\nSQL>\n\n\n", "msg_date": "Thu, 27 Apr 2006 10:32:31 -0200", "msg_from": "\"andremachado\" <[email protected]>", "msg_from_op": true, "msg_subject": "Firebird 1.5.3 X Postgresql 8.1.3 (linux Firebird 1.5.3 X Postgresql\n\t8.1.3 (linux and and windows)]" }, { "msg_contents": "\"andremachado\" <[email protected]> writes:\n> Firebird has something similiar to EXPLAIN. Please look below.\n\nHm, maybe I just don't know how to read their output, but it's not\nobvious to me where they are doing the min/max aggregates.\n\n> Is there something really wrong with the postgresql configuration (at my\n> previous msg) that is causing this poor performance at these 2 queries?\n\nI don't think it's a configuration issue, it's a quality-of-plan issue.\n\nCould you put together a self-contained test case for this problem? 
I\ndon't have the time or interest to try to reverse-engineer tables and\ntest data for these queries --- but I would be interested in finding out\nwhere the time is going, if I could run the queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Apr 2006 12:13:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux Firebird 1.5.3 X\n\tPostgresql 8.1.3 (linux and and windows)]" } ]
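The self-contained test case asked for above is essentially one script: schema, generated data, and the problem query under EXPLAIN ANALYZE. A minimal sketch, with column types, row counts and value distributions invented for illustration (only the column names are taken from the original query):

CREATE TABLE declaracao (
    id_declaracao     int PRIMARY KEY,
    am_referencia     int,
    in_foi_retificada int
);
CREATE TABLE cadastro (
    id_cadastro            int PRIMARY KEY,
    id_declaracao          int REFERENCES declaracao,
    id_empresa             int,
    in_cadastro_mais_atual int
);
-- generated data; adjust the row counts until the runtimes resemble the real case
INSERT INTO declaracao
    SELECT i, 200601 + (i % 12), i % 2
    FROM generate_series(1, 20000) AS g(i);
INSERT INTO cadastro
    SELECT i, (i % 20000) + 1, i % 1000, 0
    FROM generate_series(1, 200000) AS g(i);
ANALYZE declaracao;
ANALYZE cadastro;
-- then run the UPDATE ... CASE WHEN ... statement from the first message,
-- wrapped in EXPLAIN ANALYZE, against these tables.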
[ { "msg_contents": "Hi folks,\n\nSorry to be bringing this up again, but I'm stumped by this problem\nand hope you can shed some light on it.\n\nI'm running postgresql 8.0 on a RLE4 server with 1.5 GB of RAM and a\nXenon 2 GHz CPU. The OS is bog standard and I've not done any kernel\ntuning on it. The file system is also bog standard ext3 with no raid\nof any kind. I know I could improve this aspect of the set up with\nmore disks and raid 0+1 etc, but the lack of performance that I'm\nexperiencing is not likely to be attributable to this sort of\nthing. More likely it's my bad understanding of Postgresql - I hope\nit's my bad understanding of Postgresql!!\n\nMy database is very simple and not by the book (normal forms etc. are\nnot all as they should be). My biggest table, by a factor of 3000 or\nso is one of 4 tables in my tiny database. It looks like this\n\n\n\n\\d job_log\n Table \"job_log\"\n Column | Type | Modifiers\n----------------+-----------------------------+--------------------------------------------------\njob_log_id | integer | not null default \nnextval('job_log_id_seq'::text)\nfirst_registry | timestamp without time zone |\ncustomer_name | character(50) |\nnode_id | integer |\njob_type | character(50) |\njob_name | character(256) |\njob_start | timestamp without time zone |\njob_timeout | interval |\njob_stop | timestamp without time zone |\nnfiles_in_job | integer |\nstatus | integer |\nerror_code | smallint |\nfile_details | text |\nIndexes:\n \"job_log_id_pkey\" PRIMARY KEY, btree (job_log_id)\n \"idx_customer_name_filter\" btree (customer_name)\n \"idx_job_name_filter\" btree (job_name)\n \"idx_job_start_filter\" btree (job_start)\n \"idx_job_stop_filter\" btree (job_stop)\nCheck constraints:\n \"job_log_status_check\" CHECK (status = 0 OR status = 1 OR status = 8 OR \nstatus = 9)\nForeign-key constraints:\n \"legal_node\" FOREIGN KEY (node_id) REFERENCES node(node_id)\n\n\nThe node table is tiny (2500 records). What I'm pulling my hair out\nover is that ANY Query, even something as simple as select count(*)\nform job_log takes of the order of tens of minutes to complete. Just\nnow I'm trying to run an explain analyze on the above query, but so\nfar, it's taken 35min! with no result and there is a postgres process at\nthe top of top\n\nWhat am I doing wrong??\n\nMany thanks,\n\nBealach\n\n\n", "msg_date": "Thu, 27 Apr 2006 18:12:01 +0000", "msg_from": "\"Bealach-na Bo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why so slow?" }, { "msg_contents": "Bealach-na Bo <[email protected]> schrieb:\n> The node table is tiny (2500 records). What I'm pulling my hair out\n> over is that ANY Query, even something as simple as select count(*)\n> form job_log takes of the order of tens of minutes to complete. Just\n> now I'm trying to run an explain analyze on the above query, but so\n> far, it's taken 35min! with no result and there is a postgres process at\n> the top of top\n> \n> What am I doing wrong??\n\nThe 'explain analyse' don't return a result, but it returns the query\nplan and importance details, how PG works.\n\nThat's why you should paste the query and the 'explain analyse' -\noutput. This is very important.\n\nAnyway, do you periodical vacuum your DB? 
My guess: no, and that's why\nyou have many dead rows.\n\n20:26 < akretschmer|home> ??vacuum\n20:26 < rtfm_please> For information about vacuum\n20:26 < rtfm_please> see http://developer.postgresql.org/~wieck/vacuum_cost/\n20:26 < rtfm_please> or http://www.postgresql.org/docs/current/static/sql-vacuum.html\n20:26 < rtfm_please> or http://www.varlena.com/varlena/GeneralBits/116.php\n\n20:27 < akretschmer|home> ??explain\n20:27 < rtfm_please> For information about explain\n20:27 < rtfm_please> see http://techdocs.postgresql.org/oscon2005/robert.treat/OSCON_Explaining_Explain_Public.sxi\n20:27 < rtfm_please> or http://www.gtsm.com/oscon2003/toc.html\n20:27 < rtfm_please> or http://www.postgresql.org/docs/current/static/sql-explain.html\n\n\nRead this links for more informations about vacuum and explain.\n\n\nHTH, Andreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Thu, 27 Apr 2006 20:28:23 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "OK, here is a much more detailed output. I still don't quite\nunderstand why simple queries like counting the number of rows in a\ntable should take minutes to complete. Surely, any performance\nenhancement to be had by vacuuming is closely related to indexes\nwhich, in turn, are closely related to sorting and searching. A simple\ncount of 365590 does not involve indexes (or does it??) and should not take \nminutes. Should I be forcing the\nway postgresql plans my queries?\n\nHere is my first attempt at vacuum that got nowhere and I had to\ncancel it.\n\n----------psql session start----------\nvacuum verbose analyze job_log;\nINFO: vacuuming \"job_log\"\nINFO: index \"job_log_id_pkey\" now contains 10496152 row versions in 59665 \npages\nDETAIL: 0 index row versions were removed.\n28520 index pages have been deleted, 20000 are currently reusable.\nCPU 1.44s/3.49u sec elapsed 33.71 sec.\nINFO: index \"idx_job_stop_filter\" now contains 10496152 row versions in \n71149 pages\nDETAIL: 0 index row versions were removed.\n24990 index pages have been deleted, 20000 are currently reusable.\nCPU 2.11s/3.61u sec elapsed 115.69 sec.\nINFO: index \"idx_job_start_filter\" now contains 10496152 row versions in \n57891 pages\nDETAIL: 0 index row versions were removed.\n19769 index pages have been deleted, 19769 are currently reusable.\nCPU 1.58s/3.44u sec elapsed 23.11 sec.\nCancel request sent\n----------psql session finish----------\n\n\nI thought that combining indexes would improve things and dropped the\n3 separate ones above and created this one\n\n----------psql session start----------\ncreate index idx_job_log_filter on job_log(job_name,job_start,job_stop);\n\nselect count(*) from job_log;\ncount\n--------\n365590\n(1 row)\n\nexplain analyse select count(*) from job_log;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=1382171.88..1382171.88 rows=1 width=0) (actual \ntime=207011.882..207011.883 rows=1 loops=1)\n -> Seq Scan on job_log (cost=0.00..1381257.90 rows=365590 width=0) \n(actual time=199879.510..206708.523 rows=365590 loops=1)\nTotal runtime: 207014.363 ms\n(3 rows)\n----------psql session finish----------\n\nThen I tried another 
vacuum and decided to be very patient\n\n----------psql session start----------\nvacuum verbose analyze job_log;\nINFO: vacuuming \"job_log\"\nINFO: index \"job_log_id_pkey\" now contains 10496152 row versions in 59665 \npages\nDETAIL: 0 index row versions were removed.\n28520 index pages have been deleted, 20000 are currently reusable.\nCPU 1.39s/3.39u sec elapsed 24.19 sec.\nINFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.59s/0.20u sec elapsed 10.28 sec.\nINFO: \"job_log\": removed 2795915 row versions in 368091 pages\nDETAIL: CPU 33.30s/30.11u sec elapsed 2736.54 sec.\nINFO: index \"job_log_id_pkey\" now contains 7700230 row versions in 59665 \npages\nDETAIL: 2795922 index row versions were removed.\n37786 index pages have been deleted, 20000 are currently reusable.\nCPU 2.76s/6.45u sec elapsed 152.14 sec.\nINFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.52s/0.20u sec elapsed 7.75 sec.\nINFO: \"job_log\": removed 2795922 row versions in 220706 pages\nDETAIL: CPU 19.81s/17.92u sec elapsed 1615.95 sec.\nINFO: index \"job_log_id_pkey\" now contains 4904317 row versions in 59665 \npages\nDETAIL: 2795913 index row versions were removed.\n45807 index pages have been deleted, 20000 are currently reusable.\nCPU 2.22s/5.30u sec elapsed 129.02 sec.\nINFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.50s/0.22u sec elapsed 7.61 sec.\nINFO: \"job_log\": removed 2795913 row versions in 188139 pages\nDETAIL: CPU 17.03s/15.37u sec elapsed 1369.45 sec.\nINFO: index \"job_log_id_pkey\" now contains 2108405 row versions in 59665 \npages\nDETAIL: 2795912 index row versions were removed.\n53672 index pages have been deleted, 20000 are currently reusable.\nCPU 2.13s/4.57u sec elapsed 122.74 sec.\nINFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.53s/0.23u sec elapsed 8.24 sec.\nINFO: \"job_log\": removed 2795912 row versions in 187724 pages\nDETAIL: CPU 16.84s/15.22u sec elapsed 1367.50 sec.\nINFO: index \"job_log_id_pkey\" now contains 365590 row versions in 59665 \npages\nDETAIL: 1742815 index row versions were removed.\n57540 index pages have been deleted, 20000 are currently reusable.\nCPU 1.38s/2.85u sec elapsed 76.52 sec.\nINFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.54s/0.31u sec elapsed 7.99 sec.\nINFO: \"job_log\": removed 1742815 row versions in 143096 pages\nDETAIL: CPU 12.77s/11.75u sec elapsed 1046.10 sec.\nINFO: \"job_log\": found 12926477 removable, 365590 nonremovable row versions \nin 1377602 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 7894754 unused item pointers.\n0 pages are entirely empty.\nCPU 124.49s/117.57u sec elapsed 8888.80 sec.\nINFO: vacuuming \"pg_toast.pg_toast_17308\"\nINFO: index \"pg_toast_17308_index\" now contains 130 row versions in 12 \npages\nDETAIL: 2543 index row versions were removed.\n9 index pages have been deleted, 
0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.11 sec.\nINFO: \"pg_toast_17308\": removed 2543 row versions in 617 pages\nDETAIL: CPU 0.04s/0.05u sec elapsed 4.85 sec.\nINFO: \"pg_toast_17308\": found 2543 removable, 130 nonremovable row versions \nin 650 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.06s/0.06u sec elapsed 5.28 sec.\nINFO: analyzing \"rshuser.job_log\"\nINFO: \"job_log\": scanned 3000 of 1377602 pages, containing 695 live rows \nand 0 dead rows; 695 rows in sample, 319144 estimated total rows\nVACUUM\n\n\nexplain analyse select count(*) from job_log;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=1382171.88..1382171.88 rows=1 width=0) (actual \ntime=207267.094..207267.095 rows=1 loops=1)\n -> Seq Scan on job_log (cost=0.00..1381257.90 rows=365590 width=0) \n(actual time=200156.539..206962.895 rows=365590 loops=1)\nTotal runtime: 207267.153 ms\n(3 rows)\n\n----------psql session finish----------\n\n\nI also took snapshots of top output while I ran the above\n\n\n----------top output start----------\nCpu(s): 0.7% us, 0.7% sy, 0.0% ni, 49.7% id, 48.5% wa, 0.5% hi, 0.0% si\nMem: 1554788k total, 1538268k used, 16520k free, 6220k buffers\nSwap: 1020024k total, 176k used, 1019848k free, 1404280k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n3368 postgres 18 0 37492 29m 11m D 2.7 1.9 3:00.54 postmaster\n\n\n\nCpu(s): 0.7% us, 0.8% sy, 0.0% ni, 49.7% id, 48.5% wa, 0.3% hi, 0.0% si\nMem: 1554788k total, 1538580k used, 16208k free, 2872k buffers\nSwap: 1020024k total, 176k used, 1019848k free, 1414908k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n3368 postgres 15 0 37492 29m 11m D 2.3 1.9 5:26.03 postmaster\n\n\nCpu(s): 0.5% us, 5.8% sy, 0.0% ni, 48.7% id, 44.4% wa, 0.5% hi, 0.0% si\nMem: 1554788k total, 1538196k used, 16592k free, 1804k buffers\nSwap: 1020024k total, 176k used, 1019848k free, 1444576k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n3368 postgres 15 0 20956 13m 11m D 11.0 0.9 6:25.10 postmaster\n----------top output end----------\n\n\nI know my database needs a major redesign. But I'm having a hard time\nexplaining the poor performance nevertheless.\n\n\nRegards,\n\nBealach\n\n\n>From: Andreas Kretschmer <[email protected]>\n>To: [email protected]\n>Subject: Re: [PERFORM] Why so slow?\n>Date: Thu, 27 Apr 2006 20:28:23 +0200\n>\n>Bealach-na Bo <[email protected]> schrieb:\n> > The node table is tiny (2500 records). What I'm pulling my hair out\n> > over is that ANY Query, even something as simple as select count(*)\n> > form job_log takes of the order of tens of minutes to complete. Just\n> > now I'm trying to run an explain analyze on the above query, but so\n> > far, it's taken 35min! with no result and there is a postgres process at\n> > the top of top\n> >\n> > What am I doing wrong??\n>\n>The 'explain analyse' don't return a result, but it returns the query\n>plan and importance details, how PG works.\n>\n>That's why you should paste the query and the 'explain analyse' -\n>output. This is very important.\n>\n>Anyway, do you periodical vacuum your DB? 
My guess: no, and that's why\n>you have many dead rows.\n>\n>20:26 < akretschmer|home> ??vacuum\n>20:26 < rtfm_please> For information about vacuum\n>20:26 < rtfm_please> see \n>http://developer.postgresql.org/~wieck/vacuum_cost/\n>20:26 < rtfm_please> or \n>http://www.postgresql.org/docs/current/static/sql-vacuum.html\n>20:26 < rtfm_please> or http://www.varlena.com/varlena/GeneralBits/116.php\n>\n>20:27 < akretschmer|home> ??explain\n>20:27 < rtfm_please> For information about explain\n>20:27 < rtfm_please> see \n>http://techdocs.postgresql.org/oscon2005/robert.treat/OSCON_Explaining_Explain_Public.sxi\n>20:27 < rtfm_please> or http://www.gtsm.com/oscon2003/toc.html\n>20:27 < rtfm_please> or \n>http://www.postgresql.org/docs/current/static/sql-explain.html\n>\n>\n>Read this links for more informations about vacuum and explain.\n>\n>\n>HTH, Andreas\n>--\n>Really, I'm not out to destroy Microsoft. That will just be a completely\n>unintentional side effect. (Linus Torvalds)\n>\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\n>Kaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n", "msg_date": "Fri, 28 Apr 2006 11:41:06 +0000", "msg_from": "\"Bealach-na Bo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On Fri, Apr 28, 2006 at 11:41:06AM +0000, Bealach-na Bo wrote:\n> OK, here is a much more detailed output. I still don't quite\n> understand why simple queries like counting the number of rows in a\n> table should take minutes to complete. Surely, any performance\n> enhancement to be had by vacuuming is closely related to indexes\n> which, in turn, are closely related to sorting and searching. A simple\n> count of 365590 does not involve indexes (or does it??) and should not take \n> minutes. Should I be forcing the\n> way postgresql plans my queries?\n> \n> Here is my first attempt at vacuum that got nowhere and I had to\n> cancel it.\n> \n> ----------psql session start----------\n> vacuum verbose analyze job_log;\n> INFO: vacuuming \"job_log\"\n> INFO: index \"job_log_id_pkey\" now contains 10496152 row versions in 59665 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 28520 index pages have been deleted, 20000 are currently reusable.\n> CPU 1.44s/3.49u sec elapsed 33.71 sec.\n> INFO: index \"idx_job_stop_filter\" now contains 10496152 row versions in \n> 71149 pages\n> DETAIL: 0 index row versions were removed.\n> 24990 index pages have been deleted, 20000 are currently reusable.\n> CPU 2.11s/3.61u sec elapsed 115.69 sec.\n> INFO: index \"idx_job_start_filter\" now contains 10496152 row versions in \n> 57891 pages\n> DETAIL: 0 index row versions were removed.\n> 19769 index pages have been deleted, 19769 are currently reusable.\n> CPU 1.58s/3.44u sec elapsed 23.11 sec.\n> Cancel request sent\n> ----------psql session finish----------\n> \n> \n> I thought that combining indexes would improve things and dropped the\n> 3 separate ones above and created this one\n> \n> ----------psql session start----------\n> create index idx_job_log_filter on job_log(job_name,job_start,job_stop);\n> \n> select count(*) from job_log;\n> count\n> --------\n> 365590\n> (1 row)\n\nThe above shows that the indexes contained 10M rows and 160M of dead\nspace each. 
That means you weren't vacuuming nearly enough.\n\n> explain analyse select count(*) from job_log;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1382171.88..1382171.88 rows=1 width=0) (actual \n> time=207011.882..207011.883 rows=1 loops=1)\n> -> Seq Scan on job_log (cost=0.00..1381257.90 rows=365590 width=0) \n> (actual time=199879.510..206708.523 rows=365590 loops=1)\n> Total runtime: 207014.363 ms\n> (3 rows)\n> ----------psql session finish----------\n> \n> Then I tried another vacuum and decided to be very patient\n> \n> ----------psql session start----------\n> vacuum verbose analyze job_log;\n> INFO: vacuuming \"job_log\"\n> INFO: index \"job_log_id_pkey\" now contains 10496152 row versions in 59665 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 28520 index pages have been deleted, 20000 are currently reusable.\n> CPU 1.39s/3.39u sec elapsed 24.19 sec.\n> INFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.59s/0.20u sec elapsed 10.28 sec.\n> INFO: \"job_log\": removed 2795915 row versions in 368091 pages\n> DETAIL: CPU 33.30s/30.11u sec elapsed 2736.54 sec.\n> INFO: index \"job_log_id_pkey\" now contains 7700230 row versions in 59665 \n> pages\n> DETAIL: 2795922 index row versions were removed.\n> 37786 index pages have been deleted, 20000 are currently reusable.\n> CPU 2.76s/6.45u sec elapsed 152.14 sec.\n> INFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.52s/0.20u sec elapsed 7.75 sec.\n> INFO: \"job_log\": removed 2795922 row versions in 220706 pages\n> DETAIL: CPU 19.81s/17.92u sec elapsed 1615.95 sec.\n> INFO: index \"job_log_id_pkey\" now contains 4904317 row versions in 59665 \n> pages\n> DETAIL: 2795913 index row versions were removed.\n> 45807 index pages have been deleted, 20000 are currently reusable.\n> CPU 2.22s/5.30u sec elapsed 129.02 sec.\n> INFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.50s/0.22u sec elapsed 7.61 sec.\n> INFO: \"job_log\": removed 2795913 row versions in 188139 pages\n> DETAIL: CPU 17.03s/15.37u sec elapsed 1369.45 sec.\n> INFO: index \"job_log_id_pkey\" now contains 2108405 row versions in 59665 \n> pages\n> DETAIL: 2795912 index row versions were removed.\n> 53672 index pages have been deleted, 20000 are currently reusable.\n> CPU 2.13s/4.57u sec elapsed 122.74 sec.\n> INFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.53s/0.23u sec elapsed 8.24 sec.\n> INFO: \"job_log\": removed 2795912 row versions in 187724 pages\n> DETAIL: CPU 16.84s/15.22u sec elapsed 1367.50 sec.\n> INFO: index \"job_log_id_pkey\" now contains 365590 row versions in 59665 \n> pages\n> DETAIL: 1742815 index row versions were removed.\n> 57540 index pages have been deleted, 20000 are currently reusable.\n> CPU 1.38s/2.85u sec elapsed 76.52 sec.\n> INFO: index \"idx_job_log_filter\" now contains 365590 row versions in 15396 \n> pages\n> DETAIL: 
0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.54s/0.31u sec elapsed 7.99 sec.\n> INFO: \"job_log\": removed 1742815 row versions in 143096 pages\n> DETAIL: CPU 12.77s/11.75u sec elapsed 1046.10 sec.\n> INFO: \"job_log\": found 12926477 removable, 365590 nonremovable row \n> versions in 1377602 pages\n\n13M dead rows, and the table is 1.4M pages, or 11GB. No wonder it's\nslow.\n\nYou need to run a vacuum full, and then you need to vacuum far more\noften. If you're running 8.1, turn on autovacuum and cut each default\nscale factor in half, to 0.2 and 0.1.\n\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 7894754 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 124.49s/117.57u sec elapsed 8888.80 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_17308\"\n> INFO: index \"pg_toast_17308_index\" now contains 130 row versions in 12 \n> pages\n> DETAIL: 2543 index row versions were removed.\n> 9 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.11 sec.\n> INFO: \"pg_toast_17308\": removed 2543 row versions in 617 pages\n> DETAIL: CPU 0.04s/0.05u sec elapsed 4.85 sec.\n> INFO: \"pg_toast_17308\": found 2543 removable, 130 nonremovable row \n> versions in 650 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.06s/0.06u sec elapsed 5.28 sec.\n> INFO: analyzing \"rshuser.job_log\"\n> INFO: \"job_log\": scanned 3000 of 1377602 pages, containing 695 live rows \n> and 0 dead rows; 695 rows in sample, 319144 estimated total rows\n> VACUUM\n> \n> \n> explain analyse select count(*) from job_log;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1382171.88..1382171.88 rows=1 width=0) (actual \n> time=207267.094..207267.095 rows=1 loops=1)\n> -> Seq Scan on job_log (cost=0.00..1381257.90 rows=365590 width=0) \n> (actual time=200156.539..206962.895 rows=365590 loops=1)\n> Total runtime: 207267.153 ms\n> (3 rows)\n> \n> ----------psql session finish----------\n> \n> \n> I also took snapshots of top output while I ran the above\n> \n> \n> ----------top output start----------\n> Cpu(s): 0.7% us, 0.7% sy, 0.0% ni, 49.7% id, 48.5% wa, 0.5% hi, 0.0% si\n> Mem: 1554788k total, 1538268k used, 16520k free, 6220k buffers\n> Swap: 1020024k total, 176k used, 1019848k free, 1404280k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 3368 postgres 18 0 37492 29m 11m D 2.7 1.9 3:00.54 postmaster\n> \n> \n> \n> Cpu(s): 0.7% us, 0.8% sy, 0.0% ni, 49.7% id, 48.5% wa, 0.3% hi, 0.0% si\n> Mem: 1554788k total, 1538580k used, 16208k free, 2872k buffers\n> Swap: 1020024k total, 176k used, 1019848k free, 1414908k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 3368 postgres 15 0 37492 29m 11m D 2.3 1.9 5:26.03 postmaster\n> \n> \n> Cpu(s): 0.5% us, 5.8% sy, 0.0% ni, 48.7% id, 44.4% wa, 0.5% hi, 0.0% si\n> Mem: 1554788k total, 1538196k used, 16592k free, 1804k buffers\n> Swap: 1020024k total, 176k used, 1019848k free, 1444576k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 3368 postgres 15 0 20956 13m 11m D 11.0 0.9 6:25.10 postmaster\n> ----------top output end----------\n> \n> \n> I know my database needs a major redesign. 
But I'm having a hard time\n> explaining the poor performance nevertheless.\n> \n> \n> Regards,\n> \n> Bealach\n> \n> \n> >From: Andreas Kretschmer <[email protected]>\n> >To: [email protected]\n> >Subject: Re: [PERFORM] Why so slow?\n> >Date: Thu, 27 Apr 2006 20:28:23 +0200\n> >\n> >Bealach-na Bo <[email protected]> schrieb:\n> >> The node table is tiny (2500 records). What I'm pulling my hair out\n> >> over is that ANY Query, even something as simple as select count(*)\n> >> form job_log takes of the order of tens of minutes to complete. Just\n> >> now I'm trying to run an explain analyze on the above query, but so\n> >> far, it's taken 35min! with no result and there is a postgres process at\n> >> the top of top\n> >>\n> >> What am I doing wrong??\n> >\n> >The 'explain analyse' don't return a result, but it returns the query\n> >plan and importance details, how PG works.\n> >\n> >That's why you should paste the query and the 'explain analyse' -\n> >output. This is very important.\n> >\n> >Anyway, do you periodical vacuum your DB? My guess: no, and that's why\n> >you have many dead rows.\n> >\n> >20:26 < akretschmer|home> ??vacuum\n> >20:26 < rtfm_please> For information about vacuum\n> >20:26 < rtfm_please> see \n> >http://developer.postgresql.org/~wieck/vacuum_cost/\n> >20:26 < rtfm_please> or \n> >http://www.postgresql.org/docs/current/static/sql-vacuum.html\n> >20:26 < rtfm_please> or http://www.varlena.com/varlena/GeneralBits/116.php\n> >\n> >20:27 < akretschmer|home> ??explain\n> >20:27 < rtfm_please> For information about explain\n> >20:27 < rtfm_please> see \n> >http://techdocs.postgresql.org/oscon2005/robert.treat/OSCON_Explaining_Explain_Public.sxi\n> >20:27 < rtfm_please> or http://www.gtsm.com/oscon2003/toc.html\n> >20:27 < rtfm_please> or \n> >http://www.postgresql.org/docs/current/static/sql-explain.html\n> >\n> >\n> >Read this links for more informations about vacuum and explain.\n> >\n> >\n> >HTH, Andreas\n> >--\n> >Really, I'm not out to destroy Microsoft. That will just be a completely\n> >unintentional side effect. (Linus Torvalds)\n> >\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\n> >Kaufbach, Saxony, Germany, Europe. N 51.05082?, E 13.56889?\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 28 Apr 2006 09:58:43 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On April 28, 2006 04:41 am, \"Bealach-na Bo\" <[email protected]> \nwrote:\n> INFO: index \"job_log_id_pkey\" now contains 10496152 row versions in\n> 59665 pages\n\nSee the 10496152 above? That means you have 10496152 rows of data in your \ntable. If those, only 365000 are alive. That means you have basically \nnever vacuumed this table before, correct? \n\nEvery update or delete creates a new dead row. count(*) scans the whole \ntable, dead rows included. 
That's why it takes so long, the table acts as \nthough it has 10496152 rows when doing sequential scans.\n\nDo a VACCUM FULL on it or CLUSTER it on on a index, both of which will empty \nout all the free space and make it behave as it should. Note; VACUUM FULL \nwill take quite a while and requires an exclusive lock on the table. \nCLUSTER also requires an exclusive lock but should be a lot faster for this \ntable.\n\nOh, and get autovacuum setup and working, posthaste.\n\n-- \nNo long, complicated contracts. No actuarial tables to pore over. Social\nSecurity operates on a very simple principle: the politicians take your \nmoney from you and squander it \"\n\n", "msg_date": "Fri, 28 Apr 2006 08:02:59 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "> > INFO: index \"job_log_id_pkey\" now contains 10496152 row versions in\n> > 59665 pages\n>\n>See the 10496152 above? That means you have 10496152 rows of data in your\n>table. If those, only 365000 are alive. That means you have basically\n>never vacuumed this table before, correct?\n\nAlmost correct :| I have vacuumed this table monthly (obviously not nearly \nenough), but it\nis basically a log of events of which there are a very large number of each \nday.\n\n>\n>Every update or delete creates a new dead row. count(*) scans the whole\n>table, dead rows included. That's why it takes so long, the table acts as\n>though it has 10496152 rows when doing sequential scans.\n\nOh! This explains my problems.\n\n>\n>Do a VACCUM FULL on it or CLUSTER it on on a index, both of which will \n>empty\n>out all the free space and make it behave as it should. Note; VACUUM FULL\n>will take quite a while and requires an exclusive lock on the table.\n>CLUSTER also requires an exclusive lock but should be a lot faster for this\n>table.\n>\n>Oh, and get autovacuum setup and working, posthaste.\n\nThe exclusive lock is going to cause problems for me since the table is very \nactive. Is there a way of getting around that or do I need to schedule the \napplication that accesses this table?\n\nI'm running version 8.0. Is there autovacuum for this version too?\n\nRegards,\nBealach\n\n\n", "msg_date": "Fri, 28 Apr 2006 17:31:30 +0000", "msg_from": "\"Bealach-na Bo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why so slow?" }, { "msg_contents": ">The above shows that the indexes contained 10M rows and 160M of dead\n>space each. That means you weren't vacuuming nearly enough.\n\nHow is it that a row in the table can grow to a size far exceeding the sum \nof the maximum sized of the fields it consists of?\n\n>13M dead rows, and the table is 1.4M pages, or 11GB. No wonder it's\n>slow.\n\nI had a look at the disk files and they are HUGE indeed.\n\n>\n>You need to run a vacuum full, and then you need to vacuum far more\n>often. If you're running 8.1, turn on autovacuum and cut each default\n>scale factor in half, to 0.2 and 0.1.\n\nI'm not runing 8.1. Is there a way of doing this in 8.0 or do I need to \nwrite a shell script + cron job?\n\nRegards,\n\nBealach\n\n\n", "msg_date": "Fri, 28 Apr 2006 17:37:30 +0000", "msg_from": "\"Bealach-na Bo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On April 28, 2006 10:31 am, \"Bealach-na Bo\" <[email protected]> \nwrote:\n> The exclusive lock is going to cause problems for me since the table is\n> very active. 
Is there a way of getting around that or do I need to\n> schedule the application that accesses this table?\n\nIf you don't need access to the old data constantly:\n\n - copy the live data to a new table\n - TRUNCATE the old table (which needs an exclusive lock but is very fast)\n - insert the data back in\n - for an event log I would imagine this could work\n\nIf you do need the old data while the app is running then I'm not sure what \nyou can do.\n\n>\n> I'm running version 8.0. Is there autovacuum for this version too?\n\nThere is an autovacuum daemon in contrib; it's more annoying to setup and \nkeep running than the one built into 8.1, but it works OK.\n\n-- \nEat right. Exercise regularly. Die anyway.\n\n", "msg_date": "Fri, 28 Apr 2006 10:43:15 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On Fri, Apr 28, 2006 at 17:37:30 +0000,\n Bealach-na Bo <[email protected]> wrote:\n> >The above shows that the indexes contained 10M rows and 160M of dead\n> >space each. That means you weren't vacuuming nearly enough.\n> \n> How is it that a row in the table can grow to a size far exceeding the sum \n> of the maximum sized of the fields it consists of?\n\nBecause unless you run vacuum, the old deleted rows are not reused. Those\nrows cannot be deleted immediately, because the rows may be visible to\nother transactions. Periodic vacuums are used to find deleted rows which\nare no longer visible to any transactions.\n\nYou probably want to read the following:\nhttp://developer.postgresql.org/docs/postgres/routine-vacuuming.html\n", "msg_date": "Fri, 28 Apr 2006 14:00:05 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "\nAt 03:00 06/04/29, Bruno Wolff III wrote:\n>On Fri, Apr 28, 2006 at 17:37:30 +0000,\n> Bealach-na Bo <[email protected]> wrote:\n> > >The above shows that the indexes contained 10M rows and 160M of dead\n> > >space each. That means you weren't vacuuming nearly enough.\n> >\n> > How is it that a row in the table can grow to a size far exceeding the sum\n> > of the maximum sized of the fields it consists of?\n>\n>Because unless you run vacuum, the old deleted rows are not reused. Those\n>rows cannot be deleted immediately, because the rows may be visible to\n>other transactions. Periodic vacuums are used to find deleted rows which\n>are no longer visible to any transactions.\n>\n>You probably want to read the following:\n>http://developer.postgresql.org/docs/postgres/routine-vacuuming.html\n\nWould recycling dead tuples on the fly (mentioned in p.14 in the article \nhttp://www.postgresql.org/files/developer/transactions.pdf ) significantly \nreduce the need for periodic vacuums?\n\nWithout knowing the internals, I have this simplistic idea: if Postgres \nmaintains the current lowest transaction ID for all active transactions, it \nprobably could recycle dead tuples on the fly. The current lowest \ntransaction ID could be maintained in a doubly linked list with maximum \n<max_connections> entries. A backward link in the tuple header might be \nneeded too.\n\nAny comments?\n\nCheers,\nKC.\n\n\n\n\n", "msg_date": "Sat, 29 Apr 2006 09:46:06 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" 
}, { "msg_contents": "K C Lau <[email protected]> writes:\n> Without knowing the internals, I have this simplistic idea: if Postgres \n> maintains the current lowest transaction ID for all active transactions, it \n> probably could recycle dead tuples on the fly.\n\n[ yawn... ] Yes, we've heard that before. The hard part is getting rid\nof index entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Apr 2006 22:39:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow? " }, { "msg_contents": "\nAt 10:39 06/04/29, Tom Lane wrote:\n>K C Lau <[email protected]> writes:\n> > Without knowing the internals, I have this simplistic idea: if Postgres\n> > maintains the current lowest transaction ID for all active \n> transactions, it\n> > probably could recycle dead tuples on the fly.\n>\n>[ yawn... ] Yes, we've heard that before. The hard part is getting rid\n>of index entries.\n>\n> regards, tom lane\n\nI apologize for simplistic ideas again. I presume that the equivalent tuple \nheader information is not maintained for index entries. What if they are, \nprobably only for the most commonly used index types to allow recycling \nwhere possible? The extra space required would be recycled too. It would \nprobably also help save a trip to the tuple data pages to determine the \nvalidity of index entries during index scans.\n\nCheers,\nKC.\n\n", "msg_date": "Sat, 29 Apr 2006 11:18:10 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow? " }, { "msg_contents": "On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:\n>I apologize for simplistic ideas again. I presume that the equivalent tuple \n>header information is not maintained for index entries. What if they are, \n>probably only for the most commonly used index types to allow recycling \n>where possible? \n\nAlternatively, you could just run vacuum...\n\nMike Stone\n", "msg_date": "Sat, 29 Apr 2006 07:29:28 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": ">If you don't need access to the old data constantly:\n>\n> - copy the live data to a new table\n> - TRUNCATE the old table (which needs an exclusive lock but is very fast)\n> - insert the data back in\n> - for an event log I would imagine this could work\n\nObtaining exclusive locks on this table is very difficult, or rather,\nwill make life very difficult for others, so I'm averse to running\nvacuum full or truncate (though I don't know how fast truncate is)\non a regular basis. I might just get away with running it\nonce a month, but no more.\n\n(Lazy) vacuum, however is a much more palatable option. But (lazy)\nvacuum does not always reclaim space. Will this affect performance and\ndoes this mean that a full vacuum is unavoidable? Or can I get away\nwith daily (lazy) vacuums? Disk space is not an issue for me, but\nperformance is a BIG issue. Of course, I realize that I could improve\nthe latter with better schema design - I'm working on a new schema,\nbut can't kill this one yet :|.\n\nRegards,\n\nBealach\n\n\n", "msg_date": "Sun, 30 Apr 2006 10:29:55 +0000", "msg_from": "\"Bealach-na Bo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why so slow?" 
}, { "msg_contents": "\"Bealach-na Bo\" <[email protected]> wrote:\n\n> >If you don't need access to the old data constantly:\n> >\n> > - copy the live data to a new table\n> > - TRUNCATE the old table (which needs an exclusive lock but is very fast)\n> > - insert the data back in\n> > - for an event log I would imagine this could work\n> \n> Obtaining exclusive locks on this table is very difficult, or rather,\n> will make life very difficult for others, so I'm averse to running\n> vacuum full or truncate (though I don't know how fast truncate is)\n> on a regular basis. I might just get away with running it\n> once a month, but no more.\n> \n> (Lazy) vacuum, however is a much more palatable option. But (lazy)\n> vacuum does not always reclaim space. Will this affect performance and\n> does this mean that a full vacuum is unavoidable? Or can I get away\n> with daily (lazy) vacuums? Disk space is not an issue for me, but\n> performance is a BIG issue. Of course, I realize that I could improve\n> the latter with better schema design - I'm working on a new schema,\n> but can't kill this one yet :|.\n\nMy understanding is basically that if you vacuum with the correct\nfrequency, you'll never need to vacuum full. This is why the\nautovacuum system is so nice, it adjusts the frequency of vacuum according\nto how much use the DB is getting.\n\nThe problem is that if you get behind, plain vacuum is unable to get things\ncaught up again, and a vacuum full is required to recover disk space.\n\nAt this point, it seems like you need to do 2 things:\n1) Schedule lazy vacuum to run, or configure autovacuum.\n2) Schedule some downtime to run \"vacuum full\" to recover some disk space.\n\n#2 only needs done once to get you back on track, assuming that #1 is\ndone properly.\n\nA little bit of wasted space in the database is OK, and lazy vacuum done\non a reasonable schedule will keep the level of wasted space to an\nacceptable level.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Sun, 30 Apr 2006 10:03:46 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "Hi, Bill,\n\nBill Moran wrote:\n\n> My understanding is basically that if you vacuum with the correct\n> frequency, you'll never need to vacuum full. This is why the\n> autovacuum system is so nice, it adjusts the frequency of vacuum according\n> to how much use the DB is getting.\n\nAdditonally, the \"free_space_map\" setting has to be high enough, it has\nto cover enough space to put in all pages that get dead rows between two\nvacuum runs.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 02 May 2006 19:16:12 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:\n> \n> At 10:39 06/04/29, Tom Lane wrote:\n> >K C Lau <[email protected]> writes:\n> >> Without knowing the internals, I have this simplistic idea: if Postgres\n> >> maintains the current lowest transaction ID for all active \n> >transactions, it\n> >> probably could recycle dead tuples on the fly.\n> >\n> >[ yawn... ] Yes, we've heard that before. 
The hard part is getting rid\n> >of index entries.\n> >\n> > regards, tom lane\n> \n> I apologize for simplistic ideas again. I presume that the equivalent tuple \n> header information is not maintained for index entries. What if they are, \n> probably only for the most commonly used index types to allow recycling \n> where possible? The extra space required would be recycled too. It would \n> probably also help save a trip to the tuple data pages to determine the \n> validity of index entries during index scans.\n\nYou should read through the -hacker archives, most of this stuff has\nbeen gone over multiple times.\n\nStoring tuple header info in indexes would be a huge drawback, as it\nwould result in about 20 extra bytes per index entry.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 15:13:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:\n> At this point, it seems like you need to do 2 things:\n> 1) Schedule lazy vacuum to run, or configure autovacuum.\n> 2) Schedule some downtime to run \"vacuum full\" to recover some disk space.\n> \n> #2 only needs done once to get you back on track, assuming that #1 is\n> done properly.\n\nYou'll also want to reindex since vacuum full won't clean the indexes\nup. You might also want to read\nhttp://www.pervasivepostgres.com/instantkb13/article.aspx?id=10087 and\nhttp://www.pervasivepostgres.com/instantkb13/article.aspx?id=10116.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 15:17:41 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:\n> > At this point, it seems like you need to do 2 things:\n> > 1) Schedule lazy vacuum to run, or configure autovacuum.\n> > 2) Schedule some downtime to run \"vacuum full\" to recover some disk space.\n> > \n> > #2 only needs done once to get you back on track, assuming that #1 is\n> > done properly.\n> \n> You'll also want to reindex since vacuum full won't clean the indexes\n> up. You might also want to read\n> http://www.pervasivepostgres.com/instantkb13/article.aspx?id=10087 and\n> http://www.pervasivepostgres.com/instantkb13/article.aspx?id=10116.\n\nReindexing is in a different class than vacuuming. Neglecting to vacuum\ncreates a problem that gets worse and worse as time goes on. Neglecting\nto reindex does not create an infinately growing problem, since empty\nindex pages are recycled automatically. It's also conceivable that some\nusage patterns don't need to reindex at all.\n\nhttp://www.postgresql.org/docs/8.1/interactive/routine-reindex.html\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Tue, 2 May 2006 19:28:34 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" 
}, { "msg_contents": "On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:\n>Reindexing is in a different class than vacuuming.\n\nKinda, but it is in the same class as vacuum full. If vacuum neglect (or \ndramatic change in usage) has gotten you to the point of 10G of overhead \non a 2G table you can get a dramatic speedup if you vacuum full, by \ndumping a lot of unused space. But in that case you might have a similar \namount of overhead in indices, which isn't going to go away unless you \nreindex. In either case the unused rows will be reused as needed, but if \nyou know you aren't going to need the space again anytime soon you might \nneed to vacuum full/reindex.\n\nMike Stone\n", "msg_date": "Wed, 03 May 2006 07:22:21 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" }, { "msg_contents": "On Wed, May 03, 2006 at 07:22:21AM -0400, Michael Stone wrote:\n> On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:\n> >Reindexing is in a different class than vacuuming.\n> \n> Kinda, but it is in the same class as vacuum full. If vacuum neglect (or \n> dramatic change in usage) has gotten you to the point of 10G of overhead \n> on a 2G table you can get a dramatic speedup if you vacuum full, by \n> dumping a lot of unused space. But in that case you might have a similar \ns/might/will/\n> amount of overhead in indices, which isn't going to go away unless you \n> reindex. In either case the unused rows will be reused as needed, but if \n> you know you aren't going to need the space again anytime soon you might \n> need to vacuum full/reindex.\n\nYou can also do a CLUSTER on the table, which rewrites both the table\nand all the indexes from scratch. But there was some kind of issue with\ndoing that that was fixed in HEAD, but I don't think it's been\nback-ported. I also don't remember exactly what the issue was... :/\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 4 May 2006 12:53:59 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so slow?" } ]
[ { "msg_contents": "I have small database running in 8.1.3 in W2K server.\nThe following query causes Postgres process to use 100% CPU and seems to run \nforever.\nIf I change '1EEKPANT' to less frequently used item code, it runs fast.\n\nHow to speed it up ?\n\nset search_path to public,firma2;\n select rid.toode\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n LEFT JOIN artliik ON toode.grupp=artliik.grupp and\n toode.liik=artliik.liik\n WHERE (NOT '0' or dok.kinnitatud)\n AND dok.kuupaev BETWEEN '2006-04-08' AND '2006-04-27'\n AND rid.toode='1EEKPANT'\n AND (NOT dok.eimuuda or '0' ) and\n dok.laonr='1'::float8 and\n POSITION( dok.doktyyp IN 'OSIDVGYKIF')!=0 AND\n ( ( ('1' OR (POSITION(dok.doktyyp IN 'TUNH')=0 and\n (rid.kogus<0 or\n ('1' and rid.kogus=0))))\n and\n POSITION(dok.doktyyp IN 'VGYKITDNHMEBARCFJ' )!=0\n AND CASE WHEN NOT dok.objrealt OR dok.doktyyp='I' THEN dok.yksus \nELSE rid.kuluobjekt END LIKE 'LADU%' ESCAPE '!'\n )\n OR\n (POSITION(dok.doktyyp IN 'OSIUDP' )!=0\n AND CASE WHEN dok.objrealt THEN rid.kuluobjekt ELSE dok.sihtyksus \nEND LIKE 'LADU%' ESCAPE '!'\n )\n )\n AND dok.kuupaev||dok.kellaaeg BETWEEN '2006-04-08' AND '2006-04-2723 59'\n AND ('0' or ( length(trim(rid.toode))>2 AND\n rid.toode is NOT NULL))\n\n AND ( LENGTH('' )=0 OR rid.partii='' OR (dok.doktyyp='I' AND\n rid.kulupartii='' ) )\n AND (NOT dok.inventuur or rid.kogus!=0)\n AND dok.dokumnr!= 0\n AND ( artliik.arttyyp NOT IN ('Teenus', 'Komplekt' ) OR artliik.arttyyp IS \nNULL)\n\n\nexplain returns:\n\n\"Nested Loop Left Join (cost=0.00..1828.18 rows=1 width=24)\"\n\" Filter: (((\"inner\".arttyyp <> 'Teenus'::bpchar) AND (\"inner\".arttyyp <> \n'Komplekt'::bpchar)) OR (\"inner\".arttyyp IS NULL))\"\n\" -> Nested Loop (cost=0.00..1822.51 rows=1 width=43)\"\n\" -> Nested Loop (cost=0.00..1816.56 rows=1 width=24)\"\n\" Join Filter: ((\"outer\".dokumnr = \"inner\".dokumnr) AND \n(((\"position\"('VGYKITDNHMEBARCFJ'::text, (\"outer\".doktyyp)::text) <> 0) AND \n(CASE WHEN ((NOT (\"outer\".objrealt)::boolean) OR (\"outer\".doktyyp = \n'I'::bpchar)) THEN \"outer\".yksus ELSE \"inner (..)\"\n\" -> Seq Scan on dok (cost=0.00..787.80 rows=1 width=39)\"\n\" Filter: ((kuupaev >= '2006-04-08'::date) AND (kuupaev \n<= '2006-04-27'::date) AND (NOT (eimuuda)::boolean) AND ((laonr)::double \nprecision = 1::double precision) AND (\"position\"('OSIDVGYKIF'::text, \n(doktyyp)::text) <> 0) AND (((kuupaev):: (..)\"\n\" -> Seq Scan on rid (cost=0.00..1019.42 rows=249 width=51)\"\n\" Filter: ((toode = '1EEKPANT'::bpchar) AND \n(length(btrim((toode)::text)) > 2) AND (toode IS NOT NULL))\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..5.94 rows=1 \nwidth=43)\"\n\" Index Cond: ('1EEKPANT'::bpchar = toode)\"\n\" -> Index Scan using artliik_pkey on artliik (cost=0.00..5.65 rows=1 \nwidth=88)\"\n\" Index Cond: ((\"outer\".grupp = artliik.grupp) AND (\"outer\".liik = \nartliik.liik))\"\n\n\nAndrus. \n\n\n", "msg_date": "Thu, 27 Apr 2006 21:44:49 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "CPU usage goes to 100%, query seems to ran forever" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> I have small database running in 8.1.3 in W2K server.\n> The following query causes Postgres process to use 100% CPU and seems to run \n> forever.\n> If I change '1EEKPANT' to less frequently used item code, it runs fast.\n\nYou have ANALYZEd all these tables recently, I hope? 
The planner\ncertainly doesn't think this query will take very long.\n\nTo find out what's wrong, you're going to have to be patient enough to\nlet an EXPLAIN ANALYZE run to completion. Plain EXPLAIN won't tell.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Apr 2006 15:09:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU usage goes to 100%, query seems to ran forever " }, { "msg_contents": "> You have ANALYZEd all these tables recently, I hope? The planner\n> certainly doesn't think this query will take very long.\n\nI have autovacuum running so I expect it takes care of ANALYZE, isn't it ?\n\nI ran also analyze command before running explain analyze.\n\n> To find out what's wrong, you're going to have to be patient enough to\n> let an EXPLAIN ANALYZE run to completion. Plain EXPLAIN won't tell.\n\nHere it is running in my local computer. I'm expecting run time no more 1 \nsecond\n\n\"Nested Loop Left Join (cost=0.00..1829.95 rows=1 width=24) (actual\ntime=492064.990..492064.990 rows=0 loops=1)\"\n\" Filter: (((\"inner\".arttyyp <> 'Teenus'::bpchar) AND (\"inner\".arttyyp <>\n'Komplekt'::bpchar)) OR (\"inner\".arttyyp IS NULL))\"\n\" -> Nested Loop (cost=0.00..1825.01 rows=1 width=43) (actual\ntime=492064.983..492064.983 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..1819.04 rows=1 width=24) (actual\ntime=492064.978..492064.978 rows=0 loops=1)\"\n\" Join Filter: ((\"outer\".dokumnr = \"inner\".dokumnr) AND\n(((\"position\"('VGYKITDNHMEBARCFJ'::text, (\"outer\".doktyyp)::text) <> 0) AND\n(CASE WHEN ((NOT (\"outer\".objrealt)::boolean) OR (\"outer\".doktyyp =\n'I'::bpchar)) THEN \"outer\".yksus ELSE \"inner (..)\"\n\" -> Seq Scan on dok (cost=0.00..787.80 rows=1 width=39)\n(actual time=0.152..878.198 rows=7670 loops=1)\"\n\" Filter: ((kuupaev >= '2006-04-08'::date) AND (kuupaev\n<= '2006-04-27'::date) AND (NOT (eimuuda)::boolean) AND ((laonr)::double\nprecision = 1::double precision) AND (\"position\"('OSIDVGYKIF'::text,\n(doktyyp)::text) <> 0) AND (((kuupaev):: (..)\"\n\" -> Seq Scan on rid (cost=0.00..1019.42 rows=315 width=51)\n(actual time=22.003..62.216 rows=839 loops=7670)\"\n\" Filter: ((toode = '1EEKPANT'::bpchar) AND\n(length(btrim((toode)::text)) > 2) AND (toode IS NOT NULL))\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..5.96 rows=1\nwidth=43) (never executed)\"\n\" Index Cond: ('1EEKPANT'::bpchar = toode)\"\n\" -> Index Scan using artliik_pkey on artliik (cost=0.00..4.92 rows=1\nwidth=31) (never executed)\"\n\" Index Cond: ((\"outer\".grupp = artliik.grupp) AND (\"outer\".liik =\nartliik.liik))\"\n\"Total runtime: 492065.840 ms\"\n\n\nAndrus. \n\n\n", "msg_date": "Fri, 28 Apr 2006 12:00:35 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU usage goes to 100%, query seems to ran forever" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> Here it is running in my local computer. 
I'm expecting run time no more 1 \n> second\n\nSomething seems to have truncated your EXPLAIN output, but anyway we\ncan see where the problem is:\n\n> \" -> Seq Scan on dok (cost=0.00..787.80 rows=1 width=39)\n> (actual time=0.152..878.198 rows=7670 loops=1)\"\n> \" Filter: ((kuupaev >= '2006-04-08'::date) AND (kuupaev\n> <= '2006-04-27'::date) AND (NOT (eimuuda)::boolean) AND ((laonr)::double\n> precision = 1::double precision) AND (\"position\"('OSIDVGYKIF'::text,\n> (doktyyp)::text) <> 0) AND (((kuupaev):: (..)\"\n\nThe planner is expecting to get one row from \"dok\" passing the filter\ncondition, and hence chooses a plan that is suitable for a small number\nof rows ... but in reality there are 7670 rows matching the filter\ncondition, and that's what blows the runtime out of the water. (Most of\nthe runtime is actually going into 7670 repeated scans of \"rid\", which\nwouldn't have happened with another plan type.)\n\nSo you need to see about getting that estimate to be more accurate.\nFirst thing is to make sure that \"dok\" has been ANALYZEd --- just do it\nby hand. If that doesn't change the EXPLAIN plan then you need to work\nharder. I can see at least three things you are doing that are\nunnecessarily destroying the planner's ability to estimate the number of\nmatching rows:\n\n dok.laonr='1'::float8 and\n\nSince laonr apparently isn't float8, this forces a runtime type\nconversion as well as interfering with statistics use. (The planner\nwill have ANALYZE stats about dok.laonr, but the connection to\ndok.laonr::float8 escapes it.) Just write the constant with quotes\nand no type coercion.\n\n POSITION( dok.doktyyp IN 'OSIDVGYKIF')!=0 AND\n\nThis is completely unestimatable given the available statistics, and it\ndoesn't look to me like it is all that great a semantic representation\neither. Perhaps the query that's really meant here is \"dok.doktypp IN\n('O','S','I', ...)\"? If so, you should say what you mean, not play\ngames with converting the query into some strange string operation.\n\n AND dok.kuupaev||dok.kellaaeg BETWEEN '2006-04-08' AND '2006-04-2723 59'\n\nThis is another case where the planner is not going to have any ability\nto make a useful estimate, and it's because you are using a crummy\nrepresentation of your data. You should merge those two columns into\none timestamp column and just do a simple BETWEEN test.\n\n\nBy and large, unnatural representations of data that you use in WHERE\nclauses are going to cost you big-time in SQL queries. It's worth\ntaking time up front to design a clean table schema, and taking time\nto revise it when requirements change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Apr 2006 10:57:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU usage goes to 100%, query seems to ran forever " }, { "msg_contents": "> Something seems to have truncated your EXPLAIN output, but anyway we\n> can see where the problem is:\n\nI copied it from pgAdmin in 640x480 screen resolution in XP\nMaybe pgAdmin bug ?\n\n> The planner is expecting to get one row from \"dok\" passing the filter\n> condition, and hence chooses a plan that is suitable for a small number\n> of rows ... but in reality there are 7670 rows matching the filter\n> condition, and that's what blows the runtime out of the water. 
(Most of\n> the runtime is actually going into 7670 repeated scans of \"rid\", which\n> wouldn't have happened with another plan type.)\n\nI added index\n\nCREATE INDEX rid_toode_idx ON firma2.rid USING btree (toode);\n\nand query start working fast !\n\n> So you need to see about getting that estimate to be more accurate.\n> First thing is to make sure that \"dok\" has been ANALYZEd --- just do it\n> by hand.\n\nAs I wrote I have autovacuum running. Is'nt this sufficient ?\n\n> I can see at least three things you are doing that are\n> unnecessarily destroying the planner's ability to estimate the number of\n> matching rows:\n>\n> dok.laonr='1'::float8 and\n> Since laonr apparently isn't float8, this forces a runtime type\n> conversion as well as interfering with statistics use. (The planner\n> will have ANALYZE stats about dok.laonr, but the connection to\n> dok.laonr::float8 escapes it.) Just write the constant with quotes\n> and no type coercion.\n\nI re-wrote it as\n\ndok.laonr=1\n\nthis query is automatically generated by VFP and ODBC parameter substitution \nwhich adds those type conversions.\nVFP has only float8 type and it probably forces ODBC driver convert numbers \nto float8\n\n> POSITION( dok.doktyyp IN 'OSIDVGYKIF')!=0 AND\n>\n> This is completely unestimatable given the available statistics, and it\n> doesn't look to me like it is all that great a semantic representation\n> either. Perhaps the query that's really meant here is \"dok.doktypp IN\n> ('O','S','I', ...)\"? If so, you should say what you mean, not play\n> games with converting the query into some strange string operation.\n\n'OSID ...' is a string parameter substituted to SELECT template.\nchanging this to IN ( 'O', 'S', .. requires re-writing parts of code and I'm \nnot sure it makes code faster.\n\n> AND dok.kuupaev||dok.kellaaeg BETWEEN '2006-04-08' AND '2006-04-2723 59'\n\n> This is another case where the planner is not going to have any ability\n> to make a useful estimate, and it's because you are using a crummy\n> representation of your data. You should merge those two columns into\n> one timestamp column and just do a simple BETWEEN test.\n> By and large, unnatural representations of data that you use in WHERE\n> clauses are going to cost you big-time in SQL queries. It's worth\n> taking time up front to design a clean table schema, and taking time\n> to revise it when requirements change.\n\ndate range test in other part of where clause\n\ndok.kuupaev BETWEEN ....\n\n is optimizable.\n\nAND dok.kuupaev||dok.kellaaeg adds time range test to date range.\nThere are less that some thousands documents per day.\n\nWasting time to re-engineer database and deployed application seems not \nreasonable in this case.\n\nAndrus. \n\n\n", "msg_date": "Fri, 28 Apr 2006 19:19:33 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU usage goes to 100%, query seems to ran forever" } ]
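A stripped-down sketch of the query with the three predicate rewrites suggested in the thread above, keeping the table and column names from the original statement. The merged date-plus-time test appears only as a comment because the single timestamp column it needs (called dokaeg here) is hypothetical and does not exist in the posted schema.

SELECT rid.toode
FROM dok JOIN rid USING (dokumnr)
WHERE dok.laonr = 1                                 -- plain literal, no ::float8 cast
  AND dok.doktyyp IN ('O','S','I','D','V','G','Y','K','F')   -- instead of POSITION(doktyyp IN 'OSIDVGYKIF')
  AND dok.kuupaev BETWEEN '2006-04-08' AND '2006-04-27'
  -- AND dok.dokaeg BETWEEN '2006-04-08 00:00:00' AND '2006-04-27 23:59:00'  -- hypothetical merged timestamp column
  AND rid.toode = '1EEKPANT';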
[ { "msg_contents": "a) I have absolutely no idea regarding price tags when it comes to SUN hardware, last time I worked with SUN gear was in ´01 so you'll have to check with your local (SUN-)supplier for uptodate prices.\n\nb) Same here (no idea). But I'd be surprised if UFS (and ZFS) was unable to take advantage of battery backed write cache...\n\nRegards,\nMikael\n\n\n-----Original Message-----\nFrom: Guoping Zhang [mailto:[email protected]] \nSent: den 28 april 2006 07:35\nTo: Mikael Carneholm; [email protected]\nCc: Guoping Zhang (E-mail)\nSubject: RE: [PERFORM] how unsafe (or worst scenarios) when setting fsync OFF for postgresql\n\nHi, Mikael,\n\nWe have not looked at this option yet, but very good direction though.\n\nTwo issues are unsure:\na) we are on SUN SPARC platform, unsure what the price tag for such a hardware device with SUN brand?\n\nb) how well does UFS (or a new ZFS) work with the device (as ext3 can mount with data=writeback)?\n \nCheers and regards,\nGuoping\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Mikael Carneholm\nSent: 2006Äê4ÔÂ27ÈÕ 17:43\nTo: [email protected]; [email protected]\nSubject: Re: [PERFORM] how unsafe (or worst scenarios) when setting fsync OFF for postgresql\n\n\nGet a SCSI controller with a battery backed cache, and mount the disks with data=writeback (if you use ext3). If you loose power in the middle of a transaction, the battery will ensure that the write operation still completes. With asynch writing setup like this, fsync operations will return almost immidiately giving you performance close to that of running with fsync off.\n\nRegards,\nMikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Guoping Zhang\nSent: den 27 april 2006 08:31\nTo: [email protected]\nCc: Guoping Zhang (E-mail)\nSubject: [PERFORM] how unsafe (or worst scenarios) when setting fsync OFF for postgresql\n\nHi,.\n\nWe are new to Postgresql. I am appreciated if the following question can be answered.\n\nOur application has a strict speed requirement for DB operation. Our tests show that it takes about 10secs for the operation when setting fsync off, but takes about 70 seconds when setting fsync ON (with other WAL related parametered tuned).\n\nWe have to looking at setting fsync OFF option for performance reason, our questions are\n\n a) if we set fsync OFF and anything (very low chance though) like OS crash, loss of power, or hardware fault happened, can postgresql rolls back to the state that the last checkpoint was done ( but all the operations after that is lost)\n\n b) Does this roll back to last checkpoint can ensure the database back to consistent state?\n\n c) What is worst scenarios if setting fsync OFF in term of database safety. 
We try to avoid to restore the database from nightly backup.\n\nWe view our application is not that data loss critical, say loss of five minutes of data and operation occasionally, but the database integrity and consistency must be kept.\n\nCan we set fsync OFF for the performance benefit, have the risk of only\n5 minutes data loss or much worse?\n\nThanks in advance.\n\nRegards,\n\nGuoping\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 28 Apr 2006 10:51:22 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how unsafe (or worst scenarios) when setting fsync OFF for\n\tpostgresql" } ]
[ { "msg_contents": "Hi,\n \n I have a performance problem with Postgresql version 8.1 installed on a Fedora Core release 4 (Stentz) with kernel version 2.6.11.\n \n The machine I am working on has 512MB of RAM and Pentium III 800 MHz CPU.\n \n I have only one table in the database which consists of 256 columns and 10000 rows. Each column is of float type and each row corresponds to a vector in my application. What I want to do is to compute the distance between a predefined vector in hand and the ones in the database.\n \n The computation proceeds according to the following pseudocode:\n \n for(i=1; i<=256 ; i++){\n distance += abs(x1_i - x2_i);\n }\n \n where x1_i denotes the vector in hand's i coordinate and x2_i denotes the i\n coordinate of the vector in the database.\n \n The distance computation have to be done for all the vectors in the database\n by means of a query and the result set should be sorted in terms of the\n computed distances.\n \n When I implement the query and measure the time spent for it in an application\n I see that the query is handled in more than 8 seconds which is undesirable in\n my application.\n \n Here what I want to ask you all is that, is it a normal performance for a\n computer with the properties that I have mentioned above? Is there any solution\n in your mind to increase the performance of my query?\n \n To make it more undestandable, I should give the query for vectors with size\n 3, but in my case their size is 256.\n \n select\n id as vectorid,\n abs(40.9546-x2_1)+abs(-72.9964-x2_2)+abs(53.5348-x2_3) as distance\n from vectordb\n order by distance\n \n Thank you all for your help.\n \n \n -\n gulsah\n \n\t\t\n---------------------------------\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1&cent;/min.\nHi, I have a performance problem with Postgresql version 8.1 installed on a Fedora Core release 4 (Stentz) with kernel version 2.6.11. The machine I am working on has 512MB of RAM and Pentium III 800 MHz CPU. I have only one table in the database which consists of 256 columns and 10000 rows. Each column is of float type and each row corresponds to a vector in my application. What I want to do is to compute the distance between a predefined vector in hand and the ones in the database. The computation proceeds according to the following pseudocode:         for(i=1; i<=256 ; i++){                 distance += abs(x1_i - x2_i);         } where x1_i denotes the vector in hand's i coordinate and x2_i denotes the i coordinate of the vector in the database. The\n distance computation have to be done for all the vectors in the database by means of a query and the result set should be sorted in terms of the computed distances. When I implement the query and measure the time spent for it in an application I see that the query is handled in more than 8 seconds which is undesirable in my application. Here what I want to ask you all is that, is it a normal performance for a computer with the properties that I have mentioned above? Is there any solution in your mind to increase the performance of my query? To make it more undestandable, I should give the query for vectors with size 3, but in my case their size is 256. select id as vectorid, abs(40.9546-x2_1)+abs(-72.9964-x2_2)+abs(53.5348-x2_3) as distance from vectordb order by distance Thank you all for your help. - gulsah \nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. 
Great rates starting at 1¢/min.", "msg_date": "Fri, 28 Apr 2006 04:30:59 -0700 (PDT)", "msg_from": "gulsah <[email protected]>", "msg_from_op": true, "msg_subject": "query performance question" }, { "msg_contents": "You are pulling a fair amount of data from the database and doing a lot\nof computation in the SQL. I'm not sure how fast this query could be\nexpected to run, but I had one idea. If you've inserted and deleted a\nlot into this table, you will need to run vacuum ocasionally. If you\nhaven't been doing that, I would try a VACUUM FULL ANALYZE on the table.\n(That will take a lock on the table and prevent clients from reading\ndata while it is running.)\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of gulsah\nSent: Friday, April 28, 2006 6:31 AM\nTo: [email protected]\nSubject: [PERFORM] query performance question\n\n\nHi,\n\nI have a performance problem with Postgresql version 8.1 installed on a\nFedora Core release 4 (Stentz) with kernel version 2.6.11.\n\nThe machine I am working on has 512MB of RAM and Pentium III 800 MHz\nCPU.\n\nI have only one table in the database which consists of 256 columns and\n10000 rows. Each column is of float type and each row corresponds to a\nvector in my application. What I want to do is to compute the distance\nbetween a predefined vector in hand and the ones in the database.\n\nThe computation proceeds according to the following pseudocode:\n\n for(i=1; i<=256 ; i++){\n distance += abs(x1_i - x2_i);\n }\n\nwhere x1_i denotes the vector in hand's i coordinate and x2_i denotes\nthe i\ncoordinate of the vector in the database.\n\nThe distance computation have to be done for all the vectors in the\ndatabase\nby means of a query and the result set should be sorted in terms of the\ncomputed distances.\n\nWhen I implement the query and measure the time spent for it in an\napplication\nI see that the query is handled in more than 8 seconds which is\nundesirable in\nmy application.\n\nHere what I want to ask you all is that, is it a normal performance for\na\ncomputer with the properties that I have mentioned above? Is there any\nsolution\nin your mind to increase the performance of my query?\n\nTo make it more undestandable, I should give the query for vectors with\nsize\n3, but in my case their size is 256.\n\nselect\nid as vectorid,\nabs(40.9546-x2_1)+abs(-72.9964-x2_2)+abs(53.5348-x2_3) as distance\nfrom vectordb\norder by distance\n\nThank you all for your help.\n\n\n-\ngulsah\n\n\n\n\n _____ \n\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great\n<http://us.rd.yahoo.com/mail_us/taglines/postman7/*http://us.rd.yahoo.co\nm/evt=39666/*http://messenger.yahoo.com> rates starting at 1¢/min.\n\n\n\n\nMessage\n\n\nYou \nare pulling a fair amount of data from the database and doing a lot of \ncomputation in the SQL.  I'm not sure how fast this query could be expected \nto run, but I had one idea.  If you've \ninserted and deleted a lot into this table, you will need to run vacuum \nocasionally.  If you haven't been doing that, I would try a VACUUM FULL \nANALYZE on the table.  
(That will take a lock on the table and prevent \nclients from reading data while it is running.)\n \n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n gulsahSent: Friday, April 28, 2006 6:31 AMTo: \n [email protected]: [PERFORM] query \n performance questionHi,I have a performance \n problem with Postgresql version 8.1 installed on a Fedora Core release 4 \n (Stentz) with kernel version 2.6.11.The machine I am working on has \n 512MB of RAM and Pentium III 800 MHz CPU.I have only one table in the \n database which consists of 256 columns and 10000 rows. Each column is of float \n type and each row corresponds to a vector in my application. What I want to do \n is to compute the distance between a predefined vector in hand and the ones in \n the database.The computation proceeds according to the following \n pseudocode:        for(i=1; \n i<=256 ; \n i++){                \n distance += abs(x1_i - x2_i);        \n }where x1_i denotes the vector in hand's i coordinate and x2_i denotes \n the icoordinate of the vector in the database.The distance \n computation have to be done for all the vectors in the databaseby means of \n a query and the result set should be sorted in terms of thecomputed \n distances.When I implement the query and measure the time spent for it \n in an applicationI see that the query is handled in more than 8 seconds \n which is undesirable inmy application.Here what I want to ask you \n all is that, is it a normal performance for acomputer with the properties \n that I have mentioned above? Is there any solutionin your mind to increase \n the performance of my query?To make it more undestandable, I should \n give the query for vectors with size3, but in my case their size is \n 256.selectid as \n vectorid,abs(40.9546-x2_1)+abs(-72.9964-x2_2)+abs(53.5348-x2_3) as \n distancefrom vectordborder by distanceThank you all for your \n help.-gulsah\n\n\n Talk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great \n rates starting at 1¢/min.", "msg_date": "Sun, 30 Apr 2006 19:36:11 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" } ]
[ { "msg_contents": "Hello,\n\nI'm searching for a comfortable way to get a variable-size bunch of user\nspecified Objects via a single prepared statement, so I wanted to submit\nan ARRAY.\n\nHowever, the query planner seems to refuse to make index scans even with\n8.1:\n\ntestdb=# EXPLAIN SELECT * from streets WHERE link_id = ANY(ARRAY[1,2,3]);\n QUERY PLAN\n--------------------------------------------------------------------\n Seq Scan on streets (cost=0.00..288681.74 rows=1713754 width=393)\n Filter: (link_id = ANY ('{1,2,3}'::integer[]))\n(2 rows)\n\n\n\nVia IN, it works fine, but hast the disadvantage that we cannot use\nprepared statements effectively:\n\ntestdb=# explain select * from streets where link_id in (1,2,3);\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------\n Bitmap Heap Scan on streets (cost=6.02..16.08 rows=5 width=393)\n Recheck Cond: ((link_id = 1) OR (link_id = 2) OR (link_id = 3))\n -> BitmapOr (cost=6.02..6.02 rows=5 width=0)\n -> Bitmap Index Scan on streets_link_id_idx (cost=0.00..2.01\nrows=2 width=0)\n Index Cond: (link_id = 1)\n -> Bitmap Index Scan on streets_link_id_idx (cost=0.00..2.01\nrows=2 width=0)\n Index Cond: (link_id = 2)\n -> Bitmap Index Scan on streets_link_id_idx (cost=0.00..2.01\nrows=2 width=0)\n Index Cond: (link_id = 3)\n(9 rows)\n\n\nAnd on the net, I found a nice trick via an \"array flattening\" function,\nwhich at least uses a nested loop of index scans instead of an index\nbitmap scan:\n\ntestdb=# CREATE FUNCTION flatten_array(anyarray) RETURNS SETOF\nanyelement AS\ntestdb-# 'SELECT ($1)[i] FROM (SELECT\ngenerate_series(array_lower($1,1),array_upper($1,1)) as i) as foo;'\ntestdb-# language SQL STRICT IMMUTABLE;\n\n\ntestdb=# EXPLAIN SELECT * from streets JOIN flatten_array(ARRAY[1,2,3])\non flatten_array=link_id;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..5882.15 rows=1566 width=397)\n -> Function Scan on flatten_array (cost=0.00..12.50 rows=1000 width=4)\n -> Index Scan using treets_link_id_idx on streets (cost=0.00..5.84\nrows=2 width=393)\n Index Cond: (\"outer\".flatten_array = streets.link_id)\n(4 rows)\n\n\nCurrently, we're planning to use the array flattening approach, but are\nthere any plans to enhance the query planner for the direct ARRAY approach?\n\nThanks,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 28 Apr 2006 16:46:27 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Arrays and index scan" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n> However, the query planner seems to refuse to make index scans even with\n> 8.1:\n> testdb=# EXPLAIN SELECT * from streets WHERE link_id = ANY(ARRAY[1,2,3]);\n\nYup, that was just done in HEAD a couple months ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Apr 2006 11:35:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays and index scan " } ]
[ { "msg_contents": "This is a question that I also posted on Dell hardware forums, and I realize it \nprobably belongs there more than here. But I am thinking someone might have \nsome anecdotal information that could help me and this post may help someone \nelse down the road.\n\nMy PowerEdge 1800 (dual 3ghz Xeon, 3GB ram) came with a single SATA drive. I \ninstalled and developed a modest database application (Postgresql, Windows 2003 \nServer, Python) on it and was pretty impressed with the performance.\n\nI wanted to add some more space, speed and reliability so I bought a used 3ware \n9500s SATA raid card. I bought three more of the same drives (Seagate \nST3808110as) and configured them in RAID 5 (3 in RAID one as hot spare). I \nreinstalled the OS and software (my efforts to ghost were not \nfruitful...another story), and the first thing I did was run a sql script to \nmake tables, indexes, sequences etc for my app and import about 20MB of data.\n\nWhen I had this installed on a single SATA drive running from the PE1800's \non-board SATA interface, this operation took anywhere from 65-80 seconds.\n\nWith my new RAID card and drives, this operation took 272 seconds!?\n\nI did a quick search and found a few posts about how RAID 5 with Databases is a \npoor choice and one in particular about Postgres and how you could expect \nperformance to be halved with RAID 5 over a single drive or RAID 1 (until you \nget above 6 disks in your RAID 5 array). So, a poorly planned configuration on \nmy part.\n\nI scrubbed the RAID config, made two RAID 1 containers (also read about how \nmoving database logs to a different partition than the database data is optimal \nfor speed and reliability). I installed the OS on the first RAID 1 volume, the \nPostgresql apps and data on the other, and used Junction from sysinternals to \nput the pg_xlogs back on the OS partition (does Postgresql have an easier way \nto do this on Windows?).\n\nWell, things didn't improve noticeably - 265 seconds.\n\nNext step, turn on the 3ware RAID card's write cache (even though I have no \nBattery Backup Unit on the RAID card and am warned about possible data loss in \nthe event of power loss).\n\nThis helped - down to 172 seconds.\n\nIs this loss in performance just the normal overhead involved when adding a \nraid card - writes now having to go to two drives instead of one? Or maybe is \nthe SATA interface on the Dell 1800s motherboard faster than the interface on \nthe 3ware raid card (SATA II ?).\n\nThanks for any help you guidance you can provide.\n", "msg_date": "Fri, 28 Apr 2006 08:37:57 -0700", "msg_from": "Erik Myllymaki <[email protected]>", "msg_from_op": true, "msg_subject": "hardare config question" }, { "msg_contents": "\nOn Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n\n> When I had this installed on a single SATA drive running from the \n> PE1800's on-board SATA interface, this operation took anywhere from \n> 65-80 seconds.\n>\n> With my new RAID card and drives, this operation took 272 seconds!?\n\nswitch it to RAID10 and re-try your experiment. if that is fast, \nthen you know your raid controller does bad RAID5.\n\nanyhow, I have in one server (our office mail server and part-time \ndevelopment testing box) an adaptec SATA RAID from dell. 
it is \nconfigured for RAID5 and does well for normal office stuff, but when \nwe do postgres tests on it, it just is plain old awful.\n\nbut I have some LSI based cards on which RAID5 is plenty fast and \nsuitable for the DB, but those are SCSI.\n\nFor what it is worth, the Dell PE1850 internal PERC4/Si card is \nwicked fast when hooked up with a pair of U320 SCSI drives.\n\n\n", "msg_date": "Fri, 28 Apr 2006 13:36:45 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "It's also possible that the single SATA drive you were testing (or the\ncontroller it was attached to) is lying about fsync and performing write\ncaching behind your back, whereas your new controller and drives are\nnot.\n\nYou'll find a lot more info on the archives of this list about it, but\nbasically if your application is committing a whole lot of small\ntransactions, then it will run fast (but not safely) on a drive which\nlies about fsync, but slower on a better disk subsystem which doesn't\nlie about fsync.\n\nTry running a test with fsync=off with your new equipment and if it\nsuddenly starts running faster, then you know that's the problem.\nYou'll either have a choice of losing all of your data the next time the\nsystem shuts down uncleanly but being fast, or of running slow, or of\nfixing the applications to use chunkier transactions.\n\n-- Mark\n\nOn Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n> \n> > When I had this installed on a single SATA drive running from the \n> > PE1800's on-board SATA interface, this operation took anywhere from \n> > 65-80 seconds.\n> >\n> > With my new RAID card and drives, this operation took 272 seconds!?\n> \n> switch it to RAID10 and re-try your experiment. if that is fast, \n> then you know your raid controller does bad RAID5.\n> \n> anyhow, I have in one server (our office mail server and part-time \n> development testing box) an adaptec SATA RAID from dell. it is \n> configured for RAID5 and does well for normal office stuff, but when \n> we do postgres tests on it, it just is plain old awful.\n> \n> but I have some LSI based cards on which RAID5 is plenty fast and \n> suitable for the DB, but those are SCSI.\n> \n> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n> wicked fast when hooked up with a pair of U320 SCSI drives.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Fri, 28 Apr 2006 10:47:21 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "Erik,\n\nI think you have a mismatch in your Linux driver and firmware for your 3Ware\ncard. Download a matched Linux driver and firmware from www.3ware.com and\nyour problems should disappear.\n\n- Luke\n\n\nOn 4/28/06 8:37 AM, \"Erik Myllymaki\" <[email protected]> wrote:\n\n> This is a question that I also posted on Dell hardware forums, and I realize\n> it\n> probably belongs there more than here. But I am thinking someone might have\n> some anecdotal information that could help me and this post may help someone\n> else down the road.\n> \n> My PowerEdge 1800 (dual 3ghz Xeon, 3GB ram) came with a single SATA drive. 
I\n> installed and developed a modest database application (Postgresql, Windows\n> 2003\n> Server, Python) on it and was pretty impressed with the performance.\n> \n> I wanted to add some more space, speed and reliability so I bought a used\n> 3ware\n> 9500s SATA raid card. I bought three more of the same drives (Seagate\n> ST3808110as) and configured them in RAID 5 (3 in RAID one as hot spare). I\n> reinstalled the OS and software (my efforts to ghost were not\n> fruitful...another story), and the first thing I did was run a sql script to\n> make tables, indexes, sequences etc for my app and import about 20MB of data.\n> \n> When I had this installed on a single SATA drive running from the PE1800's\n> on-board SATA interface, this operation took anywhere from 65-80 seconds.\n> \n> With my new RAID card and drives, this operation took 272 seconds!?\n> \n> I did a quick search and found a few posts about how RAID 5 with Databases is\n> a\n> poor choice and one in particular about Postgres and how you could expect\n> performance to be halved with RAID 5 over a single drive or RAID 1 (until you\n> get above 6 disks in your RAID 5 array). So, a poorly planned configuration on\n> my part.\n> \n> I scrubbed the RAID config, made two RAID 1 containers (also read about how\n> moving database logs to a different partition than the database data is\n> optimal\n> for speed and reliability). I installed the OS on the first RAID 1 volume, the\n> Postgresql apps and data on the other, and used Junction from sysinternals to\n> put the pg_xlogs back on the OS partition (does Postgresql have an easier way\n> to do this on Windows?).\n> \n> Well, things didn't improve noticeably - 265 seconds.\n> \n> Next step, turn on the 3ware RAID card's write cache (even though I have no\n> Battery Backup Unit on the RAID card and am warned about possible data loss in\n> the event of power loss).\n> \n> This helped - down to 172 seconds.\n> \n> Is this loss in performance just the normal overhead involved when adding a\n> raid card - writes now having to go to two drives instead of one? Or maybe is\n> the SATA interface on the Dell 1800s motherboard faster than the interface on\n> the 3ware raid card (SATA II ?).\n> \n> Thanks for any help you guidance you can provide.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n\n\n\n\nRe: [PERFORM] hardare config question\n\n\nErik,\n\nI think you have a mismatch in your Linux driver and firmware for your 3Ware card.  Download a matched Linux driver and firmware from www.3ware.com and your problems should disappear.\n\n- Luke\n\n\nOn 4/28/06 8:37 AM, \"Erik Myllymaki\" <[email protected]> wrote:\n\nThis is a question that I also posted on Dell hardware forums, and I realize it\nprobably belongs there more than here. But I am thinking someone might have\nsome anecdotal information that could help me and this post may help someone\nelse down the road.\n\nMy PowerEdge 1800 (dual 3ghz Xeon, 3GB ram) came with a single SATA drive. I\ninstalled and developed a modest database application (Postgresql, Windows 2003\nServer, Python) on it and was pretty impressed with the performance.\n\nI wanted to add some more space, speed and reliability so I bought a used 3ware\n9500s SATA raid card. I bought three more of the same drives (Seagate\nST3808110as) and configured them in RAID 5 (3 in RAID one as hot spare). 
I\nreinstalled the OS and software (my efforts to ghost were not\nfruitful...another story), and the first thing I did was run a sql script to\nmake tables, indexes, sequences etc for my app and import about 20MB of data.\n\nWhen I had this installed on a single SATA drive running from the PE1800's\non-board SATA interface, this operation took anywhere from 65-80 seconds.\n\nWith my new RAID card and drives, this operation took 272 seconds!?\n\nI did a quick search and found a few posts about how RAID 5 with Databases is a\npoor choice and one in particular about Postgres and how you could expect\nperformance to be halved with RAID 5 over a single drive or RAID 1 (until you\nget above 6 disks in your RAID 5 array). So, a poorly planned configuration on\nmy part.\n\nI scrubbed the RAID config, made two RAID 1 containers (also read about how\nmoving database logs to a different partition than the database data is optimal\nfor speed and reliability). I installed the OS on the first RAID 1 volume, the\nPostgresql apps and data on the other, and used Junction from sysinternals to\nput the pg_xlogs back on the OS partition (does Postgresql have an easier way\nto do this on Windows?).\n\nWell, things didn't improve noticeably - 265 seconds.\n\nNext step, turn on the 3ware RAID card's write cache (even though I have no\nBattery Backup Unit on the RAID card and am warned about possible data loss in\nthe event of power loss).\n\nThis helped - down to  172 seconds.\n\nIs this loss in performance just the normal overhead involved when adding a\nraid card - writes now having to go to two drives instead of one? Or maybe is\nthe SATA interface on the Dell 1800s motherboard faster than the interface on\nthe 3ware raid card (SATA II ?).\n\nThanks for any help you guidance you can provide.\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n               http://www.postgresql.org/docs/faq", "msg_date": "Fri, 28 Apr 2006 22:07:29 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "I have been in discussion with 3ware support and after adjusting some settings, \nthe 3ware card in RAID 1 gets better performance than the single drive. I guess \nthis had everything to do with the write (and maybe read?) 
cache.\n\nOf course now i am in a dangerous situation - using volatile write cache \nwithout a BBU.\n\nIf I were to use a UPS to ensure a soft shutdown in the event of power loss, am \nI somewhat as safe as if I were to purchase a BBU for this RAID card?\n\n\n\nThanks.\n\nMark Lewis wrote:\n> It's also possible that the single SATA drive you were testing (or the\n> controller it was attached to) is lying about fsync and performing write\n> caching behind your back, whereas your new controller and drives are\n> not.\n> \n> You'll find a lot more info on the archives of this list about it, but\n> basically if your application is committing a whole lot of small\n> transactions, then it will run fast (but not safely) on a drive which\n> lies about fsync, but slower on a better disk subsystem which doesn't\n> lie about fsync.\n> \n> Try running a test with fsync=off with your new equipment and if it\n> suddenly starts running faster, then you know that's the problem.\n> You'll either have a choice of losing all of your data the next time the\n> system shuts down uncleanly but being fast, or of running slow, or of\n> fixing the applications to use chunkier transactions.\n> \n> -- Mark\n> \n> On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n>> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n>>\n>>> When I had this installed on a single SATA drive running from the \n>>> PE1800's on-board SATA interface, this operation took anywhere from \n>>> 65-80 seconds.\n>>>\n>>> With my new RAID card and drives, this operation took 272 seconds!?\n>> switch it to RAID10 and re-try your experiment. if that is fast, \n>> then you know your raid controller does bad RAID5.\n>>\n>> anyhow, I have in one server (our office mail server and part-time \n>> development testing box) an adaptec SATA RAID from dell. it is \n>> configured for RAID5 and does well for normal office stuff, but when \n>> we do postgres tests on it, it just is plain old awful.\n>>\n>> but I have some LSI based cards on which RAID5 is plenty fast and \n>> suitable for the DB, but those are SCSI.\n>>\n>> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n>> wicked fast when hooked up with a pair of U320 SCSI drives.\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Mon, 01 May 2006 10:58:23 -0700", "msg_from": "Erik Myllymaki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hardare config question" }, { "msg_contents": "A UPS will make it less likely that the system will reboot and destroy\nyour database due to a power failure, but there are other causes for a\nsystem reboot.\n\nWith a BBU, the only component that can fail and cause catastrophic data\nloss is the RAID itself.\n\nWith a UPS, you are additionally vulnerable to OS crashes, failures in\nnon-RAID hardware, UPS failures, or anything else that would necessitate\na hard reboot. 
\n\nSo a UPS is a decent replacement for a BBU only if you trust your app\nserver/OS more than you value your data.\n\n-- Mark Lewis\n\n\nOn Mon, 2006-05-01 at 10:58 -0700, Erik Myllymaki wrote:\n> I have been in discussion with 3ware support and after adjusting some settings, \n> the 3ware card in RAID 1 gets better performance than the single drive. I guess \n> this had everything to do with the write (and maybe read?) cache.\n> \n> Of course now i am in a dangerous situation - using volatile write cache \n> without a BBU.\n> \n> If I were to use a UPS to ensure a soft shutdown in the event of power loss, am \n> I somewhat as safe as if I were to purchase a BBU for this RAID card?\n> \n> \n> \n> Thanks.\n> \n> Mark Lewis wrote:\n> > It's also possible that the single SATA drive you were testing (or the\n> > controller it was attached to) is lying about fsync and performing write\n> > caching behind your back, whereas your new controller and drives are\n> > not.\n> > \n> > You'll find a lot more info on the archives of this list about it, but\n> > basically if your application is committing a whole lot of small\n> > transactions, then it will run fast (but not safely) on a drive which\n> > lies about fsync, but slower on a better disk subsystem which doesn't\n> > lie about fsync.\n> > \n> > Try running a test with fsync=off with your new equipment and if it\n> > suddenly starts running faster, then you know that's the problem.\n> > You'll either have a choice of losing all of your data the next time the\n> > system shuts down uncleanly but being fast, or of running slow, or of\n> > fixing the applications to use chunkier transactions.\n> > \n> > -- Mark\n> > \n> > On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n> >> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n> >>\n> >>> When I had this installed on a single SATA drive running from the \n> >>> PE1800's on-board SATA interface, this operation took anywhere from \n> >>> 65-80 seconds.\n> >>>\n> >>> With my new RAID card and drives, this operation took 272 seconds!?\n> >> switch it to RAID10 and re-try your experiment. if that is fast, \n> >> then you know your raid controller does bad RAID5.\n> >>\n> >> anyhow, I have in one server (our office mail server and part-time \n> >> development testing box) an adaptec SATA RAID from dell. it is \n> >> configured for RAID5 and does well for normal office stuff, but when \n> >> we do postgres tests on it, it just is plain old awful.\n> >>\n> >> but I have some LSI based cards on which RAID5 is plenty fast and \n> >> suitable for the DB, but those are SCSI.\n> >>\n> >> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n> >> wicked fast when hooked up with a pair of U320 SCSI drives.\n> >>\n> >>\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 3: Have you checked our extensive FAQ?\n> >>\n> >> http://www.postgresql.org/docs/faq\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n", "msg_date": "Mon, 01 May 2006 11:15:53 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "UPS does not protect against the tech behind the rack unplugging the \npower cable, or an accidental power cycle from exercising the wrong \nswitch. 
:) Both are probably more common causes of failure than a total \npower outage.\n\nErik Myllymaki wrote:\n> I have been in discussion with 3ware support and after adjusting some \n> settings, the 3ware card in RAID 1 gets better performance than the \n> single drive. I guess this had everything to do with the write (and \n> maybe read?) cache.\n>\n> Of course now i am in a dangerous situation - using volatile write \n> cache without a BBU.\n>\n> If I were to use a UPS to ensure a soft shutdown in the event of power \n> loss, am I somewhat as safe as if I were to purchase a BBU for this \n> RAID card?\n>\n>\n>\n> Thanks.\n>\n> Mark Lewis wrote:\n>> It's also possible that the single SATA drive you were testing (or the\n>> controller it was attached to) is lying about fsync and performing write\n>> caching behind your back, whereas your new controller and drives are\n>> not.\n>>\n>> You'll find a lot more info on the archives of this list about it, but\n>> basically if your application is committing a whole lot of small\n>> transactions, then it will run fast (but not safely) on a drive which\n>> lies about fsync, but slower on a better disk subsystem which doesn't\n>> lie about fsync.\n>>\n>> Try running a test with fsync=off with your new equipment and if it\n>> suddenly starts running faster, then you know that's the problem.\n>> You'll either have a choice of losing all of your data the next time the\n>> system shuts down uncleanly but being fast, or of running slow, or of\n>> fixing the applications to use chunkier transactions.\n>>\n>> -- Mark\n>>\n>> On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n>>> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n>>>\n>>>> When I had this installed on a single SATA drive running from the \n>>>> PE1800's on-board SATA interface, this operation took anywhere \n>>>> from 65-80 seconds.\n>>>>\n>>>> With my new RAID card and drives, this operation took 272 seconds!?\n>>> switch it to RAID10 and re-try your experiment. if that is fast, \n>>> then you know your raid controller does bad RAID5.\n>>>\n>>> anyhow, I have in one server (our office mail server and part-time \n>>> development testing box) an adaptec SATA RAID from dell. 
it is \n>>> configured for RAID5 and does well for normal office stuff, but \n>>> when we do postgres tests on it, it just is plain old awful.\n>>>\n>>> but I have some LSI based cards on which RAID5 is plenty fast and \n>>> suitable for the DB, but those are SCSI.\n>>>\n>>> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n>>> wicked fast when hooked up with a pair of U320 SCSI drives.\n>>>\n>>>\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 3: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/docs/faq\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n", "msg_date": "Mon, 01 May 2006 11:22:41 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "\nOn May 1, 2006, at 1:58 PM, Erik Myllymaki wrote:\n\n> Of course now i am in a dangerous situation - using volatile write \n> cache without a BBU.\n>\n\nIt should be against the law to make RAID cards with caches that are \nnot battery backed.\n\n> If I were to use a UPS to ensure a soft shutdown in the event of \n> power loss, am I somewhat as safe as if I were to purchase a BBU \n> for this RAID card?\n\nno. not at all.\n\n", "msg_date": "Mon, 1 May 2006 14:29:12 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "good points, thanks.\n\nTom Arthurs wrote:\n> UPS does not protect against the tech behind the rack unplugging the \n> power cable, or an accidental power cycle from exercising the wrong \n> switch. :) Both are probably more common causes of failure than a total \n> power outage.\n> \n> Erik Myllymaki wrote:\n>> I have been in discussion with 3ware support and after adjusting some \n>> settings, the 3ware card in RAID 1 gets better performance than the \n>> single drive. I guess this had everything to do with the write (and \n>> maybe read?) 
cache.\n>>\n>> Of course now i am in a dangerous situation - using volatile write \n>> cache without a BBU.\n>>\n>> If I were to use a UPS to ensure a soft shutdown in the event of power \n>> loss, am I somewhat as safe as if I were to purchase a BBU for this \n>> RAID card?\n>>\n>>\n>>\n>> Thanks.\n>>\n>> Mark Lewis wrote:\n>>> It's also possible that the single SATA drive you were testing (or the\n>>> controller it was attached to) is lying about fsync and performing write\n>>> caching behind your back, whereas your new controller and drives are\n>>> not.\n>>>\n>>> You'll find a lot more info on the archives of this list about it, but\n>>> basically if your application is committing a whole lot of small\n>>> transactions, then it will run fast (but not safely) on a drive which\n>>> lies about fsync, but slower on a better disk subsystem which doesn't\n>>> lie about fsync.\n>>>\n>>> Try running a test with fsync=off with your new equipment and if it\n>>> suddenly starts running faster, then you know that's the problem.\n>>> You'll either have a choice of losing all of your data the next time the\n>>> system shuts down uncleanly but being fast, or of running slow, or of\n>>> fixing the applications to use chunkier transactions.\n>>>\n>>> -- Mark\n>>>\n>>> On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n>>>> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n>>>>\n>>>>> When I had this installed on a single SATA drive running from the \n>>>>> PE1800's on-board SATA interface, this operation took anywhere \n>>>>> from 65-80 seconds.\n>>>>>\n>>>>> With my new RAID card and drives, this operation took 272 seconds!?\n>>>> switch it to RAID10 and re-try your experiment. if that is fast, \n>>>> then you know your raid controller does bad RAID5.\n>>>>\n>>>> anyhow, I have in one server (our office mail server and part-time \n>>>> development testing box) an adaptec SATA RAID from dell. it is \n>>>> configured for RAID5 and does well for normal office stuff, but \n>>>> when we do postgres tests on it, it just is plain old awful.\n>>>>\n>>>> but I have some LSI based cards on which RAID5 is plenty fast and \n>>>> suitable for the DB, but those are SCSI.\n>>>>\n>>>> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n>>>> wicked fast when hooked up with a pair of U320 SCSI drives.\n>>>>\n>>>>\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>\n>>>> http://www.postgresql.org/docs/faq\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>> choose an index scan if your joining column's datatypes do not\n>>> match\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n", "msg_date": "Mon, 01 May 2006 11:43:20 -0700", "msg_from": "Erik Myllymaki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hardare config question" }, { "msg_contents": "We use the 3Ware BBUs and they¹re very nice, they self monitor and let you\nknow about their capacity if it¹s a problem.\n\n- Luke\n\n\nOn 5/1/06 11:43 AM, \"Erik Myllymaki\" <[email protected]> wrote:\n\n> good points, thanks.\n> \n> Tom Arthurs wrote:\n>> > UPS does not protect against the tech behind the rack unplugging the\n>> > power cable, or an accidental power cycle from exercising the wrong\n>> > switch. 
:) Both are probably more common causes of failure than a total\n>> > power outage.\n>> >\n>> > Erik Myllymaki wrote:\n>>> >> I have been in discussion with 3ware support and after adjusting some\n>>> >> settings, the 3ware card in RAID 1 gets better performance than the\n>>> >> single drive. I guess this had everything to do with the write (and\n>>> >> maybe read?) cache.\n>>> >>\n>>> >> Of course now i am in a dangerous situation - using volatile write\n>>> >> cache without a BBU.\n>>> >>\n>>> >> If I were to use a UPS to ensure a soft shutdown in the event of power\n>>> >> loss, am I somewhat as safe as if I were to purchase a BBU for this\n>>> >> RAID card?\n>>> >>\n>>> >>\n>>> >>\n>>> >> Thanks.\n>>> >>\n>>> >> Mark Lewis wrote:\n>>>> >>> It's also possible that the single SATA drive you were testing (or the\n>>>> >>> controller it was attached to) is lying about fsync and performing >>>>\nwrite\n>>>> >>> caching behind your back, whereas your new controller and drives are\n>>>> >>> not.\n>>>> >>>\n>>>> >>> You'll find a lot more info on the archives of this list about it, but\n>>>> >>> basically if your application is committing a whole lot of small\n>>>> >>> transactions, then it will run fast (but not safely) on a drive which\n>>>> >>> lies about fsync, but slower on a better disk subsystem which doesn't\n>>>> >>> lie about fsync.\n>>>> >>>\n>>>> >>> Try running a test with fsync=off with your new equipment and if it\n>>>> >>> suddenly starts running faster, then you know that's the problem.\n>>>> >>> You'll either have a choice of losing all of your data the next time\nthe\n>>>> >>> system shuts down uncleanly but being fast, or of running slow, or of\n>>>> >>> fixing the applications to use chunkier transactions.\n>>>> >>>\n>>>> >>> -- Mark\n>>>> >>>\n>>>> >>> On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n>>>>> >>>> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n>>>>> >>>>\n>>>>>> >>>>> When I had this installed on a single SATA drive running from the\n>>>>>> >>>>> PE1800's on-board SATA interface, this operation took anywhere\n>>>>>> >>>>> from 65-80 seconds.\n>>>>>> >>>>>\n>>>>>> >>>>> With my new RAID card and drives, this operation took 272 seconds!?\n>>>>> >>>> switch it to RAID10 and re-try your experiment. if that is fast,\n>>>>> >>>> then you know your raid controller does bad RAID5.\n>>>>> >>>>\n>>>>> >>>> anyhow, I have in one server (our office mail server and part-time\n>>>>> >>>> development testing box) an adaptec SATA RAID from dell. 
it is\n>>>>> >>>> configured for RAID5 and does well for normal office stuff, but\n>>>>> >>>> when we do postgres tests on it, it just is plain old awful.\n>>>>> >>>>\n>>>>> >>>> but I have some LSI based cards on which RAID5 is plenty fast and\n>>>>> >>>> suitable for the DB, but those are SCSI.\n>>>>> >>>>\n>>>>> >>>> For what it is worth, the Dell PE1850 internal PERC4/Si card is\n>>>>> >>>> wicked fast when hooked up with a pair of U320 SCSI drives.\n>>>>> >>>>\n>>>>> >>>>\n>>>>> >>>>\n>>>>> >>>> ---------------------------(end of\n>>>>> >>>> broadcast)---------------------------\n>>>>> >>>> TIP 3: Have you checked our extensive FAQ?\n>>>>> >>>>\n>>>>> >>>> http://www.postgresql.org/docs/faq\n>>>> >>>\n>>>> >>> ---------------------------(end of\n>>>> broadcast)---------------------------\n>>>> >>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>>> >>> choose an index scan if your joining column's datatypes do not\n>>>> >>> match\n>>> >>\n>>> >> ---------------------------(end of broadcast)---------------------------\n>>> >> TIP 6: explain analyze is your friend\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n> \n\n\n\n\n\nRe: [PERFORM] hardare config question\n\n\nWe use the 3Ware BBUs and they’re very nice, they self monitor and let you know about their capacity if it’s a problem.\n\n- Luke\n\n\nOn 5/1/06 11:43 AM, \"Erik Myllymaki\" <[email protected]> wrote:\n\ngood points, thanks.\n\nTom Arthurs wrote:\n> UPS does not protect against the tech behind the rack unplugging the\n> power cable, or an accidental power cycle from exercising the wrong\n> switch. :)  Both are probably more common causes of failure than a total\n> power outage.\n>\n> Erik Myllymaki wrote:\n>> I have been in discussion with 3ware support and after adjusting some\n>> settings, the 3ware card in RAID 1 gets better performance than the\n>> single drive. I guess this had everything to do with the write (and\n>> maybe read?) 
cache.\n>>\n>> Of course now i am in a dangerous situation - using volatile write\n>> cache without a BBU.\n>>\n>> If I were to use a UPS to ensure a soft shutdown in the event of power\n>> loss, am I somewhat as safe as if I were to purchase a BBU for this\n>> RAID card?\n>>\n>>\n>>\n>> Thanks.\n>>\n>> Mark Lewis wrote:\n>>> It's also possible that the single SATA drive you were testing (or the\n>>> controller it was attached to) is lying about fsync and performing write\n>>> caching behind your back, whereas your new controller and drives are\n>>> not.\n>>>\n>>> You'll find a lot more info on the archives of this list about it, but\n>>> basically if your application is committing a whole lot of small\n>>> transactions, then it will run fast (but not safely) on a drive which\n>>> lies about fsync, but slower on a better disk subsystem which doesn't\n>>> lie about fsync.\n>>>\n>>> Try running a test with fsync=off with your new equipment and if it\n>>> suddenly starts running faster, then you know that's the problem.\n>>> You'll either have a choice of losing all of your data the next time the\n>>> system shuts down uncleanly but being fast, or of running slow, or of\n>>> fixing the applications to use chunkier transactions.\n>>>\n>>> -- Mark\n>>>\n>>> On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:\n>>>> On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:\n>>>>\n>>>>> When I had this installed on a single SATA drive running from the \n>>>>> PE1800's on-board SATA interface, this operation took anywhere\n>>>>> from  65-80 seconds.\n>>>>>\n>>>>> With my new RAID card and drives, this operation took 272 seconds!?\n>>>> switch it to RAID10 and re-try your experiment.  if that is fast, \n>>>> then you know your raid controller does bad RAID5.\n>>>>\n>>>> anyhow, I have in one server (our office mail server and part-time \n>>>> development testing box) an adaptec SATA RAID from dell.  it is \n>>>> configured for RAID5 and does well for normal office stuff, but\n>>>> when  we do postgres tests on it, it just is plain old awful.\n>>>>\n>>>> but I have some LSI based cards on which RAID5 is plenty fast and \n>>>> suitable for the DB, but those are SCSI.\n>>>>\n>>>> For what it is worth, the Dell PE1850 internal PERC4/Si card is \n>>>> wicked fast when hooked up with a pair of U320 SCSI drives.\n>>>>\n>>>>\n>>>>\n>>>> ---------------------------(end of\n>>>> broadcast)---------------------------\n>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>\n>>>>                http://www.postgresql.org/docs/faq\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>>        choose an index scan if your joining column's datatypes do not\n>>>        match\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend", "msg_date": "Mon, 01 May 2006 11:50:37 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "On Mon, 2006-05-01 at 13:22, Tom Arthurs wrote:\n> UPS does not protect against the tech behind the rack unplugging the \n> power cable, or an accidental power cycle from exercising the wrong \n> switch. 
:) Both are probably more common causes of failure than a total \n> power outage.\n> \n> Erik Myllymaki wrote:\n> > I have been in discussion with 3ware support and after adjusting some \n> > settings, the 3ware card in RAID 1 gets better performance than the \n> > single drive. I guess this had everything to do with the write (and \n> > maybe read?) cache.\n> >\n> > Of course now i am in a dangerous situation - using volatile write \n> > cache without a BBU.\n> >\n> > If I were to use a UPS to ensure a soft shutdown in the event of power \n> > loss, am I somewhat as safe as if I were to purchase a BBU for this \n> > RAID card?\n\nNor does it prevent an electrician from dropping a tiny piece of wire\ninto a power conditioner, causing a feedback that blows the other two\npower conditioners, all three industrial UPSes, and the switch that\nallows the Diesal generator to take over.\n\nWhen that happened to me, I had the only database server in the company\nto come back up 100% in tact. You can guess by now I also had the only\ndatabase server with battery backed cache...\n", "msg_date": "Mon, 01 May 2006 13:51:55 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardare config question" }, { "msg_contents": "guess who just bought a 3ware BBU on ebay...\n\nThanks for all the posts, consider me educated!\n\n(on the importance of BBU on RAID controllers, anyway)\n\n:)\n\n\nScott Marlowe wrote:\n> On Mon, 2006-05-01 at 13:22, Tom Arthurs wrote:\n>> UPS does not protect against the tech behind the rack unplugging the \n>> power cable, or an accidental power cycle from exercising the wrong \n>> switch. :) Both are probably more common causes of failure than a total \n>> power outage.\n>>\n>> Erik Myllymaki wrote:\n>>> I have been in discussion with 3ware support and after adjusting some \n>>> settings, the 3ware card in RAID 1 gets better performance than the \n>>> single drive. I guess this had everything to do with the write (and \n>>> maybe read?) cache.\n>>>\n>>> Of course now i am in a dangerous situation - using volatile write \n>>> cache without a BBU.\n>>>\n>>> If I were to use a UPS to ensure a soft shutdown in the event of power \n>>> loss, am I somewhat as safe as if I were to purchase a BBU for this \n>>> RAID card?\n> \n> Nor does it prevent an electrician from dropping a tiny piece of wire\n> into a power conditioner, causing a feedback that blows the other two\n> power conditioners, all three industrial UPSes, and the switch that\n> allows the Diesal generator to take over.\n> \n> When that happened to me, I had the only database server in the company\n> to come back up 100% in tact. You can guess by now I also had the only\n> database server with battery backed cache...\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Mon, 01 May 2006 12:40:22 -0700", "msg_from": "Erik Myllymaki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hardare config question" } ]
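A minimal sketch of the fsync test Mark Lewis suggests earlier in this thread, for diagnosis only (running with fsync off risks losing committed data on an unclean shutdown):

    SHOW fsync;   -- confirm the current setting from psql

    -- then set  fsync = off  in postgresql.conf, reload (or restart) the server,
    -- rerun the same import script, and compare timings; if the result suddenly
    -- matches the old single-drive numbers, that drive was almost certainly
    -- write-caching behind fsync's back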
[ { "msg_contents": "The best of all worlds is to use a HW RAID card with battery backed cache.\n\nThen you can have both high performance and high reliability.\n\nBenches suggest that the best such cards currently are the Areca cards which support up to 2GB of battery backed cache.\n\nRon\n\n-----Original Message-----\n>From: Mark Lewis <[email protected]>\n>Sent: Apr 28, 2006 1:47 PM\n>To: Vivek Khera <[email protected]>\n>Cc: Pgsql performance <[email protected]>\n>Subject: Re: [PERFORM] hardare config question\n>\n>It's also possible that the single SATA drive you were testing (or the\n>controller it was attached to) is lying about fsync and performing write\n>caching behind your back, whereas your new controller and drives are\n>not.\n>\n>You'll find a lot more info on the archives of this list about it, but\n>basically if your application is committing a whole lot of small\n>transactions, then it will run fast (but not safely) on a drive which\n>lies about fsync, but slower on a better disk subsystem which doesn't\n>lie about fsync.\n>\n>Try running a test with fsync=off with your new equipment and if it\n>suddenly starts running faster, then you know that's the problem.\n>You'll either have a choice of losing all of your data the next time the\n>system shuts down uncleanly but being fast, or of running slow, or of\n>fixing the applications to use chunkier transactions.\n", "msg_date": "Fri, 28 Apr 2006 14:19:23 -0400 (GMT-04:00)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hardare config question" } ]
[ { "msg_contents": "Hello,\n\nWe are currently developing a web application and have the webserver and \nPostgreSQL with our dev db running on a machine with these specs:\n\nWin 2003 standard\nAMD Athlon XP 3000 / 2.1 GHZ\n2 Gig ram\n120 gig SATA HD\nPostgreSQL 8.1.0\nDefault pgsql configuration + shared buffers = 30,000\n\nThe performance of postgresql and our web application is good on that \nmachine, but we decided to build a dedicated database server for our \nproduction database that scales better and that we can also use for internal \napplications (CRM and so on).\n\nTo make a long story short, we built a machine with these specs:\n\nWindows 2003 Standard\nAMD Opteron 165 Dual Core / running at 2 GHZ\n2 gig ram\n2 x 150 Gig SATA II HDs in RAID 1 mode (mirror)\nPostgreSQL 8.1.3\nDefault pgsql configuration + shared buffers = 30,000\n\nPerfomance tests in windows show that the new box outperforms our dev \nmachine quite a bit in CPU, HD and memory performance.\n\nI did some EXPLAIN ANALYZE tests on queries and the results were very good, \n3 to 4 times faster than our dev db.\n\nHowever one thing is really throwing me off.\nWhen I open a table with 320,000 rows / 16 fields in the pgadmin tool (v \n1.4.0) it takes about 6 seconds on the dev server to display the result (all \nrows). During these 6 seconds the CPU usage jumps to 90%-100%.\n\nWhen I open the same table on the new, faster, better production box, it \ntakes 28 seconds!?! During these 28 seconds the CPU usage jumps to 30% for 1 \nsecond, and goes back to 0% for the remaining time while it is running the \nquery.\n\nWhat is going wrong here? It is my understanding that postgresql supports \nmulti-core / cpu environments out of the box, but to me it appears that it \nisn't utilizing any of the 2 cpu's available. 
I doubt that my server is that \nfast that it can perform this operation in idle mode.\n\nI played around with the shared buffers and tried out versions 8.1.3, 8.1.2, \n8.1.0 with the same result.\n\nHas anyone experienced this kind of behaviour before?\nHow representative is the query performance in pgadmin?\n\nI appreciate your ideas, comments and help.\n\nThanks,\nGreg \n\n\n", "msg_date": "Fri, 28 Apr 2006 15:29:58 -0500", "msg_from": "\"Gregory Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Issues on Opteron Dual Core" }, { "msg_contents": "Gregory Stewart wrote:\n> Hello,\n> \n> We are currently developing a web application and have the webserver and \n> PostgreSQL with our dev db running on a machine with these specs:\n> \n> Win 2003 standard\n> AMD Athlon XP 3000 / 2.1 GHZ\n> 2 Gig ram\n> 120 gig SATA HD\n> PostgreSQL 8.1.0\n> Default pgsql configuration + shared buffers = 30,000\n> \n> The performance of postgresql and our web application is good on that \n> machine, but we decided to build a dedicated database server for our \n> production database that scales better and that we can also use for internal \n> applications (CRM and so on).\n> \n> To make a long story short, we built a machine with these specs:\n> \n> Windows 2003 Standard\n> AMD Opteron 165 Dual Core / running at 2 GHZ\n> 2 gig ram\n> 2 x 150 Gig SATA II HDs in RAID 1 mode (mirror)\n> PostgreSQL 8.1.3\n> Default pgsql configuration + shared buffers = 30,000\n> \n> Perfomance tests in windows show that the new box outperforms our dev \n> machine quite a bit in CPU, HD and memory performance.\n> \n> I did some EXPLAIN ANALYZE tests on queries and the results were very good, \n> 3 to 4 times faster than our dev db.\n> \n> However one thing is really throwing me off.\n> When I open a table with 320,000 rows / 16 fields in the pgadmin tool (v \n> 1.4.0) it takes about 6 seconds on the dev server to display the result (all \n> rows). During these 6 seconds the CPU usage jumps to 90%-100%.\n> \n> When I open the same table on the new, faster, better production box, it \n> takes 28 seconds!?! During these 28 seconds the CPU usage jumps to 30% for 1 \n> second, and goes back to 0% for the remaining time while it is running the \n> query.\n> \n> What is going wrong here? It is my understanding that postgresql supports \n> multi-core / cpu environments out of the box, but to me it appears that it \n> isn't utilizing any of the 2 cpu's available. I doubt that my server is that \n> fast that it can perform this operation in idle mode.\n> \n> I played around with the shared buffers and tried out versions 8.1.3, 8.1.2, \n> 8.1.0 with the same result.\n> \n> Has anyone experienced this kind of behaviour before?\n> How representative is the query performance in pgadmin?\n> \n\nPgadmin can give misleading times for queries that return large result \nsets over a network, due to:\n\n1/ It takes time to format the (large) result set for display.\n2/ It has to count the time spent waiting for the (large) result set to \ntravel across the network.\n\nYou aren't running Pgadmin off the dev server are you? If not check your \nnetwork link to dev and prod - is one faster than the other? 
(etc).\n\nTo eliminate Pgadmin and the network as factors try wrapping your query \nin a 'SELECT count(*) FROM (your query here) AS a', and see if it \nchanges anything!\n\nCheers\n\nMark\n", "msg_date": "Sun, 30 Apr 2006 22:59:56 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> Pgadmin can give misleading times for queries that return large result \n> sets over a network, due to:\n> \n> 1/ It takes time to format the (large) result set for display.\n> 2/ It has to count the time spent waiting for the (large) result set to \n> travel across the network.\n> \n> You aren't running Pgadmin off the dev server are you? If not check your \n> network link to dev and prod - is one faster than the other? (etc).\n> \n> To eliminate Pgadmin and the network as factors try wrapping your query \n> in a 'SELECT count(*) FROM (your query here) AS a', and see if it \n> changes anything!\n\nFWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\nenvironment on w2k3. It runs fine for some period, and then CPU and\nthroughput drop to zero. So far I've been unable to track down any more\ninformation than that, other than the fact that I haven't been able to\nreproduce this on any single-CPU machines.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 15:28:37 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:\n> On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> > Pgadmin can give misleading times for queries that return large result\n> > sets over a network, due to:\n> >\n> > 1/ It takes time to format the (large) result set for display.\n> > 2/ It has to count the time spent waiting for the (large) result set to\n> > travel across the network.\n> >\n> > You aren't running Pgadmin off the dev server are you? If not check your\n> > network link to dev and prod - is one faster than the other? (etc).\n> >\n> > To eliminate Pgadmin and the network as factors try wrapping your query\n> > in a 'SELECT count(*) FROM (your query here) AS a', and see if it\n> > changes anything!\n>\n> FWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\n> environment on w2k3. It runs fine for some period, and then CPU and\n> throughput drop to zero. So far I've been unable to track down any more\n> information than that, other than the fact that I haven't been able to\n> reproduce this on any single-CPU machines.\n\nI have had previous correspondence about this with Magnus (search -general \nand -hackers). If you uninstall SP1 the problem goes away. We played a bit \nwith potential fixes but didn't find any.\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! 
Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Tue, 2 May 2006 18:49:48 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "On Tue, May 02, 2006 at 06:49:48PM -0400, Jan de Visser wrote:\n> On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:\n> > On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> > > Pgadmin can give misleading times for queries that return large result\n> > > sets over a network, due to:\n> > >\n> > > 1/ It takes time to format the (large) result set for display.\n> > > 2/ It has to count the time spent waiting for the (large) result set to\n> > > travel across the network.\n> > >\n> > > You aren't running Pgadmin off the dev server are you? If not check your\n> > > network link to dev and prod - is one faster than the other? (etc).\n> > >\n> > > To eliminate Pgadmin and the network as factors try wrapping your query\n> > > in a 'SELECT count(*) FROM (your query here) AS a', and see if it\n> > > changes anything!\n> >\n> > FWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\n> > environment on w2k3. It runs fine for some period, and then CPU and\n> > throughput drop to zero. So far I've been unable to track down any more\n> > information than that, other than the fact that I haven't been able to\n> > reproduce this on any single-CPU machines.\n> \n> I have had previous correspondence about this with Magnus (search -general \n> and -hackers). If you uninstall SP1 the problem goes away. We played a bit \n> with potential fixes but didn't find any.\n\nInteresting; does SP2 fix the problem? Anything we can do over here to\nhelp?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 17:56:28 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "Jim,\n\nHave you seen this happening only on W2k3? I am wondering if I should try\nout 2000 Pro or XP Pro.\nNot my first choice, but if it works...\n\n\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]]\nSent: Tuesday, May 02, 2006 3:29 PM\nTo: Mark Kirkwood\nCc: Gregory Stewart; [email protected]\nSubject: Re: [PERFORM] Performance Issues on Opteron Dual Core\n\n\nOn Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> Pgadmin can give misleading times for queries that return large result\n> sets over a network, due to:\n>\n> 1/ It takes time to format the (large) result set for display.\n> 2/ It has to count the time spent waiting for the (large) result set to\n> travel across the network.\n>\n> You aren't running Pgadmin off the dev server are you? If not check your\n> network link to dev and prod - is one faster than the other? (etc).\n>\n> To eliminate Pgadmin and the network as factors try wrapping your query\n> in a 'SELECT count(*) FROM (your query here) AS a', and see if it\n> changes anything!\n\nFWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\nenvironment on w2k3. It runs fine for some period, and then CPU and\nthroughput drop to zero. So far I've been unable to track down any more\ninformation than that, other than the fact that I haven't been able to\nreproduce this on any single-CPU machines.\n--\nJim C. 
Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n--\nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.1.385 / Virus Database: 268.5.1/328 - Release Date: 5/1/2006\n\n\n", "msg_date": "Tue, 2 May 2006 23:27:02 -0500", "msg_from": "\"Gregory Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "All the machines I've been able to replicate this on have been SMP w2k3\nmachines running SP1. I've been unable to replicate it on anything not\nrunning w2k3, but the only 'SMP' machine I've tested in that manner was\nan Intel with HT enabled. I now have an intel with HT and running w2k3\nsitting in my office, but I haven't had a chance to fire it up and try\nit yet. Once I test that machine it should help narrow down if this\nproblem exists with HT machines (which someone on -hackers mentioned\nthey had access to and could do testing with). If it does affect HT\nmachines then I suspect that this is not an issue for XP...\n\nOn Tue, May 02, 2006 at 11:27:02PM -0500, Gregory Stewart wrote:\n> Jim,\n> \n> Have you seen this happening only on W2k3? I am wondering if I should try\n> out 2000 Pro or XP Pro.\n> Not my first choice, but if it works...\n> \n> \n> \n> -----Original Message-----\n> From: Jim C. Nasby [mailto:[email protected]]\n> Sent: Tuesday, May 02, 2006 3:29 PM\n> To: Mark Kirkwood\n> Cc: Gregory Stewart; [email protected]\n> Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core\n> \n> \n> On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> > Pgadmin can give misleading times for queries that return large result\n> > sets over a network, due to:\n> >\n> > 1/ It takes time to format the (large) result set for display.\n> > 2/ It has to count the time spent waiting for the (large) result set to\n> > travel across the network.\n> >\n> > You aren't running Pgadmin off the dev server are you? If not check your\n> > network link to dev and prod - is one faster than the other? (etc).\n> >\n> > To eliminate Pgadmin and the network as factors try wrapping your query\n> > in a 'SELECT count(*) FROM (your query here) AS a', and see if it\n> > changes anything!\n> \n> FWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\n> environment on w2k3. It runs fine for some period, and then CPU and\n> throughput drop to zero. So far I've been unable to track down any more\n> information than that, other than the fact that I haven't been able to\n> reproduce this on any single-CPU machines.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n> \n> --\n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.385 / Virus Database: 268.5.1/328 - Release Date: 5/1/2006\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 4 May 2006 12:47:21 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "I installed Ubuntu 5.10 on the production server (64-Bit version), and sure\nenough the peformance is like I expected. Opening up that table (320,000\nrecords) takes 6 seconds, with CPU usage of one of the cores going up to\n90% - 100% for the 6 seconds.\nI assume only one core is being used per user / session / query?\n\nGregory\n\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]]\nSent: Thursday, May 04, 2006 12:47 PM\nTo: Gregory Stewart\nCc: Mark Kirkwood; [email protected]\nSubject: Re: [PERFORM] Performance Issues on Opteron Dual Core\n\n\nAll the machines I've been able to replicate this on have been SMP w2k3\nmachines running SP1. I've been unable to replicate it on anything not\nrunning w2k3, but the only 'SMP' machine I've tested in that manner was\nan Intel with HT enabled. I now have an intel with HT and running w2k3\nsitting in my office, but I haven't had a chance to fire it up and try\nit yet. Once I test that machine it should help narrow down if this\nproblem exists with HT machines (which someone on -hackers mentioned\nthey had access to and could do testing with). If it does affect HT\nmachines then I suspect that this is not an issue for XP...\n\nOn Tue, May 02, 2006 at 11:27:02PM -0500, Gregory Stewart wrote:\n> Jim,\n>\n> Have you seen this happening only on W2k3? I am wondering if I should try\n> out 2000 Pro or XP Pro.\n> Not my first choice, but if it works...\n>\n>\n>\n> -----Original Message-----\n> From: Jim C. Nasby [mailto:[email protected]]\n> Sent: Tuesday, May 02, 2006 3:29 PM\n> To: Mark Kirkwood\n> Cc: Gregory Stewart; [email protected]\n> Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core\n>\n>\n> On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:\n> > Pgadmin can give misleading times for queries that return large result\n> > sets over a network, due to:\n> >\n> > 1/ It takes time to format the (large) result set for display.\n> > 2/ It has to count the time spent waiting for the (large) result set to\n> > travel across the network.\n> >\n> > You aren't running Pgadmin off the dev server are you? If not check your\n> > network link to dev and prod - is one faster than the other? (etc).\n> >\n> > To eliminate Pgadmin and the network as factors try wrapping your query\n> > in a 'SELECT count(*) FROM (your query here) AS a', and see if it\n> > changes anything!\n>\n> FWIW, I've found problems running PostgreSQL on Windows in a multi-CPU\n> environment on w2k3. It runs fine for some period, and then CPU and\n> throughput drop to zero. So far I've been unable to track down any more\n> information than that, other than the fact that I haven't been able to\n> reproduce this on any single-CPU machines.\n> --\n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n>\n> --\n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.385 / Virus Database: 268.5.1/328 - Release Date: 5/1/2006\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n--\nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.1.392 / Virus Database: 268.5.3/331 - Release Date: 5/3/2006\n\n\n", "msg_date": "Thu, 4 May 2006 15:24:37 -0500", "msg_from": "\"Gregory Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issues on Opteron Dual Core" } ]
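A minimal sketch of the timing technique suggested earlier in this thread, with "mytable" standing in as a placeholder for the 320,000-row table being opened: the count(*) wrapper keeps all of the work on the server but returns a single row, and EXPLAIN ANALYZE reports the server's own execution time, so client-side rendering and network transfer drop out of the measurement entirely.

-- "mytable" is a placeholder; substitute the real table.
-- 1) Raw query: a GUI client's timing includes formatting and transferring every row.
SELECT * FROM mytable;

-- 2) Same scan on the server, but only one row travels back to the client.
SELECT count(*) FROM (SELECT * FROM mytable) AS a;

-- 3) Pure server-side execution time, independent of the client.
EXPLAIN ANALYZE SELECT * FROM mytable;

If the wrapped forms are fast while the raw query is slow from pgAdmin, the bottleneck is the client or the network link rather than the planner or the CPU scheduling issue discussed above.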
[ { "msg_contents": "\nI've been attempting to tweak performance on a FreeBSD 6 server I'm \nrunning that has both Postgresql 8.1 and Mysql 4.1 running \nsimultaneously.\n\nTo attempt to gauge performance I was directed to the super-smack \n(http://vegan.net/tony/supersmack/) benchmark.\n\nTesting gave results that showed mysql far outperforming postgresql \nin select and update tests, in factors of roughly 6-7x in terms of \nqueries per second (running stock configurations--I was able to \nimprove mysql performance a good amount with playing with the config \nfile settings, though I couldn't make any change in postgresql \nperformance. I DID increase shmmin shmmax and semmap during the \ncourse of testing).\n\nSo, my question is, before I do any further digging, is super-smack \nflawed? It's very possibile I'm doing something stupid with the \nexecution of the benchmark too, but as I said, I just wanted to see \nif anyone else had used super-smack, or had comments? I'm glad to \npost system specs, configs, etc, if anyone is interested.\n\nAlternatively, would anyone recommend any other benchmarks to run?\n\nthanks much,\nScott\n\nI also wanted to make clear I didn't want to turn this into a mysql \nvs postgresql discussion, as I much prefer postgresql.\n", "msg_date": "Mon, 1 May 2006 03:05:54 -0500", "msg_from": "Scott Sipe <[email protected]>", "msg_from_op": true, "msg_subject": "Super-smack?" }, { "msg_contents": "On Mon, May 01, 2006 at 03:05:54AM -0500, Scott Sipe wrote:\n> So, my question is, before I do any further digging, is super-smack \n> flawed?\n\nIt's sort of hard to say without looking at the source -- it certainly isn't\na benchmark I've heard of before, and it's also sort of hard to believe a\nbenchmark whose focus seems to be so thoroughly on one database (MySQL). The\nsite claims (about PostgreSQL) support that \"it looks like it works\";\ncertainly not a good start for fair benchmarking :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 1 May 2006 13:54:49 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack?" }, { "msg_contents": "On 01/05/06, Scott Sipe <[email protected]> wrote:\n> So, my question is, before I do any further digging, is super-smack\n> flawed?\n\nHmm, selects and updates of a 90k row table using the primary key, but\nno sign of a VACUUM ANALYZE...\n\nCheers, Steve Woodcock\n", "msg_date": "Mon, 1 May 2006 14:06:44 +0100", "msg_from": "\"Steve Woodcock\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack?" 
}, { "msg_contents": "\"Steve Woodcock\" <[email protected]> writes:\n> On 01/05/06, Scott Sipe <[email protected]> wrote:\n>> So, my question is, before I do any further digging, is super-smack\n>> flawed?\n\n> Hmm, selects and updates of a 90k row table using the primary key, but\n> no sign of a VACUUM ANALYZE...\n\nReasonably recent versions of PG should be able to do \"the right thing\"\nin the presence of a simple primary key, even without any prior ANALYZE;\nthe planner will see the unique index and make the right conclusions\nabout statistics.\n\nIf the test is doing a huge number of UPDATEs and no VACUUM anywhere,\nthen accumulation of dead rows would eventually hurt, but it's not clear\nfrom Scott's report if that's the issue.\n\nI'm inclined to think there's some more subtle bias in the testbed.\nI haven't dug into the code to look at how they are managing multiple\nconnections, but I wonder if say there's something causing the client to\nnot notice responses from the server right away when it's going through\nlibpq.\n\nFWIW, my own experiments with tests like this suggest that PG is at\nworst about 2x slower than mysql for trivial queries. If you'd reported\na result in that ballpark I'd have accepted it as probably real. 6x I\ndon't believe though ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2006 11:15:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack? " }, { "msg_contents": "I wrote:\n> FWIW, my own experiments with tests like this suggest that PG is at\n> worst about 2x slower than mysql for trivial queries. If you'd reported\n> a result in that ballpark I'd have accepted it as probably real. 6x I\n> don't believe though ...\n\nJust for amusement's sake, I tried compiling up super-smack on my own\nmachine, and got results roughly in line with what I would've expected.\n\nMachine: dual Xeon EM64T, forget the clock rate at the moment, running\nFedora Core 4 (kernel 2.6.15-1.1831_FC4smp); hyperthreading enabled\n\nPostgres: fairly recent CVS tip, no special build options except\n--enable-debug, no changes to default runtime configuration options\n\nMySQL: 5.0.18, current Red Hat RPMs, no changes to default configuration\n\nThe \"select\" test, with 1 and 10 clients:\n\n$ super-smack -d pg select-key.smack 1 10000\nQuery Barrel Report for client smacker1\nconnect: max=0ms min=-1ms avg= 3ms from 1 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 20000 0 0 3655.24\n$ super-smack -d pg select-key.smack 10 10000\nQuery Barrel Report for client smacker1\nconnect: max=54ms min=4ms avg= 12ms from 10 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 200000 0 0 7431.20\n\n$ super-smack -d mysql select-key.smack 1 10000\nQuery Barrel Report for client smacker1\nconnect: max=0ms min=-1ms avg= 0ms from 1 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 20000 0 0 6894.03\n$ super-smack -d mysql select-key.smack 10 10000\nQuery Barrel Report for client smacker1\nconnect: max=14ms min=0ms avg= 5ms from 10 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 200000 0 0 16798.05\n\nThe \"update\" test, with 1 and 10 clients:\n\n$ super-smack -d pg update-select.smack 1 10000\nQuery Barrel Report for client smacker\nconnect: max=0ms min=-1ms avg= 4ms from 1 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 10000 0 0 1027.49\nupdate_index 10000 0 0 1027.49\n$ super-smack -d pg update-select.smack 10 10000\nQuery Barrel 
Report for client smacker\nconnect: max=13ms min=5ms avg= 8ms from 10 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 100000 1 0 1020.96\nupdate_index 100000 28 0 1020.96\n\nThe above is with fsync on (though I think this machine's disk lies\nabout write complete so I'd not trust it as production). With fsync off,\n\n$ super-smack -d pg update-select.smack 1 10000\nQuery Barrel Report for client smacker\nconnect: max=0ms min=-1ms avg= 3ms from 1 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 10000 0 0 1478.25\nupdate_index 10000 0 0 1478.25\n$ super-smack -d pg update-select.smack 10 10000\nQuery Barrel Report for client smacker\nconnect: max=35ms min=5ms avg= 21ms from 10 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 100000 1 0 3067.68\nupdate_index 100000 1 0 3067.68\n\nversus mysql\n\n$ super-smack -d mysql update-select.smack 1 10000\nQuery Barrel Report for client smacker\nconnect: max=0ms min=-1ms avg= 0ms from 1 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 10000 0 0 4101.43\nupdate_index 10000 0 0 4101.43\n$ super-smack -d mysql update-select.smack 10 10000\nQuery Barrel Report for client smacker\nconnect: max=3ms min=0ms avg= 0ms from 10 clients\nQuery_type num_queries max_time min_time q_per_s\nselect_index 100000 1 0 5388.31\nupdate_index 100000 6 0 5388.31\n\nSince mysql is using myisam tables (ie not transaction safe), I think\nthe fairest comparison is to the fsync-off numbers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2006 13:43:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack? " }, { "msg_contents": "On Mon, May 01, 2006 at 01:54:49PM +0200, Steinar H. Gunderson wrote:\n> On Mon, May 01, 2006 at 03:05:54AM -0500, Scott Sipe wrote:\n> > So, my question is, before I do any further digging, is super-smack \n> > flawed?\n> \n> It's sort of hard to say without looking at the source -- it certainly isn't\n> a benchmark I've heard of before, and it's also sort of hard to believe a\n> benchmark whose focus seems to be so thoroughly on one database (MySQL). The\n> site claims (about PostgreSQL) support that \"it looks like it works\";\n> certainly not a good start for fair benchmarking :-)\n\nIf you want a more realistic test, try dbt2:\nhttp://sourceforge.net/projects/osdldbt\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 15:36:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack?" }, { "msg_contents": "\nIsn't Super Smack a breakfast cereal? :-)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I wrote:\n> > FWIW, my own experiments with tests like this suggest that PG is at\n> > worst about 2x slower than mysql for trivial queries. If you'd reported\n> > a result in that ballpark I'd have accepted it as probably real. 
6x I\n> > don't believe though ...\n> \n> Just for amusement's sake, I tried compiling up super-smack on my own\n> machine, and got results roughly in line with what I would've expected.\n> \n> Machine: dual Xeon EM64T, forget the clock rate at the moment, running\n> Fedora Core 4 (kernel 2.6.15-1.1831_FC4smp); hyperthreading enabled\n> \n> Postgres: fairly recent CVS tip, no special build options except\n> --enable-debug, no changes to default runtime configuration options\n> \n> MySQL: 5.0.18, current Red Hat RPMs, no changes to default configuration\n> \n> The \"select\" test, with 1 and 10 clients:\n> \n> $ super-smack -d pg select-key.smack 1 10000\n> Query Barrel Report for client smacker1\n> connect: max=0ms min=-1ms avg= 3ms from 1 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 20000 0 0 3655.24\n> $ super-smack -d pg select-key.smack 10 10000\n> Query Barrel Report for client smacker1\n> connect: max=54ms min=4ms avg= 12ms from 10 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 200000 0 0 7431.20\n> \n> $ super-smack -d mysql select-key.smack 1 10000\n> Query Barrel Report for client smacker1\n> connect: max=0ms min=-1ms avg= 0ms from 1 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 20000 0 0 6894.03\n> $ super-smack -d mysql select-key.smack 10 10000\n> Query Barrel Report for client smacker1\n> connect: max=14ms min=0ms avg= 5ms from 10 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 200000 0 0 16798.05\n> \n> The \"update\" test, with 1 and 10 clients:\n> \n> $ super-smack -d pg update-select.smack 1 10000\n> Query Barrel Report for client smacker\n> connect: max=0ms min=-1ms avg= 4ms from 1 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 10000 0 0 1027.49\n> update_index 10000 0 0 1027.49\n> $ super-smack -d pg update-select.smack 10 10000\n> Query Barrel Report for client smacker\n> connect: max=13ms min=5ms avg= 8ms from 10 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 100000 1 0 1020.96\n> update_index 100000 28 0 1020.96\n> \n> The above is with fsync on (though I think this machine's disk lies\n> about write complete so I'd not trust it as production). 
With fsync off,\n> \n> $ super-smack -d pg update-select.smack 1 10000\n> Query Barrel Report for client smacker\n> connect: max=0ms min=-1ms avg= 3ms from 1 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 10000 0 0 1478.25\n> update_index 10000 0 0 1478.25\n> $ super-smack -d pg update-select.smack 10 10000\n> Query Barrel Report for client smacker\n> connect: max=35ms min=5ms avg= 21ms from 10 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 100000 1 0 3067.68\n> update_index 100000 1 0 3067.68\n> \n> versus mysql\n> \n> $ super-smack -d mysql update-select.smack 1 10000\n> Query Barrel Report for client smacker\n> connect: max=0ms min=-1ms avg= 0ms from 1 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 10000 0 0 4101.43\n> update_index 10000 0 0 4101.43\n> $ super-smack -d mysql update-select.smack 10 10000\n> Query Barrel Report for client smacker\n> connect: max=3ms min=0ms avg= 0ms from 10 clients\n> Query_type num_queries max_time min_time q_per_s\n> select_index 100000 1 0 5388.31\n> update_index 100000 6 0 5388.31\n> \n> Since mysql is using myisam tables (ie not transaction safe), I think\n> the fairest comparison is to the fsync-off numbers.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 5 May 2006 05:50:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Super-smack?" } ]
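For reference, a rough SQL approximation of what the super-smack select/update barrels exercise -- the benchmark generates its own ~90,000-row table, so the schema and names below are illustrative only, not the benchmark's actual definitions. The VACUUM ANALYZE and SHOW lines reflect the two fairness points raised in this thread: dead-tuple buildup after heavy updates, and whether a run was made with fsync on or off.

-- Illustrative approximation of the benchmark's keyed-lookup table.
CREATE TABLE bench_auth (
    username varchar(25) PRIMARY KEY,
    pass     varchar(25),
    uid      integer,
    gid      integer
);

-- The workload is single-row lookups and updates by primary key:
SELECT pass, uid, gid FROM bench_auth WHERE username = 'user_00042';
UPDATE bench_auth SET pass = 'x' WHERE username = 'user_00042';

-- After many updates, reclaim dead tuples and refresh statistics,
-- otherwise the comparison is skewed against PostgreSQL:
VACUUM ANALYZE bench_auth;

-- And record whether the run was crash-safe:
SHOW fsync;

Since MyISAM tables offer no crash safety, the fsync-off numbers are the closer apples-to-apples comparison, as noted above.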
[ { "msg_contents": ">FWIW, my own experiments with tests like this suggest that PG is at\nworst about 2x slower than mysql for trivial queries. If you'd reported\na result in that ballpark I'd have accepted it as probably real. 6x I\ndon't believe though ...\n\nOTOH, my tests using BenchmarkSQL\n(http://sourceforge.net/projects/benchmarksql) shows that PG can deliver\nup to 8x more transactions/minute than a well-known proprietary DB on\nsimilar hardware (with 100 concurrent connections) - can't post the\nresults due to licence restrictions of the proprietary vendor though. In\nfact, PG on a single SCSI disk machine did even beat the other DB when\nthe other DB had a fully equipped CX200 Dell/EMC SAN, if only with 30%\nthis time. Note that in the latter case, the other DB is unable to use\nasync IO due to problems running on linux kernel 2.4.9. And yes, PG was\nrunning with fsync on.\n\nIt's only a benchmark though, and real-life useage is what counts in the\nend (after all).\n\nRegards,\nMikael\n", "msg_date": "Mon, 1 May 2006 19:00:21 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Super-smack? " } ]
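When publishing numbers like these it also helps to record the handful of write-path settings that most affect a write-heavy benchmark. A minimal checklist in SQL; the commented postgresql.conf values are only examples to experiment with, not recommendations:

SHOW fsync;                -- on for the results quoted above
SHOW shared_buffers;
SHOW wal_buffers;          -- e.g. wal_buffers = 64 in postgresql.conf
SHOW checkpoint_segments;  -- e.g. checkpoint_segments = 32 for bursty writes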
[ { "msg_contents": "I'm running postgres 8.0.7, and I've got a table of orders with about \n100,000 entries. I want to just look at the new orders, right now 104 of \nthem.\n\nEXPLAIN ANALYZE\nSELECT\n order_id\n FROM\n orders\n WHERE\n order_statuses_id = (SELECT id FROM order_statuses WHERE id_name \n= 'new');\n\n Seq Scan on orders o (cost=1.20..11395.51 rows=7029 width=8) (actual \ntime=286.038..287.662 rows=104 loops=1)\n Filter: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.030..0.039 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n Total runtime: 288.102 ms\n\nThe dreaded sequential scan. I've got an index on order_statuses_id and \nI've VACUUM ANALYZEd the table, but I'm currently clustered on the \nprimary key (order_id).\n\nWith enable_seqscan = off, I get:\n-------------------------------------------------\n Index Scan using orders_status_btree_idx on orders o \n(cost=4.64..12457.14 rows=7031 width=8) (actual time=0.164..0.664 \nrows=104 loops=1)\n Index Cond: (order_statuses_id = $0)\n InitPlan\n -> Index Scan using order_statuses_id_name_key on order_statuses \n(cost=0.00..4.64 rows=1 width=4) (actual time=0.128..0.134 rows=1 loops=1)\n Index Cond: ((id_name)::text = 'new'::text)\n Total runtime: 1.108 ms\n\nIf I hard-code the 'new' status ID, I get:\n-------------------------------------------------\nEXPLAIN ANALYZE\nSELECT\n order_id\n FROM\n orders\n WHERE\n order_statuses_id = 1;\n\n Index Scan using orders_status_btree_idx on orders o \n(cost=0.00..4539.65 rows=1319 width=8) (actual time=0.132..1.883 \nrows=104 loops=1)\n Index Cond: (order_statuses_id = 1)\n Total runtime: 2.380 ms\n\nHere is the pg_stats entry for orders.order_statuses_id:\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals | most_common_freqs \n| histogram_bounds | correlation\n------------+-----------+-------------------+-------------+-----------+------------+------------------+--------------------------------------+-------------------------+-------------\n public | orders | order_statuses_id | 0.000208333 | 4 \n| 14 | {8,24,10,25} | {0.385417,0.242083,0.230625,0.07875} | \n{1,7,7,9,9,9,9,9,23,26} | 0.740117\n\nThis is with SET STATISTICS = 16 on the column, since that's how many \ndifferent values the column can currently take.\n\nNow, here's the thing - if I cluster on the index on order_statuses_id, \nthe original query produces:\n Index Scan using orders_status_btree_idx on orders o \n(cost=1.20..978.94 rows=8203 width=8) (actual time=0.097..0.598 rows=104 \nloops=1)\n Index Cond: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.056..0.065 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n Total runtime: 1.042 ms\n\nEstimated cost went way down. The pg_stats entry becomes:\n\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals | most_common_freqs \n| histogram_bounds | correlation\n------------+-----------+-------------------+-----------+-----------+------------+------------------+----------------------------------------+---------------------+-------------\n public | orders | order_statuses_id | 0 | 4 \n| 12 | {8,24,10,25} | {0.386458,0.244167,0.238333,0.0720833} \n| {1,7,7,9,9,9,22,26} | 1\n\nI'm hesitant to cluster on the order_statuses_id index, because there \nare a lot of other queries using this table, many of which join on \norder_id. 
I also feel like I ought to be able to get the planner to do \nan index scan without hard-coding the order_statuses_id value.\n\nQuestions:\n* What can I do to reduce the estimated row count on the query?\n* Why does clustering drive down the estimated cost for the index scan \nso much? Does a change in correlation from .72 to 1 make that much of a \ndifference?\n* Can I convince my query planner to index scan without clustering on \nthe order_statuses_id index, or setting enable_seqscan = off?\n\nPotential note of interest: This is a very wide, monolithic table - no \nless than 100 columns, with several check constraints, foreign key \nconstraints, and indexes, including three functional indexes.\n\nSide question: Sometimes, when I VACUUM ANALYZE the table, the pg_stats \nentry for order_statuses_id has almost all of the possible values in \nmost_common_vals, instead of just a handful. Example:\n\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| \nmost_common_freqs \n| histogram_bounds | correlation\n------------+-----------+-------------------+-----------+-----------+------------+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------+------------------+-------------\n public | orders | order_statuses_id | 0 | 4 \n| 13 | {8,24,10,25,9,7,23,26,1,22,2,5,4} | \n{0.393125,0.240208,0.226042,0.07875,0.0275,0.0145833,0.0110417,0.00291667,0.00229167,0.001875,0.000625,0.000625,0.000416667} \n| | 1\n\nThis doesn't appear to influence whether the index scan is chosen, but I \nam curious as to why this happens.\n", "msg_date": "Mon, 01 May 2006 10:27:21 -0700", "msg_from": "Nolan Cafferky <[email protected]>", "msg_from_op": true, "msg_subject": "Cluster vs. non-cluster query planning" }, { "msg_contents": "> Questions:\n> * What can I do to reduce the estimated row count on the query?\n> * Why does clustering drive down the estimated cost for the index scan \n> so much? Does a change in correlation from .72 to 1 make that much of \n> a difference?\n> * Can I convince my query planner to index scan without clustering on \n> the order_statuses_id index, or setting enable_seqscan = off? \n\n\nAfter some more digging on the mailing list, I found some comments on \neffective_cache_size. Bringing it up from the default of 1000 does pust \nthe estimated cost for the index scan below that of the sequential scan, \nbut not by much. \n\nWith SET effective_cache_size = 1000:\n Seq Scan on orders o (cost=1.20..11395.53 rows=7029 width=8) (actual \ntime=280.148..281.512 rows=105 loops=1)\n Filter: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.012..0.020 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n Total runtime: 281.700 ms\n\nWith SET effective_cache_size = 10000:\n Index Scan using orders_status_btree_idx on orders o \n(cost=1.20..9710.91 rows=7029 width=8) (actual time=0.050..0.372 \nrows=105 loops=1)\n Index Cond: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.016..0.024 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n\nThe ratios between estimated costs are still nowhere near the ratio of \nactual costs.\n", "msg_date": "Mon, 01 May 2006 12:48:34 -0700", "msg_from": "Nolan Cafferky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster vs. 
non-cluster query planning" }, { "msg_contents": "Nolan Cafferky <[email protected]> writes:\n> After some more digging on the mailing list, I found some comments on \n> effective_cache_size. Bringing it up from the default of 1000 does pust \n> the estimated cost for the index scan below that of the sequential scan, \n> but not by much. \n\nThe first-order knob for tuning indexscan vs seqscan costing is\nrandom_page_cost. What have you got that set to?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2006 16:30:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster vs. non-cluster query planning " }, { "msg_contents": "Tom Lane wrote:\n\n>The first-order knob for tuning indexscan vs seqscan costing is\n>random_page_cost. What have you got that set to?\n> \n>\nThis is currently at the default of 4. All of my other planner cost \nconstants are at default values as well. Dropping it to 1 drops the \nestimated cost by a comparable ratio:\n\n Index Scan using orders_status_btree_idx on orders o \n(cost=1.20..3393.20 rows=7026 width=8) (actual time=0.050..0.314 \nrows=105 loops=1)\n Index Cond: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.017..0.025 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n Total runtime: 0.498 ms\n\nBut, I'm guessing that random_page_cost = 1 is not a realistic value.\n", "msg_date": "Mon, 01 May 2006 14:08:01 -0700", "msg_from": "Nolan Cafferky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster vs. non-cluster query planning" }, { "msg_contents": "Nolan Cafferky <[email protected]> writes:\n> But, I'm guessing that random_page_cost = 1 is not a realistic value.\n\nWell, that depends. If all your data can be expected to fit in memory\nthen it is a realistic value. (If not, you should be real careful not\nto make performance decisions on the basis of test cases that *do* fit\nin RAM...)\n\nIn any case, if I recall your numbers correctly you shouldn't need to\ndrop it nearly that far to get the thing to make the right choice.\nA lot of people run with random_page_cost set to 2 or so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2006 19:35:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster vs. non-cluster query planning " }, { "msg_contents": "Tom Lane wrote:\n\n>Nolan Cafferky <[email protected]> writes:\n>\n>>But, I'm guessing that random_page_cost = 1 is not a realistic value.\n>>\n>\n>Well, that depends. If all your data can be expected to fit in memory\n>then it is a realistic value. (If not, you should be real careful not\n>to make performance decisions on the basis of test cases that *do* fit\n>in RAM...)\n>\n>In any case, if I recall your numbers correctly you shouldn't need to\n>drop it nearly that far to get the thing to make the right choice.\n>A lot of people run with random_page_cost set to 2 or so.\n>\nThanks for the advice. I will check what changing random_page_cost does \nfor the rest of the queries on our system.\n\nI did learn why the estimated row count was so high. 
This is new \nknowledge to me, so I'm going to share it.\n\nSELECT reltuples FROM pg_class WHERE relname = 'orders'; -> produces 98426.\nSELECT n_distinct FROM pg_stats WHERE tablename = 'orders' and attname = \n'order_statuses_id'; -> currently 13.\n\n Seq Scan on orders o (cost=1.20..11395.53 rows=7570 width=8) (actual \ntime=283.599..285.031 rows=105 loops=1)\n Filter: (order_statuses_id = $0)\n InitPlan\n -> Seq Scan on order_statuses (cost=0.00..1.20 rows=1 width=4) \n(actual time=0.031..0.038 rows=1 loops=1)\n Filter: ((id_name)::text = 'new'::text)\n Total runtime: 285.225 ms\n\n(98426 / 13)::integer = 7571 ~= 7570, the estimated row count.\n\nSo the query planner isn't able to combine the knowledge of the id value \nfrom order_statuses with most_common_vals, most_common_freqs, or \nhistogram_bounds from pg_stats. That seems a little odd to me, but maybe \nit makes sense. I suppose the planner can't start executing parts of the \nquery to aid in the planning process.\n\nIn the future, I will probably pre-select from order_statuses before \nexecuting this query.\n\nThanks!\n\n\n\n\n\n\n\nTom Lane wrote:\n\nNolan Cafferky <[email protected]> writes:\n\n\nBut, I'm guessing that random_page_cost = 1 is not a realistic value.\n\n\n\nWell, that depends. If all your data can be expected to fit in memory\nthen it is a realistic value. (If not, you should be real careful not\nto make performance decisions on the basis of test cases that *do* fit\nin RAM...)\n\nIn any case, if I recall your numbers correctly you shouldn't need to\ndrop it nearly that far to get the thing to make the right choice.\nA lot of people run with random_page_cost set to 2 or so.\n\n\nThanks for the advice.  I will check what changing random_page_cost\ndoes for the rest of the queries on our system.\n\nI did learn why the estimated row count was so high.  This is new\nknowledge to me, so I'm going to share it.\n\nSELECT reltuples FROM pg_class WHERE relname = 'orders'; -> produces\n98426.\nSELECT n_distinct FROM pg_stats WHERE tablename = 'orders' and attname\n= 'order_statuses_id'; -> currently 13.\n\n Seq Scan on orders o  (cost=1.20..11395.53 rows=7570 width=8) (actual\ntime=283.599..285.031 rows=105 loops=1)\n   Filter: (order_statuses_id = $0)\n   InitPlan\n     ->  Seq Scan on order_statuses  (cost=0.00..1.20 rows=1\nwidth=4) (actual time=0.031..0.038 rows=1 loops=1)\n           Filter: ((id_name)::text = 'new'::text)\n Total runtime: 285.225 ms\n\n(98426 / 13)::integer = 7571 ~= 7570, the estimated row count.\n\nSo the query planner isn't able to combine the knowledge of the id\nvalue from order_statuses with most_common_vals, most_common_freqs, or\nhistogram_bounds from pg_stats. That seems a little odd to me, but\nmaybe it makes sense. I suppose the planner can't start executing parts\nof the query to aid in the planning process.\n\nIn the future, I will probably pre-select from order_statuses before\nexecuting this query.\n\nThanks!", "msg_date": "Mon, 01 May 2006 17:01:59 -0700", "msg_from": "Nolan Cafferky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster vs. non-cluster query planning" }, { "msg_contents": "On Mon, May 01, 2006 at 07:35:02PM -0400, Tom Lane wrote:\n> Nolan Cafferky <[email protected]> writes:\n> > But, I'm guessing that random_page_cost = 1 is not a realistic value.\n> \n> Well, that depends. If all your data can be expected to fit in memory\n> then it is a realistic value. 
(If not, you should be real careful not\n> to make performance decisions on the basis of test cases that *do* fit\n> in RAM...)\n> \n> In any case, if I recall your numbers correctly you shouldn't need to\n> drop it nearly that far to get the thing to make the right choice.\n> A lot of people run with random_page_cost set to 2 or so.\n\nAlso, the index scan cost estimator comments indicate that it does a\nlinear interpolation between the entimated cost for a perfectly\ncorrelated table and a table with 0 correlation, but in fact the\ninterpolation is exponential, or it's linear based on the *square* of\nthe correlation, which just doesn't make much sense.\n\nI did some investigating on this some time ago, but never got very far\nwith it. http://stats.distributed.net/~decibel/summary.txt has some\ninfo, and http://stats.distributed.net/~decibel/ has the raw data.\nGraphing that data, if you only include correlations between 0.36 and\n0.5, it appears that there is a linear correlation between correlation\nand index scan time.\n\nOf course this is very coarse data and it'd be great if someone did more\nresearch in this area, preferably using pg_bench or other tools to\ngenerate the data so that others can test this stuff as well. But even\nwith as rough as this data is, it seems to provide a decent indication\nthat it would be better to actually interpolate linearly based on\ncorrelation, rather than correlation^2. This is a production machine so\nI'd rather not go mucking about with testing such a change here.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 16:29:42 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster vs. non-cluster query planning" } ]
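A short sketch of the workaround settled on at the end of this thread, using the table and column names from the thread itself: resolve the status id first, then pass it as a literal so the planner can use the column's most_common_vals/most_common_freqs instead of falling back to reltuples divided by n_distinct. The SET lines are the per-session planner knobs discussed above; the values are examples, not recommendations.

-- Step 1: look the id up once (in the thread, 'new' happens to be id 1).
SELECT id FROM order_statuses WHERE id_name = 'new';

-- Step 2: use the literal, which lets the statistics drive the row estimate.
SELECT order_id FROM orders WHERE order_statuses_id = 1;

-- Per-session planner settings touched on above
-- (8.0-era units: effective_cache_size is counted in 8 kB pages).
SET random_page_cost = 2;
SET effective_cache_size = 10000;
SHOW random_page_cost;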
[ { "msg_contents": "Hi everyone.\n\nI've got a quick and stupid question: Does Postgres 7.4 (7.x) support\nvacuum_cost_delay?\n\nFor all my googles and documentation reading I've determined it's not\nsupported, only because I can't find a 7.x doc or forum post claiming\notherwise.\n\nUpgrading to 8.x is out of the question, but I still need to employ\nsomething to auto-vacuum a large and active database (possibly more than\nonce a day) in a manner that wouldn't affect load at the wrong time.\n\nIf I could combine pg_autovacuum with vacuum_cost_delay I could potentially\nhave a solution. (barring some performance testing)\n\nThe only problem with pg_autovacuum is the need for pg_statio, which itself\nwill reduce performance at all times.\n\nAny suggestions?\n\nThanks!\n\n- Chris\n\n\n\n\n\n\n\n\nPostgres 7.4 and vacuum_cost_delay.\n\n\nHi everyone.\n\nI've got a quick and stupid question: Does Postgres 7.4 (7.x) support vacuum_cost_delay?\n\nFor all my googles and documentation reading I've determined it's not supported, only because I can't find a 7.x doc or forum post claiming otherwise.\nUpgrading to 8.x is out of the question, but I still need to employ something to auto-vacuum a large and active database (possibly more than once a day) in a manner that wouldn't affect load at the wrong time.\nIf I could combine pg_autovacuum with vacuum_cost_delay I could potentially have a solution. (barring some performance testing)\nThe only problem with pg_autovacuum is the need for pg_statio, which itself will reduce performance at all times.\n\nAny suggestions?\n\nThanks!\n\n- Chris", "msg_date": "Mon, 1 May 2006 14:40:41 -0400 ", "msg_from": "Chris Mckenzie <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 7.4 and vacuum_cost_delay." }, { "msg_contents": "show all and grep are your friend. From my laptop with 8.1:\n\[email protected][16:36]~:4%psql -tc 'show all' | grep vacuum_cost_delay|tr -s ' '\nautovacuum_vacuum_cost_delay | -1 | Vacuum cost delay in milliseconds, for autovacuum.\nvacuum_cost_delay | 0 | Vacuum cost delay in milliseconds.\[email protected][16:37]~:5%\n\nI don't have a 7.4 copy around, but you can just check it yourself.\n\nOn Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:\n> Hi everyone.\n> \n> I've got a quick and stupid question: Does Postgres 7.4 (7.x) support\n> vacuum_cost_delay?\n> \n> For all my googles and documentation reading I've determined it's not\n> supported, only because I can't find a 7.x doc or forum post claiming\n> otherwise.\n> \n> Upgrading to 8.x is out of the question, but I still need to employ\n> something to auto-vacuum a large and active database (possibly more than\n> once a day) in a manner that wouldn't affect load at the wrong time.\n> \n> If I could combine pg_autovacuum with vacuum_cost_delay I could potentially\n> have a solution. (barring some performance testing)\n> \n> The only problem with pg_autovacuum is the need for pg_statio, which itself\n> will reduce performance at all times.\n> \n> Any suggestions?\n> \n> Thanks!\n> \n> - Chris\n> \n> \n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 16:38:47 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." 
}, { "msg_contents": "On Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:\n> I've got a quick and stupid question: Does Postgres 7.4 (7.x) support\n> vacuum_cost_delay?\n\nNo, it does not; it was introduced in 8.0.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 3 May 2006 01:00:01 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." } ]
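Two quick checks that follow from the answers above. The first works on any release and simply lists the vacuum-cost settings if the server knows them, so it returns no rows on 7.4 and several on 8.0 or later; the second confirms that the row-level statistics collector needed by contrib/pg_autovacuum is actually enabled.

SELECT name, setting FROM pg_settings WHERE name LIKE '%vacuum_cost%';

-- pg_autovacuum requires stats_start_collector = true and
-- stats_row_level = true in postgresql.conf:
SHOW stats_row_level;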
[ { "msg_contents": ">My server is the HP DL585 (quad, dual-core Opteron, 16GB RAM) with 4 HD\nbays run by a HP SmartArray 5i controller. I have 15 10K 300GB >drives\nand 1 15K 150GB drive (don't ask how that happened).\n\nOur server will be a DL385 (dual, dual-core Opteron, 16Gb RAM), and the\n28 disks(10K 146Gb)in the MSA1500 will probably be set up in SAME\nconfiguration (Stripe All, Mirror Everything). Still to be decided\nthough. I'll post both pgbench and BenchmarkSQL\n(http://sourceforge.net/projects/benchmarksql) results here as soon as\nwe have the machine set up. OS+log(not WAL) will recide on directly\nattached disks, and all heavy reading+writing will be taken care of by\nthe MSA.\n\nNot sure how much of the cache module that will be used for reads, but\nas our peak write load is quite high we'll probably use at least half of\nit for writes (good write performance is pretty much the key for the\napplication in question)\n\n>How would/do you guys set up your MSA1x00 with 1 drive sled? RAID10 vs\n>RAID5 across 10+ disks?\n\nSince it's a datawarehouse type of application, you'd probably optimize\nfor large storage capacity and read (rather than write) performance, and\nin that case I guess raid5 could be considered, at least. Depends very\nmuch on reliability requirements though - raid5 performs much worse than\nraid10 in degraded mode (one disk out). Here's an interesting read\nregarding raid5 vs raid10 (NOT very pro-raid5 :) )\nhttp://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt\n\nRegards,\nMikael\n\n\n", "msg_date": "Mon, 1 May 2006 23:33:58 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware: HP StorageWorks MSA 1500" } ]
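If the MSA volumes and the directly attached disks end up as separate filesystems, tablespaces (8.0 and later) are one way to steer the heavy tables onto the array explicitly. A hedged sketch only -- the mount point and the table are purely illustrative, and the directory must already exist, be empty, and be owned by the postgres user (the command itself needs superuser rights). Relocating the WAL is normally done by symlinking the pg_xlog directory to the other volume rather than through SQL.

CREATE TABLESPACE msa_data LOCATION '/mnt/msa1500/pgdata';  -- illustrative path

CREATE TABLE sales_fact (               -- hypothetical table
    id         bigint PRIMARY KEY,
    created_at timestamp with time zone NOT NULL,
    amount     numeric(12,2)
) TABLESPACE msa_data;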
[ { "msg_contents": "I have a quite large query that takes over a minute to run on my laptop.\nOn the db server it takes olmost 20 seconds, but I have 200+ concurent\nusers who will be running similair querries, and during the query the\nI/O goes bezerk, I read 30MB/s reading (iostat tells so). So, before\ngoing into denormalization, I wonder if I could do something to speed\nthings up.\n\nThe query is like this:\n\nselect\n\t*\nfrom\n\tmessages\n\tjoin services on services.id = messages.service_id \n\tjoin ticketing_messages on messages.id = ticketing_messages.message_id\n\tleft join ticketing_winners on ticketing_winners.message_id =\nticketing_messages.message_id\n\tleft join\n\t(\n\t\tselect\n\t\t\t*\n\t\tfrom\n\t\t\tticketing_codes_played\n\t\t\tjoin ticketing_codes on ticketing_codes.code_id =\nticketing_codes_played.code_id\n\t) as codes on codes.message_id = ticketing_messages.message_id\nwhere\n\tservices.type_id = 10\nand\n\tmessages.receiving_time between '2006-02-12' and '2006-03-18 23:00:00';\n\nThe explain analyze of the above produces this:\n\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=221692.04..222029.29 rows=3772 width=264)\n(actual time=539169.163..541579.504 rows=75937 loops=1)\n Merge Cond: (\"outer\".message_id = \"inner\".message_id)\n -> Sort (cost=40080.17..40089.60 rows=3772 width=238) (actual\ntime=8839.072..9723.371 rows=75937 loops=1)\n Sort Key: messages.id\n -> Hash Left Join (cost=2259.40..39856.10 rows=3772\nwidth=238) (actual time=1457.451..7870.830 rows=75937 loops=1)\n Hash Cond: (\"outer\".message_id = \"inner\".message_id)\n -> Nested Loop (cost=2234.64..39811.76 rows=3772\nwidth=230) (actual time=1418.911..7063.299 rows=75937 loops=1)\n -> Index Scan using pk_services on services\n(cost=0.00..4.46 rows=1 width=54) (actual time=28.261..28.271 rows=1\nloops=1)\n Index Cond: (1102 = id)\n Filter: (type_id = 10)\n -> Hash Join (cost=2234.64..39769.58 rows=3772\nwidth=176) (actual time=1390.621..6297.501 rows=75937 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".message_id)\n -> Bitmap Heap Scan on messages\n(cost=424.43..32909.53 rows=74408 width=162) (actual\ntime=159.796..4329.125 rows=75937 loops=1)\n Recheck Cond: (service_id = 1102)\n -> Bitmap Index Scan on idx_service_id\n(cost=0.00..424.43 rows=74408 width=0) (actual time=95.197..95.197\nrows=75937 loops=1)\n Index Cond: (service_id = 1102)\n -> Hash (cost=1212.37..1212.37 rows=75937\nwidth=14) (actual time=940.372..940.372 rows=75937 loops=1)\n -> Seq Scan on ticketing_messages\n(cost=0.00..1212.37 rows=75937 width=14) (actual time=12.122..461.960\nrows=75937 loops=1)\n -> Hash (cost=21.21..21.21 rows=1421 width=8) (actual\ntime=38.496..38.496 rows=1421 loops=1)\n -> Seq Scan on ticketing_winners\n(cost=0.00..21.21 rows=1421 width=8) (actual time=24.534..31.347\nrows=1421 loops=1)\n -> Sort (cost=181611.87..181756.68 rows=57925 width=26) (actual\ntime=530330.060..530647.055 rows=57925 loops=1)\n Sort Key: ticketing_codes_played.message_id\n -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n(actual time=68.322..529472.026 rows=57925 loops=1)\n -> Seq Scan on ticketing_codes_played\n(cost=0.00..863.25 rows=57925 width=8) (actual time=0.042..473.881\nrows=57925 loops=1)\n -> Index Scan using ticketing_codes_pk on\nticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\ntime=9.102..9.108 rows=1 loops=57925)\n Index Cond: 
(ticketing_codes.code_id =\n\"outer\".code_id)\n Total runtime: 542000.093 ms\n(27 rows)\n\n\nI'll be more than happy to provide any additional information that I may\nbe able to gather. I'd be most happy if someone would scream something\nlike \"four joins, smells like a poor design\" because design is poor, but\nthe system is in production, and I have to bare with it.\n\n\tMario\n-- \n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\nMario Splivalo\[email protected]\n\n\n", "msg_date": "Tue, 02 May 2006 03:27:54 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "\n> -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n> (actual time=68.322..529472.026 rows=57925 loops=1)\n> -> Seq Scan on ticketing_codes_played\n> (cost=0.00..863.25 rows=57925 width=8) (actual time=0.042..473.881\n> rows=57925 loops=1)\n> -> Index Scan using ticketing_codes_pk on\n> ticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\n> time=9.102..9.108 rows=1 loops=57925)\n> Index Cond: (ticketing_codes.code_id =\n> \"outer\".code_id)\n> Total runtime: 542000.093 ms\n> (27 rows)\n> \n> \n> I'll be more than happy to provide any additional information \n> that I may\n> be able to gather. I'd be most happy if someone would scream something\n> like \"four joins, smells like a poor design\" because design \n> is poor, but\n> the system is in production, and I have to bare with it.\n\n\nIt looks like that nested loop which is joining ticketing_codes_played\nto ticketing_codes is the slow part. I'm curious how many rows are in\nthe ticketing_codes table?\n\nFour or five joins does not seem like a lot to me, but it can be slow if\nyou are joining big tables with other big tables.\n\n", "msg_date": "Wed, 3 May 2006 10:20:35 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> I have a quite large query that takes over a minute to run on my laptop.\n\nThe EXPLAIN output you provided doesn't seem to agree with the stated\nquery. Where'd the \"service_id = 1102\" condition come from?\n\nIn general, I'd suggest playing around with the join order. Existing\nreleases of PG tend to throw up their hands when faced with a mixture of\nouter joins and regular joins, and just join the tables in the order\nlisted. 8.2 will be smarter about this, but for now you have to do it\nby hand ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 May 2006 13:58:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lot'sa joins - performance tip-up, please? " }, { "msg_contents": "On Wed, 2006-05-03 at 10:20 -0500, Dave Dutcher wrote:\n> > -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n> > (actual time=68.322..529472.026 rows=57925 loops=1)\n> > -> Seq Scan on ticketing_codes_played\n> > (cost=0.00..863.25 rows=57925 width=8) (actual time=0.042..473.881\n> > rows=57925 loops=1)\n> > -> Index Scan using ticketing_codes_pk on\n> > ticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\n> > time=9.102..9.108 rows=1 loops=57925)\n> > Index Cond: (ticketing_codes.code_id =\n> > \"outer\".code_id)\n> > Total runtime: 542000.093 ms\n> > (27 rows)\n> > \n> > \n> > I'll be more than happy to provide any additional information \n> > that I may\n> > be able to gather. 
I'd be most happy if someone would scream something\n> > like \"four joins, smells like a poor design\" because design \n> > is poor, but\n> > the system is in production, and I have to bare with it.\n> \n> \n> It looks like that nested loop which is joining ticketing_codes_played\n> to ticketing_codes is the slow part. I'm curious how many rows are in\n> the ticketing_codes table?\n> \n> Four or five joins does not seem like a lot to me, but it can be slow if\n> you are joining big tables with other big tables.\n\nTicketing_codes table has 11000000 records, and it's expected to grow.\n\nI tried playing with JOIN order as Tom suggested, but performance is the\nsame.\n\n\tMario\n\n", "msg_date": "Thu, 04 May 2006 16:15:13 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "On Wed, 2006-05-03 at 13:58 -0400, Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n> > I have a quite large query that takes over a minute to run on my laptop.\n> \n> The EXPLAIN output you provided doesn't seem to agree with the stated\n> query. Where'd the \"service_id = 1102\" condition come from?\n\nI guess I copypasted the additional WHERE to te EXPLAIN ANALYZE query.\nThis is the correct one, without the WHERE:\n\n Hash Left Join (cost=198628.35..202770.61 rows=121 width=264) (actual\ntime=998008.264..999645.322 rows=5706 loops=1)\n Hash Cond: (\"outer\".message_id = \"inner\".message_id)\n -> Merge Left Join (cost=21943.23..21950.96 rows=121 width=238)\n(actual time=4375.510..4540.772 rows=5706 loops=1)\n Merge Cond: (\"outer\".message_id = \"inner\".message_id)\n -> Sort (cost=21847.62..21847.92 rows=121 width=230) (actual\ntime=3304.787..3378.515 rows=5706 loops=1)\n Sort Key: messages.id\n -> Hash Join (cost=20250.16..21843.43 rows=121\nwidth=230) (actual time=1617.370..3102.470 rows=5706 loops=1)\n Hash Cond: (\"outer\".message_id = \"inner\".id)\n -> Seq Scan on ticketing_messages\n(cost=0.00..1212.37 rows=75937 width=14) (actual time=10.554..609.967\nrows=75937 loops=1)\n -> Hash (cost=20244.19..20244.19 rows=2391\nwidth=216) (actual time=1572.889..1572.889 rows=5706 loops=1)\n -> Nested Loop (cost=1519.21..20244.19\nrows=2391 width=216) (actual time=385.582..1449.207 rows=5706 loops=1)\n -> Seq Scan on services\n(cost=0.00..4.20 rows=3 width=54) (actual time=20.829..20.859 rows=2\nloops=1)\n Filter: (type_id = 10)\n -> Bitmap Heap Scan on messages\n(cost=1519.21..6726.74 rows=1594 width=162) (actual\ntime=182.346..678.800 rows=2853 loops=2)\n Recheck Cond: ((\"outer\".id =\nmessages.service_id) AND (messages.receiving_time >= '2006-02-12\n00:00:00+01'::timestamp with time zone) AND (messages.receiving_time <=\n'2006-03-18 23:00:00+01'::timestamp with time zone))\n -> BitmapAnd\n(cost=1519.21..1519.21 rows=1594 width=0) (actual time=164.311..164.311\nrows=0 loops=2)\n -> Bitmap Index Scan on\nidx_service_id (cost=0.00..84.10 rows=14599 width=0) (actual\ntime=66.809..66.809 rows=37968 loops=2)\n Index Cond:\n(\"outer\".id = messages.service_id)\n -> Bitmap Index Scan on\nidx_messages_receiving_time (cost=0.00..1434.87 rows=164144 width=0)\n(actual time=192.633..192.633 rows=184741 loops=1)\n Index Cond:\n((receiving_time >= '2006-02-12 00:00:00+01'::timestamp with time zone)\nAND (receiving_time <= '2006-03-18 23:00:00+01'::timestamp with time\nzone))\n -> Sort (cost=95.62..99.17 rows=1421 width=8) (actual\ntime=1070.678..1072.999 rows=482 loops=1)\n Sort Key: 
ticketing_winners.message_id\n -> Seq Scan on ticketing_winners (cost=0.00..21.21\nrows=1421 width=8) (actual time=424.836..1061.834 rows=1421 loops=1)\n -> Hash (cost=176144.30..176144.30 rows=57925 width=26) (actual\ntime=993592.980..993592.980 rows=57925 loops=1)\n -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n(actual time=1074.984..992536.243 rows=57925 loops=1)\n -> Seq Scan on ticketing_codes_played\n(cost=0.00..863.25 rows=57925 width=8) (actual time=74.479..2047.993\nrows=57925 loops=1)\n -> Index Scan using ticketing_codes_pk on\nticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\ntime=17.044..17.052 rows=1 loops=57925)\n Index Cond: (ticketing_codes.code_id =\n\"outer\".code_id)\n Total runtime: 999778.981 ms\n\n\n> In general, I'd suggest playing around with the join order. Existing\n> releases of PG tend to throw up their hands when faced with a mixture of\n> outer joins and regular joins, and just join the tables in the order\n> listed. 8.2 will be smarter about this, but for now you have to do it\n> by hand ...\n\nNo luck for me there. But, I found out that if I first do join on\nticketing_codes and ticketing_codes_played, put the result to temporary\ntable, and then join that temporary table with the rest of the query\n(the SELECT that is in parenthesis is transfered to a temporary table)\nthe query is almost twice as fast.\n\nAs mentioned before, ticketing_codes has 11000000 records.\n\n\tMario\n\nP.S. Is it just me, or posting to psql-perofrmance is laged, quite a\nbit?\n\n", "msg_date": "Thu, 04 May 2006 16:45:57 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "On Thu, May 04, 2006 at 04:45:57PM +0200, Mario Splivalo wrote:\nWell, here's the problem...\n\n> -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n> (actual time=1074.984..992536.243 rows=57925 loops=1)\n> -> Seq Scan on ticketing_codes_played\n> (cost=0.00..863.25 rows=57925 width=8) (actual time=74.479..2047.993\n> rows=57925 loops=1)\n> -> Index Scan using ticketing_codes_pk on\n> ticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\n> time=17.044..17.052 rows=1 loops=57925)\n> Index Cond: (ticketing_codes.code_id =\n> \"outer\".code_id)\n\nAnyone have any idea why on earth it's doing that instead of a hash or\nmerge join?\n\nIn any case, try swapping the order of ticketing_codes_played and\nticketing_codes. Actually, that'd probably make it worse.\n\nTry SET enable_nestloop = off;\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 10 May 2006 17:10:48 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "On Wed, 2006-05-10 at 17:10 -0500, Jim C. 
Nasby wrote:\n> On Thu, May 04, 2006 at 04:45:57PM +0200, Mario Splivalo wrote:\n> Well, here's the problem...\n> \n> > -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)\n> > (actual time=1074.984..992536.243 rows=57925 loops=1)\n> > -> Seq Scan on ticketing_codes_played\n> > (cost=0.00..863.25 rows=57925 width=8) (actual time=74.479..2047.993\n> > rows=57925 loops=1)\n> > -> Index Scan using ticketing_codes_pk on\n> > ticketing_codes (cost=0.00..3.01 rows=1 width=18) (actual\n> > time=17.044..17.052 rows=1 loops=57925)\n> > Index Cond: (ticketing_codes.code_id =\n> > \"outer\".code_id)\n> \n> Anyone have any idea why on earth it's doing that instead of a hash or\n> merge join?\n> \n> In any case, try swapping the order of ticketing_codes_played and\n> ticketing_codes. Actually, that'd probably make it worse.\n\nI tried that, no luck. The best performance I achieve with creating\ntemporary table. And...\n\n> \n> Try SET enable_nestloop = off;\n\nThis helps also. I don't get sequential scans any more. I'd like a tip\non how to set 'enable_nestloop = off' trough JDBC?\n\n\tMario\n-- \n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\nMario Splivalo\[email protected]\n\n\n", "msg_date": "Thu, 11 May 2006 07:32:26 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" }, { "msg_contents": "Hi, Mario,\n\nMario Splivalo wrote:\n\n> This helps also. I don't get sequential scans any more. I'd like a tip\n> on how to set 'enable_nestloop = off' trough JDBC?\n\nstatement.execute(\"SET enable_nestloop TO off\"); should do.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 18 May 2006 11:43:05 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lot'sa joins - performance tip-up, please?" } ]
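Putting the pieces of this thread together, a sketch of the temporary-table plus enable_nestloop workaround in plain SQL. The table names are the ones from the thread; codes_tmp is a made-up name, and only the join keys are carried into it here -- a real run would add whatever ticketing_codes columns the application actually needs. SET LOCAL confines the planner override to this one transaction, and the ANALYZE gives the planner real statistics for the temporary table.

BEGIN;
SET LOCAL enable_nestloop = off;           -- reverts automatically at COMMIT

-- Materialize the join against the 11M-row ticketing_codes table once
-- (roughly 58k rows in the thread's data):
CREATE TEMP TABLE codes_tmp AS
    SELECT tcp.message_id, tcp.code_id
    FROM ticketing_codes_played tcp
    JOIN ticketing_codes tc ON tc.code_id = tcp.code_id;

ANALYZE codes_tmp;

SELECT m.*, s.*, tm.*, tw.*, c.*
FROM messages m
JOIN services s                ON s.id = m.service_id
JOIN ticketing_messages tm     ON m.id = tm.message_id
LEFT JOIN ticketing_winners tw ON tw.message_id = tm.message_id
LEFT JOIN codes_tmp c          ON c.message_id = tm.message_id
WHERE s.type_id = 10
  AND m.receiving_time BETWEEN '2006-02-12' AND '2006-03-18 23:00:00';
COMMIT;

From JDBC the equivalent override is the statement.execute("SET enable_nestloop TO off") call shown above, or SET LOCAL issued on the same connection inside the transaction that runs the query.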
[ { "msg_contents": "Hi,\n\nI have recently implemented table partitioning in our postgres 8.1 db. Upon analyzing query performance, I have realized that, even when only a single one of the \"partitions\" has to be scanned, the plan is drastically different, and performs much worse, when I query against the master table (uses merge join), vs. a direct query against the partition directly (uses a hash join). The majority of our queries only access a single partition.\n\nAny insight into why this happens and what can be done to improve performance would be greatly appreciated.\n\nbr_1min is my partitioned table:\n\nexplain analyze\nSELECT *\nFROM br_1min br1 JOIN br_mods mod on br1.modules_id = mod.id \nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\n----------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=73.99..223.43 rows=1 width=109) (actual time=2925.629..3082.188 rows=45 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".modules_id)\n -> Index Scan using br_mods_id_pkey on br_mods mod (cost=0.00..40861.18 rows=282 width=77) (actual time=2922.223..3078.335 rows=45 loops=1)\n Filter: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id = 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n -> Sort (cost=73.99..76.26 rows=906 width=32) (actual time=3.334..3.508 rows=348 loops=1)\n Sort Key: br1.modules_id\n -> Append (cost=0.00..29.49 rows=906 width=32) (actual time=0.133..2.169 rows=910 loops=1)\n -> Index Scan using br_1min_end_idx on br_1min br1 (cost=0.00..2.02 rows=1 width=32) (actual time=0.029..0.029 rows=0 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Index Scan using br_1min_20557_end_idx on br_1min_20557 br1 (cost=0.00..27.48 rows=905 width=32) (actual time=0.101..1.384 rows=910 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n Total runtime: 3082.450 ms\n(12 rows)\n\n\n\nNow, If I query directly against br_1min_20557, my partition, I get:\n\nexplain analyze\nSELECT *\nFROM br_1min_20557 br1 JOIN br_mods mod on br1.modules_id = mod.id \nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=764.74..796.94 rows=1 width=109) (actual time=2.488..2.865 rows=45 loops=1)\n Hash Cond: (\"outer\".modules_id = \"inner\".id)\n -> Index Scan using br_1min_20557_end_idx on br_1min_20557 br1 (cost=0.00..27.62 rows=914 width=32) (actual time=0.084..1.886 rows=910 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Hash (cost=764.03..764.03 rows=282 width=77) (actual time=0.284..0.284 rows=45 loops=1)\n -> Bitmap Heap Scan on br_mods mod (cost=20.99..764.03 rows=282 width=77) 
(actual time=0.154..0.245 rows=45 loops=1)\n Recheck Cond: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id = 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n -> BitmapOr (cost=20.99..20.99 rows=282 width=0) (actual time=0.144..0.144 rows=0 loops=1)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.031..0.031 rows=14 loops=1)\n Index Cond: (downloads_id = 153226)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.011..0.011 rows=2 loops=1)\n Index Cond: (downloads_id = 153714)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (downloads_id = 153730)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (downloads_id = 153728)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153727)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153724)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153713)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153725)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.041..0.041 rows=16 loops=1)\n Index Cond: (downloads_id = 153739)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.010..0.010 rows=1 loops=1)\n Index Cond: (downloads_id = 153722)\n Total runtime: 3.017 ms\n(29 rows)\n\nThe difference is night-and-day. Any suggestions?\n\nThanks alot,\n\nMark\n\n\n\n\n\nWhy is plan (and performance) different on partitioned table?\n\n\n\nHi,\n\nI have recently implemented table partitioning in our postgres 8.1 db. Upon analyzing query performance, I have realized that, even when only a single one of the \"partitions\" has to be scanned, the plan is drastically different, and performs much worse, when I query against the master table (uses merge join), vs. a direct query against the partition directly (uses a hash join).  
The majority of our queries only access a single partition.\n\nAny insight into why this happens and what can be done to improve performance would be greatly appreciated.\n\nbr_1min is my partitioned table:\n\nexplain analyze\nSELECT *\nFROM br_1min br1 JOIN br_mods mod on br1.modules_id = mod.id \nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n  AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\n----------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=73.99..223.43 rows=1 width=109) (actual time=2925.629..3082.188 rows=45 loops=1)\n   Merge Cond: (\"outer\".id = \"inner\".modules_id)\n   ->  Index Scan using br_mods_id_pkey on br_mods mod  (cost=0.00..40861.18 rows=282 width=77) (actual time=2922.223..3078.335 rows=45 loops=1)\n         Filter: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id = 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n   ->  Sort  (cost=73.99..76.26 rows=906 width=32) (actual time=3.334..3.508 rows=348 loops=1)\n         Sort Key: br1.modules_id\n         ->  Append  (cost=0.00..29.49 rows=906 width=32) (actual time=0.133..2.169 rows=910 loops=1)\n               ->  Index Scan using br_1min_end_idx on br_1min br1  (cost=0.00..2.02 rows=1 width=32) (actual time=0.029..0.029 rows=0 loops=1)\n                     Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n               ->  Index Scan using br_1min_20557_end_idx on br_1min_20557 br1  (cost=0.00..27.48 rows=905 width=32) (actual time=0.101..1.384 rows=910 loops=1)\n                     Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n Total runtime: 3082.450 ms\n(12 rows)\n\n\n\nNow, If I query directly against br_1min_20557, my partition, I get:\n\nexplain analyze\nSELECT *\nFROM br_1min_20557 br1 JOIN br_mods mod on br1.modules_id = mod.id \nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n  AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=764.74..796.94 rows=1 width=109) (actual time=2.488..2.865 rows=45 loops=1)\n   Hash Cond: (\"outer\".modules_id = \"inner\".id)\n   ->  Index Scan using br_1min_20557_end_idx on br_1min_20557 br1  (cost=0.00..27.62 rows=914 width=32) (actual time=0.084..1.886 rows=910 loops=1)\n         Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n   ->  Hash  (cost=764.03..764.03 rows=282 width=77) (actual time=0.284..0.284 rows=45 loops=1)\n         ->  Bitmap Heap Scan on br_mods mod  (cost=20.99..764.03 rows=282 width=77) (actual time=0.154..0.245 rows=45 loops=1)\n               Recheck Cond: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id 
= 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n               ->  BitmapOr  (cost=20.99..20.99 rows=282 width=0) (actual time=0.144..0.144 rows=0 loops=1)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.031..0.031 rows=14 loops=1)\n                           Index Cond: (downloads_id = 153226)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.011..0.011 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153714)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153730)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153728)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153727)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153724)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153713)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153725)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.041..0.041 rows=16 loops=1)\n                           Index Cond: (downloads_id = 153739)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.010..0.010 rows=1 loops=1)\n                           Index Cond: (downloads_id = 153722)\n Total runtime: 3.017 ms\n(29 rows)\n\nThe difference is night-and-day.  Any suggestions?\n\nThanks alot,\n\nMark", "msg_date": "Mon, 1 May 2006 18:37:03 -0700", "msg_from": "\"Mark Liberman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why is plan (and performance) different on partitioned table?" }, { "msg_contents": "\"Mark Liberman\" <[email protected]> writes:\n> I have recently implemented table partitioning in our postgres 8.1 db. =\n> Upon analyzing query performance, I have realized that, even when only a =\n> single one of the \"partitions\" has to be scanned, the plan is =\n> drastically different, and performs much worse, when I query against the =\n> master table (uses merge join), vs. a direct query against the partition =\n> directly (uses a hash join). The majority of our queries only access a =\n> single partition.\n\nJoins against partitioned tables suck in 8.1 :-(. There is code in CVS\nHEAD to improve this, but it didn't get done in time for 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2006 22:59:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is plan (and performance) different on partitioned table? 
" }, { "msg_contents": "I wrote:\n> Joins against partitioned tables suck in 8.1 :-(.\n\nActually ... while the above is a true statement, it's too flippant a\nresponse for your problem. The reason the planner is going for a\nmergejoin in your example is that it thinks the mergejoin will terminate\nearly. (Notice that the cost estimate for the mergejoin is actually\nquite a bit less than the estimate for its first input.) This estimate\ncan only be made if the planner has statistics that say that one of the\njoin columns has a max value much less than the other's. Well, that's\nfine, but where the heck did it get the stats for the partitioned table?\nWe don't compute union statistics for partitions. The answer is that\nit's confused and is using the stats for just the parent table as if\nthey were representative for the whole inheritance tree.\n\nI think this behavior was intentional back when it was coded, but when\ninheritance is being used for partitioning, it's clearly brain-dead.\nWe should either not assume anything about the statistics for an\ninheritance tree, or make a real effort to compute them.\n\nFor the moment, I've applied a quick patch that makes sure we don't\nassume anything.\n\nIf you don't have anything in the parent table br_1min, then deleting\nthe (presumably obsolete) pg_statistic rows for it should fix your\nimmediate problem. Otherwise, consider applying the attached.\n\n\t\t\tregards, tom lane\n\n\nIndex: src/backend/optimizer/path/allpaths.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/path/allpaths.c,v\nretrieving revision 1.137.2.2\ndiff -c -r1.137.2.2 allpaths.c\n*** src/backend/optimizer/path/allpaths.c\t13 Feb 2006 16:22:29 -0000\t1.137.2.2\n--- src/backend/optimizer/path/allpaths.c\t2 May 2006 04:31:27 -0000\n***************\n*** 264,269 ****\n--- 264,276 ----\n \t\t\t\t errmsg(\"SELECT FOR UPDATE/SHARE is not supported for inheritance queries\")));\n \n \t/*\n+ \t * We might have looked up indexes for the parent rel, but they're\n+ \t * really not relevant to the appendrel. Reset the pointer to avoid\n+ \t * any confusion.\n+ \t */\n+ \trel->indexlist = NIL;\n+ \n+ \t/*\n \t * Initialize to compute size estimates for whole inheritance tree\n \t */\n \trel->rows = 0;\nIndex: src/backend/utils/adt/selfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/selfuncs.c,v\nretrieving revision 1.191.2.1\ndiff -c -r1.191.2.1 selfuncs.c\n*** src/backend/utils/adt/selfuncs.c\t22 Nov 2005 18:23:22 -0000\t1.191.2.1\n--- src/backend/utils/adt/selfuncs.c\t2 May 2006 04:31:27 -0000\n***************\n*** 2970,2988 ****\n \t\t(varRelid == 0 || varRelid == ((Var *) basenode)->varno))\n \t{\n \t\tVar\t\t *var = (Var *) basenode;\n! \t\tOid\t\t\trelid;\n \n \t\tvardata->var = basenode;\t/* return Var without relabeling */\n \t\tvardata->rel = find_base_rel(root, var->varno);\n \t\tvardata->atttype = var->vartype;\n \t\tvardata->atttypmod = var->vartypmod;\n \n! \t\trelid = getrelid(var->varno, root->parse->rtable);\n \n! \t\tif (OidIsValid(relid))\n \t\t{\n \t\t\tvardata->statsTuple = SearchSysCache(STATRELATT,\n! \t\t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(relid),\n \t\t\t\t\t\t\t\t\t\t\t\t Int16GetDatum(var->varattno),\n \t\t\t\t\t\t\t\t\t\t\t\t 0, 0);\n \t\t}\n--- 2970,2996 ----\n \t\t(varRelid == 0 || varRelid == ((Var *) basenode)->varno))\n \t{\n \t\tVar\t\t *var = (Var *) basenode;\n! 
\t\tRangeTblEntry *rte;\n \n \t\tvardata->var = basenode;\t/* return Var without relabeling */\n \t\tvardata->rel = find_base_rel(root, var->varno);\n \t\tvardata->atttype = var->vartype;\n \t\tvardata->atttypmod = var->vartypmod;\n \n! \t\trte = rt_fetch(var->varno, root->parse->rtable);\n \n! \t\tif (rte->inh)\n! \t\t{\n! \t\t\t/*\n! \t\t\t * XXX This means the Var represents a column of an append relation.\n! \t\t\t * Later add code to look at the member relations and try to derive\n! \t\t\t * some kind of combined statistics?\n! \t\t\t */\n! \t\t}\n! \t\telse if (rte->rtekind == RTE_RELATION)\n \t\t{\n \t\t\tvardata->statsTuple = SearchSysCache(STATRELATT,\n! \t\t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(rte->relid),\n \t\t\t\t\t\t\t\t\t\t\t\t Int16GetDatum(var->varattno),\n \t\t\t\t\t\t\t\t\t\t\t\t 0, 0);\n \t\t}\n", "msg_date": "Tue, 02 May 2006 00:44:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is plan (and performance) different on partitioned table? " } ]
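For anyone hitting the same plan on 8.1 without applying the patch, a sketch of the immediate workaround Tom mentions: removing the obsolete parent-table statistics so they stop standing in for the whole inheritance tree. This assumes the parent table really is empty and that you run it as a superuser; the catalog query mirrors the style used later in this thread, and the table name should be adjusted for your schema.

    -- See what the planner is currently working from for the empty parent:
    SELECT staattnum, stadistinct
    FROM pg_statistic
    WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'br_1min');

    -- Drop the stale entries left over from before partitioning:
    DELETE FROM pg_statistic
    WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'br_1min');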
[ { "msg_contents": "Ever since I started working with PostgreSQL I've heard the need to\nwatch transaction IDs. The phrase \"transaction ID wraparound\" still\ngives me a shiver. Attached it a short script that works with the\nmonitoring system Nagios to keep an eye on transaction IDs. It should\nbe easy to adapt to any other monitoring system.\n\nIt runs the textbook query below and reports how close you are to wraparound.\n SELECT datname, age(datfrozenxid) FROM pg_database;\n\nThe script detects a wrap at 2 billion. It starts warning once one or\nmore databases show an age over 1 billion transactions. It reports\ncritical at 1.5B transactions. I hope everyone out there is vacuuming\n*all* databases often.\n\nHope some of you can use this script!\nTony Wasson", "msg_date": "Tue, 2 May 2006 11:26:04 -0700", "msg_from": "\"Tony Wasson\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql transaction id monitoring with nagios" }, { "msg_contents": "On May 2, 2006, at 2:26 PM, Tony Wasson wrote:\n\n> The script detects a wrap at 2 billion. It starts warning once one or\n> more databases show an age over 1 billion transactions. It reports\n> critical at 1.5B transactions. I hope everyone out there is vacuuming\n> *all* databases often.\n\nSomething seems wrong... I just ran your script against my \ndevelopment database server which is vacuumed daily and it said I was \n53% of the way to 2B. Seemed strange to me, so I re-ran \"vacuum -a - \nz\" to vacuum all databases (as superuser), reran the script and got \nthe same answer.", "msg_date": "Tue, 2 May 2006 14:50:04 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "Vivek Khera wrote:\n> \n> On May 2, 2006, at 2:26 PM, Tony Wasson wrote:\n> \n> >The script detects a wrap at 2 billion. It starts warning once one or\n> >more databases show an age over 1 billion transactions. It reports\n> >critical at 1.5B transactions. I hope everyone out there is vacuuming\n> >*all* databases often.\n> \n> Something seems wrong... I just ran your script against my \n> development database server which is vacuumed daily and it said I was \n> 53% of the way to 2B. Seemed strange to me, so I re-ran \"vacuum -a - \n> z\" to vacuum all databases (as superuser), reran the script and got \n> the same answer.\n\nThat's right, because a database's age is only decremented in\ndatabase-wide vacuums. (Wow, who wouldn't want a person-wide vacuum if\nit did the same thing ...)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 2 May 2006 15:03:40 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On 5/2/06, Vivek Khera <[email protected]> wrote:\n>\n> On May 2, 2006, at 2:26 PM, Tony Wasson wrote:\n>\n> > The script detects a wrap at 2 billion. It starts warning once one or\n> > more databases show an age over 1 billion transactions. It reports\n> > critical at 1.5B transactions. I hope everyone out there is vacuuming\n> > *all* databases often.\n>\n> Something seems wrong... I just ran your script against my\n> development database server which is vacuumed daily and it said I was\n> 53% of the way to 2B. 
Seemed strange to me, so I re-ran \"vacuum -a -\n> z\" to vacuum all databases (as superuser), reran the script and got\n> the same answer.\n\nAh thanks, it's a bug in my understanding of the thresholds.\n\n\"With the standard freezing policy, the age column will start at one\nbillion for a freshly-vacuumed database.\"\n\nSo essentially, 1B is normal, 2B is the max. The logic is now..\n\nThe script detects a wrap at 2 billion. It starts warning once one or\nmore databases show an age over 1.5 billion transactions. It reports\ncritical at 1.75B transactions.\n\nIf anyone else understands differently, hit me with a clue bat.", "msg_date": "Tue, 2 May 2006 12:06:30 -0700", "msg_from": "\"Tony Wasson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "Alvaro Herrera wrote:\n> Vivek Khera wrote:\n> > \n> > On May 2, 2006, at 2:26 PM, Tony Wasson wrote:\n> > \n> > >The script detects a wrap at 2 billion. It starts warning once one or\n> > >more databases show an age over 1 billion transactions. It reports\n> > >critical at 1.5B transactions. I hope everyone out there is vacuuming\n> > >*all* databases often.\n> > \n> > Something seems wrong... I just ran your script against my \n> > development database server which is vacuumed daily and it said I was \n> > 53% of the way to 2B. Seemed strange to me, so I re-ran \"vacuum -a - \n> > z\" to vacuum all databases (as superuser), reran the script and got \n> > the same answer.\n> \n> That's right, because a database's age is only decremented in\n> database-wide vacuums.\n\nForget it ... I must be blind ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 2 May 2006 15:07:41 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On May 2, 2006, at 3:03 PM, Alvaro Herrera wrote:\n\n>> Something seems wrong... I just ran your script against my\n>> development database server which is vacuumed daily and it said I was\n>> 53% of the way to 2B. Seemed strange to me, so I re-ran \"vacuum -a -\n>> z\" to vacuum all databases (as superuser), reran the script and got\n>> the same answer.\n>\n> That's right, because a database's age is only decremented in\n> database-wide vacuums. (Wow, who wouldn't want a person-wide \n> vacuum if\n> it did the same thing ...)\n\nand what exactly is \"vacuumdb -a -z\" doing besides a database wide \nvacuum?", "msg_date": "Tue, 2 May 2006 15:12:37 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On Tue, May 02, 2006 at 12:06:30 -0700,\n Tony Wasson <[email protected]> wrote:\n> \n> Ah thanks, it's a bug in my understanding of the thresholds.\n> \n> \"With the standard freezing policy, the age column will start at one\n> billion for a freshly-vacuumed database.\"\n> \n> So essentially, 1B is normal, 2B is the max. The logic is now..\n> \n> The script detects a wrap at 2 billion. It starts warning once one or\n> more databases show an age over 1.5 billion transactions. It reports\n> critical at 1.75B transactions.\n> \n> If anyone else understands differently, hit me with a clue bat.\n\nIsn't this obsolete now anyway? 
I am pretty sure 8.1 has safeguards against\nwrap around.\n", "msg_date": "Tue, 2 May 2006 15:19:29 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On 5/2/06, Bruno Wolff III <[email protected]> wrote:\n> On Tue, May 02, 2006 at 12:06:30 -0700,\n> Tony Wasson <[email protected]> wrote:\n> >\n> > Ah thanks, it's a bug in my understanding of the thresholds.\n> >\n> > \"With the standard freezing policy, the age column will start at one\n> > billion for a freshly-vacuumed database.\"\n> >\n> > So essentially, 1B is normal, 2B is the max. The logic is now..\n> >\n> > The script detects a wrap at 2 billion. It starts warning once one or\n> > more databases show an age over 1.5 billion transactions. It reports\n> > critical at 1.75B transactions.\n> >\n> > If anyone else understands differently, hit me with a clue bat.\n>\n> Isn't this obsolete now anyway? I am pretty sure 8.1 has safeguards against\n> wrap around.\n\nMy motivation was primarily to monitor some existing PostgreSQL 8.0\nservers. I'm not convinced it is \"safe\" to stop worrying about\ntransaction ids even on an 8.1 box.\n\nIt is comforting that 8.1 does safeguard against wraparound in at\nleast 2 ways. First, it emits a warnings during the last 10 million\ntransactions. If you manage to ignore all those, posgresql will shut\ndown before a wraparound. I think PostgreSQL does everything correctly\nthere, but I suspect someone will run into the shut down daemon\nproblem.\n", "msg_date": "Tue, 2 May 2006 13:50:43 -0700", "msg_from": "\"Tony Wasson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On Tue, May 02, 2006 at 03:03:40PM -0400, Alvaro Herrera wrote:\n> That's right, because a database's age is only decremented in\n> database-wide vacuums. (Wow, who wouldn't want a person-wide vacuum if\n> it did the same thing ...)\n\nThe heck with age, I'd take a person-wide vacuum if it just got rid of\nall my 'dead rows'...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 16:42:36 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" }, { "msg_contents": "On Tue, May 02, 2006 at 12:06:30PM -0700, Tony Wasson wrote:\n> Ah thanks, it's a bug in my understanding of the thresholds.\n> \n> \"With the standard freezing policy, the age column will start at one\n> billion for a freshly-vacuumed database.\"\n> \n> So essentially, 1B is normal, 2B is the max. The logic is now..\n> \n> The script detects a wrap at 2 billion. It starts warning once one or\n> more databases show an age over 1.5 billion transactions. It reports\n> critical at 1.75B transactions.\n> \n> If anyone else understands differently, hit me with a clue bat.\n\nYou should take a look at the code in -HEAD that triggers autovacuum to\ndo a XID-wrap-prevention vacuum, as well as the code that warns that\nwe're approaching wrap. 
From memory, the limit for the latter is\n\nmax_transactions << 3\n\nWhere max_transactions should be 4B on most platforms.\n\nI'm intending to submit a patch to clean some of that code up (put all\nthe thresholds in one .h file rather than how they're spread through\nsource code right now); if you drop me an email off-list I'll send you\ninfo once I do that.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 16:46:37 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql transaction id monitoring with nagios" } ]
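The check the attached script performs can be reduced to a single query; a sketch with the corrected thresholds from this thread (warn at 1.5B, critical at 1.75B against a 2B wrap point). The exact cutoffs are assumptions taken from the discussion above, and 8.1's built-in safeguards complement rather than replace this kind of monitoring.

    SELECT datname,
           age(datfrozenxid) AS xid_age,
           CASE
             WHEN age(datfrozenxid) > 1750000000 THEN 'CRITICAL'
             WHEN age(datfrozenxid) > 1500000000 THEN 'WARNING'
             ELSE 'OK'
           END AS status
    FROM pg_database
    ORDER BY xid_age DESC;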
[ { "msg_contents": ">If you don't have anything in the parent table br_1min, then deleting\n>the (presumably obsolete) pg_statistic rows for it should fix your\n>immediate problem. Otherwise, consider applying the attached.\n\nTom, thanks alot for your reply. A few follow-up questions, and one potential \"bug\"?\n\nI've been experimenting with deleting the rows from pg_statistics. FYI, there were statistics for all master tables prior to us partioning the data. We then manually inserted the rows into each inherited partition and, when done - did a truncate of the master table.\n\nSo, here's what I'm finding. \n\n1) When I delete the rows from pg_statistics, the new plan is, indeed, a hash join.\n\nexplain analyze\nSELECT *\nFROM br_1min br1 JOIN br_mods mod on br1.modules_id = mod.id \nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\nHash Join (cost=763.35..807.35 rows=1 width=109) (actual time=3.631..36.181 rows=45 loops=1)\n Hash Cond: (\"outer\".modules_id = \"inner\".id)\n -> Append (cost=1.04..40.64 rows=877 width=32) (actual time=0.198..34.872 rows=910 loops=1)\n -> Bitmap Heap Scan on br_1min bfs1 (cost=1.04..8.70 rows=6 width=32) (actual time=0.060..0.060 rows=0 loops=1)\n Recheck Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Bitmap Index Scan on br_1min_end_idx (cost=0.00..1.04 rows=6 width=0) (actual time=0.054..0.054 rows=0 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Index Scan using br_1min_20557_end_idx on br_1min_20557 bfs1 (cost=0.00..25.91 rows=869 width=32) (actual time=0.136..1.858 rows=910 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Index Scan using br_1min_20570_end_idx on br_1min_20570 bfs1 (cost=0.00..3.02 rows=1 width=32) (actual time=0.092..0.092 rows=0 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Index Scan using br_1min_20583_end_idx on br_1min_20583 bfs1 (cost=0.00..3.02 rows=1 width=32) (actual time=32.034..32.034 rows=0 loops=1)\n Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n -> Hash (cost=761.61..761.61 rows=281 width=77) (actual time=0.487..0.487 rows=45 loops=1)\n -> Bitmap Heap Scan on br_mods mod (cost=20.98..761.61 rows=281 width=77) (actual time=0.264..0.435 rows=45 loops=1)\n Recheck Cond: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id = 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n -> BitmapOr (cost=20.98..20.98 rows=281 width=0) (actual time=0.223..0.223 rows=0 loops=1)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.091..0.091 rows=14 loops=1)\n Index Cond: (downloads_id = 153226)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.037..0.037 rows=2 loops=1)\n Index Cond: (downloads_id = 
153714)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.010..0.010 rows=2 loops=1)\n Index Cond: (downloads_id = 153730)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153728)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (downloads_id = 153727)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (downloads_id = 153724)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (downloads_id = 153713)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (downloads_id = 153725)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.031..0.031 rows=16 loops=1)\n Index Cond: (downloads_id = 153739)\n -> Bitmap Index Scan on br_mods_downloads_id_idx (cost=0.00..2.10 rows=28 width=0) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (downloads_id = 153722)\n Total runtime: 36.605 ms\n(38 rows)\n\n\nNote: there are 2 new partitions that our cron jobs automatically created yesterday that are being scanned, but they do not return any rows.\n\n2) When I re-analyze the br_1min table, new rows do not appear in pg_statistics for that table.\n\nNow, my questions:\n\n1) If there are no statistics for the master table, does postgres use the statistics for any of the partitions, or does it create a plan without any statistics related to the partitioned tables (e.g. some default plan.)?\n\n2) I'm curious where it got an estimate of 6 rows for br_1min in \"Bitmap Heap Scan on br_1min bfs1 (cost=1.04..8.70 rows=6 width=32)\" Any insight?\n\n3) Basically, I'm wondering if this strategy of deleting the rows in pg_statistics for the master tables will work in all conditions, or if it runs the risk of again using faulty statistics and choosing a bad plan. Would I be better off setting enable_mergejoin = f in the session right before I issue this query and then resetting it after? What are the risks of that approach?\n\n\nNow, the potentital bug:\n\nIt appears that after you truncate a table, the statistics for that table still remain in pg_statistics. And, as long as there are no rows added back to that table, the same statistics remain for that table, after an ANALYZE, - and are used by queries. Once, you re-insert any rows in the table, however, new statistics will be computed. So, the bug appears to be that after a truncate, if there are no rows in a table, the old, out-dated statistics do not get overwritten. To follow are some simple tests I did to illustrate that. Maybe this is by design, or, should I post this on pg-hackers? It might be that in my case, it's better that new statitics ARE NOT inserted into pg_statistics for empty tables, but maybe the fix could be to delete the old statistics for analyzes to an empty table.\n\nThanks again Tom for your feedback,\n\n- Mark\n\n\nprdb=# create table mark_temp (col1 int, col2 int);\nCREATE TABLE\nprdb=# create index mark_temp_idx on mark_temp(col1);\nCREATE INDEX\n\n... 
I then inserted several thousand rows ....\n\nprdb=# analyze mark_temp;\nANALYZE\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n 1 | 9671\n 2 | 1\n(2 rows)\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Index Scan using mark_temp_idx on mark_temp (cost=0.00..51.35 rows=27 width=8) (actual time=0.013..0.015 rows=1 loops=1)\n Index Cond: (col1 = 1045)\n Total runtime: 0.048 ms\n(3 rows)\n\nprdb=# truncate table mark_temp;\nTRUNCATE TABLE\nprdb=# analyze mark_temp;\nANALYZE\n\nNOTE: STATISTICS ARE THE SAME AND IT'S STILL DOING AN INDEX SCAN INSTEAD OF A SEQ SCAN\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using mark_temp_idx on mark_temp (cost=0.00..3.14 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: (col1 = 1045)\n Total runtime: 0.031 ms\n(3 rows)\n\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n 1 | 9671\n 2 | 1\n(2 rows)\n\nprdb=# insert into mark_temp (col1,col2) values (1,100);\nINSERT 0 1\nprdb=# analyze mark_temp;\n\nNOTE: AFTER INSERT, THERE ARE NEW STATISTICS AND IT'S DOING A SEQ SCAN NOW\n\nANALYZE\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n 1 | -1\n 2 | -1\n(2 rows)\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on mark_temp (cost=0.00..1.01 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=1)\n Filter: (col1 = 1045)\n Total runtime: 0.029 ms\n(3 rows)\n\n\n\n\n\n\n\n\n\n\n\n\nRE: [PERFORM] Why is plan (and performance) different on partitioned table? \n\n\n\n>If you don't have anything in the parent table br_1min, then deleting\n>the (presumably obsolete) pg_statistic rows for it should fix your\n>immediate problem.  Otherwise, consider applying the attached.\n\nTom, thanks alot for your reply.  A few follow-up questions, and one potential \"bug\"?\n\nI've been experimenting with deleting the rows from pg_statistics.  FYI, there were statistics for all master tables prior to us partioning the data.  We then manually inserted the rows into each inherited partition and, when done - did a truncate of the master table.\n\nSo, here's what I'm finding. 
\n\n1) When I delete the rows from pg_statistics, the new plan is, indeed, a hash join.\n\nexplain analyze\nSELECT *\nFROM br_1min br1 JOIN br_mods mod on br1.modules_id = mod.id\nWHERE ((end_time >= '2006-05-01 17:12:18-07' AND end_time < '2006-05-01 17:13:18-07'))\n  AND mod.downloads_id IN (153226,153714,153730,153728,153727,153724,153713,153725,153739,153722) ;\n\nHash Join  (cost=763.35..807.35 rows=1 width=109) (actual time=3.631..36.181 rows=45 loops=1)\n   Hash Cond: (\"outer\".modules_id = \"inner\".id)\n   ->  Append  (cost=1.04..40.64 rows=877 width=32) (actual time=0.198..34.872 rows=910 loops=1)\n         ->  Bitmap Heap Scan on br_1min bfs1  (cost=1.04..8.70 rows=6 width=32) (actual time=0.060..0.060 rows=0 loops=1)\n               Recheck Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n               ->  Bitmap Index Scan on br_1min_end_idx  (cost=0.00..1.04 rows=6 width=0) (actual time=0.054..0.054 rows=0 loops=1)\n                     Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n         ->  Index Scan using br_1min_20557_end_idx on br_1min_20557 bfs1  (cost=0.00..25.91 rows=869 width=32) (actual time=0.136..1.858 rows=910 loops=1)\n               Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n         ->  Index Scan using br_1min_20570_end_idx on br_1min_20570 bfs1  (cost=0.00..3.02 rows=1 width=32) (actual time=0.092..0.092 rows=0 loops=1)\n               Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n         ->  Index Scan using br_1min_20583_end_idx on br_1min_20583 bfs1  (cost=0.00..3.02 rows=1 width=32) (actual time=32.034..32.034 rows=0 loops=1)\n               Index Cond: ((end_time >= '2006-05-01 17:12:18-07'::timestamp with time zone) AND (end_time < '2006-05-01 17:13:18-07'::timestamp with time zone))\n   ->  Hash  (cost=761.61..761.61 rows=281 width=77) (actual time=0.487..0.487 rows=45 loops=1)\n         ->  Bitmap Heap Scan on br_mods mod  (cost=20.98..761.61 rows=281 width=77) (actual time=0.264..0.435 rows=45 loops=1)\n               Recheck Cond: ((downloads_id = 153226) OR (downloads_id = 153714) OR (downloads_id = 153730) OR (downloads_id = 153728) OR (downloads_id = 153727) OR (downloads_id = 153724) OR (downloads_id = 153713) OR (downloads_id = 153725) OR (downloads_id = 153739) OR (downloads_id = 153722))\n               ->  BitmapOr  (cost=20.98..20.98 rows=281 width=0) (actual time=0.223..0.223 rows=0 loops=1)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.091..0.091 rows=14 loops=1)\n                           Index Cond: (downloads_id = 153226)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.037..0.037 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153714)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.010..0.010 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153730)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual 
time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153728)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.008..0.008 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153727)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153724)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153713)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n                           Index Cond: (downloads_id = 153725)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.031..0.031 rows=16 loops=1)\n                           Index Cond: (downloads_id = 153739)\n                     ->  Bitmap Index Scan on br_mods_downloads_id_idx  (cost=0.00..2.10 rows=28 width=0) (actual time=0.009..0.009 rows=1 loops=1)\n                           Index Cond: (downloads_id = 153722)\n Total runtime: 36.605 ms\n(38 rows)\n\n\nNote:  there are 2 new partitions that our cron jobs automatically created yesterday that are being scanned, but they do not return any rows.\n\n2) When I re-analyze the br_1min table, new rows do not appear in pg_statistics for that table.\n\nNow, my questions:\n\n1) If there are no statistics for the master table, does postgres use the statistics for any of the partitions, or does it create a plan without any statistics related to the partitioned tables (e.g. some default plan.)?\n\n2) I'm curious where it got an estimate of 6 rows for br_1min in \"Bitmap Heap Scan on br_1min bfs1  (cost=1.04..8.70 rows=6 width=32)\"  Any insight?\n\n3) Basically, I'm wondering if this strategy of deleting the rows in pg_statistics for the master tables will work in all conditions, or if it runs the risk of again using faulty statistics and choosing a bad plan.  Would I be better off setting enable_mergejoin = f in the session right before I  issue this query and then resetting it after?  What are the risks of that approach?\n\n\nNow, the potentital bug:\n\nIt appears that after you truncate a table, the statistics for that table still remain in pg_statistics.  And, as long as there are no rows added back to that table, the same statistics remain for that table, after an ANALYZE, - and are used by queries.  Once, you re-insert any rows in the table, however, new statistics will be computed.  So, the bug appears to be that after a truncate, if there are no rows in a table, the old, out-dated statistics do not get overwritten.  To follow are some simple tests I did to illustrate that.  Maybe this is by design, or, should I post this on pg-hackers?  It might be that in my case, it's better that new statitics ARE NOT inserted into pg_statistics for empty tables, but maybe the fix could be to delete the old statistics for analyzes to an empty table.\n\nThanks again Tom for your feedback,\n\n- Mark\n\n\nprdb=# create table mark_temp (col1 int, col2 int);\nCREATE TABLE\nprdb=# create index mark_temp_idx on mark_temp(col1);\nCREATE INDEX\n\n... 
I then inserted several thousand rows ....\n\nprdb=# analyze mark_temp;\nANALYZE\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n         1 |        9671\n         2 |           1\n(2 rows)\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n                                                        QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Index Scan using mark_temp_idx on mark_temp  (cost=0.00..51.35 rows=27 width=8) (actual time=0.013..0.015 rows=1 loops=1)\n   Index Cond: (col1 = 1045)\n Total runtime: 0.048 ms\n(3 rows)\n\nprdb=# truncate table mark_temp;\nTRUNCATE TABLE\nprdb=# analyze mark_temp;\nANALYZE\n\nNOTE:  STATISTICS ARE THE SAME AND IT'S STILL DOING AN INDEX SCAN INSTEAD OF A SEQ SCAN\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n                                                       QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using mark_temp_idx on mark_temp  (cost=0.00..3.14 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)\n   Index Cond: (col1 = 1045)\n Total runtime: 0.031 ms\n(3 rows)\n\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n         1 |        9671\n         2 |           1\n(2 rows)\n\nprdb=# insert into mark_temp (col1,col2) values (1,100);\nINSERT 0 1\nprdb=# analyze mark_temp;\n\nNOTE: AFTER INSERT, THERE ARE NEW STATISTICS AND IT'S DOING A SEQ SCAN NOW\n\nANALYZE\nprdb=# select staattnum,stadistinct from pg_statistic where starelid = (select oid from pg_class where relname = 'mark_temp');\n staattnum | stadistinct\n-----------+-------------\n         1 |          -1\n         2 |          -1\n(2 rows)\n\nprdb=# explain analyze select * from mark_temp where col1 = 1045;\n                                            QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on mark_temp  (cost=0.00..1.01 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=1)\n   Filter: (col1 = 1045)\n Total runtime: 0.029 ms\n(3 rows)", "msg_date": "Tue, 2 May 2006 12:28:18 -0700", "msg_from": "\"Mark Liberman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is plan (and performance) different on partitioned table? " }, { "msg_contents": "\"Mark Liberman\" <[email protected]> writes:\n> Now, the potentital bug:\n> It appears that after you truncate a table, the statistics for that =\n> table still remain in pg_statistics.\n\nThat's intentional, on the theory that when the table is re-populated\nthe new contents will probably resemble the old.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2006 16:27:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is plan (and performance) different on partitioned table? " } ]
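A quick way to see which statistics the 8.1 planner is falling back on for the parent versus a partition is the pg_stats view; a sketch using the table names from this thread:

    SELECT tablename, attname, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename IN ('br_1min', 'br_1min_20557')
    ORDER BY tablename, attname;
    -- Rows reported for the empty parent br_1min are the stale entries
    -- discussed above; once they are deleted, only the partitions should
    -- contribute statistics.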
[ { "msg_contents": "Thanks.\n\nMy first check was of course a grep/search of the postgres.conf, next it was\na complete source grep for vacuum_cost_delay.\n\nI've come to the conclusion I need to simply start tracking all transactions\nand determining a cost/performance for the larger and frequently updated\ntables without the benefit and penalty of pg_statio.\n\n- Chris\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, May 02, 2006 5:39 PM\nTo: Chris Mckenzie\nCc: '[email protected]'\nSubject: Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nshow all and grep are your friend. From my laptop with 8.1:\n\[email protected][16:36]~:4%psql -tc 'show all' | grep\nvacuum_cost_delay|tr -s ' ' autovacuum_vacuum_cost_delay | -1 | Vacuum cost\ndelay in milliseconds, for autovacuum. vacuum_cost_delay | 0 | Vacuum cost\ndelay in milliseconds. [email protected][16:37]~:5%\n\nI don't have a 7.4 copy around, but you can just check it yourself.\n\nOn Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:\n> Hi everyone.\n> \n> I've got a quick and stupid question: Does Postgres 7.4 (7.x) support \n> vacuum_cost_delay?\n> \n> For all my googles and documentation reading I've determined it's not \n> supported, only because I can't find a 7.x doc or forum post claiming \n> otherwise.\n> \n> Upgrading to 8.x is out of the question, but I still need to employ \n> something to auto-vacuum a large and active database (possibly more \n> than once a day) in a manner that wouldn't affect load at the wrong \n> time.\n> \n> If I could combine pg_autovacuum with vacuum_cost_delay I could \n> potentially have a solution. (barring some performance testing)\n> \n> The only problem with pg_autovacuum is the need for pg_statio, which \n> itself will reduce performance at all times.\n> \n> Any suggestions?\n> \n> Thanks!\n> \n> - Chris\n> \n> \n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nThanks.\n\nMy first check was of course a grep/search of the postgres.conf, next it was a complete source grep for vacuum_cost_delay.\nI've come to the conclusion I need to simply start tracking all transactions and determining a cost/performance for the larger and frequently updated tables without the benefit and penalty of pg_statio.\n- Chris\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, May 02, 2006 5:39 PM\nTo: Chris Mckenzie\nCc: '[email protected]'\nSubject: Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nshow all and grep are your friend. From my laptop with 8.1:\n\[email protected][16:36]~:4%psql -tc 'show all' | grep vacuum_cost_delay|tr -s ' ' autovacuum_vacuum_cost_delay | -1 | Vacuum cost delay in milliseconds, for autovacuum. vacuum_cost_delay | 0 | Vacuum cost delay in milliseconds. 
[email protected][16:37]~:5%\nI don't have a 7.4 copy around, but you can just check it yourself.\n\nOn Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:\n> Hi everyone.\n> \n> I've got a quick and stupid question: Does Postgres 7.4 (7.x) support \n> vacuum_cost_delay?\n> \n> For all my googles and documentation reading I've determined it's not \n> supported, only because I can't find a 7.x doc or forum post claiming \n> otherwise.\n> \n> Upgrading to 8.x is out of the question, but I still need to employ \n> something to auto-vacuum a large and active database (possibly more \n> than once a day) in a manner that wouldn't affect load at the wrong \n> time.\n> \n> If I could combine pg_autovacuum with vacuum_cost_delay I could \n> potentially have a solution. (barring some performance testing)\n> \n> The only problem with pg_autovacuum is the need for pg_statio, which \n> itself will reduce performance at all times.\n> \n> Any suggestions?\n> \n> Thanks!\n> \n> - Chris\n> \n> \n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant      [email protected]\nPervasive Software      http://pervasive.com    work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461", "msg_date": "Tue, 2 May 2006 17:47:15 -0400 ", "msg_from": "Chris Mckenzie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." }, { "msg_contents": "On Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:\n> Thanks.\n> \n> My first check was of course a grep/search of the postgres.conf, next it was\n> a complete source grep for vacuum_cost_delay.\n\nIt's there in head...\[email protected][17:52]~/pgsql/HEAD/src:4%grep -ri vacuum_cost_delay *|wc -l\n 8\n\n> I've come to the conclusion I need to simply start tracking all transactions\n> and determining a cost/performance for the larger and frequently updated\n> tables without the benefit and penalty of pg_statio.\n\nHuh? pg_statio shouldn't present a large penalty AFAIK...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 2 May 2006 17:54:38 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." }, { "msg_contents": "On Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:\n> I've come to the conclusion I need to simply start tracking all transactions\n> and determining a cost/performance for the larger and frequently updated\n> tables without the benefit and penalty of pg_statio.\n\nI'll bet it won't help you. If you can't get off 7.4 on a busy\nmachine, you're going to get hosed by I/O sometimes no matter what. \nMy suggestion is to write a bunch of rule-of-thumb rules for your\ncron jobs, and start planning your upgrade.\n\nJan back-patched the vacuum stuff to 7.4 for us (Afilias), and we\ntried playing with it; but it didn't really make the difference we'd\nhoped.\n\nThe reason for this is that 7.4 also doesn't have the bg_writer. So\nyou're still faced with I/O storms, no matter what you do. If I were\nin your shoes, I wouldn't waste a lot of time on trying to emulate\nthe new features in 7.4.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. 
\n --Brad Holland\n", "msg_date": "Thu, 4 May 2006 08:27:47 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." } ]
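In the absence of vacuum_cost_delay and the background writer on 7.4, the rule-of-thumb approach suggested above usually ends up as a cron-driven psql script run at quiet hours rather than pg_autovacuum; a rough sketch, where the file and table names are placeholders and not from this thread:

    -- e.g. run nightly from cron:  psql -d proddb -f nightly_vacuum.sql
    VACUUM ANALYZE orders;          -- large, frequently updated tables first
    VACUUM ANALYZE order_items;
    -- and, less often, a database-wide pass so datfrozenxid keeps advancing:
    VACUUM ANALYZE;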
[ { "msg_contents": "We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are\nhaving performance problems and running for very long times. The\ncommonality seems to be PostgreSQL 8.1 is choosing to use a nested\nloop join because it estimates there will be only be a single row.\nThere are really thousands of rows and the nested loop version takes\nmuch longer.\n\nEven with the bad plan, the test query runs quickly. The real query\nis much more complicated and we have had to kill it after running for\n24 hours.\n\nSELECT\n MAX(titles.name) AS title_name,\n MAX(providers.short_name) AS provider_short_name,\n SUM(x.xtns) AS xtns,\n SUM(x.rev) AS rev\nFROM xtns_by_mso_title_wk x\nINNER JOIN providers providers ON x.provider_no = providers.provider_no\nINNER JOIN titles titles ON x.title_no = titles.title_no\nWHERE x.mso_no = 50\n AND x.week BETWEEN '20060423 00:00:00' AND '20060423 00:00:00'\nGROUP BY x.title_no, x.provider_no\n\nThe EXPLAIN ANALYZE looks like:\n\n GroupAggregate (cost=11.63..11.67 rows=1 width=61) (actual\ntime=1440.550..1467.602 rows=3459 loops=1)\n -> Sort (cost=11.63..11.64 rows=1 width=61) (actual\ntime=1440.515..1446.634 rows=3934 loops=1)\n Sort Key: x.title_no, x.provider_no\n -> Nested Loop (cost=0.00..11.62 rows=1 width=61) (actual\ntime=7.900..1422.686 rows=3934 loops=1)\n -> Nested Loop (cost=0.00..7.38 rows=1 width=49)\n(actual time=7.877..1373.392 rows=3934 loops=1)\n -> Index Scan using unq_xtns_by_mso_title_wk on\nxtns_by_mso_title_wk x (cost=0.00..4.12 rows=1 width=26) (actual\ntime=7.827..1297.681 rows=3934 loops=1)\n Index Cond: ((week >= '2006-04-23\n00:00:00'::timestamp without time zone) AND (week <= '2006-04-23\n00:00:00'::timestamp without time zone) AND (mso_no = 50))\n -> Index Scan using pk_titles on titles \n(cost=0.00..3.25 rows=1 width=27) (actual time=0.010..0.012 rows=1\nloops=3934)\n Index Cond: (\"outer\".title_no = titles.title_no)\n -> Index Scan using pk_providers on providers \n(cost=0.00..4.23 rows=1 width=16) (actual time=0.004..0.005 rows=1\nloops=3934)\n Index Cond: (\"outer\".provider_no = providers.provider_no)\n\nIf it is searching over multiple weeks (week BETWEEN '20060417\n00:00:00' AND '20060423 00:00:00'), it estimates better and uses a\nhash join.\n\n GroupAggregate (cost=7848.20..7878.48 rows=156 width=61) (actual\ntime=117.761..145.910 rows=3459 loops=1)\n -> Sort (cost=7848.20..7852.08 rows=1552 width=61) (actual\ntime=117.735..123.823 rows=3934 loops=1)\n Sort Key: x.title_no, x.provider_no\n -> Hash Join (cost=5.95..7765.94 rows=1552 width=61)\n(actual time=6.539..102.825 rows=3934 loops=1)\n Hash Cond: (\"outer\".provider_no = \"inner\".provider_no)\n -> Nested Loop (cost=0.00..7736.71 rows=1552\nwidth=49) (actual time=5.117..86.980 rows=3934 loops=1) \n -> Index Scan using idx_xtns_by_mso_ti_wk_wk_mso_t on\nxtns_by_mso_title_wk x (cost=0.00..2677.04 rows=1552 width=26)\n(actual time=5.085..18.065 rows=3934 loops=1)\n Index Cond: ((week >= '2006-04-17\n00:00:00'::timestamp without time zone) AND (week <= '2006-04-23\n00:00:00'::timestamp without time zone) AND (mso_no = 50))\n -> Index Scan using pk_titles on titles \n(cost=0.00..3.25 rows=1 width=27) (actual time=0.006..0.010 rows=1\nloops=3934)\n Index Cond: (\"outer\".title_no = titles.title_no)\n -> Hash (cost=5.16..5.16 rows=316 width=16) (actual\ntime=1.356..1.356 rows=325 loops=1)\n -> Seq Scan on providers (cost=0.00..5.16\nrows=316 width=16) (actual time=0.008..0.691 rows=325 loops=1)\n\nIf the week range is replace by an equals (week = 
'20060423\n00:00:00'), it also uses a hash join. Unforuntately, the queries are\nautomatically generated and changing them to use an equals could be\nproblematic.\n\n GroupAggregate (cost=7828.75..7859.32 rows=157 width=61) (actual\ntime=98.330..125.370 rows=3459 loops=1)\n -> Sort (cost=7828.75..7832.67 rows=1567 width=61) (actual\ntime=98.303..104.055 rows=3934 loops=1)\n Sort Key: x.title_no, x.provider_no\n -> Hash Join (cost=5.95..7745.60 rows=1567 width=61)\n(actual time=1.785..83.830 rows=3934 loops=1)\n Hash Cond: (\"outer\".provider_no = \"inner\".provider_no)\n -> Nested Loop (cost=0.00..7716.14 rows=1567\nwidth=49) (actual time=0.170..68.338 rows=3934 loops=1) \n -> Index Scan using idx_xtns_by_mso_ti_wk_wk_mso_t on\nxtns_by_mso_title_wk x (cost=0.00..2607.56 rows=1567 width=26)\n(actual time=0.138..11.993 rows=3934 loops=1)\n Index Cond: ((week = '2006-04-23\n00:00:00'::timestamp without time zone) AND (mso_no = 50))\n -> Index Scan using pk_titles on titles \n(cost=0.00..3.25 rows=1 width=27) (actual time=0.006..0.008 rows=1\nloops=3934)\n Index Cond: (\"outer\".title_no = titles.title_no)\n -> Hash (cost=5.16..5.16 rows=316 width=16) (actual\ntime=1.565..1.565 rows=325 loops=1)\n -> Seq Scan on providers (cost=0.00..5.16\nrows=316 width=16) (actual time=0.008..0.677 rows=325 loops=1)\n\nDoes anyone have some suggestions to try? The most worrying thing is\nthat when the statistics are off, it can do a pathological query.\n\n - Ian\n", "msg_date": "Tue, 2 May 2006 15:55:57 -0700", "msg_from": "\"Ian Burrell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop join and date range query" }, { "msg_contents": "\"Ian Burrell\" <[email protected]> writes:\n> We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are\n> having performance problems and running for very long times. The\n> commonality seems to be PostgreSQL 8.1 is choosing to use a nested\n> loop join because it estimates there will be only be a single row.\n\n> -> Index Scan using unq_xtns_by_mso_title_wk on\n> xtns_by_mso_title_wk x (cost=0.00..4.12 rows=1 width=26) (actual\n> time=7.827..1297.681 rows=3934 loops=1)\n> Index Cond: ((week >= '2006-04-23\n> 00:00:00'::timestamp without time zone) AND (week <= '2006-04-23\n> 00:00:00'::timestamp without time zone) AND (mso_no = 50))\n\nWe've already noted that there's a problem with estimating zero-width\nranges (too lazy to search the archives, but this has come up at least\ntwice recently). Can you modify your app to generate something like\n\n\tweek >= x and week < x+1\n\ninstead of\n\n\tweek >= x and week <= x\n\n? My recollection is that the fix will probably be complicated\nenough to not get back-patched into 8.1.\n\nBTW, AFAIK the same problem exists in 7.4. What kind of estimates/plans\nwere you getting for this case in 7.4?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2006 23:03:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop join and date range query " }, { "msg_contents": "On 5/2/06, Tom Lane <[email protected]> wrote:\n> \"Ian Burrell\" <[email protected]> writes:\n> > We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are\n> > having performance problems and running for very long times. 
The\n> > commonality seems to be PostgreSQL 8.1 is choosing to use a nested\n> > loop join because it estimates there will be only be a single row.\n>\n> We've already noted that there's a problem with estimating zero-width\n> ranges (too lazy to search the archives, but this has come up at least\n> twice recently). Can you modify your app to generate something like\n>\n> week >= x and week < x+1\n>\n> instead of\n>\n> week >= x and week <= x\n>\n\nI am working on modifying the SQL generation code to replace the\nzero-width range with an equals.\n\nDoes BETWEEN have the same bug?\n\n> ? My recollection is that the fix will probably be complicated\n> enough to not get back-patched into 8.1.\n>\n> BTW, AFAIK the same problem exists in 7.4. What kind of estimates/plans\n> were you getting for this case in 7.4?\n>\n\nWe get similar rows=1 estimates on 7.4. 7.4 doesn't choose to use the\nnested loop joins so it performs fine.\n\nWe have been getting similar rows=1 estimates and nested loop joins\nwith some other queries. But I think those are caused by not\nfrequently analyzing log type tables and then searching for recent\ndays which it doesn't think exist.\n\n - Ian\n", "msg_date": "Wed, 3 May 2006 10:54:04 -0700", "msg_from": "\"Ian Burrell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested loop join and date range query" } ]
[ { "msg_contents": "My database is used primarily in an OLAP-type environment. Sometimes my \nusers get a little carried away and find some way to slip past the \nsanity filters in the applications and end up bogging down the server \nwith queries that run for hours and hours. And, of course, what users \ntend to do is to keep queuing up more queries when they don't see the \nfirst one return instantly :)\n\nSo, I have been searching for a way to kill an individual query. I read \nin the mailing list archives that you could 'kill' the pid. I've tried \nthis a few times and more than once, it has caused the postmaster to \ndie(!), terminating every query that was in process, even unrelated to \nthat query. \n\nIs there some way I can just kill a query and not risk breaking \neverything else when I do it?\n\nThanks\n\n", "msg_date": "Tue, 02 May 2006 17:19:52 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Killing long-running queries" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> So, I have been searching for a way to kill an individual query. I read \n> in the mailing list archives that you could 'kill' the pid. I've tried \n> this a few times and more than once, it has caused the postmaster to \n> die(!), terminating every query that was in process, even unrelated to \n> that query. \n\nYou should be using SIGINT, not SIGTERM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2006 19:30:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Killing long-running queries " }, { "msg_contents": "On 5/2/06, Dan Harris <[email protected]> wrote:\n> My database is used primarily in an OLAP-type environment. Sometimes my\n> users get a little carried away and find some way to slip past the\n> sanity filters in the applications and end up bogging down the server\n> with queries that run for hours and hours. And, of course, what users\n> tend to do is to keep queuing up more queries when they don't see the\n> first one return instantly :)\n>\n> So, I have been searching for a way to kill an individual query. I read\n> in the mailing list archives that you could 'kill' the pid. I've tried\n> this a few times and more than once, it has caused the postmaster to\n> die(!), terminating every query that was in process, even unrelated to\n> that query.\n>\n> Is there some way I can just kill a query and not risk breaking\n> everything else when I do it?\n>\n> Thanks\n>\n\nHi Dan,\n\nYou can kill a specific pid under 8.1 using SELECT\npg_cancel_backend(pid). You can kill a query from the command line by\ndoing $ kill -TERM pid or $kill -SIGINT pid.\n\nThere are several tips from this thread that may be useful about\nkilling long running SQL:\n http://archives.postgresql.org/pgsql-general/2006-02/msg00298.php\n\nIn short, the recommendations are:\n 1) Use statement_timeouts if at all possible. You can do this\ndatabase wide in postgresql.conf. You can also set this on a per user\nor per SQL statement basis.\n 2) Make step #1 does not kill autovacuum, or necessary automated\njobs. You can do this with \"ALTER USER SET statement_timeout = 0\".\n\nI'm using a web page to show SELECT * FROM pg_stat_activity output\nfrom several servers. 
This makes it easy to see the pids of any\nlong-running SQL.\n\nhttp://archives.postgresql.org/pgsql-general/2006-02/msg00427.php\n", "msg_date": "Tue, 2 May 2006 16:43:52 -0700", "msg_from": "\"Tony Wasson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Killing long-running queries" }, { "msg_contents": "Tom Lane wrote\n> You should be using SIGINT, not SIGTERM.\n>\n> \t\t\tregards, tom lane\n> \n\nThank you very much for clarifying this point! It works :)\n\n\n", "msg_date": "Tue, 02 May 2006 17:53:16 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Killing long-running queries" }, { "msg_contents": "Hi,\n\nOn Tue, 2006-05-02 at 17:19 -0600, Dan Harris wrote:\n> Is there some way I can just kill a query and not risk breaking \n> everything else when I do it?\n\nUse pg_stat_activity view to find the pid of the process (pidproc\ncolumn) and send the signal to that process. I think you are now killing\npostmaster, which is wrong.\n\nRegards,\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n\n", "msg_date": "Wed, 03 May 2006 03:01:16 +0300", "msg_from": "Devrim GUNDUZ <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Killing long-running queries" }, { "msg_contents": "There is also the statement_timeout setting in postgresql.conf, but \nyou have to be careful with this setting. I'm not sure about \npostgres 8.0 or 8.1, but in 7.4.5 this setting will terminate the \nCOPY statements used by pg_dumpall for backups. So I actually use \nthe pg_stat_activity table to kill long running queries or idle in \ntransactions that are hanging around (very bad for vacuum). For \nexample, you can do something like this to kill off idle in \ntransactions that are truly idle for more than 1 hour...\n\npsql -U postgres -A -t -c \"select procpid from pg_stat_activity where \ncurrent_query ilike '%idle in transaction%' and query_start < now() - \ninterval '1 hour'\" template1 | xargs kill\n\nJust throw that in your crontab to run every few minutes, redirect \nstandard error to /dev/null, and quit worrying about vacuum not \nreclaiming space because some developer's code fails to commit or \nrollback a transaction. Just be careful you aren't killing off \nprocesses that are actually doing work. :)\n\n-- Will Reese http://blog.rezra.com\n\nOn May 2, 2006, at 7:01 PM, Devrim GUNDUZ wrote:\n\n> Hi,\n>\n> On Tue, 2006-05-02 at 17:19 -0600, Dan Harris wrote:\n>> Is there some way I can just kill a query and not risk breaking\n>> everything else when I do it?\n>\n> Use pg_stat_activity view to find the pid of the process (pidproc\n> column) and send the signal to that process. I think you are now \n> killing\n> postmaster, which is wrong.\n>\n> Regards,\n> -- \n> The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Tue, 2 May 2006 21:09:14 -0500", "msg_from": "Will Reese <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Killing long-running queries" } ]
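A short sketch pulling the suggestions above together for 8.1; the pid, role name and timeout value are placeholders, and current_query is only populated when stats_command_string is on:

-- find the offending backend
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;

-- cancel just that query (equivalent to kill -SIGINT on the backend pid)
SELECT pg_cancel_backend(12345);

-- cap ad-hoc OLAP users, keep maintenance roles exempt
ALTER USER olap_user SET statement_timeout = 3600000;   -- milliseconds
ALTER USER postgres  SET statement_timeout = 0;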
[ { "msg_contents": "I am using the onboard NVRAID controller. It has to be configured in the\nBIOS and windows needs a raid driver at install to even see the raid drive.\nBut the onboard controller still utilizes system resources. So it is not a\n\"pure\" software raid, but a mix of hardware (controller) / software I guess.\nBut I don't really know a whole lot about it.\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]]\nSent: Sunday, April 30, 2006 7:04 PM\nTo: Gregory Stewart\nCc: Theodore Loscalzo\nSubject: Re: [PERFORM] Performance Issues on Opteron Dual Core\n\n\nGregory Stewart wrote:\n> Theodore,\n>\n> Thank you for your reply.\n> I am using the onboard NVidia RAID that is on the Asus A8N-E motherboard,\nso\n> it is a software raid.\n> But as I said, the CPU utilization on that machine is basically 0%. I also\n> ran some system performance tests, and the machine flies including the HD\n> performance, all better than the dev machine which doesn't use raid.\n>\n\n\n(Ooops sorry about so many mails), Might be worth using Google or\nTechnet to see if there are known performance issues with the (NVidia?)\nSATA controller on the A8N-E (as there seem to be a lot of crappy SATA\ncontrollers around at the moment).\n\nAlso (I'm not a Windows guy) by software RAID, do you mean you are using\nthe \"firmware RAID1\" from the controller or are you using Windows\nsoftware RAID1 on the two disks directly?\n\nCheers\n\nMark\n\n\n--\nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.1.385 / Virus Database: 268.5.1/327 - Release Date: 4/28/2006\n\n\n", "msg_date": "Tue, 2 May 2006 22:58:15 -0500", "msg_from": "\"Gregory Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issues on Opteron Dual Core" } ]
[ { "msg_contents": "> > > FWIW, I've found problems running PostgreSQL on Windows in a \n> > > multi-CPU environment on w2k3. It runs fine for some period, and \n> > > then CPU and throughput drop to zero. So far I've been unable to \n> > > track down any more information than that, other than the \n> fact that \n> > > I haven't been able to reproduce this on any single-CPU machines.\n> > \n> > I have had previous correspondence about this with Magnus (search \n> > -general and -hackers). If you uninstall SP1 the problem \n> goes away. We \n> > played a bit with potential fixes but didn't find any.\n> \n> Interesting; does SP2 fix the problem? Anything we can do \n> over here to help?\n\nThere is no SP2 for Windows 2003.\n\nHave you tried this with latest-and-greatest CVS HEAD? Meaning with the\nnew semaphore code that was committed a couple of days ago?\n\n//Magnus\n", "msg_date": "Wed, 3 May 2006 09:29:15 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "On Wednesday 03 May 2006 03:29, Magnus Hagander wrote:\n> > > > FWIW, I've found problems running PostgreSQL on Windows in a\n> > > > multi-CPU environment on w2k3. It runs fine for some period, and\n> > > > then CPU and throughput drop to zero. So far I've been unable to\n> > > > track down any more information than that, other than the\n> >\n> > fact that\n> >\n> > > > I haven't been able to reproduce this on any single-CPU machines.\n> > >\n> > > I have had previous correspondence about this with Magnus (search\n> > > -general and -hackers). If you uninstall SP1 the problem\n> >\n> > goes away. We\n> >\n> > > played a bit with potential fixes but didn't find any.\n> >\n> > Interesting; does SP2 fix the problem? Anything we can do\n> > over here to help?\n>\n> There is no SP2 for Windows 2003.\n\nThat's what I thought. Jim confused me there for a minute.\n\n>\n> Have you tried this with latest-and-greatest CVS HEAD? Meaning with the\n> new semaphore code that was committed a couple of days ago?\n\nNo I haven't. Worth a test on a rainy afternoon I'd say...\n\n>\n> //Magnus\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Wed, 3 May 2006 08:30:10 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" }, { "msg_contents": "On Wed, May 03, 2006 at 09:29:15AM +0200, Magnus Hagander wrote:\n> > > > FWIW, I've found problems running PostgreSQL on Windows in a \n> > > > multi-CPU environment on w2k3. It runs fine for some period, and \n> > > > then CPU and throughput drop to zero. So far I've been unable to \n> > > > track down any more information than that, other than the \n> > fact that \n> > > > I haven't been able to reproduce this on any single-CPU machines.\n> > > \n> > > I have had previous correspondence about this with Magnus (search \n> > > -general and -hackers). If you uninstall SP1 the problem \n> > goes away. We \n> > > played a bit with potential fixes but didn't find any.\n> > \n> > Interesting; does SP2 fix the problem? Anything we can do \n> > over here to help?\n> \n> There is no SP2 for Windows 2003.\n> \n> Have you tried this with latest-and-greatest CVS HEAD? 
Meaning with the\n> new semaphore code that was committed a couple of days ago?\n\nI'd be happy to test this if someone could provide a build, or if\nthere's instructions somewhere for doing such a build...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 4 May 2006 12:49:07 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issues on Opteron Dual Core" } ]
[ { "msg_contents": "Hey, thanks for the advice.\n\nSticking with 7.4 isn't my call. There's a lot wrapped up in common usage of\nPostgres 7.4 and I could never rally everyone into moving forward. (at least\nnot this year)\n\nI've yet to prove (due to my current lack of statistical evidence) that our\nusage of 7.4 results in frequent vacuums impacting access. (it get more\ndifficult to speculate when considering a large slony cluster) I'm hoping to\ngather some times and numbers on an internal dogfood of our product shortly.\n\nAny advice on tracking vacuum performance and impact? I was thinking of just\nsystem timing the vacuumdb calls and turning on verbose for per-table/index\nstats. Do you think that's enough info?\n\nOnce I vacuum I won't be able to re-test any fragmentation that the vacuum\ncleaned up, so its all or nothing for this test.\n\nThanks again.\n\n- Chris\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andrew Sullivan\nSent: Thursday, May 04, 2006 8:28 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nOn Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:\n> I've come to the conclusion I need to simply start tracking all \n> transactions and determining a cost/performance for the larger and \n> frequently updated tables without the benefit and penalty of \n> pg_statio.\n\nI'll bet it won't help you. If you can't get off 7.4 on a busy machine,\nyou're going to get hosed by I/O sometimes no matter what. \nMy suggestion is to write a bunch of rule-of-thumb rules for your cron jobs,\nand start planning your upgrade.\n\nJan back-patched the vacuum stuff to 7.4 for us (Afilias), and we tried\nplaying with it; but it didn't really make the difference we'd hoped.\n\nThe reason for this is that 7.4 also doesn't have the bg_writer. So you're\nstill faced with I/O storms, no matter what you do. If I were in your\nshoes, I wouldn't waste a lot of time on trying to emulate the new features\nin 7.4.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant- garde\nwill probably become the textbook definition of Postmodernism. \n --Brad Holland\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n\n\nRE: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nHey, thanks for the advice.\n\nSticking with 7.4 isn't my call. There's a lot wrapped up in common usage of Postgres 7.4 and I could never rally everyone into moving forward. (at least not this year)\nI've yet to prove (due to my current lack of statistical evidence) that our usage of 7.4 results in frequent vacuums impacting access. (it get more difficult to speculate when considering a large slony cluster) I'm hoping to gather some times and numbers on an internal dogfood of our product shortly.\nAny advice on tracking vacuum performance and impact? I was thinking of just system timing the vacuumdb calls and turning on verbose for per-table/index stats. 
Do you think that's enough info?\nOnce I vacuum I won't be able to re-test any fragmentation that the vacuum cleaned up, so its all or nothing for this test.\nThanks again.\n\n- Chris\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Andrew Sullivan\nSent: Thursday, May 04, 2006 8:28 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.\n\n\nOn Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:\n> I've come to the conclusion I need to simply start tracking all \n> transactions and determining a cost/performance for the larger and \n> frequently updated tables without the benefit and penalty of \n> pg_statio.\n\nI'll bet it won't help you.  If you can't get off 7.4 on a busy machine, you're going to get hosed by I/O sometimes no matter what. \nMy suggestion is to write a bunch of rule-of-thumb rules for your cron jobs, and start planning your upgrade.\n\nJan back-patched the vacuum stuff to 7.4 for us (Afilias), and we tried playing with it; but it didn't really make the difference we'd hoped.\nThe reason for this is that 7.4 also doesn't have the bg_writer.  So you're still faced with I/O storms, no matter what you do.  If I were in your shoes, I wouldn't waste a lot of time on trying to emulate the new features in 7.4.\nA\n\n-- \nAndrew Sullivan  | [email protected]\nIn the future this spectacle of the middle classes shocking the avant- garde will probably become the textbook definition of Postmodernism. \n                --Brad Holland\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster", "msg_date": "Thu, 4 May 2006 10:49:06 -0400 ", "msg_from": "Chris Mckenzie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 7.4 and vacuum_cost_delay." } ]
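For the timing question, a minimal psql sketch along the lines Chris describes (the table name is a placeholder); note that cost-based vacuum delay only ships in stock releases from 8.0 on, which is why the 7.4 back-patch came up:

\timing
VACUUM VERBOSE ANALYZE big_frequently_updated_table;

-- VERBOSE reports pages and tuples removed per table and per index, and
-- \timing adds wall-clock duration per statement, which together give the
-- per-table numbers a cron-driven vacuumdb call would otherwise hide.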
[ { "msg_contents": "> > > > > FWIW, I've found problems running PostgreSQL on Windows in a \n> > > > > multi-CPU environment on w2k3. It runs fine for some \n> period, and \n> > > > > then CPU and throughput drop to zero. So far I've \n> been unable to \n> > > > > track down any more information than that, other than the\n> > > fact that\n> > > > > I haven't been able to reproduce this on any \n> single-CPU machines.\n> > > > \n> > > > I have had previous correspondence about this with \n> Magnus (search \n> > > > -general and -hackers). If you uninstall SP1 the problem\n> > > goes away. We\n> > > > played a bit with potential fixes but didn't find any.\n> > > \n> > > Interesting; does SP2 fix the problem? Anything we can do \n> over here \n> > > to help?\n> > \n> > There is no SP2 for Windows 2003.\n> > \n> > Have you tried this with latest-and-greatest CVS HEAD? Meaning with \n> > the new semaphore code that was committed a couple of days ago?\n> \n> I'd be happy to test this if someone could provide a build, \n> or if there's instructions somewhere for doing such a build...\n\nInstructions are here:\nhttp://www.postgresql.org/docs/faqs.FAQ_MINGW.html\n\nLet me know if you can't get that working an I can get a set of binaries\nfor you.\n\n//Magnus\n", "msg_date": "Fri, 5 May 2006 09:51:42 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issues on Opteron Dual Core" } ]
[ { "msg_contents": "Good morning,\n\nFirst the stats: I'm using PostgreSQL 8.0.1 (I know I should upgrade,\ncannot due to vendor app. restrictions...), RedHat 9 on a SUN V40Z with 8GB\nof memory. I'm using the \"out-of-the-box\" settings in postgresql.conf.\nI've been testing various changes but cannot increase anything to improve\nperformance till I get this memory leak and/or cache issue resolved.\n\nScenario: Last night the backup of my largest DB failed (4.4GB in size with\n44Million+ tuples) with a memory alloc error. I'll attach it at the end of\nthis email. Once we rebooted the box and freed memory all was well, the\nbackup completed fine but as the backup ran and I did a few minor queries\nall of a sudden 3+GB of memory was used up! I then performed my nightly\nvacuumdb with analyze and just about the remaining 4GB of memory was gone!\nThis was the only application running in the machine at the time.\n\nQuestions:\n1. I thought using such \"smallish\" setting as provided would cause postgres\nto go to swap instead of eating up all the memory?\n2. If PostgreSQL is the culprit (which I hope it is not) does postgres\nrelease any memory it assumes during processing when that processing is\ncomplete? Such as the backup and vacuumdb I mentioned?\n3. Does anyone know of a way to determine if it actually is postgres hogging\nthis memory? Using TOP I only see my postgres processes using 1% or 2% of\nmemory. It would be nice to have a tool that showed exactly what is eating\nup that 7+GB?\n4. IS this due to my low setting in postgresql.conf?\n\nAny and all help is welcomed. For you PostgreSQL purists out there of whom\nI am fast becoming, your help is needed as my company is considering dumping\npostgresql in favor of Oracle.....I would much rather figure out the issue\nthen switch DBs. Here is the error received from the failed backup and the\nsecond was noted in my pg_log file:\n\npg_dump: ERROR: invalid memory alloc request size 18446744073709551613\npg_dump: SQL command to dump the contents of table \"msgstate\" failed:\nPQendcopy() failed.\npg_dump: Error message from server: ERROR: invalid memory alloc request\nsize 18446744073709551613\npg_dump: The command was: COPY public.msgstate (id, connectormsgid,\nparentid, orderidfk, clordid, orgclordid, msg, rawmsg, msgtype, \"action\",\nsendstate, statechain, fromdest, todest, inserted, op_id, released, reason,\noutgoing, symbol, qty, price, stopprice, side, data1, data2, data3, data4,\ndata5) TO stdout;\n\n\n2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT = [local] ERROR:\ninvalid memory alloc request size 18446744073709551613\n\nThank you,\nTim McElroy\n\n\n\n\n\n\n\nMemory and/or cache issues?\n\n\nGood morning,\n\nFirst the stats:  I'm using PostgreSQL 8.0.1 (I know I should upgrade, cannot due to vendor app. restrictions...), RedHat 9 on a SUN V40Z with 8GB of memory.  I'm using the \"out-of-the-box\" settings in postgresql.conf.  I've been testing various changes but cannot increase anything to improve performance till I get this memory leak and/or cache issue resolved.\nScenario:  Last night the backup of my largest DB failed (4.4GB in size with 44Million+ tuples) with a memory alloc error.  I'll attach it at the end of this email.  Once we rebooted the box and freed memory all was well, the backup completed fine but as the backup ran and I did a few minor queries all of a sudden 3+GB of memory was used up!  I then performed my nightly vacuumdb with analyze and just about the remaining 4GB of memory was gone!  
This was the only application running in the machine at the time.\nQuestions:\n1. I thought using such \"smallish\" setting as provided would cause postgres to go to swap instead of eating up all the memory?\n2. If PostgreSQL is the culprit (which I hope it is not) does postgres release any memory it assumes during processing when that processing is complete?  Such as the backup and vacuumdb I mentioned?\n3. Does anyone know of a way to determine if it actually is postgres hogging this memory?  Using TOP I only see my postgres processes using 1% or 2% of memory.  It would be nice to have a tool that showed exactly what is eating up that 7+GB?\n4. IS this due to my low setting in postgresql.conf?\n\nAny and all help is welcomed.  For you PostgreSQL purists out there of whom I am fast becoming, your help is needed as my company is considering dumping postgresql in favor of Oracle.....I would much rather figure out the issue then switch DBs.  Here is the error received from the failed backup and the second was noted in my pg_log file:\npg_dump: ERROR:  invalid memory alloc request size 18446744073709551613\npg_dump: SQL command to dump the contents of table \"msgstate\" failed: PQendcopy() failed.\npg_dump: Error message from server: ERROR:  invalid memory alloc request size 18446744073709551613\npg_dump: The command was: COPY public.msgstate (id, connectormsgid, parentid, orderidfk, clordid, orgclordid, msg, rawmsg, msgtype, \"action\", sendstate, statechain, fromdest, todest, inserted, op_id, released, reason, outgoing, symbol, qty, price, stopprice, side, data1, data2, data3, data4, data5) TO stdout;\n\n2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT = [local] ERROR:  invalid memory alloc request size 18446744073709551613\n\nThank you,\nTim McElroy", "msg_date": "Fri, 5 May 2006 07:49:32 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Memory and/or cache issues?" }, { "msg_contents": "\"mcelroy, tim\" <[email protected]> writes:\n> pg_dump: ERROR: invalid memory alloc request size 18446744073709551613\n\nThat looks more like a corrupt-data problem than anything directly to do\nwith having or not having enough memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 May 2006 09:25:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues? " }, { "msg_contents": "Tom Lane wrote:\n> \"mcelroy, tim\" <[email protected]> writes:\n> > pg_dump: ERROR: invalid memory alloc request size 18446744073709551613\n> \n> That looks more like a corrupt-data problem than anything directly to do\n> with having or not having enough memory.\n\nThe bit pattern is certainly suspicious, though I'll grant that it\ndoesn't mean anything.\n\n$ dc\n2 o \n18446744073709551613 p\n1111111111111111111111111111111111111111111111111111111111111101\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 5 May 2006 09:31:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" 
}, { "msg_contents": "> 2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT = [local] \n> ERROR: invalid memory alloc request size 18446744073709551613\n\nPerhaps I'm off beam here, but any time I've seen an app try to allocate \na gazillion bytes, it's\ndue to some code incorrectly calculating the size of something (or more \ncommonly, using an\ninitialized variable as the basis for said calculation).\n\n\n\n\n\n\n\n\n\n2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT\n= [local] ERROR:  invalid memory alloc request size 18446744073709551613\n\nPerhaps I'm off beam here, but any\ntime I've seen an app try to allocate a gazillion bytes, it's \ndue to some code incorrectly calculating the size of something (or more\ncommonly, using an\ninitialized variable as the basis for said calculation).", "msg_date": "Fri, 05 May 2006 22:33:51 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" } ]
[ { "msg_contents": "Thanks Tom. I thought the same thing and waded through the archives trying\nvarious fixes such as vacuum, vacuum full (both with analyze), reindex and\nstill the same issue. However, once the box was rebooted the backup went\nsmooth and the data was fine. We have two (2) machines (PROD001 & PROD002)\nthat are \"in-sync\" and the data matched exactly. PROD002 was where I had\nthe problem. I see this on all the postgres installations, no matter what I\nset the postgresql.conf settings to regarding memory allocation, once\npostgres starts up 95% of the memory on the box is used. Is there a way\nwithin Linux to 'see' what or who is actually using this memory? I would\nlove to say it's a hardware thing and that postgres is fine :)\n\nRegards,\nTim\n\n -----Original Message-----\nFrom: \tTom Lane [mailto:[email protected]] \nSent:\tFriday, May 05, 2006 9:25 AM\nTo:\tmcelroy, tim\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues? \n\n\"mcelroy, tim\" <[email protected]> writes:\n> pg_dump: ERROR: invalid memory alloc request size 18446744073709551613\n\nThat looks more like a corrupt-data problem than anything directly to do\nwith having or not having enough memory.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues? \n\n\nThanks Tom.  I thought the same thing and waded through the archives trying various fixes such as vacuum, vacuum full (both with analyze), reindex and still the same issue.  However, once the box was rebooted the backup went smooth and the data was fine.  We have two (2) machines (PROD001 & PROD002) that are \"in-sync\" and the data matched exactly.  PROD002 was where I had the problem.  I see this on all the postgres installations, no matter what I set the postgresql.conf settings to regarding memory allocation, once postgres starts up 95% of the memory on the box is used.  Is there a way within Linux to 'see' what or who is actually using this memory?  I would love to say it's a hardware thing and that postgres is fine :)\nRegards,\nTim\n\n -----Original Message-----\nFrom:   Tom Lane [mailto:[email protected]] \nSent:   Friday, May 05, 2006 9:25 AM\nTo:     mcelroy, tim\nCc:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues? \n\n\"mcelroy, tim\" <[email protected]> writes:\n> pg_dump: ERROR:  invalid memory alloc request size 18446744073709551613\n\nThat looks more like a corrupt-data problem than anything directly to do\nwith having or not having enough memory.\n\n                        regards, tom lane", "msg_date": "Fri, 5 May 2006 09:26:34 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues? " }, { "msg_contents": "\"mcelroy, tim\" <[email protected]> writes:\n> I see this on all the postgres installations, no matter what I\n> set the postgresql.conf settings to regarding memory allocation, once\n> postgres starts up 95% of the memory on the box is used. Is there a way\n> within Linux to 'see' what or who is actually using this memory?\n\nProbably kernel disk cache. Are you under the misimpression that unused\nmemory is a good thing? If a Unix-ish system *isn't* showing near zero\nfree memory under load, the kernel is wasting valuable resources.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 May 2006 09:43:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues? " } ]
[ { "msg_contents": "Are you saying the kernel's disc cache may be getting whacked? No, I\nunderstand that PG should use as much memory as it can and the system as\nwell. The main problem here is that with almost all the 8GB of RAM 'in use'\nwhen I try to do a pg_dump or vacuumdb I run out of memory and the system\ncrashes....\n\nI well understand that unused memory is not a good thing, just that when you\nhave none and can't do the maint work....bad stuff happens. For example, I\njust created a benchdb on my DEV box with 1,000,000 tuples. As this ran the\nmem in use jumped up 1G and it hasn't gone down? Once the PG process has\nfinished its task shouldn't it release the memory it used?\n\nThanks,\nTim\n\n\n -----Original Message-----\nFrom: \tTom Lane [mailto:[email protected]] \nSent:\tFriday, May 05, 2006 9:44 AM\nTo:\tmcelroy, tim\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues? \n\n\"mcelroy, tim\" <[email protected]> writes:\n> I see this on all the postgres installations, no matter what I\n> set the postgresql.conf settings to regarding memory allocation, once\n> postgres starts up 95% of the memory on the box is used. Is there a way\n> within Linux to 'see' what or who is actually using this memory?\n\nProbably kernel disk cache. Are you under the misimpression that unused\nmemory is a good thing? If a Unix-ish system *isn't* showing near zero\nfree memory under load, the kernel is wasting valuable resources.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues? \n\n\nAre you saying the kernel's disc cache may be getting whacked?  No, I understand that PG should use as much memory as it can and the system as well.  The main problem here is that with almost all the 8GB of RAM 'in use' when I try to do a pg_dump or vacuumdb I run out of memory and the system crashes....\nI well understand that unused memory is not a good thing, just that when you have none and can't do the maint work....bad stuff happens.  For example, I just created a benchdb on my DEV box with 1,000,000 tuples.  As this ran the mem in use jumped up 1G and it hasn't gone down?  Once the PG process has finished its task shouldn't it release the memory it used?\nThanks,\nTim\n\n\n -----Original Message-----\nFrom:   Tom Lane [mailto:[email protected]] \nSent:   Friday, May 05, 2006 9:44 AM\nTo:     mcelroy, tim\nCc:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues? \n\n\"mcelroy, tim\" <[email protected]> writes:\n> I see this on all the postgres installations, no matter what I\n> set the postgresql.conf settings to regarding memory allocation, once\n> postgres starts up 95% of the memory on the box is used.  Is there a way\n> within Linux to 'see' what or who is actually using this memory?\n\nProbably kernel disk cache.  Are you under the misimpression that unused\nmemory is a good thing?  If a Unix-ish system *isn't* showing near zero\nfree memory under load, the kernel is wasting valuable resources.\n\n                        regards, tom lane", "msg_date": "Fri, 5 May 2006 09:57:58 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues? " }, { "msg_contents": "On Fri, May 05, 2006 at 09:57:58AM -0400, mcelroy, tim wrote:\n>Are you saying the kernel's disc cache may be getting whacked? No, I\n>understand that PG should use as much memory as it can and the system as\n>well. 
The main problem here is that with almost all the 8GB of RAM 'in use'\n>when I try to do a pg_dump or vacuumdb I run out of memory and the system\n>crashes....\n\nYou need to be way more specific about what \"in use\" means. Try pasting \nthe output of actual commands like \"free\". The main problem here \naccording to the output you sent is that your process is trying to \nallocate 10billion terabytes of RAM (which ain't gonna work) and dies. \nThat is not a memory issue.\n\nMike Stone\n", "msg_date": "Fri, 05 May 2006 10:23:44 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "For a standard config most of the memory used by Postgres is the shared\nbuffers. The shared buffers are a cache to store blocks read from the\ndisk, so if you do a query, Postgres will allocate and fill the shared\nbuffers up to the max amount you set in your postgresql.conf file.\nPostgres doesn't release that memory between queries because the point\nis to be able to pull data from ram instead of the disk on the next\nquery.\n \nAre you sure your settings in postgresql.conf are standard? What are\nyour settings for shared_buffers and work_mem?\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of mcelroy,\ntim\nSent: Friday, May 05, 2006 8:58 AM\nTo: 'Tom Lane'\nCc: [email protected]\nSubject: Re: [PERFORM] Memory and/or cache issues? \n\n\n\nAre you saying the kernel's disc cache may be getting whacked? No, I\nunderstand that PG should use as much memory as it can and the system as\nwell. The main problem here is that with almost all the 8GB of RAM 'in\nuse' when I try to do a pg_dump or vacuumdb I run out of memory and the\nsystem crashes....\n\nI well understand that unused memory is not a good thing, just that when\nyou have none and can't do the maint work....bad stuff happens. For\nexample, I just created a benchdb on my DEV box with 1,000,000 tuples.\nAs this ran the mem in use jumped up 1G and it hasn't gone down? Once\nthe PG process has finished its task shouldn't it release the memory it\nused?\n\nThanks, \nTim \n\n\n -----Original Message----- \nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, May 05, 2006 9:44 AM \nTo: mcelroy, tim \nCc: [email protected] \nSubject: Re: [PERFORM] Memory and/or cache issues? \n\n\"mcelroy, tim\" <[email protected]> writes: \n> I see this on all the postgres installations, no matter what I \n> set the postgresql.conf settings to regarding memory allocation, once \n> postgres starts up 95% of the memory on the box is used. Is there a\nway \n> within Linux to 'see' what or who is actually using this memory? \n\nProbably kernel disk cache. Are you under the misimpression that unused\n\nmemory is a good thing? If a Unix-ish system *isn't* showing near zero \nfree memory under load, the kernel is wasting valuable resources. \n\n regards, tom lane \n\n\n\n\nMessage\n\n\nFor a \nstandard config most of the memory used by Postgres is the shared buffers.  \nThe shared buffers are a cache to store blocks read from the disk, so if you do \na query, Postgres will allocate and fill the shared buffers up to the max amount \nyou set in your postgresql.conf file.  Postgres doesn't release that \nmemory between queries because the point is to be able to pull data from \nram instead of the disk on the next query.\n \nAre you sure your settings in postgresql.conf are \nstandard?  
What are your settings for shared_buffers and \nwork_mem?\n \n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of mcelroy, \n timSent: Friday, May 05, 2006 8:58 AMTo: 'Tom \n Lane'Cc: [email protected]: Re: \n [PERFORM] Memory and/or cache issues? \nAre you saying the kernel's disc cache may be getting \n whacked?  No, I understand that PG should use as much memory as it can \n and the system as well.  The main problem here is that with almost all \n the 8GB of RAM 'in use' when I try to do a pg_dump or vacuumdb I run out of \n memory and the system crashes....\nI well understand that unused memory is not a good thing, just \n that when you have none and can't do the maint work....bad stuff \n happens.  For example, I just created a benchdb on my DEV box with \n 1,000,000 tuples.  As this ran the mem in use jumped up 1G and it hasn't \n gone down?  Once the PG process has finished its task shouldn't it \n release the memory it used?\nThanks, Tim \n -----Original Message----- From: \n   Tom Lane [mailto:[email protected]] Sent:   Friday, May 05, 2006 9:44 AM To:     mcelroy, tim Cc:     [email protected]\nSubject:        Re: \n [PERFORM] Memory and/or cache issues? \n\"mcelroy, tim\" <[email protected]> \n writes: > I see this on all the postgres \n installations, no matter what I > set the \n postgresql.conf settings to regarding memory allocation, once > postgres starts up 95% of the memory on the box is used.  Is \n there a way > within Linux to 'see' what or who is \n actually using this memory? \nProbably kernel disk cache.  Are you under the \n misimpression that unused memory is a good \n thing?  If a Unix-ish system *isn't* showing near zero free memory under load, the kernel is wasting valuable \n resources. \n        \n         \n         regards, tom \n lane", "msg_date": "Fri, 5 May 2006 09:31:33 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues? " } ]
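Whether those shared buffers (and the kernel cache behind them) are actually getting hit can be read from the statistics views; a small sketch for 8.0/8.1, where the block-level counters are only collected when stats_block_level is on:

SELECT datname, blks_read, blks_hit,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();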
[ { "msg_contents": "Sorry, been up all night and maybe provided too much information or not the\nright information and only confused folks, tired I guess. When I say 'in\nuse' I am referring to the 'used' column. Thanks all who have responded to\nthis inquiry, I appreciate it. \n\nHere's free from PROD001:\n[root@wbibsngwyprod001 kernel]# free -k -t\n total used free shared buffers cached\nMem: 7643536 6975772 667764 0 165496 5393396\n-/+ buffers/cache: 1416880 6226656\nSwap: 8185108 5208 8179900\nTotal: 15828644 6980980 8847664\n\nHere's free from PROD002:\n[root@wbibsngwyprod002 root]# free -k -t\n total used free shared buffers cached\nMem: 7643536 6694220 949316 0 161008 4916420\n-/+ buffers/cache: 1616792 6026744\nSwap: 8185108 11584 8173524\nTotal: 15828644 6705804 9122840\n\nTim\n\n -----Original Message-----\nFrom: \[email protected]\n[mailto:[email protected]] On Behalf Of Michael Stone\nSent:\tFriday, May 05, 2006 10:24 AM\nTo:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 09:57:58AM -0400, mcelroy, tim wrote:\n>Are you saying the kernel's disc cache may be getting whacked? No, I\n>understand that PG should use as much memory as it can and the system as\n>well. The main problem here is that with almost all the 8GB of RAM 'in\nuse'\n>when I try to do a pg_dump or vacuumdb I run out of memory and the system\n>crashes....\n\nYou need to be way more specific about what \"in use\" means. Try pasting \nthe output of actual commands like \"free\". The main problem here \naccording to the output you sent is that your process is trying to \nallocate 10billion terabytes of RAM (which ain't gonna work) and dies. \nThat is not a memory issue.\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues?\n\n\nSorry, been up all night and maybe provided too much information or not the right information and only confused folks, tired I guess.  When I say 'in use' I am referring to the 'used' column.  Thanks all who have responded to this inquiry, I appreciate it. \nHere's free from PROD001:\n[root@wbibsngwyprod001 kernel]# free -k -t\n             total       used       free     shared    buffers     cached\nMem:       7643536    6975772     667764          0     165496    5393396\n-/+ buffers/cache:    1416880    6226656\nSwap:      8185108       5208    8179900\nTotal:    15828644    6980980    8847664\n\nHere's free from PROD002:\n[root@wbibsngwyprod002 root]# free -k -t\n             total       used       free     shared    buffers     cached\nMem:       7643536    6694220     949316          0     161008    4916420\n-/+ buffers/cache:    1616792    6026744\nSwap:      8185108      11584    8173524\nTotal:    15828644    6705804    9122840\n\nTim\n\n -----Original Message-----\nFrom:   [email protected] [mailto:[email protected]]  On Behalf Of Michael Stone\nSent:   Friday, May 05, 2006 10:24 AM\nTo:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 09:57:58AM -0400, mcelroy, tim wrote:\n>Are you saying the kernel's disc cache may be getting whacked?  No, I\n>understand that PG should use as much memory as it can and the system as\n>well.  
The main problem here is that with almost all the 8GB of RAM 'in use'\n>when I try to do a pg_dump or vacuumdb I run out of memory and the system\n>crashes....\n\nYou need to be way more specific about what \"in use\" means. Try pasting \nthe output of actual commands like \"free\". The main problem here \naccording to the output you sent is that your process is trying to \nallocate 10billion terabytes of RAM (which ain't gonna work) and dies. \nThat is not a memory issue.\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n       subscribe-nomail command to [email protected] so that your\n       message can get through to the mailing list cleanly", "msg_date": "Fri, 5 May 2006 10:27:10 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n>Sorry, been up all night and maybe provided too much information or not the\n>right information and only confused folks, tired I guess. When I say 'in\n>use' I am referring to the 'used' column.\n\nWhich is a mostly irrelevant number. \n\n>Here's free from PROD001:\n>[root@wbibsngwyprod001 kernel]# free -k -t\n> total used free shared buffers cached\n>Mem: 7643536 6975772 667764 0 165496 5393396\n>-/+ buffers/cache: 1416880 6226656\n>Swap: 8185108 5208 8179900\n>Total: 15828644 6980980 8847664\n\nYou've got 1.4G in use, 5.3G of disk cache, 165M of buffers and 667M \nfree. That doesn't seem unreasonable. If an application needs more \nmemory the amount of disk cache will decrease. As I said in an earlier \nemail, the problem is that the application is trying to allocate a bogus \namount of memory, not that you have a memory problem.\n\nMike Stone\n", "msg_date": "Fri, 05 May 2006 10:40:33 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "mcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not \n> the right information and only confused folks, tired I guess. When I \n> say 'in use' I am referring to the 'used' column. Thanks all who have \n> responded to this inquiry, I appreciate it.\n> \n> Here's free from PROD001:\n> [root@wbibsngwyprod001 kernel]# free -k -t\n> total used free shared buffers cached\n> Mem: 7643536 6975772 667764 0 165496 5393396\n> -/+ buffers/cache: 1416880 6226656\n> Swap: 8185108 5208 8179900\n> Total: 15828644 6980980 8847664\n\nOn Linux (unlike most Unix systems), \"used\" includes both processes AND the kernel's file-system buffers, which means \"used\" will almost always be close to 100%. Starting with a freshly-booted system, you can issue almost any command that scans files, and \"used\" will go up and STAY at nearly 100% of memory. For example, reboot and try \"tar cf - / >/dev/null\" and you'll see the same sort of \"used\" numbers.\n\nIn My Humble Opinion, this is a mistake in Linux. This confuses just about everyone the first time they see it (including me), because the file-system buffers are dynamic and will be relenquished by the kernel if another process needs memory. On Unix systems, \"used\" means, \"someone else is using it and you can't have it\", which is what most of us really want to know.\n\nCraig\n", "msg_date": "Fri, 05 May 2006 07:51:26 -0700", "msg_from": "\"Craig A. 
James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Fri, May 05, 2006 at 10:40:33AM -0400, Michael Stone wrote:\n> You've got 1.4G in use, 5.3G of disk cache, 165M of buffers and 667M \n> free. That doesn't seem unreasonable. If an application needs more \n\nActually, it indiciates a bunch of memory not being used, but IIRC Tim's\ndatabase is approximately 4G in size, so the 5.3G of disk cache makes\nsense if the system was recently rebooted.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 5 May 2006 19:31:13 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not the\n\nDo you have any budget for support or training, either from the company\nselling you the app or a company that provides PostgreSQL support? I\nsuspect some money invested there would result in a lot less\nfrustration. It'd also certainly be cheaper than switching to Oracle.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 5 May 2006 19:35:14 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" } ]
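Jim's point that the ~5.3 GB of "cached" lines up with a roughly 4 GB database can be checked from inside the database. A small sketch: pg_database_size() and pg_size_pretty() exist from 8.1 on (contrib/dbsize covers earlier releases), so on an 8.0.1 box this assumes those functions are available:

SELECT current_database() AS db,
       pg_size_pretty(pg_database_size(current_database())) AS on_disk_size;

-- compare the result with the "cached" column of free: a database that fits
-- in the kernel page cache is exactly what makes "used" look alarmingly high
-- while the -/+ buffers/cache line stays low.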
[ { "msg_contents": "On the boxes in question the settings are:\n \nshared_buffers = 1000\nwork_mem = 1024\n \nI have revised these on my DEV box and see some improvement (a quick thank\nyou to Jim Nasby for his assistance with that):\n \nshared_buffers = 20000\nwork_mem = 8024\n \nRegards,\nTim\n \n-----Original Message-----\nFrom: Dave Dutcher [mailto:[email protected]]\nSent: Friday, May 05, 2006 10:32 AM\nTo: 'mcelroy, tim'\nCc: [email protected]\nSubject: RE: [PERFORM] Memory and/or cache issues? \n \nFor a standard config most of the memory used by Postgres is the shared\nbuffers. The shared buffers are a cache to store blocks read from the disk,\nso if you do a query, Postgres will allocate and fill the shared buffers up\nto the max amount you set in your postgresql.conf file. Postgres doesn't\nrelease that memory between queries because the point is to be able to pull\ndata from ram instead of the disk on the next query.\n \nAre you sure your settings in postgresql.conf are standard? What are your\nsettings for shared_buffers and work_mem?\n \n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of mcelroy, tim\nSent: Friday, May 05, 2006 8:58 AM\nTo: 'Tom Lane'\nCc: [email protected]\nSubject: Re: [PERFORM] Memory and/or cache issues? \nAre you saying the kernel's disc cache may be getting whacked? No, I\nunderstand that PG should use as much memory as it can and the system as\nwell. The main problem here is that with almost all the 8GB of RAM 'in use'\nwhen I try to do a pg_dump or vacuumdb I run out of memory and the system\ncrashes....\nI well understand that unused memory is not a good thing, just that when you\nhave none and can't do the maint work....bad stuff happens. For example, I\njust created a benchdb on my DEV box with 1,000,000 tuples. As this ran the\nmem in use jumped up 1G and it hasn't gone down? Once the PG process has\nfinished its task shouldn't it release the memory it used?\nThanks, \nTim \n \n -----Original Message----- \nFrom: Tom Lane [ mailto:[email protected] <mailto:[email protected]> ] \nSent: Friday, May 05, 2006 9:44 AM \nTo: mcelroy, tim \nCc: [email protected] \nSubject: Re: [PERFORM] Memory and/or cache issues? \n\"mcelroy, tim\" <[email protected]> writes: \n> I see this on all the postgres installations, no matter what I \n> set the postgresql.conf settings to regarding memory allocation, once \n> postgres starts up 95% of the memory on the box is used. Is there a way \n> within Linux to 'see' what or who is actually using this memory? \nProbably kernel disk cache. Are you under the misimpression that unused \nmemory is a good thing? If a Unix-ish system *isn't* showing near zero \nfree memory under load, the kernel is wasting valuable resources. \n regards, tom lane \n\n\n\n\n\n\n\n\nMessage\n\n\n\n\n\nOn the boxes\nin question the settings are:\n \nshared_buffers\n= 1000\nwork_mem =\n1024\n \nI have\nrevised these on my DEV box and see some improvement (a quick thank you to Jim\nNasby for his assistance with that):\n \nshared_buffers\n= 20000\nwork_mem =\n8024\n \nRegards,\nTim\n \n-----Original\nMessage-----\nFrom: Dave Dutcher\n[mailto:[email protected]]\nSent: Friday, May 05, 2006 10:32\nAM\nTo: 'mcelroy, tim'\nCc: [email protected]\nSubject: RE: [PERFORM] Memory\nand/or cache issues? \n \nFor a standard config\nmost of the memory used by Postgres is the shared buffers.  
The shared\nbuffers are a cache to store blocks read from the disk, so if you do a query,\nPostgres will allocate and fill the shared buffers up to the max amount you set\nin your postgresql.conf file.  Postgres doesn't release that\nmemory between queries because the point is to be able to pull data from\nram instead of the disk on the next query.\n \nAre you sure your\nsettings in postgresql.conf are standard?  What are your settings for\nshared_buffers and work_mem?\n \n \n\n-----Original\nMessage-----\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of mcelroy, tim\nSent: Friday, May 05, 2006 8:58 AM\nTo: 'Tom Lane'\nCc: [email protected]\nSubject: Re: [PERFORM] Memory\nand/or cache issues? \nAre you\nsaying the kernel's disc cache may be getting whacked?  No, I understand\nthat PG should use as much memory as it can and the system as well.  The\nmain problem here is that with almost all the 8GB of RAM 'in use' when I try to\ndo a pg_dump or vacuumdb I run out of memory and the system crashes....\nI well\nunderstand that unused memory is not a good thing, just that when you have none\nand can't do the maint work....bad stuff happens.  For example, I just\ncreated a benchdb on my DEV box with 1,000,000 tuples.  As this ran the\nmem in use jumped up 1G and it hasn't gone down?  Once the PG process has\nfinished its task shouldn't it release the memory it used?\nThanks, \nTim \n \n -----Original\nMessage----- \nFrom:   Tom Lane [mailto:[email protected]]\n\nSent:   Friday, May 05, 2006 9:44 AM \nTo:     mcelroy, tim \nCc:     [email protected] \nSubject:        Re: [PERFORM]\nMemory and/or cache issues? \n\"mcelroy,\ntim\" <[email protected]> writes: \n> I see this on all the postgres installations, no matter what\nI \n> set the postgresql.conf settings to regarding memory\nallocation, once \n> postgres starts up 95% of the memory on the box is\nused.  Is there a way \n> within Linux to 'see' what or who is actually using this\nmemory? \nProbably\nkernel disk cache.  Are you under the misimpression that unused \nmemory is a good thing?  If a Unix-ish system *isn't* showing\nnear zero \nfree memory under load, the kernel is wasting valuable resources. \n       \n       \n        regards, tom lane", "msg_date": "Fri, 5 May 2006 10:36:19 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues? " } ]
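For reference, a small sketch of what those settings mean on 8.0/8.1, where shared_buffers counts 8 kB pages (at the default block size) and work_mem is in kB; the comments use the revised dev-box values quoted above:

SHOW shared_buffers;   -- 20000 buffers * 8 kB = roughly 156 MB of shared cache
SHOW work_mem;         -- 8024 kB allowed per sort/hash step, per backend

SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem');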
[ { "msg_contents": "Thanks Michael. Are you saying the 'used' column is the irrelevant number?\nIs the number that is more pertinent is 1416880? Is that the actual amount\nof memory in use? I agree about the allocation of a bogus amount of memory\nbut the issue occurred after-hours when the application(s) were not running.\nOr are you saying the app whacked the DB during the day and never recovered?\n\nTim\n\n\n -----Original Message-----\nFrom: \[email protected]\n[mailto:[email protected]] On Behalf Of Michael Stone\nSent:\tFriday, May 05, 2006 10:41 AM\nTo:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n>Sorry, been up all night and maybe provided too much information or not the\n>right information and only confused folks, tired I guess. When I say 'in\n>use' I am referring to the 'used' column.\n\nWhich is a mostly irrelevant number. \n\n>Here's free from PROD001:\n>[root@wbibsngwyprod001 kernel]# free -k -t\n> total used free shared buffers cached\n>Mem: 7643536 6975772 667764 0 165496 5393396\n>-/+ buffers/cache: 1416880 6226656\n>Swap: 8185108 5208 8179900\n>Total: 15828644 6980980 8847664\n\nYou've got 1.4G in use, 5.3G of disk cache, 165M of buffers and 667M \nfree. That doesn't seem unreasonable. If an application needs more \nmemory the amount of disk cache will decrease. As I said in an earlier \nemail, the problem is that the application is trying to allocate a bogus \namount of memory, not that you have a memory problem.\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues?\n\n\nThanks Michael.  Are you saying the 'used' column is the irrelevant number?  Is the number that is more pertinent is 1416880?  Is that the actual amount of memory in use?  I agree about the allocation of a bogus amount of memory but the issue occurred after-hours when the application(s) were not running.  Or are you saying the app whacked the DB during the day and never recovered?\nTim\n\n\n -----Original Message-----\nFrom:   [email protected] [mailto:[email protected]]  On Behalf Of Michael Stone\nSent:   Friday, May 05, 2006 10:41 AM\nTo:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n>Sorry, been up all night and maybe provided too much information or not the\n>right information and only confused folks, tired I guess.  When I say 'in\n>use' I am referring to the 'used' column.\n\nWhich is a mostly irrelevant number. \n\n>Here's free from PROD001:\n>[root@wbibsngwyprod001 kernel]# free -k -t\n>             total       used       free     shared    buffers     cached\n>Mem:       7643536    6975772     667764          0     165496    5393396\n>-/+ buffers/cache:    1416880    6226656\n>Swap:      8185108       5208    8179900\n>Total:    15828644    6980980    8847664\n\nYou've got 1.4G in use, 5.3G of disk cache, 165M of buffers and 667M \nfree. That doesn't seem unreasonable. If an application needs more \nmemory the amount of disk cache will decrease. 
As I said in an earlier \nemail, the problem is that the application is trying to allocate a bogus \namount of memory, not that you have a memory problem.\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings", "msg_date": "Fri, 5 May 2006 10:45:21 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Fri, May 05, 2006 at 10:45:21AM -0400, mcelroy, tim wrote:\n>Thanks Michael. Are you saying the 'used' column is the irrelevant number?\n>Is the number that is more pertinent is 1416880? Is that the actual amount\n>of memory in use? \n\nYes.\n\n>I agree about the allocation of a bogus amount of memory\n>but the issue occurred after-hours when the application(s) were not running.\n>Or are you saying the app whacked the DB during the day and never recovered?\n\nI have no idea why the bogus memory allocation happened. If it continues \nto happen you might have data corruption on disk. If it never happens \nagain, it could have been cosmic rays.\n\nMike Stone\n", "msg_date": "Fri, 05 May 2006 10:53:48 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "Michael Stone wrote:\n> On Fri, May 05, 2006 at 10:45:21AM -0400, mcelroy, tim wrote:\n>> Thanks Michael. Are you saying the 'used' column is the irrelevant \n>> number?\n>> Is the number that is more pertinent is 1416880? Is that the actual \n>> amount\n>> of memory in use? \n> \n> Yes.\n> \n>> I agree about the allocation of a bogus amount of memory\n>> but the issue occurred after-hours when the application(s) were not \n>> running.\n>> Or are you saying the app whacked the DB during the day and never \n>> recovered?\n> \n> I have no idea why the bogus memory allocation happened. If it continues \n> to happen you might have data corruption on disk. If it never happens \n> again, it could have been cosmic rays.\n> \n> Mike Stone\n\nhave you configured your shared memory settings right?\nif postgres tries to allocate more memory (because of settings enable \nit) than the kernel itself is configured for, then you will see similar \nerror messages.\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n", "msg_date": "Fri, 05 May 2006 18:33:33 +0200", "msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Fri, May 05, 2006 at 06:33:33PM +0200, G�briel �kos wrote:\n>if postgres tries to allocate more memory (because of settings enable \n>it) than the kernel itself is configured for, then you will see similar \n>error messages.\n\nIf you're talking about the shared memory limits, postgres will bomb out \nfairly quickly in that case, IIRC. \n\nMike Stone\n", "msg_date": "Fri, 05 May 2006 13:09:53 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" 
}, { "msg_contents": "On Fri, May 05, 2006 at 01:09:53PM -0400, Michael Stone wrote:\n> On Fri, May 05, 2006 at 06:33:33PM +0200, G?briel ?kos wrote:\n> >if postgres tries to allocate more memory (because of settings enable \n> >it) than the kernel itself is configured for, then you will see similar \n> >error messages.\n> \n> If you're talking about the shared memory limits, postgres will bomb out \n> fairly quickly in that case, IIRC. \n\nMore importantly I don't think it would result in trying to allocate 10\nTB or whatever that huge number was.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 5 May 2006 19:33:38 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" } ]
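To spell out the arithmetic behind Michael's answer: the 1416880 kB figure Tim asks about is simply the 'used' column with the kernel's buffers and page cache subtracted back out, i.e. the '-/+ buffers/cache' row of the free output quoted earlier:

    6975772 kB (used) - 165496 kB (buffers) - 5393396 kB (cached) = 1416880 kB

So only about 1.4 GB is actually held by processes; the rest of 'used' is reclaimable cache.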
[ { "msg_contents": "I have a question about my function. I must get user rating by game result. \nThis isn't probably a perfect solution but I have one question about \n\nselect into inGameRating count(game_result)+1 from users\n\t\twhere game_result > inRow.game_result;\n\nThis query in function results in about 1100 ms.\ninRow.game_result is a integer 2984\nAnd now if I replace inRow.game_result with integer\n\nselect into inGameRating count(game_result)+1 from users\n\t\twhere game_result > 2984;\n\nquery results in about 100 ms\n\nThere is probably a reason for this but can you tell me about it because I \ncan't fine one\n\nMy function:\n\ncreate or replace function ttt_result(int,int) returns setof tparent_result \nlanguage plpgsql volatile as $$\ndeclare \n\tinOffset alias for $1;\n\tinLimit alias for $2;\n\tinRow tparent_result%rowtype;\n\tinGameResult int := -1;\n\tinGameRating int := -1;\nbegin\n\nfor inRow in \n\tselect \n\t\temail,wynik_gra \n\tfrom \n\t\tkonkurs_uzytkownik \n\torder by wynik_gra desc limit inLimit offset inOffset \nloop\n\tif inGameResult < 0 then -- only for first iteration\n\t\t/* this is fast ~100 ms\n\t\tselect into inGameRating \n\t\t\tcount(game_result)+1 from users\n\t\t\twhere game_result > \t2984;\n\t\t*/\n\t\t/* even if inRow.game_result = 2984 this is very slow ~ 1100 ms!\n\t\tselect into inGameRating count(game_result)+1 from users\n\t\twhere game_result > inRow.game_result;\n\t\t*/\n\t\tinGameResult := inRow.game_result;\n\tend if;\n\t\n\tif inGameResult > inRow.game_result then \n\t\tinGameRating := inGameRating + 1;\n\tend if;\n\n\tinRow.game_rating := inGameRating;\n\tinGameResult := inRow.game_result;\n\treturn next inRow;\n\nend loop;\nreturn;\nend;\n$$;\n-- \nWitold Strzelczyk\[email protected]\n", "msg_date": "Fri, 5 May 2006 16:46:43 +0200", "msg_from": "Witold Strzelczyk <[email protected]>", "msg_from_op": true, "msg_subject": "slow variable against int??" }, { "msg_contents": "If you're trying to come up with ranking then you'll be much happier\nusing a sequence and pulling from it using an ordered select. See lines\n19-27 in http://lnk.nu/cvs.distributed.net/9bu.sql for an example.\nDepending on what you're doing you might not need the temp table.\n\nOn Fri, May 05, 2006 at 04:46:43PM +0200, Witold Strzelczyk wrote:\n> I have a question about my function. I must get user rating by game result. 
\n> This isn't probably a perfect solution but I have one question about \n> \n> select into inGameRating count(game_result)+1 from users\n> \t\twhere game_result > inRow.game_result;\n> \n> This query in function results in about 1100 ms.\n> inRow.game_result is a integer 2984\n> And now if I replace inRow.game_result with integer\n> \n> select into inGameRating count(game_result)+1 from users\n> \t\twhere game_result > 2984;\n> \n> query results in about 100 ms\n> \n> There is probably a reason for this but can you tell me about it because I \n> can't fine one\n> \n> My function:\n> \n> create or replace function ttt_result(int,int) returns setof tparent_result \n> language plpgsql volatile as $$\n> declare \n> \tinOffset alias for $1;\n> \tinLimit alias for $2;\n> \tinRow tparent_result%rowtype;\n> \tinGameResult int := -1;\n> \tinGameRating int := -1;\n> begin\n> \n> for inRow in \n> \tselect \n> \t\temail,wynik_gra \n> \tfrom \n> \t\tkonkurs_uzytkownik \n> \torder by wynik_gra desc limit inLimit offset inOffset \n> loop\n> \tif inGameResult < 0 then -- only for first iteration\n> \t\t/* this is fast ~100 ms\n> \t\tselect into inGameRating \n> \t\t\tcount(game_result)+1 from users\n> \t\t\twhere game_result > \t2984;\n> \t\t*/\n> \t\t/* even if inRow.game_result = 2984 this is very slow ~ 1100 ms!\n> \t\tselect into inGameRating count(game_result)+1 from users\n> \t\twhere game_result > inRow.game_result;\n> \t\t*/\n> \t\tinGameResult := inRow.game_result;\n> \tend if;\n> \t\n> \tif inGameResult > inRow.game_result then \n> \t\tinGameRating := inGameRating + 1;\n> \tend if;\n> \n> \tinRow.game_rating := inGameRating;\n> \tinGameResult := inRow.game_result;\n> \treturn next inRow;\n> \n> end loop;\n> return;\n> end;\n> $$;\n> -- \n> Witold Strzelczyk\n> [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:04:02 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow variable against int??" } ]
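For reference, the sequence-based ranking Jim describes looks roughly like the sketch below. Table and column names are taken from the query under discussion (users.email, users.game_result); the temporary sequence name is invented. Two caveats: the numbering order over a plain subquery is not strictly guaranteed by the SQL standard (the script Jim links to apparently materializes the ordered rows into a temp table first), and tied results get distinct ranks, unlike the original function:

    CREATE TEMP SEQUENCE rank_seq;

    SELECT nextval('rank_seq') AS game_rating,
           ss.email, ss.game_result
      FROM (SELECT email, game_result
              FROM users
             ORDER BY game_result DESC) AS ss;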
[ { "msg_contents": "Thanks for a great explanation Craig, makes more sense now.\n\nTim\n\n -----Original Message-----\nFrom: \tCraig A. James [mailto:[email protected]] \nSent:\tFriday, May 05, 2006 10:51 AM\nTo:\tmcelroy, tim\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues?\n\nmcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not \n> the right information and only confused folks, tired I guess. When I \n> say 'in use' I am referring to the 'used' column. Thanks all who have \n> responded to this inquiry, I appreciate it.\n> \n> Here's free from PROD001:\n> [root@wbibsngwyprod001 kernel]# free -k -t\n> total used free shared buffers cached\n> Mem: 7643536 6975772 667764 0 165496 5393396\n> -/+ buffers/cache: 1416880 6226656\n> Swap: 8185108 5208 8179900\n> Total: 15828644 6980980 8847664\n\nOn Linux (unlike most Unix systems), \"used\" includes both processes AND the\nkernel's file-system buffers, which means \"used\" will almost always be close\nto 100%. Starting with a freshly-booted system, you can issue almost any\ncommand that scans files, and \"used\" will go up and STAY at nearly 100% of\nmemory. For example, reboot and try \"tar cf - / >/dev/null\" and you'll see\nthe same sort of \"used\" numbers.\n\nIn My Humble Opinion, this is a mistake in Linux. This confuses just about\neveryone the first time they see it (including me), because the file-system\nbuffers are dynamic and will be relenquished by the kernel if another\nprocess needs memory. On Unix systems, \"used\" means, \"someone else is using\nit and you can't have it\", which is what most of us really want to know.\n\nCraig\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues?\n\n\nThanks for a great explanation Craig, makes more sense now.\n\nTim\n\n -----Original Message-----\nFrom:   Craig A. James [mailto:[email protected]] \nSent:   Friday, May 05, 2006 10:51 AM\nTo:     mcelroy, tim\nCc:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues?\n\nmcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not \n> the right information and only confused folks, tired I guess.  When I \n> say 'in use' I am referring to the 'used' column.  Thanks all who have \n> responded to this inquiry, I appreciate it.\n> \n> Here's free from PROD001:\n> [root@wbibsngwyprod001 kernel]# free -k -t\n>              total       used       free     shared    buffers     cached\n> Mem:       7643536    6975772     667764          0     165496    5393396\n> -/+ buffers/cache:    1416880    6226656\n> Swap:      8185108       5208    8179900\n> Total:    15828644    6980980    8847664\n\nOn Linux (unlike most Unix systems), \"used\" includes both processes AND the kernel's file-system buffers, which means \"used\" will almost always be close to 100%.  Starting with a freshly-booted system, you can issue almost any command that scans files, and \"used\" will go up and STAY at nearly 100% of memory.  For example, reboot and try \"tar cf - / >/dev/null\" and you'll see the same sort of \"used\" numbers.\nIn My Humble Opinion, this is a mistake in Linux.  This confuses just about everyone the first time they see it (including me), because the file-system buffers are dynamic and will be relenquished by the kernel if another process needs memory.  
On Unix systems, \"used\" means, \"someone else is using it and you can't have it\", which is what most of us really want to know.\nCraig", "msg_date": "Fri, 5 May 2006 10:48:01 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues?" } ]
[ { "msg_contents": "Hi,\nWe've got a C function that we use here and we find that for every\nconnection, the first run of the function is much slower than any\nsubsequent runs. ( 50ms compared to 8ms)\n\nBesides using connection pooling, are there any options to improve\nperformance?\n\n-Adam\n", "msg_date": "Fri, 5 May 2006 15:47:53 -0700", "msg_from": "\"Adam Palmblad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Dynamically loaded C function performance" }, { "msg_contents": "On Fri, May 05, 2006 at 03:47:53PM -0700, Adam Palmblad wrote:\n> Hi,\n> We've got a C function that we use here and we find that for every\n> connection, the first run of the function is much slower than any\n> subsequent runs. ( 50ms compared to 8ms)\n> \n> Besides using connection pooling, are there any options to improve\n> performance?\n\nIn my experience, connection startup takes a heck of a lot longer than\n50ms, so why are you worrying about 50ms for the first run of a\nfunction?\n\nBTW, sorry, but I don't know a way to speed this up, either.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:05:23 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamically loaded C function performance" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Fri, May 05, 2006 at 03:47:53PM -0700, Adam Palmblad wrote:\n> \n>>Hi,\n>>We've got a C function that we use here and we find that for every\n>>connection, the first run of the function is much slower than any\n>>subsequent runs. ( 50ms compared to 8ms)\n>>\n>>Besides using connection pooling, are there any options to improve\n>>performance?\n> \n> In my experience, connection startup takes a heck of a lot longer than\n> 50ms, so why are you worrying about 50ms for the first run of a\n> function?\n> \n> BTW, sorry, but I don't know a way to speed this up, either.\n\nI think Tom nailed the solution already in a nearby reply -- see \npreload_libraries on this page:\n\nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html\n\nJoe\n", "msg_date": "Thu, 11 May 2006 15:49:45 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamically loaded C function performance" } ]
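For anyone finding this thread later, the setting on the page Joe cites is a postgresql.conf entry read at server start, so the shared object is loaded at postmaster start instead of on the first call in each new connection. A hypothetical example -- the library name is made up:

    # postgresql.conf, PostgreSQL 8.1.x -- takes effect at server restart
    preload_libraries = '$libdir/mylib'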
[ { "msg_contents": "Thank you again to all who have offered advice, suggestions, tips and offers\nof support/training. From the gist of some of the latter posts I must come\noff as a rank rookie, lol. Deservedly so as I've only been working with\npostgres for 7 months and in the linux/unix world a year or so. My\nbackground is Stratus SysAdmin (which I still do in addition to DBA) so the\ntransition is an on-going process.\n\nThat said, at this time I'll put the thread to rest as my company just\ndoubled the memory to 16GB, isn't that how it always works out anyway ;)\nI'll also be moving the new postgresql.conf settings that were worked out\nwith the patient help of Jim Nasby, thanks again Jim. The DEV box I put\nthose on has shown some improvement. As far as outside support and\ntraining, thank you but no. Probably doesn't show but I did attend a week\nlong PostgreSQL boot camp in January (which I found aimed more to the\ndevelopment side than DBA by the way), but there is no better way to learn\nand understand better than actual day-to-day working experience.\n\nThank you,\nTim McElroy\n\n -----Original Message-----\nFrom: \tJim C. Nasby [mailto:[email protected]] \nSent:\tFriday, May 05, 2006 8:35 PM\nTo:\tmcelroy, tim\nCc:\t'Michael Stone'; [email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not\nthe\n\nDo you have any budget for support or training, either from the company\nselling you the app or a company that provides PostgreSQL support? I\nsuspect some money invested there would result in a lot less\nfrustration. It'd also certainly be cheaper than switching to Oracle.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues?\n\n\nThank you again to all who have offered advice, suggestions, tips and offers of support/training.  From the gist of some of the latter posts I must come off as a rank rookie, lol.  Deservedly so as I've only been working with postgres for 7 months and in the linux/unix world a year or so.  My background is Stratus SysAdmin (which I still do in addition to DBA) so the transition is an on-going process.\nThat said, at this time I'll put the thread to rest as my company just doubled the memory to 16GB, isn't that how it always works out anyway ;)  I'll also be moving the new postgresql.conf settings that were worked out with the patient help of Jim Nasby, thanks again Jim.  The DEV box I put those on has shown some improvement.  As far as outside support and training, thank you but no.  Probably doesn't show but I did attend a week long PostgreSQL boot camp in January (which I found aimed more to the development side than DBA by the way), but there is no better way to learn and understand better than actual day-to-day working experience.\nThank you,\nTim McElroy\n\n -----Original Message-----\nFrom:   Jim C. 
Nasby [mailto:[email protected]] \nSent:   Friday, May 05, 2006 8:35 PM\nTo:     mcelroy, tim\nCc:     'Michael Stone'; [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues?\n\nOn Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:\n> Sorry, been up all night and maybe provided too much information or not the\n\nDo you have any budget for support or training, either from the company\nselling you the app or a company that provides PostgreSQL support? I\nsuspect some money invested there would result in a lot less\nfrustration. It'd also certainly be cheaper than switching to Oracle.\n-- \nJim C. Nasby, Sr. Engineering Consultant      [email protected]\nPervasive Software      http://pervasive.com    work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461", "msg_date": "Sat, 6 May 2006 10:53:54 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On May 6, 2006, at 10:53 AM, mcelroy, tim wrote:\n\n> development side than DBA by the way), but there is no better way \n> to learn\n> and understand better than actual day-to-day working experience.\n\nYeah, I prefer my surgeons to work this way too. training is for the \nbirds.", "msg_date": "Mon, 8 May 2006 11:06:42 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Mon, May 08, 2006 at 11:06:42AM -0400, Vivek Khera wrote:\n> \n> On May 6, 2006, at 10:53 AM, mcelroy, tim wrote:\n> \n> >development side than DBA by the way), but there is no better way \n> >to learn\n> >and understand better than actual day-to-day working experience.\n> \n> Yeah, I prefer my surgeons to work this way too. training is for the \n> birds.\n\nI think you read too quickly past the part where Tim said he'd taking a\nweek-long training class.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:30:56 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "\nOn May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:\n\n>> Yeah, I prefer my surgeons to work this way too. training is for the\n>> birds.\n>\n> I think you read too quickly past the part where Tim said he'd \n> taking a\n> week-long training class.\n\ns/training/apprenticeship/g;\n\n", "msg_date": "Mon, 8 May 2006 15:38:23 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" }, { "msg_contents": "On Mon, May 08, 2006 at 03:38:23PM -0400, Vivek Khera wrote:\n>On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:\n>>>Yeah, I prefer my surgeons to work this way too. training is for the\n>>>birds.\n>>\n>>I think you read too quickly past the part where Tim said he'd \n>>taking a\n>>week-long training class.\n>\n>s/training/apprenticeship/g;\n\nOf course, the original poster did say that hands-on was the best way to \nlearn. What is apprenticeship but a combination of training and \nexperience. Are you just sniping for fun?\n\nMike Stone\n", "msg_date": "Mon, 08 May 2006 17:17:00 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory and/or cache issues?" } ]
[ { "msg_contents": "I'm watching a long, painfully slow 60GB load from pg_dump \n(8.1.2), and noticing it's jumping back and forth from different \ntables. I assume this is the index creation order showing up.\n\nWould it make more sense to have pg_dump dump indexes grouped by \nthe table? That way, if a table got loaded into cache for one \nindex creation, it might still be there for the immediatly \nfollowing index creations on the same table...\n\nEd\n", "msg_date": "Sat, 6 May 2006 19:15:55 -0600", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump index creation order" }, { "msg_contents": "On Saturday May 6 2006 7:15 pm, Ed L. wrote:\n> I'm watching a long, painfully slow 60GB load from pg_dump\n> (8.1.2), and noticing it's jumping back and forth from\n> different tables. I assume this is the index creation order\n> showing up.\n>\n> Would it make more sense to have pg_dump dump indexes grouped\n> by the table? That way, if a table got loaded into cache for\n> one index creation, it might still be there for the immediatly\n> following index creations on the same table...\n\nAnd would same idea work for ordering of constraint adding...?\n\nEd\n", "msg_date": "Sat, 6 May 2006 21:09:20 -0600", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump index creation order" }, { "msg_contents": "On Sat, May 06, 2006 at 07:15:55PM -0600, Ed L. wrote:\n> I'm watching a long, painfully slow 60GB load from pg_dump \n> (8.1.2), and noticing it's jumping back and forth from different \n> tables. I assume this is the index creation order showing up.\n> \n> Would it make more sense to have pg_dump dump indexes grouped by \n> the table? That way, if a table got loaded into cache for one \n> index creation, it might still be there for the immediatly \n> following index creations on the same table...\n\nIt might for smaller tables that will fit in cache, but it depends on\nhow much memory is used for sorting. In fact, I think it would be best\nto add the indexes immediately after loading the table with data.\n\nThis won't help with adding indexes on large tables though, unless\nthe indexes were created simultaneously, and even that might not be a\nwin.\n\nIt would be a win to add some constraints at the same time, but RI can't\nbe added until all tables are indexed.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Sun, 7 May 2006 12:08:10 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump index creation order" } ]
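As a rough sketch of Jim's "add the indexes immediately after loading the table" idea, a hand-edited restore script would interleave the per-table steps instead of doing every COPY first and every CREATE INDEX at the end. Table, file and index names here are invented:

    -- load and index one table while its blocks are still likely cached
    COPY orders FROM '/tmp/orders.copy';
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    CREATE INDEX orders_date_idx ON orders (order_date);

    -- then move on to the next table
    COPY order_lines FROM '/tmp/order_lines.copy';
    CREATE INDEX order_lines_order_idx ON order_lines (order_id);

Whether this wins still depends on the table fitting in cache and on how much memory is available for the index sorts, as noted above.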
[ { "msg_contents": "\n\n\n\nI'm facing a very weird problem.\nRecently our database run very slow when execute Delete/Select statement\nfor a few tables only..\nThe largest table only have 50K rows of data.\n\nWhen I run the statement from pgAdmin although it is slow but not as slow\nas run from webapp.\nWhen I run the statement from webapp, it become extremely slow.\nEven a simple delete statement will takes 20-40 minutes to complete.\n\nI already vacuum those tables with full option but it still the same.\n\nWhat could be the possible causes of this problem?\nHow can I solve it?\n\nCPU - Intel Xeon 2.40 GHz\nMemory - 1.5G\nPostgresql version: 7.2.2\n\nThanks.\n\n", "msg_date": "Mon, 8 May 2006 16:47:44 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "extremely slow when execute select/delete for certain tables only..." }, { "msg_contents": "Hi, Kah,\n\[email protected] wrote:\n\n> I already vacuum those tables with full option but it still the same.\n> \n> What could be the possible causes of this problem?\n> How can I solve it?\n> \n> CPU - Intel Xeon 2.40 GHz\n> Memory - 1.5G\n> Postgresql version: 7.2.2\n\nFirst, you should consider to upgrade your PostgreSQL server to a newer\nversion, at least to 7.2.8 which fixes some critical bugs.\n\nBut it will be much better to upgrade to current 8.1 version, as I think\nthat your problem is caused by index bloat, and indices are handled much\nbetter in 8.1.\n\nTry recreating your indices using REINDEX command.\n\nHTH,\nMarkus\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Mon, 08 May 2006 11:21:16 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremely slow when execute select/delete for certain" }, { "msg_contents": "On Mon, May 08, 2006 at 11:21:16AM +0200, Markus Schaber wrote:\n> Hi, Kah,\n> \n> [email protected] wrote:\n> \n> > I already vacuum those tables with full option but it still the same.\n> > \n> > What could be the possible causes of this problem?\n> > How can I solve it?\n> > \n> > CPU - Intel Xeon 2.40 GHz\n> > Memory - 1.5G\n> > Postgresql version: 7.2.2\n> \n> First, you should consider to upgrade your PostgreSQL server to a newer\n> version, at least to 7.2.8 which fixes some critical bugs.\n\nNote that 7.2.x isn't supported anymore, and there's data loss bugs that\ncould affect it. You should at least move up to 7.4.x.\n\n> But it will be much better to upgrade to current 8.1 version, as I think\n> that your problem is caused by index bloat, and indices are handled much\n> better in 8.1.\n> \n> Try recreating your indices using REINDEX command.\n\nAnd if that doesn't work we need at least the output of EXPLAIN, if not\nEXPLAIN ANALYZE.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:27:05 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: extremely slow when execute select/delete for certain" } ]
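A concrete way to act on the two suggestions above, with placeholder table, column and value names since the real schema and query have not been posted:

    REINDEX TABLE slow_table;      -- rebuild possibly bloated indexes
    VACUUM ANALYZE slow_table;     -- reclaim dead space, refresh planner stats

    -- capture the plan Jim asked for without keeping the delete:
    BEGIN;
    EXPLAIN ANALYZE DELETE FROM slow_table WHERE some_col = 42;
    ROLLBACK;

(EXPLAIN ANALYZE actually executes the statement, hence the transaction and rollback.)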
[ { "msg_contents": "I have small database. However the following query takes 38 (!) seconds to\nrun.\nHow to speed it up (preferably not changing table structures but possibly\ncreating indexes) ?\n\nAndrus.\n\nset search_path to public,firma1;\nexplain analyze select bilkaib.summa from BILKAIB join KONTO CRKONTO ON\nbilkaib.cr=crkonto.kontonr AND\n crkonto.iseloom='A'\n join KONTO DBKONTO ON bilkaib.db=dbkonto.kontonr AND\n dbkonto.iseloom='A'\n left join klient on bilkaib.klient=klient.kood\n where ( bilkaib.cr LIKE '3'||'%' OR\n bilkaib.db LIKE '3'||'%' )\n AND bilkaib.kuupaev BETWEEN '2006-01-01' AND '2006-03-31'\nAND ( kuupaev='20060101' OR (cr!='00' and db!='00'))\nAND ( 3 IN(2,3) or (NOT bilkaib.ratediffer and (\n TRIM(bilkaib.masin)='' or bilkaib.masin IS NULL or\n bilkaib.alusdok not in ('KV', 'DU', 'DJ') or\nbilkaib.andmik is NULL or bilkaib.alusdok is NULL or\nsubstring(andmik from 1 for 9)!='Kursivahe'\n ))) and\n( position(bilkaib.laustyyp IN 'x')=0 or\nbilkaib.laustyyp is null or bilkaib.laustyyp=' ')\n\n\n\"Nested Loop Left Join (cost=23.30..1964.10 rows=1 width=10) (actual\ntime=7975.470..38531.724 rows=3151 loops=1)\"\n\" -> Nested Loop (cost=23.30..1958.08 rows=1 width=26) (actual\ntime=7975.407..37978.718 rows=3151 loops=1)\"\n\" Join Filter: (\"inner\".cr = \"outer\".kontonr)\"\n\" -> Seq Scan on konto crkonto (cost=0.00..23.30 rows=1 width=44)\n(actual time=0.135..13.913 rows=219 loops=1)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\" -> Hash Join (cost=23.30..1934.64 rows=11 width=40) (actual\ntime=1.650..155.734 rows=3151 loops=219)\"\n\" Hash Cond: (\"outer\".db = \"inner\".kontonr)\"\n\" -> Index Scan using bilkaib_kuupaev_idx on bilkaib\n(cost=0.00..1897.10 rows=2826 width=54) (actual time=1.628..111.216\nrows=3151 loops=219)\"\n\" Index Cond: ((kuupaev >= '2006-01-01'::date) AND\n(kuupaev <= '2006-03-31'::date))\"\n\" Filter: (((cr ~~ '3%'::text) OR (db ~~ '3%'::text)) AND\n((kuupaev = '2006-01-01'::date) OR ((cr <> '00'::bpchar) AND (db <>\n'00'::bpchar))) AND ((\"position\"('x'::text, (laustyyp)::text) = 0) OR\n(laustyyp IS NULL) OR (laustyyp = ' '::bpc (..)\"\n\" -> Hash (cost=23.30..23.30 rows=1 width=44) (actual\ntime=2.278..2.278 rows=219 loops=1)\"\n\" -> Seq Scan on konto dbkonto (cost=0.00..23.30 rows=1\nwidth=44) (actual time=0.017..1.390 rows=219 loops=1)\"\n\" Filter: (iseloom = 'A'::bpchar)\"\n\" -> Index Scan using klient_pkey on klient (cost=0.00..6.01 rows=1\nwidth=52) (actual time=0.138..0.158 rows=1 loops=3151)\"\n\" Index Cond: (\"outer\".klient = klient.kood)\"\n\"Total runtime: 38561.745 ms\"\n\n\n\n\n\nCREATE TABLE firma1.bilkaib\n(\n id int4 NOT NULL DEFAULT nextval('bilkaib_id_seq'::regclass),\n kuupaev date NOT NULL,\n db char(10) NOT NULL,\n dbobjekt char(10),\n cr char(10) NOT NULL,\n crobjekt char(10),\n summa numeric(14,2) NOT NULL,\n raha char(3) NOT NULL,\n masin char(5),\n klient char(12),\n alusdok char(2),\n dokumnr int4 NOT NULL DEFAULT nextval('bilkaib_dokumnr_seq'::regclass),\n db2objekt char(10),\n cr2objekt char(10),\n db3objekt char(10),\n db4objekt char(10),\n db5objekt char(10),\n db6objekt char(10),\n db7objekt char(10),\n db8objekt char(10),\n db9objekt char(10),\n cr3objekt char(10),\n cr4objekt char(10),\n cr5objekt char(10),\n cr6objekt char(10),\n cr7objekt char(10),\n cr8objekt char(10),\n cr9objekt char(10),\n exchrate numeric(13,8),\n doknr char(25),\n andmik text,\n laustyyp char(1),\n ratediffer ebool,\n adoknr char(25),\n jarjeknr numeric(7),\n CONSTRAINT bilkaib_pkey PRIMARY KEY (id),\n CONSTRAINT 
bilkaib_alusdok_fkey FOREIGN KEY (alusdok)\n REFERENCES firma1.alusdok (alusdok) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr2objekt_fkey FOREIGN KEY (cr2objekt)\n REFERENCES firma1.yksus2 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr3objekt_fkey FOREIGN KEY (cr3objekt)\n REFERENCES firma1.yksus3 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr4objekt_fkey FOREIGN KEY (cr4objekt)\n REFERENCES firma1.yksus4 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr5objekt_fkey FOREIGN KEY (cr5objekt)\n REFERENCES firma1.yksus5 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr6objekt_fkey FOREIGN KEY (cr6objekt)\n REFERENCES firma1.yksus6 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr7objekt_fkey FOREIGN KEY (cr7objekt)\n REFERENCES firma1.yksus7 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr8objekt_fkey FOREIGN KEY (cr8objekt)\n REFERENCES firma1.yksus8 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr9objekt_fkey FOREIGN KEY (cr9objekt)\n REFERENCES firma1.yksus9 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_cr_fkey FOREIGN KEY (cr)\n REFERENCES firma1.konto (kontonr) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_crobjekt_fkey FOREIGN KEY (crobjekt)\n REFERENCES firma1.yksus1 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db2objekt_fkey FOREIGN KEY (db2objekt)\n REFERENCES firma1.yksus2 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db3objekt_fkey FOREIGN KEY (db3objekt)\n REFERENCES firma1.yksus3 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db4objekt_fkey FOREIGN KEY (db4objekt)\n REFERENCES firma1.yksus4 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db5objekt_fkey FOREIGN KEY (db5objekt)\n REFERENCES firma1.yksus5 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db6objekt_fkey FOREIGN KEY (db6objekt)\n REFERENCES firma1.yksus6 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db7objekt_fkey FOREIGN KEY (db7objekt)\n REFERENCES firma1.yksus7 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db8objekt_fkey FOREIGN KEY (db8objekt)\n REFERENCES firma1.yksus8 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db9objekt_fkey FOREIGN KEY (db9objekt)\n REFERENCES firma1.yksus9 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_db_fkey FOREIGN KEY (db)\n REFERENCES firma1.konto (kontonr) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY 
IMMEDIATE,\n CONSTRAINT bilkaib_dbobjekt_fkey FOREIGN KEY (dbobjekt)\n REFERENCES firma1.yksus1 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_klient_fkey FOREIGN KEY (klient)\n REFERENCES firma1.klient (kood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_raha_fkey FOREIGN KEY (raha)\n REFERENCES raha (raha) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT bilkaib_id_check CHECK (id > 0)\n)\nWITHOUT OIDS;\n\nCREATE INDEX bilkaib_dokumnr_idx ON firma1.bilkaib USING btree (dokumnr);\n\nCREATE INDEX bilkaib_kuupaev_idx ON firma1.bilkaib USING btree (kuupaev);\n\n\nCREATE TABLE firma1.konto\n(\n kontonr char(10) NOT NULL,\n tyyp char(1) NOT NULL,\n klienkaupa ebool,\n arvekaupa ebool,\n objekt1 char(1),\n objekt2 char(1),\n objekt3 char(1),\n objekt4 char(1),\n objekt5 char(1),\n objekt6 char(1),\n objekt7 char(1),\n objekt8 char(1),\n objekt9 char(1),\n tekst char(55),\n rustekst char(55),\n engtekst char(55),\n fintekst char(55),\n lvltekst char(55),\n raha char(3) NOT NULL,\n kontoklass char(10),\n grupp char(13),\n klient char(12),\n iseloom char(1),\n kontokl2 char(10),\n kontokl3 char(10),\n eelklassif char(10),\n klassif8 char(10),\n rid3obj char(1),\n rid4obj char(1),\n koondkonto char(10),\n kaibedrida char(6),\n CONSTRAINT konto_pkey PRIMARY KEY (kontonr),\n CONSTRAINT konto_klassif8_fkey FOREIGN KEY (klassif8)\n REFERENCES firma1.yksus8 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT konto_klient_fkey FOREIGN KEY (klient)\n REFERENCES firma1.klient (kood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT konto_kontokl2_fkey FOREIGN KEY (kontokl2)\n REFERENCES bilskeem2 (kontoklass) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT konto_kontokl3_fkey FOREIGN KEY (kontokl3)\n REFERENCES bilskeem3 (kontoklass) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT konto_kontoklass_fkey FOREIGN KEY (kontoklass)\n REFERENCES bilskeem1 (kontoklass) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT konto_raha_fkey FOREIGN KEY (raha)\n REFERENCES raha (raha) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE\n)\nWITHOUT OIDS;\n\nCREATE TRIGGER konto_trig BEFORE INSERT OR UPDATE OR DELETE\n ON firma1.konto FOR EACH STATEMENT EXECUTE PROCEDURE setlastchange();\n\n\nCREATE TABLE firma1.klient\n(\n kood char(12) NOT NULL DEFAULT nextval('klient_kood_seq'::regclass),\n nimi char(70),\n a_a char(35),\n p_kood char(10),\n regnr char(12),\n vatpayno char(15),\n piirkond char(30),\n postiindek char(10),\n tanav char(30),\n kontaktisi char(30),\n telefon char(25),\n faks char(25),\n email char(60),\n infomail char(60),\n wwwpage char(50),\n liik char(10),\n viitenr char(20),\n riik char(20),\n riik2 char(2),\n riigikood char(3),\n hinnak char(5),\n erihinnak char(5),\n myygikood char(4),\n objekt2 char(10),\n objekt5 char(10),\n objekt7 char(10),\n maksetin char(5),\n omakseti char(5),\n krediit numeric(12,2),\n ostukredii numeric(12,2),\n masin char(5),\n info text,\n maksja char(12),\n \"timestamp\" char(14) NOT NULL DEFAULT to_char(now(),\n'YYYYMMDDHH24MISS'::text),\n atimestamp char(14) NOT NULL DEFAULT to_char(now(),\n'YYYYMMDDHH24MISS'::text),\n 
elanikud numeric(3),\n pindala numeric(7,2),\n grmaja char(10),\n apindala numeric(7,2),\n kpindala numeric(7,2),\n idmakett char(36),\n tulemus char(100),\n omandisuhe char(1),\n username char(10),\n changedby char(10),\n parool char(20),\n hinnaale char(4),\n mitteakt ebool,\n kontakteer date,\n klikaart char(16),\n mhprotsent numeric(5,1),\n aadress text,\n swift char(20),\n pankaad char(20),\n _nimi char(70),\n CONSTRAINT klient_pkey PRIMARY KEY (kood),\n CONSTRAINT klient_changedby_fkey FOREIGN KEY (changedby)\n REFERENCES kasutaja (kasutaja) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_grmaja_fkey FOREIGN KEY (grmaja)\n REFERENCES firma1.yksus1 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_hinnak_fkey FOREIGN KEY (hinnak)\n REFERENCES firma1.hkpais (hinnak) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_idmakett_fkey FOREIGN KEY (idmakett)\n REFERENCES makett (guid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_liik_fkey FOREIGN KEY (liik)\n REFERENCES klliik (liik) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_maksetin_fkey FOREIGN KEY (maksetin)\n REFERENCES maksetin (maksetin) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_maksja_fkey FOREIGN KEY (maksja)\n REFERENCES firma1.klient (kood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_myygikood_fkey FOREIGN KEY (myygikood)\n REFERENCES firma1.myygikoo (myygikood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_objekt2_fkey FOREIGN KEY (objekt2)\n REFERENCES firma1.yksus2 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_objekt5_fkey FOREIGN KEY (objekt5)\n REFERENCES firma1.yksus5 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_objekt7_fkey FOREIGN KEY (objekt7)\n REFERENCES firma1.yksus7 (yksus) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_omakseti_fkey FOREIGN KEY (omakseti)\n REFERENCES maksetin (maksetin) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_p_kood_fkey FOREIGN KEY (p_kood)\n REFERENCES pank (kood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_riik2_fkey FOREIGN KEY (riik2)\n REFERENCES riik (kood) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_username_fkey FOREIGN KEY (username)\n REFERENCES kasutaja (kasutaja) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,\n CONSTRAINT klient_email_check CHECK (rtrim(email::text) 
~*\nE'^[^@]*@(?:[^@]*\\\\.)?[a-z0-9_-]+\\\\.(?:a[defgilmnoqrstuwz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvxyz]|d[ejkmoz]|e[ceghrst]|f[ijkmorx]|g[abdefhilmnpqrstuwy]|h[kmnrtu]|i[delnoqrst]|j[mop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrtwy]|qa|r[eouw]|s[abcdeghijklmnortvyz]|t[cdfghjkmnoprtvwz]|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]|edu|com|net|org|gov|mil|info|biz|coop|museum|aero|name|pro|mobi|arpa)$'::text)\n)\nWITHOUT OIDS;\n\n\nCREATE UNIQUE INDEX klient_nimi_unique_idx\n ON firma1.klient USING btree (lower(nimi::text));\n\n\nServer:\n\n\"PostgreSQL 8.1.3 on i386-portbld-freebsd5.4, compiled by GCC cc (GCC) 3.4.2\n[FreeBSD] 20040728\"\n\nClient: ODBC driver in XP \n\n\n", "msg_date": "Mon, 8 May 2006 13:59:39 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query runs 38 seconds for small database!" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> I have small database. However the following query takes 38 (!) seconds to\n> run.\n> How to speed it up (preferably not changing table structures but possibly\n> creating indexes) ?\n\nANALYZE would probably help.\n\n> \" -> Seq Scan on konto dbkonto (cost=0.00..23.30 rows=1\n> width=44) (actual time=0.017..1.390 rows=219 loops=1)\"\n> \" Filter: (iseloom = 'A'::bpchar)\"\n\nAnytime you see a row estimate that far off about a simple single-column\ncondition, it means your statistics are out-of-date.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 10:46:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database! " }, { "msg_contents": ">> \" -> Seq Scan on konto dbkonto (cost=0.00..23.30 \n>> rows=1\n>> width=44) (actual time=0.017..1.390 rows=219 loops=1)\"\n>> \" Filter: (iseloom = 'A'::bpchar)\"\n>\n> Anytime you see a row estimate that far off about a simple single-column\n> condition, it means your statistics are out-of-date.\n\nThan you. I addded ANALYZE command and now query works fast.\n\nI see autovacuum: processing database \"mydb\" messages in log file and I have\n\nstats_start_collector = on\nstats_row_level = on\n\nin config file. Why statistics was out-of-date ?\n\nAndrus.\n\n\nMy postgres.conf file (only uncommented settings are listed):\n\nlisten_addresses = '*'\nmax_connections = 40\nshared_buffers = 1000\nlog_destination = 'stderr'\nredirect_stderr = on # Enable capturing of stderr into log\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\nlog_rotation_age = 1440 # Automatic rotation of logfiles will\nlog_rotation_size = 10240 # Automatic rotation of logfiles will\nlog_min_error_statement = 'warning' # Values in order of increasing \nseverity:\nsilent_mode = on\nlog_line_prefix = \"'%t %u %d %h %p %i %l %x %q'\"\nstats_start_collector = on\nstats_row_level = on\nautovacuum = on # enable autovacuum subprocess?\nlc_messages = 'C' # locale for system error message\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n\n", "msg_date": "Mon, 8 May 2006 19:15:36 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> I see autovacuum: processing database \"mydb\" messages in log file and I have\n> stats_start_collector = on\n> stats_row_level = on\n> in config file. 
Why statistics was out-of-date ?\n\nThe default autovac thresholds are not very aggressive; this table was\nprobably not large enough to get selected for analysis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 12:27:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database! " }, { "msg_contents": "> The default autovac thresholds are not very aggressive; this table was\n> probably not large enough to get selected for analysis.\n\nTom,\n\nthank you.\nExcellent.\n\nAndrus.\n\n\n", "msg_date": "Mon, 8 May 2006 20:03:38 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "On Mon, May 08, 2006 at 08:03:38PM +0300, Andrus wrote:\n> > The default autovac thresholds are not very aggressive; this table was\n> > probably not large enough to get selected for analysis.\n> \n> Tom,\n> \n> thank you.\n> Excellent.\n\nBTW, you might want to cut all the autovac thresholds in half; that's\nwhat I typically do.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:28:16 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "> BTW, you might want to cut all the autovac thresholds in half; that's\n> what I typically do.\n\nI added ANALYZE command to my procedure which creates and loads data to \npostgres database\nfrom other DBMS. This runs only onvce after installing my application. I \nhope this is sufficient.\nIf default threshold is so conservative values I expect there is some reason \nfor it.\n\nAndrus. \n\n\n", "msg_date": "Mon, 8 May 2006 20:36:42 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "On Mon, May 08, 2006 at 08:36:42PM +0300, Andrus wrote:\n> > BTW, you might want to cut all the autovac thresholds in half; that's\n> > what I typically do.\n> \n> I added ANALYZE command to my procedure which creates and loads data to \n> postgres database\n> from other DBMS. This runs only onvce after installing my application. I \n> hope this is sufficient.\n> If default threshold is so conservative values I expect there is some reason \n> for it.\n\nThe only reason for being so conservative that I'm aware of was that it\nwas a best guess. Everyone I've talked to cuts the defaults down by at\nleast a factor of 2, sometimes even more.\n\nBTW, these parameters are already tweaked from what we started with in\ncontrib/pg_autovacuum. It would allow a table to grow to 2x larger than\nit should be before vacuuming, as opposed to the 40% that the current\nsettings allow. But even there, is there any real reason you want to\nhave 40% bloat? To make matters worse, those settings ensure that all\nbut the smallest databases will suffer runaway bloat unless you bump up\nthe FSM settings.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:46:04 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "> The only reason for being so conservative that I'm aware of was that it\n> was a best guess. Everyone I've talked to cuts the defaults down by at\n> least a factor of 2, sometimes even more.\n\nCan we ask that Tom will change default values to 2 times smaller in 8.1.4 ?\n\n> BTW, these parameters are already tweaked from what we started with in\n> contrib/pg_autovacuum. It would allow a table to grow to 2x larger than\n> it should be before vacuuming, as opposed to the 40% that the current\n> settings allow. But even there, is there any real reason you want to\n> have 40% bloat? To make matters worse, those settings ensure that all\n> but the smallest databases will suffer runaway bloat unless you bump up\n recprd> the FSM settings.\n\nI created empty table konto and loaded more that 219 records to it during \ndatabase creation.\nSo it seems that if table grows from zero to more than 219 times larger then \nit was still not processed.\n\nAndrus. \n\n\n", "msg_date": "Mon, 8 May 2006 21:10:07 +0300", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query runs 38 seconds for small database!" }, { "msg_contents": "On Monday 08 May 2006 14:10, Andrus wrote:\n> > The only reason for being so conservative that I'm aware of was that it\n> > was a best guess. Everyone I've talked to cuts the defaults down by at\n> > least a factor of 2, sometimes even more.\n>\n> Can we ask that Tom will change default values to 2 times smaller in 8.1.4\n> ?\n>\n> > BTW, these parameters are already tweaked from what we started with in\n> > contrib/pg_autovacuum. It would allow a table to grow to 2x larger than\n> > it should be before vacuuming, as opposed to the 40% that the current\n> > settings allow. But even there, is there any real reason you want to\n> > have 40% bloat? To make matters worse, those settings ensure that all\n> > but the smallest databases will suffer runaway bloat unless you bump up\n>\n> recprd> the FSM settings.\n>\n> I created empty table konto and loaded more that 219 records to it during\n> database creation.\n> So it seems that if table grows from zero to more than 219 times larger\n> then it was still not processed.\n\nThat's because you need at least 500 rows for analyze and 100 for a vacuum, \n(autovacuum_vacuum_threshold = 1000, autovacuum_analyze_threshold = 500).\n\n>\n> Andrus.\n\njan\n\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Mon, 8 May 2006 15:19:39 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database!" 
}, { "msg_contents": "Jan de Visser <[email protected]> writes:\n> On Monday 08 May 2006 14:10, Andrus wrote:\n>> I created empty table konto and loaded more that 219 records to it during\n>> database creation.\n>> So it seems that if table grows from zero to more than 219 times larger\n>> then it was still not processed.\n\n> That's because you need at least 500 rows for analyze and 100 for a vacuum, \n> (autovacuum_vacuum_threshold = 1000, autovacuum_analyze_threshold = 500).\n\nThis crystallizes something that's been bothering me for awhile,\nactually: why do the \"threshold\" variables exist at all? If we took\nthem out, or at least made their default values zero, then the autovac\ncriteria would simply be \"vacuum or analyze if at least X% of the table\nhas changed\" (where X is set by the \"scale_factor\" variables). Which\nseems intuitively reasonable. As it stands, the thresholds seem to bias\nautovac against ever touching small tables at all ... but, as this\nexample demonstrates, a fairly small table can still kill your query\nperformance if the planner knows nothing about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 15:48:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database! " }, { "msg_contents": "Tom Lane wrote:\n> Jan de Visser <[email protected]> writes:\n> > On Monday 08 May 2006 14:10, Andrus wrote:\n> >> I created empty table konto and loaded more that 219 records to it during\n> >> database creation.\n> >> So it seems that if table grows from zero to more than 219 times larger\n> >> then it was still not processed.\n> \n> > That's because you need at least 500 rows for analyze and 100 for a vacuum, \n> > (autovacuum_vacuum_threshold = 1000, autovacuum_analyze_threshold = 500).\n> \n> This crystallizes something that's been bothering me for awhile,\n> actually: why do the \"threshold\" variables exist at all?\n\nMatthew would know about that -- he invented them. I take no\nresponsability :-)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 8 May 2006 16:02:59 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query runs 38 seconds for small database!" } ]
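Pulling the thread's advice together: the immediate fix is simply an ANALYZE after the bulk load, and the longer-term tuning is to cut the 8.1 autovacuum defaults that Jan quoted roughly in half, per Jim's rule of thumb. The halved values below are just the stock defaults divided by two, not tested recommendations:

    -- run once, right after the initial data load
    ANALYZE;

    # postgresql.conf -- 8.1 defaults are 1000 / 500 / 0.4 / 0.2
    autovacuum_vacuum_threshold = 500
    autovacuum_analyze_threshold = 250
    autovacuum_vacuum_scale_factor = 0.2
    autovacuum_analyze_scale_factor = 0.1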
[ { "msg_contents": "Why does this query take so long? (PostgreSQL 8.0.3, FC4)\nHopefully I have provided enough information below.\n\nLOG: statement: SELECT * FROM x WHERE f IN \n($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,\\\n$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63\\\n,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87,$88,$89,$90,$91,$92,$93,$94,$95,$96,$97,$98,$99,$100,$101,\\\n$102,$103,$104,$105,$106,$107,$108,$109,$110,$111,$112,$113,$114,$115,$116,$117,$118,$119,$120,$121,$122,$123,$124,$125,$126,$127,$128,$129,$130,$131,$132,\\\n$133,$134,$135,$136,$137,$138,$139,$140,$141,$142,$143,$144,$145,$146,$147,$148,$149,$150,$151,$152,$153,$154,$155,$156,$157,$158,$159,$160,$161,$162,$163,\\\n$164,$165,$166,$167,$168,$169,$170,$171,$172,$173,$174,$175,$176,$177,$178,$179,$180,$181,$182,$183,$184,$185,$186,$187,$188,$189,$190,$191,$192,$193,$194,\\\n$195,$196,$197,$198,$199,$200,$201,$202,$203,$204,$205,$206,$207,$208,$209,$210,$211,$212,$213,$214,$215,$216,$217,$218,$219,$220,$221,$222,$223,$224,$225,\\\n$226,$227,$228,$229,$230,$231,$232,$233,$234,$235,$236,$237,$238,$239,$240,$241,$242,$243,$244,$245,$246,$247,$248,$249,$250,$251,$252,$253,$254,$255,$256,\\\n$257,$258,$259,$260,$261,$262,$263,$264,$265,$266,$267,$268,$269,$270,$271,$272,$273,$274,$275,$276,$277,$278,$279,$280,$281,$282,$283,$284,$285,$286,$287,\\\n$288,$289,$290,$291,$292,$293,$294,$295,$296,$297,$298,$299,$300,$301,$302,$303,$304,$305,$306,$307,$308,$309,$310,$311,$312,$313,$314,$315,$316,$317,$318,\\\n$319,$320,$321,$322,$323,$324,$325,$326,$327,$328,$329,$330,$331,$332,$333,$334,$335,$336,$337,$338,$339,$340,$341,$342,$343,$344,$345,$346,$347,$348,$349,\\\n$350,$351,$352,$353,$354,$355,$356,$357,$358,$359,$360,$361,$362,$363,$364,$365,$366,$367,$368,$369,$370,$371,$372,$373,$374,$375,$376,$377,$378,$379,$380,\\\n$381,$382,$383,$384,$385,$386,$387,$388,$389,$390,$391,$392,$393,$394,$395,$396,$397,$398,$399,$400,$401,$402,$403,$404,$405,$406,$407,$408,$409,$410,$411,\\\n$412,$413,$414,$415,$416,$417,$418,$419,$420,$421,$422,$423,$424,$425,$426,$427,$428,$429,$430,$431,$432,$433,$434,$435,$436,$437,$438,$439,$440,$441,$442,\\\n$443,$444,$445,$446,$447,$448,$449,$450,$451,$452,$453,$454,$455,$456,$457,$458,$459,$460,$461,$462,$463,$464,$465,$466,$467,$468,$469,$470,$471,$472,$473,\\\n$474,$475,$476,$477,$478,$479,$480,$481,$482,$483,$484,$485,$486,$487,$488,$489,$490,$491,$492,$493,$494,$495,$496,$497,$498,$499,$500,$501,$502,$503,$504,\\\n$505,$506,$507,$508,$509,$510,$511,$512,$513,$514,$515,$516,$517,$518,$519,$520,$521,$522,$523,$524,$525,$526,$527,$528,$529,$530,$531,$532,$533,$534,$535,\\\n$536,$537,$538,$539,$540,$541,$542,$543,$544,$545,$546,$547,$548,$549,$550,$551,$552,$553,$554,$555,$556,$557,$558,$559,$560,$561,$562,$563,$564,$565,$566,\\\n$567,$568,$569,$570,$571,$572,$573,$574,$575,$576,$577,$578,$579,$580,$581,$582,$583,$584,$585,$586,$587,$588,$589,$590,$591,$592,$593,$594,$595,$596,$597,\\\n$598,$599,$600,$601,$602,$603,$604,$605,$606,$607,$608,$609,$610,$611,$612,$613,$614,$615,$616,$617,$618,$619,$620,$621,$622,$623,$624,$625,$626,$627,$628,\\\n$629,$630,$631,$632,$633,$634,$635,$636,$637,$638,$639,$640,$641,$642,$643,$644,$645,$646,$647,$648,$649,$650) \nORDER BY f,c\n\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n ! 10.282945 elapsed 10.234444 user 0.048992 system sec\n ! 
[25.309152 user 0.500923 sys total]\n ! 0/0 [0/0] filesystem blocks in/out\n ! 0/0 [0/10397] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 0/15 [291/55] voluntary/involuntary context switches\n ! buffer usage stats:\n ! Shared blocks: 0 read, 0 written, \nbuffer hit rate = 100.00%\n ! Local blocks: 0 read, 0 written, \nbuffer hit rate = 0.00%\n ! Direct blocks: 0 read, 0 written\n\n\nHere is the table description:\n\nTable \"public.x\"\n Column | Type | Modifiers\n--------+---------+-----------\n f | integer | not null\n c | integer | not null\n r | integer | not null\n n | integer | not null\nIndexes:\n \"x_c_idx\" btree (c)\n \"x_f_idx\" btree (f)\n \"testindex2\" btree (f, c)\n\n\nThere are only 2,369 records in the X table.\n\nI don't understand why this query should take 10 seconds in the executor \nphase, with so little data being managed, and all relevant data already \nin memory. Any clues?\n\nMaybe there are more database server debugging options I should have \ntweaked, but I'm not sure what. The stuff I turned on included:\n\nlog_duration = true\nlog_statement = 'all'\nlog_parser_stats = true\nlog_planner_stats = true\nlog_executor_stats = true\n\n (N.B. log_statement_stats = true caused the server startup failure \nevery time with no error message I could find, so was not deliberately set)\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\n\n\n\n(FYI: 10 secs is a lot only because this query is executed many times in \nmy application, and they're pretty much all bad, and the aggregate query \ntimes are killing my app response).\n\nThanks for any tips!\n", "msg_date": "Mon, 08 May 2006 13:29:28 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "performance question (something to do w/ parameterized stmts?, wrong\n\tindex types?)" }, { "msg_contents": "What's EXPLAIN ANALYZE show?\n\nOn Mon, May 08, 2006 at 01:29:28PM -0400, Jeffrey Tenny wrote:\n> Why does this query take so long? 
(PostgreSQL 8.0.3, FC4)\n> Hopefully I have provided enough information below.\n> \n> LOG: statement: SELECT * FROM x WHERE f IN \n> ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,\\\n> $25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63\\\n> ,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87,$88,$89,$90,$91,$92,$93,$94,$95,$96,$97,$98,$99,$100,$101,\\\n> $102,$103,$104,$105,$106,$107,$108,$109,$110,$111,$112,$113,$114,$115,$116,$117,$118,$119,$120,$121,$122,$123,$124,$125,$126,$127,$128,$129,$130,$131,$132,\\\n> $133,$134,$135,$136,$137,$138,$139,$140,$141,$142,$143,$144,$145,$146,$147,$148,$149,$150,$151,$152,$153,$154,$155,$156,$157,$158,$159,$160,$161,$162,$163,\\\n> $164,$165,$166,$167,$168,$169,$170,$171,$172,$173,$174,$175,$176,$177,$178,$179,$180,$181,$182,$183,$184,$185,$186,$187,$188,$189,$190,$191,$192,$193,$194,\\\n> $195,$196,$197,$198,$199,$200,$201,$202,$203,$204,$205,$206,$207,$208,$209,$210,$211,$212,$213,$214,$215,$216,$217,$218,$219,$220,$221,$222,$223,$224,$225,\\\n> $226,$227,$228,$229,$230,$231,$232,$233,$234,$235,$236,$237,$238,$239,$240,$241,$242,$243,$244,$245,$246,$247,$248,$249,$250,$251,$252,$253,$254,$255,$256,\\\n> $257,$258,$259,$260,$261,$262,$263,$264,$265,$266,$267,$268,$269,$270,$271,$272,$273,$274,$275,$276,$277,$278,$279,$280,$281,$282,$283,$284,$285,$286,$287,\\\n> $288,$289,$290,$291,$292,$293,$294,$295,$296,$297,$298,$299,$300,$301,$302,$303,$304,$305,$306,$307,$308,$309,$310,$311,$312,$313,$314,$315,$316,$317,$318,\\\n> $319,$320,$321,$322,$323,$324,$325,$326,$327,$328,$329,$330,$331,$332,$333,$334,$335,$336,$337,$338,$339,$340,$341,$342,$343,$344,$345,$346,$347,$348,$349,\\\n> $350,$351,$352,$353,$354,$355,$356,$357,$358,$359,$360,$361,$362,$363,$364,$365,$366,$367,$368,$369,$370,$371,$372,$373,$374,$375,$376,$377,$378,$379,$380,\\\n> $381,$382,$383,$384,$385,$386,$387,$388,$389,$390,$391,$392,$393,$394,$395,$396,$397,$398,$399,$400,$401,$402,$403,$404,$405,$406,$407,$408,$409,$410,$411,\\\n> $412,$413,$414,$415,$416,$417,$418,$419,$420,$421,$422,$423,$424,$425,$426,$427,$428,$429,$430,$431,$432,$433,$434,$435,$436,$437,$438,$439,$440,$441,$442,\\\n> $443,$444,$445,$446,$447,$448,$449,$450,$451,$452,$453,$454,$455,$456,$457,$458,$459,$460,$461,$462,$463,$464,$465,$466,$467,$468,$469,$470,$471,$472,$473,\\\n> $474,$475,$476,$477,$478,$479,$480,$481,$482,$483,$484,$485,$486,$487,$488,$489,$490,$491,$492,$493,$494,$495,$496,$497,$498,$499,$500,$501,$502,$503,$504,\\\n> $505,$506,$507,$508,$509,$510,$511,$512,$513,$514,$515,$516,$517,$518,$519,$520,$521,$522,$523,$524,$525,$526,$527,$528,$529,$530,$531,$532,$533,$534,$535,\\\n> $536,$537,$538,$539,$540,$541,$542,$543,$544,$545,$546,$547,$548,$549,$550,$551,$552,$553,$554,$555,$556,$557,$558,$559,$560,$561,$562,$563,$564,$565,$566,\\\n> $567,$568,$569,$570,$571,$572,$573,$574,$575,$576,$577,$578,$579,$580,$581,$582,$583,$584,$585,$586,$587,$588,$589,$590,$591,$592,$593,$594,$595,$596,$597,\\\n> $598,$599,$600,$601,$602,$603,$604,$605,$606,$607,$608,$609,$610,$611,$612,$613,$614,$615,$616,$617,$618,$619,$620,$621,$622,$623,$624,$625,$626,$627,$628,\\\n> $629,$630,$631,$632,$633,$634,$635,$636,$637,$638,$639,$640,$641,$642,$643,$644,$645,$646,$647,$648,$649,$650) \n> ORDER BY f,c\n> \n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 10.282945 elapsed 10.234444 user 0.048992 system sec\n> ! 
[25.309152 user 0.500923 sys total]\n> ! 0/0 [0/0] filesystem blocks in/out\n> ! 0/0 [0/10397] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/15 [291/55] voluntary/involuntary context switches\n> ! buffer usage stats:\n> ! Shared blocks: 0 read, 0 written, \n> buffer hit rate = 100.00%\n> ! Local blocks: 0 read, 0 written, \n> buffer hit rate = 0.00%\n> ! Direct blocks: 0 read, 0 written\n> \n> \n> Here is the table description:\n> \n> Table \"public.x\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f | integer | not null\n> c | integer | not null\n> r | integer | not null\n> n | integer | not null\n> Indexes:\n> \"x_c_idx\" btree (c)\n> \"x_f_idx\" btree (f)\n> \"testindex2\" btree (f, c)\n> \n> \n> There are only 2,369 records in the X table.\n> \n> I don't understand why this query should take 10 seconds in the executor \n> phase, with so little data being managed, and all relevant data already \n> in memory. Any clues?\n> \n> Maybe there are more database server debugging options I should have \n> tweaked, but I'm not sure what. The stuff I turned on included:\n> \n> log_duration = true\n> log_statement = 'all'\n> log_parser_stats = true\n> log_planner_stats = true\n> log_executor_stats = true\n> \n> (N.B. log_statement_stats = true caused the server startup failure \n> every time with no error message I could find, so was not deliberately set)\n> \n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n> \n> \n> \n> (FYI: 10 secs is a lot only because this query is executed many times in \n> my application, and they're pretty much all bad, and the aggregate query \n> times are killing my app response).\n> \n> Thanks for any tips!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:38:12 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized stmts?,\n\twrong index types?)" }, { "msg_contents": "Doing a SELECT with a large list of variables inside an IN runs slowly\non every database we've tested. We've tested mostly in Oracle and\nPostgreSQL, and both get very slow very quickly (actually Oracle refuses\nto process the query at all after it gets too many bind parameters).\n\nIn our case, we have a (potentially quite large) set of external values\nthat we want to look up in the database. We originally thought that\ndoing a single select with a large IN clause was the way to go, but then\nwe did some performance analysis on the optimal batch size (number of\nitems to include per IN clause), and discovered that for most databases,\nthe optimal batch size was 1. For PostgreSQL I think it was 2.\n\nThe moral of the story is that you're probably better off running a\nbunch of small selects than in trying to optimize things with one\ngargantuan select.\n\n-- Mark Lewis\n\nOn Mon, 2006-05-08 at 13:29 -0400, Jeffrey Tenny wrote:\n> Why does this query take so long? 
(PostgreSQL 8.0.3, FC4)\n> Hopefully I have provided enough information below.\n> \n> LOG: statement: SELECT * FROM x WHERE f IN \n> ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,\\\n> $25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63\\\n> ,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87,$88,$89,$90,$91,$92,$93,$94,$95,$96,$97,$98,$99,$100,$101,\\\n> $102,$103,$104,$105,$106,$107,$108,$109,$110,$111,$112,$113,$114,$115,$116,$117,$118,$119,$120,$121,$122,$123,$124,$125,$126,$127,$128,$129,$130,$131,$132,\\\n> $133,$134,$135,$136,$137,$138,$139,$140,$141,$142,$143,$144,$145,$146,$147,$148,$149,$150,$151,$152,$153,$154,$155,$156,$157,$158,$159,$160,$161,$162,$163,\\\n> $164,$165,$166,$167,$168,$169,$170,$171,$172,$173,$174,$175,$176,$177,$178,$179,$180,$181,$182,$183,$184,$185,$186,$187,$188,$189,$190,$191,$192,$193,$194,\\\n> $195,$196,$197,$198,$199,$200,$201,$202,$203,$204,$205,$206,$207,$208,$209,$210,$211,$212,$213,$214,$215,$216,$217,$218,$219,$220,$221,$222,$223,$224,$225,\\\n> $226,$227,$228,$229,$230,$231,$232,$233,$234,$235,$236,$237,$238,$239,$240,$241,$242,$243,$244,$245,$246,$247,$248,$249,$250,$251,$252,$253,$254,$255,$256,\\\n> $257,$258,$259,$260,$261,$262,$263,$264,$265,$266,$267,$268,$269,$270,$271,$272,$273,$274,$275,$276,$277,$278,$279,$280,$281,$282,$283,$284,$285,$286,$287,\\\n> $288,$289,$290,$291,$292,$293,$294,$295,$296,$297,$298,$299,$300,$301,$302,$303,$304,$305,$306,$307,$308,$309,$310,$311,$312,$313,$314,$315,$316,$317,$318,\\\n> $319,$320,$321,$322,$323,$324,$325,$326,$327,$328,$329,$330,$331,$332,$333,$334,$335,$336,$337,$338,$339,$340,$341,$342,$343,$344,$345,$346,$347,$348,$349,\\\n> $350,$351,$352,$353,$354,$355,$356,$357,$358,$359,$360,$361,$362,$363,$364,$365,$366,$367,$368,$369,$370,$371,$372,$373,$374,$375,$376,$377,$378,$379,$380,\\\n> $381,$382,$383,$384,$385,$386,$387,$388,$389,$390,$391,$392,$393,$394,$395,$396,$397,$398,$399,$400,$401,$402,$403,$404,$405,$406,$407,$408,$409,$410,$411,\\\n> $412,$413,$414,$415,$416,$417,$418,$419,$420,$421,$422,$423,$424,$425,$426,$427,$428,$429,$430,$431,$432,$433,$434,$435,$436,$437,$438,$439,$440,$441,$442,\\\n> $443,$444,$445,$446,$447,$448,$449,$450,$451,$452,$453,$454,$455,$456,$457,$458,$459,$460,$461,$462,$463,$464,$465,$466,$467,$468,$469,$470,$471,$472,$473,\\\n> $474,$475,$476,$477,$478,$479,$480,$481,$482,$483,$484,$485,$486,$487,$488,$489,$490,$491,$492,$493,$494,$495,$496,$497,$498,$499,$500,$501,$502,$503,$504,\\\n> $505,$506,$507,$508,$509,$510,$511,$512,$513,$514,$515,$516,$517,$518,$519,$520,$521,$522,$523,$524,$525,$526,$527,$528,$529,$530,$531,$532,$533,$534,$535,\\\n> $536,$537,$538,$539,$540,$541,$542,$543,$544,$545,$546,$547,$548,$549,$550,$551,$552,$553,$554,$555,$556,$557,$558,$559,$560,$561,$562,$563,$564,$565,$566,\\\n> $567,$568,$569,$570,$571,$572,$573,$574,$575,$576,$577,$578,$579,$580,$581,$582,$583,$584,$585,$586,$587,$588,$589,$590,$591,$592,$593,$594,$595,$596,$597,\\\n> $598,$599,$600,$601,$602,$603,$604,$605,$606,$607,$608,$609,$610,$611,$612,$613,$614,$615,$616,$617,$618,$619,$620,$621,$622,$623,$624,$625,$626,$627,$628,\\\n> $629,$630,$631,$632,$633,$634,$635,$636,$637,$638,$639,$640,$641,$642,$643,$644,$645,$646,$647,$648,$649,$650) \n> ORDER BY f,c\n> \n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 10.282945 elapsed 10.234444 user 0.048992 system sec\n> ! 
[25.309152 user 0.500923 sys total]\n> ! 0/0 [0/0] filesystem blocks in/out\n> ! 0/0 [0/10397] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/15 [291/55] voluntary/involuntary context switches\n> ! buffer usage stats:\n> ! Shared blocks: 0 read, 0 written, \n> buffer hit rate = 100.00%\n> ! Local blocks: 0 read, 0 written, \n> buffer hit rate = 0.00%\n> ! Direct blocks: 0 read, 0 written\n> \n> \n> Here is the table description:\n> \n> Table \"public.x\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f | integer | not null\n> c | integer | not null\n> r | integer | not null\n> n | integer | not null\n> Indexes:\n> \"x_c_idx\" btree (c)\n> \"x_f_idx\" btree (f)\n> \"testindex2\" btree (f, c)\n> \n> \n> There are only 2,369 records in the X table.\n> \n> I don't understand why this query should take 10 seconds in the executor \n> phase, with so little data being managed, and all relevant data already \n> in memory. Any clues?\n> \n> Maybe there are more database server debugging options I should have \n> tweaked, but I'm not sure what. The stuff I turned on included:\n> \n> log_duration = true\n> log_statement = 'all'\n> log_parser_stats = true\n> log_planner_stats = true\n> log_executor_stats = true\n> \n> (N.B. log_statement_stats = true caused the server startup failure \n> every time with no error message I could find, so was not deliberately set)\n> \n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n> \n> \n> \n> (FYI: 10 secs is a lot only because this query is executed many times in \n> my application, and they're pretty much all bad, and the aggregate query \n> times are killing my app response).\n> \n> Thanks for any tips!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Mon, 08 May 2006 10:42:21 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/" }, { "msg_contents": "On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:\n> Doing a SELECT with a large list of variables inside an IN runs slowly\n> on every database we've tested. We've tested mostly in Oracle and\n> PostgreSQL, and both get very slow very quickly (actually Oracle refuses\n> to process the query at all after it gets too many bind parameters).\n> \n> In our case, we have a (potentially quite large) set of external values\n> that we want to look up in the database. We originally thought that\n> doing a single select with a large IN clause was the way to go, but then\n> we did some performance analysis on the optimal batch size (number of\n> items to include per IN clause), and discovered that for most databases,\n> the optimal batch size was 1. For PostgreSQL I think it was 2.\n> \n> The moral of the story is that you're probably better off running a\n> bunch of small selects than in trying to optimize things with one\n> gargantuan select.\n\nEver experiment with loading the parameters into a temp table and\njoining to that?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 8 May 2006 12:50:13 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:\n>> Doing a SELECT with a large list of variables inside an IN runs slowly\n>> on every database we've tested. We've tested mostly in Oracle and\n>> PostgreSQL, and both get very slow very quickly (actually Oracle refuses\n>> to process the query at all after it gets too many bind parameters).\n>> \n>> In our case, we have a (potentially quite large) set of external values\n>> that we want to look up in the database. We originally thought that\n>> doing a single select with a large IN clause was the way to go, but then\n>> we did some performance analysis on the optimal batch size (number of\n>> items to include per IN clause), and discovered that for most databases,\n>> the optimal batch size was 1. For PostgreSQL I think it was 2.\n>> \n>> The moral of the story is that you're probably better off running a\n>> bunch of small selects than in trying to optimize things with one\n>> gargantuan select.\n\n> Ever experiment with loading the parameters into a temp table and\n> joining to that?\n\nAlso, it might be worth re-testing that conclusion with PG CVS tip\n(or 8.2 when it comes out). The reimplementation of IN as = ANY that\nI did a couple months ago might well change the results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 13:59:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ " }, { "msg_contents": "On Mon, May 08, 2006 at 12:50:13PM -0500, Jim C. Nasby wrote:\n> On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:\n> > Doing a SELECT with a large list of variables inside an IN runs slowly\n> > on every database we've tested. We've tested mostly in Oracle and\n> > PostgreSQL, and both get very slow very quickly (actually Oracle refuses\n> > to process the query at all after it gets too many bind parameters).\n> > \n> > In our case, we have a (potentially quite large) set of external values\n> > that we want to look up in the database. We originally thought that\n> > doing a single select with a large IN clause was the way to go, but then\n> > we did some performance analysis on the optimal batch size (number of\n> > items to include per IN clause), and discovered that for most databases,\n> > the optimal batch size was 1. For PostgreSQL I think it was 2.\n> > \n> > The moral of the story is that you're probably better off running a\n> > bunch of small selects than in trying to optimize things with one\n> > gargantuan select.\n> \n> Ever experiment with loading the parameters into a temp table and\n> joining to that?\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n> ---------------------------(end of broadcast)---------------------------\n\nThe DB use by the DSPAM software is very similar to your use case. The\nfastest queries are made using the PostgreSQL generate_series functionality\nto unwind the \"IN *\" to multiple single selects. 
Here is the lookup function\nthat they use:\n\ncreate function lookup_tokens(integer,bigint[])\n returns setof dspam_token_data\n language plpgsql stable\n as '\ndeclare\n v_rec record;\nbegin\n for v_rec in select * from dspam_token_data\n where uid=$1\n and token in (select $2[i]\n from generate_series(array_lower($2,1),\n array_upper($2,1)) s(i))\n loop\n return next v_rec;\n end loop;\n return;\nend;';\n\n\nYou should be able to try something similar for your workload.\n\nKen Marshall\n", "msg_date": "Mon, 8 May 2006 13:01:36 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/" }, { "msg_contents": "Well, since I don't know the exact parameter values, just substituting \n1-650 for $1-$650, I get:\n\n Index Scan using testindex2 on x (cost=0.00..34964.52 rows=1503 \nwidth=16) (actual time=0.201..968.252 rows=677 loops=1)\n Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...\n\nSo index usage is presumably good on this one.\n\nJim C. Nasby wrote:\n> What's EXPLAIN ANALYZE show?\n> \n", "msg_date": "Mon, 08 May 2006 15:43:39 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/ parameterized" }, { "msg_contents": "Mark Lewis wrote:\n> Doing a SELECT with a large list of variables inside an IN runs slowly\n> on every database we've tested. We've tested mostly in Oracle and\n> PostgreSQL, and both get very slow very quickly (actually Oracle refuses\n> to process the query at all after it gets too many bind parameters).\n> \n> In our case, we have a (potentially quite large) set of external values\n> that we want to look up in the database. We originally thought that\n> doing a single select with a large IN clause was the way to go, but then\n> we did some performance analysis on the optimal batch size (number of\n> items to include per IN clause), and discovered that for most databases,\n> the optimal batch size was 1. For PostgreSQL I think it was 2.\n\nSo that is for parameterized queries (the batch size?).\n\nIn my case, I was concerned about latency between app and database \nserver, so I try to minimize the number of queries I send to the \ndatabase server. (My app servers can be anywhere, they /should/ be \nclose to the database server, but there are no guarantees and I can't \ncontrol it).\n\nThe last time I tested for optimal batch size using non-parameterized \nqueries with same-host database and app, I got a batch size of \napproximately 700 IN list elements (again, not variables in that test).\nThat was on postgres 7.X.Y.\n\nGuess I'll have to try a test where I turn the parameterized statements \ninto regular statements.\n\nI'm pretty sure it would be a bad idea for me to send one IN list \nelement at a time in all cases. Even if the query query prep was fast, \nthe network latency could kill my app.\n\n> \n> The moral of the story is that you're probably better off running a\n> bunch of small selects than in trying to optimize things with one\n> gargantuan select.\n\nThe algorithm currently tries to ensure that IN-lists of not more than \n700 elements are sent to the database server, and breaks them into \nmultiple queries. 
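Adapted to the x(f, c, r, n) table from this thread, the same generate_series trick from lookup_tokens would look roughly like the sketch below: the 650 separate bind variables collapse into one integer-array parameter, and the subselect expands that array back into a row set the planner can hash. Untested sketch; $1 is assumed to be passed as an int[]:

    -- one array parameter instead of $1 .. $650
    SELECT *
      FROM x
     WHERE f IN (SELECT ($1::int[])[i]
                   FROM generate_series(array_lower($1::int[], 1),
                                        array_upper($1::int[], 1)) s(i))
     ORDER BY f, c;

With only ~2,400 rows in x, the executor can then evaluate a hashed subplan once rather than re-checking a 650-branch OR filter against every row.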
If it has to break it into at least 3 queries, it \nuses parameterized statements for the first 2+ and then a \nnon-parameterized statement for the last one (which may have a different \nnumber of IN list elements than the prior batches).\n", "msg_date": "Mon, 08 May 2006 15:51:26 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/\tparameterized" }, { "msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> Well, since I don't know the exact parameter values, just substituting \n> 1-650 for $1-$650, I get:\n\n> Index Scan using testindex2 on x (cost=0.00..34964.52 rows=1503 \n> width=16) (actual time=0.201..968.252 rows=677 loops=1)\n> Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...\n\n> So index usage is presumably good on this one.\n\nNo, that's not a very nice plan at all --- the key thing to notice is\nit says Filter: not Index Cond:. What you've actually got here is a\nfull-index scan over testindex2 (I guess it's doing that to achieve the\nrequested sort order), then computation of a 650-way boolean OR expression\nfor each row of the table. Ugh.\n\nThe other way of doing this would involve 650 separate index probes and\nthen sorting the result. Which would be pretty expensive too, but just\ncounting on my fingers it seems like that ought to come out at less than\nthe 35000 cost units for this plan. The planner evidently is coming up\nwith a different answer though. You might try dropping testindex2\n(which I suppose is an index on (f,c)) so that it has only an index on\nf to play with, and see what plan it picks and what the estimated/actual\ncosts are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 16:08:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized " }, { "msg_contents": "The original set of indexes were:\n\nIndexes:\n \"x_c_idx\" btree (c)\n \"x_f_idx\" btree (f)\n \"testindex2\" btree (f, c)\n\nI dropped the multicolumn index 'testindex2', and a new explain analyze \nlooks like this:\n\n Sort (cost=35730.71..35768.28 rows=1503 width=16) (actual \ntime=962.555..964.467 rows=677 loops=1)\n Sort Key: f, c\n -> Seq Scan on x (cost=0.00..34937.60 rows=1503 width=16) (actual \ntime=5.449..956.594 rows=677 loops=1)\n Filter: ((f = 1) OR (f = 2) OR (f = 3) ...\n\n\nTurning on the server debugging again, I got roughly identical\nquery times with and without the two column index.\nIt appears to have ignored the other indexes completely.\n\n\nTom Lane wrote:\n> Jeffrey Tenny <[email protected]> writes:\n>> Well, since I don't know the exact parameter values, just substituting \n>> 1-650 for $1-$650, I get:\n> \n>> Index Scan using testindex2 on x (cost=0.00..34964.52 rows=1503 \n>> width=16) (actual time=0.201..968.252 rows=677 loops=1)\n>> Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...\n> \n>> So index usage is presumably good on this one.\n> \n> No, that's not a very nice plan at all --- the key thing to notice is\n> it says Filter: not Index Cond:. What you've actually got here is a\n> full-index scan over testindex2 (I guess it's doing that to achieve the\n> requested sort order), then computation of a 650-way boolean OR expression\n> for each row of the table. Ugh.\n> \n> The other way of doing this would involve 650 separate index probes and\n> then sorting the result. 
Which would be pretty expensive too, but just\n> counting on my fingers it seems like that ought to come out at less than\n> the 35000 cost units for this plan. The planner evidently is coming up\n> with a different answer though. You might try dropping testindex2\n> (which I suppose is an index on (f,c)) so that it has only an index on\n> f to play with, and see what plan it picks and what the estimated/actual\n> costs are.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 08 May 2006 16:33:46 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/ parameterized" }, { "msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> I dropped the multicolumn index 'testindex2', and a new explain analyze \n> looks like this:\n\n> Sort (cost=35730.71..35768.28 rows=1503 width=16) (actual \n> time=962.555..964.467 rows=677 loops=1)\n> Sort Key: f, c\n> -> Seq Scan on x (cost=0.00..34937.60 rows=1503 width=16) (actual \n> time=5.449..956.594 rows=677 loops=1)\n> Filter: ((f = 1) OR (f = 2) OR (f = 3) ...\n\n> Turning on the server debugging again, I got roughly identical\n> query times with and without the two column index.\n\nThat's good, actually, seeing that the planner thinks they're close to\nthe same speed too. Now try \"set enable_seqscan = off\" to see if you\ncan force the multi-index-scan plan to be chosen, and see how that does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 16:49:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized " }, { "msg_contents": "I tried the seqscan disabling and got what sounds like the desired plan:\n\nSort (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)\n Sort Key: f, c\n -> Index Scan using x_f_idx, x_f_idx, ...\n (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)\n Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....\n\n\nI turned off the option in postgresql.conf and it did indeed improve all similar queries on that table\nto have sub-second response time, down from 6/8/10 second responses. And the elapsed time for\nthe application action reflected this improvement.\n\nSo that begs two questions:\n\n1) is there a way to enable that for a single query in a multi-query transaction?\n\n2) am I opening a can of worms if I turn it off server-wide? (PROBABLY!)\n\nI've already had to tune the server to account for the fact that\nthe database is easily cached in memory but the processors are slow. 
(PIII 550Mhz Xeons)\nI've lowered the cost of random pages and raised the cost of per-row processing\nas follows (where the configuration defaults are also noted):\n\n# - Planner Cost Constants -\n\n#JDT: default effective_cache_size = 1000 # typically 8KB each\neffective_cache_size = 50000 # typically 8KB each\n#JDT: default: random_page_cost = 4 # units are one sequential page fetch cost\nrandom_page_cost = 2 # units are one sequential page fetch cost\n#JDT: default: cpu_tuple_cost = 0.01 # (same)\ncpu_tuple_cost = 0.10 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#JDT: default: cpu_operator_cost = 0.0025 # (same)\ncpu_operator_cost = 0.025 # (same)\n\n\nAny suggestion for how to fix today's query (turning seqscan off) without wrecking others is welcome, as well as whether I've\nblundered on the above (which may or may not be optimal, but definitely fixed some former problem queries\non that machine).\n\nMy transactions are large multi-query serializable transactions, so it's also important that any single-query targeting optimization \nnot affect other queries in the same transaction.\n\nThanks for the help.\n\nTom Lane wrote:\n> Jeffrey Tenny <[email protected]> writes:\n>> I dropped the multicolumn index 'testindex2', and a new explain analyze \n>> looks like this:\n> \n>> Sort (cost=35730.71..35768.28 rows=1503 width=16) (actual \n>> time=962.555..964.467 rows=677 loops=1)\n>> Sort Key: f, c\n>> -> Seq Scan on x (cost=0.00..34937.60 rows=1503 width=16) (actual \n>> time=5.449..956.594 rows=677 loops=1)\n>> Filter: ((f = 1) OR (f = 2) OR (f = 3) ...\n> \n>> Turning on the server debugging again, I got roughly identical\n>> query times with and without the two column index.\n> \n> That's good, actually, seeing that the planner thinks they're close to\n> the same speed too. Now try \"set enable_seqscan = off\" to see if you\n> can force the multi-index-scan plan to be chosen, and see how that does.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 08 May 2006 18:11:33 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/ parameterized" }, { "msg_contents": "re my question here: what would be the JDBC-proper technique,\nmy app is all jdbc.\n\nJeffrey Tenny wrote:\n> 1) is there a way to enable that for a single query in a multi-query \n> transaction?\n", "msg_date": "Mon, 08 May 2006 18:15:58 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/ parameterized" }, { "msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> I tried the seqscan disabling and got what sounds like the desired plan:\n> Sort (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)\n> Sort Key: f, c\n> -> Index Scan using x_f_idx, x_f_idx, ...\n> (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)\n> Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....\n\nHm, vs 35000 or so estimates for the slower plans. 
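Regarding question 1) above: planner GUCs such as enable_seqscan can be toggled around a single statement inside a larger transaction. SET LOCAL keeps the change from surviving past COMMIT/ROLLBACK, and flipping it back by hand restores normal planning for the remaining statements; from JDBC the same commands can simply be issued through an ordinary Statement. A rough sketch (the id list is a placeholder):

    BEGIN;
    -- ... earlier queries run with normal settings ...
    SET LOCAL enable_seqscan = off;   -- discarded automatically at COMMIT/ROLLBACK
    SELECT * FROM x WHERE f IN (1, 2, 3) ORDER BY f, c;   -- the problem query
    SET LOCAL enable_seqscan = on;    -- normal planning for the rest of the transaction
    -- ... later queries ...
    COMMIT;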
My recommendation\nwould be to decrease random_page_cost to 2 or so, instead of the brute\nforce disable-seqscans approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 19:06:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized " }, { "msg_contents": "Tom Lane wrote:\n > Jeffrey Tenny <[email protected]> writes:\n >> I tried the seqscan disabling and got what sounds like the desired plan:\n >> Sort (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)\n >> Sort Key: f, c\n >> -> Index Scan using x_f_idx, x_f_idx, ...\n >> (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)\n >> Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....\n >\n > Hm, vs 35000 or so estimates for the slower plans. My recommendation\n > would be to decrease random_page_cost to 2 or so, instead of the brute\n > force disable-seqscans approach.\n\nThe server was already running with random_page_cost=2 today for all tests, because of\nthe mods I've made to improve other problem queries in the past (my settings noted below, and\nbefore in another msg on this topic).\n\nSo to nail this particular query something additional is required (even lower random_page_cost?).\nWhat's a good value for slower processors/memory and database in memory?\n1? .5?\n\nJust curious:\nHas anybody ever done an exercise that generates postgresql defaults that are customized based on the\ncpu, memory, architecture, bus speeds, etc?\nThese old PIII xeons are quite a bit different than the newer AMD chips I use for postgres,\nand the tuning of the postgresql.conf parameters has been very effective in using the old xeons, but it seems like there\nmust be a general knowledge base of what's generically more appropriate for some types of hardware\nthat would give people\nbetter initial defaults for a given platform. I know, step right up and do it :-)\n\nHere's the postgresql defaults and actual settings I used for all tests today (from my production server):\n\n> I've already had to tune the server to account for the fact that\n> the database is easily cached in memory but the processors are slow. 
(PIII 550Mhz Xeons)\n> I've lowered the cost of random pages and raised the cost of per-row processing\n> as follows (where the configuration defaults are also noted):\n> \n> # - Planner Cost Constants -\n> \n> #JDT: default effective_cache_size = 1000 # typically 8KB each\n> effective_cache_size = 50000 # typically 8KB each\n> #JDT: default: random_page_cost = 4 # units are one sequential page fetch cost\n> random_page_cost = 2 # units are one sequential page fetch cost\n> #JDT: default: cpu_tuple_cost = 0.01 # (same)\n> cpu_tuple_cost = 0.10 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #JDT: default: cpu_operator_cost = 0.0025 # (same)\n> cpu_operator_cost = 0.025 # (same) \n\n\n\n", "msg_date": "Mon, 08 May 2006 19:35:15 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance question (something to do w/ parameterized" }, { "msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> The server was already running with random_page_cost=2 today for all tests, because of\n> the mods I've made to improve other problem queries in the past (my settings noted below, and\n> before in another msg on this topic).\n\n> So to nail this particular query something additional is required (even lower random_page_cost?).\n> What's a good value for slower processors/memory and database in memory?\n\nIf you're pretty sure the database will always be RAM-resident, then 1.0\nis the theoretically correct value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2006 19:37:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized " }, { "msg_contents": "On Mon, 08 May 2006 19:37:37 -0400, Tom Lane <[email protected]> wrote:\n> Jeffrey Tenny <[email protected]> writes:\n> > The server was already running with random_page_cost=2 today for all tests, because of\n> > the mods I've made to improve other problem queries in the past (my settings noted below, and\n> > before in another msg on this topic).\n> \n> > So to nail this particular query something additional is required (even lower random_page_cost?).\n> > What's a good value for slower processors/memory and database in memory?\n> \n> If you're pretty sure the database will always be RAM-resident, then 1.0\n> is the theoretically correct value.\n\nWould it be possible to craft a set of queries on specific data that\ncould advise a reasonable value for random_page_cost?\n\nWhat sort of data distribution and query type would be heavily dependant\non random_page_cost? i.e. randomness of the data, size of the data\ncompared to physical memory.\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. 
:\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Tue, 09 May 2006 10:10:19 +1000", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance question (something to do w/ parameterized " }, { "msg_contents": "\n>>> The moral of the story is that you're probably better off running a\n>>> bunch of small selects than in trying to optimize things with one\n>>> gargantuan select.\n>\n>> Ever experiment with loading the parameters into a temp table and\n>> joining to that?\n>\n> Also, it might be worth re-testing that conclusion with PG CVS tip\n> (or 8.2 when it comes out). The reimplementation of IN as = ANY that\n> I did a couple months ago might well change the results.\n\n\tLong mail, but I think it's interesting...\n\n\tI think this is a generic problem, which is often encountered : selecting \na bunch of records based on a list of primary keys (or other indexed, \nunique field) ; said list being anything from very short to quite large.\n\tHere are a few occurences of this need :\n\n\t1- The application supplies a list of id's (the case of the OP of this \nthread)\n\t2- A query Q1 yields a list of selected objects , that we wish to use in \nseveral subsequent queries.\n\tAnd Q1 is a query we don't wish to do several times, either because it's \nslow, complicated (advanced search, for instance), or it acts on a \nconstantly moving dataset, so the results would be different each time. So \nwe store the result of Q1 in the application, or in a temp table, or in an \narray in a plpgsql variable, whatever, to reuse them.\n\n\tThen, for each of these objects, often we will make more queries to \nresolve foreign keys (get category name, owner name, from categories and \nusers tables, etc).\n\n\tI have encountered both cases quite often, and they both pose a few \nproblems. I think it would be a good opportunity for a new feature (see \nbelow).\n\tA typical use case for point 2 :\n\n\tConsider an \"objects\" table. Each object ...\n\t- is related to one or several rows from the \"categories\" table via an \n\"objects_categories\" link table.\n\t- has an owner_id referencing the \"users\" table\n\n\tI do an \"advanced search\" query on \"objects\", which returns a list of \nobjects. 
I can join directly to \"users\" to get the owner's name, but \njoining to \"categories\" is already problematic because of the many-to-many \nrelationship.\n\n\tI wish to do this : fetch all objects matching the search criteria ; \nfetch the owner users ; fetch the categories ; build in my application \nobject space a clean and straightforward data representation for all this.\n\n\tAlso :\n\t- I do not wish to complicate the search query.\n\t- The row estimates for the search query results are likely to be \"not so \ngood\" (because it's a complex query) ; so the joins to users and \ncategories are likely to use suboptimal plans based on \"not so good\" \nestimates.\n\t- The rows from \"objects\" are large ; so moving them around through a lot \nof small joins hurts performance.\n\n\tThe obvious solution is this :\n\nBEGIN;\nCREATE TEMPORARY TABLE results ON COMMIT DROP AS SELECT * FROM advanced \nsearch query;\nANALYZE results;\n\n-- get the results to the application\nSELECT * FROM results;\n\n-- get object owners info\nSELECT * FROM users WHERE id IN (SELECT user_id FROM results);\n\n-- get category info\nSELECT * FROM categories WHERE id IN (SELECT category_id FROM \nobjects_to_categories WHERE object_id IN (SELECT id FROM results));\n\n-- get object/category relations (the ORM will use this to link objects in \nthe application)\nSELECT * FROM objects_to_categories WHERE object_id IN (SELECT id FROM \nresults);\nCOMMIT;\n\n\tYou might wonder why I do it this way on the \"categories\" table.\n\tThis is because I use an Object-Relational mapper which will instantiate \na User or Category class object for each row I fetch from these tables. I \ndo not want to fetch just the username, using a simple join, but I want \nthe full object, because :\n\t- I want to instantiate these objects (they have useful methods to \nprocess rights etc)\n\t- I do not want to mix columns from \"objects\" and \"users\"\n\n\tAnd I do not wish to instantiate each category more than once. This would \nwaste memory, but more importantly, it is a lot cleaner to have only one \ninstance per row, because my ORM then translates the foreign key relations \ninto object relations (pointers). Each instanciated category will contain \na list of Object instances ; each Object instance will contain a list of \nthe categories it belongs to, and point to its owner user.\n\n\tBack to the point : I can't use the temp table method, because temp \ntables are too slow.\n\tCreating a temp table, filling it, analyzing it and then dropping it \ntakes about 100 ms. 
The search query, on average, takes 10 ms.\n\n\tSo I have to move this logic to the application, or to plpgsql, and jump \nthrough hoops and use big IN() clauses ; which has the following drawbacks \n:\n\t- slow\n\t- ugly\n\t- very hard for the ORM to auto-generate\n\n*******************************\n\n\tFeature proposal :\n\n\tA way to store query results in a named buffer and reuse them in the next \nqueries.\n\tThis should be as fast as possible, store results in RAM if possible, and \nbe limited to inside a transaction.\n\n\tWays to store results like this already exist in various flavours inside \nthe postgres engine :\n\t- Cursors (WITH SCROLL)\n\t- Arrays (realistically, limited to a list of ids)\n\t- Executor nodes : Materialize, Hash, Sort, etc\n\n\tThe simpler to mutate would probably be the cursor.\n\tTherefore I propose to add the capability to use a CURSOR like a \ntemporary table, join it to other tables, etc.\n\tThis would be implemented by making FETCH behave just like SELECT and be \nusable in subqueries (see example below).\n\n\tFETCH can return rows to the application. Why can't it return rows to \npostgres itself without using plpgsql tricks ?\n\n\tCursors are already extremely fast.\n\tIf the cursor is declared WITH SCROLL, the result-set is buffered. \nTherefore, the rowcount can be computed exactly, and a good plan can be \nchosen.\n\tThe important columns could even be ANALYZEd if needed...\n\n\tExample :\n\t\nBEGIN;\nDECLARE results SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM advanced \nsearch query;\n\n-- get the results to the application\nFETCH ALL FROM results;\n\n-- get object owners info\nMOVE FIRST IN results;\nSELECT * FROM users WHERE id IN (FETCH ALL user_id FROM results);\n\n-- buffer object/category relations\nMOVE FIRST IN results;\nDECLARE cats SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM \nobjects_to_categories WHERE object_id IN (FETCH ALL id FROM results);\n\n-- get category info\nSELECT * FROM categories WHERE id IN (FETCH ALL category_id FROM cats);\n\n-- get object/category relations\nMOVE FIRST IN cats;\nFETCH ALL FROM cats;\n\nCOMMIT;\n\n\tI really like this. It's clean, efficient, and easy to use.\n\n\tThis would be a lot faster than using temp tables.\n\tCreating cursors is very fast so we can create two, and avoid doing twice \nthe same work (ie. hashing the ids from the results to grab categories \nonly once).\n\n*******************************\n\n\tGoodies (utopian derivatives from this feature).\n\n\t- Deferred Planning, and on-the-spot function estimates.\n\n\tThere are no rowcount estimates for set returning functions in postgres. \nThis is a problem, when postgres thinks the function will return 1000 \nrows, whereas in reality, it returns 5 rows or 10K rows. Suboptimal plans \nare chosen.\n\nSELECT * FROM objects WHERE id IN (SELECT * FROM function( params ));\n\n\tThis hairy problem can be solved easily with the proposed feature :\n\nDECLARE my_set CURSOR WITHOUT HOLD FOR SELECT * FROM function( params );\n\n\tHere, the result set for the function is materialized. Therefore, the \nrowcount is known, and the following query can be executed with a very \ngood plan :\n\nSELECT * FROM objects WHERE id IN (FETCH ALL FROM my_set);\n\n\tIt will likely be Hash + Nested loop index scan for very few rows, maybe \nmerge joins for a lot of rows, etc. 
In both cases the result set from \nfunction() needs to be hashed or sorted, which means buffered in memory or \ndisk ; the overhead of buffering it in the cursor would have been incurred \nanyway, so there is no resource waste.\n\n\tLikewise, a hard-to-estimate subquery which breaks the planning of the \nouter SELECT could be embedded in a cursor and buffered.\n\n\tIn a distant future, the planner could chose to automatically do this, \neffectively implementing deferred planning.\n\n\n\tThoughts ?\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 09 May 2006 10:38:17 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Big IN() clauses etc : feature proposal" }, { "msg_contents": "Hi,\n\nOn Tue, 9 May 2006, PFC wrote:\n<snipp/>\n> \tBack to the point : I can't use the temp table method, because temp \n> tables are too slow.\n> \tCreating a temp table, filling it, analyzing it and then dropping it \n> takes about 100 ms. The search query, on average, takes 10 ms.\n\njust some thoughts:\n\nYou might consider just selecting your primary key or a set of\nprimary keys to involved relations in your search query. If you\ncurrently use \"select *\" this can make your result set very large.\n\nCopying all the result set to the temp. costs you additional IO\nthat you propably dont need.\n\nAlso you might try:\n\n \tSELECT * FROM somewhere JOIN result USING (id)\n\nInstead of:\n\n \tSELECT * FROM somewhere WHERE id IN (SELECT id FROM result)\n\nJoins should be a lot faster than large IN clauses.\n\nHere it will also help if result only contains the primary keys \nand not all the other data. The join will be much faster.\n\nOn the other hand if your search query runs in 10ms it seems to be fast \nenough for you to run it multiple times. Theres propably no point in \noptimizing anything in such case.\n\nGreetings\nChristian\n\n-- \nChristian Kratzer [email protected]\nCK Software GmbH http://www.cksoft.de/\nPhone: +49 7452 889 135 Fax: +49 7452 889 136\n", "msg_date": "Tue, 9 May 2006 11:01:00 +0200 (CEST)", "msg_from": "Christian Kratzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> You might consider just selecting your primary key or a set of\n> primary keys to involved relations in your search query. If you\n> currently use \"select *\" this can make your result set very large.\n>\n> Copying all the result set to the temp. costs you additional IO\n> that you propably dont need.\n\n\tIt is a bit of a catch : I need this information, because the purpose of \nthe query is to retrieve these objects. I can first store the ids, then \nretrieve the objects, but it's one more query.\n\n> Also you might try:\n> \tSELECT * FROM somewhere JOIN result USING (id)\n> Instead of:\n> \tSELECT * FROM somewhere WHERE id IN (SELECT id FROM result)\n\n\tYes you're right in this case ; however the query to retrieve the owners \nneeds to eliminate duplicates, which IN() does.\n\n> On the other hand if your search query runs in 10ms it seems to be fast \n> enough for you to run it multiple times. 
Theres propably no point in \n> optimizing anything in such case.\n\n\tI don't think so :\n\t- 10 ms is a mean time, sometimes it can take much more time, sometimes \nit's faster.\n\t- Repeating the query might yield different results if records were added \nor deleted in the meantime.\n\t- Complex search queries have imprecise rowcount estimates ; hence the \njoins that I would add to them will get suboptimal plans.\n\n\tUsing a temp table is really the cleanest solution now ; but it's too \nslow so I reverted to generating big IN() clauses in the application.\n", "msg_date": "Tue, 09 May 2006 11:33:42 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big IN() clauses etc : feature proposal" }, { "msg_contents": "Hi,\n\nOn Tue, 9 May 2006, PFC wrote:\n\n>\n>> You might consider just selecting your primary key or a set of\n>> primary keys to involved relations in your search query. If you\n>> currently use \"select *\" this can make your result set very large.\n>> \n>> Copying all the result set to the temp. costs you additional IO\n>> that you propably dont need.\n>\n> \tIt is a bit of a catch : I need this information, because the purpose \n> of the query is to retrieve these objects. I can first store the ids, then \n> retrieve the objects, but it's one more query.\n\nyes but depending on what you really need that can be faster.\n\nAdditionally to your query you are already transferring the whole result \nset multiple times. First you copy it to the result table. Then you\nread it again. Your subsequent queries will also have to read over\nall the unneeded tuples just to get your primary key.\n\n>> Also you might try:\n>> \tSELECT * FROM somewhere JOIN result USING (id)\n>> Instead of:\n>> \tSELECT * FROM somewhere WHERE id IN (SELECT id FROM result)\n>\n> \tYes you're right in this case ; however the query to retrieve the \n> owners needs to eliminate duplicates, which IN() does.\n\nthen why useth thy not the DISTINCT clause when building thy result table \nand thou shalt have no duplicates.\n\n>> On the other hand if your search query runs in 10ms it seems to be fast \n>> enough for you to run it multiple times. Theres propably no point in \n>> optimizing anything in such case.\n>\n> \tI don't think so :\n> \t- 10 ms is a mean time, sometimes it can take much more time, \n> sometimes it's faster.\n> \t- Repeating the query might yield different results if records were \n> added or deleted in the meantime.\n\nwhich is a perfect reason to use a temp table. Another variation on \nthe temp table scheme is use a result table and add a query_id.\n\nWe do something like this in our web application when users submit \ncomplex queries. For each query we store tuples of (query_id,result_id)\nin a result table. 
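A bare-bones version of that result-table scheme might look like the following; the table and column names are invented for illustration, and the query_id would be assigned by the application:

    -- one row per (search run, matching object)
    CREATE TABLE query_results (
        query_id  integer NOT NULL,
        result_id integer NOT NULL,
        PRIMARY KEY (query_id, result_id)
    );

    -- run the expensive search once, remember only the ids
    INSERT INTO query_results
         SELECT 1234, id
           FROM objects
          WHERE /* complex search conditions */ true;

    -- subsequent page requests are a cheap join
    SELECT o.*
      FROM objects o
      JOIN query_results r ON r.result_id = o.id
     WHERE r.query_id = 1234
     ORDER BY o.id
     LIMIT 20 OFFSET 40;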
It's then easy for the web application to page the\nresult set.\n\n> \t- Complex search queries have imprecise rowcount estimates ; hence \n> the joins that I would add to them will get suboptimal plans.\n>\n> \tUsing a temp table is really the cleanest solution now ; but it's too \n> slow so I reverted to generating big IN() clauses in the application.\n\nA cleaner solution usually pays off in the long run whereas a hackish\nor overly complex solution will bite you in the behind for sure as\ntime goes by.\n\nGreetings\nChristian\n\n-- \nChristian Kratzer [email protected]\nCK Software GmbH http://www.cksoft.de/\nPhone: +49 7452 889 135 Fax: +49 7452 889 136\n", "msg_date": "Tue, 9 May 2006 11:41:59 +0200 (CEST)", "msg_from": "Christian Kratzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> Additionally to your query you are already transferring the whole result \n> set multiple times. First you copy it to the result table. Then you\n> read it again. Your subsequent queries will also have to read over\n> all the unneeded tuples just to get your primary key.\n\n\tConsidering that the result set is not very large and will be cached in \nRAM, this shouldn't be a problem.\n\n> then why useth thy not the DISTINCT clause when building thy result \n> table and thou shalt have no duplicates.\n\n\tBecause the result table contains no duplicates ;)\n\tI need to remove duplicates in this type of queries :\n\n-- get object owners info\nSELECT * FROM users WHERE id IN (SELECT user_id FROM results);\n\n\tAnd in this case I find IN() easier to read than DISTINCT (what I posted \nwas a simplification of my real use case...)\n\n> which is a perfect reason to use a temp table. Another variation on the \n> temp table scheme is use a result table and add a query_id.\n\n\tTrue. Doesn't solve my problem though : it's still complex, doesn't have \ngood rowcount estimation, bloats a table (I only need these records for \nthe duration of the transaction), etc.\n\t\n> We do something like this in our web application when users submit \n> complex queries. For each query we store tuples of (query_id,result_id)\n> in a result table. It's then easy for the web application to page the\n> result set.\n\n\tYes, that is about the only sane way to page big result sets.\n\n> A cleaner solution usually pays off in the long run whereas a hackish\n> or overly complex solution will bite you in the behind for sure as\n> time goes by.\n\n\tYes, but in this case temp tables add too much overhead. I wish there \nwere RAM based temp tables like in mysql. However I guess the current temp \ntable slowness comes from the need to mark their existence in the system \ncatalogs or something. That's why I proposed using cursors...\n", "msg_date": "Tue, 09 May 2006 12:10:37 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, May 09, 2006 at 12:10:37PM +0200, PFC wrote:\n> \tYes, but in this case temp tables add too much overhead. I wish \n> \tthere were RAM based temp tables like in mysql. However I guess the \n> current temp table slowness comes from the need to mark their existence in \n> the system catalogs or something. That's why I proposed using cursors...\n\nIt would be interesting to know what the bottleneck is for temp tables\nfor you. 
They do not go via the buffer-cache, they are stored in\nprivate memory in the backend, they are not xlogged. Nor flushed to\ndisk on backend exit. They're about as close to in-memory tables as\nyou're going to get...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Tue, 9 May 2006 12:36:32 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "[snip]\n> It would be interesting to know what the bottleneck is for temp tables\n> for you. They do not go via the buffer-cache, they are stored in\n[snip]\n\nIs it possible that the temp table creation is the bottleneck ? Would\nthat write into system catalogs ? If yes, maybe the system catalogs are\nnot adequately vacuumed/analyzed.\n\nJust a thought.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Tue, 09 May 2006 12:52:06 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n\n> It would be interesting to know what the bottleneck is for temp tables\n> for you. They do not go via the buffer-cache, they are stored in\n> private memory in the backend, they are not xlogged. Nor flushed to\n> disk on backend exit. They're about as close to in-memory tables as\n> you're going to get...\n\n\tHum...\n\tTimings are a mean over 100 queries, including roundtrip to localhost, \nvia a python script.\n\n0.038 ms BEGIN\n0.057 ms SELECT 1\n0.061 ms COMMIT\n\n0.041 ms BEGIN\n0.321 ms SELECT count(*) FROM bookmarks\n0.080 ms COMMIT\n\n\tthis test table contains about 250 rows\n\n0.038 ms BEGIN\n0.378 ms SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20\n0.082 ms COMMIT\n\n\tthe ORDER BY uses an index\n\n0.042 ms BEGIN\n0.153 ms DECLARE tmp SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM \nbookmarks ORDER BY annonce_id DESC LIMIT 20\n0.246 ms FETCH ALL FROM tmp\n0.048 ms MOVE FIRST IN tmp\n0.246 ms FETCH ALL FROM tmp\n0.048 ms CLOSE tmp\n0.084 ms COMMIT\n\n\tthe CURSOR is about as fast as a simple query\n\n0.101 ms BEGIN\n1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT \nNULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC \nLIMIT 20\n0.443 ms ANALYZE tmp\n0.365 ms SELECT * FROM tmp\n0.310 ms DROP TABLE tmp\n32.918 ms COMMIT\n\n\tCREATING the table is OK, but what happens on COMMIT ? I hear the disk \nseeking frantically.\n\nWith fsync=off, I get this :\n\n0.090 ms BEGIN\n1.103 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT \nNULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n0.439 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC \nLIMIT 20\n0.528 ms ANALYZE tmp\n0.364 ms SELECT * FROM tmp\n0.313 ms DROP TABLE tmp\n0.688 ms COMMIT\n\n\tGetting closer ?\n\tI'm betting on system catalogs updates. I get the same timings with \nROLLBACK instead of COMMIT. 
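One way to see that catalog traffic directly is to peek at the system tables from inside the transaction; a quick sketch reusing the same temp table definition as above:

    BEGIN;
    CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL,
                                 c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP;
    -- the temp table gets a row in the system catalogs like any other relation
    SELECT relname, relnamespace FROM pg_class WHERE oid = 'tmp'::regclass;
    -- and drags a set of pg_attribute rows along with it
    SELECT count(*) FROM pg_attribute WHERE attrelid = 'tmp'::regclass;
    ROLLBACK;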
Temp tables have a row in pg_class...\n\n\tAnother temporary table wart :\n\nBEGIN;\nCREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL, c \nTIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP;\nINSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;\n\nEXPLAIN ANALYZE SELECT * FROM tmp;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on tmp (cost=0.00..25.10 rows=1510 width=20) (actual \ntime=0.003..0.006 rows=20 loops=1)\n Total runtime: 0.030 ms\n(2 lignes)\n\nANALYZE tmp;\nEXPLAIN ANALYZE SELECT * FROM tmp;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Seq Scan on tmp (cost=0.00..1.20 rows=20 width=20) (actual \ntime=0.003..0.008 rows=20 loops=1)\n Total runtime: 0.031 ms\n\n\tWe see that the temp table has a very wrong estimated rowcount until it \nhas been ANALYZED.\n\tHowever, temporary tables do not support concurrent access (obviously) ; \nand in the case of on-commit-drop tables, inserts can't be rolled back \n(obviously), so an accurate rowcount could be maintained via a simple \ncounter...\n\n\n", "msg_date": "Tue, 09 May 2006 13:29:56 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "PFC <[email protected]> writes:\n> \tFeature proposal :\n\n> \tA way to store query results in a named buffer and reuse them in the next \n> queries.\n\nWhy not just fix the speed issues you're complaining about with temp\ntables? I see no reason to invent a new concept.\n\n(Now, \"just fix\" might be easier said than done, but inventing an\nessentially duplicate facility would be a lot of work too.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 May 2006 09:31:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big IN() clauses etc : feature proposal " }, { "msg_contents": "\nPFC <[email protected]> writes:\n\n> \n> \tI really like this. It's clean, efficient, and easy to use.\n> \n> \tThis would be a lot faster than using temp tables.\n> \tCreating cursors is very fast so we can create two, and avoid doing\n> twice the same work (ie. hashing the ids from the results to grab categories\n> only once).\n\nCreating cursors for a simple plan like a single sequential scan is fast\nbecause it's using the original data from the table. But your example was\npredicated on this part of the job being a complex query. 
If it's a complex\nquery involving joins and groupings, etc, then it will have to be materialized\nand there's no (good) reason for that to be any faster than a temporary table\nwhich is effectively the same thing.\n\n-- \ngreg\n\n", "msg_date": "09 May 2006 11:00:29 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, 2006-05-09 at 13:29 +0200, PFC wrote:\n> 0.101 ms BEGIN\n> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT \n> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC \n> LIMIT 20\n> 0.443 ms ANALYZE tmp\n> 0.365 ms SELECT * FROM tmp\n> 0.310 ms DROP TABLE tmp\n> 32.918 ms COMMIT\n\nDoes the time for commit change much if you leave out the analyze?\n\n\n", "msg_date": "Tue, 09 May 2006 08:58:05 -0700", "msg_from": "Mitchell Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> Creating cursors for a simple plan like a single sequential scan is fast\n> because it's using the original data from the table.\n\n\tI used the following query :\n\nSELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20\n\n\tIt's a backward index scan + limit... not a seq scan. And it's damn fast :\n\n0.042 ms BEGIN\n0.153 ms DECLARE tmp SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM \nbookmarks ORDER BY annonce_id DESC LIMIT 20\n0.246 ms FETCH ALL FROM tmp\n0.048 ms MOVE FIRST IN tmp\n0.246 ms FETCH ALL FROM tmp\n0.048 ms CLOSE tmp\n0.084 ms COMMIT\n\n\n> But your example was\n> predicated on this part of the job being a complex query. If it's a \n> complex\n> query involving joins and groupings, etc, then it will have to be \n> materialized\n> and there's no (good) reason for that to be any faster than a temporary \n> table\n> which is effectively the same thing.\n\n\tYou mean the cursors'storage is in fact the same internal machinery as a \ntemporary table ?\n\n\tIn that case, this raises an interesting question : why is the cursor \nfaster ?\n\n\tLet's try a real-life example from my website : it is a search query \n(quite complex) which is then joined to a lot of tables to resolve FKeys.\n\tTo that query I must add add an application-made join using a big IN() \nclause extracted from the data.\n\tTimings includes the time to fetch the results into Python.\n\tThe \"running total\" column is the sum of all timings since the BEGIN.\n\n\nquery_time running_total rows query\n0.061 ms 0.061 ms -1 BEGIN\n23.420 ms 23.481 ms 85 SELECT * FROM (huge query with a \nlot of joins)\n4.318 ms 27.799 ms 2 SELECT l.*, u.login, u.bg_color \n FROM annonces_log l, users u WHERE u.id=l.user_id AND l.annonce_id IN \n(list of ids from previous query) ORDER BY annonce_id, added\n0.241 ms 28.040 ms -1 COMMIT\n\n\t(Just in case you want to hurt yourself, here's the EXPLAIN ANALYZE \noutput : http://peufeu.com/temp/big_explain.txt)\n\tUsing a cursor takes about the same time.\n\n\tAlso, doing just the search query takes about 12 ms, the joins take up \nthe rest.\n\n\tNow, I'll rewrite my query eliminating the joins and using a temp table.\n\tStoring the whole result in the temp table will be too slow, because \nthere are too many columns.\n\tTherefore I will only store the primary and foreign key columns, and join \nagain to the main table to get the full records.\n\nquery_time running_total rows 
query\n0.141 ms 0.141 ms -1 BEGIN\n\n\tDo the search :\n\n8.229 ms 8.370 ms -1 CREATE TEMPORARY TABLE tmp AS \nSELECT id, city_id, zipcode, contact_id, contact_group_id, price/terrain \nas sort FROM (stripped down search query)\n0.918 ms 9.287 ms -1 ANALYZE tmp\n\n\tFetch the main data to display :\n\n7.663 ms 16.951 ms 85 SELECT a.* FROM tmp t, \nannonces_display a WHERE a.id=t.id ORDER BY t.sort\n\n\tFetch log entries associates with each row (one row to many log entries) :\n\n1.021 ms 17.972 ms 2 SELECT l.*, u.login, u.bg_color \n FROM annonces_log l, users u, tmp t WHERE u.id=l.user_id AND l.annonce_id \n= t.id ORDER BY annonce_id, added\n3.468 ms 21.440 ms 216 SELECT annonce_id, \narray_accum(list_id) AS list_ids, array_accum(COALESCE(user_id,0)) AS \nlist_added_by, max(added) AS added_to_list FROM bookmarks GROUP BY \nannonce_id\n\n\tResolve foreign key relations\n\n1.034 ms 22.474 ms 37 SELECT r.annonce_id FROM \nread_annonces r, tmp t WHERE r.annonce_id = t.id\n0.592 ms 23.066 ms 9 SELECT * FROM cities_dist_zipcode \nWHERE zipcode IN (SELECT zipcode FROM tmp)\n0.716 ms 23.782 ms 11 SELECT * FROM cities_dist WHERE id \nIN (SELECT city_id FROM tmp)\n1.125 ms 24.907 ms 45 SELECT * FROM contacts WHERE id IN \n(SELECT contact_id FROM tmp)\n0.799 ms 25.705 ms 42 SELECT * FROM contact_groups WHERE \nid IN (SELECT contact_group_id FROM tmp)\n0.463 ms 26.169 ms -1 DROP TABLE tmp\n32.208 ms 58.377 ms -1 COMMIT\n\n\n\tFrom this we see :\n\n\tUsing a temporary table is FASTER than doing the large query with all the \njoins. (26 ms versus 28 ms).\n\tIt's also nicer and cleaner.\n\tHowever the COMMIT takes as much time as all the queries together !\n\n\tLet's run with fsync=off :\n\nquery_time running_total rows query\n0.109 ms 0.109 ms -1 BEGIN\n8.321 ms 8.430 ms -1 CREATE TEMPORARY TABLE tmp AS \nSELECT id, city_id, zipcode, contact_id, contact_group_id, price/terrain \nas sort FROM (stripped down search query)\n0.849 ms 9.280 ms -1 ANALYZE tmp\n7.360 ms 16.640 ms 85 SELECT a.* FROM tmp t, \nannonces_display a WHERE a.id=t.id ORDER BY t.sort\n1.067 ms 17.707 ms 2 SELECT l.*, u.login, u.bg_color \n FROM annonces_log l, users u, tmp t WHERE u.id=l.user_id AND l.annonce_id \n= t.id ORDER BY annonce_id, added\n3.322 ms 21.030 ms 216 SELECT annonce_id, \narray_accum(list_id) AS list_ids, array_accum(COALESCE(user_id,0)) AS \nlist_added_by, max(added) AS added_to_list FROM bookmarks GROUP BY \nannonce_id\n0.896 ms 21.926 ms 37 SELECT r.annonce_id FROM \nread_annonces r, tmp t WHERE r.annonce_id = t.id\n0.573 ms 22.499 ms 9 SELECT * FROM cities_dist_zipcode \nWHERE zipcode IN (SELECT zipcode FROM tmp)\n0.678 ms 23.177 ms 11 SELECT * FROM cities_dist WHERE id \nIN (SELECT city_id FROM tmp)\n1.064 ms 24.240 ms 45 SELECT * FROM contacts WHERE id IN \n(SELECT contact_id FROM tmp)\n0.772 ms 25.013 ms 42 SELECT * FROM contact_groups WHERE \nid IN (SELECT contact_group_id FROM tmp)\n0.473 ms 25.485 ms -1 DROP TABLE tmp\n1.777 ms 27.262 ms -1 COMMIT\n\n\tThere, it's good again.\n\n\tSo, when fsync=on, and temporary tables are used, something slow happens \non commit (even if the temp table is ON COMMIT DROP...)\n\tThoughts ?\n\n\n", "msg_date": "Tue, 09 May 2006 18:29:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> Does the time for commit change much if you leave out the analyze?\n\n\tYes, when I don't ANALYZE the temp table, commit time changes from 30 ms \nto about 15 ms ; but the 
queries get horrible plans (see below) :\n\n\tFun thing is, the rowcount from a temp table (which is the problem here) \nshould be available without ANALYZE ; as the temp table is not concurrent, \nit would be simple to inc/decrement a counter on INSERT/DELETE...\n\n\tI like the temp table approach : it can replace a large, complex query \nwith a batch of smaller and easier to optimize queries...\n\nEXPLAIN ANALYZE SELECT a.* FROM tmp t, annonces_display a WHERE a.id=t.id \nORDER BY t.sort;\n QUERY \nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=3689.88..3693.15 rows=1310 width=940) (actual \ntime=62.327..62.332 rows=85 loops=1)\n Sort Key: t.sort\n -> Merge Join (cost=90.93..3622.05 rows=1310 width=940) (actual \ntime=5.595..61.373 rows=85 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using annonces_pkey on annonces \n(cost=0.00..3451.39 rows=10933 width=932) (actual time=0.012..6.620 \nrows=10916 loops=1)\n -> Sort (cost=90.93..94.20 rows=1310 width=12) (actual \ntime=0.098..0.105 rows=85 loops=1)\n Sort Key: t.id\n -> Seq Scan on tmp t (cost=0.00..23.10 rows=1310 \nwidth=12) (actual time=0.004..0.037 rows=85 loops=1)\n Total runtime: 62.593 ms\n\nEXPLAIN ANALYZE SELECT * FROM contacts WHERE id IN (SELECT contact_id FROM \ntmp);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=28.88..427.82 rows=200 width=336) (actual \ntime=0.156..5.019 rows=45 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".contact_id)\n -> Seq Scan on contacts (cost=0.00..349.96 rows=9396 width=336) \n(actual time=0.009..3.373 rows=9396 loops=1)\n -> Hash (cost=28.38..28.38 rows=200 width=4) (actual \ntime=0.082..0.082 rows=46 loops=1)\n -> HashAggregate (cost=26.38..28.38 rows=200 width=4) (actual \ntime=0.053..0.064 rows=46 loops=1)\n -> Seq Scan on tmp (cost=0.00..23.10 rows=1310 width=4) \n(actual time=0.001..0.015 rows=85 loops=1)\n Total runtime: 5.092 ms\n\nANALYZE tmp;\nANALYZE\nannonces=> EXPLAIN ANALYZE SELECT a.* FROM tmp t, annonces_display a WHERE \na.id=t.id ORDER BY t.sort;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=508.63..508.84 rows=85 width=940) (actual time=1.830..1.832 \nrows=85 loops=1)\n Sort Key: t.sort\n -> Nested Loop (cost=0.00..505.91 rows=85 width=940) (actual \ntime=0.040..1.188 rows=85 loops=1)\n -> Seq Scan on tmp t (cost=0.00..1.85 rows=85 width=12) (actual \ntime=0.003..0.029 rows=85 loops=1)\n -> Index Scan using annonces_pkey on annonces (cost=0.00..5.89 \nrows=1 width=932) (actual time=0.003..0.004 rows=1 loops=85)\n Index Cond: (annonces.id = \"outer\".id)\n Total runtime: 2.053 ms\n(7 lignes)\n\nannonces=> EXPLAIN ANALYZE SELECT * FROM contacts WHERE id IN (SELECT \ncontact_id FROM tmp);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.06..139.98 rows=36 width=336) (actual \ntime=0.072..0.274 rows=45 loops=1)\n -> HashAggregate (cost=2.06..2.51 rows=45 width=4) (actual \ntime=0.052..0.065 rows=46 loops=1)\n -> Seq Scan on tmp (cost=0.00..1.85 rows=85 width=4) (actual \ntime=0.003..0.016 rows=85 loops=1)\n -> Index Scan using contacts_pkey on contacts (cost=0.00..3.04 rows=1 \nwidth=336) (actual 
time=0.003..0.004 rows=1 loops=46)\n Index Cond: (contacts.id = \"outer\".contact_id)\n Total runtime: 0.341 ms\n", "msg_date": "Tue, 09 May 2006 18:38:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On 5/9/06, PFC <[email protected]> wrote:\n> > You might consider just selecting your primary key or a set of\n> > primary keys to involved relations in your search query. If you\n> > currently use \"select *\" this can make your result set very large.\n> >\n> > Copying all the result set to the temp. costs you additional IO\n> > that you propably dont need.\n>\n> It is a bit of a catch : I need this information, because the purpose of\n> the query is to retrieve these objects. I can first store the ids, then\n> retrieve the objects, but it's one more query.\n>\n> > Also you might try:\n> > SELECT * FROM somewhere JOIN result USING (id)\n> > Instead of:\n> > SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)\n>\n> Yes you're right in this case ; however the query to retrieve the owners\n> needs to eliminate duplicates, which IN() does.\n\nWell, you can either\n SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS\na USING (id);\nor even, for large number of ids:\n CREATE TEMPORARY TABLE result_ids AS SELECT id FROM RESULT GROUP BY id;\n SELECT * FROM somewhere JOIN result_ids USING (id);\n\n\n> > On the other hand if your search query runs in 10ms it seems to be fast\n> > enough for you to run it multiple times. Theres propably no point in\n> > optimizing anything in such case.\n>\n> I don't think so :\n> - 10 ms is a mean time, sometimes it can take much more time, sometimes\n> it's faster.\n> - Repeating the query might yield different results if records were added\n> or deleted in the meantime.\n\nYou may SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nthough locking might bite you. :)\n\n> - Complex search queries have imprecise rowcount estimates ; hence the\n> joins that I would add to them will get suboptimal plans.\n>\n> Using a temp table is really the cleanest solution now ; but it's too\n> slow so I reverted to generating big IN() clauses in the application.\n\nA thought, haven't checked it though, but...\n\nYou might want to use PL to store values, say PLperl, or even C, say:\n\ncreate or replace function perl_store(name text, val int) returns void\nas $$ my $name = shift; push @{$foo{$name}}, shift; return $$ LANGUAGE\nplperl;\n\nselect perl_store('someids', id) from something group by id;\n(you may need to warp it inside count())\n\nThen use it:\n\ncreate or replace function perl_retr(name text) returns setof int as\n$$ my $name = shift; return $foo{$name} $$ LANGUAGE plperl;\n\nselect * from someother join perl_retr('someids') AS a(id) using (id);\n\nAll is in the memory. Of course, you need to do some cleanup, test it,\netc, etc, etc. 
:)\n\nShould work faster than a in-application solution :)\n\n Regards,\n Dawid\n", "msg_date": "Tue, 9 May 2006 18:43:23 +0200", "msg_from": "\"Dawid Kuroczko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n\n>> > SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)\n\n> Well, you can either\n> SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS\n> a USING (id);\n\n\tIt's the same thing (and postgres knows it)\n\n> You might want to use PL to store values, say PLperl, or even C, say:\n\n\tI tried.\n\tThe problem is that you need a set-returning function to retrieve the \nvalues. SRFs don't have rowcount estimates, so the plans suck.\n\n> Should work faster than a in-application solution :)\n\n\tShould, but don't, because of what I said above...\n\n\tWith the version in CVS tip, supprting a fast =ANY( array ), this should \nbe doable, though.\n", "msg_date": "Tue, 09 May 2006 18:49:22 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "PFC <[email protected]> writes:\n> \tFun thing is, the rowcount from a temp table (which is the problem here) \n> should be available without ANALYZE ; as the temp table is not concurrent, \n> it would be simple to inc/decrement a counter on INSERT/DELETE...\n\nNo, because MVCC rules still apply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 May 2006 15:13:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal " }, { "msg_contents": "Hi, PFC,\n\nPFC wrote:\n\n> The problem is that you need a set-returning function to retrieve\n> the values. SRFs don't have rowcount estimates, so the plans suck.\n\nWhat about adding some way of rowcount estimation to SRFs, in the way of:\n\nCREATE FUNCTION foo (para, meters) RETURNS SETOF bar AS\n$$ ... function code ... $$ LANGUAGE plpgsql\nROWCOUNT_ESTIMATOR $$ ... estimation code ... $$ ;\n\nInternally, this could create two functions, foo (para, meters) and\nestimate_foo(para, meters) that are the same language and coupled\ntogether (just like a SERIAL column and its sequence). The estimator\nfunctions have an implicit return parameter of int8. Parameters may be\nNULL when they are not known at query planning time.\n\nWhat do you think about this idea?\n\nThe same scheme could be used to add a CPUCOST_ESTIMATOR to expensive\nfunctions.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 15:29:16 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n>> The problem is that you need a set-returning function to retrieve\n>> the values. SRFs don't have rowcount estimates, so the plans suck.\n>\n> What about adding some way of rowcount estimation to SRFs, in the way of:\n>\n> CREATE FUNCTION foo (para, meters) RETURNS SETOF bar AS\n> $$ ... function code ... $$ LANGUAGE plpgsql\n> ROWCOUNT_ESTIMATOR $$ ... estimation code ... 
$$ ;\n>\n> Internally, this could create two functions, foo (para, meters) and\n> estimate_foo(para, meters) that are the same language and coupled\n> together (just like a SERIAL column and its sequence). The estimator\n> functions have an implicit return parameter of int8. Parameters may be\n> NULL when they are not known at query planning time.\n>\n> What do you think about this idea?\n\n\tIt would be very useful.\n\tA few thoughts...\n\n\tYou need to do some processing to know how many rows the function would \nreturn.\n\tOften, this processing will be repeated in the function itself.\n\tSometimes it's very simple (ie. the function will RETURN NEXT each \nelement in an array, you know the array length...)\n\tSometimes, for functions returning few rows, it might be faster to \ncompute the entire result set in the cost estimator.\n\t\n\tSo, it might be a bit hairy to find a good compromise.\n\n\tIdeas on how to do this (clueless hand-waving mode) :\n\n\t1- Add new attributes to set-returning functions ; basically a list of \nfunctions, each returning an estimation parameter (rowcount, cpu tuple \ncost, etc).\n\tThis is just like you said.\n\n\t2- Add an \"estimator\", to a function, which would just be another \nfunction, returning one row, a record, containing the estimations in \nseveral columns (rowcount, cpu tuple cost, etc).\n\tPros : only one function call to estimate, easier and faster, the \nestimator just leaves the unknown columns to NULL.\n\tThe estimator needs not be in the same language as the function itself. \nIt's just another function.\n\n\t3- The estimator could be a set-returning function itself which would \nreturn rows mimicking pg_statistics\n\tPros : planner-friendly, the planner would SELECT from the SRF instead of \nlooking in pg_statistics, and the estimator could tell the planner that, \nfor instance, the function will return unique values.\n\tCons : complex, maybe slow\n\n\t4- Add simple flags to a function, like :\n\t- returns unique values\n\t- returns sorted values (no need to sort my results)\n\t- please execute me and store my results in a temporary storage, count \nthe rows returned, and plan the outer query accordingly\n\t- etc.\n\t\n", "msg_date": "Wed, 10 May 2006 16:38:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:\n> \tYou need to do some processing to know how many rows the function \n> \twould return.\n> \tOften, this processing will be repeated in the function itself.\n> \tSometimes it's very simple (ie. the function will RETURN NEXT each \n> element in an array, you know the array length...)\n> \tSometimes, for functions returning few rows, it might be faster to \n> compute the entire result set in the cost estimator.\n\nI think the best would probably be to assign a constant. An SRF will\ngenerally return between one of 1-10, 10-100, 100-1000, etc. You don't\nneed exact number, you just need to get within an order of magnitude\nand a constant will work fine for that.\n\nHow many functions sometimes return one and sometimes a million rows?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. 
To each according to his ability to litigate.", "msg_date": "Wed, 10 May 2006 16:55:51 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "Hi, PFC,\n\nPFC wrote:\n\n> You need to do some processing to know how many rows the function\n> would return.\n> Often, this processing will be repeated in the function itself.\n> Sometimes it's very simple (ie. the function will RETURN NEXT each \n> element in an array, you know the array length...)\n> Sometimes, for functions returning few rows, it might be faster to \n> compute the entire result set in the cost estimator.\n\nI know, but we only have to estmiate the number of rows to give a hint\nto the query planner, so we can use lots of simplifications.\n\nE. G. for generate_series we return ($2-$1)/$3, and for some functions\neven constant estimates will be good enough.\n\n> - please execute me and store my results in a temporary storage,\n> count the rows returned, and plan the outer query accordingly\n\nThat's an interesting idea.\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 17:04:25 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "Martijn van Oosterhout wrote:\n> On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:\n>> \tYou need to do some processing to know how many rows the function \n>> \twould return.\n>> \tOften, this processing will be repeated in the function itself.\n>> \tSometimes it's very simple (ie. the function will RETURN NEXT each \n>> element in an array, you know the array length...)\n>> \tSometimes, for functions returning few rows, it might be faster to \n>> compute the entire result set in the cost estimator.\n> \n> I think the best would probably be to assign a constant. An SRF will\n> generally return between one of 1-10, 10-100, 100-1000, etc. You don't\n> need exact number, you just need to get within an order of magnitude\n> and a constant will work fine for that.\n> \n> How many functions sometimes return one and sometimes a million rows?\n\nIt will probably be quite common for the number to depend on the number\nof rows in other tables. Even if this is fairly constant within one db\n(some assumption), it is likely to be different in others using the same\nfunction definition. Perhaps a better solution would be to cache the\nresult of the estimator function.\n\n/Nis\n\n\n", "msg_date": "Wed, 10 May 2006 17:30:07 +0200", "msg_from": "Nis Jorgensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "Hi, Nils,\n\nNis Jorgensen wrote:\n\n> It will probably be quite common for the number to depend on the number\n> of rows in other tables. Even if this is fairly constant within one db\n> (some assumption), it is likely to be different in others using the same\n> function definition. Perhaps a better solution would be to cache the\n> result of the estimator function.\n\nSophisticated estimator functions are free to use the pg_statistics\nviews for their row count estimation.\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. 
| Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 17:59:26 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, May 09, 2006 at 11:33:42AM +0200, PFC wrote:\n> \t- Repeating the query might yield different results if records were \n> \tadded or deleted in the meantime.\n\nBTW, SET TRANSACTION ISOLATION LEVEL serializeable or BEGIN ISOLATION\nLEVEL serializeable would cure that.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 10 May 2006 13:40:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:\n> PFC <[email protected]> writes:\n> > \tFun thing is, the rowcount from a temp table (which is the problem here) \n> > should be available without ANALYZE ; as the temp table is not concurrent, \n> > it would be simple to inc/decrement a counter on INSERT/DELETE...\n> \n> No, because MVCC rules still apply.\n\nBut can anything ever see more than one version of what's in the table?\nEven if you rollback you should still be able to just update a row\ncounter because nothing else would be able to see what was rolled back.\n\nSpeaking of which, if a temp table is defined as ON COMMIT DROP or\nDELETE ROWS, there shouldn't be any need to store xmin/xmax, only\ncmin/cmax, correct?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 10 May 2006 14:00:11 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, May 09, 2006 at 01:29:56PM +0200, PFC wrote:\n> 0.101 ms BEGIN\n> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT \n> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC \n> LIMIT 20\n> 0.443 ms ANALYZE tmp\n> 0.365 ms SELECT * FROM tmp\n> 0.310 ms DROP TABLE tmp\n> 32.918 ms COMMIT\n> \n> \tCREATING the table is OK, but what happens on COMMIT ? I hear the \n> \tdisk seeking frantically.\n> \n> With fsync=off, I get this :\n> \n> 0.090 ms BEGIN\n> 1.103 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT \n> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n> 0.439 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC \n> LIMIT 20\n> 0.528 ms ANALYZE tmp\n> 0.364 ms SELECT * FROM tmp\n> 0.313 ms DROP TABLE tmp\n> 0.688 ms COMMIT\n> \n> \tGetting closer ?\n> \tI'm betting on system catalogs updates. I get the same timings with \n> ROLLBACK instead of COMMIT. 
Temp tables have a row in pg_class...\n\nHave you tried getting a profile of what exactly PostgreSQL is doing\nthat takes so long when creating a temp table?\n\nBTW, I suspect catalogs might be the answer, which is why Oracle has you\ndefine a temp table once (which does all the work of putting it in the\ncatalog) and then you just use it accordingly in each individual\nsession.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 10 May 2006 14:06:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> Speaking of which, if a temp table is defined as ON COMMIT DROP or\n> DELETE ROWS, there shouldn't be any need to store xmin/xmax, only\n> cmin/cmax, correct?\n\n\tYes, that's that type of table I was thinking about...\n\tYou can't ROLLBACK a transaction on such a table.\n\tYou can however rollback a savepoint and use \"INSERT INTO tmp SELECT FROM \ntmp\" which implies MVCC (I think ?)\n\n\tI was suggesting to be able to use FETCH (from a cursor) in the same way \nas SELECT, effectively using a named cursor (DECLARE...) as a simpler, \nfaster version of a temporary table, but there is another (better ?) \noption :\n\n\tIf rowcount estimates for functions are implemented, then a set-returning \nfunction can be written, which takes as argument a named cursor, and \nreturns its rows.\n\tIt would have accurate rowcount estimation (if the cursor is WITH SCROLL, \nwhich is the case here, rows are stored, so we know their number).\n\n\tThen you could do :\n\nDECLARE my_cursor ... AS (query that we only want to do once)\nSELECT ... FROM table1 JOIN fetch_cursor( my_cursor ) ON ...\nSELECT ... FROM table2 JOIN fetch_cursor( my_cursor ) ON ...\nSELECT ... FROM table3 JOIN fetch_cursor( my_cursor ) ON ...\n\n\tNo need to redefine the FETCH keyword.\n\tAn interesting functionalyty with minimal hassle.\n\n", "msg_date": "Wed, 10 May 2006 21:23:59 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Tue, May 09, 2006 at 06:29:31PM +0200, PFC wrote:\n> \tYou mean the cursors'storage is in fact the same internal machinery \n> \tas a temporary table ?\n\nUse the source, Luke...\n\nSee tuplestore_begin_heap in backend/utils/sort/tuplestore.c and\nheap_create_with_catalog in backend/catalog/heap.c. You'll find that\ncreating a tuplestore is far easier than creating a temp table.\n\nPerhaps it would be worth creating a class of temporary tables that used\na tuplestore, although that would greatly limit what could be done with\nthat temp table.\n\nSomething else worth considering is not using the normal catalog methods\nfor storing information about temp tables, but hacking that together\nwould probably be a rather large task.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 10 May 2006 14:24:01 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n\n> Have you tried getting a profile of what exactly PostgreSQL is doing\n> that takes so long when creating a temp table?\n\n\tNope, I'm not proficient in the use of these tools (I stopped using C \nsome time ago).\n\n> BTW, I suspect catalogs might be the answer,\n\n\tProbably, because :\n\n\t- Temp tables don't use fsync (I hope)\n\t- Catalogs do\n\t- fsync=off makes COMMIT fast\n\t- fsync=on makes COMMIT slow\n\t- fsync=on and using ANALYZE makes COMMIT slower (more updates to the \ncatalogs I guess)\n\n> which is why Oracle has you\n> define a temp table once (which does all the work of putting it in the\n> catalog) and then you just use it accordingly in each individual\n> session.\n\n\tInteresting (except for the ANALYZE bit...)\n\n\n", "msg_date": "Wed, 10 May 2006 21:27:21 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n> On Tue, May 09, 2006 at 06:29:31PM +0200, PFC wrote:\n>> \tYou mean the cursors'storage is in fact the same internal machinery\n>> \tas a temporary table ?\n>\n> Use the source, Luke...\n\n\tLOL, yeah, I should have, sorry.\n\n> See tuplestore_begin_heap in backend/utils/sort/tuplestore.c and\n> heap_create_with_catalog in backend/catalog/heap.c. You'll find that\n> creating a tuplestore is far easier than creating a temp table.\n\n\tI had used intuition (instead of the source) to come at the same \nconclusion regarding the level of complexity of these two...\n\tBut I'll look at the source ;)\n\n> Perhaps it would be worth creating a class of temporary tables that used\n> a tuplestore, although that would greatly limit what could be done with\n> that temp table.\n\n\tJust selecting from it I guess, but that's all that's needed. Anymore \nwould duplicate the functionality of a temp table.\n\tI find cursors awkward. The application can FETCH from them, but postgres \nitself can't do it in SQL, unless using FOR.. IN in plpgsql...\n\tIt would be a powerful addition to be able to split queries, factor out \ncommon parts between multiple queries, etc, using this system, it can even \nbe used to execute an inner part of a query, then plan the rest according \nto the results and execute it... without the overhead of a temp table.\n\n\n\n", "msg_date": "Wed, 10 May 2006 21:35:39 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\"Jim C. 
Nasby\" <[email protected]> writes:\n> On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:\n>> PFC <[email protected]> writes:\n>>> Fun thing is, the rowcount from a temp table (which is the problem here) \n>>> should be available without ANALYZE ; as the temp table is not concurrent, \n>>> it would be simple to inc/decrement a counter on INSERT/DELETE...\n>> \n>> No, because MVCC rules still apply.\n\n> But can anything ever see more than one version of what's in the table?\n\nYes, because there can be more than one active snapshot within a single\ntransaction (think about volatile functions in particular).\n\n> Speaking of which, if a temp table is defined as ON COMMIT DROP or\n> DELETE ROWS, there shouldn't be any need to store xmin/xmax, only\n> cmin/cmax, correct?\n\nNo; you forgot about subtransactions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2006 20:31:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal " }, { "msg_contents": "\n\"Jim C. Nasby\" <[email protected]> writes:\n\n> Perhaps it would be worth creating a class of temporary tables that used\n> a tuplestore, although that would greatly limit what could be done with\n> that temp table.\n\nI can say that I've seen plenty of instances where the ability to create\ntemporary tables very quickly with no overhead over the original query would\nbe useful.\n\nFor instance, in one site I had to do exactly what I always advise others\nagainst: use offset/limit to implement paging. So first I have to execute the\nquery with a count(*) aggregate to get the total, then execute the same query\na second time to fetch the actual page of interest. This would be (or could be\narranged to be) within the same transaction and doesn't require the ability to\nexecute any dml against the tuple store which I imagine would be the main\nissues?\n\nFor bonus points what would be real neat would be if the database could notice\nshared plan segments, keep around the materialized tuple store, and substitute\nit instead of reexecuting that segment of the plan. Of course this requires\nkeeping track of transaction snapshot states and making sure it's still\ncorrect.\n\n> Something else worth considering is not using the normal catalog methods\n> for storing information about temp tables, but hacking that together\n> would probably be a rather large task.\n\nIt would be nice if using this feature didn't interact poorly with preplanning\nall your queries and using the cached plans. Perhaps if you had some way to\ncreate a single catalog entry that defined all the column names and types and\nthen simply pointed it at a new tuplestore each time without otherwise\naltering the catalog entry?\n\n-- \ngreg\n\n", "msg_date": "11 May 2006 11:35:34 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:\n> \"Jim C. 
Nasby\" <[email protected]> writes:\n> > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:\n> >> PFC <[email protected]> writes:\n> >>> Fun thing is, the rowcount from a temp table (which is the problem here) \n> >>> should be available without ANALYZE ; as the temp table is not concurrent, \n> >>> it would be simple to inc/decrement a counter on INSERT/DELETE...\n> >> \n> >> No, because MVCC rules still apply.\n> \n> > But can anything ever see more than one version of what's in the table?\n> \n> Yes, because there can be more than one active snapshot within a single\n> transaction (think about volatile functions in particular).\n\nAny documentation on how snapshot's work? They're a big mystery to me.\n:(\n\n> > Speaking of which, if a temp table is defined as ON COMMIT DROP or\n> > DELETE ROWS, there shouldn't be any need to store xmin/xmax, only\n> > cmin/cmax, correct?\n> \n> No; you forgot about subtransactions.\n\nOh, I thought those were done with cmin and cmax... if that's not what\ncmin/cmax are for, then what is?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 12:18:06 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, 2006-05-11 at 12:18, Jim C. Nasby wrote:\n> On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:\n> > \"Jim C. Nasby\" <[email protected]> writes:\n> > > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:\n> > >> PFC <[email protected]> writes:\n> > >>> Fun thing is, the rowcount from a temp table (which is the problem here) \n> > >>> should be available without ANALYZE ; as the temp table is not concurrent, \n> > >>> it would be simple to inc/decrement a counter on INSERT/DELETE...\n> > >> \n> > >> No, because MVCC rules still apply.\n> > \n> > > But can anything ever see more than one version of what's in the table?\n> > \n> > Yes, because there can be more than one active snapshot within a single\n> > transaction (think about volatile functions in particular).\n> \n> Any documentation on how snapshot's work? They're a big mystery to me.\n> :(\n\nhttp://www.postgresql.org/docs/8.1/interactive/mvcc.html\n\nDoes the concurrency doc not cover this subject well enough (I'm not\nbeing sarcastic, it's a real question)\n", "msg_date": "Thu, 11 May 2006 13:02:57 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:\n> > Yes, because there can be more than one active snapshot within a single\n> > transaction (think about volatile functions in particular).\n> \n> Any documentation on how snapshot's work? They're a big mystery to me.\n> :(\n\nA snapshot is a particular view on a database. In particular, you have\nto be able to view a version of the database that doesn't have you own\nchanges, otherwise an UPDATE would keep updating the same tuple. Also,\nfor example, a cursor might see an older version of the database than\nqueries being run. I don't know of any particular information about it\nthough. Google wasn't that helpful.\n\n> > No; you forgot about subtransactions.\n> \n> Oh, I thought those were done with cmin and cmax... 
if that's not what\n> cmin/cmax are for, then what is?\n\ncmin/cmax are command counters. So in the sequence:\n\nBEGIN;\nSELECT 1;\nSELECT 2;\n\nThe second query runs as the same transaction ID but a higher command\nID so it can see the result of the previous query. Subtransactions are\n(AIUI anyway) done by having transactions depend on other transactions.\nWhen you start a savepoint you start a new transaction ID whose status\nis tied to its top-level transaction ID but can also be individually\nrolledback.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Thu, 11 May 2006 20:03:19 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:\n> I can say that I've seen plenty of instances where the ability to create\n> temporary tables very quickly with no overhead over the original query would\n> be useful.\n\nI wonder if this requires what the standard refers to as a global\ntemporary table. As I read it (which may be wrong, I find the language\nobtuse), a global temporary table is a temporary table whose structure\nis predefined. So, you'd define it once, updating the catalog only once\nbut still get a table that is emptied each startup.\n\nOfcourse, it may not be what the standard means, but it still seems\nlike a useful idea, to cut down on schema bloat.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Thu, 11 May 2006 20:43:46 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 08:03:19PM +0200, Martijn van Oosterhout wrote:\n> On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:\n> > > Yes, because there can be more than one active snapshot within a single\n> > > transaction (think about volatile functions in particular).\n> > \n> > Any documentation on how snapshot's work? They're a big mystery to me.\n> > :(\n> \n> A snapshot is a particular view on a database. In particular, you have\n> to be able to view a version of the database that doesn't have you own\n> changes, otherwise an UPDATE would keep updating the same tuple. Also,\n> for example, a cursor might see an older version of the database than\n> queries being run. I don't know of any particular information about it\n> though. Google wasn't that helpful.\n\nAhh, I'd forgotten that commands sometimes needed to see prior data. But\nthat's done with cmin/max, right?\n\nIn any case, going back to the original thought/question... my point was\nthat in a single-session table, it should be possible to maintain a\nrow counter. Worst case, you might have to keep a seperate count for\neach CID or XID, but that doesn't seem that unreasonable for a single\nbackend to do, unless you end up running a heck of a lot of commands.\nMore importantnly, it seems a lot more feasable to at least know how\nmany rows there are every time you COMMIT, which means you can\npotentially avoid having to ANALYZE.\n\n> > > No; you forgot about subtransactions.\n> > \n> > Oh, I thought those were done with cmin and cmax... 
if that's not what\n> > cmin/cmax are for, then what is?\n> \n> cmin/cmax are command counters. So in the sequence:\n> \n> BEGIN;\n> SELECT 1;\n> SELECT 2;\n> \n> The second query runs as the same transaction ID but a higher command\n> ID so it can see the result of the previous query. Subtransactions are\n> (AIUI anyway) done by having transactions depend on other transactions.\n> When you start a savepoint you start a new transaction ID whose status\n> is tied to its top-level transaction ID but can also be individually\n> rolledback.\n\nHmmm, interesting. I would have thought it was tied to CID, but I guess\nXID has more of that machinery around to support rollback.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 14:57:10 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 08:43:46PM +0200, Martijn van Oosterhout wrote:\n> On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:\n> > I can say that I've seen plenty of instances where the ability to create\n> > temporary tables very quickly with no overhead over the original query would\n> > be useful.\n> \n> I wonder if this requires what the standard refers to as a global\n> temporary table. As I read it (which may be wrong, I find the language\n> obtuse), a global temporary table is a temporary table whose structure\n> is predefined. So, you'd define it once, updating the catalog only once\n> but still get a table that is emptied each startup.\n> \n> Ofcourse, it may not be what the standard means, but it still seems\n> like a useful idea, to cut down on schema bloat.\n\nIIRC that's the exact syntax Oracle uses:\n\nCREATE GLOBAL TEMPORARY TABLE ...\n\nI always found it a bit odd, since it always seemed to me like a global\ntemporary table would be one that every backend could read... something\nakin to a real table that doesn't worry about fsync or any of that (and\nis potentially not backed on disk at all).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 15:00:28 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" } ]
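The pattern that the timings in this thread converge on can be condensed into a short, self-contained sketch. The table and column names below (search_hits, annonces, contacts) are placeholders rather than the poster's real schema; the point is only the shape of the approach: keep just the key columns, use ON COMMIT DROP, and run an explicit ANALYZE so the follow-up joins see a realistic row count instead of the default estimate.

BEGIN;

CREATE TEMPORARY TABLE search_hits ON COMMIT DROP AS
    SELECT id, contact_id            -- keep only the keys the later joins need
    FROM   annonces
    WHERE  price > 0                 -- stand-in for the complex search conditions
    LIMIT  100;

ANALYZE search_hits;                 -- without this the planner falls back on a crude
                                     -- default (rows=1310 in the plans shown above)

-- each follow-up query is now small and easy to plan
SELECT a.* FROM annonces a JOIN search_hits h USING (id);
SELECT c.* FROM contacts c WHERE c.id IN (SELECT contact_id FROM search_hits);

COMMIT;                              -- the thread's timings suggest that with fsync=on
                                     -- most of the remaining cost lands here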
[ { "msg_contents": "Hi,\n\nWe've got a C function that we use here and we find that for every\nconnection, the first run of the function is much slower than any\nsubsequent runs. ( 50ms compared to 8ms)\n\nBesides using connection pooling, are there any options to improve\nperformance?\n\nBy the way, we are using pg version 8.1.3.\n\n-Adam\n\n", "msg_date": "Mon, 8 May 2006 13:38:58 -0700", "msg_from": "\"Adam Palmblad\" <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
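The thread ends without an answer, so the following is only a guess to rule out, not something confirmed above: the usual one-off per-connection cost for a C function is loading its shared library the first time the function is called in that backend. 8.1 can preload the library at postmaster start, or the cost can be paid once at connection setup (for example by a pooler); the library and function names below are placeholders.

-- In postgresql.conf (8.1), preloading the shared object at server start avoids
-- the dynamic-load cost on the first call in each new backend:
--     preload_libraries = '$libdir/my_func_lib'    -- placeholder library name
--
-- Alternatively, pay the cost explicitly when the (pooled) connection is set up:
LOAD '$libdir/my_func_lib';
SELECT my_c_function(42);   -- placeholder call; if library loading is the culprit,
                            -- later calls in this session run at the warm timing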
[ { "msg_contents": "Hi,\n\nI have a query that generates two different plans when there's only a \nchange in the category_id used in the query.\n\nThe first query has category_id = 1001573 and return 3117 rows from \nthe category_product table.\nThe second query has category_id = 1001397 and returns 27889 rows \nfrom the category_product table.\n\nThe first query does all access via indexes.\nThe second query does all access via indexes except for a sequential \nscan on the Price table.\n\nHere is the explain analyze for the first query:\n\nexplain analyze\nselect distinct pr.amount\nfrom merchant_product mp,\ncategory_product cp,\nprice pr\nwhere cp.category_id = 1001573 and\n\tcp.product_id = mp.product_id and\n\tcp.product_status_code = 'complete' and\n\tcp.product_is_active = 'true' and\n\tmp.is_active = 'true' and\n\tmp.merchant_product_id = pr.merchant_product_id\norder by amount asc;\n\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----------------------------------------------\nUnique (cost=24311.37..24325.11 rows=2748 width=11) (actual \ntime=277.953..280.844 rows=622 loops=1)\n -> Sort (cost=24311.37..24318.24 rows=2748 width=11) (actual \ntime=277.952..278.490 rows=4007 loops=1)\n Sort Key: pr.amount\n -> Nested Loop (cost=0.00..24154.40 rows=2748 width=11) \n(actual time=0.295..262.225 rows=4007 loops=1)\n -> Nested Loop (cost=0.00..14658.32 rows=2750 \nwidth=4) (actual time=0.229..84.908 rows=4007 loops=1)\n -> Index Scan using \nx_category_product__category_id_fk_idx on category_product cp \n(cost=0.00..3054.20 rows=2369 width=4) (actual time=0.136..20.746 \nrows=2832 loops=1)\n Index Cond: (category_id = 1001573)\n Filter: (((product_status_code)::text = \n'complete'::text) AND ((product_is_active)::text = 'true'::text))\n -> Index Scan using \nmerchant_product__product_id_fk_idx on merchant_product mp \n(cost=0.00..4.89 rows=1 width=8) (actual time=0.019..0.021 rows=1 \nloops=2832)\n Index Cond: (\"outer\".product_id = \nmp.product_id)\n Filter: ((is_active)::text = 'true'::text)\n -> Index Scan using \nprice__merchant_product_id_fk_idx on price pr (cost=0.00..3.44 \nrows=1 width=15) (actual time=0.042..0.043 rows=1 loops=4007)\n Index Cond: (\"outer\".merchant_product_id = \npr.merchant_product_id)\nTotal runtime: 281.709 ms\n\n\nHere is the explain analyze for the second (slow) query:\n\nexplain analyze\nselect distinct pr.amount\nfrom merchant_product mp,\ncategory_product cp,\nprice pr\nwhere cp.category_id = 1001397 and\n\tcp.product_id = mp.product_id and\n\tcp.product_status_code = 'complete' and\n\tcp.product_is_active = 'true' and\n\tmp.is_active = 'true' and\n\tmp.merchant_product_id = pr.merchant_product_id\norder by amount asc;\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------------------------------------------\nUnique (cost=106334.48..106452.38 rows=6050 width=11) (actual \ntime=7140.302..7162.345 rows=2567 loops=1)\n -> Sort (cost=106334.48..106393.43 rows=23580 width=11) (actual \ntime=7140.300..7143.873 rows=26949 loops=1)\n Sort Key: pr.amount\n -> Hash Join (cost=77475.88..104621.95 rows=23580 \nwidth=11) (actual time=4213.546..7015.639 rows=26949 loops=1)\n Hash Cond: (\"outer\".merchant_product_id = \n\"inner\".merchant_product_id)\n -> Seq Scan on price pr (cost=0.00..20782.51 
\nrows=1225551 width=15) (actual time=0.059..1482.238 rows=1225551 \nloops=1)\n -> Hash (cost=77416.91..77416.91 rows=23590 \nwidth=4) (actual time=4212.042..4212.042 rows=26949 loops=1)\n -> Merge Join (cost=22632.74..77416.91 \nrows=23590 width=4) (actual time=1851.012..4186.067 rows=26949 loops=1)\n Merge Cond: (\"outer\".product_id = \n\"inner\".product_id)\n -> Index Scan using \nmerchant_product__product_id_fk_idx on merchant_product mp \n(cost=0.00..51365.12 rows=1226085 width=8) (actual \ntime=0.073..3141.654 rows=1208509 loops=1)\n Filter: ((is_active)::text = \n'true'::text)\n -> Sort (cost=22632.74..22683.55 \nrows=20325 width=4) (actual time=507.110..511.076 rows=26949 loops=1)\n Sort Key: cp.product_id\n -> Index Scan using \nx_category_product__category_id_fk_idx on category_product cp \n(cost=0.00..21178.38 rows=20325 width=4) (actual time=0.145..440.113 \nrows=26949 loops=1)\n Index Cond: (category_id = \n1001397)\n Filter: \n(((product_status_code)::text = 'complete'::text) AND \n((product_is_active)::text = 'true'::text))\nTotal runtime: 7172.359 ms\n\n\nNotice the sequential scan of the Price table? It scanned 1,225,551 \nrows in the second query.\n\n\nDo you have any suggestions on how I can optimize the query so both \nversions of the query come back fast without doing a sequential scan \non the price table?\n\n\nThanks,\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\n\n", "msg_date": "Mon, 8 May 2006 19:29:32 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Assistance with optimizing query - same SQL,\n\tdifferent category_id = Seq Scan" }, { "msg_contents": "On Mon, May 08, 2006 at 07:29:32PM -0600, Brendan Duddridge wrote:\n> Do you have any suggestions on how I can optimize the query so both \n> versions of the query come back fast without doing a sequential scan \n> on the price table?\n\nWell, before you do anything you should verify that an index scan in the\nsecond case would actually be faster. Set enable_seqscan=off and check\nthat.\n\nAfter that, you can favor an index scan by (in order of effectiveness)\nincreasing the correlation on the appropriate index (by clustering on\nit), lowering random_page_cost, or increasing effective_cache_size.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:08:02 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assistance with optimizing query - same SQL,\n\tdifferent category_id = Seq Scan" } ]
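Jim's suggestions translate into a short experiment along these lines. The index name is taken from the plans above; the specific values for the two cost settings are only illustrative and should be sized to the actual machine.

-- 1. See whether forcing the index path actually beats the seq scan on price:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT DISTINCT pr.amount
FROM   category_product cp
JOIN   merchant_product mp ON mp.product_id = cp.product_id
JOIN   price pr            ON pr.merchant_product_id = mp.merchant_product_id
WHERE  cp.category_id = 1001397
  AND  cp.product_status_code = 'complete'
  AND  cp.product_is_active = 'true'
  AND  mp.is_active = 'true'
ORDER  BY pr.amount ASC;
RESET enable_seqscan;

-- 2. If it wins, favour it (in rough order of effectiveness):
CLUSTER price__merchant_product_id_fk_idx ON price;  -- raises the index's correlation,
                                                     -- but rewrites and locks the table
SET random_page_cost = 2;            -- down from the 8.1 default of 4
SET effective_cache_size = 100000;   -- in 8 kB pages, i.e. ~800 MB; match the real cache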
[ { "msg_contents": "\n\n\n\nActually now I already work to upgrade Postgresql version to 8.1 but not\nyet finish.\n\nYesterday I did re-create the affected tables indices, it does improve the\nperformance but still need 2-5 mins to execute the query.\nIs this 'normal' for a table with 40K rows of records?\n\nAnyway thanks for your help.\n\n\n", "msg_date": "Tue, 9 May 2006 09:39:13 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: extremely slow when execute select/delete for certain tables" } ]
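Two to five minutes for queries against a 40K-row table is far outside the normal range, and the fact that rebuilding the indexes helped points towards bloat from dead rows. A rough checklist, with a placeholder table name, might look like the sketch below; for slow DELETEs it is also worth confirming that any foreign keys referencing the table have indexes on their referencing columns.

-- Does the table occupy far more pages than 40K rows should need?
SELECT relname, relpages, reltuples
FROM   pg_class
WHERE  relname = 'mytable';          -- placeholder table name

VACUUM VERBOSE ANALYZE mytable;      -- reports dead-row counts per table and index

-- If relpages is wildly out of proportion, compact the table and its indexes:
VACUUM FULL mytable;
REINDEX TABLE mytable;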
[ { "msg_contents": "Hi,\n\nI've just had some discussion with colleagues regarding the usage of \nhardware or software raid 1/10 for our linux based database servers.\n\nI myself can't see much reason to spend $500 on high end controller \ncards for a simple Raid 1.\n\nAny arguments pro or contra would be desirable.\n\n From my experience and what I've read here:\n\n+ Hardware Raids might be a bit easier to manage, if you never spend a \nfew hours to learn Software Raid Tools.\n\n+ There are situations in which Software Raids are faster, as CPU power \nhas advanced dramatically in the last years and even high end controller \ncards cannot keep up with that.\n\n+ Using SATA drives is always a bit of risk, as some drives are lying \nabout whether they are caching or not.\n\n+ Using hardware controllers, the array becomes locked to a particular \nvendor. You can't switch controller vendors as the array meta \ninformation is stored proprietary. In case the Raid is broken to a level \nthe controller can't recover automatically this might complicate manual \nrecovery by specialists.\n\n+ Even battery backed controllers can't guarantee that data written to \nthe drives is consistent after a power outage, neither that the drive \ndoes not corrupt something during the involuntary shutdown / power \nirregularities. (This is theoretical as any server will be UPS backed)\n\n\n-- \nRegards,\nHannes Dorbath\n", "msg_date": "Tue, 09 May 2006 11:16:45 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Arguments Pro/Contra Software Raid" }, { "msg_contents": "Hi Hannes,\n\nHannes Dorbath a �crit :\n> Hi,\n> \n> I've just had some discussion with colleagues regarding the usage of\n> hardware or software raid 1/10 for our linux based database servers.\n> \n> I myself can't see much reason to spend $500 on high end controller\n> cards for a simple Raid 1.\n\nNaa, you can find ATA &| SATA ctrlrs for about EUR30 !\n\n> Any arguments pro or contra would be desirable.\n> \n> From my experience and what I've read here:\n> \n> + Hardware Raids might be a bit easier to manage, if you never spend a\n> few hours to learn Software Raid Tools.\n\nI'd the same (mostly as you still have to punch a command line for\nmost of the controlers)\n\n> + There are situations in which Software Raids are faster, as CPU power\n> has advanced dramatically in the last years and even high end controller\n> cards cannot keep up with that.\n\nDefinitely NOT, however if your server doen't have a heavy load, the\nsoftware overload can't be noticed (essentially cache managing and\nsyncing)\n\nFor bi-core CPUs, it might be true\n\n\n> + Using SATA drives is always a bit of risk, as some drives are lying\n> about whether they are caching or not.\n\n?? Do you intend to use your server without a UPS ??\n\n> + Using hardware controllers, the array becomes locked to a particular\n> vendor. You can't switch controller vendors as the array meta\n> information is stored proprietary. In case the Raid is broken to a level\n> the controller can't recover automatically this might complicate manual\n> recovery by specialists.\n\n?? Do you intend not to make backups ??\n\n> + Even battery backed controllers can't guarantee that data written to\n> the drives is consistent after a power outage, neither that the drive\n> does not corrupt something during the involuntary shutdown / power\n> irregularities. 
(This is theoretical as any server will be UPS backed)\n\nRAID's \"laws\":\n\n1- RAID prevents you from loosing data on healthy disks, not from faulty\n disks,\n\n1b- So format and reformat your RAID disks (whatever SCSI, ATA, SATA)\n several times, with destructive tests (see \"-c -c\" option from\n the mke2fs man) - It will ensure that disks are safe, and also\n make a kind of burn test (might turn to... days of formating!),\n\n2- RAID doesn't prevent you from power suply brokeage or electricity\n breakdown, so use a (LARGE) UPS,\n\n2b- LARGE UPS because HDs are the components that have the higher power\n consomption (a 700VA UPS gives me about 10-12 minutes on a machine\n with a XP2200+, 1GB RAM and a 40GB HD, however this fall to......\n less than 25 secondes with seven HDs ! all ATA),\n\n2c- Use server box with redudancy power supplies,\n\n3- As for any sensitive data, make regular backups or you'll be as\n sitting duck.\n\nSome hardware ctrlrs are able to avoid the loss of a disk if you turn\nto have some faulty sectors (by relocating internally them); software\nRAID doesn't as sectors *must* be @ the same (linear) addresses.\n\nBUT a hardware controler is about EUR2000 and a (ATA/SATA) 500GB HD\nis ~ EUR350.\n\nThat means you have to consider:\n\n* The server disponibility (time to change a power supply if no\n redudancies, time to exchange a not hotswap HD... In fact, how much\n down time you can \"afford\"),\n\n* The volume of the data (from which depends the size of the backup\n device),\n\n* The backup device you'll use (tape or other HDs),\n\n* The load of the server (and the number of simultaneous users =>\n Soft|Hard, ATA/SATA|SCSI...),\n\n* The money you can spend in such a server\n\n* And most important, the color of your boss' tie the day you'll\n take the decision.\n\nHope it will help you\n\nJean-Yves\n\n", "msg_date": "Tue, 09 May 2006 12:10:32 +0200", "msg_from": "\"Jean-Yves F. Barbier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "On 09.05.2006 12:10, Jean-Yves F. Barbier wrote:\n> Naa, you can find ATA &| SATA ctrlrs for about EUR30 !\n\nSure, just for my colleagues Raid Controller = IPC Vortex, which resides \nin that price range.\n\n> For bi-core CPUs, it might be true\n\nI've got that from pgsql.performance for multi-way opteron setups.\n\n> ?? Do you intend to use your server without a UPS ??\n\nSure there will be an UPS. I'm just trying to nail down the differences \nbetween soft- and hardware raid, regardless if they matter in the end :)\n\n> ?? Do you intend not to make backups ??\n\nSure we do backups, this all is more hypothetical thinking..\n\n> Hope it will help you\n\nIt has, thanks.\n\n\n-- \nRegards,\nHannes Dorbath\n", "msg_date": "Tue, 09 May 2006 12:24:30 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\nHannes Dorbath wrote:\n> Hi,\n> \n> I've just had some discussion with colleagues regarding the usage of\n> hardware or software raid 1/10 for our linux based database servers.\n> \n> I myself can't see much reason to spend $500 on high end controller\n> cards for a simple Raid 1.\n> \n> Any arguments pro or contra would be desirable.\n> \n\nOne pro and one con off the top of my head.\n\nHotplug. 
Depending on your platform, SATA may or may not be hotpluggable\n(I know AHCI mode is the only one promising some kind of a hotplug,\nwhich means ICH6+ and Silicon Image controllers last I heard). SCSI\nisn't hotpluggable without the use of special hotplug backplanes and\ndisks. You lose that in software RAID, which effectively means you need\nto shut the box down and do maintenance. Hassle.\n\nCPU. It's cheap. Much cheaper than your average hardware RAID card. For\nthe 5-10% overhead usually imposed by software RAID, you can throw in a\nfaster CPU and never even notice it. Most cases aren't CPU-bound\nanyways, or at least, most cases are I/O bound for the better part. This\ndoes raise the question of I/O bandwidth your standard SATA or SCSI\ncontroller comes with, though. If you're careful about that and handle\nhotplug sufficiently, you're probably never going to notice you're not\nrunning on metal.\n\nKind regards,\n- --\n Grega Bremec\n gregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.0 (GNU/Linux)\n\niD8DBQFEYHRAfu4IwuB3+XoRA9jqAJ9sS3RBJZEurvwUXGKrFMRZfYy9pQCggGHh\ntLAy/YtHwKvhd3ekVDGFtWE=\n=vlyC\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 09 May 2006 13:37:03 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\nOn May 9, 2006, at 2:16 AM, Hannes Dorbath wrote:\n\n> Hi,\n>\n> I've just had some discussion with colleagues regarding the usage \n> of hardware or software raid 1/10 for our linux based database \n> servers.\n>\n> I myself can't see much reason to spend $500 on high end controller \n> cards for a simple Raid 1.\n>\n> Any arguments pro or contra would be desirable.\n>\n> From my experience and what I've read here:\n>\n> + Hardware Raids might be a bit easier to manage, if you never \n> spend a few hours to learn Software Raid Tools.\n>\n> + There are situations in which Software Raids are faster, as CPU \n> power has advanced dramatically in the last years and even high end \n> controller cards cannot keep up with that.\n>\n> + Using SATA drives is always a bit of risk, as some drives are \n> lying about whether they are caching or not.\n\nDon't buy those drives. That's unrelated to whether you use hardware\nor software RAID.\n\n>\n> + Using hardware controllers, the array becomes locked to a \n> particular vendor. You can't switch controller vendors as the array \n> meta information is stored proprietary. In case the Raid is broken \n> to a level the controller can't recover automatically this might \n> complicate manual recovery by specialists.\n\nYes. Fortunately we're using the RAID for database work, rather than \nfile\nstorage, so we can use all the nice postgresql features for backing up\nand replicating the data elsewhere, which avoids most of this issue.\n\n>\n> + Even battery backed controllers can't guarantee that data written \n> to the drives is consistent after a power outage, neither that the \n> drive does not corrupt something during the involuntary shutdown / \n> power irregularities. 
(This is theoretical as any server will be \n> UPS backed)\n\nfsync of WAL log.\n\nIf you have a battery backed writeback cache then you can get the \nreliability\nof fsyncing the WAL for every transaction, and the performance of not \nneeding\nto hit the disk for every transaction.\n\nAlso, if you're not doing that you'll need to dedicate a pair of \nspindles to the\nWAL log if you want to get good performance, so that there'll be no \nseeking\non the WAL. With a writeback cache you can put the WAL on the same \nspindles\nas the database and not lose much, if anything, in the way of \nperformance.\nIf that saves you the cost of two additional spindles, and the space \non your\ndrive shelf for them, you've just paid for a reasonably proced RAID \ncontroller.\n\nGiven those advantages... I can't imagine speccing a large system \nthat didn't\nhave a battery-backed write-back cache in it. My dev systems mostly use\nsoftware RAID, if they use RAID at all. But my production boxes all \nuse SATA\nRAID (and I tell my customers to use controllers with BB cache, \nwhether it\nbe SCSI or SATA).\n\nMy usual workloads are write-heavy. If yours are read-heavy that will\nmove the sweet spot around significantly, and I can easily imagine that\nfor a read-heavy load software RAID might be a much better match.\n\nCheers,\n Steve\n\n", "msg_date": "Tue, 9 May 2006 07:41:16 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, May 09, 2006 at 12:10:32 +0200,\n \"Jean-Yves F. Barbier\" <[email protected]> wrote:\n> Naa, you can find ATA &| SATA ctrlrs for about EUR30 !\n\nBut those are the ones that you would generally be better off not using.\n\n> Definitely NOT, however if your server doen't have a heavy load, the\n> software overload can't be noticed (essentially cache managing and\n> syncing)\n\nIt is fairly common for database machines to be IO, rather than CPU, bound\nand so the CPU impact of software raid is low.\n\n> Some hardware ctrlrs are able to avoid the loss of a disk if you turn\n> to have some faulty sectors (by relocating internally them); software\n> RAID doesn't as sectors *must* be @ the same (linear) addresses.\n\nThat is not true. Software raid works just fine on drives that have internally\nremapped sectors.\n", "msg_date": "Tue, 9 May 2006 10:18:48 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, 2006-05-09 at 04:16, Hannes Dorbath wrote:\n> Hi,\n> \n> I've just had some discussion with colleagues regarding the usage of \n> hardware or software raid 1/10 for our linux based database servers.\n> \n> I myself can't see much reason to spend $500 on high end controller \n> cards for a simple Raid 1.\n> \n> Any arguments pro or contra would be desirable.\n> \n> From my experience and what I've read here:\n> \n> + Hardware Raids might be a bit easier to manage, if you never spend a \n> few hours to learn Software Raid Tools.\n\nDepends. Some hardware RAID cards aren't that easy to manage, and\nsometimes, they won't let you do some things that software will. I've\nrun into situations where a RAID controller kicked out two perfectly\ngood drives from a RAID 5 and would NOT accept them back. All data\nlost, and it would not be convinced to restart without formatting the\ndrives first. arg! 
With Linux kernel sw RAID, I've had a similar\nproblem pop up, and was able to make the RAID array take the drives\nback. Of course, this means that software RAID relies on you not being\nstupid, because it will let you do things that are dangerous / stupid.\n\nI found the raidtools on linux to be well thought out and fairly easy to\nuse. \n\n> + There are situations in which Software Raids are faster, as CPU power \n> has advanced dramatically in the last years and even high end controller \n> cards cannot keep up with that.\n\nThe only times I've found software RAID to be faster was against the\nhybrid hardware / software type RAID cards (i.e. the cheapies) or OLDER\nRAID cards, that have a 33 MHz coprocessor or such. Most modern RAID\ncontrollers have coprocessors running at several hundred MHz or more,\nand can compute parity and manage the array as fast as the attached I/O\ncan handle it.\n\nThe one thing a software RAID will never be able to match the hardware\nRAID controller on is battery backed cache.\n\n> + Using SATA drives is always a bit of risk, as some drives are lying \n> about whether they are caching or not.\n\nThis is true whether you are using hardware RAID or not. Turning off\ndrive caching seems to prevent the problem. However, with a RAID\ncontroller, the caching can then be moved to the BBU cache, while with\nsoftware RAID no such option exists. Most SATA RAID controllers turn\noff the drive cache automagically, like the escalades seem to do.\n\n> + Using hardware controllers, the array becomes locked to a particular \n> vendor. You can't switch controller vendors as the array meta \n> information is stored proprietary. In case the Raid is broken to a level \n> the controller can't recover automatically this might complicate manual \n> recovery by specialists.\n\nAnd not just a particular vendor, but likely a particular model and even\nfirmware revision. For this reason, and 24/7 server should have two\nRAID controllers of the same brand running identical arrays, then have\nthem set up as a mirror across the controllers, assuming you have\ncontrollers that can run cooperatively. This setup ensures that even if\none of your RAID controllers fails, you then have a fully operational\nRAID array for as long as it takes to order and replace the bad\ncontroller. And having a third as a spare in a cabinet somewhere is\ncheap insurance as well.\n\n> + Even battery backed controllers can't guarantee that data written to \n> the drives is consistent after a power outage, neither that the drive \n> does not corrupt something during the involuntary shutdown / power \n> irregularities. (This is theoretical as any server will be UPS backed)\n\nThis may be theoretically true, but all the battery backed cache units\nI've used have brought the array up clean every time the power has been\nlost to them. And a UPS is no insurance against loss of power. \nCascading power failures are not uncommon when things go wrong.\n\nNow, here's my take on SW versus HW in general:\n\nHW is the way to go for situations where a battery backed cache is\nneeded. Heavily written / updated databases are in this category.\n\nSoftware RAID is a perfect match for databases with a low write to read\nratio, or where you won't be writing enough for the write performance to\nbe a big issue. Many data warehouses fall into this category. In this\ncase, a JBOD enclosure with a couple of dozen drives and software RAID\ngives you plenty of storage for chicken feed. 
If the data is all\nderived from outside sources, then you can turn on the write cache in\nthe drives and turn off fsync and it will be plenty fast, just not crash\nsafe.\n", "msg_date": "Tue, 09 May 2006 10:49:29 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "> \n> Don't buy those drives. That's unrelated to whether you use hardware\n> or software RAID.\n\nSorry that is an extremely misleading statement. SATA RAID is perfectly \nacceptable if you have a hardware raid controller with a battery backup \ncontroller.\n\nAnd dollar for dollar, SCSI will NOT be faster nor have the hard drive \ncapacity that you will get with SATA.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 09 May 2006 08:51:39 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "\nOn May 9, 2006, at 8:51 AM, Joshua D. Drake wrote:\n\n(\"Using SATA drives is always a bit of risk, as some drives are lying \nabout whether they are caching or not.\")\n\n>> Don't buy those drives. That's unrelated to whether you use hardware\n>> or software RAID.\n>\n> Sorry that is an extremely misleading statement. SATA RAID is \n> perfectly acceptable if you have a hardware raid controller with a \n> battery backup controller.\n\nIf the drive says it's hit the disk and it hasn't then the RAID \ncontroller\nwill have flushed the data from its cache (or flagged it as correctly\nwritten). At that point the only place the data is stored is in the non\nbattery backed cache on the drive itself. If something fails then you'll\nhave lost data.\n\nYou're not suggesting that a hardware RAID controller will protect\nyou against drives that lie about sync, are you?\n\n>\n> And dollar for dollar, SCSI will NOT be faster nor have the hard \n> drive capacity that you will get with SATA.\n\nYup. That's why I use SATA RAID for all my databases.\n\nCheers,\n Steve\n", "msg_date": "Tue, 9 May 2006 10:52:45 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n\n> Sorry that is an extremely misleading statement. SATA RAID is \n> perfectly acceptable if you have a hardware raid controller with a \n> battery backup controller.\n>\n> And dollar for dollar, SCSI will NOT be faster nor have the hard \n> drive capacity that you will get with SATA.\n\nDoes this hold true still under heavy concurrent-write loads? I'm \npreparing yet another big DB server and if SATA is a better option, \nI'm all (elephant) ears.", "msg_date": "Tue, 9 May 2006 13:57:16 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "Vivek Khera <[email protected]> writes:\n\n> On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n>\n>> And dollar for dollar, SCSI will NOT be faster nor have the hard\n>> drive capacity that you will get with SATA.\n>\n> Does this hold true still under heavy concurrent-write loads? 
I'm\n> preparing yet another big DB server and if SATA is a better option,\n> I'm all (elephant) ears.\n\nCorrect me if I'm wrong, but I've never heard of a 15kRPM SATA drive.\n\n-Doug\n", "msg_date": "Tue, 09 May 2006 14:05:28 -0400", "msg_from": "Douglas McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "Vivek Khera wrote:\n> \n> On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n> \n>> Sorry that is an extremely misleading statement. SATA RAID is \n>> perfectly acceptable if you have a hardware raid controller with a \n>> battery backup controller.\n>>\n>> And dollar for dollar, SCSI will NOT be faster nor have the hard drive \n>> capacity that you will get with SATA.\n> \n> Does this hold true still under heavy concurrent-write loads? I'm \n> preparing yet another big DB server and if SATA is a better option, I'm \n> all (elephant) ears.\n\nI didn't say better :). If you can afford, SCSI is the way to go. \nHowever SATA with a good controller (I am fond of the LSI 150 series) \ncan provide some great performance.\n\nI have not used, but have heard good things about Areca as well. Oh, and \nmake sure they are SATA-II drives.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 09 May 2006 11:25:27 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\n> You're not suggesting that a hardware RAID controller will protect\n> you against drives that lie about sync, are you?\n\nOf course not, but which drives lie about sync that are SATA? Or more \nspecifically SATA-II?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 09 May 2006 11:26:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\nOn May 9, 2006, at 11:26 AM, Joshua D. Drake wrote:\n\n>\n>> You're not suggesting that a hardware RAID controller will protect\n>> you against drives that lie about sync, are you?\n>\n> Of course not, but which drives lie about sync that are SATA? Or \n> more specifically SATA-II?\n\nSATA-II, none that I'm aware of, but there's a long history of dodgy\nbehaviour designed to pump up benchmark results down in the\nconsumer drive space, and low end consumer space is where a\nlot of SATA drives are. I wouldn't be surprised to see that beahviour\nthere still.\n\nI was responding to the original posters assertion that drives lying\nabout sync were a reason not to buy SATA drives, by telling him\nnot to buy drives that lie about sync. 
You seem to have read this\nas \"don't buy SATA drives\", which is not what I said and not what I\nmeant.\n\nCheers,\n Steve\n", "msg_date": "Tue, 9 May 2006 11:34:31 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Douglas McNaught wrote:\n> Vivek Khera <[email protected]> writes:\n> \n>> On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n>>\n>>> And dollar for dollar, SCSI will NOT be faster nor have the hard\n>>> drive capacity that you will get with SATA.\n>> Does this hold true still under heavy concurrent-write loads? I'm\n>> preparing yet another big DB server and if SATA is a better option,\n>> I'm all (elephant) ears.\n> \n> Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive.\n\nBest I have seen is 10k but if I can put 4x the number of drives in the \narray at the same cost... I don't need 15k.\n\nJoshua D. Drake\n\n> \n> -Doug\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 09 May 2006 11:43:16 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, 2006-05-09 at 12:52, Steve Atkins wrote:\n> On May 9, 2006, at 8:51 AM, Joshua D. Drake wrote:\n> \n> (\"Using SATA drives is always a bit of risk, as some drives are lying \n> about whether they are caching or not.\")\n> \n> >> Don't buy those drives. That's unrelated to whether you use hardware\n> >> or software RAID.\n> >\n> > Sorry that is an extremely misleading statement. SATA RAID is \n> > perfectly acceptable if you have a hardware raid controller with a \n> > battery backup controller.\n> \n> If the drive says it's hit the disk and it hasn't then the RAID \n> controller\n> will have flushed the data from its cache (or flagged it as correctly\n> written). At that point the only place the data is stored is in the non\n> battery backed cache on the drive itself. If something fails then you'll\n> have lost data.\n> \n> You're not suggesting that a hardware RAID controller will protect\n> you against drives that lie about sync, are you?\n\nActually, in the case of the Escalades at least, the answer is yes. \nLast year (maybe a bit more) someone was testing an IDE escalade\ncontroller with drives that were known to lie, and it passed the power\nplug pull test repeatedly. Apparently, the escalades tell the drives to\nturn off their cache. While most all IDEs and a fair number of SATA\ndrives lie about cache fsyncing, they all seem to turn off the cache\nwhen you ask.\n\nAnd, since a hardware RAID controller with bbu cache has its own cache,\nit's not like it really needs the one on the drives anyway.\n", "msg_date": "Tue, 09 May 2006 14:04:08 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Joshua D. Drake wrote:\n> Vivek Khera wrote:\n> > \n> > On May 9, 2006, at 11:51 AM, Joshua D. 
Drake wrote:\n> > \n> >> Sorry that is an extremely misleading statement. SATA RAID is \n> >> perfectly acceptable if you have a hardware raid controller with a \n> >> battery backup controller.\n> >>\n> >> And dollar for dollar, SCSI will NOT be faster nor have the hard drive \n> >> capacity that you will get with SATA.\n> > \n> > Does this hold true still under heavy concurrent-write loads? I'm \n> > preparing yet another big DB server and if SATA is a better option, I'm \n> > all (elephant) ears.\n> \n> I didn't say better :). If you can afford, SCSI is the way to go. \n> However SATA with a good controller (I am fond of the LSI 150 series) \n> can provide some great performance.\n\nBasically, you can get away with cheaper hardware, but it usually\ndoesn't have the reliability/performance of more expensive options.\n\nYou want an in-depth comparison of how a server disk drive is internally\nbetter than a desktop drive:\n\n\thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 9 May 2006 20:59:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Scott Marlowe wrote:\n> Actually, in the case of the Escalades at least, the answer is yes. \n> Last year (maybe a bit more) someone was testing an IDE escalade\n> controller with drives that were known to lie, and it passed the power\n> plug pull test repeatedly. Apparently, the escalades tell the drives to\n> turn off their cache. While most all IDEs and a fair number of SATA\n> drives lie about cache fsyncing, they all seem to turn off the cache\n> when you ask.\n> \n> And, since a hardware RAID controller with bbu cache has its own cache,\n> it's not like it really needs the one on the drives anyway.\n\nYou do if the controller thinks the data is already on the drives and\nremoves it from its cache.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 9 May 2006 21:02:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "William Yu wrote:\n> We upgraded our disk system for our main data processing server earlier \n> this year. After pricing out all the components, basically we had the \n> choice of:\n> \n> LSI MegaRaid 320-2 w/ 1GB RAM+BBU + 8 15K 150GB SCSI\n> \n> or\n> \n> Areca 1124 w/ 1GB RAM+BBU + 24 7200RPM 250GB SATA\n\nMy mistake -- I keep doing calculations and they don't add up. So I \nlooked again on pricewatch and it turns out the actual comparison was \nfor 4 SCSI drives, not 8! ($600 for a 15K 145GB versus $90 for a 7200 \n250GB.) No wonder our decision seemed to much more decisive back then.\n", "msg_date": "Tue, 09 May 2006 19:39:53 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "On May 9, 2006, at 11:26 AM, Joshua D. Drake wrote:\n> Of course not, but which drives lie about sync that are SATA? 
Or \n> more specifically SATA-II?\n\nI don't know the answer to this question, but have you seen this tool?\n\n http://brad.livejournal.com/2116715.html\n\nIt attempts to experimentally determine if, with your operating \nsystem version, controller, and hard disk, fsync() does as claimed. \nOf course, experimentation can't prove the system is correct, but it \ncan sometimes prove the system is broken.\n\nI say it's worth running on any new model of disk, any new \ncontroller, or after the Linux kernel people rewrite everything (i.e. \non every point release).\n\nI have to admit to hypocrisy, though...I'm running with systems that \nother people ordered and installed, I doubt they were this thorough, \nand I don't have identical hardware to run tests on. So no real way \nto do this.\n\nRegards,\nScott\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n", "msg_date": "Tue, 9 May 2006 20:37:14 -0700", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Douglas McNaught <[email protected]> writes:\n\n> Vivek Khera <[email protected]> writes:\n> \n> > On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n> >\n> >> And dollar for dollar, SCSI will NOT be faster nor have the hard\n> >> drive capacity that you will get with SATA.\n> >\n> > Does this hold true still under heavy concurrent-write loads? I'm\n> > preparing yet another big DB server and if SATA is a better option,\n> > I'm all (elephant) ears.\n> \n> Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive.\n\nWell, dollar for dollar you would get the best performance from slower drives\nanyways since it would give you more spindles. 15kRPM drives are *expensive*.\n\n-- \ngreg\n\n", "msg_date": "10 May 2006 00:41:20 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "\nSteve Atkins <[email protected]> writes:\n\n> On May 9, 2006, at 2:16 AM, Hannes Dorbath wrote:\n> \n> > Hi,\n> >\n> > I've just had some discussion with colleagues regarding the usage of\n> > hardware or software raid 1/10 for our linux based database servers.\n> >\n> > I myself can't see much reason to spend $500 on high end controller cards\n> > for a simple Raid 1.\n> >\n> > Any arguments pro or contra would be desirable.\n\nReally most of what's said about software raid vs hardware raid online is just\nFUD. Unless you're running BIG servers with so many drives that the raid\ncontrollers are the only feasible way to connect them up anyways, the actual\nperformance difference will likely be negligible.\n\nThe only two things that actually make me pause about software RAID in heavy\nproduction use are:\n\n1) Battery backed cache. That's a huge win for the WAL drives on Postgres.\n 'nuff said.\n\n2) Not all commodity controllers or IDE drivers can handle failing drives\n gracefully. While the software raid might guarantee that you don't actually\n lose data, you still might have the machine wedge because of IDE errors on\n the bad drive. So as far as runtime, instead of added reliability all\n you've really added is another point of failure. 
On the data integrity\n front you'll still be better off.\n\n\n-- \nGreg\n\n", "msg_date": "10 May 2006 00:50:28 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "\n> 2b- LARGE UPS because HDs are the components that have the higher power\n> consomption (a 700VA UPS gives me about 10-12 minutes on a machine\n> with a XP2200+, 1GB RAM and a 40GB HD, however this fall to......\n> less than 25 secondes with seven HDs ! all ATA),\n\n\tI got my hands on a (free) 1400 VA APC rackmount UPS ; the batteries were \ndead so I stuck two car batteries in. It can power my computer (Athlon 64, \n7 drives) for more than 2 hours...\n\tIt looks ugly though. I wouldn't put this in a server rack, but for my \nhome PC it's perfect. It has saved my work many times...\n\n\tHarddisks suck in about 15 watts each, but draw large current spikes on \nseeking, so the VA rating of the UPS is important. I guess in your case, \nthe batteries have enough charge left; but the current capability of the \nUPS is exceeded.\n\n> Some hardware ctrlrs are able to avoid the loss of a disk if you turn\n> to have some faulty sectors (by relocating internally them); software\n> RAID doesn't as sectors *must* be @ the same (linear) addresses.\n\n\tHarddisks do transparent remapping now... linux soft raid can rewrite bad \nsectors with good data and the disk will remap the faulty sector to a good \none.\n\n\n", "msg_date": "Wed, 10 May 2006 08:50:22 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n\n> Douglas McNaught <[email protected]> writes:\n\n>> Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive.\n>\n> Well, dollar for dollar you would get the best performance from slower drives\n> anyways since it would give you more spindles. 15kRPM drives are *expensive*.\n\nDepends on your power, heat and rack space budget too... If you need\nmax performance out of a given rack space (rather than max density),\nSCSI is still the way to go. I'll definitely agree that SATA is\nbecoming much more of a player in the server storage market, though.\n\n-Doug\n", "msg_date": "Wed, 10 May 2006 08:15:22 -0400", "msg_from": "Douglas McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "* Hannes Dorbath:\n\n> + Hardware Raids might be a bit easier to manage, if you never spend a\n> few hours to learn Software Raid Tools.\n\nI disagree. RAID management is complicated, and once there is a disk\nfailure, all kinds of oddities can occur which can make it quite a\nchallenge to get back a non-degraded array.\n\nWith some RAID controllers, monitoring is diffcult because they do not\nuse the system's logging mechanism for reporting. In some cases, it\nis not possible to monitor the health status of individual disks.\n\n> + Using SATA drives is always a bit of risk, as some drives are lying\n> about whether they are caching or not.\n\nYou can usually switch off caching.\n\n> + Using hardware controllers, the array becomes locked to a particular\n> vendor. You can't switch controller vendors as the array meta\n> information is stored proprietary. 
In case the Raid is broken to a\n> level the controller can't recover automatically this might complicate\n> manual recovery by specialists.\n\nIt's even more difficult these days. 3ware controllers enable drive\npasswords, so you can't access the drive from other controllers at all\n(even if you could interpret the on-disk data).\n\n> + Even battery backed controllers can't guarantee that data written to\n> the drives is consistent after a power outage, neither that the drive\n> does not corrupt something during the involuntary shutdown / power\n> irregularities. (This is theoretical as any server will be UPS backed)\n\nUPS failures are not unheard of. 8-/ Apart from that, you can address\na large class of shutdown failures if you replay a log stored in the\nBBU on the next reboot (partial sector writes come to my mind).\n\nIt is very difficult to check if the controller does this correctly,\nthough.\n\nA few other things to note: You can't achieve significant port density\nwith non-RAID controllers, at least with SATA. You need to buy a RAID\ncontroller anyway. You can't quite achieve what a BBU does (even if\nyou've got a small, fast persistent storage device) because there's\nno host software support for such a configuration.\n", "msg_date": "Wed, 10 May 2006 14:44:06 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Hi, Scott & all,\n\nScott Lamb wrote:\n\n> I don't know the answer to this question, but have you seen this tool?\n> \n> http://brad.livejournal.com/2116715.html\n\nWe had a simpler tool inhouse, which wrote a file byte-for-byte, and\ncalled fsync() after every byte.\n\nIf the number of fsyncs/min is higher than your rotations per minute\nvalue of your disks, they must be lying.\n\nIt does not find as much liers as the script above, but it is less\nintrusive (can be ran on every low-io machine without crashing it), and\nit found some liers in-house (some notebook disks, one external\nUSB/FireWire to IDE case, and an older linux cryptoloop implementations,\nIIRC).\n\nIf you're interested, I can dig for the C source...\n\nHTH,\nMarkus\n\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 15:54:43 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Markus Schaber wrote:\n> Hi, Scott & all,\n> \n> Scott Lamb wrote:\n> \n> > I don't know the answer to this question, but have you seen this tool?\n> > \n> > http://brad.livejournal.com/2116715.html\n> \n> We had a simpler tool inhouse, which wrote a file byte-for-byte, and\n> called fsync() after every byte.\n> \n> If the number of fsyncs/min is higher than your rotations per minute\n> value of your disks, they must be lying.\n> \n> It does not find as much liers as the script above, but it is less\n\nWhy does it find fewer liers?\n\n---------------------------------------------------------------------------\n\n> intrusive (can be ran on every low-io machine without crashing it), and\n> it found some liers in-house (some notebook disks, one external\n> USB/FireWire to IDE case, and an older linux cryptoloop implementations,\n> IIRC).\n> \n> If you're interested, I can dig for the C source...\n> \n> HTH,\n> Markus\n> \n> \n> \n> \n> -- \n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n> \n> Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 10 May 2006 10:01:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On May 10, 2006, at 12:41 AM, Greg Stark wrote:\n\n> Well, dollar for dollar you would get the best performance from \n> slower drives\n> anyways since it would give you more spindles. 15kRPM drives are \n> *expensive*.\n\nPersonally, I don't care that much for \"dollar for dollar\" I just \nneed performance. If it is within a factor of 2 or 3 in price then \nI'll go for absolute performance over \"bang for the buck\".", "msg_date": "Wed, 10 May 2006 10:16:14 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Vivek Khera wrote:\n> \n> On May 10, 2006, at 12:41 AM, Greg Stark wrote:\n> \n> > Well, dollar for dollar you would get the best performance from \n> > slower drives\n> > anyways since it would give you more spindles. 15kRPM drives are \n> > *expensive*.\n> \n> Personally, I don't care that much for \"dollar for dollar\" I just \n> need performance. If it is within a factor of 2 or 3 in price then \n> I'll go for absolute performance over \"bang for the buck\".\n\nThat is really the issue. You can buy lots of consumer-grade stuff and\nwork just fine if your performance/reliability tolerance is high enough.\n\nHowever, don't fool yourself that consumer and server-grade hardware is\ninternally the same, or has the same testing.\n\nI just had a Toshiba laptop drive replaced last week (new, not\nrefurbished), only to have it fail this week. Obviously there isn't\nsufficient burn-in done by Toshiba, and I don't fault them because it is\na consumer laptop --- it fails, they replace it. 
For servers, the\ndowntime usually can't be tolerated, while consumers usually can\ntolerate significant downtime.\n\nI have always purchased server-grade hardware for my home server, and I\nthink I have had one day of hardware downtime in the past ten years. \nConsumer hardware just couldn't do that.\n\nAs one data point, most consumer-grade IDE drives are designed to be run\nonly 8 hours a day. The engineering doesn't anticipate 24-hour\noperation, and that trade-off passes all the way through the selection\nof componients for the drive, which generates sigificant cost savings.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 10 May 2006 10:35:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Hi, Bruce,\n\nBruce Momjian wrote:\n\n\n>>It does not find as much liers as the script above, but it is less\n> \n> Why does it find fewer liers?\n\nIt won't find liers that have a small \"lie-queue-length\" so their\ninternal buffers get full so they have to block. After a small burst at\nstart which usually hides in other latencies, they don't get more\nthroughput than spindle turns.\n\nIt won't find liers that first acknowledge to the host, and then\nimmediately write the block before accepting other commands. This\nimproves latency (which is measured in some benchmarks), but not\nsyncs/write rate.\n\nBoth of them can be captured by the other script, but not by my tool.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 16:38:17 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:\n> Scott Marlowe wrote:\n> > Actually, in the case of the Escalades at least, the answer is yes. \n> > Last year (maybe a bit more) someone was testing an IDE escalade\n> > controller with drives that were known to lie, and it passed the power\n> > plug pull test repeatedly. Apparently, the escalades tell the drives to\n> > turn off their cache. While most all IDEs and a fair number of SATA\n> > drives lie about cache fsyncing, they all seem to turn off the cache\n> > when you ask.\n> > \n> > And, since a hardware RAID controller with bbu cache has its own cache,\n> > it's not like it really needs the one on the drives anyway.\n> \n> You do if the controller thinks the data is already on the drives and\n> removes it from its cache.\n\nBruce, re-read what I wrote. The escalades tell the drives to TURN OFF\nTHEIR OWN CACHE.\n", "msg_date": "Wed, 10 May 2006 09:42:59 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n\n> On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:\n\n>> You do if the controller thinks the data is already on the drives and\n>> removes it from its cache.\n>\n> Bruce, re-read what I wrote. The escalades tell the drives to TURN OFF\n> THEIR OWN CACHE.\n\nSome ATA drives would lie about that too IIRC. 
Hopefully they've\nstopped doing it in the SATA era.\n\n-Doug\n", "msg_date": "Wed, 10 May 2006 10:51:22 -0400", "msg_from": "Douglas McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Hi, Bruce,\n\nMarkus Schaber wrote:\n\n>>>It does not find as much liers as the script above, but it is less\n>>Why does it find fewer liers?\n> \n> It won't find liers that have a small \"lie-queue-length\" so their\n> internal buffers get full so they have to block. After a small burst at\n> start which usually hides in other latencies, they don't get more\n> throughput than spindle turns.\n\nI just reread my mail, and must admit that I would not understand what I\nwrote above, so I'll explain a little more:\n\nMy test programs writes byte-for-byte. Let's say our FS/OS has 4k page-\nand blocksize, that means 4096 writes that all write the same disk blocks.\n\nIntelligent liers will see that the the 2nd and all further writes\nobsolete the former writes who still reside in the internal cache, and\ndrop those former writes from cache, effectively going up to 4k\nwrites/spindle turn.\n\nDumb liers will keep the obsolete writes in the write cache / queue, and\nso won't be caught by my program. (Note that I have no proof that such\ndisks actually exist, but I have enough experience with hardware that I\nwon't be surprised.)\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 10 May 2006 16:57:45 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Wed, 2006-05-10 at 09:51, Douglas McNaught wrote:\n> Scott Marlowe <[email protected]> writes:\n> \n> > On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:\n> \n> >> You do if the controller thinks the data is already on the drives and\n> >> removes it from its cache.\n> >\n> > Bruce, re-read what I wrote. The escalades tell the drives to TURN OFF\n> > THEIR OWN CACHE.\n> \n> Some ATA drives would lie about that too IIRC. Hopefully they've\n> stopped doing it in the SATA era.\n\nUgh. Now that would make for a particularly awful bit of firmware\nimplementation. I'd think that if I found a SATA drive doing that I'd\nbe likely to strike the manufacturer off of the list for possible future\npurchases...\n", "msg_date": "Wed, 10 May 2006 10:10:00 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On May 9, 2006, at 11:26 AM, Joshua D. Drake wrote:\n> Of course not, but which drives lie about sync that are SATA? Or more \n> specifically SATA-II?\n\nWith older Linux drivers (before spring 2005, I think) - all of\nthem - since it seems the linux kernel didn't support the\nwrite barriers needed to force the sync. 
It's not clear to\nme how much of the SATA data loss is due to this driver issue\nand how much is due to buggy drives themselves.\n\nAccording to Jeff Garzik (the guy who wrote the SATA drivers\nfor Linux) [1]\n\n \"You need a vaguely recent 2.6.x kernel to support fsync(2)\n and fdatasync(2) flushing your disk's write cache.\n Previous 2.4.x and 2.6.x kernels would only flush the write\n cache upon reboot, or if you used a custom app to issue\n the 'flush cache' command directly to your disk.\n\n Very recent 2.6.x kernels include write barrier support, which\n flushes the write cache when the ext3 journal gets flushed to disk.\n\n If your kernel doesn't flush the write cache, then obviously there\n is a window where you can lose data. Welcome to the world of\n write-back caching, circa 1990.\n\n If you are stuck without a kernel that issues the FLUSH CACHE (IDE)\n or SYNCHRONIZE CACHE (SCSI) command, it is trivial to write\n a userspace utility that issues the command.\n\n Jeff, the Linux SATA driver guy\n \"\n\nI've wondered for a while if this driver issue is actually the\nsource of most of the fear around SATA drives. Note it appears\nthat with those old kernels you aren't that safe with SCSI either.\n\n\n[1] in may 2005, http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\n", "msg_date": "Thu, 11 May 2006 13:04:16 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, May 09, 2006 at 08:59:55PM -0400, Bruce Momjian wrote:\n> Joshua D. Drake wrote:\n> > Vivek Khera wrote:\n> > > \n> > > On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:\n> > > \n> > >> Sorry that is an extremely misleading statement. SATA RAID is \n> > >> perfectly acceptable if you have a hardware raid controller with a \n> > >> battery backup controller.\n> > >>\n> > >> And dollar for dollar, SCSI will NOT be faster nor have the hard drive \n> > >> capacity that you will get with SATA.\n> > > \n> > > Does this hold true still under heavy concurrent-write loads? I'm \n> > > preparing yet another big DB server and if SATA is a better option, I'm \n> > > all (elephant) ears.\n> > \n> > I didn't say better :). If you can afford, SCSI is the way to go. \n> > However SATA with a good controller (I am fond of the LSI 150 series) \n> > can provide some great performance.\n> \n> Basically, you can get away with cheaper hardware, but it usually\n> doesn't have the reliability/performance of more expensive options.\n> \n> You want an in-depth comparison of how a server disk drive is internally\n> better than a desktop drive:\n> \n> \thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\nBTW, someone (Western Digital?) is now offering SATA drives that carry\nthe same MTBF/warranty/what-not as their SCSI drives. I can't remember\nif they actually claim that it's the same mechanisms just with a\ndifferent controller on the drive...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:19:28 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Tue, May 09, 2006 at 12:10:32PM +0200, Jean-Yves F. 
Barbier wrote:\n> > I myself can't see much reason to spend $500 on high end controller\n> > cards for a simple Raid 1.\n> \n> Naa, you can find ATA &| SATA ctrlrs for about EUR30 !\n \nAnd you're likely getting what you paid for: crap. Such a controller is\nless likely to do things like turn of write caching so that fsync works\nproperly.\n\n> > + Hardware Raids might be a bit easier to manage, if you never spend a\n> > few hours to learn Software Raid Tools.\n> \n> I'd the same (mostly as you still have to punch a command line for\n> most of the controlers)\n \nControllers I've seen have some kind of easy to understand GUI, at least\nduring bootup. When it comes to OS-level tools that's going to vary\nwidely.\n\n> > + There are situations in which Software Raids are faster, as CPU power\n> > has advanced dramatically in the last years and even high end controller\n> > cards cannot keep up with that.\n> \n> Definitely NOT, however if your server doen't have a heavy load, the\n> software overload can't be noticed (essentially cache managing and\n> syncing)\n> \n> For bi-core CPUs, it might be true\n\nDepends. RAID performance depends on a heck of a lot more than just CPU.\nSoftware RAID allows you to do things like spread load across multiple\ncontrollers, so you can scale a lot higher for less money. Though in\nthis case I doubt that's a consideration, so what's more important is\nthat making sure the controller bus isn't in the way. One thing that\nmeans is ensuring that every SATA drive has it's own dedicated\ncontroller, since a lot of SATA hardware can't handle multiple commands\non the bus at once.\n\n> > + Using SATA drives is always a bit of risk, as some drives are lying\n> > about whether they are caching or not.\n> \n> ?? Do you intend to use your server without a UPS ??\n\nHave you never heard of someone tripping over a plug? Or a power supply\nfailing? Or the OS crashing? If fsync is properly obeyed, PostgreSQL\nwill gracefully recover from all of those situations. If it's not,\nyou're at risk of losing the whole database.\n\n> > + Using hardware controllers, the array becomes locked to a particular\n> > vendor. You can't switch controller vendors as the array meta\n> > information is stored proprietary. In case the Raid is broken to a level\n> > the controller can't recover automatically this might complicate manual\n> > recovery by specialists.\n> \n> ?? Do you intend not to make backups ??\n\nEven with backups this is still a valid concern, since the backup will\nbe nowhere near as up-to-date as the database was unless you have a\npretty low DML rate.\n\n> BUT a hardware controler is about EUR2000 and a (ATA/SATA) 500GB HD\n> is ~ EUR350.\n\nHuh? You can get 3ware controllers for about $500, and they're pretty\ndecent. While I'm sure there are controllers for $2k that doesn't mean\nthere's nothing inbetween that and nothing.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:32:10 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\n>> You want an in-depth comparison of how a server disk drive is internally\n>> better than a desktop drive:\n>>\n>> \thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> \n> BTW, someone (Western Digital?) 
is now offering SATA drives that carry\n> the same MTBF/warranty/what-not as their SCSI drives. I can't remember\n> if they actually claim that it's the same mechanisms just with a\n> different controller on the drive...\n\nWell western digital and Seagate both carry 5 year warranties. Seagate I \nbelieve does on almost all of there products. WD you have to pick the \nright drive.\n\nJoshua D> Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Thu, 11 May 2006 15:38:31 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Thu, May 11, 2006 at 03:38:31PM -0700, Joshua D. Drake wrote:\n> \n> >>You want an in-depth comparison of how a server disk drive is internally\n> >>better than a desktop drive:\n> >>\n> >>\thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> >\n> >BTW, someone (Western Digital?) is now offering SATA drives that carry\n> >the same MTBF/warranty/what-not as their SCSI drives. I can't remember\n> >if they actually claim that it's the same mechanisms just with a\n> >different controller on the drive...\n> \n> Well western digital and Seagate both carry 5 year warranties. Seagate I \n> believe does on almost all of there products. WD you have to pick the \n> right drive.\n\nI know that someone recently made a big PR push about how you could get\n'server reliability' in some of their SATA drives, but maybe now\neveryone's starting to do it. I suspect the premium you can charge for\nit offsets the costs, provided that you switch all your production over\nrather than trying to segregate production lines.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 18:01:38 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n> >> You want an in-depth comparison of how a server disk drive is internally\n> >> better than a desktop drive:\n> >>\n> >> \thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> > \n> > BTW, someone (Western Digital?) is now offering SATA drives that carry\n> > the same MTBF/warranty/what-not as their SCSI drives. I can't remember\n> > if they actually claim that it's the same mechanisms just with a\n> > different controller on the drive...\n> \n> Well western digital and Seagate both carry 5 year warranties. Seagate I \n> believe does on almost all of there products. WD you have to pick the \n> right drive.\n\nThat's nice, but it seems similar to my Toshiba laptop drive experience\n--- it breaks, we replace it. I would rather not have to replace it. :-)\n\nLet me mention the only drive that has ever failed without warning was a\nSCSI Deskstar (deathstar) drive, which was a hybrid because it was a\nSCSI drive, but made for consumer use.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 11 May 2006 19:20:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\n>> Well western digital and Seagate both carry 5 year warranties. Seagate I \n>> believe does on almost all of there products. WD you have to pick the \n>> right drive.\n> \n> That's nice, but it seems similar to my Toshiba laptop drive experience\n> --- it breaks, we replace it. I would rather not have to replace it. :-)\n\nLaptop drives are known to have short lifespans do to heat. I have IDE \ndrives that have been running for four years without any issues but I \nhave good fans blowing over them.\n\nFrankly I think if you are running drivess (in a production environment) \nfor more then 3 years your crazy anyway :)\n\n> \n> Let me mention the only drive that has ever failed without warning was a\n> SCSI Deskstar (deathstar) drive, which was a hybrid because it was a\n> SCSI drive, but made for consumer use.\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Thu, 11 May 2006 16:24:43 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n> >> Well western digital and Seagate both carry 5 year warranties. Seagate I \n> >> believe does on almost all of there products. WD you have to pick the \n> >> right drive.\n> > \n> > That's nice, but it seems similar to my Toshiba laptop drive experience\n> > --- it breaks, we replace it. I would rather not have to replace it. :-)\n> \n> Laptop drives are known to have short lifespans do to heat. I have IDE \n> drives that have been running for four years without any issues but I \n> have good fans blowing over them.\n> \n> Frankly I think if you are running drivess (in a production environment) \n> for more then 3 years your crazy anyway :)\n\nAgreed --- the cost/benefit of keeping a drive >3 years just doesn't\nmake sense.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 11 May 2006 19:31:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Thu, May 11, 2006 at 07:20:27PM -0400, Bruce Momjian wrote:\n> Joshua D. Drake wrote:\n> > \n> > >> You want an in-depth comparison of how a server disk drive is internally\n> > >> better than a desktop drive:\n> > >>\n> > >> \thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> > > \n> > > BTW, someone (Western Digital?) is now offering SATA drives that carry\n> > > the same MTBF/warranty/what-not as their SCSI drives. I can't remember\n> > > if they actually claim that it's the same mechanisms just with a\n> > > different controller on the drive...\n> > \n> > Well western digital and Seagate both carry 5 year warranties. Seagate I \n> > believe does on almost all of there products. WD you have to pick the \n> > right drive.\n> \n> That's nice, but it seems similar to my Toshiba laptop drive experience\n> --- it breaks, we replace it. 
I would rather not have to replace it. :-)\n> \n> Let me mention the only drive that has ever failed without warning was a\n> SCSI Deskstar (deathstar) drive, which was a hybrid because it was a\n> SCSI drive, but made for consumer use.\n\nMy damn powerbook drive recently failed with very little warning, other\nthan I did notice that disk activity seemed to be getting a bit slower.\nIIRC it didn't log any errors or anything. Even if it did, if the OS was\ncatching them I'd hope it would pop up a warning or something. But from\nwhat I've heard, some drives now-a-days will silently remap dead sectors\nwithout telling the OS anything, which is great until you've used up all\nof the spare sectors and there's nowhere to remap to. :(\n\nHmm... I should figure out how to have OS X email me daily log updates\nlike FreeBSD does...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 18:41:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Thu, May 11, 2006 at 07:20:27PM -0400, Bruce Momjian wrote:\n> > Joshua D. Drake wrote:\n> > > \n> > > >> You want an in-depth comparison of how a server disk drive is internally\n> > > >> better than a desktop drive:\n> > > >>\n> > > >> \thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> > > > \n> > > > BTW, someone (Western Digital?) is now offering SATA drives that carry\n> > > > the same MTBF/warranty/what-not as their SCSI drives. I can't remember\n> > > > if they actually claim that it's the same mechanisms just with a\n> > > > different controller on the drive...\n> > > \n> > > Well western digital and Seagate both carry 5 year warranties. Seagate I \n> > > believe does on almost all of there products. WD you have to pick the \n> > > right drive.\n> > \n> > That's nice, but it seems similar to my Toshiba laptop drive experience\n> > --- it breaks, we replace it. I would rather not have to replace it. :-)\n> > \n> > Let me mention the only drive that has ever failed without warning was a\n> > SCSI Deskstar (deathstar) drive, which was a hybrid because it was a\n> > SCSI drive, but made for consumer use.\n> \n> My damn powerbook drive recently failed with very little warning, other\n> than I did notice that disk activity seemed to be getting a bit slower.\n> IIRC it didn't log any errors or anything. Even if it did, if the OS was\n> catching them I'd hope it would pop up a warning or something. But from\n> what I've heard, some drives now-a-days will silently remap dead sectors\n> without telling the OS anything, which is great until you've used up all\n> of the spare sectors and there's nowhere to remap to. :(\n\nYes, I think most IDE drives do silently remap, and most SCSI drives\ndon't. Not sure how much _most_ is.\n\nI know my SCSI controller beeps at me when I try to access a bad block. \nNow, that gets my attention.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 11 May 2006 19:45:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "\n> Hmm... 
I should figure out how to have OS X email me daily log updates\n> like FreeBSD does...\n\nLogwatch.\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Thu, 11 May 2006 16:59:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "On Thu, May 11, 2006 at 18:41:25 -0500,\n \"Jim C. Nasby\" <[email protected]> wrote:\n> On Thu, May 11, 2006 at 07:20:27PM -0400, Bruce Momjian wrote:\n> \n> My damn powerbook drive recently failed with very little warning, other\n> than I did notice that disk activity seemed to be getting a bit slower.\n> IIRC it didn't log any errors or anything. Even if it did, if the OS was\n> catching them I'd hope it would pop up a warning or something. But from\n> what I've heard, some drives now-a-days will silently remap dead sectors\n> without telling the OS anything, which is great until you've used up all\n> of the spare sectors and there's nowhere to remap to. :(\n\nYou might look into smartmontools. One part of this is a daemon that runs\nselftests on the disks on a regular basis. You can have warnings mailed to\nyou on various conditions. Drives will fail the self test before they\nrun out of spare sectors. There are other drive characteristics that can\nbe used to tell if drive failure is imminent and give you a chance to replace\na drive before it fails.\n", "msg_date": "Fri, 12 May 2006 02:19:57 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arguments Pro/Contra Software Raid" }, { "msg_contents": "> My damn powerbook drive recently failed with very little warning\n\nIt seems to me that S.M.A.R.T. reporting is a crock of shit. I've had ATA\ndrives report everything OK while clearly in the final throes of death, just\nminutes before total failure.\n\n-- \nScott Ribe\[email protected]\nhttp://www.killerbytes.com/\n(303) 722-0567 voice\n\n\n", "msg_date": "Fri, 12 May 2006 09:13:59 -0600", "msg_from": "Scott Ribe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid" }, { "msg_contents": "Scott Ribe <[email protected]> writes:\n>> My damn powerbook drive recently failed with very little warning\n\n> It seems to me that S.M.A.R.T. reporting is a crock of shit. I've had ATA\n> drives report everything OK while clearly in the final throes of death, just\n> minutes before total failure.\n\nFWIW, I replaced a powerbook's drive about two weeks ago myself, and its\nSMART reporting didn't show a darn thing wrong either. Fortunately, the\ndrive started acting noticeably weird (long pauses seemingly trying to\nrecalibrate itself) while still working well enough that I was able to\nget everything copied off it. I didn't wait for it to fail completely ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 May 2006 11:53:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid " }, { "msg_contents": "At 11:53 AM 5/12/2006 -0400, Tom Lane wrote:\n\n>Scott Ribe <[email protected]> writes:\n> >> My damn powerbook drive recently failed with very little warning\n>\n> > It seems to me that S.M.A.R.T. reporting is a crock of shit. 
I've had ATA\n> > drives report everything OK while clearly in the final throes of death, \n> just\n> > minutes before total failure.\n>\n>FWIW, I replaced a powerbook's drive about two weeks ago myself, and its\n>SMART reporting didn't show a darn thing wrong either. Fortunately, the\n>drive started acting noticeably weird (long pauses seemingly trying to\n>recalibrate itself) while still working well enough that I was able to\n>get everything copied off it. I didn't wait for it to fail completely ;-)\n\nStrange. With long pauses, usually you'd see stuff like \"crc\" errors in the \nlogs, and you'd get some info from the SMART monitoring stuff.\n\nI guess a lot of it depends on the drive model and manufacturer.\n\nSMART reporting is better than nothing, and it's actually not too bad. It's \njust whether manufacturers implement it in useful ways or not.\n\nI wouldn't trust the drive or manufacturer's judgement on when failure is \nimminent - the drive usually gathers statistics etc and these are typically \nreadable with the SMART monitoring/reporting software, so you should check \nthose stats and decide for yourself when failure is imminent.\n\nFor example: I'd suggest regarding any non-cable related CRC errors, or \nseek failures as \"drive replacement time\"- even if the drive or \nManufacturer thinks you need to have tons in a row for \"failure imminent\".\n\nI recommend \"blacklisting\" drives which don't notice anything before it is \ntoo late. e.g. even if it starts taking a long time to read a block, it \nreports no differences in the SMART stats.\n\nLink.\n\n\n", "msg_date": "Sun, 14 May 2006 16:31:00 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Arguments Pro/Contra Software Raid " } ]
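The smartmontools suggestion above is easy to put into practice. A minimal sketch, assuming a Linux host with the smartmontools package installed and /dev/sda as the drive to watch; the device name, mail address and test schedule below are placeholders rather than anything taken from the thread:

  # one-off checks from the command line
  smartctl -H /dev/sda            # overall health self-assessment
  smartctl -a /dev/sda            # full attribute dump, error log and self-test log
  smartctl -t short /dev/sda      # start a short self-test
  smartctl -l selftest /dev/sda   # read the self-test results afterwards

  # /etc/smartd.conf entry so smartd runs the tests and mails warnings:
  # short self-test daily at 02:00, long self-test Saturdays at 03:00
  /dev/sda -a -m [email protected] -s (S/../.././02|L/../../6/03)

As several posters note, SMART does not catch every failure mode, so treat a clean report as necessary rather than sufficient; reallocated-sector or seek-error counts creeping upward are still a good cue to replace the drive.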
[ { "msg_contents": "Hi all !\n\nI am running PostgreSQL 7.3.2 on Linux 2.6.13...\n\nWhat I see when VACUUM process is running is:\n\nCpu(s): 0.0% us, 3.2% sy, 0.0% ni, 0.0% id, 93.5% wa, 3.2% hi,\n0.0% si\n\nWhat I am worry about is \"93.5% wa\" ...\n\nCould someone explain me what is the VACUUM process waiting for ?\n\nBest regards\nDavid\n\n", "msg_date": "9 May 2006 02:45:37 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "VACUUM killing my CPU" }, { "msg_contents": "On May 9, 2006 02:45 am, [email protected] wrote:\n> What I am worry about is \"93.5% wa\" ...\n>\n> Could someone explain me what is the VACUUM process waiting for ?\n>\n\nDisk I/O.\n\n-- \nIn a truly free society, \"Alcohol, Tobacco and Firearms\" would be a \nconvenience store chain.\n\n", "msg_date": "Tue, 9 May 2006 21:08:33 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM killing my CPU" }, { "msg_contents": "Hi,\n\n>>What I am worry about is \"93.5% wa\" ...\n>>\n>>Could someone explain me what is the VACUUM process waiting for ?\n>>\n> \n> \n> Disk I/O.\n> \n\nCPU\nwa: Time spent waiting for IO. Prior to Linux 2.5.41, shown as zero.\n\nJust a little more info to help understand what Alan has pointed out.\n\nYour CPU processes are waiting on the HDD ...\n\nHTH\n\n-- \nRegards,\nRudi\n\n", "msg_date": "Wed, 10 May 2006 14:18:15 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM killing my CPU" } ]
[ { "msg_contents": "Hi all !\n\nI have got such problem.\nIm running Postgresql 7.3.2 on Linux 2.6.13.\nWhat is see when VACCUM is running and killing my CPU is:\n\nCpu(s): 3.2% us, 0.0% sy, 0.0% ni, 0.0% id, 96.8% wa, 0.0% hi,\n0.0% si\n\nwhat i am worry about is \"96.8% wa\" why is it like that?\n\nwhat is the process waiting for ?\n\ncould somone explain me that please? :)\n\nBest regards\ndavid\n\n", "msg_date": "9 May 2006 03:19:08 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "PostgreSQL VACCUM killing CPU" }, { "msg_contents": "The \"wa\" means waiting on IO. Vacuum is a very IO intensive \nprocess. You can use tools like vmstat and iostat to see how much \ndisk IO is occurring. Also, sar is very helpful for trending these \nvalues over time.\n\n-- Will Reese http://blog.rezra.com\nOn May 9, 2006, at 5:19 AM, [email protected] wrote:\n\n> Hi all !\n>\n> I have got such problem.\n> Im running Postgresql 7.3.2 on Linux 2.6.13.\n> What is see when VACCUM is running and killing my CPU is:\n>\n> Cpu(s): 3.2% us, 0.0% sy, 0.0% ni, 0.0% id, 96.8% wa, 0.0% hi,\n> 0.0% si\n>\n> what i am worry about is \"96.8% wa\" why is it like that?\n>\n> what is the process waiting for ?\n>\n> could somone explain me that please? :)\n>\n> Best regards\n> david\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 9 May 2006 22:03:12 -0500", "msg_from": "Will Reese <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL VACCUM killing CPU" }, { "msg_contents": "\n>> I have got such problem.\n>> Im running Postgresql 7.3.2 on Linux 2.6.13.\n\nAlso, you should seriously consider upgrading. 8.1.3 is the current \nPostgreSQL release. If you must remain on 7.3, at least upgrade to \n7.3.14, which contains many bugfixes.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Wed, 10 May 2006 12:24:55 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL VACCUM killing CPU" }, { "msg_contents": "\n>> I have got such problem.\n>> Im running Postgresql 7.3.2 on Linux 2.6.13.\n\nAlso, you should seriously consider upgrading. 8.1.3 is the current \nPostgreSQL release. If you must remain on 7.3, at least upgrade to \n7.3.14, which contains *many* bugfixes.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Wed, 10 May 2006 12:28:24 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL VACCUM killing CPU" }, { "msg_contents": "On Tue, May 09, 2006 at 03:19:08AM -0700, [email protected] wrote:\n> I have got such problem.\n> Im running Postgresql 7.3.2 on Linux 2.6.13.\n> What is see when VACCUM is running and killing my CPU is:\n> \n> Cpu(s): 3.2% us, 0.0% sy, 0.0% ni, 0.0% id, 96.8% wa, 0.0% hi,\n> 0.0% si\n> \n> what i am worry about is \"96.8% wa\" why is it like that?\n\nIt's killing your disk drives instead of CPU(which is mostly _idle_\nwaiting for I/O completion).\n\nRun this command to get an idea of the I/O activities:\n iostat -x 3 3\n\n[AD]Running a kernel patched with adaptive read-ahead may help it:\nhttp://www.vanheusden.com/ara/adaptive-readahead-11.1-2.6.16.5.patch.gz\n", "msg_date": "Wed, 10 May 2006 22:17:26 +0800", "msg_from": "Wu Fengguang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL VACCUM killing CPU" } ]
[ { "msg_contents": "Ok, thank you all again for your help in this matter. Yes, Michael I (the\noriginal poster) did say or imply I guess is a better word for it that a\ncombo of training and hands-on is the best way for one to learn PostgreSQL\nor just about anything for that matter. Thank you for recognizing the true\nintention of my statements.\n\nOne does need some sort of basis from which to grow. I will say that\nnothing can replace the hands-on real-world training one can get in this\nbusiness as it is the best way to learn and remember. Just my opinion. For\nexample, I stated I was a SysAdmin for 20 years. I was then thrust into the\nOracle world as a DBA about 2 years ago while still maintaining my SysAdmin\nresponsibilities. I have yet to receive any formal Oracle training and have\nhad to learn that on my own via, manuals, Google searches and begging the\nOracle Database Architect here for assistance. However, with PostgreSQL I\ninitially started down the very same track but was fortunate enough to\nreceive the ok for that week long PG boot camp. Although I didn't take all\nthat much away from the boot camp it did provide an excellent base from\nwhich I continue to grow as a PG DBA and it has helped me to understand\npostgres a lot easier and quicker than Oracle.\n\nSo please, lets just not throw emails back-n-forth amongst the group. Since\njoining I have found the group as a whole to be a great resource of\ninformation and PG knowledge and do not want us to get a testy with each\nother over something I said or someone's interpretation of what I said.\nCase closed.\n\nBTW - I am still working towards getting the knowledge out here about what I\nlearned form the posts, mainly that the buffers/cache row of information\nfrom the free command is the one we need most be concerned with.\n\nThank you,\nTim McElroy\n\n -----Original Message-----\nFrom: \[email protected]\n[mailto:[email protected]] On Behalf Of Michael Stone\nSent:\tMonday, May 08, 2006 5:17 PM\nTo:\[email protected]\nSubject:\tRe: [PERFORM] Memory and/or cache issues?\n\nOn Mon, May 08, 2006 at 03:38:23PM -0400, Vivek Khera wrote:\n>On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:\n>>>Yeah, I prefer my surgeons to work this way too. training is for the\n>>>birds.\n>>\n>>I think you read too quickly past the part where Tim said he'd \n>>taking a\n>>week-long training class.\n>\n>s/training/apprenticeship/g;\n\nOf course, the original poster did say that hands-on was the best way to \nlearn. What is apprenticeship but a combination of training and \nexperience. Are you just sniping for fun?\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n\n\nRE: [PERFORM] Memory and/or cache issues?\n\n\nOk, thank you all again for your help in this matter.  Yes, Michael I (the original poster) did say or imply I guess is a better word for it that a combo of training and hands-on is the best way for one to learn PostgreSQL or just about anything for that matter.  Thank you for recognizing the true intention of my statements.\nOne does need some sort of basis from which to grow.  I will say that nothing can replace the hands-on real-world training one can get in this business as it is the best way to learn and remember.  Just my opinion.  For example, I stated I was a SysAdmin for 20 years.  I was then thrust into the Oracle world as a DBA about 2 years ago while still maintaining my SysAdmin responsibilities.  
I have yet to receive any formal Oracle training and have had to learn that on my own via, manuals, Google searches and begging the Oracle Database Architect here for assistance.  However, with PostgreSQL I initially started down the very same track but was fortunate enough to receive the ok for that week long PG boot camp.  Although I didn't take all that much away from the boot camp it did provide an excellent base from which I continue to grow as a PG DBA and it has helped me to understand postgres a lot easier and quicker than Oracle.\nSo please, lets just not throw emails back-n-forth amongst the group.  Since joining I have found the group as a whole to be a great resource of information and PG knowledge and do not want us to get a testy with each other over something I said or someone's interpretation of what I said.  Case closed.\nBTW - I am still working towards getting the knowledge out here about what I learned form the posts, mainly that the buffers/cache row of information from the free command is the one we need most be concerned with.\nThank you,\nTim McElroy\n\n -----Original Message-----\nFrom:   [email protected] [mailto:[email protected]]  On Behalf Of Michael Stone\nSent:   Monday, May 08, 2006 5:17 PM\nTo:     [email protected]\nSubject:        Re: [PERFORM] Memory and/or cache issues?\n\nOn Mon, May 08, 2006 at 03:38:23PM -0400, Vivek Khera wrote:\n>On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:\n>>>Yeah, I prefer my surgeons to work this way too.  training is for the\n>>>birds.\n>>\n>>I think you read too quickly past the part where Tim said he'd  \n>>taking a\n>>week-long training class.\n>\n>s/training/apprenticeship/g;\n\nOf course, the original poster did say that hands-on was the best way to \nlearn. What is apprenticeship but a combination of training and \nexperience. Are you just sniping for fun?\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster", "msg_date": "Tue, 9 May 2006 08:45:16 -0400 ", "msg_from": "\"mcelroy, tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory and/or cache issues?" } ]
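The point about the buffers/cache row is easier to see against the output of the command itself. A short illustration; the numbers below are invented purely to show the layout:

  $ free -m
               total       used       free     shared    buffers     cached
  Mem:          3951       3913         38          0        137       3522
  -/+ buffers/cache:        253       3698
  Swap:         2047          0       2047

The top row counts the kernel page cache as used, so the 38 MB free looks alarming; the -/+ buffers/cache row subtracts buffers and cache and shows roughly 3.7 GB genuinely available to applications, since the cache is given back on demand. A nearly full top row on a database server is normal and usually desirable.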
[ { "msg_contents": " \nHi,\n\nWe've got a C function that we use here and we find that for every\nconnection, the first run of the function is much slower than any\nsubsequent runs. ( 50ms compared to 8ms)\n\nBesides using connection pooling, are there any options to improve\nperformance?\n\nBy the way, we are using pg version 8.1.3.\n\n-Adam\n\n", "msg_date": "Tue, 9 May 2006 14:57:09 -0700", "msg_from": "\"Adam Palmblad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow C Function" }, { "msg_contents": "Adam Palmblad wrote:\n> \n> Hi,\n> \n> We've got a C function that we use here and we find that for every\n> connection, the first run of the function is much slower than any\n> subsequent runs. ( 50ms compared to 8ms)\n\nThat is fairly standard because the data will be cached.\n\n> \n> Besides using connection pooling, are there any options to improve\n> performance?\n\nNot that I know of but then again I am not a C programer.\n\n\n\n> \n> By the way, we are using pg version 8.1.3.\n> \n> -Adam\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Tue, 09 May 2006 16:09:19 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow C Function" }, { "msg_contents": "\"Adam Palmblad\" <[email protected]> writes:\n> We've got a C function that we use here and we find that for every\n> connection, the first run of the function is much slower than any\n> subsequent runs. ( 50ms compared to 8ms)\n\nPerhaps that represents the time needed to load the dynamic library\ninto the backend? If so, the \"preload_libraries\" parameter might\nhelp you fix it. Or consider pooling connections. Or build a custom\nexecutable with the function linked in permanently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 May 2006 23:52:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow C Function " } ]
[ { "msg_contents": "I'm having a rare but deadly problem. On our web servers, a process occasionally gets stuck, and can't be unstuck. Once it's stuck, all Postgres activities cease. \"kill -9\" is required to kill it -- signals 2 and 15 don't work, and \"/etc/init.d/postgresql stop\" fails.\n\nHere's what the process table looks like:\n\n$ ps -ef | grep postgres\npostgres 30713 1 0 Apr24 ? 00:02:43 /usr/local/pgsql/bin/postmaster -p 5432 -D /disk3/postgres/data\npostgres 25423 30713 0 May08 ? 00:03:34 postgres: writer process\npostgres 25424 30713 0 May08 ? 00:00:02 postgres: stats buffer process\npostgres 25425 25424 0 May08 ? 00:00:02 postgres: stats collector process\npostgres 11918 30713 21 07:37 ? 02:00:27 postgres: production webuser 127.0.0.1(21772) SELECT\npostgres 31624 30713 0 16:11 ? 00:00:00 postgres: production webuser [local] idle\npostgres 31771 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12422) idle\npostgres 31772 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12421) idle\npostgres 31773 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12424) idle\npostgres 31774 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12425) idle\npostgres 31775 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12426) idle\npostgres 31776 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12427) idle\npostgres 31777 30713 0 16:12 ? 00:00:00 postgres: production webuser 127.0.0.1(12428) idle\n\nThe SELECT process is the one that's stuck. top(1) and other indicators show that nothing is going on at all (no CPU usage, normal memory usage); the process seems to be blocked waiting for something. (The \"idle\" processes are attached to a FastCGI program.)\n\nThis has happened on *two different machines*, both doing completely different tasks. The first one is essentially a read-only warehouse that serves lots of queries, and the second one is the server we use to load the warehouse. In both cases, Postgres has been running for a long time, and is issuing SELECT statements that it's issued millions of times before with no problems. No other processes are accessing Postgres, just the web services.\n\nThis is a deadly bug, because our web site goes dead when this happens, and it requires an administrator to log in and kill the stuck postgres process then restart Postgres. We've installed failover system so that the web site is diverted to a backup server, but since this has happened twice in one week, we're worried.\n\nAny ideas?\n\nDetails:\n\n Postgres 8.0.3\n Linux 2.6.12-1.1381_FC3smp i686 i386\n\n Dell 2-CPU Xeon system (hyperthreading is enabled)\n 4 GB memory\n 2 120 GB disks (SATA on machine 1, IDE on machine 2)\n\nThanks,\nCraig\n", "msg_date": "Tue, 09 May 2006 17:38:17 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres gets stuck" }, { "msg_contents": "\n> This is a deadly bug, because our web site goes dead when this happens, \n> and it requires an administrator to log in and kill the stuck postgres \n> process then restart Postgres. 
We've installed failover system so that \n> the web site is diverted to a backup server, but since this has happened \n> twice in one week, we're worried.\n> \n> Any ideas?\n\nSounds like a deadlock issue.\n\nDo you have query logging turned on?\n\nAlso, edit your postgresql.conf file and add (or uncomment):\n\nstats_command_string = true\n\nand restart postgresql.\n\nthen you'll be able to:\n\nselect * from pg_stat_activity;\n\nto see what queries postgres is running and that might give you some clues.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 10 May 2006 10:51:41 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres gets stuck" }, { "msg_contents": "\n\"\"Craig A. James\"\" <[email protected]> wrote\n> I'm having a rare but deadly problem. On our web servers, a process \n> occasionally gets stuck, and can't be unstuck. Once it's stuck, all \n> Postgres activities cease. \"kill -9\" is required to kill it -- \n> signals 2 and 15 don't work, and \"/etc/init.d/postgresql stop\" fails.\n>\n> Details:\n>\n> Postgres 8.0.3\n>\n\n[Scanning 8.0.4 ~ 8.0.7 ...] Didn't find related bug fix in the upgrade \nrelease. Can you attach to the problematic process and \"bt\" it (so we \ncould see where it stucks)?\n\nRegards,\nQingqing \n\n\n", "msg_date": "Thu, 11 May 2006 23:26:12 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres gets stuck" }, { "msg_contents": "Chris wrote:\n> \n>> This is a deadly bug, because our web site goes dead when this \n>> happens, ...\n> \n> Sounds like a deadlock issue.\n> ...\n> stats_command_string = true\n> and restart postgresql.\n> then you'll be able to:\n> select * from pg_stat_activity;\n> to see what queries postgres is running and that might give you some clues.\n\nThanks, good advice. You're absolutely right, it's stuck on a mutex. After doing what you suggest, I discovered that the query in progress is a user-written function (mine). When I log in as root, and use \"gdb -p <pid>\" to attach to the process, here's what I find. Notice the second function in the stack, a mutex lock:\n\n(gdb) bt\n#0 0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n#1 0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6\n#2 0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6\n#3 0x4f5fc1b4 in ?? ()\n#4 0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from /usr/local/pgsql/lib/libchmoogle.so\n#5 0x009ffcf0 in ?? () from /usr/lib/libz.so.1\n#6 0xbfe71c04 in ?? ()\n#7 0xbfe71e50 in ?? ()\n#8 0xbfe71b78 in ?? ()\n#9 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n#10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n#11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1\n#12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at zipstreamimpl.h:332\n#13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1, pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n#14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50, pOb=0xbfd923b8) at obconversion.cpp:780\n#15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120\n#16 0x00c1a203 in chmoogle_ichem_normalize_parent () at stl_construct.h:120\n#17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243\n#18 0x0810ae4d in ExecMakeFunctionResult ()\n#19 0x0810de2e in ExecProject ()\n#20 0x08115972 in ExecResult ()\n#21 0x08109e01 in ExecProcNode ()\n#22 0x00000020 in ?? ()\n#23 0xbed4b340 in ?? ()\n#24 0xbf92d9a0 in ?? 
()\n#25 0xbed4b0c0 in ?? ()\n#26 0x00000000 in ?? ()\n\nIt looks to me like my code is trying to read the input parameter (a fairly long string, maybe 2K) from a buffer that was gzip'ed by Postgres for the trip between the client and server. My suspicion is that it's an incompatibility between malloc() libraries. libz (gzip compression) is calling something called zcfree, which then appears to be intercepted by something that's (probably statically) linked into my library. And somewhere along the way, a mutex gets set, and then ... it's stuck forever.\n\nps(1) shows that this thread had been running for about 7 hours, and the job status showed that this function had been successfully called about 1 million times, before this mutex lock occurred.\n\nAny ideas?\n\nThanks,\nCraig\n", "msg_date": "Thu, 11 May 2006 08:53:34 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres gets stuck" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> My suspicion is that it's an incompatibility between malloc()\n> libraries.\n\nOn Linux there's only supposed to be one malloc, ie, glibc's version.\nOn other platforms I'd be worried about threaded vs non-threaded libc\n(because the backend is not threaded), but not Linux.\n\nThere may be a more basic threading problem here, though, rooted in the\nprecise fact that the backend isn't threaded. If you're trying to use\nany libraries that assume they can have multiple threads, I wouldn't be\nat all surprised to see things go boom. C++ exception handling could be\nproblematic too.\n\nOr it could be a garden variety glibc bug. How up-to-date is your\nplatform?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2006 20:03:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres gets stuck " }, { "msg_contents": "Tom Lane wrote:\n> >My suspicion is that it's an incompatibility between malloc()\n> >libraries.\n> \n> On Linux there's only supposed to be one malloc, ie, glibc's version.\n> On other platforms I'd be worried about threaded vs non-threaded libc\n> (because the backend is not threaded), but not Linux.\n\nI guess I misinterpreted the Postgress manual, which says (in 31.9, \"C Language Functions\"),\n\n \"When allocating memory, use the PostgreSQL functions palloc and pfree\n instead of the corresponding C library functions malloc and free.\"\n\nI imagined that perhaps palloc/pfree used mutexes for something. But if I understand you, palloc() and pfree() are just wrappers around malloc() and free(), and don't (for example) make their own separate calls to brk(2), sbrk(2), or their kin. If that's the case, then you answered my question - it's all ordinary malloc/free calls in the end, and that's not the source of the problem.\n\n> There may be a more basic threading problem here, though, rooted in the\n> precise fact that the backend isn't threaded. If you're trying to use\n> any libraries that assume they can have multiple threads, I wouldn't be\n> at all surprised to see things go boom.\n\nNo threading anywhere. None of the libraries use threads or mutexes. It's just plain old vanilla C/C++ scientific algorithms.\n\n> C++ exception handling could be problematic too.\n\nNo C++ exceptions are thrown anywhere in the code, 'tho I suppose one of the I/O libraries could throw an exception, e.g. when reading from a file. But there's no evidence of this after millions of identical operations succeeded. 
In addition, the stack trace shows it to be stuck in a memory operation, not an I/O operation.\n\n> Or it could be a garden variety glibc bug. How up-to-date is your\n> platform?\n\nI guess this is the next place to look. From the few answers I've gotten, it sounds like this isn't a known Postgres issue, and my stack trace doesn't seem to be familiar to anyone on this forum. Oh well... thanks for your help.\n\nCraig\n", "msg_date": "Thu, 11 May 2006 19:10:17 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres gets stuck" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I guess I misinterpreted the Postgress manual, which says (in 31.9, \"C Language Functions\"),\n\n> \"When allocating memory, use the PostgreSQL functions palloc and pfree\n> instead of the corresponding C library functions malloc and free.\"\n\n> I imagined that perhaps palloc/pfree used mutexes for something. But if I understand you, palloc() and pfree() are just wrappers around malloc() and free(), and don't (for example) make their own separate calls to brk(2), sbrk(2), or their kin.\n\nCorrect. palloc/pfree are all about managing the lifetime of memory\nallocations, so that (for example) a function can return a palloc'd data\nstructure without worrying about whether that creates a long-term memory\nleak. But ultimately they just use malloc/free, and there's certainly\nnot any threading or mutex considerations in there.\n\n> No threading anywhere. None of the libraries use threads or mutexes. It's just plain old vanilla C/C++ scientific algorithms.\n\nDarn, my best theory down the drain.\n\n>> Or it could be a garden variety glibc bug. How up-to-date is your\n>> platform?\n\n> I guess this is the next place to look.\n\nLet us know how it goes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2006 22:51:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres gets stuck " } ]
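For anyone hitting a similar hang, the diagnostic steps used in this thread condense into a short checklist; run them as root or the postgres user, and <pid> is simply the process id of the stuck backend from ps:

  # what is the backend executing? (needs stats_command_string = true)
  psql -d production -c "SELECT procpid, query_start, current_query FROM pg_stat_activity;"

  # is it blocked in a system call or spinning in userspace?
  strace -p <pid>

  # where in the code is it stuck?
  gdb -p <pid>
  (gdb) bt
  (gdb) detach
  (gdb) quit

In this case the backtrace was the step that paid off, pointing at a mutex inside a third-party library rather than anything in PostgreSQL itself.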
[ { "msg_contents": "is there a possibility for creating views or temp tables in memory to \navoid disk io when user makes select operations?\n\nregards\ntom\n\n", "msg_date": "Wed, 10 May 2006 11:18:55 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "in memory views" }, { "msg_contents": "Thomas Vatter schrieb:\n> is there a possibility for creating views or temp tables in memory to \n> avoid disk io when user makes select operations?\n\nNo need. The data will be available in OS and database caches if\nthey are really required often. If not, tune up the caches and\ndo a regular \"pre select\".\n\nRegards\nTino\n", "msg_date": "Wed, 10 May 2006 11:35:26 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Tino Wildenhain wrote:\n\n> Thomas Vatter schrieb:\n>\n>> is there a possibility for creating views or temp tables in memory to \n>> avoid disk io when user makes select operations?\n>\n>\n> No need. The data will be available in OS and database caches if\n> they are really required often. If not, tune up the caches and\n> do a regular \"pre select\".\n>\n> Regards\n> Tino\n>\n>\n\nhmm, I am selecting a resultset with 1300 rows joined from 12 tables. \nwith jdbc I am waiting 40 seconds until the first row appears. The \nfollowing rows appear really fast but the 40 seconds are a problem.\n\nregards\ntom\n\n", "msg_date": "Wed, 10 May 2006 11:55:37 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "Thomas Vatter schrieb:\n> Tino Wildenhain wrote:\n> \n>> Thomas Vatter schrieb:\n>>\n>>> is there a possibility for creating views or temp tables in memory to \n>>> avoid disk io when user makes select operations?\n>>\n>>\n>>\n>> No need. The data will be available in OS and database caches if\n>> they are really required often. If not, tune up the caches and\n>> do a regular \"pre select\".\n>>\n>> Regards\n>> Tino\n>>\n>>\n> \n> hmm, I am selecting a resultset with 1300 rows joined from 12 tables. \n> with jdbc I am waiting 40 seconds until the first row appears. The \n> following rows appear really fast but the 40 seconds are a problem.\n\nWell you will need the equally 40 seconds to fill your hypothetical\nin memory table. (even a bit more due to the creation of a datastructure).\n\nSo you can do the aproaches of semi materialized views (that are in fact\nwriting into a shadow table) or just prefetch your data at time - just\nat the times you would refill your memory tables if they existed.\nA cronjob with select/fetch should do.\n\nRegards\nTino\n", "msg_date": "Wed, 10 May 2006 12:08:30 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Tino Wildenhain wrote:\n\n> Thomas Vatter schrieb:\n>\n>> Tino Wildenhain wrote:\n>>\n>>> Thomas Vatter schrieb:\n>>>\n>>>> is there a possibility for creating views or temp tables in memory \n>>>> to avoid disk io when user makes select operations?\n>>>\n>>>\n>>>\n>>>\n>>> No need. The data will be available in OS and database caches if\n>>> they are really required often. If not, tune up the caches and\n>>> do a regular \"pre select\".\n>>>\n>>> Regards\n>>> Tino\n>>>\n>>>\n>>\n>> hmm, I am selecting a resultset with 1300 rows joined from 12 tables. \n>> with jdbc I am waiting 40 seconds until the first row appears. 
The \n>> following rows appear really fast but the 40 seconds are a problem.\n>\n>\n> Well you will need the equally 40 seconds to fill your hypothetical\n> in memory table. (even a bit more due to the creation of a \n> datastructure).\n>\n> So you can do the aproaches of semi materialized views (that are in fact\n> writing into a shadow table) or just prefetch your data at time - just\n> at the times you would refill your memory tables if they existed.\n> A cronjob with select/fetch should do.\n>\n> Regards\n> Tino\n>\n>\n\nIf the in memory table is created a bootup time of the dbms it is \nalready present when user selects the data. Of course the challenge is \nto keep the in memory table up to date if data are changed. What do you \nmean with semi materialized views, I have tried select * from this_view \nwith the same result. Also, if I repeat the query it does not run faster.\n\nregards\ntom\n\n-- \nMit freundlichen Gr��en / Regards\nVatter\n \nNetwork Inventory Software\nSun Microsystems Principal Partner\n\nwww.network-inventory.de\nTel. 030-79782510\nE-Mail [email protected]\n\n", "msg_date": "Wed, 10 May 2006 12:43:28 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "Thomas Vatter schrieb:\n> Tino Wildenhain wrote:\n...\n>> Well you will need the equally 40 seconds to fill your hypothetical\n>> in memory table. (even a bit more due to the creation of a \n>> datastructure).\n>>\n>> So you can do the aproaches of semi materialized views (that are in fact\n>> writing into a shadow table) or just prefetch your data at time - just\n>> at the times you would refill your memory tables if they existed.\n>> A cronjob with select/fetch should do.\n>>\n>> Regards\n>> Tino\n>>\n>>\n> \n> If the in memory table is created a bootup time of the dbms it is \n> already present when user selects the data. Of course the challenge is \n> to keep the in memory table up to date if data are changed. What do you \n> mean with semi materialized views, I have tried select * from this_view \n> with the same result. Also, if I repeat the query it does not run faster.\n> \nSemi materialized views are just views with aditional rules and some\ntriggers which copy data to another table. There are several receipes\nif you google accordingly.\n\nI do not know what you mean by \"bootup time\" - do you really reboot\nyour database server? *hehe* just kidding ;)\n\nIn your first email you told me your query indeed runs faster the 2nd\ntime (due to the caching) now you are telling me that it is not.\n\nBtw, judging from your analyze output you are using very cryptic\ntable and column names - you can use aliasing in the query and dont\nhave to resort to tiny tags when you actually name the objects ;)\n\nMaybe others have comments on your query. Btw, better use\nexplain analyze to get realistic results.\n\nRegards\nTino\n", "msg_date": "Wed, 10 May 2006 13:57:04 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Tino Wildenhain wrote:\n\n> Thomas Vatter schrieb:\n>\n>> Tino Wildenhain wrote:\n>\n> ...\n>\n>>> Well you will need the equally 40 seconds to fill your hypothetical\n>>> in memory table. 
(even a bit more due to the creation of a \n>>> datastructure).\n>>>\n>>> So you can do the aproaches of semi materialized views (that are in \n>>> fact\n>>> writing into a shadow table) or just prefetch your data at time - just\n>>> at the times you would refill your memory tables if they existed.\n>>> A cronjob with select/fetch should do.\n>>>\n>>> Regards\n>>> Tino\n>>>\n>>>\n>>\n>> If the in memory table is created a bootup time of the dbms it is \n>> already present when user selects the data. Of course the challenge \n>> is to keep the in memory table up to date if data are changed. What \n>> do you mean with semi materialized views, I have tried select * from \n>> this_view with the same result. Also, if I repeat the query it does \n>> not run faster.\n>>\n> Semi materialized views are just views with aditional rules and some\n> triggers which copy data to another table. There are several receipes\n> if you google accordingly.\n>\n> I do not know what you mean by \"bootup time\" - do you really reboot\n> your database server? *hehe* just kidding ;)\n>\n> In your first email you told me your query indeed runs faster the 2nd\n> time (due to the caching) now you are telling me that it is not.\n>\n> Btw, judging from your analyze output you are using very cryptic\n> table and column names - you can use aliasing in the query and dont\n> have to resort to tiny tags when you actually name the objects ;)\n>\n> Maybe others have comments on your query. Btw, better use\n> explain analyze to get realistic results.\n>\n> Regards\n> Tino\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\n\nThe subsequent rows are shown faster not the subsequent queries - if \nyou really read my first e-mail ;-) . Yes, I have done analyse \nyesterday, the database has not changed, afaik it is necessary when the \ndatabase contents are changing.\n\nregards\ntom\n\n", "msg_date": "Wed, 10 May 2006 14:23:49 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "On Wed, 2006-05-10 at 04:55, Thomas Vatter wrote:\n> Tino Wildenhain wrote:\n> \n> > Thomas Vatter schrieb:\n> >\n> >> is there a possibility for creating views or temp tables in memory to \n> >> avoid disk io when user makes select operations?\n> >\n> >\n> > No need. The data will be available in OS and database caches if\n> > they are really required often. If not, tune up the caches and\n> > do a regular \"pre select\".\n> >\n> > Regards\n> > Tino\n> >\n> >\n> \n> hmm, I am selecting a resultset with 1300 rows joined from 12 tables. \n> with jdbc I am waiting 40 seconds until the first row appears. The \n> following rows appear really fast but the 40 seconds are a problem.\n\nAre you selecting the whole set at once? 
Or are you placing it into a\ncursor?\n\nWhat happens if you do this by declaring it as a cursor and then\nfetching the first row?\n", "msg_date": "Wed, 10 May 2006 10:11:57 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Scott Marlowe wrote:\n\n>On Wed, 2006-05-10 at 04:55, Thomas Vatter wrote:\n> \n>\n>>Tino Wildenhain wrote:\n>>\n>> \n>>\n>>>Thomas Vatter schrieb:\n>>>\n>>> \n>>>\n>>>>is there a possibility for creating views or temp tables in memory to \n>>>>avoid disk io when user makes select operations?\n>>>> \n>>>>\n>>>No need. The data will be available in OS and database caches if\n>>>they are really required often. If not, tune up the caches and\n>>>do a regular \"pre select\".\n>>>\n>>>Regards\n>>>Tino\n>>>\n>>>\n>>> \n>>>\n>>hmm, I am selecting a resultset with 1300 rows joined from 12 tables. \n>>with jdbc I am waiting 40 seconds until the first row appears. The \n>>following rows appear really fast but the 40 seconds are a problem.\n>> \n>>\n>\n>Are you selecting the whole set at once? Or are you placing it into a\n>cursor?\n>\n>What happens if you do this by declaring it as a cursor and then\n>fetching the first row?\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n> \n>\n\nI do executeQuery(), for the resultSet I do next() and return one row, \nbut wait, I have to review the logic in this area, I can tell you tomorrow\n\nregards\ntom\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Wed, 2006-05-10 at 04:55, Thomas Vatter wrote:\n \n\nTino Wildenhain wrote:\n\n \n\nThomas Vatter schrieb:\n\n \n\nis there a possibility for creating views or temp tables in memory to \navoid disk io when user makes select operations?\n \n\n\nNo need. The data will be available in OS and database caches if\nthey are really required often. If not, tune up the caches and\ndo a regular \"pre select\".\n\nRegards\nTino\n\n\n \n\nhmm, I am selecting a resultset with 1300 rows joined from 12 tables. \nwith jdbc I am waiting 40 seconds until the first row appears. The \nfollowing rows appear really fast but the 40 seconds are a problem.\n \n\n\nAre you selecting the whole set at once? Or are you placing it into a\ncursor?\n\nWhat happens if you do this by declaring it as a cursor and then\nfetching the first row?\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n \n\n\nI do executeQuery(), for the resultSet I do next() and return one row,\nbut wait, I have to review the logic in this area, I can tell you\ntomorrow\n\nregards\ntom", "msg_date": "Wed, 10 May 2006 17:41:01 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "On Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n> Scott Marlowe wrote: \n\n> > What happens if you do this by declaring it as a cursor and then\n> > fetching the first row?\n\n> > \n> \n> I do executeQuery(), for the resultSet I do next() and return one row,\n> but wait, I have to review the logic in this area, I can tell you\n> tomorrow\n\n\nA good short test is to run explain analyze on the query from the psql\ncommand line. 
If it shows an execution time of significantly less than\nwhat you get from you application, then it is likely that the real\nproblem is that your application is receiving the whole result set via\nlibpq and waiting for that. A cursor will solve that problem.\n", "msg_date": "Wed, 10 May 2006 10:45:06 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "> is there a possibility for creating views or temp tables in memory to \n> avoid disk io when user makes select operations?\n\nyou might also want to look into \"materialized views\":\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\nhttp://www.varlena.com/varlena/GeneralBits/64.php\n\nthis helped us alot when we had slow queries involving many tables.\n\ncheers,\nthomas\n\n", "msg_date": "Wed, 10 May 2006 17:45:37 +0200", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Scott Marlowe wrote:\n\n>On Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n> \n>\n>>Scott Marlowe wrote: \n>> \n>>\n>\n> \n>\n>>>What happens if you do this by declaring it as a cursor and then\n>>>fetching the first row?\n>>> \n>>>\n>\n> \n>\n>>> \n>>> \n>>>\n>>I do executeQuery(), for the resultSet I do next() and return one row,\n>>but wait, I have to review the logic in this area, I can tell you\n>>tomorrow\n>> \n>>\n>\n>\n>A good short test is to run explain analyze on the query from the psql\n>command line. If it shows an execution time of significantly less than\n>what you get from you application, then it is likely that the real\n>problem is that your application is receiving the whole result set via\n>libpq and waiting for that. A cursor will solve that problem.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n> \n>\nYes, the difference between psql command line and application is 6 \nseconds to 40 seconds. It is\nexactly the step resultSet = excecuteQuery() that needs 40 seconds. I \nuse next() as a cursor\nthrough the resultSet, but I fear this is not enough, do I have to use \ncreateStatement(resultSetType,\nresultSetConcurrency) respectively prepareStatement (resultSetType, \nresultSetConcurrency) to\nachieve the cursor behaviour?\n\nregards\ntom\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n \n\nScott Marlowe wrote: \n \n\n\n \n\n\nWhat happens if you do this by declaring it as a cursor and then\nfetching the first row?\n \n\n\n\n \n\n\n \n \n\nI do executeQuery(), for the resultSet I do next() and return one row,\nbut wait, I have to review the logic in this area, I can tell you\ntomorrow\n \n\n\n\nA good short test is to run explain analyze on the query from the psql\ncommand line. If it shows an execution time of significantly less than\nwhat you get from you application, then it is likely that the real\nproblem is that your application is receiving the whole result set via\nlibpq and waiting for that. A cursor will solve that problem.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n \n\nYes, the difference between psql command line and application is 6\nseconds to 40 seconds. It is\nexactly the step resultSet = excecuteQuery() that needs 40 seconds. 
I\nuse next() as a cursor\nthrough the resultSet, but I fear this is not enough, do I have to use\ncreateStatement(resultSetType, \nresultSetConcurrency) respectively prepareStatement (resultSetType,\nresultSetConcurrency) to\nachieve the cursor behaviour?\n\nregards\ntom", "msg_date": "Wed, 10 May 2006 22:54:00 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "On Wed, 2006-05-10 at 15:54, Thomas Vatter wrote:\n\n> > \n> Yes, the difference between psql command line and application is 6\n> seconds to 40 seconds. It is\n> exactly the step resultSet = excecuteQuery() that needs 40 seconds. I\n> use next() as a cursor\n> through the resultSet, but I fear this is not enough, do I have to use\n> createStatement(resultSetType, \n> resultSetConcurrency) respectively prepareStatement (resultSetType,\n> resultSetConcurrency) to\n> achieve the cursor behaviour?\n\nNot sure. I don't use a lot of prepared statements. I tend to build\nqueries and throw the at the database. In that instance, it's done\nlike:\n\ncreate cursor cursorname as select (rest of query here);\nfetch from cursorname;\n\nYou can find more on cursors here:\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-declare.html\n\nNot sure if you can use them with prepared statements, or if prepared\nstatements have their own kind of implementation.\n", "msg_date": "Wed, 10 May 2006 16:01:22 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Are you using the Postgres JDBC driver? Or are you using an ODBC JDBC\ndriver? The Postgres specific driver is usually faster.\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Thomas\nVatter\nSent: Wednesday, May 10, 2006 3:54 PM\nTo: Scott Marlowe\nCc: Tino Wildenhain; [email protected]\nSubject: Re: [PERFORM] in memory views\n\n\nScott Marlowe wrote: \n\nOn Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n\n \n\nScott Marlowe wrote: \n\n \n\n\n\n \n\nWhat happens if you do this by declaring it as a cursor and then\n\nfetching the first row?\n\n \n\n\n\n \n\n \n\n \n\nI do executeQuery(), for the resultSet I do next() and return one row,\n\nbut wait, I have to review the logic in this area, I can tell you\n\ntomorrow\n\n \n\n\n\n\n\nA good short test is to run explain analyze on the query from the psql\n\ncommand line. If it shows an execution time of significantly less than\n\nwhat you get from you application, then it is likely that the real\n\nproblem is that your application is receiving the whole result set via\n\nlibpq and waiting for that. A cursor will solve that problem.\n\n\n\n---------------------------(end of broadcast)---------------------------\n\nTIP 4: Have you searched our list archives?\n\n\n\n http://archives.postgresql.org\n\n\n\n\n\n \n\nYes, the difference between psql command line and application is 6\nseconds to 40 seconds. It is\nexactly the step resultSet = excecuteQuery() that needs 40 seconds. I\nuse next() as a cursor\nthrough the resultSet, but I fear this is not enough, do I have to use\ncreateStatement(resultSetType, \nresultSetConcurrency) respectively prepareStatement (resultSetType,\nresultSetConcurrency) to\nachieve the cursor behaviour?\n\nregards\ntom\n\n\n\n\n\n\nMessage\n\n\nAre \nyou using the Postgres JDBC driver?  Or are you using an ODBC JDBC \ndriver?  
The Postgres specific driver is usually \nfaster.\n \n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Thomas \n VatterSent: Wednesday, May 10, 2006 3:54 PMTo: Scott \n MarloweCc: Tino Wildenhain; \n [email protected]: Re: [PERFORM] in memory \n viewsScott Marlowe wrote: \n On Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n \nScott Marlowe wrote: \n \n \n\nWhat happens if you do this by declaring it as a cursor and then\nfetching the first row?\n \n \n\n \n I do executeQuery(), for the resultSet I do next() and return one row,\nbut wait, I have to review the logic in this area, I can tell you\ntomorrow\n \n\nA good short test is to run explain analyze on the query from the psql\ncommand line. If it shows an execution time of significantly less than\nwhat you get from you application, then it is likely that the real\nproblem is that your application is receiving the whole result set via\nlibpq and waiting for that. A cursor will solve that problem.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n Yes, the difference between psql command line and \n application is 6 seconds to 40 seconds. It isexactly the step resultSet = \n excecuteQuery() that needs 40 seconds. I use next() as a cursorthrough the \n resultSet, but I fear this is not enough, do I have to use \n createStatement(resultSetType, resultSetConcurrency) respectively \n prepareStatement (resultSetType, resultSetConcurrency) toachieve the \n cursor behaviour?regardstom", "msg_date": "Wed, 10 May 2006 16:08:49 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Dave Dutcher wrote:\n\n> Are you using the Postgres JDBC driver? Or are you using an ODBC JDBC \n> driver? The Postgres specific driver is usually faster.\n\n\nI'm using the postgres driver\n\nregards\ntom\n\n\n\n\n> \n> \n>\n> -----Original Message-----\n> *From:* [email protected]\n> [mailto:[email protected]] *On Behalf Of\n> *Thomas Vatter\n> *Sent:* Wednesday, May 10, 2006 3:54 PM\n> *To:* Scott Marlowe\n> *Cc:* Tino Wildenhain; [email protected]\n> *Subject:* Re: [PERFORM] in memory views\n>\n> Scott Marlowe wrote:\n>\n>>On Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n>> \n>>\n>>>Scott Marlowe wrote: \n>>> \n>>>\n>>\n>> \n>>\n>>>>What happens if you do this by declaring it as a cursor and then\n>>>>fetching the first row?\n>>>> \n>>>>\n>>\n>> \n>>\n>>>> \n>>>> \n>>>>\n>>>I do executeQuery(), for the resultSet I do next() and return one row,\n>>>but wait, I have to review the logic in this area, I can tell you\n>>>tomorrow\n>>> \n>>>\n>>\n>>\n>>A good short test is to run explain analyze on the query from the psql\n>>command line. If it shows an execution time of significantly less than\n>>what you get from you application, then it is likely that the real\n>>problem is that your application is receiving the whole result set via\n>>libpq and waiting for that. A cursor will solve that problem.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>>\n>> \n>>\n> Yes, the difference between psql command line and application is 6\n> seconds to 40 seconds. It is\n> exactly the step resultSet = excecuteQuery() that needs 40\n> seconds. 
I use next() as a cursor\n> through the resultSet, but I fear this is not enough, do I have to\n> use createStatement(resultSetType,\n> resultSetConcurrency) respectively prepareStatement\n> (resultSetType, resultSetConcurrency) to\n> achieve the cursor behaviour?\n>\n> regards\n> tom\n>\n\n\n-- \nMit freundlichen Gr��en / Regards\nVatter\n \nNetwork Inventory Software\nSun Microsystems Principal Partner\n\nwww.network-inventory.de\nTel. 030-79782510\nE-Mail [email protected]\n\n\n\n\n\n\n\nDave Dutcher wrote:\n\n\nMessage\n\nAre you using the Postgres JDBC driver?  Or\nare you using an ODBC JDBC driver?  The Postgres specific driver is\nusually faster.\n\n\nI'm using the postgres driver\n\nregards\ntom\n\n\n\n\n\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Thomas\nVatter\nSent: Wednesday, May 10, 2006 3:54 PM\nTo: Scott Marlowe\nCc: Tino Wildenhain; [email protected]\nSubject: Re: [PERFORM] in memory views\n\n\nScott Marlowe wrote:\n \nOn Wed, 2006-05-10 at 10:41, Thomas Vatter wrote:\n \n\nScott Marlowe wrote: \n \n\n\n \n\n\nWhat happens if you do this by declaring it as a cursor and then\nfetching the first row?\n \n\n\n\n \n\n\n \n \n\nI do executeQuery(), for the resultSet I do next() and return one row,\nbut wait, I have to review the logic in this area, I can tell you\ntomorrow\n \n\n\n\nA good short test is to run explain analyze on the query from the psql\ncommand line. If it shows an execution time of significantly less than\nwhat you get from you application, then it is likely that the real\nproblem is that your application is receiving the whole result set via\nlibpq and waiting for that. A cursor will solve that problem.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n \n\nYes, the difference between psql command line and application is 6\nseconds to 40 seconds. It is\nexactly the step resultSet = excecuteQuery() that needs 40 seconds. I\nuse next() as a cursor\nthrough the resultSet, but I fear this is not enough, do I have to use\ncreateStatement(resultSetType, \nresultSetConcurrency) respectively prepareStatement (resultSetType,\nresultSetConcurrency) to\nachieve the cursor behaviour?\n\nregards\ntom\n\n\n\n\n\n-- \nMit freundlichen Grüßen / Regards\nVatter\n \nNetwork Inventory Software\nSun Microsystems Principal Partner\n\nwww.network-inventory.de\nTel. 030-79782510\nE-Mail [email protected]", "msg_date": "Wed, 10 May 2006 23:18:58 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "Scott Marlowe wrote:\n\n>On Wed, 2006-05-10 at 15:54, Thomas Vatter wrote:\n>\n> \n>\n>>> \n>>> \n>>>\n>>Yes, the difference between psql command line and application is 6\n>>seconds to 40 seconds. It is\n>>exactly the step resultSet = excecuteQuery() that needs 40 seconds. I\n>>use next() as a cursor\n>>through the resultSet, but I fear this is not enough, do I have to use\n>>createStatement(resultSetType, \n>>resultSetConcurrency) respectively prepareStatement (resultSetType,\n>>resultSetConcurrency) to\n>>achieve the cursor behaviour?\n>> \n>>\n>\n>Not sure. I don't use a lot of prepared statements. I tend to build\n>queries and throw the at the database. 
In that instance, it's done\n>like:\n>\n>create cursor cursorname as select (rest of query here);\n>fetch from cursorname;\n>\n>You can find more on cursors here:\n>\n>http://www.postgresql.org/docs/8.1/interactive/sql-declare.html\n>\n>Not sure if you can use them with prepared statements, or if prepared\n>statements have their own kind of implementation.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n>\n>\n> \n>\n\nYes, I have used embedded sql and create cursor, fetch before I started \nwith jdbc, seems that\nI have to find out if new jdbc has a better way than simply resultSet = \nstatement.executeQuery().\n\nregards\ntom\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Wed, 2006-05-10 at 15:54, Thomas Vatter wrote:\n\n \n\n\n \n \n\nYes, the difference between psql command line and application is 6\nseconds to 40 seconds. It is\nexactly the step resultSet = excecuteQuery() that needs 40 seconds. I\nuse next() as a cursor\nthrough the resultSet, but I fear this is not enough, do I have to use\ncreateStatement(resultSetType, \nresultSetConcurrency) respectively prepareStatement (resultSetType,\nresultSetConcurrency) to\nachieve the cursor behaviour?\n \n\n\nNot sure. I don't use a lot of prepared statements. I tend to build\nqueries and throw the at the database. In that instance, it's done\nlike:\n\ncreate cursor cursorname as select (rest of query here);\nfetch from cursorname;\n\nYou can find more on cursors here:\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-declare.html\n\nNot sure if you can use them with prepared statements, or if prepared\nstatements have their own kind of implementation.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n \n\n\nYes, I have used embedded sql and create cursor, fetch before I started\nwith jdbc, seems that\nI have to find out if new jdbc has a better way than simply resultSet =\nstatement.executeQuery().\n\nregards\ntom", "msg_date": "Wed, 10 May 2006 23:24:43 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" }, { "msg_contents": "\n\nOn Wed, 10 May 2006, Thomas Vatter wrote:\n\n> Yes, the difference between psql command line and application is 6 \n> seconds to 40 seconds. It is exactly the step resultSet = \n> excecuteQuery() that needs 40 seconds. I use next() as a cursor through \n> the resultSet, but I fear this is not enough, do I have to use \n> createStatement(resultSetType, resultSetConcurrency) respectively \n> prepareStatement (resultSetType, resultSetConcurrency) to achieve the \n> cursor behaviour?\n\nhttp://jdbc.postgresql.org/documentation/81/query.html#query-with-cursor\n\nKris Jurka\n\n", "msg_date": "Wed, 10 May 2006 17:16:23 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in memory views" }, { "msg_contents": "Kris Jurka wrote:\n\n>\n>\n> On Wed, 10 May 2006, Thomas Vatter wrote:\n>\n>> Yes, the difference between psql command line and application is 6 \n>> seconds to 40 seconds. It is exactly the step resultSet = \n>> excecuteQuery() that needs 40 seconds. 
I use next() as a cursor \n>> through the resultSet, but I fear this is not enough, do I have to \n>> use createStatement(resultSetType, resultSetConcurrency) respectively \n>> prepareStatement (resultSetType, resultSetConcurrency) to achieve the \n>> cursor behaviour?\n>\n>\n> http://jdbc.postgresql.org/documentation/81/query.html#query-with-cursor\n>\n> Kris Jurka\n\n\nI was just returning to my mailbox to report success, I was just a bit \nfaster than your e-mail, I have found the fetchSize function, it \nreduces the delay to 6 seconds. thanks a lot to all who helped, this was \nreally great support, I am glad that the problem is solved\n\ntom\n\n\n", "msg_date": "Thu, 11 May 2006 01:08:00 +0200", "msg_from": "Thomas Vatter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" } ]
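The fix this thread lands on is worth spelling out: the 40-second executeQuery() was dominated by the driver pulling the whole result set across before returning, and giving the statement a fetch size (with autocommit off, per the driver documentation Kris Jurka links above) makes it page through a server-side cursor instead. The same mechanism can be exercised directly in SQL, which is essentially what Scott Marlowe sketched earlier; the cursor name and query below are placeholders, not objects from the thread:

    BEGIN;
    DECLARE big_result CURSOR FOR
        SELECT * FROM some_large_join;   -- stands in for the slow 12-table join
    FETCH 50 FROM big_result;            -- first batch comes back immediately
    FETCH 50 FROM big_result;            -- later batches are pulled on demand
    CLOSE big_result;
    COMMIT;

Each FETCH only ships its own batch, which is why the first rows can appear in milliseconds even when assembling and transferring the complete result takes many seconds without a fetch size.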
[ { "msg_contents": "Hi,\n\nthere was a similar discussion with a ramdisk:\nhttp://archives.postgresql.org/pgsql-hackers/2005-11/msg01058.php\n\nYou need to populate the data on serverstart, of course.\n\nBut as Timo mentionend, it's maybe not worth the trouble.\n\nMaybe their is a way to speed up the queriy itself.\n\nTo analyze this, you should post the query- and table-definition \nand the output of explain analyze of the offending query.\n\nBest regards\n\nHakan Kocaman\nSoftware-Development\n\ndigame.de GmbH\nRichard-Byrd-Str. 4-8\n50829 Köln\n\nTel.: +49 (0) 221 59 68 88 31\nFax: +49 (0) 221 59 68 88 98\nEmail: [email protected]\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Thomas Vatter\n> Sent: Wednesday, May 10, 2006 12:43 PM\n> To: Tino Wildenhain\n> Cc: [email protected]\n> Subject: Re: [PERFORM] in memory views\n> \n> \n> Tino Wildenhain wrote:\n> \n> > Thomas Vatter schrieb:\n> >\n> >> Tino Wildenhain wrote:\n> >>\n> >>> Thomas Vatter schrieb:\n> >>>\n> >>>> is there a possibility for creating views or temp tables \n> in memory \n> >>>> to avoid disk io when user makes select operations?\n> >>>\n> >>>\n> >>>\n> >>>\n> >>> No need. The data will be available in OS and database caches if\n> >>> they are really required often. If not, tune up the caches and\n> >>> do a regular \"pre select\".\n> >>>\n> >>> Regards\n> >>> Tino\n> >>>\n> >>>\n> >>\n> >> hmm, I am selecting a resultset with 1300 rows joined from \n> 12 tables. \n> >> with jdbc I am waiting 40 seconds until the first row appears. The \n> >> following rows appear really fast but the 40 seconds are a problem.\n> >\n> >\n> > Well you will need the equally 40 seconds to fill your hypothetical\n> > in memory table. (even a bit more due to the creation of a \n> > datastructure).\n> >\n> > So you can do the aproaches of semi materialized views \n> (that are in fact\n> > writing into a shadow table) or just prefetch your data at \n> time - just\n> > at the times you would refill your memory tables if they existed.\n> > A cronjob with select/fetch should do.\n> >\n> > Regards\n> > Tino\n> >\n> >\n> \n> If the in memory table is created a bootup time of the dbms it is \n> already present when user selects the data. Of course the \n> challenge is \n> to keep the in memory table up to date if data are changed. \n> What do you \n> mean with semi materialized views, I have tried select * from \n> this_view \n> with the same result. Also, if I repeat the query it does not \n> run faster.\n> \n> regards\n> tom\n> \n> -- \n> Mit freundlichen Grüßen / Regards\n> Vatter\n> \n> Network Inventory Software\n> Sun Microsystems Principal Partner\n> \n> www.network-inventory.de\n> Tel. 030-79782510\n> E-Mail [email protected]\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Wed, 10 May 2006 13:12:32 +0200", "msg_from": "\"Hakan Kocaman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in memory views" } ]
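Since the thread never quite spells out the "semi materialized view" idea Tino refers to, here is a minimal sketch of it: keep an ordinary shadow table that a scheduled job refills from the expensive query, and point the application's selects at that. The names are invented for illustration; expensive_join stands for whatever view or query produces the slow 1300-row result.

    -- one-off setup
    CREATE TABLE result_cache AS SELECT * FROM expensive_join;

    -- periodic refresh, e.g. from cron; TRUNCATE takes an exclusive lock
    -- but can be rolled back, so a failed refresh does not leave it empty
    BEGIN;
    TRUNCATE result_cache;
    INSERT INTO result_cache SELECT * FROM expensive_join;
    COMMIT;

    -- the application then reads the small, pre-joined table
    SELECT * FROM result_cache;

As Hakan says, though, this is only worth the trouble if the query itself cannot be made fast enough, hence the request for the table definitions and the EXPLAIN ANALYZE output.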
[ { "msg_contents": "Hello,\n\nI just discovered the explain command and well ... have some (for you\nof course very stupid) questions.\n\nI do a quite large (for my taste) join, the query looks like the following:\nSELECT DISTINCT customer.email AS cemail, customer.key AS ckey,\ncustomer.anrede AS canrede, customer.strasse AS cstrasse, customer.plz\nAS cplz, customer.ort AS cort, customer.vorname AS cvorname,\ncustomer.nachname AS cnachname , custtype.name AS tname, customer.land\nAS cland, customer.datanotvalid AS cdatanvalid FROM customer LEFT JOIN\nsells ON customer.key=sells.custid LEFT JOIN goods ON\nsells.goodsid=goods.key LEFT JOIN custtype ON\ncustomer.custgroup=custtype.key LEFT JOIN prodtype ON\nprodtype.key=goods.prodgroup WHERE customer.nachname LIKE '%name%';\n\nAll primary keys are indixed, and this is what explain tells me:\n Unique (cost=15.67..16.69 rows=34 width=115)\n -> Sort (cost=15.67..15.75 rows=34 width=115)\n Sort Key: customer.email, customer.\"key\", customer.anrede, customer.str\nasse, customer.plz, customer.ort, customer.vorname, customer.nachname, custtype.\nname, customer.land, customer.datanotvalid\n -> Hash Left Join (cost=6.16..14.80 rows=34 width=115)\n Hash Cond: (\"outer\".prodgroup = \"inner\".\"key\")\n -> Hash Left Join (cost=4.97..13.10 rows=34 width=119)\n Hash Cond: (\"outer\".custgroup = \"inner\".\"key\")\n -> Hash Left Join (cost=3.88..11.49 rows=34 width=111)\n Hash Cond: (\"outer\".goodsid = \"inner\".\"key\")\n -> Hash Left Join (cost=1.98..9.08\nrows=34 width=11 1)\n Hash Cond: (\"outer\".\"key\" = \"inner\".custid)\n -> Seq Scan on customer \n(cost=0.00..6.10 rows =34 width=107)\n Filter: ((nachname)::text ~~\n'%au%'::text )\n -> Hash (cost=1.78..1.78 rows=78 width=8)\n -> Seq Scan on sells \n(cost=0.00..1.78 r ows=78 width=8)\n -> Hash (cost=1.72..1.72 rows=72 width=8)\n -> Seq Scan on goods \n(cost=0.00..1.72 rows=72 width=8)\n -> Hash (cost=1.08..1.08 rows=8 width=16)\n -> Seq Scan on custtype (cost=0.00..1.08\nrows=8 wid th=16)\n -> Hash (cost=1.15..1.15 rows=15 width=4)\n -> Seq Scan on prodtype (cost=0.00..1.15 rows=15 width=4)\n\n\nWhat does the hash-lines mean, does that mean my query does not use\nthe indices at all?\nWhy are some table-names and some column-names surrounded by ' \" '?\nAre they threated as text-columns?\nI have to admit that the tables are just filled with test-data so the\nanalyzer may take just a very simple way since almost no data is in...\n\nlg Clemens\n", "msg_date": "Wed, 10 May 2006 13:49:41 +0200", "msg_from": "\"Clemens Eisserer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about explain-command..." }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Clemens Eisserer\n> Sent: Wednesday, May 10, 2006 6:50 AM\n> To: [email protected]\n> Subject: [PERFORM] Question about explain-command...\n \n> \n> What does the hash-lines mean, does that mean my query does not use\n> the indices at all?\n> Why are some table-names and some column-names surrounded by ' \" '?\n> Are they threated as text-columns?\n> I have to admit that the tables are just filled with test-data so the\n> analyzer may take just a very simple way since almost no data is in...\n> \n\nFor small tables, it is faster to do a sequential scan than an index\nscan. You probably don't have enough test data to make the planner\nchoose an index scan.\n\nI don't think the quotes really mean anything. 
They are just used as\ndelimiters.\n\nThe hash lines mean your tables are being joined by hash joins. You\nshould read this page for more info:\n\nhttp://www.postgresql.org/docs/8.1/interactive/performance-tips.html\n\n\n", "msg_date": "Wed, 10 May 2006 09:47:07 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about explain-command..." }, { "msg_contents": "I will try answering your questions. Please note that I am a newbie myself.\n\nClemens Eisserer wrote\n\n> All primary keys are indixed, and this is what explain tells me:\n> Unique (cost=15.67..16.69 rows=34 width=115)\n> -> Sort (cost=15.67..15.75 rows=34 width=115)\n> Sort Key: customer.email, customer.\"key\", customer.anrede, customer.str\n> asse, customer.plz, customer.ort, customer.vorname, customer.nachname, custtype.\n> name, customer.land, customer.datanotvalid\n> -> Hash Left Join (cost=6.16..14.80 rows=34 width=115)\n> Hash Cond: (\"outer\".prodgroup = \"inner\".\"key\")\n> -> Hash Left Join (cost=4.97..13.10 rows=34 width=119)\n> Hash Cond: (\"outer\".custgroup = \"inner\".\"key\")\n> -> Hash Left Join (cost=3.88..11.49 rows=34 width=111)\n> Hash Cond: (\"outer\".goodsid = \"inner\".\"key\")\n> -> Hash Left Join (cost=1.98..9.08\n> rows=34 width=11 1)\n> Hash Cond: (\"outer\".\"key\" = \"inner\".custid)\n> -> Seq Scan on customer (cost=0.00..6.10 rows =34 width=107)\n> Filter: ((nachname)::text ~~\n> '%au%'::text )\n> -> Hash (cost=1.78..1.78 rows=78 width=8)\n> -> Seq Scan on sells (cost=0.00..1.78 r ows=78 width=8)\n> -> Hash (cost=1.72..1.72 rows=72 width=8)\n> -> Seq Scan on goods (cost=0.00..1.72 rows=72 width=8)\n> -> Hash (cost=1.08..1.08 rows=8 width=16)\n> -> Seq Scan on custtype (cost=0.00..1.08\n> rows=8 wid th=16)\n> -> Hash (cost=1.15..1.15 rows=15 width=4)\n> -> Seq Scan on prodtype (cost=0.00..1.15 rows=15 width=4)\n\n\n> What does the hash-lines mean, does that mean my query does not use\n> the indices at all?\n\nYes. Probably each table fits nicely into a single disk read, so reading\nboth the index AND the table is going to be twice as expensive.\n\n> Why are some table-names and some column-names surrounded by ' \" '?\n> Are they threated as text-columns?\n\nThey are either names generated by postgres (\"outer\" and \"inner\") or\nfield names which are also reserved words in SQL (\"key\"). You can always\nuse double quotes around a field name - you have to in some cases if\nthey are reserved words, and always if they contain \"special characters\"\n(not sure from memory exactly which these are - at least spaces). I\nrecommend not to use either of these, even if a reserved word is the\nbest description of your field.\n\nPostgres seems to be a bit better than some other dbms's in allowing\nunquoted reserved words as field names if there is no ambiguity. Thsis\nmay mean that you get a problem if your application is ever ported to a\ndifferent dbms.\n\n> I have to admit that the tables are just filled with test-data so the\n> analyzer may take just a very simple way since almost no data is in...\n\nTry loading your tables with a realistic number of customers, and you\nshould see a change in the query plan to use your precious indexes.\n\n/Nis\n\n", "msg_date": "Wed, 10 May 2006 17:02:27 +0200", "msg_from": "Nis Jorgensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about explain-command..." 
}, { "msg_contents": "On Wed, May 10, 2006 at 09:47:07AM -0500, Dave Dutcher wrote:\n> The hash lines mean your tables are being joined by hash joins. You\n> should read this page for more info:\n> \n> http://www.postgresql.org/docs/8.1/interactive/performance-tips.html\n\n<tooting-own-horn>You might also want to read\nhttp://www.pervasivepostgres.com/instantkb13/article.aspx?id=10120&query=explain\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:41:12 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about explain-command..." } ]
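A couple of concrete invocations may help when re-reading the plans above; customer, nachname and "key" are taken from Clemens' schema, the rest is illustrative:

    -- EXPLAIN prints the estimated plan; EXPLAIN ANALYZE also runs the
    -- query and adds actual times and row counts beside the estimates
    EXPLAIN ANALYZE
    SELECT * FROM customer WHERE nachname LIKE '%name%';

    -- "key" is a SQL keyword, which is why the planner prints it
    -- double-quoted; quoting it explicitly always works
    SELECT "key", nachname FROM customer ORDER BY "key";

On a nearly empty test database the planner will keep choosing sequential scans over the indexes, as Dave and Nis point out; rerunning EXPLAIN ANALYZE after loading a realistic number of rows (and running ANALYZE) is the way to see the plans that actually matter.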
[ { "msg_contents": "I'm trying to determine why an identical query is running \napproximately 500 to 1000 times slower on our production database \ncompared to our backup database server.\n\nBoth database servers are dual 2.3 GHz G5 Xserves running PostgreSQL \n8.1.3; both are configured with 8GB of RAM with identical shared \nmemory settings; both postgresql.conf files are identical; both \ndatabases have identical indexes defined.\n\nThe three relevant tables are all clustered the same, although I'm \nnot sure when clustering was last performed on either server. All \nthree tables have recently been analyzed on both servers.\n\nThe different explain plans for this query seem to be consistent on \nboth servers regardless of category and the production server is \nconsistently and drastically slower than the backup server.\n\nIf anyone has any ideas on how to have the production server generate \nthe same explain plan as the backup server, or can suggest anything I \nmight want to try, I would greatly appreciate it.\n\nBrian Wipf\nClickSpace Interactive Inc.\n<[email protected]>\n\nHere's the query:\n\nSELECT\tac.attribute_id\nFROM\tattribute_category ac\nWHERE\tis_browsable = 'true' AND\n\tcategory_id = 1000962 AND\n\tEXISTS \t(\tSELECT \t'X'\n\t\t\tFROM \tproduct_attribute_value pav,\n\t\t\t\tcategory_product cp\n\t\t\tWHERE \tpav.attribute_id = ac.attribute_id AND\n\t\t\t\tpav.status_code is null AND\n\t\t\t\tpav.product_id = cp.product_id AND\n\t\t\t\tcp.category_id = ac.category_id AND\n\t\t\t\tcp.product_is_active = 'true' AND\n\t\t\t\tcp.product_status_code = 'complete'\n\t)\n\nExplain plans:\n\nFast (backup server):\n Index Scan using attribute_category__category_id_fk_idx on \nattribute_category ac (cost=0.00..47943.34 rows=7 width=4) (actual \ntime=0.110..0.263 rows=5 loops=1)\n Index Cond: (category_id = 1000962)\n Filter: (((is_browsable)::text = 'true'::text) AND (subplan))\n SubPlan\n -> Nested Loop (cost=0.00..7983.94 rows=3 width=0) (actual \ntime=0.043..0.043 rows=1 loops=5)\n -> Index Scan using \ncategory_product__category_id_is_active_and_status_idx on \ncategory_product cp (cost=0.00..4362.64 rows=1103 width=4) (actual \ntime=0.013..0.015 rows=2 loops=5)\n Index Cond: ((category_id = $1) AND \n((product_is_active)::text = 'true'::text) AND \n((product_status_code)::text = 'complete'::text))\n -> Index Scan using \nproduct_attribute_value__product_id_fk_idx on product_attribute_value \npav (cost=0.00..3.27 rows=1 width=4) (actual time=0.016..0.016 \nrows=1 loops=8)\n Index Cond: (pav.product_id = \"outer\".product_id)\n Filter: ((attribute_id = $0) AND (status_code IS \nNULL))\nTotal runtime: 0.449 ms\n(11 rows)\n\nSlow (production server):\n Index Scan using attribute_category__category_id_fk_idx on \nattribute_category ac (cost=0.00..107115.90 rows=7 width=4) (actual \ntime=1.472..464.437 rows=5 loops=1)\n Index Cond: (category_id = 1000962)\n Filter: (((is_browsable)::text = 'true'::text) AND (subplan))\n SubPlan\n -> Nested Loop (cost=18.33..23739.70 rows=4 width=0) (actual \ntime=92.870..92.870 rows=1 loops=5)\n -> Bitmap Heap Scan on product_attribute_value pav \n(cost=18.33..8764.71 rows=2549 width=4) (actual time=10.191..45.672 \nrows=5869 loops=5)\n Recheck Cond: (attribute_id = $0)\n Filter: (status_code IS NULL)\n -> Bitmap Index Scan on \nproduct_attribute_value__attribute_id_fk_idx (cost=0.00..18.33 \nrows=2952 width=0) (actual time=9.160..9.160 rows=33330 loops=5)\n Index Cond: (attribute_id = $0)\n -> Index Scan using x_category_product_pk on 
\ncategory_product cp (cost=0.00..5.86 rows=1 width=4) (actual \ntime=0.007..0.007 rows=0 loops=29345)\n Index Cond: ((cp.category_id = $1) AND \n(\"outer\".product_id = cp.product_id))\n Filter: (((product_is_active)::text = 'true'::text) \nAND ((product_status_code)::text = 'complete'::text))\nTotal runtime: 464.667 ms\n(14 rows)\n\nTable Descriptions:\n\n\\d attribute_category;\n Table \"public.attribute_category\"\n Column | Type | Modifiers\n-----------------+----------------------+-----------\nattribute_id | integer | not null\ncategory_id | integer | not null\nis_browsable | character varying(5) |\nis_required | character varying(5) |\nsort_order | integer |\ndefault_unit_id | integer |\nIndexes:\n \"attribute_category_pk\" PRIMARY KEY, btree (attribute_id, \ncategory_id)\n \"attribute_category__attribute_id_fk_idx\" btree (attribute_id)\n \"attribute_category__category_id_fk_idx\" btree (category_id) \nCLUSTER\nForeign-key constraints:\n \"attribute_category_attribute_fk\" FOREIGN KEY (attribute_id) \nREFERENCES attribute(attribute_id) DEFERRABLE INITIALLY DEFERRED\n \"attribute_category_category_fk\" FOREIGN KEY (category_id) \nREFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n\n\\d product_attribute_value;\n Table \"public.product_attribute_value\"\n Column | Type | Modifiers\n----------------------------+-----------------------+-----------\nattribute_id | integer | not null\nattribute_unit_id | integer |\nattribute_value_id | integer |\nboolean_value | character varying(5) |\ndecimal_value | numeric(30,10) |\nproduct_attribute_value_id | integer | not null\nproduct_id | integer | not null\nproduct_reference_id | integer |\nstatus_code | character varying(32) |\nIndexes:\n \"product_attribute_value_pk\" PRIMARY KEY, btree \n(product_attribute_value_id)\n \"product_attribute_value__attribute_id_fk_idx\" btree (attribute_id)\n \"product_attribute_value__attribute_unit_id_fk_idx\" btree \n(attribute_unit_id)\n \"product_attribute_value__attribute_value_id_fk_idx\" btree \n(attribute_value_id)\n \"product_attribute_value__decimal_value_idx\" btree (decimal_value)\n \"product_attribute_value__product_id_fk_idx\" btree (product_id) \nCLUSTER\n \"product_attribute_value__product_reference_id_fk_idx\" btree \n(product_reference_id)\nForeign-key constraints:\n \"product_attribute_value_attribute_fk\" FOREIGN KEY \n(attribute_id) REFERENCES attribute(attribute_id) DEFERRABLE \nINITIALLY DEFERRED\n \"product_attribute_value_attributeunit_fk\" FOREIGN KEY \n(attribute_unit_id) REFERENCES attribute_unit(attribute_unit_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_attributevalue_fk\" FOREIGN KEY \n(attribute_value_id) REFERENCES attribute_value(attribute_value_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_productreference_fk\" FOREIGN KEY \n(product_reference_id) REFERENCES product(product_id) DEFERRABLE \nINITIALLY DEFERRED\n\n\\d category_product;\n Table \"public.category_product\"\n Column | Type | Modifiers\n---------------------+------------------------+-----------\ncategory_id | integer | not null\nproduct_id | integer | not null\nen_name_sort_order | integer |\nfr_name_sort_order | integer |\nmerchant_sort_order | integer |\nprice_sort_order | integer |\nmerchant_count | integer |\nis_active | character varying(5) |\nproduct_is_active | character varying(5) |\nproduct_status_code | character varying(32) 
|\nproduct_name_en | character varying(512) |\nproduct_name_fr | character varying(512) |\nproduct_click_count | integer |\nIndexes:\n \"x_category_product_pk\" PRIMARY KEY, btree (category_id, \nproduct_id)\n \"category_product__category_id_is_active_and_status_idx\" btree \n(category_id, product_is_active, product_status_code)\n \"category_product__is_active_idx\" btree (is_active)\n \"category_product__merchant_sort_order_idx\" btree \n(merchant_sort_order)\n \"x_category_product__category_id_fk_idx\" btree (category_id) \nCLUSTER\n \"x_category_product__product_id_fk_idx\" btree (product_id)\nForeign-key constraints:\n \"x_category_product_category_fk\" FOREIGN KEY (category_id) \nREFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n \"x_category_product_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n\n", "msg_date": "Wed, 10 May 2006 16:39:14 -0600", "msg_from": "Brian Wipf <[email protected]>", "msg_from_op": true, "msg_subject": "Same query - Slow in production" }, { "msg_contents": "I added to the exists query qualifier: AND cp.category_id = 1000962 \n(in addition to the cp.category_id = ac.category_id)\n\nNow I am getting a much better query plan on our production server:\n\nIndex Scan using attribute_category__category_id_fk_idx on \nattribute_category ac (cost=0.00..485.71 rows=7 width=4) (actual \ntime=0.104..0.351 rows=5 loops=1)\n Index Cond: (category_id = 1000962)\n Filter: (((is_browsable)::text = 'true'::text) AND (subplan))\n SubPlan\n -> Nested Loop (cost=0.00..24.77 rows=1 width=0) (actual \ntime=0.058..0.058 rows=1 loops=5)\n -> Index Scan using \nx_category_product__category_id_fk_idx on category_product cp \n(cost=0.00..6.01 rows=1 width=4) (actual time=0.014..0.014 rows=1 \nloops=5)\n Index Cond: ((category_id = $1) AND (category_id = \n1000962))\n Filter: (((product_is_active)::text = 'true'::text) \nAND ((product_status_code)::text = 'complete'::text))\n -> Index Scan using \nproduct_attribute_value__product_id_fk_idx on product_attribute_value \npav (cost=0.00..18.75 rows=1 width=4) (actual time=0.041..0.041 \nrows=1 loops=5)\n Index Cond: (pav.product_id = \"outer\".product_id)\n Filter: ((attribute_id = $0) AND (status_code IS \nNULL))\nTotal runtime: 0.558 ms\n(12 rows)\n\nIt is using the x_category_product__category_id_fk_idx on \ncategory_product instead of the \ncategory_product__category_id_is_active_and_status_idx index as on \nour backup server. Still not sure what's causing the differences in \nquery execution between the servers, but at least the query is fast \nagain.\n\nBrian\n\nOn 10-May-06, at 4:39 PM, Brian Wipf wrote:\n\n> I'm trying to determine why an identical query is running \n> approximately 500 to 1000 times slower on our production database \n> compared to our backup database server.\n>\n> Both database servers are dual 2.3 GHz G5 Xserves running \n> PostgreSQL 8.1.3; both are configured with 8GB of RAM with \n> identical shared memory settings; both postgresql.conf files are \n> identical; both databases have identical indexes defined.\n>\n> The three relevant tables are all clustered the same, although I'm \n> not sure when clustering was last performed on either server. 
All \n> three tables have recently been analyzed on both servers.\n>\n> The different explain plans for this query seem to be consistent on \n> both servers regardless of category and the production server is \n> consistently and drastically slower than the backup server.\n>\n> If anyone has any ideas on how to have the production server \n> generate the same explain plan as the backup server, or can suggest \n> anything I might want to try, I would greatly appreciate it.\n>\n> Brian Wipf\n> ClickSpace Interactive Inc.\n> <[email protected]>\n>\n> Here's the query:\n>\n> SELECT\tac.attribute_id\n> FROM\tattribute_category ac\n> WHERE\tis_browsable = 'true' AND\n> \tcategory_id = 1000962 AND\n> \tEXISTS \t(\tSELECT \t'X'\n> \t\t\tFROM \tproduct_attribute_value pav,\n> \t\t\t\tcategory_product cp\n> \t\t\tWHERE \tpav.attribute_id = ac.attribute_id AND\n> \t\t\t\tpav.status_code is null AND\n> \t\t\t\tpav.product_id = cp.product_id AND\n> \t\t\t\tcp.category_id = ac.category_id AND\n> \t\t\t\tcp.product_is_active = 'true' AND\n> \t\t\t\tcp.product_status_code = 'complete'\n> \t)\n>\n> Explain plans:\n>\n> Fast (backup server):\n> Index Scan using attribute_category__category_id_fk_idx on \n> attribute_category ac (cost=0.00..47943.34 rows=7 width=4) (actual \n> time=0.110..0.263 rows=5 loops=1)\n> Index Cond: (category_id = 1000962)\n> Filter: (((is_browsable)::text = 'true'::text) AND (subplan))\n> SubPlan\n> -> Nested Loop (cost=0.00..7983.94 rows=3 width=0) (actual \n> time=0.043..0.043 rows=1 loops=5)\n> -> Index Scan using \n> category_product__category_id_is_active_and_status_idx on \n> category_product cp (cost=0.00..4362.64 rows=1103 width=4) (actual \n> time=0.013..0.015 rows=2 loops=5)\n> Index Cond: ((category_id = $1) AND \n> ((product_is_active)::text = 'true'::text) AND \n> ((product_status_code)::text = 'complete'::text))\n> -> Index Scan using \n> product_attribute_value__product_id_fk_idx on \n> product_attribute_value pav (cost=0.00..3.27 rows=1 width=4) \n> (actual time=0.016..0.016 rows=1 loops=8)\n> Index Cond: (pav.product_id = \"outer\".product_id)\n> Filter: ((attribute_id = $0) AND (status_code IS \n> NULL))\n> Total runtime: 0.449 ms\n> (11 rows)\n>\n> Slow (production server):\n> Index Scan using attribute_category__category_id_fk_idx on \n> attribute_category ac (cost=0.00..107115.90 rows=7 width=4) \n> (actual time=1.472..464.437 rows=5 loops=1)\n> Index Cond: (category_id = 1000962)\n> Filter: (((is_browsable)::text = 'true'::text) AND (subplan))\n> SubPlan\n> -> Nested Loop (cost=18.33..23739.70 rows=4 width=0) (actual \n> time=92.870..92.870 rows=1 loops=5)\n> -> Bitmap Heap Scan on product_attribute_value pav \n> (cost=18.33..8764.71 rows=2549 width=4) (actual time=10.191..45.672 \n> rows=5869 loops=5)\n> Recheck Cond: (attribute_id = $0)\n> Filter: (status_code IS NULL)\n> -> Bitmap Index Scan on \n> product_attribute_value__attribute_id_fk_idx (cost=0.00..18.33 \n> rows=2952 width=0) (actual time=9.160..9.160 rows=33330 loops=5)\n> Index Cond: (attribute_id = $0)\n> -> Index Scan using x_category_product_pk on \n> category_product cp (cost=0.00..5.86 rows=1 width=4) (actual \n> time=0.007..0.007 rows=0 loops=29345)\n> Index Cond: ((cp.category_id = $1) AND \n> (\"outer\".product_id = cp.product_id))\n> Filter: (((product_is_active)::text = \n> 'true'::text) AND ((product_status_code)::text = 'complete'::text))\n> Total runtime: 464.667 ms\n> (14 rows)\n>\n> Table Descriptions:\n>\n> \\d attribute_category;\n> Table \"public.attribute_category\"\n> Column | 
Type | Modifiers\n> -----------------+----------------------+-----------\n> attribute_id | integer | not null\n> category_id | integer | not null\n> is_browsable | character varying(5) |\n> is_required | character varying(5) |\n> sort_order | integer |\n> default_unit_id | integer |\n> Indexes:\n> \"attribute_category_pk\" PRIMARY KEY, btree (attribute_id, \n> category_id)\n> \"attribute_category__attribute_id_fk_idx\" btree (attribute_id)\n> \"attribute_category__category_id_fk_idx\" btree (category_id) \n> CLUSTER\n> Foreign-key constraints:\n> \"attribute_category_attribute_fk\" FOREIGN KEY (attribute_id) \n> REFERENCES attribute(attribute_id) DEFERRABLE INITIALLY DEFERRED\n> \"attribute_category_category_fk\" FOREIGN KEY (category_id) \n> REFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n>\n> \\d product_attribute_value;\n> Table \"public.product_attribute_value\"\n> Column | Type | Modifiers\n> ----------------------------+-----------------------+-----------\n> attribute_id | integer | not null\n> attribute_unit_id | integer |\n> attribute_value_id | integer |\n> boolean_value | character varying(5) |\n> decimal_value | numeric(30,10) |\n> product_attribute_value_id | integer | not null\n> product_id | integer | not null\n> product_reference_id | integer |\n> status_code | character varying(32) |\n> Indexes:\n> \"product_attribute_value_pk\" PRIMARY KEY, btree \n> (product_attribute_value_id)\n> \"product_attribute_value__attribute_id_fk_idx\" btree \n> (attribute_id)\n> \"product_attribute_value__attribute_unit_id_fk_idx\" btree \n> (attribute_unit_id)\n> \"product_attribute_value__attribute_value_id_fk_idx\" btree \n> (attribute_value_id)\n> \"product_attribute_value__decimal_value_idx\" btree (decimal_value)\n> \"product_attribute_value__product_id_fk_idx\" btree (product_id) \n> CLUSTER\n> \"product_attribute_value__product_reference_id_fk_idx\" btree \n> (product_reference_id)\n> Foreign-key constraints:\n> \"product_attribute_value_attribute_fk\" FOREIGN KEY \n> (attribute_id) REFERENCES attribute(attribute_id) DEFERRABLE \n> INITIALLY DEFERRED\n> \"product_attribute_value_attributeunit_fk\" FOREIGN KEY \n> (attribute_unit_id) REFERENCES attribute_unit(attribute_unit_id) \n> DEFERRABLE INITIALLY DEFERRED\n> \"product_attribute_value_attributevalue_fk\" FOREIGN KEY \n> (attribute_value_id) REFERENCES attribute_value(attribute_value_id) \n> DEFERRABLE INITIALLY DEFERRED\n> \"product_attribute_value_product_fk\" FOREIGN KEY (product_id) \n> REFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n> \"product_attribute_value_productreference_fk\" FOREIGN KEY \n> (product_reference_id) REFERENCES product(product_id) DEFERRABLE \n> INITIALLY DEFERRED\n>\n> \\d category_product;\n> Table \"public.category_product\"\n> Column | Type | Modifiers\n> ---------------------+------------------------+-----------\n> category_id | integer | not null\n> product_id | integer | not null\n> en_name_sort_order | integer |\n> fr_name_sort_order | integer |\n> merchant_sort_order | integer |\n> price_sort_order | integer |\n> merchant_count | integer |\n> is_active | character varying(5) |\n> product_is_active | character varying(5) |\n> product_status_code | character varying(32) |\n> product_name_en | character varying(512) |\n> product_name_fr | character varying(512) |\n> product_click_count | integer |\n> Indexes:\n> \"x_category_product_pk\" PRIMARY KEY, btree (category_id, \n> product_id)\n> \"category_product__category_id_is_active_and_status_idx\" btree \n> (category_id, 
product_is_active, product_status_code)\n> \"category_product__is_active_idx\" btree (is_active)\n> \"category_product__merchant_sort_order_idx\" btree \n> (merchant_sort_order)\n> \"x_category_product__category_id_fk_idx\" btree (category_id) \n> CLUSTER\n> \"x_category_product__product_id_fk_idx\" btree (product_id)\n> Foreign-key constraints:\n> \"x_category_product_category_fk\" FOREIGN KEY (category_id) \n> REFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n> \"x_category_product_product_fk\" FOREIGN KEY (product_id) \n> REFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n\n\n", "msg_date": "Wed, 10 May 2006 17:56:18 -0600", "msg_from": "Brian Wipf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Same query - Slow in production" }, { "msg_contents": "Brian Wipf <[email protected]> writes:\n> I'm trying to determine why an identical query is running \n> approximately 500 to 1000 times slower on our production database \n> compared to our backup database server.\n\nIt looks to me like it's pure luck that the query is fast on the backup\nserver. The outer side of the EXISTS' join is being badly misestimated:\n\n> -> Index Scan using \n> category_product__category_id_is_active_and_status_idx on \n> category_product cp (cost=0.00..4362.64 rows=1103 width=4) (actual \n> time=0.013..0.015 rows=2 loops=5)\n> Index Cond: ((category_id = $1) AND \n> ((product_is_active)::text = 'true'::text) AND \n> ((product_status_code)::text = 'complete'::text))\n\nIf there actually had been 1100 matching rows instead of 2, the query\nwould have run 550 times slower, putting it in the same ballpark as\nthe other plan. So what I'm guessing is that the planner sees these\ntwo plans as being nearly the same cost, and small differences in the\nstats between the two databases are enough to tip its choice in one\ndirection or the other.\n\nSo what you want, of course, is to improve that rowcount estimate.\nI suppose the reason it's so bad is that we don't have multicolumn\nstatistics ... is there a strong correlation between product_is_active\nand product_status_code? 
If so, it might be worth your while to find a\nway to merge them into one column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2006 21:20:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query - Slow in production " }, { "msg_contents": "\"Christian Paul Cosinas\" <cpc 'at' cybees.com> writes:\n\n> Hi!\n> \n> How can I speed up my server's performance when I use offset and limit\n> clause.\n> \n> For example I have a query:\n> SELECT * FROM table ORDER BY id, name OFFSET 100000 LIMIT 10000\n> \n> This query takes a long time about more than 2 minutes.\n> \n> If my query is:\n> SELECT * FROM table ORDER BY id, name OFFSET 50000 LIMIT 10000\n> It takes about 2 seconds.\n\nFirst you should read the appropriate documentation.\n\nhttp://www.postgresql.org/docs/8.1/interactive/performance-tips.html\n\n-- \nGuillaume Cottenceau\n", "msg_date": "11 May 2006 08:46:31 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "Christian Paul Cosinas wrote:\n> Hi!\n> \n> How can I speed up my server's performance when I use offset and limit\n> clause.\n> \n> For example I have a query:\n> SELECT * FROM table ORDER BY id, name OFFSET 100000 LIMIT 10000\n> \n> This query takes a long time about more than 2 minutes.\n> \n> If my query is:\n> SELECT * FROM table ORDER BY id, name OFFSET 50000 LIMIT 10000\n> It takes about 2 seconds.\n\nPlease create a new thread rather than replying to someone elses post \nand changing the subject. These threads can sometimes get missed.\n\nYou do have an index on id and name don't you?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 11 May 2006 16:51:36 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "\n\tWhy do you want to use it this way ?\n\tExplain what you want to do, there probably is another faster solution...\n\nOn Thu, 11 May 2006 16:45:33 +0200, Christian Paul Cosinas \n<[email protected]> wrote:\n\n> Hi!\n>\n> How can I speed up my server's performance when I use offset and limit\n> clause.\n>\n> For example I have a query:\n> SELECT * FROM table ORDER BY id, name OFFSET 100000 LIMIT 10000\n>\n> This query takes a long time about more than 2 minutes.\n>\n> If my query is:\n> SELECT * FROM table ORDER BY id, name OFFSET 50000 LIMIT 10000\n> It takes about 2 seconds.\n>\n> Thanks\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n", "msg_date": "Thu, 11 May 2006 09:05:56 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "Hi!\n\nHow can I speed up my server's performance when I use offset and limit\nclause.\n\nFor example I have a query:\nSELECT * FROM table ORDER BY id, name OFFSET 100000 LIMIT 10000\n\nThis query takes a long time about more than 2 minutes.\n\nIf my query is:\nSELECT * FROM table ORDER BY id, name OFFSET 50000 LIMIT 10000\nIt takes about 2 seconds.\n\nThanks\n\n", "msg_date": "Thu, 11 May 2006 14:45:33 -0000", "msg_from": "\"Christian Paul Cosinas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Speed Up Offset and Limit Clause" }, { "msg_contents": "Christian 
Paul Cosinas wrote:\n> I am creating an application that gets the value of a large table and write\n> it to a file.\n> \n> Why I want to use offset and limit is for me to create a threaded\n> application so that they will not get the same results.\n> \n> For example:\n> \n> Thread 1 : gets offset 0 limit 5000\n> Thread 2 : gets offset 5000 limit 5000\n> Thread 3 : gets offset 10000 limit 5000\n> \n> And so on...\n> \n> Would there be any other faster way than what It thought?\n\nIn order to return rows 10000 to 15000, it must select all rows from zero to 15000 and then discard the first 10000 -- probably not what you were hoping for.\n\nYou might add a \"thread\" column. Say you want to run ten threads:\n\n create sequence thread_seq \n increment by 1\n minvalue 1 maxvalue 10\n cycle\n start with 1;\n\n create table mytable(\n column1 integer,\n ... other columns..., \n thread integer default nextval('thread_seq')\n );\n\n create bitmap index i_mytable_thread on mytable(thread);\n\nNow whenever you insert into mytable, you get a value in mytable.thread between 1 and 10, and it's indexed with a highly efficient bitmap index. So your query becomes:\n\n Thread 1: select ... from mytable where ... and thread = 1;\n Thread 2: select ... from mytable where ... and thread = 2;\n ... and so forth.\n\nCraig\n", "msg_date": "Tue, 16 May 2006 19:20:12 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "I am creating an application that gets the value of a large table and write\nit to a file.\n\nWhy I want to use offset and limit is for me to create a threaded\napplication so that they will not get the same results.\n\nFor example:\n\nThread 1 : gets offset 0 limit 5000\nThread 2 : gets offset 5000 limit 5000\nThread 3 : gets offset 10000 limit 5000\n\nAnd so on...\n\nWould there be any other faster way than what It thought?\n\n-----Original Message-----\nFrom: PFC [mailto:[email protected]] \nSent: Thursday, May 11, 2006 7:06 AM\nTo: Christian Paul Cosinas; [email protected]\nSubject: Re: [PERFORM] Speed Up Offset and Limit Clause\n\n\n\tWhy do you want to use it this way ?\n\tExplain what you want to do, there probably is another faster\nsolution...\n\nOn Thu, 11 May 2006 16:45:33 +0200, Christian Paul Cosinas \n<[email protected]> wrote:\n\n> Hi!\n>\n> How can I speed up my server's performance when I use offset and limit\n> clause.\n>\n> For example I have a query:\n> SELECT * FROM table ORDER BY id, name OFFSET 100000 LIMIT 10000\n>\n> This query takes a long time about more than 2 minutes.\n>\n> If my query is:\n> SELECT * FROM table ORDER BY id, name OFFSET 50000 LIMIT 10000\n> It takes about 2 seconds.\n>\n> Thanks\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n", "msg_date": "Wed, 17 May 2006 09:51:05 -0000", "msg_from": "\"Christian Paul Cosinas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "On Tue, May 16, 2006 at 07:20:12PM -0700, Craig A. 
James wrote:\n> >Why I want to use offset and limit is for me to create a threaded\n> >application so that they will not get the same results.\n> \n> In order to return rows 10000 to 15000, it must select all rows from zero \n> to 15000 and then discard the first 10000 -- probably not what you were \n> hoping for.\n> \n> You might add a \"thread\" column. Say you want to run ten threads:\n\nAnother possibility is partitioning the table. If you do that using\ninheritance-based partitioning, you could just select directly from\ndifferent partition tables, which probably be even faster than using a\nsingle table. The downside is it's more work to setup.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 17 May 2006 14:04:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" }, { "msg_contents": "\n> Thread 1 : gets offset 0 limit 5000\n> Thread 2 : gets offset 5000 limit 5000\n> Thread 3 : gets offset 10000 limit 5000\n>\n> Would there be any other faster way than what It thought?\n\n\tYeah, sure, use a thread which does the whole query (maybe using a \ncursor) and fills a queue with the results, then N threads consuming from \nthat queue... it will work better.\n", "msg_date": "Sat, 27 May 2006 23:04:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed Up Offset and Limit Clause" } ]
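One more way to split the export between workers, in the same spirit as Craig's thread column and Jim's partitions but not raised in the thread, is to hand each worker a slice of an indexed key instead of an OFFSET. (As an aside, stock PostgreSQL has no CREATE BITMAP INDEX statement as written in Craig's sketch; a plain btree index on the thread column serves the same purpose.) big_table and the range boundaries below are illustrative only, assuming id is the indexed primary key from the original ORDER BY:

    -- worker 1
    SELECT * FROM big_table WHERE id >= 0      AND id < 100000 ORDER BY id;
    -- worker 2
    SELECT * FROM big_table WHERE id >= 100000 AND id < 200000 ORDER BY id;

Each worker walks only its own part of the index, so no rows are fetched and then discarded the way they are with a large OFFSET.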
[ { "msg_contents": "\n> Something else worth considering is not using the normal \n> catalog methods\n> for storing information about temp tables, but hacking that together\n> would probably be a rather large task.\n\nBut the timings suggest, that it cannot be the catalogs in the worst\ncase\nhe showed.\n\n> 0.101 ms BEGIN\n> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER\nNOT \n> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n\n1.4 seconds is not great for create table, is that what we expect ?\n\n> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id\nDESC \n> LIMIT 20\n> 0.443 ms ANALYZE tmp\n> 0.365 ms SELECT * FROM tmp\n> 0.310 ms DROP TABLE tmp\n> 32.918 ms COMMIT\n> \n> \tCREATING the table is OK, but what happens on COMMIT ? I hear\nthe disk \n> seeking frantically.\n\nThe 32 seconds for commit can hardly be catalog related. It seems the\nfile is \nfsynced before it is dropped.\n\nAndreas\n", "msg_date": "Thu, 11 May 2006 09:55:15 +0200", "msg_from": "\"Zeugswetter Andreas DCP SD\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:\n> > 0.101 ms BEGIN\n> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER\n> NOT \n> > NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n> \n> 1.4 seconds is not great for create table, is that what we expect ?\n\nHmm, I'm hoping ms means milliseconds...\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Thu, 11 May 2006 10:30:25 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:\n> \n> > Something else worth considering is not using the normal \n> > catalog methods\n> > for storing information about temp tables, but hacking that together\n> > would probably be a rather large task.\n> \n> But the timings suggest, that it cannot be the catalogs in the worst\n> case\n> he showed.\n> \n> > 0.101 ms BEGIN\n> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER\n> NOT \n> > NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n> \n> 1.4 seconds is not great for create table, is that what we expect ?\nmilliseconds... :) Given the amount of code and locking that it looks\nlike is involved in creating a table, that might not be unreasonable...\n\n> > 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id\n> DESC \n> > LIMIT 20\n> > 0.443 ms ANALYZE tmp\n> > 0.365 ms SELECT * FROM tmp\n> > 0.310 ms DROP TABLE tmp\n> > 32.918 ms COMMIT\n> > \n> > \tCREATING the table is OK, but what happens on COMMIT ? I hear\n> the disk \n> > seeking frantically.\n> \n> The 32 seconds for commit can hardly be catalog related. It seems the\n> file is \n> fsynced before it is dropped.\n\nI'd hope that wasn't what's happening... is the backend smart enough to\nknow not to fsync anything involved with the temp table? ISTM that that\ntransaction shouldn't actually be creating any WAL traffic at all.\nThough on the other hand there's no reason that DROP should be in the\ntransaction at all; maybe that's gumming things up during the commit.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 16:01:41 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\n\n>> > 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id\n>> DESC\n>> > LIMIT 20\n>> > 0.443 ms ANALYZE tmp\n>> > 0.365 ms SELECT * FROM tmp\n>> > 0.310 ms DROP TABLE tmp\n>> > 32.918 ms COMMIT\n\n>> The 32 seconds for commit can hardly be catalog related. It seems the\n>> file is\n>> fsynced before it is dropped.\n>\n> I'd hope that wasn't what's happening... is the backend smart enough to\n> know not to fsync anything involved with the temp table? ISTM that that\n> transaction shouldn't actually be creating any WAL traffic at all.\n> Though on the other hand there's no reason that DROP should be in the\n> transaction at all; maybe that's gumming things up during the commit.\n\n\tI included the DROP to make it clear that the time was spent in \nCOMMITting, not in DROPping the table.\n\tAlso, you can't use CREATE TEMP TABLE AS SELECT ... and at the same time \nmake it ON COMMIT DROP. You have to CREATE and INSERT.\n\tWith an ON COMMIT DROP temp table, the global timings are the same wether \nor not it is dropped before commit : it is always the COMMIT which takes \nall the milliseconds.\n\n\tI still bet on system catalog updates being the main cause of the time \nspent in COMMIT...\n\t(because ANALYZE changes this time)\n", "msg_date": "Thu, 11 May 2006 23:33:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I'd hope that wasn't what's happening... is the backend smart enough to\n> know not to fsync anything involved with the temp table?\n\nThe catalog entries required for it have to be fsync'd, unless you enjoy\nputting your entire database at risk (a bad block in pg_class, say,\nwould probably take out more than one table).\n\nIt's interesting to speculate about keeping such catalog entries in\nchild tables of pg_class etc that are themselves temp tables. Resolving\nthe apparent circularity of this is left as an exercise for the reader.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2006 18:08:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal " }, { "msg_contents": "On Thu, May 11, 2006 at 06:08:36PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > I'd hope that wasn't what's happening... is the backend smart enough to\n> > know not to fsync anything involved with the temp table?\n> \n> The catalog entries required for it have to be fsync'd, unless you enjoy\n> putting your entire database at risk (a bad block in pg_class, say,\n> would probably take out more than one table).\n\nYeah, thought about that after sending... :(\n\n> It's interesting to speculate about keeping such catalog entries in\n> child tables of pg_class etc that are themselves temp tables. 
Resolving\n> the apparent circularity of this is left as an exercise for the reader.\n\nWell, since it'd be a system table with a fixed OID there could\npresumably be a special case in the recovery code for it, though that's\npretty fugly sounding.\n\nAnother alternative would be to support global temp tables... I think\nthat would handle all the complaints of the OP except for the cost of\nanalyze. I suspect this would be easier to do than creating a special\ntype of temp table that used tuplestore instead of the full table\nframework, and it'd certainly be more general-purpose.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:58:42 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" }, { "msg_contents": "1.451 ms = 1.451 milliseconds\n1451.0 ms = 1.451 seconds ...\n\nso 32.918 ms for a commit seems perhaps reasonable ?\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Zeugswetter Andreas DCP SD\nSent:\tThu 5/11/2006 12:55 AM\nTo:\tJim C. Nasby; PFC\nCc:\tGreg Stark; Tom Lane; [email protected]; [email protected]\nSubject:\tRe: [PERFORM] [HACKERS] Big IN() clauses etc : feature proposal\n\n\n> Something else worth considering is not using the normal \n> catalog methods\n> for storing information about temp tables, but hacking that together\n> would probably be a rather large task.\n\nBut the timings suggest, that it cannot be the catalogs in the worst\ncase\nhe showed.\n\n> 0.101 ms BEGIN\n> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER\nNOT \n> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP\n\n1.4 seconds is not great for create table, is that what we expect ?\n\n> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id\nDESC \n> LIMIT 20\n> 0.443 ms ANALYZE tmp\n> 0.365 ms SELECT * FROM tmp\n> 0.310 ms DROP TABLE tmp\n> 32.918 ms COMMIT\n> \n> \tCREATING the table is OK, but what happens on COMMIT ? I hear\nthe disk \n> seeking frantically.\n\nThe 32 seconds for commit can hardly be catalog related. It seems the\nfile is \nfsynced before it is dropped.\n\nAndreas\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n!DSPAM:446c0a75172664042098162!\n\n\n\n\n", "msg_date": "Thu, 18 May 2006 00:56:57 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Big IN() clauses etc : feature proposal" } ]
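The numbers being debated are easy to reproduce from psql, and doing so makes the thread's point concrete: the CREATE and INSERT come back in about a millisecond each, and the cost lands on COMMIT, where the catalog changes for the new relation are flushed to disk, as Tom Lane explains. This is essentially PFC's test case restated (it assumes a bookmarks table like the one in the thread); \timing is the psql switch that prints per-statement elapsed milliseconds:

    \timing
    BEGIN;
    CREATE TEMPORARY TABLE tmp (
        a INTEGER NOT NULL,
        b INTEGER NOT NULL,
        c TIMESTAMP NOT NULL,
        d INTEGER NOT NULL
    ) ON COMMIT DROP;
    INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    SELECT * FROM tmp;
    COMMIT;   -- the tens of milliseconds show up here, not on CREATE or DROP

As Gregory Williamson points out, the unit slip upthread is worth keeping straight: 1.451 ms is about a millisecond and a half, not 1.4 seconds, and likewise the commit costs roughly 33 milliseconds, so the COMMIT is the only step that is expensive relative to the rest.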
[ { "msg_contents": "I am attempting to learn more about the way Pg decides what operators to use\nin its query planning and executions. I have moderately complicated table\nlayout, but it is mostly normalized I recently created a query:\n\nselect account.acct_name as \"Customer Name\", NULL as \"Facility\",\naccount.acct_number as \"LDC Acct#\", account.svc_address as \"Service\nAddress\",\naccount.svc_address as \"Service Address\", account.svc_city as \"Service\nCity\", account.svc_state as \"Service State\", account.svc_city as \"Service\nCity\",\naccount.svc_zip as \"Service Zip\", product.ldc_name as \"LDC\", NULL as \"ESI\nRate\", NULL as \"LDC Rate\",\naccount.billing_address as \"Mailing Address1\", account.billing_address_2 as\n\"Mailing Address2\",\naccount.billing_city || ', ' || account.billing_state as \"City, State\",\naccount.billing_zip as \"Zip\", customer.first_name || ' ' ||\ncustomer.last_name\nas \"Contact\", customer.phone as \"Phone\", customer.class as \"Customer Class\",\nNULL as \"Tax Exempt\", NULL as \"Exempt%\",\nmarketer_divisions.channel_partner_code as \"Channel Partner\", NULL as \"AE\",\nNULL as \"Annual Use MCF\", account.rate as \"Trigger Price\",\nmarketer_divisions.channel_partner_fee as \"Channel Partner Fee\"\nfrom naes.reconciliation\n inner join naes.application\n inner join naes.account\n inner join naes.marketer_product\n inner join naes.marketer_divisions\n inner join naes.cities\n on marketer_divisions.city_id = cities.city_id\n on marketer_product.division_id = marketer_divisions.division_id\n inner join naes.product\n on marketer_product.ldc_id = product.ldc_id\n on account.marketer_product_id =\nmarketer_product.marketer_product_id\n inner join naes.customer\n on account.customer_id = customer.customer_id\n on account.app_id = application.app_id and account.acct_id =\napplication.acct_id\n on reconciliation.app_id = application.app_id and\nreconciliation.transferred_date is NULL;\n\nThe query runs fine I have no performance issues with it, but here are two\nquery plans for the above query, one with nested loops on, the other with\nthem off:\n\nNested Loops on:\n\nNested Loop (cost=3.33..11.37 rows=1 width=268) (actual time=2.166..2.982\nrows=3 loops=1)\n Join Filter: (\"outer\".city_id = \"inner\".city_id)\n -> Nested Loop (cost=3.33..10.32 rows=1 width=272) (actual\ntime=2.136..2.863 rows=3 loops=1)\n Join Filter: (\"outer\".division_id =\n\"inner\".division_id)plication.app_id and reco=\n -> Nested Loop (cost=3.33..9.27 rows=1 width=231) (actual\ntime=2.119..2.763 rows=3 loops=1)\n Join Filter: (\"outer\".ldc_id = \"inner\".ldc_id)\n -> Nested Loop (cost=3.33..8.23 rows=1 width=218) (actual\ntime=2.101..2.659 rows=3 loops=1)\n -> Nested Loop (cost=3.33..5.15 rows=1 width=151)\n(actual time=2.068..2.559 rows=3 loops=1)\n Join Filter: (\"inner\".app_id = \"outer\".app_id)\n -> Merge Join (cost=3.33..4.11 rows=1\nwidth=159) (actual time=1.096..1.477 rows=31 loops=1)\n Merge Cond: (\"outer\".marketer_product_id =\n\"inner\".marketer_product_id)\n -> Index Scan using\n\"PK_marketer_product_id\" on marketer_product (cost=0.00..3.04 rows=4\nwidth=12) (actual time=0.017..0.033 rows=4 loops=1)\n -> Sort (cost=3.33..3.33 rows=1\nwidth=155) (actual time=1.065..1.180 rows=31 loops=1)\n Sort Key: account.marketer_product_id\n -> Hash Join (cost=1.75..3.32\nrows=1 width=155) (actual time=0.457..0.848 rows=31 loops=1)\n Hash Cond: ((\"outer\".app_id =\n\"inner\".app_id) AND (\"outer\".acct_id = \"inner\".acct_id))\n -> Seq Scan on 
account\n(cost=0.00..1.28 rows=28 width=155) (actual time=0.007..0.160 rows=34\nloops=1)\n -> Hash (cost=1.50..1.50\nrows=50 width=8) (actual time=0.413..0.413 rows=50 loops=1)\n -> Seq Scan on\napplication (cost=0.00..1.50 rows=50 width=8) (actual time=0.006..0.209\nrows=50 loops=1)\n -> Seq Scan on reconciliation (cost=0.00..1.03\nrows=1 width=4) (actual time=0.005..0.016 rows=3 loops=31)\n Filter: (transferred_date IS NULL)\n -> Index Scan using customer_pkey on customer\n(cost=0.00..3.06 rows=1 width=75) (actual time=0.011..0.015 rows=1 loops=3)\n Index Cond: (\"outer\".customer_id =\ncustomer.customer_id)\n -> Seq Scan on product (cost=0.00..1.02 rows=2 width=21)\n(actual time=0.005..0.013 rows=2 loops=3)\n -> Seq Scan on marketer_divisions (cost=0.00..1.02 rows=2\nwidth=49) (actual time=0.005..0.013 rows=2 loops=3)\n -> Seq Scan on cities (cost=0.00..1.02 rows=2 width=4) (actual\ntime=0.005..0.013 rows=2 loops=3)\n Total runtime: 3.288 ms\n\nNested Loops off:\n\nHash Join (cost=8.27..11.78 rows=1 width=268) (actual time=1.701..1.765\nrows=3 loops=1)\n Hash Cond: (\"outer\".city_id = \"inner\".city_id)\n -> Hash Join (cost=7.24..10.73 rows=1 width=272) (actual\ntime=1.629..1.667 rows=3 loops=1)\n Hash Cond: (\"outer\".customer_id = \"inner\".customer_id)\n -> Seq Scan on customer (cost=0.00..3.32 rows=32 width=75)\n(actual time=0.006..0.136 rows=33 loops=1)\n -> Hash (cost=7.24..7.24 rows=1 width=205) (actual\ntime=1.366..1.366 rows=3 loops=1)\n -> Hash Join (cost=6.43..7.24 rows=1 width=205) (actual\ntime=1.243..1.333 rows=3 loops=1)\n Hash Cond: (\"outer\".division_id = \"inner\".division_id)\n -> Hash Join (cost=5.40..6.20 rows=1 width=164)\n(actual time=1.184..1.252 rows=3 loops=1)\n Hash Cond: (\"outer\".ldc_id = \"inner\".ldc_id)\n -> Merge Join (cost=4.38..5.16 rows=1\nwidth=151) (actual time=1.124..1.169 rows=3 loops=1)\n Merge Cond: (\"outer\".marketer_product_id =\n\"inner\".marketer_product_id)\n -> Index Scan using\n\"PK_marketer_product_id\" on marketer_product (cost=0.00..3.04 rows=4\nwidth=12) (actual time=0.012..0.019 rows=2 loops=1)\n -> Sort (cost=4.38..4.38 rows=1\nwidth=147) (actual time=1.098..1.109 rows=3 loops=1)\n Sort Key: account.marketer_product_id\n -> Hash Join (cost=2.78..4.37\nrows=1 width=147) (actual time=1.007..1.064 rows=3 loops=1)\n Hash Cond: (\"outer\".app_id =\n\"inner\".app_id)\n -> Hash Join (cost=1.75..3.32\nrows=1 width=155) (actual time=0.494..0.875 rows=31 loops=1)\n Hash Cond:\n((\"outer\".app_id = \"inner\".app_id) AND (\"outer\".acct_id = \"inner\".acct_id))\n -> Seq Scan on account\n(cost=0.00..1.28 rows=28 width=155) (actual time=0.007..0.154 rows=34\nloops=1)\n -> Hash\n(cost=1.50..1.50 rows=50 width=8) (actual time=0.451..0.451 rows=50 loops=1)\n -> Seq Scan on\napplication (cost=0.00..1.50 rows=50 width=8) (actual time=0.006..0.223\nrows=50 loops=1)\n -> Hash (cost=1.03..1.03\nrows=1 width=4) (actual time=0.042..0.042 rows=3 loops=1)\n -> Seq Scan on\nreconciliation (cost=0.00..1.03 rows=1 width=4) (actual time=0.007..0.019\nrows=3 loops=1)\n Filter:\n(transferred_date IS NULL)\n -> Hash (cost=1.02..1.02 rows=2 width=21)\n(actual time=0.036..0.036 rows=2 loops=1)\n -> Seq Scan on product (cost=0.00..1.02\nrows=2 width=21) (actual time=0.005..0.014 rows=2 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=49) (actual\ntime=0.036..0.036 rows=2 loops=1)\n -> Seq Scan on marketer_divisions\n(cost=0.00..1.02 rows=2 width=49) (actual time=0.007..0.016 rows=2 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=4) (actual 
time=0.039..0.039\nrows=2 loops=1)\n -> Seq Scan on cities (cost=0.00..1.02 rows=2 width=4) (actual\ntime=0.009..0.017 rows=2 loops=1)\n Total runtime: 2.084 ms\n\nWith nested loops enabled does it choose to use them because it sees the\nestimated start up cost with loops as less? Does it not know that the total\nquery would be faster with the Hash Joins? This query is in development\nright now, and as such there are not many rows. When it goes to production\nthe reconciliation table will grow by about 50 - 100 rows per day where the\ntransferred_date is NULL (this is the driving criteria behind this query.)\nAs the table grows can I expect Pg to realize the the nested loops will be\nslower and will it switch to the Hash Joins? If not how would I force it to\nuse the Hash Joins without just turning off nested loops completely? Is it\na good idea to turn off nested loops completely?\n\nStatistics collecting and auto vacuum is enabled btw. I have an erd diagram\nshowing the table structures if anyone is interested in looking at it, just\nlet me know.\n\nThanks,\nKetema
", "msg_date": "Thu, 11 May 2006 08:57:48 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loops vs. Hash Joins or Merge Joins" }, { "msg_contents": "On Thu, May 11, 2006 at 08:57:48AM -0400, Ketema Harris wrote:\n> Nested Loops on:\n> Nested Loop (cost=3.33..11.37 rows=1 width=268) (actual time=2.166..2.982\n> \n> Nested Loops off:\n> Hash Join (cost=8.27..11.78 rows=1 width=268) (actual time=1.701..1.765\n> \n> With nested loops enabled does it choose to use them because it sees the\n> estimated start up cost with loops as less? Does it not know that the total\n> query would be faster with the Hash Joins? This query is in development\n\nYes it does know; re-read the output.\n\nI believe the cases where the planner will look at startup cost over\ntotal cost are pretty limited; when LIMIT is used and I think sometimes\nwhen a CURSOR is used.\n\n> Statistics collecting and auto vacuum is enabled btw. I have an erd diagram\n> showing the table structures if anyone is interested in looking at it, just\n> let me know.\n\nNote that it's not terribly uncommon for the default stats target to be\nwoefully inadequate for large sets of data, not that 100 rows a day is\nlarge. But it probably wouldn't hurt to bump the defaulst stats target\nup to 30 or 50 anyway.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 11 May 2006 17:51:36 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loops vs. Hash Joins or Merge Joins" } ]
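For anyone wanting to act on the advice in this thread, a sketch of the two knobs involved, using the table and column from the question; the value 50 is just the number floated above, not a recommendation:

    -- raise the statistics target for the driving column, then re-analyze
    ALTER TABLE naes.reconciliation
        ALTER COLUMN transferred_date SET STATISTICS 50;
    ANALYZE naes.reconciliation;

    -- or raise it for all columns via the GUC (postgresql.conf or per session)
    SET default_statistics_target = 50;

    -- to compare plans without turning nested loops off globally,
    -- disable them for the current session only while testing
    SET enable_nestloop = off;
    EXPLAIN ANALYZE SELECT ...;   -- the query from the question
    SET enable_nestloop = on;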
[ { "msg_contents": "Hi!\n\nSee the next case, please! This is a theoretical CASE, which cause \nproblems to me. Please, help!\n\nCREATE TABLE a (\n id SERIAL, -- This is the PRIMARY KEY\n col TEXT\n);\n\nCREATE TABLE b (\n id SERIAL, -- This is the PRIMARY KEY\n a_id INT4, -- REFERENCE TO a.id\n value INT4,\n txt TEXT\n);\n\nCREATE TABLE c (\n id SERIAL, -- This is the PRIMARY KEY\n a_id INT4, -- REFERENCE TO a.id\n value INT4,\n txt TEXT\n);\n\nI examined the next query. In my opinion its query plan is not too optimal.\nThere are indexes on b.a_id, b.value, b.txt, c.a_id, c.value, c.txt.\nI generated test datas: 10000 rows into the table 'a',\n 100000 rows into the table 'b',\n 100000 rows into the table 'c'.\n\nSELECT a.id, a.col,\n TABLE_B.minval, TABLE_B.b_txt,\n TABLE_C.maxval, TABLE_C.c_txt\nFROM a\nLEFT JOIN (\n SELECT a_id,\n min(value) AS minval,\n max(txt) AS b_txt \n FROM b\n GROUP BY a_id) TABLE_B ON (TABLE_B.a_id = a.id)\nLEFT JOIN (\n SELECT a_id,\n max(value) AS maxval,\n min(txt) AS c_txt\n FROM c\n GROUP BY a_id) TABLE_C ON (TABLE_C.a_id = a.id)\nLIMIT 10;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6907.56..6932.86 rows=10 width=84) (actual \ntime=750.075..750.184 rows=10 loops=1)\n -> Hash Left Join (cost=6907.56..34114.19 rows=10753 width=84) \n(actual time=750.071..750.163 rows=10 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".a_id)\n -> Merge Left Join (cost=4171.85..4370.95 rows=10000 \nwidth=48) (actual time=400.712..400.775 rows=10 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".a_id)\n -> Sort (cost=823.39..848.39 rows=10000 width=12) \n(actual time=31.926..31.935 rows=10 loops=1)\n Sort Key: a.id\n -> Seq Scan on a (cost=0.00..159.00 rows=10000 \nwidth=12) (actual time=0.013..12.120 rows=10000 loops=1)\n -> Sort (cost=3348.47..3373.32 rows=9940 width=40) \n(actual time=368.741..368.762 rows=22 loops=1)\n Sort Key: table_b.a_id\n -> Subquery Scan table_b (cost=2440.00..2688.50 \nrows=9940 width=40) (actual time=305.836..338.764 rows=10000 loops=1)\n -> HashAggregate (cost=2440.00..2589.10 \nrows=9940 width=17) (actual time=305.832..320.891 rows=10000 loops=1)\n -> Seq Scan on b (cost=0.00..1690.00 \nrows=100000 width=17) (actual time=0.012..125.888 rows=100000 loops=1)\n -> Hash (cost=2708.83..2708.83 rows=10753 width=40) (actual \ntime=349.314..349.314 rows=10000 loops=1)\n -> Subquery Scan table_c (cost=2440.00..2708.83 \nrows=10753 width=40) (actual time=298.826..331.239 rows=10000 loops=1)\n -> HashAggregate (cost=2440.00..2601.30 \nrows=10753 width=17) (actual time=298.821..313.603 rows=10000 loops=1)\n -> Seq Scan on c (cost=0.00..1690.00 \nrows=100000 width=17) (actual time=0.015..124.996 rows=100000 loops=1)\n Total runtime: 757.818 ms\n\nBut I can optimize the previous query by hand:\n\nSELECT a.id, a.col,\n (SELECT min(value) FROM b WHERE b.a_id=a.id) AS minval,\n (SELECT max(txt) FROM b WHERE b.a_id=a.id) AS b_txt,\n (SELECT max(value) FROM c WHERE c.a_id=a.id) AS maxval,\n (SELECT min(txt) FROM c WHERE c.a_id=a.id) AS c_txt\nFROM a\nLIMIT 10;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..126.76 rows=10 width=12) (actual time=0.221..1.754 \nrows=10 loops=1)\n -> Seq Scan on a (cost=0.00..126764.21 rows=10000 width=12) (actual \ntime=0.218..1.734 rows=10 loops=1)\n SubPlan\n -> Aggregate 
(cost=3.15..3.16 rows=1 width=9) (actual \ntime=0.039..0.039 rows=1 loops=10)\n -> Index Scan using idx_c_aid on c (cost=0.00..3.12 \nrows=9 width=9) (actual time=0.007..0.024 rows=10 loops=10)\n Index Cond: (a_id = $0)\n -> Aggregate (cost=3.15..3.16 rows=1 width=4) (actual \ntime=0.038..0.039 rows=1 loops=10)\n -> Index Scan using idx_c_aid on c (cost=0.00..3.12 \nrows=9 width=4) (actual time=0.008..0.025 rows=10 loops=10)\n Index Cond: (a_id = $0)\n -> Aggregate (cost=3.16..3.17 rows=1 width=9) (actual \ntime=0.038..0.039 rows=1 loops=10)\n -> Index Scan using idx_b_aid on b (cost=0.00..3.14 \nrows=10 width=9) (actual time=0.006..0.022 rows=10 loops=10)\n Index Cond: (a_id = $0)\n -> Aggregate (cost=3.16..3.17 rows=1 width=4) (actual \ntime=0.039..0.040 rows=1 loops=10)\n -> Index Scan using idx_b_aid on b (cost=0.00..3.14 \nrows=10 width=4) (actual time=0.008..0.025 rows=10 loops=10)\n Index Cond: (a_id = $0)\n Total runtime: 1.933 ms\n\nThere is huge difference between the query performances. Why?\nMy problem is that in the first query use HashAggregation on the all \ntable 'b' and 'c' and cannot take notice of the 'LIMIT 10'.\nIn my special case I cannot use the second query formalization. I \nsimplified my problem to this theoretical CASE.\nDo you know why cannot make the best of the 'LIMIT' criteria in the \nfirst query?\nThese tables are big, so in my opinion the planner could optimize better.\n\nIf this is a deficiency of the planner, I'd like to suggest this feature \ninto the planner.\n\nRegards,\n Antal Attila\n", "msg_date": "Fri, 12 May 2006 14:43:16 +0200", "msg_from": "Antal Attila <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong plan for subSELECT with GROUP BY" }, { "msg_contents": "Antal Attila <[email protected]> writes:\n> If this is a deficiency of the planner, I'd like to suggest this feature \n> into the planner.\n\nThis really falls into the category of \"you've got to be kidding\".\nThere's no way that it'd be reasonable for the planner to expend cycles\non every query to look for corner cases like this. Do the\nhand-optimization instead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 May 2006 10:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan for subSELECT with GROUP BY " }, { "msg_contents": "If you wrap the LIMIT select into a subquery in the FROM the planner\nmight figure it out...\n\nSELECT ...\n FROM (SELECT blah FROM a LIMIT 10)\n LEFT JOIN ...\n\nUnlike some other databases that will spend huge amounts of time on\ntrying to re-arrange queries and then hope they can effectively cache\nquery plans, PostgreSQL prefers not to spend cycles on more esoteric\ncases so that query planning is fast.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 12 May 2006 14:12:13 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan for subSELECT with GROUP BY" }, { "msg_contents": "On Fri, 2006-05-12 at 10:05 -0400, Tom Lane wrote:\n> Antal Attila <[email protected]> writes:\n> > If this is a deficiency of the planner, I'd like to suggest this feature \n> > into the planner.\n> \n> This really falls into the category of \"you've got to be kidding\".\n\nAgreed\n\n> There's no way that it'd be reasonable for the planner to expend cycles\n> on every query to look for corner cases like this. \n\nOT: Should we have a way of telling the optimizer how much time and\neffort we would like it to go to? Some of the new optimizations and many\nyet to come cover smaller and smaller sub-cases. \n\nAt least internally, we could mark the cost-of-optimization as we go, so\nwe can play with the external interface later.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 15 May 2006 09:48:33 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan for subSELECT with GROUP BY" } ]
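Written out against the tables in this thread, Jim's subquery suggestion looks roughly like the sketch below. It is only a sketch: a LIMIT without ORDER BY picks ten arbitrary rows, and the planner may well still aggregate all of b and c before joining, so whether it actually helps has to be checked with EXPLAIN ANALYZE.

    SELECT a10.id, a10.col,
           tb.minval, tb.b_txt,
           tc.maxval, tc.c_txt
    FROM (SELECT id, col FROM a LIMIT 10) AS a10
    LEFT JOIN (SELECT a_id, min(value) AS minval, max(txt) AS b_txt
                 FROM b GROUP BY a_id) AS tb ON tb.a_id = a10.id
    LEFT JOIN (SELECT a_id, max(value) AS maxval, min(txt) AS c_txt
                 FROM c GROUP BY a_id) AS tc ON tc.a_id = a10.id;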
[ { "msg_contents": "Hello,\ncontinuing the saga, \nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\nmy coleague created a test database with fake data (see below).\n\nThe above archived message contains the the timings of firebird and postgresql.\nThe weird problem are the 2 queries that firebird executes in less than 2\nseconds and postgresql took almost half hour to complete at 100% cpu.\n\n\nyou could download the test database at the address below. It is a 128 kpbs \nadsl connection.\n74 MB\nhttp://www.eicomm.no-ip.com/download/BackDNF_Cript.zip\n\n\nMany thanks.\nAndre Felipe Machado\n\n", "msg_date": "Fri, 12 May 2006 12:48:52 -0200", "msg_from": "\"andremachado\" <[email protected]>", "msg_from_op": true, "msg_subject": "Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows)" }, { "msg_contents": "On Fri, May 12, 2006 at 12:48:52PM -0200, andremachado wrote:\n> Hello,\n> continuing the saga, \n> http://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\n> my coleague created a test database with fake data (see below).\n> \n> The above archived message contains the the timings of firebird and postgresql.\n> The weird problem are the 2 queries that firebird executes in less than 2\n> seconds and postgresql took almost half hour to complete at 100% cpu.\n\nHow about posting EXPLAIN ANALYZE for those two queries, as well as the\nqueries themselves?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 12 May 2006 14:14:15 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows)" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Fri, May 12, 2006 at 12:48:52PM -0200, andremachado wrote:\n>> Hello,\n>> continuing the saga, \n>> http://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\n>> my coleague created a test database with fake data (see below).\n>>\n>> The above archived message contains the the timings of firebird and postgresql.\n>> The weird problem are the 2 queries that firebird executes in less than 2\n>> seconds and postgresql took almost half hour to complete at 100% cpu.\n> \n> How about posting EXPLAIN ANALYZE for those two queries, as well as the\n> queries themselves?\n\nI have this database downloaded if anyone wants a copy off a faster link.\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Fri, 12 May 2006 12:54:11 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows)" }, { "msg_contents": "\"andremachado\" <[email protected]> writes:\n> continuing the saga, \n> http://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\n> my coleague created a test database with fake data (see below).\n\nThanks. 
I played around with this a bit, and got results like these:\noriginal query, 8.1 branch from a couple weeks back: 945 sec\noriginal query, 8.1 branch tip: 184 sec\nmodified query, 8.1 branch tip: 15 sec\n\nThe first differential is because of this patch:\nhttp://archives.postgresql.org/pgsql-committers/2006-04/msg00355.php\nviz\n\tRemove the restriction originally coded into\n\toptimize_minmax_aggregates() that MIN/MAX not be converted to\n\tuse an index if the query WHERE clause contains any volatile\n\tfunctions or subplans.\n\nAllowing the max(DEC2.AM_REFERENCIA) subquery to be converted to an\nindexscan makes for about a 5X reduction in the number of times the\nEXISTS sub-subquery is executed. But the real problem is that Postgres\nisn't excessively smart about EXISTS subqueries. I manually changed it\ninto an IN to get the 15-second runtime: instead of\n\n (select max(DEC2.AM_REFERENCIA) from DECLARACAO DEC2\n where DEC2.IN_FOI_RETIFICADA=0 and\n\t exists (select CAD3.ID_CADASTRO from CADASTRO CAD3 where\n\t\t CAD3.ID_DECLARACAO=DEC2.ID_DECLARACAO and\n\t\t CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA ) )\n\nwrite\n\n (select max(DEC2.AM_REFERENCIA) from DECLARACAO DEC2\n where DEC2.IN_FOI_RETIFICADA=0 and DEC2.ID_DECLARACAO in\n\t (select CAD3.ID_DECLARACAO from CADASTRO CAD3 where\n\t\t CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA ) )\n\nI'm not clear on how Firebird is managing to do this query in under\na second --- I can believe that they know how to do EXISTS as a join\nbut it still seems like the subqueries need to be done many thousand\ntimes. I thought maybe they were caching the results of the overall\nsubquery for specific values of CADASTRO.ID_EMPRESA, but now that I\nsee your test data, there are several thousand distinct values of\nthat, so there's not a lot of traction to be gained that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 May 2006 18:19:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows) " }, { "msg_contents": "Hello, Jim\nI did not want to clutter mailboxes of those who are not interested at\nthis weird problem.\nSo, i pointed to the archived message that contains 2 tar.gz files\n(around 50 KB) with the two sets of queries (firebird and postgresql\nrespectively), its results, explain analyze, pg configurations, firebird\nplan outputs, etc.\nPlease, open the cited link and scroll to the end of message. You will\nfind the 2 tar.gz files.\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php\nIf you have some difficulty, I could send a private email containing the\n2 files in order to not send the big email to the all list again.\nMany thanks.\nAndre Felipe Machado\n\n\nEm Sex, 2006-05-12 �s 14:14 -0500, Jim C. 
Nasby escreveu:\n> On Fri, May 12, 2006 at 12:48:52PM -0200, andremachado wrote:\n> > Hello,\n> > continuing the saga, \n> > http://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\n> > my coleague created a test database with fake data (see below).\n> > \n> > The above archived message contains the the timings of firebird and postgresql.\n> > The weird problem are the 2 queries that firebird executes in less than 2\n> > seconds and postgresql took almost half hour to complete at 100% cpu.\n> \n> How about posting EXPLAIN ANALYZE for those two queries, as well as the\n> queries themselves?\n\n", "msg_date": "Sat, 13 May 2006 06:58:02 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9?= Felipe Machado <[email protected]>", "msg_from_op": false, "msg_subject": "Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows)" }, { "msg_contents": "\"andremachado\" <[email protected]> writes:\n> continuing the saga, \n> http://archives.postgresql.org/pgsql-performance/2006-04/msg00558.php ,\n> my coleague created a test database with fake data (see below).\n\nI tried to use this data to replicate your results, and could not.\nI grabbed a copy of what I think is the latest Firebird release,\nfirebird-1.5.3.4870, built it on a Fedora Core 4 machine (32-bit,\ncouldn't get it to build cleanly on my newer 64-bit machine :-()\nand compared to Postgres 8.1 branch tip on the same machine.\nOn the interesting sub-sub-EXISTS query, I see these results:\n\nFirebird:\nSQL> set stats on;\nSQL> set plan on;\nSQL> update CADASTRO set IN_CADASTRO_MAIS_ATUAL = case when CADASTRO.ID_CADASTRO= (select max(CAD2.ID_CADASTRO) from CADASTRO CAD2 inner join DECLARACAO DECL on (DECL.ID_DECLARACAO=CAD2.ID_DECLARACAO) where CAD2.ID_EMPRESA=CADASTRO.ID_EMPRESA and DECL.AM_REFERENCIA = (select max(DEC2.AM_REFERENCIA) from DECLARACAO DEC2 where DEC2.IN_FOI_RETIFICADA=0 and exists (select CAD3.ID_CADASTRO from CADASTRO CAD3 where CAD3.ID_DECLARACAO=DEC2.ID_DECLARACAO and CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA ) )and DECL.IN_FOI_RETIFICADA=0 )then 1 else 0 end ;\n\nPLAN (CAD3 INDEX (RDB$FOREIGN1))\nPLAN (DEC2 NATURAL)\nPLAN JOIN (DECL INDEX (IDX_DT_REFERENCIA),CAD2 INDEX (RDB$FOREIGN1))\nPLAN (CADASTRO NATURAL)\nCurrent memory = 786704\nDelta memory = 309056\nMax memory = 786704\nElapsed time= 344.19 sec\nCpu = 0.03 sec\nBuffers = 75\nReads = 2081702\nWrites = 16173\nFetches = 21713743\n\nThe cpu = 0.03 sec bit is bogus; in reality the CPU is maxed out\nand the isql process accumulates very nearly 344 seconds runtime.\n\nPostgres:\nbc=# \\timing\nTiming is on.\nbc=# update CADASTRO set IN_CADASTRO_MAIS_ATUAL = case when CADASTRO.ID_CADASTRO= (select max(CAD2.ID_CADASTRO) from CADASTRO CAD2 inner join DECLARACAO DECL on (DECL.ID_DECLARACAO=CAD2.ID_DECLARACAO) where CAD2.ID_EMPRESA=CADASTRO.ID_EMPRESA and DECL.AM_REFERENCIA = (select max(DEC2.AM_REFERENCIA) from DECLARACAO DEC2 where DEC2.IN_FOI_RETIFICADA=0 and exists (select CAD3.ID_CADASTRO from CADASTRO CAD3 where CAD3.ID_DECLARACAO=DEC2.ID_DECLARACAO and CAD3.ID_EMPRESA=CADASTRO.ID_EMPRESA ) )and DECL.IN_FOI_RETIFICADA=0 )then 1 else 0 end ;\nUPDATE 15490\nTime: 420350.628 ms\n\nNow I know nothing about Firebird and it's quite possible that I missed\nsome essential tuning step, but I'm sure not in the same ballpark as\nyour report of 0.72 sec to run this query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 May 2006 17:26:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows) 
" } ]
[ { "msg_contents": "Please cc the list so others can help.\n\n> From: Witold Strzelczyk [mailto:[email protected]]\n> On Friday 12 May 2006 00:04, you wrote:\n> \n> Yes, thanks but method is not a point.\n\nActually, it is a point. Databases don't like doing things procedurally. Using a stored procedure to operate on a set of data is very often the wrong way to go about it. In the case of ranking, I'm extremely doubtful that you'll ever get a procedure to opperate anywhere near as fast as native SQL.\n\n> Can You tell me why \n> \n> \t\tselect into inGameRating count(game_result)+1 \n> from users\n> \t\twhere game_result > 2984;\n> \n> tooks ~100 ms and\n> \n> \t\tselect into inGameRating count(game_result)+1 \n> from users\n> \t\twhere game_result > inRow.game_result;\n> \n> where inRow.game_result = 2984 tooks ~1100 ms!?\n\nNo, I can't. What's EXPLAIN ANALYZE show?\n\n> btw. I must try your temp sequence but if it is not as quick \n> as my new (and \n> final) function I'll send if to you.\n> \n> > If you're trying to come up with ranking then you'll be much happier\n> > using a sequence and pulling from it using an ordered \n> select. See lines\n> > 19-27 in http://lnk.nu/cvs.distributed.net/9bu.sql for an example.\n> > Depending on what you're doing you might not need the temp table.\n> >\n> > On Fri, May 05, 2006 at 04:46:43PM +0200, Witold Strzelczyk wrote:\n> > > I have a question about my function. I must get user \n> rating by game\n> > > result. This isn't probably a perfect solution but I have \n> one question\n> > > about\n> > >\n> > > select into inGameRating count(game_result)+1 from users\n> > > \t\twhere game_result > inRow.game_result;\n> > >\n> > > This query in function results in about 1100 ms.\n> > > inRow.game_result is a integer 2984\n> > > And now if I replace inRow.game_result with integer\n> > >\n> > > select into inGameRating count(game_result)+1 from users\n> > > \t\twhere game_result > 2984;\n> > >\n> > > query results in about 100 ms\n> > >\n> > > There is probably a reason for this but can you tell me \n> about it because\n> > > I can't fine one\n> > >\n> > > My function:\n> > >\n> > > create or replace function ttt_result(int,int) returns setof\n> > > tparent_result language plpgsql volatile as $$\n> > > declare\n> > > \tinOffset alias for $1;\n> > > \tinLimit alias for $2;\n> > > \tinRow tparent_result%rowtype;\n> > > \tinGameResult int := -1;\n> > > \tinGameRating int := -1;\n> > > begin\n> > >\n> > > for inRow in\n> > > \tselect\n> > > \t\temail,wynik_gra\n> > > \tfrom\n> > > \t\tkonkurs_uzytkownik\n> > > \torder by wynik_gra desc limit inLimit offset inOffset\n> > > loop\n> > > \tif inGameResult < 0 then -- only for first iteration\n> > > \t\t/* this is fast ~100 ms\n> > > \t\tselect into inGameRating\n> > > \t\t\tcount(game_result)+1 from users\n> > > \t\t\twhere game_result > \t2984;\n> > > \t\t*/\n> > > \t\t/* even if inRow.game_result = 2984 this is \n> very slow ~ 1100 ms!\n> > > \t\tselect into inGameRating count(game_result)+1 \n> from users\n> > > \t\twhere game_result > inRow.game_result;\n> > > \t\t*/\n> > > \t\tinGameResult := inRow.game_result;\n> > > \tend if;\n> > >\n> > > \tif inGameResult > inRow.game_result then\n> > > \t\tinGameRating := inGameRating + 1;\n> > > \tend if;\n> > >\n> > > \tinRow.game_rating := inGameRating;\n> > > \tinGameResult := inRow.game_result;\n> > > \treturn next inRow;\n> > >\n> > > end loop;\n> > > return;\n> > > end;\n> > > $$;\n> > > --\n> > > Witold Strzelczyk\n> > > [email protected]\n> > >\n> > > 
---------------------------(end of \n> broadcast)---------------------------\n> > > TIP 9: In versions below 8.0, the planner will ignore \n> your desire to\n> > > choose an index scan if your joining column's \n> datatypes do not\n> > > match\n> \n> -- \n> Witold Strzelczyk\n> \n>   : :   D i g i t a l  O n e  : :  http://www.digitalone.pl\n>   : :   Dowborczykow 25  Lodz  90-019  Poland\n>   : :   tel. [+48 42] 6771477  fax [+48 42] 6771478\n> \n>    ...Where Internet works for effective business solutions...\n> \n", "msg_date": "Fri, 12 May 2006 11:09:27 -0500", "msg_from": "\"Jim Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow variable against int??" } ]
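The sequence trick Jim points at (the linked example itself is not quoted here, so this is a guess at the idiom) is usually written in roughly this shape, using the table and columns from the original function. It relies on the outer select returning rows in the subquery's sort order, which holds for a plain subquery scan like this one; note that it numbers tied scores differently than the count()-based approach, and for a non-zero offset the sequence would have to be created with START WITH offset+1 so the numbering stays global.

    CREATE TEMPORARY SEQUENCE rank_seq;

    SELECT nextval('rank_seq') AS game_rating, x.email, x.wynik_gra
      FROM (SELECT email, wynik_gra
              FROM konkurs_uzytkownik
             ORDER BY wynik_gra DESC
             LIMIT 10 OFFSET 0) AS x;

    DROP SEQUENCE rank_seq;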
[ { "msg_contents": "I have recently been encountering a number of significant performance\nproblems related to stable functions being called multiple times when I\nbelieve they could be called just once. Searching the ML archives, I see\nI'm not the first:\n\n<http://archives.postgresql.org/pgsql-hackers/2003-04/msg00890.php>\n<http://archives.postgresql.org/pgsql-performance/2006-01/msg00140.php>\n\nand so on. None of them seemed to resolve to a plan of action or elegant\nworkaround. It is mentioned that \"stable\" was added to allow such\nfunctions to be used for index scans, but I could not tell if other\noptimizations I would like are impossible, or possible and if so, might\nor will never be implemented.\n\nI have several examples of queries in which eliminating extra calls to a\nstable function would result in very significant performance gains. All\nof these cases were found while developing a real application, and\nalthough I've simplified them to be more readable, they are not\ncontrived.\n\nProblem 1: creating a view with multiple columns calculated from a\nfunction.\n\n create table sale(saleid serial, total numeric);\n create function cost_of_sale(sale.saleid%type) returns numeric stable as $$\n -- calculates the cost of purchasing the things sold in a sale\n -- takes considerable time to calculate\n $$;\n create view convenient_view_on_sale as\n select *,\n cost_of_sale(saleid) as cost,\n total - cost_of_sale(saleid) as profit,\n case when total != 0 then (total-cost_of_sale(saleid)) / total * 100 end as margin;\n\nExecuting \"select * from convenient_view_on_sale limit 1\" will execute\ncost_of_sale thrice. However, from the definition of stable, we know it\ncould have been called just once. As cost_of_sale takes hundreds of ms\nto execute while the rest of the query is extremely simple, additional\ncalls in effect multiply the total execution time.\n\nNonsolution 1a: moving function to a subselect:\n\n create view convenient_view_on_sale as\n select *,\n total - cost as profit,\n case when total != 0 then (total-cost) / total * 100 end as margin\n from (select *, cost_of_sale(saleid) as cost from sale) as subq;\n\nThe query planner will eliminate the subselect, and cost_of_sale will\nstill be executed thrice. I can observe no change in behaviour\nwhatsoever with this view definition.\n\nPS: I wonder what the behaviour would be if I explicitly inlined\ncost_of_sale here?\n\nNonsolution 1b: preventing optimization of the subselect with \"offset 0\"\n\n create view convenient_view_on_sale as\n select *,\n total - cost as profit,\n case when total != 0 then (total-cost) / total * 100 end as margin\n from (select *, cost_of_sale(saleid) as cost from sale offset 0) as subq;\n\nThis helps in the case of a \"select *\"; the subquery will not be\neliminated due to the \"offset 0\", and cost_of_sale will be executed only\nonce. However, it will always be executed, even if none of the cost\nrelated columns are selected. For exaple,\n\"select saleid from convenient_view_on_sale limit 1\" will execute\ncost_of_sale once, although it could have not been executed at all.\n\nProblem 1 has a workaround: perform the dependant calculations (profit\nand margin in this case) on the client, or in a stored procedure. This\nis often inconvienent, but it works.\n\nProblem 2: expensive functions returning composite types.\n\nConsider that the purchases for a sale might have not yet been made, so\nthe exact cost can not be known, but a guess can be made based on the\ncurrent prices. 
cost_of_sale might be updated to reflect this:\n\n create function cost_of_sale(sale.saleid%type, out cost numeric, out estimated bool)\n stable as $$ ... $$;\n\n create view convenient_view_on_sale as\n select *, cost_of_sale(saleid) from sale;\n\nNote that in many cases, calculating \"cost\" and \"estimated\" together\ntakes just as long as calculating either one. This is why both are\nreturned from the same function.\n\nNow, I use python as a client, in particular the psycopg2 module. When I\ndo something such as \"select cost from convenient_view_on_sale\", the\nvalues returned for the cost column (a composite type (numeric, bool))\nare strings. Perhaps this is an issue with psycopg2, but as a user, this\nis very annoying since I can not really get at the components of the\ncomposite type without reimplementing pg's parser. Granted I could\nprobably do it simply in a way that work work most the time, but I feel\nit would be error prone, and I'd rather not.\n\nThus, I seek a way to get the components of the cost column in top-level\ncolumns. For example I try, \"select (cost).cost, (cost).estimated\", but\nthis now executes cost_of_sale twice, doubling the time of my query.\n\nSince stable functions are the most common in my experience, and I have\nquite a number of them that perform complex, slow queries, I'd really\nlike to see optimizations in this area. Until such a time, I would very\nmuch appreciate any workaround suggestions.\n", "msg_date": "Fri, 12 May 2006 13:59:08 -0400", "msg_from": "Phil Frost <[email protected]>", "msg_from_op": true, "msg_subject": "stable function optimizations, revisited" } ]
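One workaround for problem 2, in the same spirit as the poster's own "offset 0" trick from nonsolution 1b: compute the composite once per row behind the optimization fence and expose its fields as ordinary columns. This is only a sketch against the tables defined above, and it shares 1b's drawback that cost_of_sale runs even for queries that never touch the cost columns.

    CREATE VIEW convenient_view_on_sale AS
    SELECT saleid, total,
           (c).cost AS cost,
           (c).estimated AS cost_is_estimated,
           total - (c).cost AS profit,
           CASE WHEN total != 0
                THEN (total - (c).cost) / total * 100
           END AS margin
      FROM (SELECT *, cost_of_sale(saleid) AS c
              FROM sale
            OFFSET 0) AS subq;

Because the subquery is not flattened, cost_of_sale is evaluated once per row, and the client sees plain numeric and boolean columns instead of a composite value it has to parse itself.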
[ { "msg_contents": "Performance Folks,\n\nI just had an article[1] published in which I demonstrated recursive \nPL/pgSQL functions with this function:\n\nCREATE OR REPLACE FUNCTION fib (\n fib_for int\n) RETURNS integer AS $$\nBEGIN\n IF fib_for < 2 THEN\n RETURN fib_for;\n END IF;\n RETURN fib(fib_for - 2) + fib(fib_for - 1);\nEND;\n$$ LANGUAGE plpgsql;\n\nNaturally, it's slow:\n\ntry=# \\timing\ntry=# select fib(28);\n fib\n--------\n317811\n(1 row)\n\nTime: 10642.803 ms\n\nNow, I mistakenly said in my article that PostgreSQL doesn't have \nnative memoization, and so demonstrated how to use a table for \ncaching to speed up the function. It's pretty fast:\n\ntry=# select fib_cached(28);\nfib_cached\n------------\n 317811\n(1 row)\n\nTime: 193.316 ms\n\nBut over the weekend, I was looking at the Pg docs and saw IMMUTABLE, \nand said, \"Oh yeah!\". So I recreated the function with IMMUTABLE. But \nthe performance was not much better:\n\ntry=# select fib(28);\n fib\n--------\n317811\n(1 row)\n\nTime: 8505.668 ms\ntry=# select fib_cached(28);\nfib_cached\n------------\n 317811\n(1 row)\n\nSo, what gives? Am I missing something, or not understanding how \nIMMUTABLE works?\n\nMany TIA,\n\nDavid\n\n1. http://www.onlamp.com/pub/a/onlamp/2006/05/11/postgresql-plpgsql.html\n", "msg_date": "Mon, 15 May 2006 20:15:11 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "IMMUTABLE?" }, { "msg_contents": "David Wheeler <[email protected]> writes:\n> So, what gives? Am I missing something, or not understanding how \n> IMMUTABLE works?\n\nThe latter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2006 23:21:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE? " }, { "msg_contents": "On May 15, 2006, at 20:21, Tom Lane wrote:\n\n>> So, what gives? Am I missing something, or not understanding how\n>> IMMUTABLE works?\n>\n> The latter.\n\nHee-hee! And after all those nice things I wrote about you in a \nprevious email on this list!\n\nBut seriously, the documentation says (as if I need to tell you, but \nI was reading it again to make sure that I'm not insane):\n\n> IMMUTABLE indicates that the function always returns the same \n> result when given the same argument values; that is, it does not do \n> database lookups or otherwise use information not directly present \n> in its argument list. If this option is given, any call of the \n> function with all-constant arguments can be immediately replaced \n> with the function value.\n\nSo that seems pretty clear to me. Now, granted, the recursive calls \nto fib() don't pass a constant argument, but I still would think that \nafter the first time I called fib(28), that the next call to fib(28) \nwould be lightening fast, even if fib(27) wasn't.\n\nSo, uh, would you mind telling me what I'm missing? I'm happy to turn \nthat knowledge into a documentation patch to help future boneheads \nlike myself. :-)\n\nThanks,\n\nDavid\n", "msg_date": "Mon, 15 May 2006 21:22:03 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IMMUTABLE? 
" }, { "msg_contents": "David Wheeler <[email protected]> writes:\n> But seriously, the documentation says (as if I need to tell you, but \n> I was reading it again to make sure that I'm not insane):\n\n>> IMMUTABLE indicates that the function always returns the same \n>> result when given the same argument values; that is, it does not do \n>> database lookups or otherwise use information not directly present \n>> in its argument list. If this option is given, any call of the \n>> function with all-constant arguments can be immediately replaced \n>> with the function value.\n\nSure. As I read it, that's talking about a static transformation:\nplanner sees 2 + 2 (or if you prefer, int4pl(2,2)), planner runs the\nfunction and replaces the expression with 4. Nothing there about\nmemoization.\n\nIt's true that the system *could* memoize (or in our more usual\nparlance, cache function values) given the assumptions embodied in\nIMMUTABLE. But we don't, and I don't see any statement in the docs\nthat promises that we do. For 99% of the functions that the planner\ndeals with, memoization would be seriously counterproductive because\nthe function evaluation cost is comparable to if not less than the\nlookup cost in a memo table. (int4pl is a good case in point.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2006 00:31:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE? " }, { "msg_contents": "On Tue, May 16, 2006 at 12:31:41AM -0400, Tom Lane wrote:\n> It's true that the system *could* memoize (or in our more usual\n> parlance, cache function values) given the assumptions embodied in\n> IMMUTABLE. But we don't, and I don't see any statement in the docs\n> that promises that we do. For 99% of the functions that the planner\n> deals with, memoization would be seriously counterproductive because\n> the function evaluation cost is comparable to if not less than the\n> lookup cost in a memo table. (int4pl is a good case in point.)\n\nThis seems to change as soon as one takes into account user functions.\n\nWhile most immutable functions really seem to be small and their execution\nfast, stable functions often hide complex sql (sometimes combined with\nif-then-else or other program flow logic).\n\nSo irrespective of caching to prevent evaluation across statements, within a\nsingle statement, is there a strong reason why for example in\nWHERE col = f(const) with f() declared as immutable or stable and without an\nindex on col, f() still gets called for every row? Or is this optimization\njust not done yet?\n\n\nJoachim\n\n", "msg_date": "Tue, 16 May 2006 11:48:24 +0200", "msg_from": "Joachim Wieland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE?" }, { "msg_contents": "Joachim Wieland <[email protected]> writes:\n> So irrespective of caching to prevent evaluation across statements, within a\n> single statement, is there a strong reason why for example in\n> WHERE col = f(const) with f() declared as immutable or stable and without an\n> index on col, f() still gets called for every row? Or is this optimization\n> just not done yet?\n\nThe above statement is not correct, at least not for immutable functions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2006 09:33:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE? 
" }, { "msg_contents": "On Tue, May 16, 2006 at 09:33:14AM -0400, Tom Lane wrote:\n> Joachim Wieland <[email protected]> writes:\n> > So irrespective of caching to prevent evaluation across statements, within a\n> > single statement, is there a strong reason why for example in\n> > WHERE col = f(const) with f() declared as immutable or stable and without an\n> > index on col, f() still gets called for every row? Or is this optimization\n> > just not done yet?\n\n> The above statement is not correct, at least not for immutable functions.\n\nSo an immutable function gets evaluated once but a stable function still gets\ncalled for every row? Wouldn't it make sense to call a stable function only\nonce as well?\n\n\nJoachim\n", "msg_date": "Tue, 16 May 2006 18:55:14 +0200", "msg_from": "Joachim Wieland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE?" }, { "msg_contents": "On May 15, 2006, at 21:31, Tom Lane wrote:\n\n> Sure. As I read it, that's talking about a static transformation:\n> planner sees 2 + 2 (or if you prefer, int4pl(2,2)), planner runs the\n> function and replaces the expression with 4. Nothing there about\n> memoization.\n\nOh, I see. So it's more like a constant or C macro.\n\n> It's true that the system *could* memoize (or in our more usual\n> parlance, cache function values) given the assumptions embodied in\n> IMMUTABLE. But we don't, and I don't see any statement in the docs\n> that promises that we do. For 99% of the functions that the planner\n> deals with, memoization would be seriously counterproductive because\n> the function evaluation cost is comparable to if not less than the\n> lookup cost in a memo table. (int4pl is a good case in point.)\n\nYes, but there are definitely programming cases where memoization/ \ncaching definitely helps. And it's easy to tell for a given function \nwhether or not it really helps by simply trying it with CACHED and \nwithout.\n\nWould this be a simple thing to implement?\n\nBest,\n\nDavid\n", "msg_date": "Tue, 16 May 2006 11:00:27 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IMMUTABLE? " }, { "msg_contents": "> Yes, but there are definitely programming cases where \n> memoization/caching definitely helps. And it's easy to tell for a given \n> function whether or not it really helps by simply trying it with CACHED \n> and without.\n> \n> Would this be a simple thing to implement?\n\nIt's called a \"table\" :)\n\n", "msg_date": "Wed, 17 May 2006 09:29:13 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE?" }, { "msg_contents": "On May 16, 2006, at 18:29, Christopher Kings-Lynne wrote:\n\n>> Yes, but there are definitely programming cases where memoization/ \n>> caching definitely helps. And it's easy to tell for a given \n>> function whether or not it really helps by simply trying it with \n>> CACHED and without.\n>> Would this be a simple thing to implement?\n>\n> It's called a \"table\" :)\n\n http://www.justatheory.com/computers/databases/postgresql/ \nhigher_order_plpgsql.html\n\nYes, I know. :-P But it'd be easier to have a CACHED keyword, of course.\n\nBest,\n\nDavid\n", "msg_date": "Tue, 16 May 2006 19:08:51 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IMMUTABLE?" 
}, { "msg_contents": "On Tue, May 16, 2006 at 07:08:51PM -0700, David Wheeler wrote:\n> On May 16, 2006, at 18:29, Christopher Kings-Lynne wrote:\n> \n> >>Yes, but there are definitely programming cases where memoization/ \n> >>caching definitely helps. And it's easy to tell for a given \n> >>function whether or not it really helps by simply trying it with \n> >>CACHED and without.\n> >>Would this be a simple thing to implement?\n> >\n> >It's called a \"table\" :)\n> \n> http://www.justatheory.com/computers/databases/postgresql/ \n> higher_order_plpgsql.html\n> \n> Yes, I know. :-P But it'd be easier to have a CACHED keyword, of course.\n\nRather than worrying about a generic form of memoization, what would be\nextremely valuable would be to improve detection of the same function\nbeing used multiple times in a query, ie:\n\nSELECT moo(x), moo(x)/2 FROM table;\n\nAFAIK PostgreSQL will currently execute moo(x) twice. Even if it knows\nhow to optimize this brain-dead example, I think there are other\nexamples it can't optimize right now. Having a much simpler memoization\nscheme that only works on a tuple-by-tuple basis would probably\neliminate a lot of those (It wouldn't work for any executor node that\nhas to read it's entire input before returning anything, though, such as\nsort).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 17 May 2006 10:51:40 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IMMUTABLE?" } ]
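For what it's worth, the table-backed fib_cached mentioned at the top of the thread has to look something like the sketch below; the article's actual implementation is not shown in this thread, so treat the names and details as guesses. The point Tom makes stands: the memoization lives in an ordinary table, not in IMMUTABLE, and since the function writes to that table it must be VOLATILE.

    CREATE TABLE fib_cache (
        n   integer PRIMARY KEY,
        fib integer NOT NULL
    );

    CREATE OR REPLACE FUNCTION fib_cached(p integer) RETURNS integer AS $$
    DECLARE
        ret integer;
    BEGIN
        IF p < 2 THEN
            RETURN p;
        END IF;
        -- cache hit: return the memoized value
        SELECT fib INTO ret FROM fib_cache WHERE n = p;
        IF FOUND THEN
            RETURN ret;
        END IF;
        -- cache miss: recurse, then remember the result
        -- (concurrent callers could race on this INSERT; fine for a benchmark)
        ret := fib_cached(p - 2) + fib_cached(p - 1);
        INSERT INTO fib_cache (n, fib) VALUES (p, ret);
        RETURN ret;
    END;
    $$ LANGUAGE plpgsql VOLATILE;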
[ { "msg_contents": "Hi List,\n\nIn the past few weeks we have been developing a read-heavy \nmysql-benchmark to have an alternative take at cpu/platform-performance. \nNot really to have a look at how fast mysql can be.\n\nThis benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \nafter our website's production database and the load generated on it is \nmodelled after a simplified version of our visitor behaviour.\n\nLong story short, we think the test is a nice example of the relatively \nlightweight, read-heavy webapplications out there and therefore decided \nto have a go at postgresql as well.\nOf course the queries and indexes have been adjusted to (by our \nknowledge) best suit postgresql, while maintaining the same output to \nthe application/interface layer. While the initial structure only got \npostgresql at about half the performance of mysql 4.1.x, the current \nversion of our postgresql-benchmark has quite similar results to mysql \n4.1.x, but both are quite a bit slower than 5.0.x (I think its about \n30-40% faster).\n\nSince the results from those benchmarks are not yet public (they will be \nput together in a story at our website), I won't go into too much \ndetails about this benchmark.\n\nCurrently we're having a look at a Sun T2000 and will be looking at will \nbe looking at other machines as well in the future. We are running the \nsun-release of postgresql 8.1.3 on that T2000 now, but are looking at \ncompiling the cvs-head version (for its index-root-cache) somewhere this \nweek.\n\nMy guess is there are a few people on this list who are interested in \nsome dtrace results taken during our benchmarks on that T2000.\nAlthough my knowledge of both Solaris and Dtrace are very limited, I \nalready took some samples of the system and user calls. I used Jignesh \nShah's scripts for that: \nhttp://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on\n\nYou can find the samples here:\nhttp://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\nhttp://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n\nAnd I also did the memcpy-scripts, here:\nhttp://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\nhttp://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n(this last log is 3.5MB)\n\nIf anyone is interested in some more dtrace results, let me know (and \ntell me what commands to run ;-) ).\n\nBest regards,\n\nArjen\n", "msg_date": "Tue, 16 May 2006 11:33:32 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Pgsql (and mysql) benchmark on T2000/Solaris and some profiling" }, { "msg_contents": "\n\"Arjen van der Meijden\" <[email protected]> wrote\n>\n> Long story short, we think the test is a nice example of the relatively\n> lightweight, read-heavy webapplications out there and therefore decided\n> to have a go at postgresql as well.\n>\n\nSome sort of web query behavior is quite optimized in MySQL. For example,\nthe query below is runing very fast due to the query result cache\nimplementation in MySQL.\n\nLoop N times\n SELECT * FROM A WHERE i = 1;\nEnd loop.\n\n> You can find the samples here:\n> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>\n\nIMHO, without knowing the exact queries you sent, these logs are not very\nuseful :-(. 
I would suggest you compare the queries in pair and then post\ntheir dtrace/timing results here (just like the previous Firebird vs.\nPostgreSQL comparison did).\n\nRegards,\nQingqing\n\n\n", "msg_date": "Tue, 16 May 2006 18:01:26 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some profiling" }, { "msg_contents": "Qingqing Zhou wrote:\n> \"Arjen van der Meijden\" <[email protected]> wrote\n> Some sort of web query behavior is quite optimized in MySQL. For example,\n> the query below is runing very fast due to the query result cache\n> implementation in MySQL.\n> \n> Loop N times\n> SELECT * FROM A WHERE i = 1;\n> End loop.\n\nYeah, I know. But our queries get random parameters though for \nidentifiers and the like, so its not just a few queries getting executed \na lot of times, there are. In a run for which I just logged all queries, \nalmost 42k distinct queries executed from 128k in total (it may actually \nbe more random than real life).\nBesides that, they are not so extremely simple queries as your example. \nMost join at least two tables, while the rest often joins three to five.\n\nBut I agree, MySQL has a big advantage with its query result cache. That \nmakes the current performance of postgresql even more impressive in this \nsituation, since the query cache of the 4.1.x run was enabled as well.\n\n> IMHO, without knowing the exact queries you sent, these logs are not very\n> useful :-(. I would suggest you compare the queries in pair and then post\n> their dtrace/timing results here (just like the previous Firebird vs.\n> PostgreSQL comparison did).\n\nWell, I'm bound to some privacy and copyright laws, but I'll see if I \ncan show some example plans of at least the top few queries later today \n(the top two is resp 27% and 21% of the total time).\nBut those top queries aren't the only ones run during the benchmarks or \nin the production environment, nor are they run exclusively at any given \ntime. So the overall load-picture should be usefull too, shouldn't it?\n\nBest regards,\n\nArjen\n", "msg_date": "Tue, 16 May 2006 12:47:59 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "Hi Arjen,\n\nLooking at your outputs...of syscall and usrcall it looks like\n\n* Spending too much time in semsys .... which means you have too many \nconnections and they are contending to get a lock.. which is potentially \nthe WAL log lock\n\n\n* llseek is high which means you can obviously gain a bit with the right \nfile system/files tuning by caching them right.\n\n\nHave you set the values for Solaris for T2000 tuned for Postgresql?\n\nCheck out the tunables from the following URL\n\nhttp://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n\nTry specially the /etc/system and postgresql.conf changes and see if it \nchanges/improves your performance.\n\n\nRegards,\nJignesh\n\n\nArjen van der Meijden wrote:\n> Hi List,\n> \n> In the past few weeks we have been developing a read-heavy \n> mysql-benchmark to have an alternative take at cpu/platform-performance. 
\n> Not really to have a look at how fast mysql can be.\n> \n> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \n> after our website's production database and the load generated on it is \n> modelled after a simplified version of our visitor behaviour.\n> \n> Long story short, we think the test is a nice example of the relatively \n> lightweight, read-heavy webapplications out there and therefore decided \n> to have a go at postgresql as well.\n> Of course the queries and indexes have been adjusted to (by our \n> knowledge) best suit postgresql, while maintaining the same output to \n> the application/interface layer. While the initial structure only got \n> postgresql at about half the performance of mysql 4.1.x, the current \n> version of our postgresql-benchmark has quite similar results to mysql \n> 4.1.x, but both are quite a bit slower than 5.0.x (I think its about \n> 30-40% faster).\n> \n> Since the results from those benchmarks are not yet public (they will be \n> put together in a story at our website), I won't go into too much \n> details about this benchmark.\n> \n> Currently we're having a look at a Sun T2000 and will be looking at will \n> be looking at other machines as well in the future. We are running the \n> sun-release of postgresql 8.1.3 on that T2000 now, but are looking at \n> compiling the cvs-head version (for its index-root-cache) somewhere this \n> week.\n> \n> My guess is there are a few people on this list who are interested in \n> some dtrace results taken during our benchmarks on that T2000.\n> Although my knowledge of both Solaris and Dtrace are very limited, I \n> already took some samples of the system and user calls. I used Jignesh \n> Shah's scripts for that: \n> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n> \n> \n> You can find the samples here:\n> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n> \n> And I also did the memcpy-scripts, here:\n> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n> (this last log is 3.5MB)\n> \n> If anyone is interested in some more dtrace results, let me know (and \n> tell me what commands to run ;-) ).\n> \n> Best regards,\n> \n> Arjen\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Tue, 16 May 2006 14:19:47 +0100", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "Hi Jignesh,\n\nJignesh K. Shah wrote:\n> Hi Arjen,\n> \n> Looking at your outputs...of syscall and usrcall it looks like\n> \n> * Spending too much time in semsys .... which means you have too many \n> connections and they are contending to get a lock.. which is potentially \n> the WAL log lock\n> \n> * llseek is high which means you can obviously gain a bit with the right \n> file system/files tuning by caching them right.\n> \n> Have you set the values for Solaris for T2000 tuned for Postgresql?\n\nNot particularly, we got a \"special T2000 Solaris dvd\" from your \ncolleagues here in the Netherlands and installed that (actually one of \nyour colleagues did). Doing so all the \"better default\" \n/etc/system-settings are supposed to be set. 
I haven't really checked \nthat they are, since two of your colleagues have been working on it for \nthe mysql-version of the benchmark and I assumed they'd have verified that.\n\n> Check out the tunables from the following URL\n> \n> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> \n> Try specially the /etc/system and postgresql.conf changes and see if it \n> changes/improves your performance.\n\nI will see that those tunables are verified to be set.\n\nI am a bit surprised though about your remarks, since they'd point at \nthe I/O being in the way? But we only have about 600k/sec i/o according \nto vmstat. The database easily fits in memory.\nIn total I logged about 500k queries of which only 70k where altering \nqueries, of which almost all where inserts in log-tables which aren't \nactively read in this benchmark.\n\nBut I'll give it a try.\n\nBest regards,\n\nArjen\n\n> \n> Arjen van der Meijden wrote:\n>> Hi List,\n>>\n>> In the past few weeks we have been developing a read-heavy \n>> mysql-benchmark to have an alternative take at \n>> cpu/platform-performance. Not really to have a look at how fast mysql \n>> can be.\n>>\n>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \n>> after our website's production database and the load generated on it \n>> is modelled after a simplified version of our visitor behaviour.\n>>\n>> Long story short, we think the test is a nice example of the \n>> relatively lightweight, read-heavy webapplications out there and \n>> therefore decided to have a go at postgresql as well.\n>> Of course the queries and indexes have been adjusted to (by our \n>> knowledge) best suit postgresql, while maintaining the same output to \n>> the application/interface layer. While the initial structure only got \n>> postgresql at about half the performance of mysql 4.1.x, the current \n>> version of our postgresql-benchmark has quite similar results to mysql \n>> 4.1.x, but both are quite a bit slower than 5.0.x (I think its about \n>> 30-40% faster).\n>>\n>> Since the results from those benchmarks are not yet public (they will \n>> be put together in a story at our website), I won't go into too much \n>> details about this benchmark.\n>>\n>> Currently we're having a look at a Sun T2000 and will be looking at \n>> will be looking at other machines as well in the future. We are \n>> running the sun-release of postgresql 8.1.3 on that T2000 now, but are \n>> looking at compiling the cvs-head version (for its index-root-cache) \n>> somewhere this week.\n>>\n>> My guess is there are a few people on this list who are interested in \n>> some dtrace results taken during our benchmarks on that T2000.\n>> Although my knowledge of both Solaris and Dtrace are very limited, I \n>> already took some samples of the system and user calls. 
I used Jignesh \n>> Shah's scripts for that: \n>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>\n>>\n>> You can find the samples here:\n>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>\n>> And I also did the memcpy-scripts, here:\n>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>> (this last log is 3.5MB)\n>>\n>> If anyone is interested in some more dtrace results, let me know (and \n>> tell me what commands to run ;-) ).\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Tue, 16 May 2006 15:54:14 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "Hi Arjen,\n\nCan you send me my colleagues's names in a private email?\n\nOne of the drawbacks of the syscall.d script is relative timings and \nhence if system CPU usage is very low, it gives the relative weightage \nabout what portion in that low is associated with what call.. So even if \nyou have say..1% system time.. it says that most of it was IO related or \nsemsys related. So iostat output with -c option to include CPU times \nhelps to put it in the right perspective.\n\n\nAlso do check the tunables mentioned and make sure they are set.\n\nRegards,\nJignesh\n\n\nArjen van der Meijden wrote:\n\n> Hi Jignesh,\n>\n> Jignesh K. Shah wrote:\n>\n>> Hi Arjen,\n>>\n>> Looking at your outputs...of syscall and usrcall it looks like\n>>\n>> * Spending too much time in semsys .... which means you have too many \n>> connections and they are contending to get a lock.. which is \n>> potentially the WAL log lock\n>>\n>> * llseek is high which means you can obviously gain a bit with the \n>> right file system/files tuning by caching them right.\n>>\n>> Have you set the values for Solaris for T2000 tuned for Postgresql?\n>\n>\n> Not particularly, we got a \"special T2000 Solaris dvd\" from your \n> colleagues here in the Netherlands and installed that (actually one of \n> your colleagues did). Doing so all the \"better default\" \n> /etc/system-settings are supposed to be set. I haven't really checked \n> that they are, since two of your colleagues have been working on it \n> for the mysql-version of the benchmark and I assumed they'd have \n> verified that.\n>\n>> Check out the tunables from the following URL\n>>\n>> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n>>\n>> Try specially the /etc/system and postgresql.conf changes and see if \n>> it changes/improves your performance.\n>\n>\n> I will see that those tunables are verified to be set.\n>\n> I am a bit surprised though about your remarks, since they'd point at \n> the I/O being in the way? But we only have about 600k/sec i/o \n> according to vmstat. 
The database easily fits in memory.\n> In total I logged about 500k queries of which only 70k where altering \n> queries, of which almost all where inserts in log-tables which aren't \n> actively read in this benchmark.\n>\n> But I'll give it a try.\n>\n> Best regards,\n>\n> Arjen\n>\n>>\n>> Arjen van der Meijden wrote:\n>>\n>>> Hi List,\n>>>\n>>> In the past few weeks we have been developing a read-heavy \n>>> mysql-benchmark to have an alternative take at \n>>> cpu/platform-performance. Not really to have a look at how fast \n>>> mysql can be.\n>>>\n>>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \n>>> after our website's production database and the load generated on it \n>>> is modelled after a simplified version of our visitor behaviour.\n>>>\n>>> Long story short, we think the test is a nice example of the \n>>> relatively lightweight, read-heavy webapplications out there and \n>>> therefore decided to have a go at postgresql as well.\n>>> Of course the queries and indexes have been adjusted to (by our \n>>> knowledge) best suit postgresql, while maintaining the same output \n>>> to the application/interface layer. While the initial structure only \n>>> got postgresql at about half the performance of mysql 4.1.x, the \n>>> current version of our postgresql-benchmark has quite similar \n>>> results to mysql 4.1.x, but both are quite a bit slower than 5.0.x \n>>> (I think its about 30-40% faster).\n>>>\n>>> Since the results from those benchmarks are not yet public (they \n>>> will be put together in a story at our website), I won't go into too \n>>> much details about this benchmark.\n>>>\n>>> Currently we're having a look at a Sun T2000 and will be looking at \n>>> will be looking at other machines as well in the future. We are \n>>> running the sun-release of postgresql 8.1.3 on that T2000 now, but \n>>> are looking at compiling the cvs-head version (for its \n>>> index-root-cache) somewhere this week.\n>>>\n>>> My guess is there are a few people on this list who are interested \n>>> in some dtrace results taken during our benchmarks on that T2000.\n>>> Although my knowledge of both Solaris and Dtrace are very limited, I \n>>> already took some samples of the system and user calls. I used \n>>> Jignesh Shah's scripts for that: \n>>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>>\n>>>\n>>> You can find the samples here:\n>>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>>\n>>> And I also did the memcpy-scripts, here:\n>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>>> (this last log is 3.5MB)\n>>>\n>>> If anyone is interested in some more dtrace results, let me know \n>>> (and tell me what commands to run ;-) ).\n>>>\n>>> Best regards,\n>>>\n>>> Arjen\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 3: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/docs/faq\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 16 May 2006 16:52:58 +0100", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "Hi Jignesh,\n\nThe settings from that 'special T2000 dvd' differed from the recommended \nsettings on the website you provided. But I don't see much difference in \nperformance with any of the adjustments, it appears to be more or less \nthe same.\n\nHere are a few iostat lines by the way:\n\n sd0 sd1 sd2 nfs1 cpu\nkps tps serv kps tps serv kps tps serv kps tps serv us sy wt id\n 7 1 12 958 50 35 0 0 7 0 0 0 13 1 0 85\n 0 0 0 2353 296 3 0 0 0 0 0 0 92 7 0 1\n 0 0 0 2062 326 2 0 0 0 0 0 0 93 7 0 0\n 1 1 1 1575 350 0 0 0 0 0 0 0 92 7 0 1\n 0 0 0 1628 362 0 0 0 0 0 0 0 92 8 0 1\n\nIt appears to be doing a little less kps/tps on sd1 when I restore my \nown postgresql.conf-settings. (default wal/checkpoints, 20k buffers, 2k \nwork mem).\n\nIs it possible to trace the stack's for semsys, like the memcpy-traces, \nor are those of no interest here?\n\nBest regards,\n\nArjen\n\n\nOn 16-5-2006 17:52, Jignesh K. Shah wrote:\n> Hi Arjen,\n> \n> Can you send me my colleagues's names in a private email?\n> \n> One of the drawbacks of the syscall.d script is relative timings and \n> hence if system CPU usage is very low, it gives the relative weightage \n> about what portion in that low is associated with what call.. So even if \n> you have say..1% system time.. it says that most of it was IO related or \n> semsys related. So iostat output with -c option to include CPU times \n> helps to put it in the right perspective.\n> \n> \n> Also do check the tunables mentioned and make sure they are set.\n> \n> Regards,\n> Jignesh\n> \n> \n> Arjen van der Meijden wrote:\n> \n>> Hi Jignesh,\n>>\n>> Jignesh K. Shah wrote:\n>>\n>>> Hi Arjen,\n>>>\n>>> Looking at your outputs...of syscall and usrcall it looks like\n>>>\n>>> * Spending too much time in semsys .... which means you have too many \n>>> connections and they are contending to get a lock.. which is \n>>> potentially the WAL log lock\n>>>\n>>> * llseek is high which means you can obviously gain a bit with the \n>>> right file system/files tuning by caching them right.\n>>>\n>>> Have you set the values for Solaris for T2000 tuned for Postgresql?\n>>\n>>\n>> Not particularly, we got a \"special T2000 Solaris dvd\" from your \n>> colleagues here in the Netherlands and installed that (actually one of \n>> your colleagues did). Doing so all the \"better default\" \n>> /etc/system-settings are supposed to be set. I haven't really checked \n>> that they are, since two of your colleagues have been working on it \n>> for the mysql-version of the benchmark and I assumed they'd have \n>> verified that.\n>>\n>>> Check out the tunables from the following URL\n>>>\n>>> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n>>>\n>>> Try specially the /etc/system and postgresql.conf changes and see if \n>>> it changes/improves your performance.\n>>\n>>\n>> I will see that those tunables are verified to be set.\n>>\n>> I am a bit surprised though about your remarks, since they'd point at \n>> the I/O being in the way? But we only have about 600k/sec i/o \n>> according to vmstat. 
The database easily fits in memory.\n>> In total I logged about 500k queries of which only 70k where altering \n>> queries, of which almost all where inserts in log-tables which aren't \n>> actively read in this benchmark.\n>>\n>> But I'll give it a try.\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>>>\n>>> Arjen van der Meijden wrote:\n>>>\n>>>> Hi List,\n>>>>\n>>>> In the past few weeks we have been developing a read-heavy \n>>>> mysql-benchmark to have an alternative take at \n>>>> cpu/platform-performance. Not really to have a look at how fast \n>>>> mysql can be.\n>>>>\n>>>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \n>>>> after our website's production database and the load generated on it \n>>>> is modelled after a simplified version of our visitor behaviour.\n>>>>\n>>>> Long story short, we think the test is a nice example of the \n>>>> relatively lightweight, read-heavy webapplications out there and \n>>>> therefore decided to have a go at postgresql as well.\n>>>> Of course the queries and indexes have been adjusted to (by our \n>>>> knowledge) best suit postgresql, while maintaining the same output \n>>>> to the application/interface layer. While the initial structure only \n>>>> got postgresql at about half the performance of mysql 4.1.x, the \n>>>> current version of our postgresql-benchmark has quite similar \n>>>> results to mysql 4.1.x, but both are quite a bit slower than 5.0.x \n>>>> (I think its about 30-40% faster).\n>>>>\n>>>> Since the results from those benchmarks are not yet public (they \n>>>> will be put together in a story at our website), I won't go into too \n>>>> much details about this benchmark.\n>>>>\n>>>> Currently we're having a look at a Sun T2000 and will be looking at \n>>>> will be looking at other machines as well in the future. We are \n>>>> running the sun-release of postgresql 8.1.3 on that T2000 now, but \n>>>> are looking at compiling the cvs-head version (for its \n>>>> index-root-cache) somewhere this week.\n>>>>\n>>>> My guess is there are a few people on this list who are interested \n>>>> in some dtrace results taken during our benchmarks on that T2000.\n>>>> Although my knowledge of both Solaris and Dtrace are very limited, I \n>>>> already took some samples of the system and user calls. 
I used \n>>>> Jignesh Shah's scripts for that: \n>>>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>>>\n>>>>\n>>>> You can find the samples here:\n>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>>>\n>>>> And I also did the memcpy-scripts, here:\n>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>>>> (this last log is 3.5MB)\n>>>>\n>>>> If anyone is interested in some more dtrace results, let me know \n>>>> (and tell me what commands to run ;-) ).\n>>>>\n>>>> Best regards,\n>>>>\n>>>> Arjen\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>\n>>>> http://www.postgresql.org/docs/faq\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n> \n> \n", "msg_date": "Tue, 16 May 2006 20:08:32 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "You usertime is way too high for T2000...\n\nIf you have a 6 core machine with 24 threads, it says all 24 threads are \nreported as being busy with iostat output.\n\nBest way to debug this is use\n\nprstat -amL\n(or if you are dumping it in a file prstat -amLc > prstat.txt)\n\nand find the pids with high user cpu time and then use the usrcall.d on \nfew of those pids.\n\nAlso how many database connections do you have and what's the type of \nquery run by each connection?\n\n-Jignesh\n\n\n\nArjen van der Meijden wrote:\n> Hi Jignesh,\n> \n> The settings from that 'special T2000 dvd' differed from the recommended \n> settings on the website you provided. But I don't see much difference in \n> performance with any of the adjustments, it appears to be more or less \n> the same.\n> \n> Here are a few iostat lines by the way:\n> \n> sd0 sd1 sd2 nfs1 cpu\n> kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id\n> 7 1 12 958 50 35 0 0 7 0 0 0 13 1 0 85\n> 0 0 0 2353 296 3 0 0 0 0 0 0 92 7 0 1\n> 0 0 0 2062 326 2 0 0 0 0 0 0 93 7 0 0\n> 1 1 1 1575 350 0 0 0 0 0 0 0 92 7 0 1\n> 0 0 0 1628 362 0 0 0 0 0 0 0 92 8 0 1\n> \n> It appears to be doing a little less kps/tps on sd1 when I restore my \n> own postgresql.conf-settings. (default wal/checkpoints, 20k buffers, 2k \n> work mem).\n> \n> Is it possible to trace the stack's for semsys, like the memcpy-traces, \n> or are those of no interest here?\n> \n> Best regards,\n> \n> Arjen\n> \n> \n> On 16-5-2006 17:52, Jignesh K. Shah wrote:\n> \n>> Hi Arjen,\n>>\n>> Can you send me my colleagues's names in a private email?\n>>\n>> One of the drawbacks of the syscall.d script is relative timings and \n>> hence if system CPU usage is very low, it gives the relative weightage \n>> about what portion in that low is associated with what call.. So even \n>> if you have say..1% system time.. it says that most of it was IO \n>> related or semsys related. 
So iostat output with -c option to include \n>> CPU times helps to put it in the right perspective.\n>>\n>>\n>> Also do check the tunables mentioned and make sure they are set.\n>>\n>> Regards,\n>> Jignesh\n>>\n>>\n>> Arjen van der Meijden wrote:\n>>\n>>> Hi Jignesh,\n>>>\n>>> Jignesh K. Shah wrote:\n>>>\n>>>> Hi Arjen,\n>>>>\n>>>> Looking at your outputs...of syscall and usrcall it looks like\n>>>>\n>>>> * Spending too much time in semsys .... which means you have too \n>>>> many connections and they are contending to get a lock.. which is \n>>>> potentially the WAL log lock\n>>>>\n>>>> * llseek is high which means you can obviously gain a bit with the \n>>>> right file system/files tuning by caching them right.\n>>>>\n>>>> Have you set the values for Solaris for T2000 tuned for Postgresql?\n>>>\n>>>\n>>>\n>>> Not particularly, we got a \"special T2000 Solaris dvd\" from your \n>>> colleagues here in the Netherlands and installed that (actually one \n>>> of your colleagues did). Doing so all the \"better default\" \n>>> /etc/system-settings are supposed to be set. I haven't really checked \n>>> that they are, since two of your colleagues have been working on it \n>>> for the mysql-version of the benchmark and I assumed they'd have \n>>> verified that.\n>>>\n>>>> Check out the tunables from the following URL\n>>>>\n>>>> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n>>>>\n>>>> Try specially the /etc/system and postgresql.conf changes and see \n>>>> if it changes/improves your performance.\n>>>\n>>>\n>>>\n>>> I will see that those tunables are verified to be set.\n>>>\n>>> I am a bit surprised though about your remarks, since they'd point at \n>>> the I/O being in the way? But we only have about 600k/sec i/o \n>>> according to vmstat. The database easily fits in memory.\n>>> In total I logged about 500k queries of which only 70k where altering \n>>> queries, of which almost all where inserts in log-tables which aren't \n>>> actively read in this benchmark.\n>>>\n>>> But I'll give it a try.\n>>>\n>>> Best regards,\n>>>\n>>> Arjen\n>>>\n>>>>\n>>>> Arjen van der Meijden wrote:\n>>>>\n>>>>> Hi List,\n>>>>>\n>>>>> In the past few weeks we have been developing a read-heavy \n>>>>> mysql-benchmark to have an alternative take at \n>>>>> cpu/platform-performance. Not really to have a look at how fast \n>>>>> mysql can be.\n>>>>>\n>>>>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled \n>>>>> after our website's production database and the load generated on \n>>>>> it is modelled after a simplified version of our visitor behaviour.\n>>>>>\n>>>>> Long story short, we think the test is a nice example of the \n>>>>> relatively lightweight, read-heavy webapplications out there and \n>>>>> therefore decided to have a go at postgresql as well.\n>>>>> Of course the queries and indexes have been adjusted to (by our \n>>>>> knowledge) best suit postgresql, while maintaining the same output \n>>>>> to the application/interface layer. 
While the initial structure \n>>>>> only got postgresql at about half the performance of mysql 4.1.x, \n>>>>> the current version of our postgresql-benchmark has quite similar \n>>>>> results to mysql 4.1.x, but both are quite a bit slower than 5.0.x \n>>>>> (I think its about 30-40% faster).\n>>>>>\n>>>>> Since the results from those benchmarks are not yet public (they \n>>>>> will be put together in a story at our website), I won't go into \n>>>>> too much details about this benchmark.\n>>>>>\n>>>>> Currently we're having a look at a Sun T2000 and will be looking at \n>>>>> will be looking at other machines as well in the future. We are \n>>>>> running the sun-release of postgresql 8.1.3 on that T2000 now, but \n>>>>> are looking at compiling the cvs-head version (for its \n>>>>> index-root-cache) somewhere this week.\n>>>>>\n>>>>> My guess is there are a few people on this list who are interested \n>>>>> in some dtrace results taken during our benchmarks on that T2000.\n>>>>> Although my knowledge of both Solaris and Dtrace are very limited, \n>>>>> I already took some samples of the system and user calls. I used \n>>>>> Jignesh Shah's scripts for that: \n>>>>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>>>>\n>>>>>\n>>>>> You can find the samples here:\n>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>>>>\n>>>>> And I also did the memcpy-scripts, here:\n>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>>>>> (this last log is 3.5MB)\n>>>>>\n>>>>> If anyone is interested in some more dtrace results, let me know \n>>>>> (and tell me what commands to run ;-) ).\n>>>>>\n>>>>> Best regards,\n>>>>>\n>>>>> Arjen\n>>>>>\n>>>>> ---------------------------(end of \n>>>>> broadcast)---------------------------\n>>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>>\n>>>>> http://www.postgresql.org/docs/faq\n>>>>\n>>>>\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 6: explain analyze is your friend\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: Don't 'kill -9' the postmaster\n>>\n>>\n>>\n", "msg_date": "Wed, 17 May 2006 12:07:53 +0100", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "We have the 4 core machine. However, these numbers are taken during a \nbenchmark, not normal work load. So the output should display the system \nbeing working fully ;)\n\nSo its postgres doing a lot of work and you already had a look at the \nusrcall for that.\n\nThe benchmark just tries to do the queries for \"random page visits\". \nThis totals up to about some 80 different queries being executed with \nmostly random parameters. The workload is generated using php so there \nare no connection pools, nor prepared statements.\n\nThe queries vary, but are all relatively lightweight queries with less \nthan 6 or 7 joinable tables. Almost all queries can use indexes. Most \ntables are under a few MB of data, although there are a few larger than \nthat. Most records are relatively small, consisting of mostly numbers \n(id's and such).\n\nThe results presented here was with 25 concurrent connections.\n\nBest regards,\n\nArjen\n\n\nJignesh K. 
Shah wrote:\n> You usertime is way too high for T2000...\n> \n> If you have a 6 core machine with 24 threads, it says all 24 threads are \n> reported as being busy with iostat output.\n> \n> Best way to debug this is use\n> \n> prstat -amL\n> (or if you are dumping it in a file prstat -amLc > prstat.txt)\n> \n> and find the pids with high user cpu time and then use the usrcall.d on \n> few of those pids.\n> \n> Also how many database connections do you have and what's the type of \n> query run by each connection?\n> \n> -Jignesh\n> \n> \n> \n> Arjen van der Meijden wrote:\n>> Hi Jignesh,\n>>\n>> The settings from that 'special T2000 dvd' differed from the \n>> recommended settings on the website you provided. But I don't see much \n>> difference in performance with any of the adjustments, it appears to \n>> be more or less the same.\n>>\n>> Here are a few iostat lines by the way:\n>>\n>> sd0 sd1 sd2 nfs1 cpu\n>> kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id\n>> 7 1 12 958 50 35 0 0 7 0 0 0 13 1 0 85\n>> 0 0 0 2353 296 3 0 0 0 0 0 0 92 7 0 1\n>> 0 0 0 2062 326 2 0 0 0 0 0 0 93 7 0 0\n>> 1 1 1 1575 350 0 0 0 0 0 0 0 92 7 0 1\n>> 0 0 0 1628 362 0 0 0 0 0 0 0 92 8 0 1\n>>\n>> It appears to be doing a little less kps/tps on sd1 when I restore my \n>> own postgresql.conf-settings. (default wal/checkpoints, 20k buffers, \n>> 2k work mem).\n>>\n>> Is it possible to trace the stack's for semsys, like the \n>> memcpy-traces, or are those of no interest here?\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>>\n>> On 16-5-2006 17:52, Jignesh K. Shah wrote:\n>>\n>>> Hi Arjen,\n>>>\n>>> Can you send me my colleagues's names in a private email?\n>>>\n>>> One of the drawbacks of the syscall.d script is relative timings and \n>>> hence if system CPU usage is very low, it gives the relative \n>>> weightage about what portion in that low is associated with what \n>>> call.. So even if you have say..1% system time.. it says that most of \n>>> it was IO related or semsys related. So iostat output with -c option \n>>> to include CPU times helps to put it in the right perspective.\n>>>\n>>>\n>>> Also do check the tunables mentioned and make sure they are set.\n>>>\n>>> Regards,\n>>> Jignesh\n>>>\n>>>\n>>> Arjen van der Meijden wrote:\n>>>\n>>>> Hi Jignesh,\n>>>>\n>>>> Jignesh K. Shah wrote:\n>>>>\n>>>>> Hi Arjen,\n>>>>>\n>>>>> Looking at your outputs...of syscall and usrcall it looks like\n>>>>>\n>>>>> * Spending too much time in semsys .... which means you have too \n>>>>> many connections and they are contending to get a lock.. which is \n>>>>> potentially the WAL log lock\n>>>>>\n>>>>> * llseek is high which means you can obviously gain a bit with the \n>>>>> right file system/files tuning by caching them right.\n>>>>>\n>>>>> Have you set the values for Solaris for T2000 tuned for Postgresql?\n>>>>\n>>>>\n>>>>\n>>>> Not particularly, we got a \"special T2000 Solaris dvd\" from your \n>>>> colleagues here in the Netherlands and installed that (actually one \n>>>> of your colleagues did). Doing so all the \"better default\" \n>>>> /etc/system-settings are supposed to be set. 
I haven't really \n>>>> checked that they are, since two of your colleagues have been \n>>>> working on it for the mysql-version of the benchmark and I assumed \n>>>> they'd have verified that.\n>>>>\n>>>>> Check out the tunables from the following URL\n>>>>>\n>>>>> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n>>>>>\n>>>>> Try specially the /etc/system and postgresql.conf changes and see \n>>>>> if it changes/improves your performance.\n>>>>\n>>>>\n>>>>\n>>>> I will see that those tunables are verified to be set.\n>>>>\n>>>> I am a bit surprised though about your remarks, since they'd point \n>>>> at the I/O being in the way? But we only have about 600k/sec i/o \n>>>> according to vmstat. The database easily fits in memory.\n>>>> In total I logged about 500k queries of which only 70k where \n>>>> altering queries, of which almost all where inserts in log-tables \n>>>> which aren't actively read in this benchmark.\n>>>>\n>>>> But I'll give it a try.\n>>>>\n>>>> Best regards,\n>>>>\n>>>> Arjen\n>>>>\n>>>>>\n>>>>> Arjen van der Meijden wrote:\n>>>>>\n>>>>>> Hi List,\n>>>>>>\n>>>>>> In the past few weeks we have been developing a read-heavy \n>>>>>> mysql-benchmark to have an alternative take at \n>>>>>> cpu/platform-performance. Not really to have a look at how fast \n>>>>>> mysql can be.\n>>>>>>\n>>>>>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is \n>>>>>> modelled after our website's production database and the load \n>>>>>> generated on it is modelled after a simplified version of our \n>>>>>> visitor behaviour.\n>>>>>>\n>>>>>> Long story short, we think the test is a nice example of the \n>>>>>> relatively lightweight, read-heavy webapplications out there and \n>>>>>> therefore decided to have a go at postgresql as well.\n>>>>>> Of course the queries and indexes have been adjusted to (by our \n>>>>>> knowledge) best suit postgresql, while maintaining the same output \n>>>>>> to the application/interface layer. While the initial structure \n>>>>>> only got postgresql at about half the performance of mysql 4.1.x, \n>>>>>> the current version of our postgresql-benchmark has quite similar \n>>>>>> results to mysql 4.1.x, but both are quite a bit slower than 5.0.x \n>>>>>> (I think its about 30-40% faster).\n>>>>>>\n>>>>>> Since the results from those benchmarks are not yet public (they \n>>>>>> will be put together in a story at our website), I won't go into \n>>>>>> too much details about this benchmark.\n>>>>>>\n>>>>>> Currently we're having a look at a Sun T2000 and will be looking \n>>>>>> at will be looking at other machines as well in the future. We are \n>>>>>> running the sun-release of postgresql 8.1.3 on that T2000 now, but \n>>>>>> are looking at compiling the cvs-head version (for its \n>>>>>> index-root-cache) somewhere this week.\n>>>>>>\n>>>>>> My guess is there are a few people on this list who are interested \n>>>>>> in some dtrace results taken during our benchmarks on that T2000.\n>>>>>> Although my knowledge of both Solaris and Dtrace are very limited, \n>>>>>> I already took some samples of the system and user calls. 
I used \n>>>>>> Jignesh Shah's scripts for that: \n>>>>>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>>>>>\n>>>>>>\n>>>>>> You can find the samples here:\n>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>>>>>\n>>>>>> And I also did the memcpy-scripts, here:\n>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>>>>>> (this last log is 3.5MB)\n>>>>>>\n>>>>>> If anyone is interested in some more dtrace results, let me know \n>>>>>> (and tell me what commands to run ;-) ).\n>>>>>>\n>>>>>> Best regards,\n>>>>>>\n>>>>>> Arjen\n>>>>>>\n>>>>>> ---------------------------(end of \n>>>>>> broadcast)---------------------------\n>>>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>>>\n>>>>>> http://www.postgresql.org/docs/faq\n>>>>>\n>>>>>\n>>>>>\n>>>>> ---------------------------(end of \n>>>>> broadcast)---------------------------\n>>>>> TIP 6: explain analyze is your friend\n>>>>>\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 2: Don't 'kill -9' the postmaster\n>>>\n>>>\n>>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Wed, 17 May 2006 13:53:54 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "I have seen MemoryContextSwitchTo taking time before.. However I am not \nsure why would it take so much CPU time?\nMaybe that function does not work efficiently on Solaris?\n\nAlso I donot have much idea about slot_getattr.\n\nAnybody else? (Other option is to use \"collect -p $pid\" experiments to \ngather the data to figure out what instruction is causing the high CPU \nusage) Maybe the Sun engineers out there can help out\n\n-Jignesh\n\n\nArjen van der Meijden wrote:\n> We have the 4 core machine. However, these numbers are taken during a \n> benchmark, not normal work load. So the output should display the system \n> being working fully ;)\n> \n> So its postgres doing a lot of work and you already had a look at the \n> usrcall for that.\n> \n> The benchmark just tries to do the queries for \"random page visits\". \n> This totals up to about some 80 different queries being executed with \n> mostly random parameters. The workload is generated using php so there \n> are no connection pools, nor prepared statements.\n> \n> The queries vary, but are all relatively lightweight queries with less \n> than 6 or 7 joinable tables. Almost all queries can use indexes. Most \n> tables are under a few MB of data, although there are a few larger than \n> that. Most records are relatively small, consisting of mostly numbers \n> (id's and such).\n> \n> The results presented here was with 25 concurrent connections.\n> \n> Best regards,\n> \n> Arjen\n> \n> \n> Jignesh K. 
Shah wrote:\n> \n>> You usertime is way too high for T2000...\n>>\n>> If you have a 6 core machine with 24 threads, it says all 24 threads \n>> are reported as being busy with iostat output.\n>>\n>> Best way to debug this is use\n>>\n>> prstat -amL\n>> (or if you are dumping it in a file prstat -amLc > prstat.txt)\n>>\n>> and find the pids with high user cpu time and then use the usrcall.d \n>> on few of those pids.\n>>\n>> Also how many database connections do you have and what's the type of \n>> query run by each connection?\n>>\n>> -Jignesh\n>>\n>>\n>>\n>> Arjen van der Meijden wrote:\n>>\n>>> Hi Jignesh,\n>>>\n>>> The settings from that 'special T2000 dvd' differed from the \n>>> recommended settings on the website you provided. But I don't see \n>>> much difference in performance with any of the adjustments, it \n>>> appears to be more or less the same.\n>>>\n>>> Here are a few iostat lines by the way:\n>>>\n>>> sd0 sd1 sd2 nfs1 cpu\n>>> kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id\n>>> 7 1 12 958 50 35 0 0 7 0 0 0 13 1 0 85\n>>> 0 0 0 2353 296 3 0 0 0 0 0 0 92 7 0 1\n>>> 0 0 0 2062 326 2 0 0 0 0 0 0 93 7 0 0\n>>> 1 1 1 1575 350 0 0 0 0 0 0 0 92 7 0 1\n>>> 0 0 0 1628 362 0 0 0 0 0 0 0 92 8 0 1\n>>>\n>>> It appears to be doing a little less kps/tps on sd1 when I restore my \n>>> own postgresql.conf-settings. (default wal/checkpoints, 20k buffers, \n>>> 2k work mem).\n>>>\n>>> Is it possible to trace the stack's for semsys, like the \n>>> memcpy-traces, or are those of no interest here?\n>>>\n>>> Best regards,\n>>>\n>>> Arjen\n>>>\n>>>\n>>> On 16-5-2006 17:52, Jignesh K. Shah wrote:\n>>>\n>>>> Hi Arjen,\n>>>>\n>>>> Can you send me my colleagues's names in a private email?\n>>>>\n>>>> One of the drawbacks of the syscall.d script is relative timings and \n>>>> hence if system CPU usage is very low, it gives the relative \n>>>> weightage about what portion in that low is associated with what \n>>>> call.. So even if you have say..1% system time.. it says that most \n>>>> of it was IO related or semsys related. So iostat output with -c \n>>>> option to include CPU times helps to put it in the right perspective.\n>>>>\n>>>>\n>>>> Also do check the tunables mentioned and make sure they are set.\n>>>>\n>>>> Regards,\n>>>> Jignesh\n>>>>\n>>>>\n>>>> Arjen van der Meijden wrote:\n>>>>\n>>>>> Hi Jignesh,\n>>>>>\n>>>>> Jignesh K. Shah wrote:\n>>>>>\n>>>>>> Hi Arjen,\n>>>>>>\n>>>>>> Looking at your outputs...of syscall and usrcall it looks like\n>>>>>>\n>>>>>> * Spending too much time in semsys .... which means you have too \n>>>>>> many connections and they are contending to get a lock.. which is \n>>>>>> potentially the WAL log lock\n>>>>>>\n>>>>>> * llseek is high which means you can obviously gain a bit with the \n>>>>>> right file system/files tuning by caching them right.\n>>>>>>\n>>>>>> Have you set the values for Solaris for T2000 tuned for Postgresql?\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> Not particularly, we got a \"special T2000 Solaris dvd\" from your \n>>>>> colleagues here in the Netherlands and installed that (actually one \n>>>>> of your colleagues did). Doing so all the \"better default\" \n>>>>> /etc/system-settings are supposed to be set. 
I haven't really \n>>>>> checked that they are, since two of your colleagues have been \n>>>>> working on it for the mysql-version of the benchmark and I assumed \n>>>>> they'd have verified that.\n>>>>>\n>>>>>> Check out the tunables from the following URL\n>>>>>>\n>>>>>> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp \n>>>>>>\n>>>>>>\n>>>>>> Try specially the /etc/system and postgresql.conf changes and see \n>>>>>> if it changes/improves your performance.\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> I will see that those tunables are verified to be set.\n>>>>>\n>>>>> I am a bit surprised though about your remarks, since they'd point \n>>>>> at the I/O being in the way? But we only have about 600k/sec i/o \n>>>>> according to vmstat. The database easily fits in memory.\n>>>>> In total I logged about 500k queries of which only 70k where \n>>>>> altering queries, of which almost all where inserts in log-tables \n>>>>> which aren't actively read in this benchmark.\n>>>>>\n>>>>> But I'll give it a try.\n>>>>>\n>>>>> Best regards,\n>>>>>\n>>>>> Arjen\n>>>>>\n>>>>>>\n>>>>>> Arjen van der Meijden wrote:\n>>>>>>\n>>>>>>> Hi List,\n>>>>>>>\n>>>>>>> In the past few weeks we have been developing a read-heavy \n>>>>>>> mysql-benchmark to have an alternative take at \n>>>>>>> cpu/platform-performance. Not really to have a look at how fast \n>>>>>>> mysql can be.\n>>>>>>>\n>>>>>>> This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is \n>>>>>>> modelled after our website's production database and the load \n>>>>>>> generated on it is modelled after a simplified version of our \n>>>>>>> visitor behaviour.\n>>>>>>>\n>>>>>>> Long story short, we think the test is a nice example of the \n>>>>>>> relatively lightweight, read-heavy webapplications out there and \n>>>>>>> therefore decided to have a go at postgresql as well.\n>>>>>>> Of course the queries and indexes have been adjusted to (by our \n>>>>>>> knowledge) best suit postgresql, while maintaining the same \n>>>>>>> output to the application/interface layer. While the initial \n>>>>>>> structure only got postgresql at about half the performance of \n>>>>>>> mysql 4.1.x, the current version of our postgresql-benchmark has \n>>>>>>> quite similar results to mysql 4.1.x, but both are quite a bit \n>>>>>>> slower than 5.0.x (I think its about 30-40% faster).\n>>>>>>>\n>>>>>>> Since the results from those benchmarks are not yet public (they \n>>>>>>> will be put together in a story at our website), I won't go into \n>>>>>>> too much details about this benchmark.\n>>>>>>>\n>>>>>>> Currently we're having a look at a Sun T2000 and will be looking \n>>>>>>> at will be looking at other machines as well in the future. We \n>>>>>>> are running the sun-release of postgresql 8.1.3 on that T2000 \n>>>>>>> now, but are looking at compiling the cvs-head version (for its \n>>>>>>> index-root-cache) somewhere this week.\n>>>>>>>\n>>>>>>> My guess is there are a few people on this list who are \n>>>>>>> interested in some dtrace results taken during our benchmarks on \n>>>>>>> that T2000.\n>>>>>>> Although my knowledge of both Solaris and Dtrace are very \n>>>>>>> limited, I already took some samples of the system and user \n>>>>>>> calls. 
I used Jignesh Shah's scripts for that: \n>>>>>>> http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on \n>>>>>>>\n>>>>>>>\n>>>>>>> You can find the samples here:\n>>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log\n>>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log\n>>>>>>>\n>>>>>>> And I also did the memcpy-scripts, here:\n>>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log\n>>>>>>> http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log\n>>>>>>> (this last log is 3.5MB)\n>>>>>>>\n>>>>>>> If anyone is interested in some more dtrace results, let me know \n>>>>>>> (and tell me what commands to run ;-) ).\n>>>>>>>\n>>>>>>> Best regards,\n>>>>>>>\n>>>>>>> Arjen\n>>>>>>>\n>>>>>>> ---------------------------(end of \n>>>>>>> broadcast)---------------------------\n>>>>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>>>>\n>>>>>>> http://www.postgresql.org/docs/faq\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> ---------------------------(end of \n>>>>>> broadcast)---------------------------\n>>>>>> TIP 6: explain analyze is your friend\n>>>>>>\n>>>>>\n>>>>> ---------------------------(end of \n>>>>> broadcast)---------------------------\n>>>>> TIP 2: Don't 'kill -9' the postmaster\n>>>>\n>>>>\n>>>>\n>>>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n", "msg_date": "Wed, 17 May 2006 16:14:41 +0100", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" }, { "msg_contents": "Here's a \"corner case\" that might interest someone. It tripped up one of our programmers.\n\nWe have a table with > 10 million rows. The ID column is indexed, the table has been vacuum/analyzed. Compare these two queries:\n\n select * from tbl where id >= 10000000 limit 1;\n select * from tbl where id >= 10000000 order by id limit 1;\n\nThe first takes 4 seconds, and uses a full table scan. The second takes 32 msec and uses the index. \nDetails are below.\n\nI understand why the planner makes the choices it does -- the \"id > 10000000\" isn't very selective and under normal circumstances a full table scan is probably the right choice. But the \"limit 1\" apparently doesn't alter the planner's strategy at all. We were surprised by this.\n\nAdding the \"order by\" was a simple solution.\n\nCraig\n\n\n\npg=> explain analyze select url, url_digest from url_queue where priority >= 10000000 limit 1;\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.65 rows=1 width=108) (actual time=4036.113..4036.117 rows=1 loops=1)\n -> Seq Scan on url_queue (cost=0.00..391254.35 rows=606176 width=108) (actual time=4036.101..4036.101 rows=1 loops=1)\n Filter: (priority >= 10000000)\n Total runtime: 4036.200 ms\n(4 rows)\n\npg=> explain analyze select url, url_digest from url_queue where priority >= 10000000 order by priority limit 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------\n Limit (cost=0.00..2.38 rows=1 width=112) (actual time=32.445..32.448 rows=1 loops=1)\n -> Index Scan using url_queue_priority on url_queue (cost=0.00..1440200.41 rows=606176 width=112) (actual time=32.434..32.434 rows=1 loops=1)\n Index Cond: (priority >= 10000000)\n Total runtime: 32.566 ms\n", "msg_date": "Wed, 17 May 2006 08:54:52 -0700", "msg_from": "\"Craig A. 
James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Optimizer: limit not taken into account" }, { "msg_contents": "Please don't reply to previous messages to start new threads. This makes it\nharder to find stuff in the archives and may keep people from noticing your\nmessage.\n\nOn Wed, May 17, 2006 at 08:54:52 -0700,\n \"Craig A. James\" <[email protected]> wrote:\n> Here's a \"corner case\" that might interest someone. It tripped up one of \n> our programmers.\n> \n> We have a table with > 10 million rows. The ID column is indexed, the \n> table has been vacuum/analyzed. Compare these two queries:\n> \n> select * from tbl where id >= 10000000 limit 1;\n> select * from tbl where id >= 10000000 order by id limit 1;\n> \n> The first takes 4 seconds, and uses a full table scan. The second takes 32 \n> msec and uses the index. Details are below.\n\nI suspect it wasn't intended to be a full table scan. But rather a sequential\nscan until it found a matching row. If the data in the table is ordered by\nby id, this strategy may not work out well. Where as if the data is randomly\nordered, it would be expected to find a match quickly.\n\nHave you analyzed the table recently? If the planner has bad stats on the\ntable, that is going to make it more likely to choose a bad plan.\n\n\n> I understand why the planner makes the choices it does -- the \"id > \n> 10000000\" isn't very selective and under normal circumstances a full table \n> scan is probably the right choice. But the \"limit 1\" apparently doesn't \n> alter the planner's strategy at all. We were surprised by this.\n> \n> Adding the \"order by\" was a simple solution.\n> \n> Craig\n> \n> \n> \n> pg=> explain analyze select url, url_digest from url_queue where priority \n> >= 10000000 limit 1;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.65 rows=1 width=108) (actual time=4036.113..4036.117 \n> rows=1 loops=1)\n> -> Seq Scan on url_queue (cost=0.00..391254.35 rows=606176 width=108) \n> (actual time=4036.101..4036.101 rows=1 loops=1)\n> Filter: (priority >= 10000000)\n> Total runtime: 4036.200 ms\n> (4 rows)\n> \n> pg=> explain analyze select url, url_digest from url_queue where priority \n> >= 10000000 order by priority limit 1;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------\n> Limit (cost=0.00..2.38 rows=1 width=112) (actual time=32.445..32.448 \n> rows=1 loops=1)\n> -> Index Scan using url_queue_priority on url_queue \n> (cost=0.00..1440200.41 rows=606176 width=112) (actual time=32.434..32.434 \n> rows=1 loops=1)\n> Index Cond: (priority >= 10000000)\n> Total runtime: 32.566 ms\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Wed, 17 May 2006 12:44:26 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer: limit not taken into account" }, { "msg_contents": "On Wed, 2006-05-17 at 08:54 -0700, Craig A. James wrote:\n> Here's a \"corner case\" that might interest someone. It tripped up one of our programmers.\n> \n> We have a table with > 10 million rows. The ID column is indexed, the table has been vacuum/analyzed. Compare these two queries:\n> \n> select * from tbl where id >= 10000000 limit 1;\n> select * from tbl where id >= 10000000 order by id limit 1;\n> \n> The first takes 4 seconds, and uses a full table scan. 
The second takes 32 msec and uses the index. \n> Details are below.\n\nThe rows are not randomly distributed, so the SeqScan takes longer to\nfind 1 row than the index scan.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 17 May 2006 19:22:09 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer: limit not taken into account" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> I suspect it wasn't intended to be a full table scan. But rather a sequential\n> scan until it found a matching row. If the data in the table is ordered by\n> by id, this strategy may not work out well. Where as if the data is randomly\n> ordered, it would be expected to find a match quickly.\n\nRight. You can see from the differential in the estimates for the\nSeqScan and the Limit nodes that the planner is not expecting the\nseqscan to run to completion, but rather to find a matching row quite\nquickly.\n\nThere is not anything in there that considers whether the table's\nphysical order is so nonrandom that the search will take much longer\nthan it would given uniform distribution. It might be possible to do\nsomething with the correlation statistic in simple cases ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2006 14:30:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer: limit not taken into account " }, { "msg_contents": "Tom Lane wrote:\n> There is not anything in there that considers whether the table's\n> physical order is so nonrandom that the search will take much longer\n> than it would given uniform distribution. It might be possible to do\n> something with the correlation statistic in simple cases ...\n\nIn this case, the rows are not random at all, in fact they're inserted from a sequence, then rows are deleted as they are processed. If the planner is hoping for random physical distribution, this particular case is exactly wrong. \n\nCraig\n", "msg_date": "Wed, 17 May 2006 11:58:09 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer: limit not taken into account" }, { "msg_contents": "On Wed, May 17, 2006 at 08:54:52AM -0700, Craig A. James wrote:\n> Here's a \"corner case\" that might interest someone. It tripped up one of \n> our programmers.\n> \n> We have a table with > 10 million rows. The ID column is indexed, the \n> table has been vacuum/analyzed. Compare these two queries:\n> \n> select * from tbl where id >= 10000000 limit 1;\n> select * from tbl where id >= 10000000 order by id limit 1;\n> \n> The first takes 4 seconds, and uses a full table scan. The second takes 32 \n> msec and uses the index. Details are below.\n> \n> I understand why the planner makes the choices it does -- the \"id > \n> 10000000\" isn't very selective and under normal circumstances a full table \n> scan is probably the right choice. But the \"limit 1\" apparently doesn't \n> alter the planner's strategy at all. We were surprised by this.\n\nIs it really not very selective? If there's 10000000 rows in the table,\nand id starts at 1 with very few gaps, then >= 10000000 should actually\nbe very selective...\n\nAlso, I hope you understand there's a big difference between a limit\nquery that does and doesn't have an order by.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 17 May 2006 14:20:34 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer: limit not taken into account" }, { "msg_contents": "* Jignesh K. Shah:\n\n> * llseek is high which means you can obviously gain a bit with the\n> right file system/files tuning by caching them right.\n\nIt might also make sense to switch from lseek-read/write to\npread/pwrite. It shouldn't be too hard to hack this into the virtual\nfile descriptor module.\n", "msg_date": "Thu, 18 May 2006 07:39:01 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pgsql (and mysql) benchmark on T2000/Solaris and some" } ]
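A quick way to see the effect Tom and Bruno describe in the thread above, using the thread's own table: pg_stats.correlation is the statistic Tom mentions, and a value near +1 or -1 means the rows are laid out in column order, which is exactly the case where an early-abort sequential scan can crawl. The second statement is the workaround the thread settled on. This is a sketch only, to be run against the real url_queue.

    -- How closely the physical row order of url_queue follows priority:
    -- +/-1 means strongly ordered, values near 0 mean effectively random.
    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'url_queue'
      AND attname = 'priority';

    -- With the ORDER BY, the planner walks url_queue_priority and stops
    -- after the first qualifying row instead of scanning the heap.
    EXPLAIN ANALYZE
    SELECT url, url_digest
    FROM url_queue
    WHERE priority >= 10000000
    ORDER BY priority
    LIMIT 1;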
[ { "msg_contents": "I have a table of about 500,000 rows. \n\n \n\nI need to add a new column and populate it.\n\n \n\nSo, I have tried to run the following command. The command never finishes (I\ngave up after about and hour and a half!).\n\nNote that none of the columns have indexes.\n\n \n\nUpdate mytable set new_column = \n\ncase when column_1 = column_2\nthen 1 \n\nwhen column_1+column_3= column_2 and column_3 > 0\nthen 2 \n\nwhen column_1+column_3+column_4 = column_2 and column_4 > 0\nthen 3 \n\nwhen column_1+column_3+column_4+column_5 = column_2 and column_5 > 0\nthen 4 \n\nelse\n0 \n\nend\n\n \n\n \n\nMy computer is a Pentium 4 – 2.4 GHZ and 1G RAM – so it should be fast\nenough.\n\n \n\nAny ideas?\n\n \n\nJonathan Blitz\n\n \n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.392 / Virus Database: 268.5.6/340 - Release Date: 05/15/2006\n \n\n\n\n\n\n\n\n\n\nI have a table of about 500,000 rows. \n \nI need to add a new column and populate it.\n \nSo, I have tried to run the following command. The command\nnever finishes (I gave up after about and hour and a half!).\nNote that none of the columns have indexes.\n \nUpdate mytable set new_column =    \ncase when column_1 = column_2                                                                        then\n1   \nwhen column_1+column_3= column_2 and column_3 > 0                                      then\n2   \nwhen column_1+column_3+column_4 = column_2 and column_4 >\n0                     then 3   \nwhen column_1+column_3+column_4+column_5 = column_2 and\ncolumn_5 > 0     then 4   \nelse                                                                                                                         0\n\nend\n \n \nMy computer is a Pentium 4 – 2.4 GHZ and 1G RAM – so it\nshould be fast enough.\n \nAny ideas?\n \nJonathan Blitz", "msg_date": "Wed, 17 May 2006 03:19:26 +0200", "msg_from": "\"Jonathan Blitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Adding and filling new column on big table" }, { "msg_contents": "On Wed, May 17, 2006 at 03:19:26AM +0200, Jonathan Blitz wrote:\n> I have a table of about 500,000 rows. \n> \n> I need to add a new column and populate it.\n> \n> So, I have tried to run the following command. The command never finishes (I\n> gave up after about and hour and a half!).\n\nIf you install contrib/pgstattuple you can figure out how fast the\nupdate is running. Run \"SELECT * FROM pgstattuple('mytable')\" a\nfew times and note the rate at which dead_tuple_count is increasing.\nIf it's not increasing at all then query pg_locks and look for locks\nwhere \"granted\" is false.\n\nI created a test table, populated it with 500,000 rows of random\ndata, and ran the update you posted. On a 500MHz Pentium III with\n512M RAM and a SCSI drive from the mid-to-late 90s, running PostgreSQL\n8.1.3 on FreeBSD 6.1, the update finished in just over two minutes.\nThe table had one index (the primary key).\n\n> Note that none of the columns have indexes.\n\nDo you mean that no columns in the table have indexes? Or that the\ncolumns referenced in the update don't have indexes but that other\ncolumns do? What does \"\\d mytable\" show? Do other tables have\nforeign key references to this table? What non-default settings\ndo you have in postgresql.conf? What version of PostgreSQL are you\nrunning and on what platform? How busy is the system? 
What's the\noutput of \"EXPLAIN UPDATE mytable ...\"?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 16 May 2006 20:31:30 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "Jonathan Blitz writes:\n\n> So, I have tried to run the following command. The command never finishes \n> (I gave up after about and hour and a half!).\n\nDid you ever find what was the problem?\nPerhaps you needed to run a vacuum full on the table?\n\n", "msg_date": "Mon, 29 May 2006 23:22:36 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "> > So, I have tried to run the following command. The command never\nfinishes\n> > (I gave up after about and hour and a half!).\n> \n> Did you ever find what was the problem?\n> Perhaps you needed to run a vacuum full on the table?\n\nNope.\nI just gave up in the end and left it with NULL as the default value.\nThere were, in fact, over 2 million rows in the table rather than 1/4 of a\nmillion so that was part of the problem.\n\nJonathan\n \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.394 / Virus Database: 268.7.4/351 - Release Date: 05/29/2006\n \n\n", "msg_date": "Tue, 30 May 2006 11:35:55 +0200", "msg_from": "\"Jonathan Blitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "Jonathan Blitz writes:\n\n> I just gave up in the end and left it with NULL as the default value.\n\n\nCould you do the updates in batches instead of trying to do them all at \nonce?\n\nHave you done a vacuum full on this table ever?\n\n> There were, in fact, over 2 million rows in the table rather than 1/4 of a\n> million so that was part of the problem.\n\nWhat hardware?\nI have a dual CPU opteron with 4GB of RAM and 8 disks in RAID 10 (SATA). \nDoing an update on a 5 million record table took quite a while, but it did \nfininish. :-)\n\nI just did vacuum full before and after though.. That many updates tend to \nslow down operations on the table aftewards unless you vacuum the table. \nBased on what you wrote it sounded as if you tried a few times and may have \nkilled the process.. this would certainly slow down the operations on that \ntable unless you did a vacuum full. \n\nI wonder if running vacuum analyze against the table as the updates are \nrunning would be of any help.\n\n", "msg_date": "Tue, 30 May 2006 11:58:38 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "> \n> \n> Could you do the updates in batches instead of trying to do them all at\n> once?\n\nNope. Didn't think it would make any difference.\n> \n> Have you done a vacuum full on this table ever?\n\nMany times\n\n> \n> What hardware?\n> I have a dual CPU opteron with 4GB of RAM and 8 disks in RAID 10 (SATA).\n> Doing an update on a 5 million record table took quite a while, but it did\n> fininish. 
:-)\n\nI am using a laptop :).\nPentium 4 (not 4M) with 1GB of memory - 2 MHZ\n\nMust do it on that since the program is aimed for use at home.\n\nJonathan\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.394 / Virus Database: 268.7.4/351 - Release Date: 05/29/2006\n \n\n", "msg_date": "Tue, 30 May 2006 19:11:30 +0200", "msg_from": "\"Jonathan Blitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "Jonathan Blitz writes:\n\n> Nope. Didn't think it would make any difference.\n\nMay be worth a try.\n\n> I am using a laptop :).\n> Pentium 4 (not 4M) with 1GB of memory - 2 MHZ\n\nMost laptop drives are only 5,400 RPM which would make a transaction like \nyou are describing likely take a while.\n \n> Must do it on that since the program is aimed for use at home.\n\nNo desktop at home you could try it on?\nI think the problem with the laptop is likely it's drive.\n\n", "msg_date": "Tue, 30 May 2006 13:25:35 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "On Tue, 2006-05-30 at 16:04, Jonathan Blitz wrote:\n> > \n> > Most laptop drives are only 5,400 RPM which would make a transaction like\n> > you are describing likely take a while.\n> \n> Not sure what my one is but it is new(ish).\n> \n> > \n> > No desktop at home you could try it on?\n> > I think the problem with the laptop is likely it's drive.\n> \n> I suppose I could do but I need to install PostgreSQL there and then copy\n> over the database.\n\nKeep in mind, most, if not all IDE drives lie about fsync, so the speed\nof the drive is a limit only if you're actually writing a lot of data. \nIf you're doing a lot of little transactions, the drive should be lying\nand holding the data in cache on board, so the speed should be OK.\n", "msg_date": "Tue, 30 May 2006 15:07:15 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "\n>> Most laptop drives are only 5,400 RPM which would make a transaction \n>> like\n>> you are describing likely take a while.\n>\n> Not sure what my one is but it is new(ish).\n\n\tIf you're doing data intensive operations (like a big update which looks \nlike what you're doing) it will write many megabytes to the harddrive... \nmy laptop HDD (5400 rpm) does about 15 MB/s throughput while a standard \ndesktop 7200rpm drive does 55-60 MB/s throughput. Plus, seek times on a \nlaptop drive are horrendous.\n", "msg_date": "Tue, 30 May 2006 22:24:32 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "Jonathan Blitz writes:\n\n> I suppose I could do but I need to install PostgreSQL there and then copy\n> over the database.\n> Maybe I will give it a try.\n\nI really think that is your best bet.\nIf for whatever reason that will not be an option perhaps you can just let \nthe process run over the weekend.. possibly monitor the process from the OS \nto make sure it is not frozen.\n\nDon't recall if you mentioned the OS.. is it any unix like os?\nIf so there are several ways you could check to make sure the process is not \nfrozen such as iostats, top, vmstats(these from FreeBSD, but most unix like \nos should have tools like those if not some with the same name). 
\n\n", "msg_date": "Tue, 30 May 2006 16:52:32 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding and filling new column on big table" }, { "msg_contents": "> \n> Most laptop drives are only 5,400 RPM which would make a transaction like\n> you are describing likely take a while.\n\nNot sure what my one is but it is new(ish).\n\n> \n> No desktop at home you could try it on?\n> I think the problem with the laptop is likely it's drive.\n\nI suppose I could do but I need to install PostgreSQL there and then copy\nover the database.\nMaybe I will give it a try.\n\nJonathan\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.394 / Virus Database: 268.7.4/351 - Release Date: 05/29/2006\n \n\n", "msg_date": "Tue, 30 May 2006 23:04:48 +0200", "msg_from": "\"Jonathan Blitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding and filling new column on big table" } ]
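A sketch of the batch-and-vacuum route Francisco suggests in the thread above. The CASE expression is the one from the original post; the id column used to slice the work is an assumption, since the thread never names a key for mytable, so substitute whatever unique or indexed column the real table has.

    -- Run repeatedly, advancing the range, instead of one 2-million-row UPDATE.
    UPDATE mytable
    SET new_column = CASE
            WHEN column_1 = column_2 THEN 1
            WHEN column_1 + column_3 = column_2 AND column_3 > 0 THEN 2
            WHEN column_1 + column_3 + column_4 = column_2 AND column_4 > 0 THEN 3
            WHEN column_1 + column_3 + column_4 + column_5 = column_2
                 AND column_5 > 0 THEN 4
            ELSE 0
        END
    WHERE id >= 0 AND id < 250000;   -- next pass: 250000 to 500000, and so on

    VACUUM mytable;                  -- reclaim dead row versions between batches

    -- With contrib/pgstattuple installed (Michael's suggestion), progress and
    -- bloat can be watched while a batch runs:
    SELECT table_len, tuple_count, dead_tuple_count, free_space
    FROM pgstattuple('mytable');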
[ { "msg_contents": "\n\n\n\nHi,\n\nCurrently I'm using postgresql v8.1.3 and the latest jdbc.\n\nI try to open a JReports' report and the time taken to completely open the\nreport is 137453ms.\nThen I open the same report but this time I connect to postgresql v7.2.2\nbut the completion time is even faster than connect to postgresql v8.1.3\nwhich took 15516ms to finish.\n\nI try many times and the result is still the same.\n\nSo I think it might be compatibility problem between JReport & Postgresql\n8.1.3 so i add in 'protocolVersion=2' in the connection string.\nThen i open the same report again and this time it just as what i expected,\nthe execution time for the report become 6000ms only,\nit is 20x times faster than previous test without 'protocolVersion=2'\noption.\n\nMay I know what is the reason of this?\nIs it because of the compatibility between JDBC driver with JReport?\n\nThanks!\n\n", "msg_date": "Wed, 17 May 2006 14:13:46 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Performance incorporate with JReport" }, { "msg_contents": "Moving to -jdbc.\n\nOn Wed, May 17, 2006 at 02:13:46PM +0800, [email protected] wrote:\n> Currently I'm using postgresql v8.1.3 and the latest jdbc.\n> \n> I try to open a JReports' report and the time taken to completely open the\n> report is 137453ms.\n> Then I open the same report but this time I connect to postgresql v7.2.2\n> but the completion time is even faster than connect to postgresql v8.1.3\n> which took 15516ms to finish.\n> \n> I try many times and the result is still the same.\n> \n> So I think it might be compatibility problem between JReport & Postgresql\n> 8.1.3 so i add in 'protocolVersion=2' in the connection string.\n> Then i open the same report again and this time it just as what i expected,\n> the execution time for the report become 6000ms only,\n> it is 20x times faster than previous test without 'protocolVersion=2'\n> option.\n> \n> May I know what is the reason of this?\n> Is it because of the compatibility between JDBC driver with JReport?\n\nThis certainly sounds like a likely JDBC issue, so I'm moving this email\nto pgsql-jdbc, since someone there is more likely to be able to help\nyou.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 17 May 2006 14:00:01 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Performance incorporate with JReport" }, { "msg_contents": "\n\n> On Wed, May 17, 2006 at 02:13:46PM +0800, [email protected] wrote:\n>> Currently I'm using postgresql v8.1.3 and the latest jdbc.\n>>\n>> I try to open a JReports' report and the time taken to completely open the\n>> report is 137453ms.\n>> Then I open the same report but this time I connect to postgresql v7.2.2\n>> but the completion time is even faster than connect to postgresql v8.1.3\n>> which took 15516ms to finish.\n>>\n>> So I think it might be compatibility problem between JReport & Postgresql\n>> 8.1.3 so i add in 'protocolVersion=2' in the connection string.\n>> Then i open the same report again and this time it just as what i expected,\n>> the execution time for the report become 6000ms only,\n>> it is 20x times faster than previous test without 'protocolVersion=2'\n>> option.\n>>\n\nIt is very unclear what JReport and/or this specific report is doing, but \nthere are differences with how PreparedStatements are planned for the V2 \nand V3 protocol.\n\nFor the V2 protocol, the driver manually interpolates the parameters into \nthe query string and sends the whole sql across upon each execution. \nThis means the server can know the exact parameters used and generate the \nbest plan (at the expense of reparsing/replanning on every execution).\n\nFor the V3 protocol the driver prepares the query and sends \nthe parameters over separately. There are two different modes of this \nexecution depending on whether we expect to re-execute the same statement \nmultiple times or not (see the prepareThreshold configuration parameter). \nIf we don't expect to reissue the same query with different parameters it \nwill be executed on the unnamed statement which will generate a plan using \nthe passed parameters and should be equivalent to the V2 case. If it is \nexpected to be reissed it will be executed on a named statement and \nprepared with generic parameters that may differ wildly from your actual \nparameters resulting in a non-ideal plan.\n\nAlso with the V3 protocol on either a named or unnamed statement the JDBC \ndriver passes type information along with the parameters which may cause \nthe server to generate a poor plan because it fails to use an index \nsomehow. Although with 8.0+ cross type indexing usually works.\n\nIt's tough to say what's going on here without more detail about what \nJReport is actually doing.\n\nKris Jurka\n", "msg_date": "Wed, 17 May 2006 14:40:59 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Performance incorporate with JReport" } ]
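The planning difference Kris describes can be reproduced at the SQL level, without JDBC in the picture. The table and column below are invented purely for illustration; the point is that the prepared statement's plan is built for a generic parameter, while the inlined form is planned for the actual value, and on 8.1 the two plans can differ.

    PREPARE q(integer) AS
        SELECT * FROM orders WHERE customer_id = $1;

    EXPLAIN EXECUTE q(42);                               -- plan chosen without knowing $1
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42; -- plan chosen for the literal

    DEALLOCATE q;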
[ { "msg_contents": "Hi,\n\nI have a web page, that executes several SQLs.\n\nSo, I would like to know witch one of those SQLs consumes more CPU.\nFor example,\nI have SQL1 that is executed in 1.2 secs and a SQL2 that is executed in \n200 ms.\n\nBut SQL2 is executed 25 times and SQL1 is executed 1 time, so really \nSQL2 consumes more CPU time.\n\nIs there any way to know this?\nI have think that logging all SQLs and then cheking it is a way to do it \n... any other idea?\n\nThanks in advance\n", "msg_date": "Wed, 17 May 2006 17:21:14 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "SQL CPU time usage" }, { "msg_contents": "On 17 May 2006, at 16:21, Ruben Rubio Rey wrote:\n\n> I have a web page, that executes several SQLs.\n>\n> So, I would like to know witch one of those SQLs consumes more CPU.\n> For example,\n> I have SQL1 that is executed in 1.2 secs and a SQL2 that is \n> executed in 200 ms.\n>\n> But SQL2 is executed 25 times and SQL1 is executed 1 time, so \n> really SQL2 consumes more CPU time.\n>\n> Is there any way to know this?\n> I have think that logging all SQLs and then cheking it is a way to \n> do it ... any other idea?\n\nPractical Query Analysis: <http://pqa.projects.postgresql.org/> does \nexactly that (scan historic logs). Very nice indeed and more than \nworth the money (it's BSD-licensed)\n\n-- \nJohn O'Shea\nWordbank Limited\n33 Charlotte Street, London W1T 1RR\nDirect line: +44 (0) 20 7903 8829\nFax: +44 (0) 20 7903 8888\n<http://www.wordbank.com/>\n\n\n", "msg_date": "Wed, 17 May 2006 17:18:30 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SQL CPU time usage" }, { "msg_contents": "[email protected] wrote:\n\n> On 17 May 2006, at 16:21, Ruben Rubio Rey wrote:\n>\n>> I have a web page, that executes several SQLs.\n>>\n>> So, I would like to know witch one of those SQLs consumes more CPU.\n>> For example,\n>> I have SQL1 that is executed in 1.2 secs and a SQL2 that is executed \n>> in 200 ms.\n>>\n>> But SQL2 is executed 25 times and SQL1 is executed 1 time, so really \n>> SQL2 consumes more CPU time.\n>>\n>> Is there any way to know this?\n>> I have think that logging all SQLs and then cheking it is a way to \n>> do it ... any other idea?\n>\n>\n> Practical Query Analysis: <http://pqa.projects.postgresql.org/> does \n> exactly that (scan historic logs). Very nice indeed and more than \n> worth the money (it's BSD-licensed)\n>\nthanks\n", "msg_date": "Thu, 18 May 2006 16:52:23 +0200", "msg_from": "Ruben Rubio Rey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL CPU time usage" }, { "msg_contents": "You may also try PgFouine (http://pgfouine.projects.postgresql.org/)\nfor log analysis, I found it very useful in similar situation.\n\n\nOn 5/17/06, Ruben Rubio Rey <[email protected]> wrote:\n> Hi,\n>\n> I have a web page, that executes several SQLs.\n>\n> So, I would like to know witch one of those SQLs consumes more CPU.\n> For example,\n> I have SQL1 that is executed in 1.2 secs and a SQL2 that is executed in\n> 200 ms.\n>\n> But SQL2 is executed 25 times and SQL1 is executed 1 time, so really\n> SQL2 consumes more CPU time.\n>\n> Is there any way to know this?\n> I have think that logging all SQLs and then cheking it is a way to do it\n> ... 
any other idea?\n>\n> Thanks in advance\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 18 May 2006 20:01:16 +0400", "msg_from": "\"Ivan Zolotukhin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL CPU time usage" } ]
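Both tools mentioned above (PQA and pgFouine) work from the server log, so statement logging has to be switched on first. A possible postgresql.conf fragment for 8.x follows; the exact log_line_prefix each analyzer expects varies, so check its documentation before settling on one.

    log_min_duration_statement = 0     # log every statement together with its run time
    log_line_prefix = '%t [%p]: '      # timestamp and pid, so entries can be grouped
    log_statement = 'none'             # the duration lines already carry the query text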
[ { "msg_contents": "Hi.\n\nI'm trying to plan for a performance test session where a large database is\nsubject to regular hits from my application while both regular and full\ndatabase maintenance is being performed. The idea is to gain a better idea\non the impact maintenance will have on regular usage, and when to reasonably\nschedule both regular and full maintenance.\n\nIs the verbose option for the VACUUM command and physical disk space usage\nenough? What's a good rule of thumb for verifying that the space supposedly\nrecovered from a FULL vacuum is real? I can turn on verbose for a FULL\nvacuum and watch the \"Total free space (including removable row versions) is\n7032 bytes.\" details, but can it be reasonably correlated against disk linux\nsystem tools? (like du) Or only as a guidance that some maintenance is being\nperformed?\n\nAny other stat collection points I should be watching?\n\nHere's an example lazy vacuum verbose output from an empty schema table:\n(not that you guys haven't seen this stuff enough)\n\nVACUUM VERBOSE app.queuedemails;\nINFO: vacuuming \"app.queuedemails\"\nINFO: index \"queuedemails1\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO: index \"queuedemails2\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO: \"queuedemails\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO: vacuuming \"pg_toast.pg_toast_17595\"\nINFO: index \"pg_toast_17595_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO: \"pg_toast_17595\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nTHANKS!!!\n\n- Chris\n \n\n\n\n\n\nPerformance/Maintenance test result collection\n\n\nHi.\n\nI'm trying to plan for a performance test session where a large database is subject to regular hits from my application while both regular and full database maintenance is being performed. The idea is to gain a better idea on the impact maintenance will have on regular usage, and when to reasonably schedule both regular and full maintenance.\nIs the verbose option for the VACUUM command and physical disk space usage enough? What's a good rule of thumb for verifying that the space supposedly recovered from a FULL vacuum is real? I can turn on verbose for a FULL vacuum and watch the \"Total free space (including removable row versions) is 7032 bytes.\" details, but can it be reasonably correlated against disk linux system tools? 
(like du) Or only as a guidance that some maintenance is being performed?\nAny other stat collection points I should be watching?\n\nHere's an example lazy vacuum verbose output from an empty schema table: (not that you guys haven't seen this stuff enough)\nVACUUM VERBOSE app.queuedemails;\nINFO:  vacuuming \"app.queuedemails\"\nINFO:  index \"queuedemails1\" now contains 0 row versions in 1 pages\nDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO:  index \"queuedemails2\" now contains 0 row versions in 1 pages\nDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO:  \"queuedemails\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL:  0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO:  vacuuming \"pg_toast.pg_toast_17595\"\nINFO:  index \"pg_toast_17595_index\" now contains 0 row versions in 1 pages\nDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO:  \"pg_toast_17595\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL:  0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nTHANKS!!!\n\n- Chris", "msg_date": "Wed, 17 May 2006 13:50:22 -0400", "msg_from": "Chris Mckenzie <[email protected]>", "msg_from_op": true, "msg_subject": "Performance/Maintenance test result collection" }, { "msg_contents": "On Wed, May 17, 2006 at 01:50:22PM -0400, Chris Mckenzie wrote:\n> Hi.\n> \n> I'm trying to plan for a performance test session where a large database is\n> subject to regular hits from my application while both regular and full\n> database maintenance is being performed. The idea is to gain a better idea\n> on the impact maintenance will have on regular usage, and when to reasonably\n> schedule both regular and full maintenance.\n\nWhat do you mean by \"regular and full maintenance\"? Do you mean VACUUM\nFULL?\n\nIf you're vacuuming appropriately you shouldn't have any need to ever\nVACUUM FULL...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 17 May 2006 14:24:40 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance/Maintenance test result collection" } ]
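On 7.4, where the built-in size functions are not available, the statistics in pg_class give a rough way to correlate VACUUM VERBOSE output with on-disk size without relying on du alone. relpages is only refreshed by VACUUM or ANALYZE, so run one first; the byte figure below assumes the default 8 kB block size.

    VACUUM ANALYZE app.queuedemails;

    SELECT c.relname,
           c.relpages,
           c.reltuples,
           c.relpages * 8192 AS approx_bytes   -- default BLCKSZ; adjust if compiled differently
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'app'
      AND c.relname IN ('queuedemails', 'queuedemails1', 'queuedemails2');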
[ { "msg_contents": "Hello,\n\nI'm running a benchmark with theses 3 databases, and the first results\nare not very good for PostgreSQL.\n\nPostgreSQL is 20% less performance than MySQL (InnoDB tables)\n\nMy benchmark uses the same server for theses 3 databases :\nDell Power edge - Xeon 2.8 Ghz - 2 Go Ram - 3 SCSI disks - Debian\nSarge - Linux 2.6\n\nThe transactions are a random mix of request in read (select) and\nwrite (insert, delete, update) on many tables about 100 000 to 15 000\n000 rows.\n\nTransactions are executed from 500 connections.\n\nFor the tunning of PostgreSQL i use official documentation and theses\nweb sites :\n\nhttp://www.revsys.com/writings/postgresql-performance.html\nhttp://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n\n\nSome important points of my postgresql.conf file :\n\nmax_connections = 510\nshared_buffer = 16384\nmax_prepared_transactions = 510\nwork_mem = 1024\nmaintenance_work_mem = 1024\nfsync = off\nwal_buffers = 32\ncommit_delay = 500\ncheckpoint_segments = 10\ncheckpoint_timeout = 300\ncheckpoint_warning = 0\neffective_cache_size = 165 000\nautovaccuum = on\ndefault_transaction_isolation = 'read_committed'\n\nWhat do you think of my tunning ?\n\nBest regards.\n\nO.A\n", "msg_date": "Thu, 18 May 2006 11:57:26 +0200", "msg_from": "\"Olivier Andreotti\" <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2" }, { "msg_contents": "Hello :)\n\nWhat version would PostgreSQL 8.1.4 be?\n\n> I'm running a benchmark with theses 3 databases, and the first results\n> are not very good for PostgreSQL.\n\nCould you give us some more infos about the box' performance while you\nrun the PG benchmark? A few minutes output of \"vmstat 10\" maybe? What\ndoes \"top\" say?\n\n> My benchmark uses the same server for theses 3 databases :\n> Dell Power edge - Xeon 2.8 Ghz - 2 Go Ram - 3 SCSI disks - Debian\n> Sarge - Linux 2.6\n\nHow are you using the 3 disks? Did you split pg_xlog and the database\non different disks or not?\n\n> The transactions are a random mix of request in read (select) and\n> write (insert, delete, update) on many tables about 100 000 to 15 000\n> 000 rows.\n> \n> Transactions are executed from 500 connections.\n\nCan you say something about the clients? Do they run over network from\nother hosts? What language/bindings do they use?\n\nWhen they do inserts, are the inserts bundled or are there\nsingle insert transactions? Are the statements prepared?\n\n\nBye, Chris.\n\n\n\n\n", "msg_date": "Thu, 18 May 2006 12:16:13 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "2006/5/18, Chris Mair <[email protected]>:\n> Hello :)\n>\n\nHello Chris\n\n> What version would PostgreSQL 8.1.4 be?\n>\n\nHum, ok, it is the 8.1.3 version :)\n\n> Could you give us some more infos about the box' performance while you\n> run the PG benchmark? A few minutes output of \"vmstat 10\" maybe? 
What\n> does \"top\" say?\n\n>\nHere, an extract from the vmstat 3 during the test, you can see that\nmy problem is probably a very high disk usage (write and read).\n\n 5 90 92 126792 9240 2429940 0 0 943 10357 3201 2024 18 9 0 74\n 0 21 92 129244 9252 2427268 0 0 799 6389 2228 981 8 3 0 89\n 0 13 92 127236 9272 2428772 0 0 453 8137 2489 1557 5 4 0 91\n 0 51 92 125264 9304 2431296 0 0 725 4999 2206 1763 11 4 0 85\n 0 47 92 127984 9308 2428476 0 0 612 8369 2842 1689 11 4 0 85\n 0 114 92 125572 9324 2430980 0 0 704 8436 2744 1145 11 5 0 84\n 0 29 92 128700 9184 2428020 0 0 701 5948 2748 1688 11 5 0 84\n49 53 92 127332 9180 2429820 0 0 1053 10221 3107 2156 16 9 0 75\n 0 63 92 124912 9200 2431796 0 0 608 10272 2512 996 10 5 0 86\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 11 92 128344 9224 2428432 0 0 287 9691 2227 685 4 3 0 93\n 0 9 92 124548 9244 2432520 0 0 1168 9859 3186 1967 17 7 0 76\n 0 8 92 128452 9180 2428316 0 0 512 10673 2709 1059 7 3 0 89\n 0 78 92 126820 9192 2429888 0 0 501 7100 2300 1002 6 3 0 91\n 0 80 92 129932 9092 2427128 0 0 860 9103 2850 1724 13 8 0 79\n 2 17 92 125468 9112 2431484 0 0 1311 10268 2890 1540 14 6 0 79\n 0 10 92 127548 9088 2429268 0 0 1048 10404 3244 1810 18 7 0 75\n 0 29 92 126456 9124 2430456 0 0 365 10288 2607 953 6 3 0 92\n 0 25 92 125852 9132 2431012 0 0 172 7168 2202 656 4 3 0 93\n 0 17 92 124968 9188 2431920 0 0 283 4676 1996 708 4 2 0 94\n 0 11 92 129644 9144 2427104 0 0 357 6387 2112 816 5 3 0 92\n 0 16 92 125252 9176 2431804 0 0 1405 6753 2988 2083 21 7 0 71\n\n>\n> How are you using the 3 disks? Did you split pg_xlog and the database\n> on different disks or not?\n>\n\nData are on disk 1 et 2. Index on disk 3. Perhaps i'm wrong but fsync\n= off, pg_xlog are running with that ?\n\n>\n> Can you say something about the clients? Do they run over network from\n> other hosts? What language/bindings do they use?\n>\n\nClient is another server from the same network. Clients are connected\nwith JDBC connector.\n\n> When they do inserts, are the inserts bundled or are there\n> single insert transactions? Are the statements prepared?\n>\n>\n\nI use prepared statements for all requests. Each transaction is about\n5-45 requests.\n\n> Bye, Chris.\n>\n\nOA\n", "msg_date": "Thu, 18 May 2006 12:42:57 +0200", "msg_from": "\"Olivier Andreotti\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "Hi Olivier,\n\nFirst question I'd like to ask is: will this benchmark and its results\nwill be accessible on the net when you'll have finished ?\n\nI'm interested about your benchmark and your results.\n\n> I'm running a benchmark with theses 3 databases, and the first results\n> are not very good for PostgreSQL.\n\nHope I can give you hints to enhance PostgreSQL's performances in your\nbenchmark.\n\n> PostgreSQL is 20% less performance than MySQL (InnoDB tables)\n\nI think MySQL's tuning is comparable to PostgreSQL's?\n\n> My benchmark uses the same server for theses 3 databases :\n> Dell Power edge - Xeon 2.8 Ghz - 2 Go Ram - 3 SCSI disks - Debian\n> Sarge - Linux 2.6\n\nok. 3 disks is really few for a database server IMHO (more disks, better\nI/O *if* you span database files onto disks).\n\n> The transactions are a random mix of request in read (select) and\n> write (insert, delete, update) on many tables about 100 000 to 15 000\n> 000 rows.\n\nok. But.. 
What's the size of your database ?\n[see it in psql with: select pg_size_pretty(pg_database_size('myDatabase');]\n\n> Transactions are executed from 500 connections.\n\nYou mean its a progressive test (1, 10, 100, 400, 500..???) or 500 from\nthe very beggining ?\n\n> For the tunning of PostgreSQL i use official documentation and theses\n> web sites :\n> \n> http://www.revsys.com/writings/postgresql-performance.html\n> http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n\nThose pages are great if you want to reach to a great postgresql.conf.\n\n> Some important points of my postgresql.conf file :\n> \n> max_connections = 510\n> shared_buffer = 16384\n> max_prepared_transactions = 510\n\nwhy? whats the point putting 510 here?\n\n> work_mem = 1024\n\nI found that value really low. But you'll have to check if you need\nmore. Thats all about looking for temporary files creation under $PGDATA.\n\n> maintenance_work_mem = 1024\n\nThis has to be increased dramatically, I really reccomend you read this\npage too: http://www.powerpostgresql.com/PerfList/\n\n> fsync = off\n\nThats pretty unsecure for a production database. I don't think it is\ngood to test PostgreSQL with fsync off, since this won't reflect the\nfinal configuration of a production server.\n\n> wal_buffers = 32\n\nA great value would be 64. Some tests already concluded that 64 is a\ngood value for large databases.\n\nYou'll *have to* move $PGDATA/pg_xlog/ too (see end of this mail).\n\n> commit_delay = 500\n> checkpoint_segments = 10\n\nPut something larger than that. I use often use like 64 for large databases.\n\n> checkpoint_timeout = 300\n> checkpoint_warning = 0\n> effective_cache_size = 165 000\n\nTry 174762 (2/3 the ram installed). Wont be a great enhance, for sure,\nbut let's put reccomended values.\n\n> autovaccuum = on\n\nThats a critic point. Personaly I dont use autovacuum. Because I just\ndon't want a vacuum to be started ... when the server is loaded :)\n\nI prefer control vacuum process, when its possible (if its not,\nautovacuum is the best choice!), for example, a nighlty vacuum...\n\nA question for you: after setting up your test database, did you launch\na vacuum full analyze of it ?\n\n> default_transaction_isolation = 'read_committed'\n\n> What do you think of my tunning ?\n\nIMHO, it is fairly good, since you put already somewhat good values.\n\nTry too to set \"max_fsm_pages\" depending what PostgreSQL tells you in\nthe logfile... (see again http://www.powerpostgresql.com/PerfList/)\n\nWith XEON, you have to lower \"random_page_cost\" to 3 too.\n\nYou don't mention files organisation ($PGDATA, the PG \"cluster\") of your\nserver?\n\nI mean, it is now well known that you *have to* move pg_xlog/ directory\nto another (array of) disk! Because otherwise its the same disk head\nthat writes into WALs _and_ into files...\n\nOTOH you are using \"fsync=off\", that any DBA wouldn't reccomend.. Well,\nok, it's for testing purposes.\n\nSame remark, if you can create tablespaces to span database files\naccross (array of) disks, even better. But with 3 disks, its somewhat\nlimitated: move pg_xlog before anything else.\n\nNow about \"client side\", I reccomend you install and use pgpool, see:\nhttp://pgpool.projects.postgresql.org/ . Because \"pgpool caches the\nconnection to PostgreSQL server to reduce the overhead to establish the\nconnection to it\". 
Allways good :)\n\nHope those little hints will help you in getting the best from your\nPostgreSQL server.\n\nKeep us on touch,\n\n-- \nJean-Paul Argudo\nwww.PostgreSQLFr.org\nwww.dalibo.com\n", "msg_date": "Thu, 18 May 2006 12:48:42 +0200", "msg_from": "Jean-Paul Argudo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "\nThat fsync off would make me very unhappy in a production environment .... not that turning it on would help postgres, but ... one advantage of postgres is its reliability under a \"pull the plug\" scenario, but this setting defeats that.\n\nFWIW, Xeon has gotten quite negative reviews in these quarters (Opteron seems to do way better), IIRC, and I know we've had issues with Dell's disk i/o, admittedly on a different box.\n\nQuite interesting results, even if a bit disappointing to a (newly minted) fan of postgres. I'll be quite interested to hear more. Thanks for the work, although it seems like some of it won;t be able to released, unless Oracle has given some new blessing to releasing benchmark results.\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n-----Original Message-----\nFrom:\[email protected] on behalf of Olivier Andreotti\nSent:\tThu 5/18/2006 2:57 AM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2\n\nHello,\n\nI'm running a benchmark with theses 3 databases, and the first results\nare not very good for PostgreSQL.\n\nPostgreSQL is 20% less performance than MySQL (InnoDB tables)\n\nMy benchmark uses the same server for theses 3 databases :\nDell Power edge - Xeon 2.8 Ghz - 2 Go Ram - 3 SCSI disks - Debian\nSarge - Linux 2.6\n\nThe transactions are a random mix of request in read (select) and\nwrite (insert, delete, update) on many tables about 100 000 to 15 000\n000 rows.\n\nTransactions are executed from 500 connections.\n\nFor the tunning of PostgreSQL i use official documentation and theses\nweb sites :\n\nhttp://www.revsys.com/writings/postgresql-performance.html\nhttp://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n\n\nSome important points of my postgresql.conf file :\n\nmax_connections = 510\nshared_buffer = 16384\nmax_prepared_transactions = 510\nwork_mem = 1024\nmaintenance_work_mem = 1024\nfsync = off\nwal_buffers = 32\ncommit_delay = 500\ncheckpoint_segments = 10\ncheckpoint_timeout = 300\ncheckpoint_warning = 0\neffective_cache_size = 165 000\nautovaccuum = on\ndefault_transaction_isolation = 'read_committed'\n\nWhat do you think of my tunning ?\n\nBest regards.\n\nO.A\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n!DSPAM:446c453a198591465223968!\n\n\n\n\n", "msg_date": "Thu, 18 May 2006 03:55:32 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2" }, { "msg_contents": ">\n> Do you use prepared statements through JDBC with bound variables? 
If\n> yes, you might have problems with PostgreSQL not choosing optimal\n> plans because every statement is planned \"generically\" which may\n> force PostgreSQL not to use indexes.\n>\n\ni used prepared statements for the 3 databases.\n\n> > shared_buffer = 16384\n>\n> This may be higher.\n>\n\nI'll try that.\n\n\n> > autovaccuum = on\n>\n> And you are sure, it's running?\n>\n\nYes, i can see autovaccum in the postgresql.log.\n", "msg_date": "Thu, 18 May 2006 12:59:42 +0200", "msg_from": "\"Olivier Andreotti\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2" }, { "msg_contents": "On 18.05.2006, at 12:42 Uhr, Olivier Andreotti wrote:\n\n> I use prepared statements for all requests. Each transaction is about\n> 5-45 requests.\n\nThis may lead to bad plans (at least with 8.0.3 this was the \ncase) ... I had the same problem a couple of months ago and I \nswitched from prepared statements with bound values to statements \nwith \"inlined\" values:\n\nSELECT\n\tt0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz, t0.vorname\nFROM\n\tpublic.dga_dienstleister t0\nWHERE t0.plz like ?::varchar(256) ESCAPE '|'\n\nwithBindings: 1:\"53111\"(plz)\n\nhas changed in my app to:\n\nSELECT\n\tt0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz, t0.vorname\nFROM\n\tpublic.dga_dienstleister t0\nWHERE t0.plz like '53111' ESCAPE '|'\n\n\nThe problem was, that the planner wasn't able to use an index with \nthe first version because it just didn't know enough about the actual \nquery.\n\nIt might be, that you run into similar problems. An easy way to test \nthis may be to set the protocolVersion in the JDBC driver connection \nurl to \"2\":\n\njdbc:postgresql://127.0.0.1/Database?protocolVersion=2\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development\n\n\n", "msg_date": "Thu, 18 May 2006 13:02:29 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "\n> > Could you give us some more infos about the box' performance while you\n> > run the PG benchmark? A few minutes output of \"vmstat 10\" maybe? What\n> > does \"top\" say?\n> \n> >\n> Here, an extract from the vmstat 3 during the test, you can see that\n> my problem is probably a very high disk usage (write and read).\n> \n\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 11 92 128344 9224 2428432 0 0 287 9691 2227 685 4 3 0 93\n> [...]\n\nYes, as is the case most of the time, disk I/O is the bottleneck here...\nI'd look into everything disk releated here...\n\n\n\n> > How are you using the 3 disks? Did you split pg_xlog and the database\n> > on different disks or not?\n> >\n> \n> Data are on disk 1 et 2. Index on disk 3. Perhaps i'm wrong but fsync\n> = off, pg_xlog are running with that ?\n\nYes, pg_xlog ist also used with fsync=off. you might gain quite some\nperformance if you can manage to put pg_xlog on its own disk (just\nsymlink the directory). \n\nAnyway, as others have pointed out, consider that with fsync = off\nyou're loosing the \"unbreakability\" in case of power failures / os\ncrashes etc.\n\n\n> > Can you say something about the clients? Do they run over network from\n> > other hosts? What language/bindings do they use?\n> >\n> \n> Client is another server from the same network. 
Clients are connected\n> with JDBC connector.\n\n\nok, don't know about that one..\n\n> > When they do inserts, are the inserts bundled or are there\n> > single insert transactions? Are the statements prepared?\n\n> I use prepared statements for all requests. Each transaction is about\n> 5-45 requests.\n\nsounds ok,\ncould be even more bundled together if the application is compatible\nwith that.\n\n\nBye, Chris.\n\n\n", "msg_date": "Thu, 18 May 2006 14:44:40 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "On Thu, May 18, 2006 at 02:44:40PM +0200, Chris Mair wrote:\n> Yes, pg_xlog ist also used with fsync=off. you might gain quite some\n> performance if you can manage to put pg_xlog on its own disk (just\n> symlink the directory). \n\nSubstantially increasing wal buffers might help too.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 19 May 2006 15:16:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" }, { "msg_contents": "On Thu, May 18, 2006 at 12:48:42PM +0200, Jean-Paul Argudo wrote:\n> > autovaccuum = on\n> \n> Thats a critic point. Personaly I dont use autovacuum. Because I just\n> don't want a vacuum to be started ... when the server is loaded :)\n> \n> I prefer control vacuum process, when its possible (if its not,\n> autovacuum is the best choice!), for example, a nighlty vacuum...\n\nThis can be problematic for a benchmark, which often will create dead\ntuples at a pretty good clip.\n\nIn any case, if you are going to use autovacuum, you should cut all the\nthresholds and scale factors in half, and set cost_delay to something (I\nfind 5-10 is usually good).\n\nDepending on your write load, you might need to make the bgwriter more\naggressive, too.\n\nIf you can graph some metric from your benchmark over time it should be\npretty easy to spot if the bgwriter is keeping up with things or not; if\nit's not, you'll see big spikes every time there's a checkpoint.\n\n> A question for you: after setting up your test database, did you launch\n> a vacuum full analyze of it ?\n\nWhy would you vacuum a newly loaded database that has no dead tuples?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 19 May 2006 15:21:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle" } ]
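Spelled out as 8.1 parameters, Jim's autovacuum suggestion in the last message above comes out roughly as below. The figures are simply half of what I believe the 8.1 defaults to be, plus a small cost delay, so treat them as a starting point to tune against the benchmark load rather than recommended values.

    autovacuum = on
    autovacuum_vacuum_threshold = 500        # half the default
    autovacuum_analyze_threshold = 250       # half the default
    autovacuum_vacuum_scale_factor = 0.2     # half the default
    autovacuum_analyze_scale_factor = 0.1    # half the default
    autovacuum_vacuum_cost_delay = 10        # milliseconds; 5-10 per the message above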
[ { "msg_contents": "What filesystem are you using - ext2/etx3/xfs/jfs/...? Does the SCSI\ncontroller have a battery backed cache? For ext3, mounting it with\ndata=writeback should give you quite a boost in write performance.\n\nWhat benchmark tool are you using - is it by any chance BenchmarkSQL?\n(since you mention that it is JDBC and prepared statements).\n\nJust to let you know, I've tested PostgreSQL 8.1.3 against a well-known\nproprietary DB (let's call it RS for \"Rising Sun\") on similar hardware\n(single Xeon CPU, 6Gb Ram, single SCSI disk for tables+indexes+pg_xlog)\nusing BenchmarkSQL and found that Postgres was capable of handling up to\n8 times (yes, 8 times) as many transactions per minute, starting at 2\ntimes as many for a single user going to 8 times as many at 10\nconcurrent users, consistent all the way up to 100 concurrent users.\nBenchmarkSQL stops at 100 users (\"terminals\") so I don't know what it\nlooks like with 200, 300 or 500 users.\n\nHeck, the single disk Postgres instance did even beat our RS production\nsystem in this benchmark, and in that case the RS instance has a fully\nequipped EMC SAN. (although low-end)\n\nI personally don't care about MySQL as I don't consider it to be a DBMS\nat all (breaking the consistency and durability ACID rules disqualifies\nit hands-down). That company/product is one of the reasons I'm ashamed\nof being swedish..\n\nBtw, check you logfile for hints regarding increasing max_fsm_pages, and\nconsider increasing checkpoint_segments as well. You could also play\nwith more aggressive bgwriter_* params to reduce the risk for long\nvacuum pauses.\n\n- Mikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Olivier\nAndreotti\nSent: den 18 maj 2006 11:57\nTo: [email protected]\nSubject: [PERFORM] Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle\n10g2\n\nHello,\n\nI'm running a benchmark with theses 3 databases, and the first results\nare not very good for PostgreSQL.\n\nPostgreSQL is 20% less performance than MySQL (InnoDB tables)\n\nMy benchmark uses the same server for theses 3 databases :\nDell Power edge - Xeon 2.8 Ghz - 2 Go Ram - 3 SCSI disks - Debian Sarge\n- Linux 2.6\n\nThe transactions are a random mix of request in read (select) and write\n(insert, delete, update) on many tables about 100 000 to 15 000 000\nrows.\n\nTransactions are executed from 500 connections.\n\nFor the tunning of PostgreSQL i use official documentation and theses\nweb sites :\n\nhttp://www.revsys.com/writings/postgresql-performance.html\nhttp://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n\n\nSome important points of my postgresql.conf file :\n\nmax_connections = 510\nshared_buffer = 16384\nmax_prepared_transactions = 510\nwork_mem = 1024\nmaintenance_work_mem = 1024\nfsync = off\nwal_buffers = 32\ncommit_delay = 500\ncheckpoint_segments = 10\ncheckpoint_timeout = 300\ncheckpoint_warning = 0\neffective_cache_size = 165 000\nautovaccuum = on\ndefault_transaction_isolation = 'read_committed'\n\nWhat do you think of my tunning ?\n\nBest regards.\n\nO.A\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Thu, 18 May 2006 16:11:51 +0200", "msg_from": "\"Mikael Carneholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2" }, { "msg_contents": "\"Mikael Carneholm\" <[email protected]> writes:\n> Btw, check you logfile for 
hints regarding increasing max_fsm_pages, and\n> consider increasing checkpoint_segments as well. You could also play\n> with more aggressive bgwriter_* params to reduce the risk for long\n> vacuum pauses.\n\nYeah, checkpoint_segments is a really critical number for any\nwrite-intensive situation. Pushing it up to 30 or more can make a big\ndifference. You might want to set checkpoint_warning to a large value\n(300 or so) so you can see in the log how often checkpoints are\nhappening. You really don't want checkpoints to happen more than about\nonce every five minutes, because not only does the checkpoint itself\ncost a lot of I/O, but there is a subsequent penalty of increased WAL\ntraffic due to fresh page images getting dumped into WAL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2006 10:48:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2 " }, { "msg_contents": "Hello everybody !\n\nThanks for all the advices, iI will try all theses new values, and\ni'll post my final values on this thread.\n\nAbout the benchmark and the results, i dont know if can publish values\nabout Oracle performance ? For MySQL and PostgreSQL, i think there is\nno problems.\n\nJust a last question about the pg_xlog : i understand that the\ndirectory must be moved but i have just 3 disks for the database :\ndisk 1 and 2 for the data, disk 3 for the indexes, where can i put the\npg_xlog ?\n\nOA.\n", "msg_date": "Fri, 19 May 2006 12:24:35 +0200", "msg_from": "\"Olivier Andreotti\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2" }, { "msg_contents": "\"Olivier Andreotti\" <[email protected]> writes:\n> Just a last question about the pg_xlog : i understand that the\n> directory must be moved but i have just 3 disks for the database :\n> disk 1 and 2 for the data, disk 3 for the indexes, where can i put the\n> pg_xlog ?\n\nIf you have three disks then put the xlog on one of them and everything\nelse on the other two. Separating out the indexes is way less important\nthan getting xlog onto its very own spindle (at least for\nwrite-intensive cases).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2006 09:10:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle 10g2 " } ]
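Tom's checkpoint advice above, written out against the settings posted at the start of the thread. checkpoint_warning only reports, so watch the log to see whether 30 segments is actually enough for this write rate; moving pg_xlog to its own disk, as Tom and Chris recommend, happens at the filesystem level (symlink the directory while the server is stopped) and has no postgresql.conf equivalent.

    checkpoint_segments = 30      # "30 or more" instead of the original 10
    checkpoint_timeout = 300
    checkpoint_warning = 300      # warns when checkpoints arrive closer together than this
    wal_buffers = 64              # the value suggested earlier in the discussion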
[ { "msg_contents": "Yes, regular versus full vacuum. Thanks for the comment but I was hoping to\ncome to that conclusion on my own by observing the affects of the different\nvacuums.\n\nMy original question was guidance on collecting data for confirmation on the\nimpact that maintenance of a large database (as a result of my applications\nregular usage over a period of time) has.\n\nI can du the various tables and compare their size before/after against the\nverbose output of a VACUUM FULL. I can use sar during all of this to monitor\ncpu and i/o activity. I can turn on transaction logging once I get a better\nidea of maintenance impact on my hardware so identify the biggest\ntransactions that might statement timeout if a VACUUM was running at the\nsame time.\n\nAny suggestions or comments related to collection of this type of data would\nbe helpful. I've already read the Postges 7.4 (yes, I'm stuck on 7.4)\nmanual, I was hoping for this mail-list' wisdom to supply me with some tips\nthat can only be learnt through painful experience. :-)\n\nThanks.\n\n- Chris\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Wednesday, May 17, 2006 3:25 PM\nTo: Chris Mckenzie\nCc: [email protected]\nSubject: Re: [PERFORM] Performance/Maintenance test result collection\n\n\nOn Wed, May 17, 2006 at 01:50:22PM -0400, Chris Mckenzie wrote:\n> Hi.\n> \n> I'm trying to plan for a performance test session where a large \n> database is subject to regular hits from my application while both \n> regular and full database maintenance is being performed. The idea is \n> to gain a better idea on the impact maintenance will have on regular \n> usage, and when to reasonably schedule both regular and full \n> maintenance.\n\nWhat do you mean by \"regular and full maintenance\"? Do you mean VACUUM FULL?\n\nIf you're vacuuming appropriately you shouldn't have any need to ever VACUUM\nFULL...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] Performance/Maintenance test result collection\n\n\nYes, regular versus full vacuum. Thanks for the comment but I was hoping to come to that conclusion on my own by observing the affects of the different vacuums.\nMy original question was guidance on collecting data for confirmation on the impact that maintenance of a large database (as a result of my applications regular usage over a period of time) has.\nI can du the various tables and compare their size before/after against the verbose output of a VACUUM FULL. I can use sar during all of this to monitor cpu and i/o activity. I can turn on transaction logging once I get a better idea of maintenance impact on my hardware so identify the biggest transactions that might statement timeout if a VACUUM was running at the same time.\nAny suggestions or comments related to collection of this type of data would be helpful. I've already read the Postges 7.4 (yes, I'm stuck on 7.4) manual, I was hoping for this mail-list' wisdom to supply me with some tips that can only be learnt through painful experience. :-)\nThanks.\n\n- Chris\n\n-----Original Message-----\nFrom: Jim C. 
Nasby [mailto:[email protected]] \nSent: Wednesday, May 17, 2006 3:25 PM\nTo: Chris Mckenzie\nCc: [email protected]\nSubject: Re: [PERFORM] Performance/Maintenance test result collection\n\n\nOn Wed, May 17, 2006 at 01:50:22PM -0400, Chris Mckenzie wrote:\n> Hi.\n> \n> I'm trying to plan for a performance test session where a large \n> database is subject to regular hits from my application while both \n> regular and full database maintenance is being performed. The idea is \n> to gain a better idea on the impact maintenance will have on regular \n> usage, and when to reasonably schedule both regular and full \n> maintenance.\n\nWhat do you mean by \"regular and full maintenance\"? Do you mean VACUUM FULL?\n\nIf you're vacuuming appropriately you shouldn't have any need to ever VACUUM FULL...\n-- \nJim C. Nasby, Sr. Engineering Consultant      [email protected]\nPervasive Software      http://pervasive.com    work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461", "msg_date": "Thu, 18 May 2006 11:20:17 -0400", "msg_from": "Chris Mckenzie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance/Maintenance test result collection" }, { "msg_contents": "On Thu, May 18, 2006 at 11:20:17AM -0400, Chris Mckenzie wrote:\n> Yes, regular versus full vacuum. Thanks for the comment but I was hoping to\n> come to that conclusion on my own by observing the affects of the different\n> vacuums.\n> \n> My original question was guidance on collecting data for confirmation on the\n> impact that maintenance of a large database (as a result of my applications\n> regular usage over a period of time) has.\n> \n> I can du the various tables and compare their size before/after against the\n> verbose output of a VACUUM FULL. I can use sar during all of this to monitor\n> cpu and i/o activity. I can turn on transaction logging once I get a better\n> idea of maintenance impact on my hardware so identify the biggest\n> transactions that might statement timeout if a VACUUM was running at the\n> same time.\n \nWell, vacuum full re-writes the table completely from scratch. Lazy\nvacuum reads the entire table (just like full), but only writes out\npages that have dead space on them.\n\nBut if you're wondering about the impact that will have on your\napplication, you can stop wondering, because vacuum full will\nessentially shut your application down because it locks out use of the\ntable while it's being vacuumed.\n\n> Any suggestions or comments related to collection of this type of data would\n> be helpful. I've already read the Postges 7.4 (yes, I'm stuck on 7.4)\n> manual, I was hoping for this mail-list' wisdom to supply me with some tips\n> that can only be learnt through painful experience. :-)\n\nIf you're stuck on 7.4, at least make sure you're using the most recent\nversion. Otherwise you're exposing yourself to a number of data loss\nbugs.\n\nAs for vacuuming, it depends a lot on what your application is doing. If\nyou have frequent-enough slow periods (like at night), you can probably\nschedule a database-wide vacuum during that time, possibly supplimented\nby vacuums on critical tables during the day. If you have something\ncloser to a 24x7 load then pg_autovacuum is probably your best bet,\nalong with vacuum_cost_delay (if that's available in 7.4; it's been so\nlong I don't remember).\n\nThere's a few articles in our knowledge base\n(http://www.pervasivepostgres.com/kb/index.asp) that might be worth\nreading as well (search for 'vacuum'). 
In particular, \"Is PostgreSQL\nremembering what I vacuumed\" has some critical information about\nmanaging the free space map.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 19 May 2006 15:50:14 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance/Maintenance test result collection" } ]
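A minimal sketch of the before/after measurement discussed in this thread, assuming a 7.4-era server, so it sticks to pg_class rather than newer size functions; the table name in the VACUUM line is a placeholder:

-- Snapshot relation sizes before the maintenance run (relpages is counted in
-- 8 kB blocks and is refreshed by VACUUM and ANALYZE).
SELECT relname, relkind, relpages, reltuples
  FROM pg_class
 WHERE relkind IN ('r', 'i')
 ORDER BY relpages DESC
 LIMIT 20;

-- Run the maintenance being measured while sar is logging cpu/io activity.
VACUUM VERBOSE ANALYZE my_large_table;  -- placeholder table name

-- Re-run the pg_class query and diff the relpages column: plain VACUUM
-- usually leaves the file the same size but makes the dead space reusable,
-- while VACUUM FULL shrinks it at the cost of an exclusive lock.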
[ { "msg_contents": "Could someone explain the results of the following? This is with postgres 8.1.2 on a database that was just vacuum-verbose-analyzed. I have packets_i4 index which I am expecting to be used with this query but as you can see, I have have to convince its usage by turning off other scans. The total runtime is pretty drastic when the index is not chosen. When using a cursor, the query using the index is the only one that provides immediate results. Also, what is Recheck Cond?\n \n adbs_db=# \\d packets\n Table \"public.packets\"\n Column | Type | Modifiers \n-------------------------+------------------------+--------------------\n system_time_secs | integer | not null\n system_time_subsecs | integer | not null\n spacecraft_time_secs | integer | not null\n spacecraft_time_subsecs | integer | not null\n mnemonic | character varying(64) | \n mnemonic_id | integer | not null\n data_length | integer | not null\n data | bytea | not null\n volume_label | character varying(128) | not null\n tlm_version_name | character varying(32) | not null\n environment_name | character varying(32) | not null\n quality | integer | not null default 0\nIndexes:\n \"packets_i1\" btree (volume_label)\n \"packets_i4\" btree (environment_name, system_time_secs, system_time_subsecs, mnemonic)\n \"packets_i5\" btree (environment_name, spacecraft_time_secs, spacecraft_time_subsecs, mnemonic)\n\n \n adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label\n from packets where environment_name='PASITCTX01' \n and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n \n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on packets (cost=247201.41..2838497.72 rows=12472989 width=47) (actual time=573856.344..771866.516 rows=13365371 loops=1)\n Recheck Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n -> Bitmap Index Scan on packets_i4 (cost=0.00..247201.41 rows=12472989 width=0) (actual time=573484.199..573484.199 rows=13365371 loops=1)\n Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 777208.041 ms\n(5 rows)\n\n \n adbs_db=# set enable_bitmapscan to off;\nSET\n\n adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label\n from packets where environment_name='PASITCTX01' \n and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n \n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on packets (cost=0.00..3045957.30 rows=12472989 width=47) (actual time=58539.693..493056.015 rows=13365371 loops=1)\n Filter: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 498620.963 ms\n(3 rows)\n\n \n adbs_db=# set enable_seqscan to off;\n SET\n \n adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label\n from packets where environment_name='PASITCTX01' \n and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n \n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using packets_i4 on packets 
(cost=0.00..19908567.85 rows=12472989 width=47) (actual time=47.691..206028.754 rows=13365371 loops=1)\n Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 211644.843 ms\n(3 rows)\n\n\n\t\t\n---------------------------------\nBlab-away for as little as 1�/min. Make PC-to-Phone Calls using Yahoo! Messenger with Voice.\nCould someone explain the results of the following?  This is with postgres 8.1.2 on a database that was just vacuum-verbose-analyzed.  I have packets_i4 index which I am expecting to be used with this query but as you can see, I have have to convince its usage by turning off other scans.  The total runtime is pretty drastic when the index is not chosen.  When using a cursor, the query using the index is the only one that provides immediate results.  Also, what is Recheck Cond?   adbs_db=#   \\d packets                        Table \"public.packets\"         Column          |          Type         \n |     Modifiers      -------------------------+------------------------+-------------------- system_time_secs        | integer                | not null system_time_subsecs     | integer                | not null spacecraft_time_secs    | integer                | not null spacecraft_time_subsecs | integer                | not null mnemonic                | character varying(64)  |\n  mnemonic_id             | integer                | not null data_length             | integer                | not null data                    | bytea                  | not null volume_label            | character varying(128) | not null tlm_version_name        | character varying(32)  | not null environment_name        | character\n varying(32)  | not null quality                 | integer                | not null default 0Indexes:    \"packets_i1\" btree (volume_label)    \"packets_i4\" btree (environment_name, system_time_secs, system_time_subsecs, mnemonic)    \"packets_i5\" btree (environment_name, spacecraft_time_secs, spacecraft_time_subsecs, mnemonic)   adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label   from packets   where environment_name='PASITCTX01'   and system_time_secs>=1132272000 and system_time_secs<=1143244800;                QUERY\n PLAN                                                                    ------------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on packets  (cost=247201.41..2838497.72 rows=12472989 width=47) (actual time=573856.344..771866.516 rows=13365371 loops=1)   Recheck Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))   ->  Bitmap Index Scan on packets_i4  (cost=0.00..247201.41 rows=12472989 width=0)\n (actual time=573484.199..573484.199 rows=13365371 loops=1)         Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 777208.041 ms(5 rows)   adbs_db=# set enable_bitmapscan to off;SET adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label   from packets   where environment_name='PASITCTX01'   and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n                                                              QUERY PLAN                                                               
--------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on packets  (cost=0.00..3045957.30 rows=12472989 width=47)\n (actual time=58539.693..493056.015 rows=13365371 loops=1)   Filter: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 498620.963 ms(3 rows)   adbs_db=# set enable_seqscan to off; SET   adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label   from packets   where environment_name='PASITCTX01'   and system_time_secs>=1132272000 and system_time_secs<=1143244800;             QUERY\n PLAN                                                                   ------------------------------------------------------------------------------------------------------------------------------------------------ Index Scan using packets_i4 on packets  (cost=0.00..19908567.85 rows=12472989 width=47) (actual time=47.691..206028.754 rows=13365371 loops=1)   Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 211644.843 ms(3 rows)\nBlab-away for as little as 1�/min. Make PC-to-Phone Calls using Yahoo! Messenger with Voice.", "msg_date": "Thu, 18 May 2006 08:52:04 -0700 (PDT)", "msg_from": "Stephen Byers <[email protected]>", "msg_from_op": true, "msg_subject": "why is bitmap index chosen for this query?" }, { "msg_contents": "On Thu, May 18, 2006 at 08:52:04AM -0700, Stephen Byers wrote:\n> Could someone explain the results of the following?\n\nIt sounds like PostgreSQL badly overestimates the cost of the index scan.\nDoes the table perchance fit completely into memory, without\neffective_cache_size indicating that?\n\n> Also, what is Recheck Cond?\n\nThe bitmap index scan will by default allocate one bit per tuple. If it can't\nhold a complete bitmap of every tuple in memory, it will fall back to\nallocating one bit per (8 kB) page, since it will have to read the entire\npage anyhow, and the most dramatic cost is the seek. However, in the latter\ncase, it will also get a few extra records that don't match the original\nclause, so it will have to recheck the condition (\"Recheck Cond\") before\noutputting the tuples to the parent node.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 18 May 2006 17:59:06 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> wrote: \nIt sounds like PostgreSQL badly overestimates the cost of the index scan.\nDoes the table perchance fit completely into memory, without\neffective_cache_size indicating that?\n\n Don't know the exact way to answer your question, but my initial instinct is \"no way.\"\n select pg_relation_size('packets');\n pg_relation_size \n------------------\n 19440115712\n\n 19GB. So it's a big table. The query I submitted earlier returns about 13M rows and the table currently has about 38M rows.\n \n \n\n\t\t\n---------------------------------\nLove cheap thrills? Enjoy PC-to-Phone calls to 30+ countries for just 2�/min with Yahoo! Messenger with Voice.\n\"Steinar H. 
Gunderson\" <[email protected]> wrote: It sounds like PostgreSQL badly overestimates the cost of the index scan.Does the table perchance fit completely into memory, withouteffective_cache_size indicating that? Don't know the exact way to answer your question, but my initial instinct is \"no way.\"   select pg_relation_size('packets');  pg_relation_size ------------------      19440115712 19GB.  So it's a big table.  The query I submitted earlier returns about 13M rows and the table currently has about 38M rows.    \nLove cheap thrills? Enjoy PC-to-Phone calls to 30+ countries for just 2�/min with Yahoo! Messenger with Voice.", "msg_date": "Thu, 18 May 2006 09:41:23 -0700 (PDT)", "msg_from": "Stephen Byers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "On Thu, May 18, 2006 at 09:41:23AM -0700, Stephen Byers wrote:\n> Does the table perchance fit completely into memory, without\n> effective_cache_size indicating that?\n> \n> Don't know the exact way to answer your question, but my initial instinct is \"no way.\"\n\nWhat about the working set? Have you tried running the queries multiple times\nin a row to see if the results change? It might be that your initial bitmap\nscan puts all the relevant bits into cache, which will skew the results.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 18 May 2006 18:46:40 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> What about the working set? Have you tried running the queries multiple times\n> in a row to see if the results change? It might be that your initial bitmap\n> scan puts all the relevant bits into cache, which will skew the results.\n\nIf the examples were done in the order shown, the seqscan ought to have\npretty well blown out the cache ... but I concur that it'd be\ninteresting to check whether repeated executions of the same plan show\nmarkedly different times.\n\nAlso, is the index order closely correlated to the actual physical\ntable order?\n\nWhat is work_mem set to, and does increasing it substantially make the\nbitmap scan work any better?\n\nConsidering that the query is fetching about half of the table, I'd have\nthought that the planner was correct to estimate that bitmap or seqscan\nought to win. For the plain indexscan to win, the order correlation\nmust be quite strong, and I'm also guessing that the bitmap scan must\nhave run into some substantial trouble (like discarding a lot of info\nbecause of lack of work_mem).\n\nIIRC, the planner doesn't currently try to model the effects of a bitmap\nscan going into lossy mode, which is something it probably should try to\naccount for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2006 13:34:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query? " }, { "msg_contents": "I repeated explain analyze on the query 5 times and it came up with the same plan.\n \nYou asked about index order and physical table order. In general the index order is indeed close to the same order as the physical table order. However, this query is likely an exception. The data is actually from a backup server that has filled a hole for some of the time range that I'm specifying in my query.\n \n Work_mem was set to 10240. 
After your suggestion, I bumped it to 102400 and it looks like it did significantly impact performance. \n \n adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label from packets\nadbs_db-# where environment_name='PASITCTX01' \nadbs_db-# and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on packets (cost=247205.64..2838556.55 rows=12473252 width=47) (actual time=32118.943..187075.742 rows=13365371 loops=1)\n Recheck Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n -> Bitmap Index Scan on packets_i4 (cost=0.00..247205.64 rows=12473252 width=0) (actual time=30370.789..30370.789 rows=13365371 loops=1)\n Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 191995.431 ms\n(5 rows)\n adbs_db=# \nadbs_db=# \nadbs_db=# adbs_db=# set enable_bitmapscan to off;\nSET\nadbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label from packets\nadbs_db-# where environment_name='PASITCTX01' \nadbs_db-# and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on packets (cost=0.00..3046021.47 rows=12473252 width=47) (actual time=56616.457..475839.789 rows=13365371 loops=1)\n Filter: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 481228.409 ms\n(3 rows)\n adbs_db=# \nadbs_db=# \nadbs_db=# adbs_db=# set enable_seqscan to off;\nSET\nadbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label from packets\nadbs_db-# where environment_name='PASITCTX01' \nadbs_db-# and system_time_secs>=1132272000 and system_time_secs<=1143244800;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using packets_i4 on packets (cost=0.00..19909080.77 rows=12473273 width=47) (actual time=3.511..188273.177 rows=13365371 loops=1)\n Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))\n Total runtime: 194061.497 ms\n(3 rows)\n \n \n \n Wow -- so what does that mean? Do I need to leave my work_mem at 100MB?? I mentioned that my application actually uses a cursor to walk through this data. Even though the bitmap scan technically had the fastest time with explain analyze, it takes a long while (20 seconds) before the results start to come back through the cursor. Conversely, with the index scan, results immediately come back through the cursor method (which is more desirable). Thoughts? \n \n Example: \n begin;\ndeclare myCursor cursor for \n select spacecraft_time_secs,mnemonic,volume_label from packets\n where environment_name='PASITCTX01' \n and system_time_secs>=1132272000 \n and system_time_secs<=1143244800;\nfetch 10 from myCursor;\nend;\n\n \n PS, this is on a Sun Fire V240 with 4GB RAM, Solaris 8.\n \n Thanks,\n Steve\n \n \nTom Lane <[email protected]> wrote:\n \"Steinar H. Gunderson\" writes:\n> What about the working set? 
Have you tried running the queries multiple times\n> in a row to see if the results change? It might be that your initial bitmap\n> scan puts all the relevant bits into cache, which will skew the results.\n\nIf the examples were done in the order shown, the seqscan ought to have\npretty well blown out the cache ... but I concur that it'd be\ninteresting to check whether repeated executions of the same plan show\nmarkedly different times.\n\nAlso, is the index order closely correlated to the actual physical\ntable order?\n\nWhat is work_mem set to, and does increasing it substantially make the\nbitmap scan work any better?\n\nConsidering that the query is fetching about half of the table, I'd have\nthought that the planner was correct to estimate that bitmap or seqscan\nought to win. For the plain indexscan to win, the order correlation\nmust be quite strong, and I'm also guessing that the bitmap scan must\nhave run into some substantial trouble (like discarding a lot of info\nbecause of lack of work_mem).\n\nIIRC, the planner doesn't currently try to model the effects of a bitmap\nscan going into lossy mode, which is something it probably should try to\naccount for.\n\nregards, tom lane\n\n\n\t\t\t\n---------------------------------\nSneak preview the all-new Yahoo.com. It's not radically different. Just radically better. \nI repeated explain analyze on the query 5 times and it came up with the same plan. You asked about index order and physical table order.  In general the index order is indeed close to the same order as the physical table order.  However, this query is likely an exception.  The data is actually from a backup server that has filled a hole for some of the time range that I'm specifying in my query.   Work_mem was set to 10240.  After your suggestion, I bumped it to 102400 and it looks like it did significantly impact performance.    
adbs_db=# explain analyze select spacecraft_time_secs,mnemonic,volume_label from packetsadbs_db-#   where environment_name='PASITCTX01' adbs_db-#   and system_time_secs>=1132272000 and\n system_time_secs<=1143244800;                                                                   QUERY PLAN                                                                   \n ------------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on packets  (cost=247205.64..2838556.55 rows=12473252 width=47) (actual time=32118.943..187075.742 rows=13365371 loops=1)   Recheck Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800))   ->  Bitmap Index Scan on packets_i4  (cost=0.00..247205.64 rows=12473252 width=0) (actual time=30370.789..30370.789 rows=13365371 loops=1)         Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 191995.431 ms(5 rows) adbs_db=# adbs_db=# adbs_db=# adbs_db=# set enable_bitmapscan to off;SETadbs_db=#\n explain analyze select spacecraft_time_secs,mnemonic,volume_label from packetsadbs_db-#   where environment_name='PASITCTX01' adbs_db-#   and system_time_secs>=1132272000 and system_time_secs<=1143244800;                                                              QUERY\n PLAN                                                               --------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on packets  (cost=0.00..3046021.47 rows=12473252 width=47) (actual time=56616.457..475839.789 rows=13365371 loops=1)   Filter: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 481228.409 ms(3 rows) adbs_db=# adbs_db=# adbs_db=# adbs_db=# set enable_seqscan to off;SETadbs_db=# explain analyze\n select spacecraft_time_secs,mnemonic,volume_label from packetsadbs_db-#   where environment_name='PASITCTX01' adbs_db-#   and system_time_secs>=1132272000 and system_time_secs<=1143244800;                                                                  QUERY\n PLAN                                                                   ----------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using packets_i4 on packets  (cost=0.00..19909080.77 rows=12473273 width=47) (actual time=3.511..188273.177 rows=13365371 loops=1)   Index Cond: (((environment_name)::text = 'PASITCTX01'::text) AND (system_time_secs >= 1132272000) AND (system_time_secs <= 1143244800)) Total runtime: 194061.497 ms(3 rows)       Wow\n -- so what does that mean?  Do I need to leave my work_mem at 100MB??  I mentioned that my application actually uses a cursor to walk through this data.  Even though the bitmap scan technically had the fastest time with explain analyze, it takes a long while (20 seconds) before the results start to come back through the cursor.  Conversely, with the index scan, results immediately come back through the cursor method (which is more desirable).  Thoughts?    
Example: begin;declare myCursor cursor for   select spacecraft_time_secs,mnemonic,volume_label from packets  where environment_name='PASITCTX01'   and system_time_secs>=1132272000   and system_time_secs<=1143244800;fetch 10 from myCursor;end;   PS, this is on a Sun Fire V240 with 4GB RAM, Solaris 8.   Thanks, Steve\n  Tom Lane <[email protected]> wrote: \"Steinar H. Gunderson\" writes:> What about the working set? Have you tried running the queries multiple times> in a row to see if the results change? It might be that your initial bitmap> scan puts all the relevant bits into cache, which will skew the results.If the examples were done in the order shown, the seqscan ought to havepretty well blown out the cache ... but I concur that it'd beinteresting to check whether repeated executions of the same plan showmarkedly different times.Also, is the index order closely correlated to the actual physicaltable order?What is work_mem set to, and does increasing it substantially make thebitmap scan work any better?Considering that the query is\n fetching about half of the table, I'd havethought that the planner was correct to estimate that bitmap or seqscanought to win. For the plain indexscan to win, the order correlationmust be quite strong, and I'm also guessing that the bitmap scan musthave run into some substantial trouble (like discarding a lot of infobecause of lack of work_mem).IIRC, the planner doesn't currently try to model the effects of a bitmapscan going into lossy mode, which is something it probably should try toaccount for.regards, tom lane\nSneak preview the all-new Yahoo.com. It's not radically different. Just radically better.", "msg_date": "Thu, 18 May 2006 12:38:18 -0700 (PDT)", "msg_from": "Stephen Byers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why is bitmap index chosen for this query? " }, { "msg_contents": "On Thu, May 18, 2006 at 12:38:18PM -0700, Stephen Byers wrote:\n> I repeated explain analyze on the query 5 times and it came up with the same plan.\n\nYes, but did it end up with the same runtime? That's the interesting part --\nthe plan will almost always be identical between explain analyze runs given\nthat you haven't done anything in between them.\n\n> You asked about index order and physical table order. In general the index\n> order is indeed close to the same order as the physical table order.\n> However, this query is likely an exception. The data is actually from a\n> backup server that has filled a hole for some of the time range that I'm\n> specifying in my query.\n\nWell, it still isn't all that far-fetched to believe that the data has lots\nof correlation (which helps the index scan quite a lot) that the planner\nisn't able to pick up. I don't know the details here, so I can't tell you how\nthe correlation for such a query (WHERE a=foo and b between bar and baz) is\nestimated. Something tells me someone else might, though. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 18 May 2006 21:48:13 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "Yes, here are the runtimes for the repeated query.\n Total runtime: 748716.750 ms\nTotal runtime: 749170.934 ms\nTotal runtime: 744113.594 ms\nTotal runtime: 746314.251 ms\nTotal runtime: 742083.732 ms\n\n Thanks,\n Steve\n\n\"Steinar H. 
Gunderson\" <[email protected]> wrote:\n On Thu, May 18, 2006 at 12:38:18PM -0700, Stephen Byers wrote:\n> I repeated explain analyze on the query 5 times and it came up with the same plan.\n\nYes, but did it end up with the same runtime? That's the interesting part --\nthe plan will almost always be identical between explain analyze runs given\nthat you haven't done anything in between them.\n\n> You asked about index order and physical table order. In general the index\n> order is indeed close to the same order as the physical table order.\n> However, this query is likely an exception. The data is actually from a\n> backup server that has filled a hole for some of the time range that I'm\n> specifying in my query.\n\nWell, it still isn't all that far-fetched to believe that the data has lots\nof correlation (which helps the index scan quite a lot) that the planner\nisn't able to pick up. I don't know the details here, so I can't tell you how\nthe correlation for such a query (WHERE a=foo and b between bar and baz) is\nestimated. Something tells me someone else might, though. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n\t\t\t\n---------------------------------\nSneak preview the all-new Yahoo.com. It's not radically different. Just radically better. \nYes, here are the runtimes for the repeated query. Total runtime: 748716.750 msTotal runtime: 749170.934 msTotal runtime: 744113.594 msTotal runtime: 746314.251 msTotal runtime: 742083.732 ms Thanks, Steve\"Steinar H. Gunderson\" <[email protected]> wrote: On Thu, May 18, 2006 at 12:38:18PM -0700, Stephen Byers wrote:> I repeated explain analyze on the query 5 times and it came up with the same plan.Yes, but did it end up with the same runtime? That's the interesting part --the plan will almost always be identical between explain analyze runs giventhat you haven't done anything in between them.> You asked about index order and physical table order. In general the index> order is indeed close to the same order as the physical table\n order.> However, this query is likely an exception. The data is actually from a> backup server that has filled a hole for some of the time range that I'm> specifying in my query.Well, it still isn't all that far-fetched to believe that the data has lotsof correlation (which helps the index scan quite a lot) that the plannerisn't able to pick up. I don't know the details here, so I can't tell you howthe correlation for such a query (WHERE a=foo and b between bar and baz) isestimated. Something tells me someone else might, though. :-)/* Steinar */-- Homepage: http://www.sesse.net/---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives?http://archives.postgresql.org\nSneak preview the all-new Yahoo.com. It's not radically different. Just radically better.", "msg_date": "Thu, 18 May 2006 12:53:16 -0700 (PDT)", "msg_from": "Stephen Byers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "On Thu, May 18, 2006 at 12:53:16PM -0700, Stephen Byers wrote:\n> Yes, here are the runtimes for the repeated query.\n> Total runtime: 748716.750 ms\n> Total runtime: 749170.934 ms\n> Total runtime: 744113.594 ms\n> Total runtime: 746314.251 ms\n> Total runtime: 742083.732 ms\n\nWith which options enabled? 
This isn't even close to any of the three times\nyou already posted.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 18 May 2006 22:16:14 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "You may be comparing the values to Tom's suggestion to bump up work_mem. Take a look at the original posting (Total runtime: 777208.041 ms for the bitmap scan)\n \n -Steve\n \n\"Steinar H. Gunderson\" <[email protected]> wrote:\n On Thu, May 18, 2006 at 12:53:16PM -0700, Stephen Byers wrote:\n> Yes, here are the runtimes for the repeated query.\n> Total runtime: 748716.750 ms\n> Total runtime: 749170.934 ms\n> Total runtime: 744113.594 ms\n> Total runtime: 746314.251 ms\n> Total runtime: 742083.732 ms\n\nWith which options enabled? This isn't even close to any of the three times\nyou already posted.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to [email protected] so that your\nmessage can get through to the mailing list cleanly\n\n\n\t\t\n---------------------------------\nYahoo! Messenger with Voice. Make PC-to-Phone Calls to the US (and 30+ countries) for 2�/min or less.\nYou may be comparing the values to Tom's suggestion to bump up work_mem.  Take a look at the original posting (Total runtime: 777208.041 ms for the bitmap scan)   -Steve \"Steinar H. Gunderson\" <[email protected]> wrote: On Thu, May 18, 2006 at 12:53:16PM -0700, Stephen Byers wrote:> Yes, here are the runtimes for the repeated query.> Total runtime: 748716.750 ms> Total runtime: 749170.934 ms> Total runtime: 744113.594 ms> Total runtime: 746314.251 ms> Total runtime: 742083.732 msWith which options enabled? This isn't even close to any of the three timesyou already posted./* Steinar */-- Homepage: http://www.sesse.net/---------------------------(end of broadcast)---------------------------TIP 1: if\n posting/reading through Usenet, please send an appropriatesubscribe-nomail command to [email protected] so that yourmessage can get through to the mailing list cleanly\nYahoo! Messenger with Voice. Make PC-to-Phone Calls to the US (and 30+ countries) for 2�/min or less.", "msg_date": "Thu, 18 May 2006 13:26:26 -0700 (PDT)", "msg_from": "Stephen Byers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why is bitmap index chosen for this query?" }, { "msg_contents": "On Thu, May 18, 2006 at 12:38:18PM -0700, Stephen Byers wrote:\n> I repeated explain analyze on the query 5 times and it came up with the same plan.\n> \n> You asked about index order and physical table order. In general the index order is indeed close to the same order as the physical table order. However, this query is likely an exception. The data is actually from a backup server that has filled a hole for some of the time range that I'm specifying in my query.\n \nWhat's SELECT correlation FROM pg_stats WHERE tablename='packets' AND\nattname='environment_name' show?\n\nWhat's effective_cache_size and random_page_cost set to?\n\nAlso, out of curiosity, why not just use a timestamp instead of two\nint's for storing time?\n\n> Wow -- so what does that mean? Do I need to leave my work_mem at 100MB?? I mentioned that my application actually uses a cursor to walk through this data. 
Even though the bitmap scan technically had the fastest time with explain analyze, it takes a long while (20 seconds) before the results start to come back through the cursor. Conversely, with the index scan, results immediately come back through the cursor method (which is more desirable). Thoughts? \n\nDo you really need to use a cursor? It's generally less efficient than\ndoing things with a single SQL statement, depending on what exactly\nyou're doing.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 19 May 2006 15:59:15 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is bitmap index chosen for this query?" } ]
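A sketch of the per-session experiments discussed in this thread, using the table and literal values from the query above; these are diagnostic settings for a single session, not values to make permanent:

-- How well does the physical row order track the indexed columns? Values
-- near 1 or -1 favour the plain index scan; values near 0 favour the
-- bitmap or sequential plans the planner was picking.
SELECT tablename, attname, correlation
  FROM pg_stats
 WHERE tablename = 'packets'
   AND attname IN ('environment_name', 'system_time_secs');

-- Give this session more sort/bitmap memory (value is in kB on 8.1).
SET work_mem = 102400;
EXPLAIN ANALYZE
SELECT spacecraft_time_secs, mnemonic, volume_label
  FROM packets
 WHERE environment_name = 'PASITCTX01'
   AND system_time_secs BETWEEN 1132272000 AND 1143244800;

-- Extra work_mem keeps the bitmap exact instead of lossy, so fewer rows have
-- to be re-tested against the Recheck Cond; the enable_seqscan and
-- enable_bitmapscan switches toggled earlier in the thread are likewise
-- diagnostic tools and should not be turned off globally.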
[ { "msg_contents": "(It's been an hour and I don't see my message on the list, so I'm sending it again. I've moved the queries and analyze output out of the email in case it was rejected for being too long)\n\nquery: http://pastebin.ca/57218\n\nIn the pictures table all the ratings have a shared index \n \nCREATE INDEX idx_rating ON pictures USING btree (rating_nudity, rating_violence, rating_sex, rating_racism, rating_spoilers, rating_yaoi, rating_yuri, rating_profanity);\n \nand approved and date_submitted and user_id also have their own btree indexes.\n \nIn the picture_categories table pid and cat_id have their own btree indices plus one together. \n\nFull table definition: http://pastebin.ca/57219\n\nThe cat_id and rating values vary from query to query. The one listed above took 54 seconds in a test run just now. Here is the explain analyze: http://pastebin.ca/57220\n\n\nBoth pictures and picture_categories have about 287,000 rows.\n \nThis query needs to run in under about a second or it kills my site by clogging apache slots (apache maxes out at 256 and I can have several hundred people on my site at a time). How can I make it run faster?\n \n \nThe server is a dual Xeon with a gig of RAM dedicated mostly to PostgreSQL.\nHere are the changed lines in my postgresql.conf: http://pastebin.ca/57222\n\nI know hyperthreading is considered something that can slow down a server, but with my very high concurrency (averaging about 400-500 concurrent users during peak hours) I am hoping the extra virtual CPUs will help. Does anyone have experience that says different at high concurrency?", "msg_date": "Fri, 19 May 2006 15:56:49 -0700", "msg_from": "\"Cstdenis\" <[email protected]>", "msg_from_op": true, "msg_subject": "How can I make this query faster (resend)" }, { "msg_contents": "On Fri, May 19, 2006 at 03:56:49PM -0700, Cstdenis wrote:\n> (Its been a hour and I dont see my message on the list so I'm sending it again. 
I've moved the queries and analyze out of the email incase it was rejected because too long)\n> \n> query: http://pastebin.ca/57218\n> \n> In the pictures table all the ratings have a shared index \n> \n> CREATE INDEX idx_rating ON pictures USING btree (rating_nudity, rating_violence, rating_sex, rating_racism, rating_spoilers, rating_yaoi, rating_yuri, rating_profanity);\n> \n> and approved and date_submitted and user_id also have their own btree indexes.\n> \n> In the picture_categories table pid and cat_id have their own btree indices plus one together. \n> \n> Full table definition: http://pastebin.ca/57219\n> \n> the cat_id and rating values vary from query to query. The one listed above took 54 seconds in a test run just now. Here is explain analyze: http://pastebin.ca/57220\n \npictures is the interesting table here. It looks like the planner would\ndo better to choose something other than a nested loop on it. Try\nrunning EXPLAIN ANALYZE on the query with enable_nestloop=off and see\nwhat you get (you'll need to compare it to what you get with\nenable_nestloop on to see what the change is).\n\n> Both pictures and picture categories have about 287,000 rows\n> \n> This query needs to run in under about a second or it kills my site by clogging apache slots (apache maxes out at 256 and I can have several hundred people on my site at a time). How can I make it run faster?\n> \n> \n> Server is a dual xeon with a gig of ram dedicated mostly to postgresql.\n> Here is the changed lines in my postgresql.conf: http://pastebin.ca/57222\n\nI suspect the low work_mem may be why it's using a nested loop. In\naddition to the test above, it would be interesting to see what happens\nto the plan if you set work_mem to 10000.\n\nTo be honest, you're pushing things expecting a machine with only 1G to\nserve 300 active connections. How large is the database itself?\n\n> I know hyperthreading is considered something that can slow down a server but with my very high concurancy (averages about 400-500 concurant users during peak hours) I am hoping the extra virtual CPUs wil help. Anyone have experance that says diferent at high concurancy?\n\nBest bet is to try it and see. Generally, people find HT hurts, but I\nrecently saw it double the performance of pgbench on a windows XP\nmachine, so it's possible that windows is just more clever about how to\nuse it than linux is.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 22 May 2006 10:20:18 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make this query faster (resend)" }, { "msg_contents": "Hi, Cstendis,\n\nCstdenis wrote:\n\n> Server is a dual xeon with a gig of ram dedicated mostly to postgresql.\n> Here is the changed lines in my postgresql.conf: http://pastebin.ca/57222\n\n3M is really low for a production server.\n\nTry using pg_pool and limiting it to about 30 or so backend connections,\nand then give them at least 30 megs of RAM each.\n\nThis should also cut down the connection creation overhead.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Mon, 22 May 2006 17:30:29 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make this query faster (resend)" }, { "msg_contents": "From: \"Jim C. Nasby\" <[email protected]>\nTo: \"Cstdenis\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, May 22, 2006 8:20 AM\nSubject: Re: [PERFORM] How can I make this query faster (resend)\n\n\n> On Fri, May 19, 2006 at 03:56:49PM -0700, Cstdenis wrote:\n> > (Its been a hour and I dont see my message on the list so I'm sending it\nagain. I've moved the queries and analyze out of the email incase it was\nrejected because too long)\n> >\n> > query: http://pastebin.ca/57218\n> >\n> > In the pictures table all the ratings have a shared index\n> >\n> > CREATE INDEX idx_rating ON pictures USING btree (rating_nudity,\nrating_violence, rating_sex, rating_racism, rating_spoilers, rating_yaoi,\nrating_yuri, rating_profanity);\n> >\n> > and approved and date_submitted and user_id also have their own btree\nindexes.\n> >\n> > In the picture_categories table pid and cat_id have their own btree\nindices plus one together.\n> >\n> > Full table definition: http://pastebin.ca/57219\n> >\n> > the cat_id and rating values vary from query to query. The one listed\nabove took 54 seconds in a test run just now. Here is explain analyze:\nhttp://pastebin.ca/57220\n>\n> pictures is the interesting table here. It looks like the planner would\n> do better to choose something other than a nested loop on it. Try\n> running EXPLAIN ANALYZE on the query with enable_nestloop=off and see\n> what you get (you'll need to compare it to what you get with\n> enable_nestloop on to see what the change is).\n\nWith enable_nestloop=off the same query as is explained further down in this\nemail took much longer 63 seconds insted of 6. It decided to do sequencial\nscans on pictures and users with nested loop disabled.\n\nMerge Join (cost=146329.63..146963.96 rows=231 width=66) (actual\ntime=61610.538..62749.176 rows=1305 loops=1)\n Merge Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Sort (cost=123828.88..123829.46 rows=231 width=47) (actual\ntime=60445.367..60451.176 rows=1305 loops=1)\n Sort Key: pictures.user_id\n -> Hash Join (cost=634.36..123819.81 rows=231 width=47) (actual\ntime=128.088..60423.623 rows=1305 loops=1)\n Hash Cond: (\"outer\".pid = \"inner\".pid)\n -> Seq Scan on pictures (cost=0.00..121670.43 rows=302543\nwidth=47) (actual time=0.210..58795.925 rows=291318 loops=1)\n -> Hash (cost=633.78..633.78 rows=231 width=4) (actual\ntime=38.443..38.443 rows=1305 loops=1)\n -> Bitmap Heap Scan on picture_categories\n(cost=2.81..633.78 rows=231 width=4) (actual time=4.753..32.259 rows=1305\nloops=1)\n Recheck Cond: (cat_id = 182)\n -> Bitmap Index Scan on\nidx_picture_categories_cat_id (cost=0.00..2.81 rows=231 width=0) (actual\ntime=4.398..4.398 rows=1305 loops=1)\n Index Cond: (cat_id = 182)\n -> Sort (cost=22500.74..22816.79 rows=126418 width=23) (actual\ntime=1163.788..1505.104 rows=52214 loops=1)\n Sort Key: users.user_id\n -> Seq Scan on users (cost=0.00..11788.18 rows=126418 width=23)\n(actual time=0.017..692.992 rows=54605 loops=1)\nTotal runtime: 62776.720 ms\n\n\n> > Both pictures and picture categories have about 287,000 rows\n> >\n> > This query needs to run in under about a second or it kills my site by\nclogging apache slots (apache maxes out at 256 and I can have several\nhundred people on my site at a time). 
How can I make it run faster?\n> >\n> >\n> > Server is a dual xeon with a gig of ram dedicated mostly to postgresql.\n> > Here is the changed lines in my postgresql.conf:\nhttp://pastebin.ca/57222\n>\n> I suspect the low work_mem may be why it's using a nested loop. In\n> addition to the test above, it would be interesting to see what happens\n> to the plan if you set work_mem to 10000.\n\nI moved to a more powerful server (2gb ram and mirrored scsi HDs) and upped\nthe work mem to 10mb. Its much faster now, however its still doing a nested\nloop. (see also my reply to Markus Schaber)\n\n\nNested Loop (cost=2.81..3398.76 rows=231 width=66) (actual\ntime=14.946..5797.701 rows=1305 loops=1)\n -> Nested Loop (cost=2.81..2022.71 rows=231 width=47) (actual\ntime=14.551..5181.042 rows=1305 loops=1)\n -> Bitmap Heap Scan on picture_categories (cost=2.81..633.78\nrows=231 width=4) (actual time=9.966..140.606 rows=1305 loops=1)\n Recheck Cond: (cat_id = 182)\n -> Bitmap Index Scan on idx_picture_categories_cat_id\n(cost=0.00..2.81 rows=231 width=0) (actual time=9.720..9.720 rows=1305\nloops=1)\n Index Cond: (cat_id = 182)\n -> Index Scan using pictures_pkey on pictures (cost=0.00..6.00\nrows=1 width=47) (actual time=3.802..3.820 rows=1 loops=1305)\n Index Cond: (pictures.pid = \"outer\".pid)\n -> Index Scan using users_pkey on users (cost=0.00..5.94 rows=1\nwidth=23) (actual time=0.095..0.100 rows=1 loops=1305)\n Index Cond: (\"outer\".user_id = users.user_id)\nTotal runtime: 5812.238 ms\n\n\n> To be honest, you're pushing things expecting a machine with only 1G to\n> serve 300 active connections. How large is the database itself?\n\nThe database is 3.7G on disk. There is about 1G of actual data in it -- the\nrest is dead tuples and indices. (I vacuum regularly, but a vacuum full\ncauses too much downtime to do unless I have to)\n\n> > I know hyperthreading is considered something that can slow down a\nserver but with my very high concurancy (averages about 400-500 concurant\nusers during peak hours) I am hoping the extra virtual CPUs wil help. Anyone\nhave experance that says diferent at high concurancy?\n>\n> Best bet is to try it and see. Generally, people find HT hurts, but I\n> recently saw it double the performance of pgbench on a windows XP\n> machine, so it's possible that windows is just more clever about how to\n> use it than linux is.\n\nAnyone know if those who have found it hurts are low concurancy complex cpu\nintensive queries or high concurancy simple queries or both? I can\nunderstand it hurting in the former, but not the later. I'll have to give it\na try I guess. It should at least help my very high load averages.\n\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n", "msg_date": "Sun, 28 May 2006 03:49:06 -0700", "msg_from": "\"Cstdenis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make this query faster (resend)" }, { "msg_contents": "(re-sending because my first one forgot the CC to the list. Sorry)\n\nI moved my database to a more powerful server. Mirrored ultra 320 SCSI HDs\nand 2GB of ram. 
It performs much faster.\n\nI also changed some conf settings accordingly\nwork_mem = 10240\nshared_buffers = 25600\nmax_connections = 450 (Also moved the webserver and needed more connections\nduring the DNS propagation).\n\nI've been looking into pgpool. If I understand things correctly I can have\npersistent connections from all 256 apache processes to a pgpool and it can\nhave like 30 persistent connections to the actual server thus saving lots of\nserver memory (due to very high concurrency I would probably actually use at\nleast 100) Is this correct?\n\nHowever, memory doesn't seem to be my problem anymore, the query is still\ntaking longer than I'd like for the larger categories (6 seconds for one\nwith 1300 pictures) but its more managable. The problem now is that my\nserver's load average during peak hours has gone as high as 30 (tho the\nserver seems to still be responding fairly quickly it still worrysome)\n\n\nGiven my new server specs can anyone suggest any other config file\nimprovements? Perhaps some of the *_cost variables could be adjusted to\nbetter reflect my server's hardware?\n\n----- Original Message ----- \nFrom: \"Markus Schaber\" <[email protected]>\nTo: \"Cstdenis\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, May 22, 2006 8:30 AM\nSubject: Re: [PERFORM] How can I make this query faster (resend)\n\n\n> Hi, Cstendis,\n>\n> Cstdenis wrote:\n>\n> > Server is a dual xeon with a gig of ram dedicated mostly to postgresql.\n> > Here is the changed lines in my postgresql.conf:\nhttp://pastebin.ca/57222\n>\n> 3M is really low for a production server.\n>\n> Try using pg_pool and limiting it to about 30 or so backend connections,\n> and then give them at least 30 megs of RAM each.\n>\n> This should also cut down the connection creation overhead.\n>\n> HTH,\n> Markus\n> -- \n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in EU! www.ffii.org\nwww.nosoftwarepatents.org\n>\n\n\n", "msg_date": "Mon, 29 May 2006 07:33:18 -0700", "msg_from": "\"Cstdenis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make this query faster (resend)" }, { "msg_contents": "(Resending because my other send didn't get a CC to the list)\n\nFrom: \"Jim C. Nasby\" <[email protected]>\nTo: \"Cstdenis\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, May 22, 2006 8:20 AM\nSubject: Re: [PERFORM] How can I make this query faster (resend)\n\n\n> On Fri, May 19, 2006 at 03:56:49PM -0700, Cstdenis wrote:\n> > (Its been a hour and I dont see my message on the list so I'm sending it\nagain. I've moved the queries and analyze out of the email incase it was\nrejected because too long)\n> >\n> > query: http://pastebin.ca/57218\n> >\n> > In the pictures table all the ratings have a shared index\n> >\n> > CREATE INDEX idx_rating ON pictures USING btree (rating_nudity,\nrating_violence, rating_sex, rating_racism, rating_spoilers, rating_yaoi,\nrating_yuri, rating_profanity);\n> >\n> > and approved and date_submitted and user_id also have their own btree\nindexes.\n> >\n> > In the picture_categories table pid and cat_id have their own btree\nindices plus one together.\n> >\n> > Full table definition: http://pastebin.ca/57219\n> >\n> > the cat_id and rating values vary from query to query. The one listed\nabove took 54 seconds in a test run just now. Here is explain analyze:\nhttp://pastebin.ca/57220\n>\n> pictures is the interesting table here. 
It looks like the planner would\n> do better to choose something other than a nested loop on it. Try\n> running EXPLAIN ANALYZE on the query with enable_nestloop=off and see\n> what you get (you'll need to compare it to what you get with\n> enable_nestloop on to see what the change is).\n\nWith enable_nestloop=off the same query as is explained further down in this\nemail took much longer 63 seconds insted of 6. It decided to do sequencial\nscans on pictures and users with nested loop disabled.\n\nMerge Join (cost=146329.63..146963.96 rows=231 width=66) (actual\ntime=61610.538..62749.176 rows=1305 loops=1)\n Merge Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Sort (cost=123828.88..123829.46 rows=231 width=47) (actual\ntime=60445.367..60451.176 rows=1305 loops=1)\n Sort Key: pictures.user_id\n -> Hash Join (cost=634.36..123819.81 rows=231 width=47) (actual\ntime=128.088..60423.623 rows=1305 loops=1)\n Hash Cond: (\"outer\".pid = \"inner\".pid)\n -> Seq Scan on pictures (cost=0.00..121670.43 rows=302543\nwidth=47) (actual time=0.210..58795.925 rows=291318 loops=1)\n -> Hash (cost=633.78..633.78 rows=231 width=4) (actual\ntime=38.443..38.443 rows=1305 loops=1)\n -> Bitmap Heap Scan on picture_categories\n(cost=2.81..633.78 rows=231 width=4) (actual time=4.753..32.259 rows=1305\nloops=1)\n Recheck Cond: (cat_id = 182)\n -> Bitmap Index Scan on\nidx_picture_categories_cat_id (cost=0.00..2.81 rows=231 width=0) (actual\ntime=4.398..4.398 rows=1305 loops=1)\n Index Cond: (cat_id = 182)\n -> Sort (cost=22500.74..22816.79 rows=126418 width=23) (actual\ntime=1163.788..1505.104 rows=52214 loops=1)\n Sort Key: users.user_id\n -> Seq Scan on users (cost=0.00..11788.18 rows=126418 width=23)\n(actual time=0.017..692.992 rows=54605 loops=1)\nTotal runtime: 62776.720 ms\n\n\n> > Both pictures and picture categories have about 287,000 rows\n> >\n> > This query needs to run in under about a second or it kills my site by\nclogging apache slots (apache maxes out at 256 and I can have several\nhundred people on my site at a time). How can I make it run faster?\n> >\n> >\n> > Server is a dual xeon with a gig of ram dedicated mostly to postgresql.\n> > Here is the changed lines in my postgresql.conf:\nhttp://pastebin.ca/57222\n>\n> I suspect the low work_mem may be why it's using a nested loop. In\n> addition to the test above, it would be interesting to see what happens\n> to the plan if you set work_mem to 10000.\n\nI moved to a more powerful server (2gb ram and mirrored scsi HDs) and upped\nthe work mem to 10mb. Its much faster now, however its still doing a nested\nloop. 
(see also my reply to Markus Schaber)\n\n\nNested Loop (cost=2.81..3398.76 rows=231 width=66) (actual\ntime=14.946..5797.701 rows=1305 loops=1)\n -> Nested Loop (cost=2.81..2022.71 rows=231 width=47) (actual\ntime=14.551..5181.042 rows=1305 loops=1)\n -> Bitmap Heap Scan on picture_categories (cost=2.81..633.78\nrows=231 width=4) (actual time=9.966..140.606 rows=1305 loops=1)\n Recheck Cond: (cat_id = 182)\n -> Bitmap Index Scan on idx_picture_categories_cat_id\n(cost=0.00..2.81 rows=231 width=0) (actual time=9.720..9.720 rows=1305\nloops=1)\n Index Cond: (cat_id = 182)\n -> Index Scan using pictures_pkey on pictures (cost=0.00..6.00\nrows=1 width=47) (actual time=3.802..3.820 rows=1 loops=1305)\n Index Cond: (pictures.pid = \"outer\".pid)\n -> Index Scan using users_pkey on users (cost=0.00..5.94 rows=1\nwidth=23) (actual time=0.095..0.100 rows=1 loops=1305)\n Index Cond: (\"outer\".user_id = users.user_id)\nTotal runtime: 5812.238 ms\n\n\n> To be honest, you're pushing things expecting a machine with only 1G to\n> serve 300 active connections. How large is the database itself?\n\nThe database is 3.7G on disk. There is about 1G of actual data in it -- the\nrest is dead tuples and indices. (I vacuum regularly, but a vacuum full\ncauses too much downtime to do unless I have to)\n\n> > I know hyperthreading is considered something that can slow down a\nserver but with my very high concurancy (averages about 400-500 concurant\nusers during peak hours) I am hoping the extra virtual CPUs wil help. Anyone\nhave experance that says diferent at high concurancy?\n>\n> Best bet is to try it and see. Generally, people find HT hurts, but I\n> recently saw it double the performance of pgbench on a windows XP\n> machine, so it's possible that windows is just more clever about how to\n> use it than linux is.\n\nAnyone know if those who have found it hurts are low concurancy complex cpu\nintensive queries or high concurancy simple queries or both? I can\nunderstand it hurting in the former, but not the later. I'll have to give it\na try I guess. It should at least help my very high load averages.\n\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n", "msg_date": "Mon, 29 May 2006 07:35:14 -0700", "msg_from": "\"Cstdenis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make this query faster (resend)" }, { "msg_contents": "On Mon, May 29, 2006 at 07:35:14AM -0700, Cstdenis wrote:\n> > To be honest, you're pushing things expecting a machine with only 1G to\n> > serve 300 active connections. How large is the database itself?\n> \n> The database is 3.7G on disk. There is about 1G of actual data in it -- the\n> rest is dead tuples and indices. (I vacuum regularly, but a vacuum full\n> causes too much downtime to do unless I have to)\n\nIt sounds like you're not vacuuming anywhere near regularly enough if\nyou have that much dead space. You should at least reindex.\n\n> > > I know hyperthreading is considered something that can slow down a\n> server but with my very high concurancy (averages about 400-500 concurant\n> users during peak hours) I am hoping the extra virtual CPUs wil help. 
Anyone\n> have experance that says diferent at high concurancy?\n> >\n> > Best bet is to try it and see. Generally, people find HT hurts, but I\n> > recently saw it double the performance of pgbench on a windows XP\n> > machine, so it's possible that windows is just more clever about how to\n> > use it than linux is.\n> \n> Anyone know if those who have found it hurts are low concurancy complex cpu\n> intensive queries or high concurancy simple queries or both? I can\n> understand it hurting in the former, but not the later. I'll have to give it\n> a try I guess. It should at least help my very high load averages.\n\nThe issue is that HT doesn't give you anything close to having 2 CPUs,\nso for all but the most trivial and limited cases it's not going to be a\nwin.\n\nIncidentally, the only good results I've seen with HT are on windows.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 09:35:29 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make this query faster (resend)" } ]
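A session-local way to run the experiments discussed in the thread above (changing work_mem and enable_nestloop for a single EXPLAIN ANALYZE without touching postgresql.conf) is sketched below. The table and column names come from the plans quoted in the thread, but the exact query shape is a reconstruction and the work_mem value is purely illustrative, so treat this as a template rather than the poster's actual statement.

    BEGIN;
    SET LOCAL work_mem = 10240;        -- bare integer is kB, so ~10 MB (illustrative value)
    SET LOCAL enable_nestloop = off;   -- discourage nested-loop joins for this test only

    EXPLAIN ANALYZE
    SELECT p.pid, u.user_id
    FROM   pictures p
    JOIN   picture_categories pc ON pc.pid = p.pid
    JOIN   users u               ON u.user_id = p.user_id
    WHERE  pc.cat_id = 182;

    ROLLBACK;  -- SET LOCAL settings revert automatically at transaction end

Running the same statement a second time with enable_nestloop left on gives the two plans to compare side by side.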
[ { "msg_contents": "Where can I find any documentation to partition the tablespace disk files onto\ndifferent physical arrays for improved performance?\n\n-Kenji\n", "msg_date": "Fri, 19 May 2006 19:37:45 -0700", "msg_from": "Kenji Morishige <[email protected]>", "msg_from_op": true, "msg_subject": "utilizing multiple disks for i/o performance" }, { "msg_contents": "On Fri, 2006-05-19 at 21:37, Kenji Morishige wrote:\n> Where can I find any documentation to partition the tablespace disk files onto\n> different physical arrays for improved performance?\n\nThere have been quite a few posts to this list in the past about this,\nso searching it might be a good start.\n\nFirstly, you need to defined \"improved performance\". Are we talking\ntransactional throughput (OLTP), or batch updates (ETL), or report\ngeneration (OLAP stuff)??? Or some other scenario.\n\nFor write performance, the general rules are:\n\nYou can only commit 1 transaction per rotation of a properly fsynced\ndisc that holds the WAL file (i.e. the pg_xlog directory). So, putting\nthat on it's own fast spinning disc is step one for improved\nperformance.\n\nA battery backed cache unit (BBU) on a RAID controller is a must.\n\nRAID 1+0 is a good choice for your data partition.\n\nFor many hardware RAID controllers with the above mentioned BBU moving\nthe pg_xlog to another partition is no real help.\n\nCheap RAID controllers are often worse than no RAID controller. If you\ncan't afford a good RAID controller, you're probably better off with\nsoftware RAID than using a cheapie.\n\nFor READ performance:\n\nOften settings in postgresql.conf are far more important than the drive\nlayout. \n\nLots of RAM is a good thing.\n\nAssuming you've got lots of RAM, making shared_buffers anywhere from 10\nto 25% of it is a pretty good size.\n\nwork_mem usually works well at around 16 meg or so.\n\ndrop random_page_cost to about 2 for most systems.\n\nLastly, read this:\n\nhttp://www.varlena.com/GeneralBits/Tidbits/perf.html\n", "msg_date": "Mon, 22 May 2006 10:14:51 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilizing multiple disks for i/o performance" }, { "msg_contents": "On Fri, May 19, 2006 at 07:37:45PM -0700, Kenji Morishige wrote:\n> Where can I find any documentation to partition the tablespace disk files onto\n> different physical arrays for improved performance?\n\nOther than CREATE TABLESPACE??\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 22 May 2006 10:21:18 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilizing multiple disks for i/o performance" } ]
[ { "msg_contents": "Fellow PostgreSQLers,\n\nWith a bit of guidance from Klint Gore, Neil Conway, Josh Berkus, and \nAlexey Dvoychenkov, I have written a PL/pgSQL function to help me \ncompare the performance between different functions that execute the \nsame task. I've blogged the about the function here:\n\n http://www.justatheory.com/computers/databases/postgresql/ \nbenchmarking_functions.html\n\nMy question for the list is: How important is it that I have the \ncontrol in there? In the version I've blogged, the control just \nexecutes 'SELECT TRUE FROM generate_series( 1, n)' and iterates \nloops over the results. But I wasn't sure how accurate that was. \nAnother approach I've tried it to simply loop without executing a \nquery, 'FOR i IN 1..n LOOP', but that takes virtually no time at all.\n\nThe idea of the control is, of course, to subtract the overhead of \nthe benchmarking function from the code actually being tested. So I \nguess my question is, how important is it to have the control there, \nand, if it is important, how should it actually work?\n\nMany TIA,\n\nDavid\n", "msg_date": "Fri, 19 May 2006 21:51:30 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmarking Function" }, { "msg_contents": "DW,\n\n> The idea of the control is, of course, to subtract the overhead of  \n> the benchmarking function from the code actually being tested. So I  \n> guess my question is, how important is it to have the control there,  \n> and, if it is important, how should it actually work?\n\nWell, per our conversation the approach doesn't really work. EXECUTE \n'string' + generate_series seems to carry a substantial and somewhat random \noverhead, between 100ms and 200ms -- enough to wipe out any differences \nbetween queries.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 21 May 2006 12:23:49 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarking Function" }, { "msg_contents": "On May 21, 2006, at 12:23, Josh Berkus wrote:\n\n> Well, per our conversation the approach doesn't really work. EXECUTE\n> 'string' + generate_series seems to carry a substantial and \n> somewhat random\n> overhead, between 100ms and 200ms -- enough to wipe out any \n> differences\n> between queries.\n\nPer our conversation I eliminated the EXECUTE 'string' + \ngenerate_series. Check it out.\n\n http://theory.kineticode.com/computers/databases/postgresql/ \nbenchmarking_functions.html\n\n(Temporary URL; justatheory.com seems to have disappeared from DNS...\n\nBest,\n\nDavid\n", "msg_date": "Sun, 21 May 2006 15:45:14 -0700", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmarking Function" } ]
[ { "msg_contents": "Hi,\n\nI have a query that performs WAY better when I have enable_seqscan = \noff:\n\nexplain analyze select ac.attribute_id, la.name, ac.sort_order from \nattribute_category ac, localized_attribute la where ac.category_id = \n1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and \nla.attribute_id = ac.attribute_id and exists ( select 'x' from \nproduct_attribute_value pav, category_product cp where \n(pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' \n|| ac.attribute_id) and pav.status_code is null and (cp.category_id \n|| '.' || cp.is_visible) = '1001402.true') order by (ac.sort_order is \nnull), ac.sort_order, la.name asc;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------\nSort (cost=47.97..47.98 rows=7 width=34) (actual \ntime=33368.721..33368.721 rows=2 loops=1)\n Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name\n -> Nested Loop (cost=2.00..47.87 rows=7 width=34) (actual \ntime=13563.049..33368.679 rows=2 loops=1)\n -> Index Scan using attribute_category__category_id_fk_idx \non attribute_category ac (cost=0.00..26.73 rows=7 width=8) (actual \ntime=13562.918..33368.370 rows=2 loops=1)\n Index Cond: (category_id = 1001402)\n Filter: (((is_browsable)::text = 'true'::text) AND \n(subplan))\n SubPlan\n -> Nested Loop (cost=0.02..278217503.21 \nrows=354763400 width=0) (actual time=4766.821..4766.821 rows=0 loops=7)\n -> Seq Scan on category_product cp \n(cost=0.00..158150.26 rows=18807 width=4) (actual \ntime=113.595..4585.461 rows=12363 loops=7)\n Filter: ((((category_id)::text || \n'.'::text) || (is_visible)::text) = '1001402.true'::text)\n -> Index Scan using \nproduct_attribute_value__prod_id_att_id_status_is_null_ids on \nproduct_attribute_value pav (cost=0.02..14171.84 rows=18863 width=8) \n(actual time=0.012..0.012 rows=0 loops=86538)\n Index Cond: ((((pav.product_id)::text \n|| '.'::text) || (pav.attribute_id)::text) = \n(((\"outer\".product_id)::text || '.'::text) || ($0)::text))\n -> Bitmap Heap Scan on localized_attribute la \n(cost=2.00..3.01 rows=1 width=30) (actual time=0.129..0.129 rows=1 \nloops=2)\n Recheck Cond: (la.attribute_id = \"outer\".attribute_id)\n Filter: (locale_id = 1000001)\n -> Bitmap Index Scan on \nlocalized_attribute__attribute_id_fk_idx (cost=0.00..2.00 rows=1 \nwidth=0) (actual time=0.091..0.091 rows=1 loops=2)\n Index Cond: (la.attribute_id = \n\"outer\".attribute_id)\nTotal runtime: 33369.105 ms\n\nNow when I disable sequential scans:\n\nset enable_seqscan = off;\n\nexplain analyze select ac.attribute_id, la.name, ac.sort_order from \nattribute_category ac, localized_attribute la where ac.category_id = \n1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and \nla.attribute_id = ac.attribute_id and exists ( select 'x' from \nproduct_attribute_value pav, category_product cp where \n(pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' \n|| ac.attribute_id) and pav.status_code is null and (cp.category_id \n|| '.' 
|| cp.is_visible) = '1001402.true') order by (ac.sort_order is \nnull), ac.sort_order, la.name asc;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------\nSort (cost=48.09..48.11 rows=7 width=34) (actual \ntime=1675.944..1675.945 rows=2 loops=1)\n Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name\n -> Nested Loop (cost=2.00..48.00 rows=7 width=34) (actual \ntime=687.600..1675.831 rows=2 loops=1)\n -> Index Scan using attribute_category__category_id_fk_idx \non attribute_category ac (cost=0.00..26.86 rows=7 width=8) (actual \ntime=687.441..1675.584 rows=2 loops=1)\n Index Cond: (category_id = 1001402)\n Filter: (((is_browsable)::text = 'true'::text) AND \n(subplan))\n SubPlan\n -> Nested Loop (cost=0.03..278076992.97 \nrows=354763400 width=0) (actual time=239.299..239.299 rows=0 loops=7)\n -> Index Scan using \ncategory_product__cat_id_is_visible_idx on category_product cp \n(cost=0.01..17640.02 rows=18807 width=4) (actual time=0.036..30.205 \nrows=12363 loops=7)\n Index Cond: ((((category_id)::text || \n'.'::text) || (is_visible)::text) = '1001402.true'::text)\n -> Index Scan using \nproduct_attribute_value__prod_id_att_id_status_is_null_ids on \nproduct_attribute_value pav (cost=0.02..14171.84 rows=18863 width=8) \n(actual time=0.013..0.013 rows=0 loops=86538)\n Index Cond: ((((pav.product_id)::text \n|| '.'::text) || (pav.attribute_id)::text) = \n(((\"outer\".product_id)::text || '.'::text) || ($0)::text))\n -> Bitmap Heap Scan on localized_attribute la \n(cost=2.00..3.01 rows=1 width=30) (actual time=0.093..0.094 rows=1 \nloops=2)\n Recheck Cond: (la.attribute_id = \"outer\".attribute_id)\n Filter: (locale_id = 1000001)\n -> Bitmap Index Scan on \nlocalized_attribute__attribute_id_fk_idx (cost=0.00..2.00 rows=1 \nwidth=0) (actual time=0.060..0.060 rows=1 loops=2)\n Index Cond: (la.attribute_id = \n\"outer\".attribute_id)\nTotal runtime: 1676.727 ms\n\n\nthe tables involved with the query have all been vacuum analyzed. I \nalso have default_statistics_target = 100.\n\nThere's something definitely wrong with that Nested Loop with the \nhigh row count. That row count appears to be close to the product of \nthe number of rows in category_product and product_attribute_value.\n\nAny ideas and help would be greatly appreciated.\n\n\nThanks,\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\n\nHi,I have a query that performs WAY better when I have enable_seqscan = off:explain analyze select ac.attribute_id, la.name, ac.sort_order from attribute_category ac, localized_attribute la where ac.category_id = 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and la.attribute_id = ac.attribute_id and exists ( select 'x' from product_attribute_value pav, category_product cp where (pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' || ac.attribute_id) and pav.status_code is null and (cp.category_id || '.' 
|| cp.is_visible) = '1001402.true') order by (ac.sort_order is null), ac.sort_order, la.name asc;                                                                                                          QUERY PLAN                                                                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Sort  (cost=47.97..47.98 rows=7 width=34) (actual time=33368.721..33368.721 rows=2 loops=1)   Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name   ->  Nested Loop  (cost=2.00..47.87 rows=7 width=34) (actual time=13563.049..33368.679 rows=2 loops=1)         ->  Index Scan using attribute_category__category_id_fk_idx on attribute_category ac  (cost=0.00..26.73 rows=7 width=8) (actual time=13562.918..33368.370 rows=2 loops=1)               Index Cond: (category_id = 1001402)               Filter: (((is_browsable)::text = 'true'::text) AND (subplan))               SubPlan                 ->  Nested Loop  (cost=0.02..278217503.21 rows=354763400 width=0) (actual time=4766.821..4766.821 rows=0 loops=7)                       ->  Seq Scan on category_product cp  (cost=0.00..158150.26 rows=18807 width=4) (actual time=113.595..4585.461 rows=12363 loops=7)                             Filter: ((((category_id)::text || '.'::text) || (is_visible)::text) = '1001402.true'::text)                       ->  Index Scan using product_attribute_value__prod_id_att_id_status_is_null_ids on product_attribute_value pav  (cost=0.02..14171.84 rows=18863 width=8) (actual time=0.012..0.012 rows=0 loops=86538)                             Index Cond: ((((pav.product_id)::text || '.'::text) || (pav.attribute_id)::text) = (((\"outer\".product_id)::text || '.'::text) || ($0)::text))         ->  Bitmap Heap Scan on localized_attribute la  (cost=2.00..3.01 rows=1 width=30) (actual time=0.129..0.129 rows=1 loops=2)               Recheck Cond: (la.attribute_id = \"outer\".attribute_id)               Filter: (locale_id = 1000001)               ->  Bitmap Index Scan on localized_attribute__attribute_id_fk_idx  (cost=0.00..2.00 rows=1 width=0) (actual time=0.091..0.091 rows=1 loops=2)                     Index Cond: (la.attribute_id = \"outer\".attribute_id) Total runtime: 33369.105 msNow when I disable sequential scans:set enable_seqscan = off;explain analyze select ac.attribute_id, la.name, ac.sort_order from attribute_category ac, localized_attribute la where ac.category_id = 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and la.attribute_id = ac.attribute_id and exists ( select 'x' from product_attribute_value pav, category_product cp where (pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' || ac.attribute_id) and pav.status_code is null and (cp.category_id || '.' 
|| cp.is_visible) = '1001402.true') order by (ac.sort_order is null), ac.sort_order, la.name asc;                                                                                                          QUERY PLAN                                                                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Sort  (cost=48.09..48.11 rows=7 width=34) (actual time=1675.944..1675.945 rows=2 loops=1)   Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name   ->  Nested Loop  (cost=2.00..48.00 rows=7 width=34) (actual time=687.600..1675.831 rows=2 loops=1)         ->  Index Scan using attribute_category__category_id_fk_idx on attribute_category ac  (cost=0.00..26.86 rows=7 width=8) (actual time=687.441..1675.584 rows=2 loops=1)               Index Cond: (category_id = 1001402)               Filter: (((is_browsable)::text = 'true'::text) AND (subplan))               SubPlan                 ->  Nested Loop  (cost=0.03..278076992.97 rows=354763400 width=0) (actual time=239.299..239.299 rows=0 loops=7)                       ->  Index Scan using category_product__cat_id_is_visible_idx on category_product cp  (cost=0.01..17640.02 rows=18807 width=4) (actual time=0.036..30.205 rows=12363 loops=7)                             Index Cond: ((((category_id)::text || '.'::text) || (is_visible)::text) = '1001402.true'::text)                       ->  Index Scan using product_attribute_value__prod_id_att_id_status_is_null_ids on product_attribute_value pav  (cost=0.02..14171.84 rows=18863 width=8) (actual time=0.013..0.013 rows=0 loops=86538)                             Index Cond: ((((pav.product_id)::text || '.'::text) || (pav.attribute_id)::text) = (((\"outer\".product_id)::text || '.'::text) || ($0)::text))         ->  Bitmap Heap Scan on localized_attribute la  (cost=2.00..3.01 rows=1 width=30) (actual time=0.093..0.094 rows=1 loops=2)               Recheck Cond: (la.attribute_id = \"outer\".attribute_id)               Filter: (locale_id = 1000001)               ->  Bitmap Index Scan on localized_attribute__attribute_id_fk_idx  (cost=0.00..2.00 rows=1 width=0) (actual time=0.060..0.060 rows=1 loops=2)                     Index Cond: (la.attribute_id = \"outer\".attribute_id) Total runtime: 1676.727 msthe tables involved with the query have all been vacuum analyzed.  I also have default_statistics_target = 100.There's something definitely wrong with that Nested Loop with the high row count. That row count appears to be close to the product of the number of rows in category_product and product_attribute_value.Any ideas and help would be greatly appreciated.Thanks, ____________________________________________________________________Brendan Duddridge | CTO | 403-277-5591 x24 |  [email protected] ClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. 
SE Calgary, AB  T2G 0V9 http://www.clickspace.com", "msg_date": "Sun, 21 May 2006 02:21:55 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Performs WAY better with enable_seqscan = off" }, { "msg_contents": "On sun, 2006-05-21 at 02:21 -0600, Brendan Duddridge wrote:\n> Hi,\n> \n> \n> I have a query that performs WAY better when I have enable_seqscan =\n> off:\n> \n> \n> explain analyze select ac.attribute_id, la.name, ac.sort_order from\n> attribute_category ac, localized_attribute la where ac.category_id =\n> 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and\n> la.attribute_id = ac.attribute_id and exists ( select 'x' from\n> product_attribute_value pav, category_product cp where (pav.product_id\n> || '.' || pav.attribute_id) = (cp.product_id || '.' ||\n> ac.attribute_id) and pav.status_code is null and (cp.category_id ||\n> '.' || cp.is_visible) = '1001402.true') order by (ac.sort_order is\n> null), ac.sort_order, la.name asc;\n\nis there some reason for the complicated form of the\njoin conditions in the subselect?\n\nwould this not be clearer:\n\nexplain analyze \n select ac.attribute_id,\n la.name, \n ac.sort_order\n from attribute_category ac,\n localized_attribute la\n where ac.category_id = 1001402 \n and la.locale_id = 1000001 \n and ac.is_browsable = 'true' \n and la.attribute_id = ac.attribute_id \n and exists \n (select 'x' from product_attribute_value pav,\n category_product cp \n where pav.product_id = cp.product_id\n and pav.attribute_id = ac.attribute_id\n and pav.status_code is null\n and cp.category_id= '1001402'\n and cp.is_visible = 'true'\n ) \n order by (ac.sort_order is null), \n ac.sort_order, \n la.name asc;\n\n\npossibly the planner would have a better time\nfiguring out if any indexes are usable or estimating\nthe subselect rowcount\n\ngnari\n\n\n", "msg_date": "Sun, 21 May 2006 10:50:11 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performs WAY better with enable_seqscan = off" }, { "msg_contents": "> is there some reason for the complicated form of the\n> join conditions in the subselect?\n\n\nYes, the simpler form query definitely works, but it's not always as \nfast as the index version with the complicated join syntax. Although \neven that query varies significantly with different category_id \nvalues. Not sure why. 
Sometimes it finishes in 150 ms, other times it \ntakes over a second.\n\nHere's the explain plan from your query:\n\nexplain analyze select ac.attribute_id, la.name, ac.sort_order from \nattribute_category ac, localized_attribute la where ac.category_id = \n1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and \nla.attribute_id = ac.attribute_id and exists (select 'x' from \nproduct_attribute_value pav, category_product cp where pav.product_id \n= cp.product_id and pav.attribute_id = ac.attribute_id and \npav.status_code is null and cp.category_id= '1001402' and \ncp.is_visible = 'true') order by ac.sort_order, la.name asc;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------------------------------------------------------\nSort (cost=6343.18..6343.20 rows=7 width=34) (actual \ntime=2244.241..2244.242 rows=2 loops=1)\n Sort Key: ac.sort_order, la.name\n -> Nested Loop (cost=2.00..6343.08 rows=7 width=34) (actual \ntime=1831.970..2244.209 rows=2 loops=1)\n -> Index Scan using attribute_category__category_id_fk_idx \non attribute_category ac (cost=0.00..6321.95 rows=7 width=8) (actual \ntime=1831.938..2244.142 rows=2 loops=1)\n Index Cond: (category_id = 1001402)\n Filter: (((is_browsable)::text = 'true'::text) AND \n(subplan))\n SubPlan\n -> Nested Loop (cost=2.00..10458.04 rows=30 \nwidth=0) (actual time=320.572..320.572 rows=0 loops=7)\n -> Index Scan using \nproduct_attribute_value__attribute_id_fk_idx on \nproduct_attribute_value pav (cost=0.00..2661.39 rows=2572 width=4) \n(actual time=0.020..33.589 rows=18468 loops=7)\n Index Cond: (attribute_id = $0)\n Filter: (status_code IS NULL)\n -> Bitmap Heap Scan on category_product cp \n(cost=2.00..3.02 rows=1 width=4) (actual time=0.011..0.011 rows=0 \nloops=129274)\n Recheck Cond: (\"outer\".product_id = \ncp.product_id)\n Filter: ((category_id = 1001402) AND \n((is_visible)::text = 'true'::text))\n -> Bitmap Index Scan on \nx_category_product__product_id_fk_idx (cost=0.00..2.00 rows=1 \nwidth=0) (actual time=0.008..0.008 rows=1 loops=129274)\n Index Cond: (\"outer\".product_id = \ncp.product_id)\n -> Bitmap Heap Scan on localized_attribute la \n(cost=2.00..3.01 rows=1 width=30) (actual time=0.019..0.019 rows=1 \nloops=2)\n Recheck Cond: (la.attribute_id = \"outer\".attribute_id)\n Filter: (locale_id = 1000001)\n -> Bitmap Index Scan on \nlocalized_attribute__attribute_id_fk_idx (cost=0.00..2.00 rows=1 \nwidth=0) (actual time=0.015..0.015 rows=1 loops=2)\n Index Cond: (la.attribute_id = \n\"outer\".attribute_id)\nTotal runtime: 2244.542 ms\n\n\nHere's the schema for the two tables involved with the sub-select:\n\n \\d category_product;\n Table \"public.category_product\"\n Column | Type | Modifiers\n---------------------+------------------------+-----------\ncategory_id | integer | not null\nproduct_id | integer | not null\nen_name_sort_order | integer |\nfr_name_sort_order | integer |\nmerchant_sort_order | integer |\nprice_sort_order | integer |\nmerchant_count | integer |\nis_active | character varying(5) |\nproduct_is_active | character varying(5) |\nproduct_status_code | character varying(32) |\nproduct_name_en | character varying(512) |\nproduct_name_fr | character varying(512) |\nproduct_click_count | integer |\nis_visible | character varying(5) |\nis_pending_visible | character varying(5) |\nmin_price_cad | numeric(12,4) |\nmax_price_cad | numeric(12,4) |\nIndexes:\n 
\"x_category_product_pk\" PRIMARY KEY, btree (category_id, \nproduct_id)\n \"category_product__cat_id_is_visible_idx\" btree \n(((category_id::text || '.'::text) || is_visible::text))\n \"category_product__cat_id_prod_is_act_status_idx\" btree \n(category_id, product_is_active, product_status_code)\n \"category_product__category_id_is_active_and_status_idx\" btree \n(category_id, product_is_active, product_status_code)\n \"category_product__is_active_idx\" btree (is_active)\n \"category_product__lower_product_name_en_idx\" btree (lower \n(product_name_en::text))\n \"category_product__lower_product_name_fr_idx\" btree (lower \n(product_name_fr::text))\n \"category_product__merchant_sort_order_idx\" btree \n(merchant_sort_order)\n \"category_product__min_price_cad_idx\" btree (min_price_cad)\n \"category_product__product_id_category_id_status_idx\" btree \n(product_id, category_id, product_is_active, product_status_code)\n \"x_category_product__category_id_fk_idx\" btree (category_id) \nCLUSTER\n \"x_category_product__product_id_fk_idx\" btree (product_id)\nForeign-key constraints:\n \"x_category_product_category_fk\" FOREIGN KEY (category_id) \nREFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n \"x_category_product_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n\n\n\nand\n\n\n\\d product_attribute_value\n Table \"public.product_attribute_value\"\n Column | Type | Modifiers\n----------------------------+-----------------------+-----------\nattribute_id | integer | not null\nattribute_unit_id | integer |\nattribute_value_id | integer |\nboolean_value | character varying(5) |\ndecimal_value | numeric(30,10) |\nproduct_attribute_value_id | integer | not null\nproduct_id | integer | not null\nproduct_reference_id | integer |\nstatus_code | character varying(32) |\nIndexes:\n \"product_attribute_value_pk\" PRIMARY KEY, btree \n(product_attribute_value_id)\n \"product_attribute_value__attribute_id_fk_idx\" btree (attribute_id)\n \"product_attribute_value__attribute_unit_id_fk_idx\" btree \n(attribute_unit_id)\n \"product_attribute_value__attribute_value_id_fk_idx\" btree \n(attribute_value_id)\n \"product_attribute_value__normalized_value_idx\" btree \n(normalized_value(decimal_value, attribute_unit_id))\n \"product_attribute_value__prod_id_att_id_status_is_null_ids\" \nbtree (((product_id::text || '.'::text) || attribute_id::text)) WHERE \nstatus_code IS NULL\n \"product_attribute_value__prod_id_att_val_id_status_is_null_idx\" \nbtree (((product_id::text || '.'::text) || attribute_value_id::text)) \nWHERE status_code IS NULL\n \"product_attribute_value__product_id_fk_idx\" btree (product_id) \nCLUSTER\n \"product_attribute_value__product_reference_id_fk_idx\" btree \n(product_reference_id)\nForeign-key constraints:\n \"product_attribute_value_attribute_fk\" FOREIGN KEY \n(attribute_id) REFERENCES attribute(attribute_id) DEFERRABLE \nINITIALLY DEFERRED\n \"product_attribute_value_attributeunit_fk\" FOREIGN KEY \n(attribute_unit_id) REFERENCES attribute_unit(attribute_unit_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_attributevalue_fk\" FOREIGN KEY \n(attribute_value_id) REFERENCES attribute_value(attribute_value_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_productreference_fk\" FOREIGN KEY \n(product_reference_id) REFERENCES product(product_id) DEFERRABLE 
\nINITIALLY DEFERRED\n\n\n\nWhen the query planner uses the indexes with the concatenated values \nand the where clause, the query can be sub-second response times (but \nnot always depending on the category_id value). By just doing a \nregular join as you suggested, it's always slower. The trick is \ngetting Postgres to use the proper index all the time. And so far the \nonly way I can do that is by turning off sequential scans, but that's \nsomething I didn't want to do because I don't know how it would \naffect the performance of the rest of my application.\n\nJust a note, I have random_page_cost set to 1 to try and get it to \nfavour index scans. The database machine has 8GB of RAM and I have \neffective_cache_size set to 2/3 of that.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 21, 2006, at 4:50 AM, Ragnar wrote:\n\n> On sun, 2006-05-21 at 02:21 -0600, Brendan Duddridge wrote:\n>> Hi,\n>>\n>>\n>> I have a query that performs WAY better when I have enable_seqscan =\n>> off:\n>>\n>>\n>> explain analyze select ac.attribute_id, la.name, ac.sort_order from\n>> attribute_category ac, localized_attribute la where ac.category_id =\n>> 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and\n>> la.attribute_id = ac.attribute_id and exists ( select 'x' from\n>> product_attribute_value pav, category_product cp where \n>> (pav.product_id\n>> || '.' || pav.attribute_id) = (cp.product_id || '.' ||\n>> ac.attribute_id) and pav.status_code is null and (cp.category_id ||\n>> '.' || cp.is_visible) = '1001402.true') order by (ac.sort_order is\n>> null), ac.sort_order, la.name asc;\n>\n> is there some reason for the complicated form of the\n> join conditions in the subselect?\n>\n> would this not be clearer:\n>\n> explain analyze\n> select ac.attribute_id,\n> la.name,\n> ac.sort_order\n> from attribute_category ac,\n> localized_attribute la\n> where ac.category_id = 1001402\n> and la.locale_id = 1000001\n> and ac.is_browsable = 'true'\n> and la.attribute_id = ac.attribute_id\n> and exists\n> (select 'x' from product_attribute_value pav,\n> category_product cp\n> where pav.product_id = cp.product_id\n> and pav.attribute_id = ac.attribute_id\n> and pav.status_code is null\n> and cp.category_id= '1001402'\n> and cp.is_visible = 'true'\n> )\n> order by (ac.sort_order is null),\n> ac.sort_order,\n> la.name asc;\n>\n>\n> possibly the planner would have a better time\n> figuring out if any indexes are usable or estimating\n> the subselect rowcount\n>\n> gnari\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Sun, 21 May 2006 14:01:14 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performs WAY better with enable_seqscan = off" }, { "msg_contents": "On Sun, May 21, 2006 at 02:01:14PM -0600, Brendan Duddridge wrote:\n> When the query planner uses the indexes with the concatenated values \n> and the where clause, the query can be sub-second response times (but \n> not always depending on the category_id value). By just doing a \n> regular join as you suggested, it's always slower. The trick is \n> getting Postgres to use the proper index all the time. 
And so far the \n> only way I can do that is by turning off sequential scans, but that's \n> something I didn't want to do because I don't know how it would \n> affect the performance of the rest of my application.\n \nYou can always disable them for just that query...\nBEGIN;\nSET LOCAL enable_seqscan=off;\nSELECT ...\nCOMMIT;\n\n> Just a note, I have random_page_cost set to 1 to try and get it to \n> favour index scans. The database machine has 8GB of RAM and I have \n> effective_cache_size set to 2/3 of that.\n\nThat's rather low for that much memory; I'd set it to 7GB.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 22 May 2006 10:25:51 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performs WAY better with enable_seqscan = off" }, { "msg_contents": "The problem is that the planner is guessing horribly at what the nodes\nwill return, and I'm betting the reason for that is your join criteria.\nWhy are you joining on fields that are concatenated together, instead of\njust joining on the fields themselves? That's a sure-fire way to confuse\nthe planner, and greatly limit your options.\n\nOn Sun, May 21, 2006 at 02:21:55AM -0600, Brendan Duddridge wrote:\n> Hi,\n> \n> I have a query that performs WAY better when I have enable_seqscan = \n> off:\n> \n> explain analyze select ac.attribute_id, la.name, ac.sort_order from \n> attribute_category ac, localized_attribute la where ac.category_id = \n> 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and \n> la.attribute_id = ac.attribute_id and exists ( select 'x' from \n> product_attribute_value pav, category_product cp where \n> (pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' \n> || ac.attribute_id) and pav.status_code is null and (cp.category_id \n> || '.' 
|| cp.is_visible) = '1001402.true') order by (ac.sort_order is \n> null), ac.sort_order, la.name asc;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------\n> Sort (cost=47.97..47.98 rows=7 width=34) (actual \n> time=33368.721..33368.721 rows=2 loops=1)\n> Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name\n> -> Nested Loop (cost=2.00..47.87 rows=7 width=34) (actual \n> time=13563.049..33368.679 rows=2 loops=1)\n> -> Index Scan using attribute_category__category_id_fk_idx \n> on attribute_category ac (cost=0.00..26.73 rows=7 width=8) (actual \n> time=13562.918..33368.370 rows=2 loops=1)\n> Index Cond: (category_id = 1001402)\n> Filter: (((is_browsable)::text = 'true'::text) AND \n> (subplan))\n> SubPlan\n> -> Nested Loop (cost=0.02..278217503.21 \n> rows=354763400 width=0) (actual time=4766.821..4766.821 rows=0 loops=7)\n> -> Seq Scan on category_product cp \n> (cost=0.00..158150.26 rows=18807 width=4) (actual \n> time=113.595..4585.461 rows=12363 loops=7)\n> Filter: ((((category_id)::text || \n> '.'::text) || (is_visible)::text) = '1001402.true'::text)\n> -> Index Scan using \n> product_attribute_value__prod_id_att_id_status_is_null_ids on \n> product_attribute_value pav (cost=0.02..14171.84 rows=18863 width=8) \n> (actual time=0.012..0.012 rows=0 loops=86538)\n> Index Cond: ((((pav.product_id)::text \n> || '.'::text) || (pav.attribute_id)::text) = \n> (((\"outer\".product_id)::text || '.'::text) || ($0)::text))\n> -> Bitmap Heap Scan on localized_attribute la \n> (cost=2.00..3.01 rows=1 width=30) (actual time=0.129..0.129 rows=1 \n> loops=2)\n> Recheck Cond: (la.attribute_id = \"outer\".attribute_id)\n> Filter: (locale_id = 1000001)\n> -> Bitmap Index Scan on \n> localized_attribute__attribute_id_fk_idx (cost=0.00..2.00 rows=1 \n> width=0) (actual time=0.091..0.091 rows=1 loops=2)\n> Index Cond: (la.attribute_id = \n> \"outer\".attribute_id)\n> Total runtime: 33369.105 ms\n> \n> Now when I disable sequential scans:\n> \n> set enable_seqscan = off;\n> \n> explain analyze select ac.attribute_id, la.name, ac.sort_order from \n> attribute_category ac, localized_attribute la where ac.category_id = \n> 1001402 and la.locale_id = 1000001 and ac.is_browsable = 'true' and \n> la.attribute_id = ac.attribute_id and exists ( select 'x' from \n> product_attribute_value pav, category_product cp where \n> (pav.product_id || '.' || pav.attribute_id) = (cp.product_id || '.' \n> || ac.attribute_id) and pav.status_code is null and (cp.category_id \n> || '.' 
|| cp.is_visible) = '1001402.true') order by (ac.sort_order is \n> null), ac.sort_order, la.name asc;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------\n> Sort (cost=48.09..48.11 rows=7 width=34) (actual \n> time=1675.944..1675.945 rows=2 loops=1)\n> Sort Key: (ac.sort_order IS NULL), ac.sort_order, la.name\n> -> Nested Loop (cost=2.00..48.00 rows=7 width=34) (actual \n> time=687.600..1675.831 rows=2 loops=1)\n> -> Index Scan using attribute_category__category_id_fk_idx \n> on attribute_category ac (cost=0.00..26.86 rows=7 width=8) (actual \n> time=687.441..1675.584 rows=2 loops=1)\n> Index Cond: (category_id = 1001402)\n> Filter: (((is_browsable)::text = 'true'::text) AND \n> (subplan))\n> SubPlan\n> -> Nested Loop (cost=0.03..278076992.97 \n> rows=354763400 width=0) (actual time=239.299..239.299 rows=0 loops=7)\n> -> Index Scan using \n> category_product__cat_id_is_visible_idx on category_product cp \n> (cost=0.01..17640.02 rows=18807 width=4) (actual time=0.036..30.205 \n> rows=12363 loops=7)\n> Index Cond: ((((category_id)::text || \n> '.'::text) || (is_visible)::text) = '1001402.true'::text)\n> -> Index Scan using \n> product_attribute_value__prod_id_att_id_status_is_null_ids on \n> product_attribute_value pav (cost=0.02..14171.84 rows=18863 width=8) \n> (actual time=0.013..0.013 rows=0 loops=86538)\n> Index Cond: ((((pav.product_id)::text \n> || '.'::text) || (pav.attribute_id)::text) = \n> (((\"outer\".product_id)::text || '.'::text) || ($0)::text))\n> -> Bitmap Heap Scan on localized_attribute la \n> (cost=2.00..3.01 rows=1 width=30) (actual time=0.093..0.094 rows=1 \n> loops=2)\n> Recheck Cond: (la.attribute_id = \"outer\".attribute_id)\n> Filter: (locale_id = 1000001)\n> -> Bitmap Index Scan on \n> localized_attribute__attribute_id_fk_idx (cost=0.00..2.00 rows=1 \n> width=0) (actual time=0.060..0.060 rows=1 loops=2)\n> Index Cond: (la.attribute_id = \n> \"outer\".attribute_id)\n> Total runtime: 1676.727 ms\n> \n> \n> the tables involved with the query have all been vacuum analyzed. I \n> also have default_statistics_target = 100.\n> \n> There's something definitely wrong with that Nested Loop with the \n> high row count. That row count appears to be close to the product of \n> the number of rows in category_product and product_attribute_value.\n> \n> Any ideas and help would be greatly appreciated.\n> \n> \n> Thanks,\n> \n> \n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> \n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n> \n> http://www.clickspace.com\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 22 May 2006 10:30:08 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performs WAY better with enable_seqscan = off" } ]
[ { "msg_contents": "Hi all,\n\nWe've recently started having a problem where a query that normally executes\nin ~15ms starts to take upwards of 20s to complete. When the connection\nthat ran query is returned to the connection pool, it appears as though a\ntransaction is still in progress so the connection pool tries to cancel the\ntransaction and close the connection. This fails and the connection is\nremoved from the connection pool. At this point, the situation rapidly\ndegrades and we run out of connections to the postgres server.\n\nAn inspection of the pg_stat_activity table shows that practically every\nconnection is running the above-mentioned query and some of those queries\nhave been active for many minutes! We've looked at the pg_locks table as\nwell and the only exclusive locks are on transactions that are open. All\nother locks are AccessShareLocks. Also, as far as we can tell (from looking\nat the Hibernate stats), every db session that is opened is closed.\n\nWhen this happens, if I kill one of the running postgres processes (just by\npicking the last process returned from \"ps -ef | grep postgres\"), the other\nqueries will immediately finish and the system will respond. However,\nwithin 15 minutes, we'll be back in the same state as before. At that\npoint, I've cycled Apache, Tomcat and Postgres and the system then seems to\ncome back.\n\nThis problem appears to be unrelated to load and in fact, the majority of\nthe time there is very little load on the site when this occurs. We've run\nload tests on our dev boxes but we've been unable to reproduce the problem.\nWe're currently working on playing back the clicks on the site previous to\nthe weird state the site gets in and at the same time, we were wondering if\nanyone has experienced a problem like this or has any suggestions.\n\nThe query in question is:\n\nselect distinct s.screening_id, f.film_id, f.title, s.period_start,\nf.runtime, c.value, v.short_name, s.parent_id,\n stats.avg_rating, coalesce(stats.num_adds, 0) as num_adds, coalesce(\nstats.unique_visits, 0) as unique_visits,\n f.*, s.*\n from lte_screening s\n inner join lte_film f on s.film_id = f.film_id\n inner join lte_venue v on s.venue_id = v.venue_id\n inner join lte_film_classifier c on c.film_id = f.film_id\n left join lte_film_stats stats on stats.context = :context and\nstats.film_id = s.film_id\n where c.name = ? and s.period_start is not null and s.festival_id = ?\n and s.period_start between ? + ? and ? 
+ ?\n order by s.period_start, f.title;\n\nAnd the result of explain analyze:\n\nQUERY PLAN\nUnique (cost=1117.42..1118.71 rows=11 width=866) (actual time=\n18.306..18.386 rows=15 loops=1)\n -> Sort (cost=1117.42..1117.44 rows=11 width=866) (actual time=\n18.300..18.316 rows=15 loops=1)\n Sort Key: s.period_start, f.title, s.screening_id, f.film_id,\nf.runtime, c.value, v.short_name, s.parent_id, stats.avg_rating, COALESCE(\nstats.num_adds, 0), COALESCE(stats.unique_visits, 0::bigint), f.film_id,\nf.sku, f.title, f.\"template\", f.release_date, f.runtime, f.\"language\",\nf.country, f.mpaa_rating, f.synopsis, f.\"owner\", f.ext_sales_rank,\nf.small_image_url, f.medium_image_url, f.large_image_url, f.detail_page,\nf.to_delete, f.coalesce_to, (subplan), (subplan), s.screening_id,\ns.period_start, s.period_end, s.ticket_price, s.tickets_avail,\ns.tickets_sold, s.\"type\", s.venue_id, s.festival_id, s.film_id, s.parent_id,\ns.ext_id, s.purchase_url, s.status, s.status_update_time\n -> Nested Loop Left Join (cost=2.62..1117.23 rows=11 width=866)\n(actual time=2.656..17.773 rows=15 loops=1)\n -> Nested Loop (cost=2.62..976.00 rows=11 width=846) (actual\ntime=2.347..16.162 rows=15 loops=1)\n -> Hash Join (cost=2.62..929.09 rows=10 width=831)\n(actual time=2.217..15.480 rows=15 loops=1)\n Hash Cond: (\"outer\".venue_id = \"inner\".venue_id)\n -> Nested Loop (cost=0.00..926.32 rows=10\nwidth=818) (actual time=1.915..14.974 rows=15 loops=1)\n -> Seq Scan on lte_screening s (cost=\n0.00..886.67 rows=10 width=159) (actual time=1.830..14.314 rows=15 loops=1)\n Filter: ((period_start IS NOT NULL)\nAND (festival_id = 316372) AND (period_start >= '2006-05-19\n05:00:00'::timestamp without time zone) AND (period_start <= '2006-05-20\n04:59:59'::timestamp without time zone))\n -> Index Scan using lte_film_pkey on\nlte_film f (cost=0.00..3.95 rows=1 width=659) (actual\ntime=0.026..0.028rows=1 loops=15)\n Index Cond: (\"outer\".film_id =\nf.film_id)\n -> Hash (cost=2.50..2.50 rows=50 width=21)\n(actual time=0.215..0.215 rows=0 loops=1)\n -> Seq Scan on lte_venue v (cost=\n0.00..2.50 rows=50 width=21) (actual time=0.012..0.126 rows=52 loops=1)\n -> Index Scan using idx_classifier_film on\nlte_film_classifier c (cost=0.00..4.67 rows=2 width=23) (actual time=\n0.026..0.028 rows=1 loops=15)\n Index Cond: (c.film_id = \"outer\".film_id)\n Filter: ((name)::text = 'FestivalCategory'::text)\n -> Index Scan using lte_film_stats_pkey on lte_film_stats\nstats (cost=0.00..4.34 rows=1 width=28) (actual time=0.034..0.037 rows=1\nloops=15)\n Index Cond: ((stats.context = 316372) AND\n(stats.film_id= \"outer\".film_id))\n SubPlan\n -> Index Scan using idx_collateral_film on\nlte_film_collateral c (cost=0.00..4.24 rows=1 width=40) (actual time=\n0.009..0.011 rows=1 loops=15)\n Index Cond: (film_id = $0)\n Filter: ((name)::text = 'TVRating'::text)\n -> Index Scan using idx_collateral_film on\nlte_film_collateral c (cost=0.00..4.24 rows=1 width=40) (actual time=\n0.022..0.025 rows=1 loops=15)\n Index Cond: (film_id = $0)\n Filter: ((name)::text = 'IMDBId'::text)\nTotal runtime: 19.077 ms\n\n\nHere is our setup:\n\nWe have 2 machines. 
The first is the web server and the db server and the\nsecond is just another web server:\n\nMachine A\n- 1 GB RAM\n- 1 Intel(R) Xeon(TM) CPU 2.80GHz HyperThreaded Processor\n- CentOS 4.3\n- Linux moe 2.6.9-22.ELsmp #1 SMP Sat Oct 8 19:11:43 CDT 2005 i686 i686 i386\nGNU/Linux\n\nMachine B\n- 1 GB RAM\n- 1 Intel(R) Xeon(TM) CPU 2.80GHz Processor\n- CentOS 4.3\n- Linux larry 2.6.9-22.0.1.EL #1 Thu Oct 27 12:26:11 CDT 2005 i686 i686 i386\nGNU/Linux\n\nWe're using the following software:\n- Apache 2.0.52\n- Tomcat 5.5.17\n- Postgres 8.0.6\n- JDK 1.5.0-Release 6\n- Proxool 0.8.3\n- Hibernate 3.1.3\n\nThanks in advance for any help,\nMeetesh\n\nHi all,We've recently started having a problem where a query that normally executes in ~15ms starts to take upwards of 20s to complete.  When the connection that ran query is returned to the connection pool, it appears as though a transaction is still in progress so the connection pool tries to cancel the transaction and close the connection.  This fails and the connection is removed from the connection pool.  At this point, the situation rapidly degrades and we run out of connections to the postgres server.\nAn inspection of the pg_stat_activity table shows that practically every connection is running the above-mentioned query and some of those queries have been active for many minutes!  We've looked at the pg_locks table as well and the only exclusive locks are on transactions that are open.  All other locks are AccessShareLocks.  Also, as far as we can tell (from looking at the Hibernate stats), every db session that is opened is closed.\nWhen this happens, if I kill one of the running postgres processes (just by picking the last process returned from \"ps -ef | grep postgres\"), the other queries will immediately finish and the system will respond.  However, within 15 minutes, we'll be back in the same state as before.  At that point, I've cycled Apache, Tomcat and Postgres and the system then seems to come back.\nThis problem appears to be unrelated to load and in fact, the majority of the time there is very little load on the site when this occurs.  We've run load tests on our dev boxes but we've been unable to reproduce the problem.  We're currently working on playing back the clicks on the site previous to the weird state the site gets in and at the same time, we were wondering if anyone has experienced a problem like this or has any suggestions.\nThe query in question is:select distinct s.screening_id, f.film_id, f.title, s.period_start, f.runtime, c.value, v.short_name, s.parent_id,\n        stats.avg_rating, coalesce(stats.num_adds, 0) as num_adds, coalesce(stats.unique_visits, 0) as unique_visits,\n        f.*, s.*    from lte_screening s\n        inner join lte_film f on s.film_id = f.film_id        inner join lte_venue v on \ns.venue_id = v.venue_id        inner join lte_film_classifier c on c.film_id = f.film_id\n        left join lte_film_stats stats on stats.context = :context and stats.film_id = s.film_id\n    where c.name = ? and s.period_start is not null and s.festival_id = ?        and s.period_start\n between ? + ? and ? + ?    
order by s.period_start, f.title;And the result of explain analyze:\nQUERY PLANUnique  (cost=1117.42..1118.71 rows=11 width=866) (actual time=\n18.306..18.386 rows=15 loops=1)  ->  Sort  (cost=1117.42..1117.44 rows=11 width=866) (actual time=18.300..18.316 rows=15 loops=1)\n        Sort Key: s.period_start, f.title, s.screening_id, f.film_id, f.runtime, c.value, v.short_name, s.parent_id, \nstats.avg_rating, COALESCE(stats.num_adds, 0), COALESCE(stats.unique_visits, 0::bigint), f.film_id, f.sku, f.title, f.\"template\", f.release_date, f.runtime, f.\"language\", f.country, f.mpaa_rating, f.synopsis\n, f.\"owner\", f.ext_sales_rank, f.small_image_url, f.medium_image_url, f.large_image_url, f.detail_page, f.to_delete, f.coalesce_to, (subplan), (subplan), s.screening_id, s.period_start, s.period_end, s.ticket_price\n, s.tickets_avail, s.tickets_sold, s.\"type\", s.venue_id, s.festival_id, s.film_id, s.parent_id, s.ext_id, s.purchase_url, s.status, s.status_update_time\n        ->  Nested Loop Left Join  (cost=2.62..1117.23 rows=11 width=866) (actual time=2.656..17.773 rows=15 loops=1)\n              ->  Nested Loop  (cost=2.62..976.00 rows=11 width=846) (actual time=2.347..16.162 rows=15 loops=1)                    ->  Hash Join  (cost=\n2.62..929.09 rows=10 width=831) (actual time=2.217..15.480 rows=15 loops=1)                          Hash Cond: (\"outer\".venue_id = \"inner\".venue_id)\n                          ->  Nested Loop  (cost=0.00..926.32 rows=10 width=818) (actual time=1.915..14.974 rows=15 loops=1)\n                                ->  Seq Scan on lte_screening s  (cost=0.00..886.67 rows=10 width=159) (actual time=\n1.830..14.314 rows=15 loops=1)                                      Filter: ((period_start IS NOT NULL) AND (festival_id = 316372) AND (period_start >= '2006-05-19 05:00:00'::timestamp without time zone) AND (period_start <= '2006-05-20 04:59:59'::timestamp without time zone))\n                                ->  Index Scan using lte_film_pkey on lte_film f  (cost=0.00..3.95 rows=1 width=659) (actual time=\n0.026..0.028 rows=1 loops=15)                                      Index Cond: (\"outer\".film_id = f.film_id\n)                          ->  Hash  (cost=2.50..2.50 rows=50 width=21) (actual time=0.215..0.215 rows=0 loops=1)\n                                ->  Seq Scan on lte_venue v  (cost=0.00..2.50 rows=50 width=21) (actual time=0.012..0.126\n rows=52 loops=1)                    ->  Index Scan using idx_classifier_film on lte_film_classifier c  (cost=0.00..4.67\n rows=2 width=23) (actual time=0.026..0.028 rows=1 loops=15)                          Index Cond: (c.film_id = \"outer\".film_id)\n                          Filter: ((name)::text = 'FestivalCategory'::text)\n              ->  Index Scan using lte_film_stats_pkey on lte_film_stats stats  (cost=0.00..4.34 rows=1 width=28) (actual time=0.034..0.037 rows=1 loops=15)\n                    Index Cond: ((stats.context = 316372) AND (stats.film_id = \"outer\".film_id))\n              SubPlan                ->  Index Scan using idx_collateral_film on lte_film_collateral c  (cost=0.00..4.24\n rows=1 width=40) (actual time=0.009..0.011 rows=1 loops=15)                      Index Cond: (film_id = $0)\n                      Filter: ((name)::text = 'TVRating'::text)                ->  Index Scan using idx_collateral_film on lte_film_collateral c  (cost=\n0.00..4.24 rows=1 width=40) (actual time=0.022..0.025 rows=1 loops=15)                      Index Cond: (film_id = $0)\n                      Filter: ((name)::text = 
'IMDBId'::text)\nTotal runtime: 19.077 msHere is our setup:We have 2 machines.  The first is the web server and the db server and the second is just another web server:\nMachine A- 1 GB RAM- 1 Intel(R) Xeon(TM) CPU 2.80GHz HyperThreaded Processor- CentOS 4.3- Linux moe 2.6.9-22.ELsmp #1 SMP Sat Oct 8 19:11:43 CDT 2005 i686 i686 i386 GNU/LinuxMachine B- 1 GB RAM\n\n- 1 Intel(R) Xeon(TM) CPU 2.80GHz Processor- CentOS 4.3\n- Linux larry 2.6.9-22.0.1.EL #1 Thu Oct 27 12:26:11 CDT 2005 i686 i686 i386 GNU/LinuxWe're using the following software:- Apache 2.0.52- Tomcat 5.5.17- Postgres 8.0.6- JDK 1.5.0-Release 6- Proxool \n0.8.3- Hibernate 3.1.3Thanks in advance for any help,Meetesh", "msg_date": "Mon, 22 May 2006 12:20:09 -0500", "msg_from": "\"Meetesh Karia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query hanging/not finishing inconsistently" }, { "msg_contents": "Meetesh Karia wrote:\n> Hi all,\n> \n> We've recently started having a problem where a query that normally \n> executes in ~15ms starts to take upwards of 20s to complete. When the \n> connection that ran query is returned to the connection pool, it appears \n> as though a transaction is still in progress so the connection pool \n> tries to cancel the transaction and close the connection. This fails \n> and the connection is removed from the connection pool. At this point, \n> the situation rapidly degrades and we run out of connections to the \n> postgres server.\n> \n> An inspection of the pg_stat_activity table shows that practically every \n> connection is running the above-mentioned query and some of those \n> queries have been active for many minutes! We've looked at the pg_locks \n> table as well and the only exclusive locks are on transactions that are \n> open. All other locks are AccessShareLocks. Also, as far as we can \n> tell (from looking at the Hibernate stats), every db session that is \n> opened is closed.\n> \n> When this happens, if I kill one of the running postgres processes (just \n> by picking the last process returned from \"ps -ef | grep postgres\"), the \n> other queries will immediately finish and the system will respond. \n> However, within 15 minutes, we'll be back in the same state as before. \n> At that point, I've cycled Apache, Tomcat and Postgres and the system \n> then seems to come back.\n\nThis sounds suspiciously like a question I asked a few weeks ago, on April 4. I have a process that just gets stuck. After some questions from various of the experts in this forum, I used gdb(1) to attach to one of the frozen Postgress backend processes, and here's what I found:\n\nOn 5/12/2006, I wrote:\n> Thanks, good advice. You're absolutely right, it's stuck on a\n> mutex. After doing what you suggest, I discovered that the query\n> in progress is a user-written function (mine). When I log in as\n> root, and use \"gdb -p <pid>\" to attach to the process, here's\n> what I find. Notice the second function in the stack, a mutex\n> lock:\n>\n> (gdb) bt\n> #0 0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n> #1 0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6\n> #2 0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6\n> #3 0x4f5fc1b4 in ?? ()\n> #4 0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from /usr/local/pgsql/lib/libchmoogle.so\n> #5 0x009ffcf0 in ?? () from /usr/lib/libz.so.1\n> #6 0xbfe71c04 in ?? ()\n> #7 0xbfe71e50 in ?? ()\n> #8 0xbfe71b78 in ?? 
()\n> #9 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1\n> #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at zipstreamimpl.h:332\n> #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1, pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n> #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50, pOb=0xbfd923b8) at obconversion.cpp:780\n> #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120\n> #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at stl_construct.h:120\n> #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243\n> #18 0x0810ae4d in ExecMakeFunctionResult ()\n> #19 0x0810de2e in ExecProject ()\n> #20 0x08115972 in ExecResult ()\n> #21 0x08109e01 in ExecProcNode ()\n> #22 0x00000020 in ?? ()\n> #23 0xbed4b340 in ?? ()\n> #24 0xbf92d9a0 in ?? ()\n> #25 0xbed4b0c0 in ?? ()\n> #26 0x00000000 in ?? ()\n>\n> It looks to me like my code is trying to read the input parameter\n> (a fairly long string, maybe 2K) from a buffer that was gzip'ed\n> by Postgres for the trip between the client and server... somewhere\n> along the way, a mutex gets set, and then ... it's stuck forever.\n>\n> ps(1) shows that this thread had been running for about 7 hours,\n> and the job status showed that this function had been\n> successfully called about 1 million times, before this mutex lock\n> occurred.\n\nThis is not an issue that's been resolved. Nobody had ever seen this before. Tom Lane suggested it might be a libc/c++ bug, but unfortunately in my case this lockup occurs so rarely (every few days) that it will be very difficult to know if we've fixed the problem.\n\nIf gdb(1) reveals that your process is stuck in a mutex, then you might have a better chance testing this hypothesis, since your problem happens within 15 minutes or so.\n\nDid this start recently, perhaps right after a kernel update?\n\nCraig\n", "msg_date": "Mon, 22 May 2006 11:49:39 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query hanging/not finishing inconsistently" }, { "msg_contents": "Hi Craig,\n\nThanks for your response. This did start recently and it wasn't after a\nkernel update, but it was after we moved the db from Machine B to Machine A\n(which have slightly different kernel versions). However, the problem took\nabout a week to show up after we moved from one machine to the other.\nUnfortunately, the problem only reappears after 15 mins once it occurs the\nfirst time. If it occurs again today I'll attach gdb to it and see whether\nit's stuck on a mutex.\n\nMeetesh\n\nOn 5/22/06, Craig A. James <[email protected]> wrote:\n>\n> Meetesh Karia wrote:\n> > Hi all,\n> >\n> > We've recently started having a problem where a query that normally\n> > executes in ~15ms starts to take upwards of 20s to complete. When the\n> > connection that ran query is returned to the connection pool, it appears\n>\n> > as though a transaction is still in progress so the connection pool\n> > tries to cancel the transaction and close the connection. This fails\n> > and the connection is removed from the connection pool. 
At this point,\n> > the situation rapidly degrades and we run out of connections to the\n> > postgres server.\n> >\n> > An inspection of the pg_stat_activity table shows that practically every\n> > connection is running the above-mentioned query and some of those\n> > queries have been active for many minutes! We've looked at the pg_locks\n> > table as well and the only exclusive locks are on transactions that are\n> > open. All other locks are AccessShareLocks. Also, as far as we can\n> > tell (from looking at the Hibernate stats), every db session that is\n> > opened is closed.\n> >\n> > When this happens, if I kill one of the running postgres processes (just\n> > by picking the last process returned from \"ps -ef | grep postgres\"), the\n>\n> > other queries will immediately finish and the system will respond.\n> > However, within 15 minutes, we'll be back in the same state as before.\n> > At that point, I've cycled Apache, Tomcat and Postgres and the system\n> > then seems to come back.\n>\n> This sounds suspiciously like a question I asked a few weeks ago, on April\n> 4. I have a process that just gets stuck. After some questions from\n> various of the experts in this forum, I used gdb(1) to attach to one of the\n> frozen Postgress backend processes, and here's what I found:\n>\n> On 5/12/2006, I wrote:\n> > Thanks, good advice. You're absolutely right, it's stuck on a\n> > mutex. After doing what you suggest, I discovered that the query\n> > in progress is a user-written function (mine). When I log in as\n> > root, and use \"gdb -p <pid>\" to attach to the process, here's\n> > what I find. Notice the second function in the stack, a mutex\n> > lock:\n> >\n> > (gdb) bt\n> > #0 0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld- linux.so.2\n> > #1 0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6\n> > #2 0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6\n> > #3 0x4f5fc1b4 in ?? ()\n> > #4 0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from\n> /usr/local/pgsql/lib/libchmoogle.so\n> > #5 0x009ffcf0 in ?? () from /usr/lib/libz.so.1\n> > #6 0xbfe71c04 in ?? ()\n> > #7 0xbfe71e50 in ?? ()\n> > #8 0xbfe71b78 in ?? ()\n> > #9 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> > #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> > #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1\n> > #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at\n> zipstreamimpl.h:332\n> > #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1,\n> pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n> > #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50,\n> pOb=0xbfd923b8) at obconversion.cpp:780\n> > #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120\n> > #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at\n> stl_construct.h:120\n> > #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243\n> > #18 0x0810ae4d in ExecMakeFunctionResult ()\n> > #19 0x0810de2e in ExecProject ()\n> > #20 0x08115972 in ExecResult ()\n> > #21 0x08109e01 in ExecProcNode ()\n> > #22 0x00000020 in ?? ()\n> > #23 0xbed4b340 in ?? ()\n> > #24 0xbf92d9a0 in ?? ()\n> > #25 0xbed4b0c0 in ?? ()\n> > #26 0x00000000 in ?? ()\n> >\n> > It looks to me like my code is trying to read the input parameter\n> > (a fairly long string, maybe 2K) from a buffer that was gzip'ed\n> > by Postgres for the trip between the client and server... somewhere\n> > along the way, a mutex gets set, and then ... 
it's stuck forever.\n> >\n> > ps(1) shows that this thread had been running for about 7 hours,\n> > and the job status showed that this function had been\n> > successfully called about 1 million times, before this mutex lock\n> > occurred.\n>\n> This is not an issue that's been resolved. Nobody had ever seen this\n> before. Tom Lane suggested it might be a libc/c++ bug, but unfortunately in\n> my case this lockup occurs so rarely (every few days) that it will be very\n> difficult to know if we've fixed the problem.\n>\n> If gdb(1) reveals that your process is stuck in a mutex, then you might\n> have a better chance testing this hypothesis, since your problem happens\n> within 15 minutes or so.\n>\n> Did this start recently, perhaps right after a kernel update?\n>\n> Craig\n>\n\nHi Craig,Thanks for your response.  This did start recently and it wasn't after a kernel update, but it was after we moved the db from Machine B to Machine A (which have slightly different kernel versions).  However, the problem took about a week to show up after we moved from one machine to the other.  Unfortunately, the problem only reappears after 15 mins once it occurs the first time.  If it occurs again today I'll attach gdb to it and see whether it's stuck on a mutex.\nMeeteshOn 5/22/06, Craig A. James\n <[email protected]> wrote:\n\nMeetesh Karia wrote:> Hi all,>> We've recently started having a problem where a query that normally> executes in ~15ms starts to take upwards of 20s to complete.  When the> connection that ran query is returned to the connection pool, it appears\n> as though a transaction is still in progress so the connection pool> tries to cancel the transaction and close the connection.  This fails> and the connection is removed from the connection pool.  At this point,\n> the situation rapidly degrades and we run out of connections to the> postgres server.>> An inspection of the pg_stat_activity table shows that practically every> connection is running the above-mentioned query and some of those\n> queries have been active for many minutes!  We've looked at the pg_locks> table as well and the only exclusive locks are on transactions that are> open.  All other locks are AccessShareLocks.  Also, as far as we can\n> tell (from looking at the Hibernate stats), every db session that is> opened is closed.>> When this happens, if I kill one of the running postgres processes (just> by picking the last process returned from \"ps -ef | grep postgres\"), the\n> other queries will immediately finish and the system will respond.> However, within 15 minutes, we'll be back in the same state as before.> At that point, I've cycled Apache, Tomcat and Postgres and the system\n> then seems to come back.This sounds suspiciously like a question I asked a few weeks ago, on April 4.  I have a process that just gets stuck.  After some questions from various of the experts in this forum, I used gdb(1) to attach to one of the frozen Postgress backend processes, and here's what I found:\nOn 5/12/2006, I wrote:> Thanks, good advice.  You're absolutely right, it's stuck on a> mutex.  After doing what you suggest, I discovered that the query> in progress is a user-written function (mine).  When I log in as\n> root, and use \"gdb -p <pid>\" to attach to the process, here's> what I find.  
Notice the second function in the stack, a mutex> lock:>> (gdb) bt> #0  0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld-\nlinux.so.2> #1  0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6> #2  0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6> #3  0x4f5fc1b4 in ?? ()> #4  0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from /usr/local/pgsql/lib/libchmoogle.so\n> #5  0x009ffcf0 in ?? () from /usr/lib/libz.so.1> #6  0xbfe71c04 in ?? ()> #7  0xbfe71e50 in ?? ()> #8  0xbfe71b78 in ?? ()> #9  0x009f7019 in zcfree () from /usr/lib/libz.so.1> #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1> #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at zipstreamimpl.h:332> #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1, pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n> #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50, pOb=0xbfd923b8) at obconversion.cpp:780> #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120> #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at stl_construct.h:120\n> #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243> #18 0x0810ae4d in ExecMakeFunctionResult ()> #19 0x0810de2e in ExecProject ()> #20 0x08115972 in ExecResult ()> #21 0x08109e01 in ExecProcNode ()\n> #22 0x00000020 in ?? ()> #23 0xbed4b340 in ?? ()> #24 0xbf92d9a0 in ?? ()> #25 0xbed4b0c0 in ?? ()> #26 0x00000000 in ?? ()>> It looks to me like my code is trying to read the input parameter\n> (a fairly long string, maybe 2K) from a buffer that was gzip'ed> by Postgres for the trip between the client and server... somewhere> along the way, a mutex gets set, and then ... it's stuck forever.\n>> ps(1) shows that this thread had been running for about 7 hours,> and the job status showed that this function had been> successfully called about 1 million times, before this mutex lock\n\n> occurred.This is not an issue that's been resolved.  Nobody had ever seen this before.  Tom Lane suggested it might be a libc/c++ bug, but unfortunately in my case this lockup occurs so rarely (every few days) that it will be very difficult to know if we've fixed the problem.\nIf gdb(1) reveals that your process is stuck in a mutex, then you might have a better chance testing this hypothesis, since your problem happens within 15 minutes or so.Did this start recently, perhaps right after a kernel update?\nCraig", "msg_date": "Mon, 22 May 2006 15:40:11 -0500", "msg_from": "\"Meetesh Karia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query hanging/not finishing inconsistently" }, { "msg_contents": "Hi all,\n\nI just saw another email on the mailing list to this effect as well. We\nrecently updated the kernel versions on our machines to the latest stable\nversions (which contained both HyperThreading and IO bug fixes) and we\nupdated Postgres to version 8.0.8. We thought we were in the clear when we\ndidn't encounter a hang for 6+ days. But, once again we ran into the same\nsituation where a query that normally executes in ~15ms wouldn't finish. As\nbefore, there were no ungranted locks and threads weren't waiting on a\nlock. 
I attached gdb to one of the stuck postgres processes and got the\nfollowing stack trace:\n\n#0 0x008967a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n#1 0x00977e5b in semop () from /lib/tls/libc.so.6\n#2 0x08167298 in PGSemaphoreLock ()\n#3 0x0818bcb5 in LWLockAcquire ()\n#4 0x080a47f5 in SimpleLruWritePage ()\n#5 0x080a48ad in SimpleLruReadPage ()\n#6 0x080a519a in SubTransGetParent ()\n#7 0x080a51f2 in SubTransGetTopmostTransaction ()\n#8 0x0821371c in HeapTupleSatisfiesSnapshot ()\n#9 0x080822a2 in heap_release_fetch ()\n#10 0x080880fb in index_getnext ()\n#11 0x08128507 in ExecReScanHashJoin ()\n#12 0x08122a09 in ExecScan ()\n#13 0x081287f9 in ExecIndexScan ()\n#14 0x0811dfdd in ExecProcNode ()\n#15 0x0812a49f in ExecNestLoop ()\n#16 0x0811df9d in ExecProcNode ()\n#17 0x0812b74d in ExecSort ()\n#18 0x0811df5d in ExecProcNode ()\n#19 0x0812b941 in ExecUnique ()\n#20 0x0811df2c in ExecProcNode ()\n#21 0x0811ce18 in ExecutorRun ()\n#22 0x081947ec in PortalSetResultFormat ()\n#23 0x08194df4 in PortalRun ()\n#24 0x08192ef7 in PostgresMain ()\n#25 0x08169780 in ClosePostmasterPorts ()\n#26 0x0816b0ae in PostmasterMain ()\n#27 0x0813a5a6 in main ()\n\nWe then upgraded glibc to 2.3.4-2.19 but we encountered the problem within a\nday. Our latest attempt at isolating the problem has been to reboot the\nmachine with a 'noht' kernel param. The machine has been up for 1 day,\n13:18 since then and we haven't seen the problem yet.\n\nHas anyone been able to solve this problem?\n\nThanks,\nMeetesh\n\nOn 5/22/06, Meetesh Karia <[email protected] > wrote:\n>\n> Hi Craig,\n>\n> Thanks for your response. This did start recently and it wasn't after a\n> kernel update, but it was after we moved the db from Machine B to Machine A\n> (which have slightly different kernel versions). However, the problem took\n> about a week to show up after we moved from one machine to the other.\n> Unfortunately, the problem only reappears after 15 mins once it occurs the\n> first time. If it occurs again today I'll attach gdb to it and see whether\n> it's stuck on a mutex.\n>\n> Meetesh\n>\n>\n> On 5/22/06, Craig A. James <[email protected]> wrote:\n> >\n> > Meetesh Karia wrote:\n> > > Hi all,\n> > >\n> > > We've recently started having a problem where a query that normally\n> > > executes in ~15ms starts to take upwards of 20s to complete. When the\n> > > connection that ran query is returned to the connection pool, it\n> > appears\n> > > as though a transaction is still in progress so the connection pool\n> > > tries to cancel the transaction and close the connection. This fails\n> > > and the connection is removed from the connection pool. At this\n> > point,\n> > > the situation rapidly degrades and we run out of connections to the\n> > > postgres server.\n> > >\n> > > An inspection of the pg_stat_activity table shows that practically\n> > every\n> > > connection is running the above-mentioned query and some of those\n> > > queries have been active for many minutes! We've looked at the\n> > pg_locks\n> > > table as well and the only exclusive locks are on transactions that\n> > are\n> > > open. All other locks are AccessShareLocks. 
Also, as far as we can\n> > > tell (from looking at the Hibernate stats), every db session that is\n> > > opened is closed.\n> > >\n> > > When this happens, if I kill one of the running postgres processes\n> > (just\n> > > by picking the last process returned from \"ps -ef | grep postgres\"),\n> > the\n> > > other queries will immediately finish and the system will respond.\n> > > However, within 15 minutes, we'll be back in the same state as before.\n> > > At that point, I've cycled Apache, Tomcat and Postgres and the system\n> > > then seems to come back.\n> >\n> > This sounds suspiciously like a question I asked a few weeks ago, on\n> > April 4. I have a process that just gets stuck. After some questions from\n> > various of the experts in this forum, I used gdb(1) to attach to one of the\n> > frozen Postgress backend processes, and here's what I found:\n> >\n> > On 5/12/2006, I wrote:\n> > > Thanks, good advice. You're absolutely right, it's stuck on a\n> > > mutex. After doing what you suggest, I discovered that the query\n> > > in progress is a user-written function (mine). When I log in as\n> > > root, and use \"gdb -p <pid>\" to attach to the process, here's\n> > > what I find. Notice the second function in the stack, a mutex\n> > > lock:\n> > >\n> > > (gdb) bt\n> > > #0 0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld- linux.so.2\n> > > #1 0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6\n> > > #2 0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6\n> > > #3 0x4f5fc1b4 in ?? ()\n> > > #4 0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from\n> > /usr/local/pgsql/lib/libchmoogle.so\n> > > #5 0x009ffcf0 in ?? () from /usr/lib/libz.so.1\n> > > #6 0xbfe71c04 in ?? ()\n> > > #7 0xbfe71e50 in ?? ()\n> > > #8 0xbfe71b78 in ?? ()\n> > > #9 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> > > #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> > > #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1\n> > > #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at\n> > zipstreamimpl.h:332\n> > > #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1,\n> > pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n> > > #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50,\n> > pOb=0xbfd923b8) at obconversion.cpp:780\n> > > #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120\n> > > #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at\n> > stl_construct.h:120\n> > > #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243\n> > > #18 0x0810ae4d in ExecMakeFunctionResult ()\n> > > #19 0x0810de2e in ExecProject ()\n> > > #20 0x08115972 in ExecResult ()\n> > > #21 0x08109e01 in ExecProcNode ()\n> > > #22 0x00000020 in ?? ()\n> > > #23 0xbed4b340 in ?? ()\n> > > #24 0xbf92d9a0 in ?? ()\n> > > #25 0xbed4b0c0 in ?? ()\n> > > #26 0x00000000 in ?? ()\n> > >\n> > > It looks to me like my code is trying to read the input parameter\n> > > (a fairly long string, maybe 2K) from a buffer that was gzip'ed\n> > > by Postgres for the trip between the client and server... somewhere\n> > > along the way, a mutex gets set, and then ... it's stuck forever.\n> > >\n> > > ps(1) shows that this thread had been running for about 7 hours,\n> > > and the job status showed that this function had been\n> > > successfully called about 1 million times, before this mutex lock\n> > > occurred.\n> >\n> > This is not an issue that's been resolved. Nobody had ever seen this\n> > before. 
Tom Lane suggested it might be a libc/c++ bug, but unfortunately in\n> > my case this lockup occurs so rarely (every few days) that it will be very\n> > difficult to know if we've fixed the problem.\n> >\n> > If gdb(1) reveals that your process is stuck in a mutex, then you might\n> > have a better chance testing this hypothesis, since your problem happens\n> > within 15 minutes or so.\n> >\n> > Did this start recently, perhaps right after a kernel update?\n> >\n> > Craig\n> >\n>\n>\n\nHi all,I just saw another email on the mailing list to this effect as well.  We recently updated the kernel versions on our machines to the latest stable versions (which contained both HyperThreading and IO bug fixes) and we updated Postgres to version \n8.0.8.  We thought we were in the clear when we didn't encounter a hang for 6+ days.  But, once again we ran into the same situation where a query that normally executes in ~15ms wouldn't finish.  As before, there were no ungranted locks and threads weren't waiting on a lock.  I attached gdb to one of the stuck postgres processes and got the following stack trace:\n#0  0x008967a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2#1  0x00977e5b in semop () from /lib/tls/libc.so.6#2  0x08167298 in PGSemaphoreLock ()#3  0x0818bcb5 in LWLockAcquire ()#4  0x080a47f5 in SimpleLruWritePage ()\n#5  0x080a48ad in SimpleLruReadPage ()#6  0x080a519a in SubTransGetParent ()#7  0x080a51f2 in SubTransGetTopmostTransaction ()#8  0x0821371c in HeapTupleSatisfiesSnapshot ()#9  0x080822a2 in heap_release_fetch ()\n#10 0x080880fb in index_getnext ()#11 0x08128507 in ExecReScanHashJoin ()#12 0x08122a09 in ExecScan ()#13 0x081287f9 in ExecIndexScan ()#14 0x0811dfdd in ExecProcNode ()#15 0x0812a49f in ExecNestLoop ()\n#16 0x0811df9d in ExecProcNode ()#17 0x0812b74d in ExecSort ()#18 0x0811df5d in ExecProcNode ()#19 0x0812b941 in ExecUnique ()#20 0x0811df2c in ExecProcNode ()#21 0x0811ce18 in ExecutorRun ()#22 0x081947ec in PortalSetResultFormat ()\n#23 0x08194df4 in PortalRun ()#24 0x08192ef7 in PostgresMain ()#25 0x08169780 in ClosePostmasterPorts ()#26 0x0816b0ae in PostmasterMain ()#27 0x0813a5a6 in main ()We then upgraded glibc to 2.3.4-2.19\n but we encountered the problem within a day.  Our latest attempt at isolating the problem has been to reboot the machine with a 'noht' kernel param.  The machine has been up for 1 day, 13:18 since then and we haven't seen the problem yet.\nHas anyone been able to solve this problem?Thanks,MeeteshOn 5/22/06, Meetesh Karia <\[email protected]\n> wrote:Hi Craig,Thanks for your response.  This did start recently and it wasn't after a kernel update, but it was after we moved the db from Machine B to Machine A (which have slightly different kernel versions).  However, the problem took about a week to show up after we moved from one machine to the other.  Unfortunately, the problem only reappears after 15 mins once it occurs the first time.  If it occurs again today I'll attach gdb to it and see whether it's stuck on a mutex.\nMeeteshOn 5/22/06, Craig A. James\n <[email protected]> wrote:\n\n\n\nMeetesh Karia wrote:> Hi all,>> We've recently started having a problem where a query that normally> executes in ~15ms starts to take upwards of 20s to complete.  When the> connection that ran query is returned to the connection pool, it appears\n> as though a transaction is still in progress so the connection pool> tries to cancel the transaction and close the connection.  This fails> and the connection is removed from the connection pool.  
At this point,\n> the situation rapidly degrades and we run out of connections to the> postgres server.>> An inspection of the pg_stat_activity table shows that practically every> connection is running the above-mentioned query and some of those\n> queries have been active for many minutes!  We've looked at the pg_locks> table as well and the only exclusive locks are on transactions that are> open.  All other locks are AccessShareLocks.  Also, as far as we can\n> tell (from looking at the Hibernate stats), every db session that is> opened is closed.>> When this happens, if I kill one of the running postgres processes (just> by picking the last process returned from \"ps -ef | grep postgres\"), the\n> other queries will immediately finish and the system will respond.> However, within 15 minutes, we'll be back in the same state as before.> At that point, I've cycled Apache, Tomcat and Postgres and the system\n> then seems to come back.This sounds suspiciously like a question I asked a few weeks ago, on April 4.  I have a process that just gets stuck.  After some questions from various of the experts in this forum, I used gdb(1) to attach to one of the frozen Postgress backend processes, and here's what I found:\nOn 5/12/2006, I wrote:> Thanks, good advice.  You're absolutely right, it's stuck on a> mutex.  After doing what you suggest, I discovered that the query> in progress is a user-written function (mine).  When I log in as\n> root, and use \"gdb -p <pid>\" to attach to the process, here's> what I find.  Notice the second function in the stack, a mutex> lock:>> (gdb) bt> #0  0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld-\nlinux.so.2> #1  0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6> #2  0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6> #3  0x4f5fc1b4 in ?? ()> #4  0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from /usr/local/pgsql/lib/libchmoogle.so\n> #5  0x009ffcf0 in ?? () from /usr/lib/libz.so.1> #6  0xbfe71c04 in ?? ()> #7  0xbfe71e50 in ?? ()> #8  0xbfe71b78 in ?? ()> #9  0x009f7019 in zcfree () from /usr/lib/libz.so.1> #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1\n> #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1> #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at zipstreamimpl.h:332> #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1, pOb=0xbfd923b8, pin=0xffffffea) at istream:115\n> #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50, pOb=0xbfd923b8) at obconversion.cpp:780> #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120> #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at stl_construct.h:120\n> #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243> #18 0x0810ae4d in ExecMakeFunctionResult ()> #19 0x0810de2e in ExecProject ()> #20 0x08115972 in ExecResult ()> #21 0x08109e01 in ExecProcNode ()\n> #22 0x00000020 in ?? ()> #23 0xbed4b340 in ?? ()> #24 0xbf92d9a0 in ?? ()> #25 0xbed4b0c0 in ?? ()> #26 0x00000000 in ?? ()>> It looks to me like my code is trying to read the input parameter\n> (a fairly long string, maybe 2K) from a buffer that was gzip'ed> by Postgres for the trip between the client and server... somewhere> along the way, a mutex gets set, and then ... it's stuck forever.\n>> ps(1) shows that this thread had been running for about 7 hours,> and the job status showed that this function had been> successfully called about 1 million times, before this mutex lock\n\n\n\n> occurred.This is not an issue that's been resolved.  
Nobody had ever seen this before.  Tom Lane suggested it might be a libc/c++ bug, but unfortunately in my case this lockup occurs so rarely (every few days) that it will be very difficult to know if we've fixed the problem.\nIf gdb(1) reveals that your process is stuck in a mutex, then you might have a better chance testing this hypothesis, since your problem happens within 15 minutes or so.Did this start recently, perhaps right after a kernel update?\nCraig", "msg_date": "Tue, 20 Jun 2006 12:25:19 -0500", "msg_from": "\"Meetesh Karia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query hanging/not finishing inconsistently" }, { "msg_contents": "\"Meetesh Karia\" <[email protected]> writes:\n> ... But, once again we ran into the same\n> situation where a query that normally executes in ~15ms wouldn't finish. As\n> before, there were no ungranted locks and threads weren't waiting on a\n> lock. I attached gdb to one of the stuck postgres processes and got the\n> following stack trace:\n\n> #0 0x008967a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n> #1 0x00977e5b in semop () from /lib/tls/libc.so.6\n> #2 0x08167298 in PGSemaphoreLock ()\n> #3 0x0818bcb5 in LWLockAcquire ()\n> #4 0x080a47f5 in SimpleLruWritePage ()\n> #5 0x080a48ad in SimpleLruReadPage ()\n> #6 0x080a519a in SubTransGetParent ()\n> #7 0x080a51f2 in SubTransGetTopmostTransaction ()\n> #8 0x0821371c in HeapTupleSatisfiesSnapshot ()\n\nWhat I'm wondering about is possible deadlock conditions inside slru.c.\nThere's no deadlock detection for LWLocks, so if it happened, the\nprocesses involved would just freeze up.\n\nIf this happens again, would you collect stack traces from all the stuck\nprocesses, not just one?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Jun 2006 20:31:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query hanging/not finishing inconsistently " } ]
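The thread above describes inspecting pg_stat_activity and pg_locks by hand while the backends were stuck. A minimal sketch of those two checks, assuming the 8.0/8.1-era catalog column names in use at the time (procpid, current_query) and that stats_command_string is enabled so current_query is populated:

```sql
-- How long each backend's current statement has been running.
SELECT procpid, usename, now() - query_start AS running_for, current_query
FROM pg_stat_activity
ORDER BY query_start;

-- Lock requests that have not been granted; the posters found none,
-- which is what pointed away from ordinary lock contention.
SELECT locktype, relation::regclass AS relation, pid, mode, granted
FROM pg_locks
WHERE NOT granted;
```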
[ { "msg_contents": "Hi,\n I am having a problem with a sub select query being kinda slow. The\nquery is as follows:\n \nselect batterycode, batterydescription, observationdate from Battery t1\nwhere patientidentifier=611802158 and observationdate = (select\nmax(observationdate) from Battery t2 where t2.batterycode=t1.batterycode\nand patientidentifier=611802158) order by batterydescription.\n \nexplain analyze:\n \n\n'Sort (cost=1697.16..1697.16 rows=1 width=31) (actual\ntime=910.721..910.729 rows=22 loops=1)'\n' Sort Key: batterydescription'\n' -> Index Scan using ix_battery_patient on battery t1\n(cost=0.00..1697.15 rows=1 width=31) (actual time=241.836..910.580\nrows=22 loops=1)'\n' Index Cond: (patientidentifier = 611802158)'\n' Filter: (observationdate = (subplan))'\n' SubPlan'\n' -> Aggregate (cost=26.25..26.26 rows=1 width=8) (actual\ntime=9.666..9.667 rows=1 loops=94)'\n' -> Bitmap Heap Scan on battery t2 (cost=22.23..26.25 rows=1 width=8)\n(actual time=9.606..9.620 rows=7 loops=94)'\n' Recheck Cond: ((patientidentifier = 611802158) AND\n((batterycode)::text = ($0)::text))'\n' -> BitmapAnd (cost=22.23..22.23 rows=1 width=0) (actual\ntime=9.596..9.596 rows=0 loops=94)'\n' -> Bitmap Index Scan on ix_battery_patient (cost=0.00..2.20 rows=58\nwidth=0) (actual time=0.039..0.039 rows=94 loops=94)'\n' Index Cond: (patientidentifier = 611802158)'\n' -> Bitmap Index Scan on ix_battery_code (cost=0.00..19.78 rows=2794\nwidth=0) (actual time=9.514..9.514 rows=27323 loops=94)'\n' Index Cond: ((batterycode)::text = ($0)::text)'\n'Total runtime: 910.897 ms'\n\nBasically I am just trying to display the batterycode with its most\nrecent date. Is there a better way to do this query ?\n\nthanks\n \n \nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n", "msg_date": "Mon, 22 May 2006 18:11:07 -0400", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow query using sub select" }, { "msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> I am having a problem with a sub select query being kinda slow. The\n> query is as follows:\n \n> select batterycode, batterydescription, observationdate from Battery t1\n> where patientidentifier=611802158 and observationdate = (select\n> max(observationdate) from Battery t2 where t2.batterycode=t1.batterycode\n> and patientidentifier=611802158) order by batterydescription.\n\nYeah, this is essentially impossible for the planner to optimize,\nbecause it doesn't see any way to de-correlate the subselect, so it does\nit over again for every row. You might find it works better if you cast\nthe thing as a SELECT DISTINCT ON problem (look at the \"weather report\"\nexample in the SELECT reference page).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 May 2006 19:07:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query using sub select " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tim Jones [mailto:[email protected]]\n> Sent: Tuesday, May 23, 2006 12:11 AM\n> To: [email protected]\n> Subject: [PERFORM] slow query using sub select\n> \n> Hi,\n> I am having a problem with a sub select query being kinda slow. 
The\n> query is as follows:\n> \n> select batterycode, batterydescription, observationdate from Battery t1\n> where patientidentifier=611802158 and observationdate = (select\n> max(observationdate) from Battery t2 where t2.batterycode=t1.batterycode\n> and patientidentifier=611802158) order by batterydescription.\n\n\nHow about changing it into a standard join:\n\n\nselect t1.batterycode, t1.batterydescription, t2.observationdate\nfrom Battery t1, \n(Select batterycode ,max(observationdate) from Battery t2 where\npatientidentifier=611802158 group by batterycode) AS T2\nwhere t1. batterycode = t2. batterycode\n\nJonathan Blitz\nAnyKey Limited\nIsrael\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.392 / Virus Database: 268.6.1/344 - Release Date: 05/19/2006\n \n\n", "msg_date": "Tue, 23 May 2006 02:33:09 +0200", "msg_from": "\"Jonathan Blitz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query using sub select" } ]
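Tom Lane's advice in this thread is to recast the correlated subquery as a SELECT DISTINCT ON query (the "weather report" example he cites). A sketch of what that rewrite could look like, using the table and column names from the original post; the outer query restores the ORDER BY batterydescription, since DISTINCT ON requires the inner sort to start with batterycode:

```sql
-- One row per batterycode, keeping the row with the latest observationdate,
-- then re-sorted by batterydescription for display.
SELECT batterycode, batterydescription, observationdate
FROM (
    SELECT DISTINCT ON (batterycode)
           batterycode, batterydescription, observationdate
    FROM battery
    WHERE patientidentifier = 611802158
    ORDER BY batterycode, observationdate DESC
) AS latest
ORDER BY batterydescription;
```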
[ { "msg_contents": "[email protected] wrote:\n> The above query takes 5 seconds to execute!\n> \n> [...]\n>\n> Total runtime: 96109.571 ms\n\nIt sure doesn't look like it...\n\n> Total runtime: 461.907 ms\n>\n> [...]\n>\n> Suddenly the query takes only 0.29 seconds!\n\nHow are you timing this, really?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 22 May 2006 23:33:33 +0000 (UTC)", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "Steinar H. Gunderson wrote:\n\n> [email protected] wrote:\n>> The above query takes 5 seconds to execute!\n>> \n>> [...]\n>>\n>> Total runtime: 96109.571 ms\n> \n> It sure doesn't look like it...\n> \n>> Total runtime: 461.907 ms\n>>\n>> [...]\n>>\n>> Suddenly the query takes only 0.29 seconds!\n> \n> How are you timing this, really?\n> \n> /* Steinar */\n\nI'm executing the queries from phpPgAdmin. \nThe above are for explain analyse. I was referring to the pure query\nexecution time.\nDoes anyone have an idea why the OR-query takes so long?\nAny server-side tuning possibilities? I wouldn't like to change the code of\nldap's back-sql...\n\nToni\n", "msg_date": "Tue, 23 May 2006 09:10:29 +0200", "msg_from": "Antonio Batovanja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "Antonio Batovanja wrote:\n> Laurenz Albe wrote:\n> \n>> Antonio Batovanja wrote:\n>>> I'm having trouble understanding, why a specific query on a small\n>>> database is taking so long...\n>>>\n>> Before I try to understand the execution plans:\n>>\n>> Have you run ANALYZE on the tables involved before you ran the query?\n> \n> Hi,\n> \n> Just to be on the safe side, I've run ANALYZE now.\n> Here are the query plans for the two queries:\n\nI suspect a misunderstanding here. What Laurenz probably meant is to run \n analyze on the involved _tables_ so the statistics data is refreshed. \nIf the query planner runs with outdated statistics, queries may perform \nvery poorly. Try\n\n\tvacuum full analyze yourdatabase\n\nTo fully vacuum your database and analyze all tables.\n(vacuum full is extra, but can't hurt.)\n\nhttp://www.postgresql.org/docs/8.1/static/sql-vacuum.html\nhttp://www.postgresql.org/docs/8.1/static/sql-analyze.html\n\nRegards, Erwin\n", "msg_date": "Sun, 28 May 2006 23:56:27 +0200", "msg_from": "Erwin Brandstetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "Antonio Batovanja wrote:\n(...)\n\n> 1) the slooooow query:\n> EXPLAIN ANALYZE SELECT DISTINCT ldap_entries.id, organization.id,\n> text('organization') AS objectClass, ldap_entries.dn AS dn FROM\n> ldap_entries, organization, ldap_entry_objclasses WHERE\n> organization.id=ldap_entries.keyval AND ldap_entries.oc_map_id=1 AND\n> upper(ldap_entries.dn) LIKE '%DC=HUMANOMED,DC=AT' AND 1=1 OR\n> (ldap_entries.id=ldap_entry_objclasses.entry_id AND\n> ldap_entry_objclasses.oc_name='organization');\n\n\nFirst, presenting your query in any readable form might be helpful if \nyou want the community to help you. (Hint! 
Hint!)\n\nSELECT DISTINCT ldap_entries.id, organization.id,\n\ttext('organization') AS objectClass, ldap_entries.dn AS dn\n FROM ldap_entries, organization, ldap_entry_objclasses\n WHERE organization.id=ldap_entries.keyval\n AND ldap_entries.oc_map_id=1\n AND upper(ldap_entries.dn) LIKE '%DC=HUMANOMED,DC=AT'\n AND 1=1\n OR (ldap_entries.id=ldap_entry_objclasses.entry_id\n AND ldap_entry_objclasses.oc_name='organization');\n\nNext, you might want to use aliases to make it more readable.\n\nSELECT DISTINCT e.id, o.id, text('organization') AS objectClass, e.dn AS dn\n FROM ldap_entries AS e, organization AS o, ldap_entry_objclasses AS eo\n WHERE o.id=e.keyval\n AND e.oc_map_id=1\n AND upper(e.dn) LIKE '%DC=HUMANOMED,DC=AT'\n AND 1=1\n OR (e.id=eo.entry_id\n AND eo.oc_name='organization');\n\nThere are a couple redundant (nonsensical) items, syntax-wise. Let's \nstrip these:\n\nSELECT DISTINCT e.id, o.id, text('organization') AS objectClass, e.dn\n FROM ldap_entries AS e, organization AS o, ldap_entry_objclasses AS eo\n WHERE o.id=e.keyval\n AND e.oc_map_id=1\n AND e.dn ILIKE '%DC=HUMANOMED,DC=AT'\n OR e.id=eo.entry_id\n AND eo.oc_name='organization';\n\nAnd finally, I suspect the lexical precedence of AND and OR might be the \nissue here. \nhttp://www.postgresql.org/docs/8.1/static/sql-syntax.html#SQL-PRECEDENCE\nMaybe that is what you really want (just guessing):\n\nSELECT DISTINCT e.id, o.id, text('organization') AS objectClass, e.dn\n FROM ldap_entries e\n JOIN organization o ON o.id=e.keyval\n LEFT JOIN ldap_entry_objclasses eo ON eo.entry_id=e.id\n WHERE e.oc_map_id=1\n AND e.dn ILIKE '%DC=HUMANOMED,DC=AT'\n OR eo.oc_name='organization)';\n\nI didn't take the time to read the rest. My appologies if I guessed wrong.\n\n\nRegards, Erwin\n", "msg_date": "Mon, 29 May 2006 00:38:55 +0200", "msg_from": "Erwin Brandstetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "> I'm executing the queries from phpPgAdmin. \n> The above are for explain analyse. I was referring to the pure query\n> execution time.\n> Does anyone have an idea why the OR-query takes so long?\n> Any server-side tuning possibilities? I wouldn't like to change the code of\n> ldap's back-sql...\n\nIf you're using phpPgAdmin's timings, they could be more off than the \nreal explain analyze timings. Make sure you're using the figure given \nby explain analyze itself.\n\nChris\n\n", "msg_date": "Wed, 31 May 2006 09:07:34 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" } ]
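Erwin's precedence point is the likely culprit in the slow variant: AND binds more tightly than OR, so the original WHERE clause groups differently than its layout suggests. A tiny self-contained illustration of the grouping rule, using only boolean literals so it makes no assumptions about the schema:

```sql
-- AND binds tighter than OR, so these two are not equivalent:
SELECT false AND false OR true;    -- true:  parsed as (false AND false) OR true
SELECT false AND (false OR true);  -- false: explicit parentheses change the grouping
```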
[ { "msg_contents": "that worked like a champ nice call as always! \n\nthanks\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, May 22, 2006 7:07 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] slow query using sub select \n\n\"Tim Jones\" <[email protected]> writes:\n> I am having a problem with a sub select query being kinda slow. The\n\n> query is as follows:\n \n> select batterycode, batterydescription, observationdate from Battery \n> t1 where patientidentifier=611802158 and observationdate = (select\n> max(observationdate) from Battery t2 where \n> t2.batterycode=t1.batterycode and patientidentifier=611802158) order\nby batterydescription.\n\nYeah, this is essentially impossible for the planner to optimize,\nbecause it doesn't see any way to de-correlate the subselect, so it does\nit over again for every row. You might find it works better if you cast\nthe thing as a SELECT DISTINCT ON problem (look at the \"weather report\"\nexample in the SELECT reference page).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 May 2006 09:26:37 -0400", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query using sub select " } ]
[ { "msg_contents": "All,\n\nI might be completely crazy here, but it seems every other database \nexposes select query stats. Postgres only exposes updates/deletes/ \ninserts. Is there something I am missing here?\n\nBest Regards,\nDan Gorman\n\n", "msg_date": "Tue, 23 May 2006 10:40:01 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Selects query stats?" }, { "msg_contents": "Dan Gorman wrote:\n> All,\n> \n> I might be completely crazy here, but it seems every other database \n> exposes select query stats. Postgres only exposes \n> updates/deletes/inserts. Is there something I am missing here?\n\nPerhaps.\n\nYou can EXPLAIN ANALYZE a SELECT, just like i/u/d -- but then you\ndon't get the normal result set back. Is that what you mean?\n\nYou can turn on log_min_duration_statement and get total SELECT duration\nlogged.\n\nThere's a thread in pgsql-hackers (\"Re: Porting MSSQL to PGSQL: trace and \nprofile\") about server-side logging of query plans and stats (for all four of \ns/i/u/d), which is indeed not there in PG.\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Tue, 23 May 2006 11:15:10 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "What I am looking for is that our DB is doing X selects a min.\n\nTurning on logging isn't an option as it will create too much IO in \nour enviornment.\n\nRegards,\nDan Gorman\n\nOn May 23, 2006, at 11:15 AM, Mischa Sandberg wrote:\n\n> Dan Gorman wrote:\n>> All,\n>> I might be completely crazy here, but it seems every other \n>> database exposes select query stats. Postgres only exposes updates/ \n>> deletes/inserts. Is there something I am missing here?\n>\n> Perhaps.\n>\n> You can EXPLAIN ANALYZE a SELECT, just like i/u/d -- but then you\n> don't get the normal result set back. Is that what you mean?\n>\n> You can turn on log_min_duration_statement and get total SELECT \n> duration\n> logged.\n>\n> There's a thread in pgsql-hackers (\"Re: Porting MSSQL to PGSQL: \n> trace and profile\") about server-side logging of query plans and \n> stats (for all four of s/i/u/d), which is indeed not there in PG.\n>\n> -- \n> Engineers think that equations approximate reality.\n> Physicists think that reality approximates the equations.\n> Mathematicians never make the connection.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n", "msg_date": "Tue, 23 May 2006 11:15:10 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "Dan Gorman wrote:\n> What I am looking for is that our DB is doing X selects a min.\n\nWhat specifically would you like to measure?\nDuration for specific queries?\nQueries in an app for which you have no source?\nThere may be a way to get what you want by other means ...\nDetails?\n\nI gather you cannot just time the app that's doing the selects,\nnor extract those selects and run them via psql and time them\non their own?\n\n>> Dan Gorman wrote:\n>>> All,\n>>> I might be completely crazy here, but it seems every other database \n>>> exposes select query stats. Postgres only exposes \n>>> updates/deletes/inserts. 
Is there something I am missing here?\n\n", "msg_date": "Tue, 23 May 2006 11:32:28 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "In any other DB (oracle, mysql) I know how many queries (selects) per \nsecond the database is executing. How do I get this\nnumber out of postgres?\n\nI have a perl script that can test this, but no way the db tells me \nhow fast it's going.\n\n(e.g. in oracle: select sum(executions) from v$sqlarea;)\n\nRegards,\nDan Gorman\n\n\n\n\nOn May 23, 2006, at 11:32 AM, Mischa Sandberg wrote:\n\n> Dan Gorman wrote:\n>> What I am looking for is that our DB is doing X selects a min.\n>\n> What specifically would you like to measure?\n> Duration for specific queries?\n> Queries in an app for which you have no source?\n> There may be a way to get what you want by other means ...\n> Details?\n>\n> I gather you cannot just time the app that's doing the selects,\n> nor extract those selects and run them via psql and time them\n> on their own?\n>\n>>> Dan Gorman wrote:\n>>>> All,\n>>>> I might be completely crazy here, but it seems every other \n>>>> database exposes select query stats. Postgres only exposes \n>>>> updates/deletes/inserts. Is there something I am missing here?\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\nIn any other DB (oracle, mysql) I know how many queries (selects) per second the database is executing. How do I get this number out of postgres?I have a perl script that can test this, but no way the db tells me how fast it's going.(e.g. in oracle: select sum(executions) from v$sqlarea;)Regards,Dan GormanOn May 23, 2006, at 11:32 AM, Mischa Sandberg wrote:Dan Gorman wrote: What I am looking for is that our DB is doing X selects a min. What specifically would you like to measure?Duration for specific queries?Queries in an app for which you have no source?There may be a way to get what you want by other means ...Details?I gather you cannot just time the app that's doing the selects,nor extract those selects and run them via psql and time themon their own? Dan Gorman wrote: All,I might be completely crazy here, but it seems every other database exposes select query stats. Postgres only exposes updates/deletes/inserts. Is there something I am missing here? ---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to      choose an index scan if your joining column's datatypes do not      match", "msg_date": "Tue, 23 May 2006 11:33:12 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "On Tue, May 23, 2006 at 11:33:12AM -0700, Dan Gorman wrote:\n> In any other DB (oracle, mysql) I know how many queries (selects) per \n> second the database is executing. How do I get this\n> number out of postgres?\n\nYou can't. You also can't know how many DML statements were executed\n(though you can see how many tuples were inserted/updated/deleted), or\nhow many transactions have occured (well, you can hack the last one, but\nit's a bit of a mess).\n\nIt would be nice if all of this was available.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 23 May 2006 13:41:00 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "On Tue, 2006-05-23 at 11:33 -0700, Dan Gorman wrote:\n> In any other DB (oracle, mysql) I know how many queries (selects) per\n> second the database is executing. How do I get this \n> number out of postgres?\n> \n> \n> I have a perl script that can test this, but no way the db tells me\n> how fast it's going.\n> \n> \n> (e.g. in oracle: select sum(executions) from v$sqlarea;)\n\nThe Oracle query you show doesn't do that either. It tells you how many\nstatements have been executed since startup, not per second.\n\nThe main problem with what you ask is it only seems to have value. If\nthe value dips for some reason, you have no way of knowing whether that\noccurred because the arrival rate dropped off, there is a system problem\nor whether statements just happened to access more data over that time\nperiod. You can collect information that would allow you to understand\nwhat is happening on your system and summarise that as you choose.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 23 May 2006 19:51:12 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "Yeah, I'm not really concerned about the app or sys performance, just \na basic question of how do I get the rate of selects that are being \nexecuted.\n\nIn a previous post from Jim, he noted it cannot be done. I am very \nsurprised postgres can't do this basic functionality. Does anyone \nknow if the postgres team is working on this?\n\n(btw, I pasted in the wrong oracle query lol - but it can be done in \nmysql and oracle)\n\nBest Regards,\nDan Gorman\n\nOn May 23, 2006, at 11:51 AM, Simon Riggs wrote:\n\n> On Tue, 2006-05-23 at 11:33 -0700, Dan Gorman wrote:\n>> In any other DB (oracle, mysql) I know how many queries (selects) per\n>> second the database is executing. How do I get this\n>> number out of postgres?\n>>\n>>\n>> I have a perl script that can test this, but no way the db tells me\n>> how fast it's going.\n>>\n>>\n>> (e.g. in oracle: select sum(executions) from v$sqlarea;)\n>\n> The Oracle query you show doesn't do that either. It tells you how \n> many\n> statements have been executed since startup, not per second.\n>\n> The main problem with what you ask is it only seems to have value. If\n> the value dips for some reason, you have no way of knowing whether \n> that\n> occurred because the arrival rate dropped off, there is a system \n> problem\n> or whether statements just happened to access more data over that time\n> period. You can collect information that would allow you to understand\n> what is happening on your system and summarise that as you choose.\n>\n> -- \n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n", "msg_date": "Tue, 23 May 2006 12:08:05 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, May 23, 2006 at 11:33:12AM -0700, Dan Gorman wrote:\n>> In any other DB (oracle, mysql) I know how many queries (selects) per \n>> second the database is executing. 
How do I get this\n>> number out of postgres?\n\n> You can't. You also can't know how many DML statements were executed\n> (though you can see how many tuples were inserted/updated/deleted), or\n> how many transactions have occured (well, you can hack the last one, but\n> it's a bit of a mess).\n\nHack? We do count commits and rollbacks (see pg_stat_database); doesn't\nseem that hacky to me.\n\nCounting individual statements would add overhead (which the OP already\ndeclared unacceptable) and there are some definitional issues too, like\nwhether to count statements executed within functions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 May 2006 15:13:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats? " }, { "msg_contents": "Tom Lane wrote:\n\n> Counting individual statements would add overhead (which the OP already\n> declared unacceptable) and there are some definitional issues too, like\n> whether to count statements executed within functions.\n\nYeah, the problem seems underspecified. How do you count statements\nadded or removed by rewrite rules? Statements executed to answer RI\nqueries? Do you count the statements issued by clients as part of the\nstartup sequence? The hypothetical \"reset session\" of a connection pool\nhandler? How do you count 2PC -- when they are executed, or when they\nare committed? What happens to statements in transactions that are\nrolled back? What happens to a statement that is executed partially\nbecause it failed partway (e.g. because of division by zero)?\n\n\nOTOH ISTM it would be easy to modify Postgres so as to count statements\nin the stat collector, by turning pgstat_report_activity into a routine\nthat sent a count (presumably always 1) instead of the query string, and\nthen just add the count to a counter on receiving.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 23 May 2006 15:50:01 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> OTOH ISTM it would be easy to modify Postgres so as to count statements\n> in the stat collector, by turning pgstat_report_activity into a routine\n> that sent a count (presumably always 1) instead of the query string, and\n> then just add the count to a counter on receiving.\n\nYou wouldn't have to change the backends at all, just modify the\ncollector to count the number of report_activity messages received.\nMight have to play some games with ignoring \"<IDLE>\" messages, but\notherwise simple (and simplistic...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 May 2006 15:55:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats? " }, { "msg_contents": "On Tue, May 23, 2006 at 03:50:01PM -0400, Alvaro Herrera wrote:\n> Tom Lane wrote:\n> \n> > Counting individual statements would add overhead (which the OP already\n> > declared unacceptable) and there are some definitional issues too, like\n> > whether to count statements executed within functions.\n> \n> Yeah, the problem seems underspecified. How do you count statements\n> added or removed by rewrite rules? Statements executed to answer RI\n> queries? Do you count the statements issued by clients as part of the\n> startup sequence? The hypothetical \"reset session\" of a connection pool\n> handler? 
How do you count 2PC -- when they are executed, or when they\n> are committed? What happens to statements in transactions that are\n> rolled back? What happens to a statement that is executed partially\n> because it failed partway (e.g. because of division by zero)?\n> \n> \n> OTOH ISTM it would be easy to modify Postgres so as to count statements\n> in the stat collector, by turning pgstat_report_activity into a routine\n> that sent a count (presumably always 1) instead of the query string, and\n> then just add the count to a counter on receiving.\n\nYeah, I doubt any other database gets mired neck-deep in exact details\nof statment execution counts; a simple count of queries executed via a\nclient connection would be a great start.\n\nI often run into situations where people are having a performance issue\nbecause they're building web pages that make 50 queries to the database.\nBeing able to identify that and determine how many were selects vs. DML\nwould be useful.\n\nBonus points if there are seperate counters for statements from\nfunctions.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 23 May 2006 15:02:48 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "On 5/23/06, Dan Gorman <[email protected]> wrote:\n> What I am looking for is that our DB is doing X selects a min.\n\nIf you're using 7.4, you can use log_duration to only log duration. It\nwon't log all the query text, only one short line per query. Then you\ncan use pgFouine to analyze this and having a graph such like that\nhttp://pgfouine.projects.postgresql.org/reports/sample_hourly.html .\nIf you only log duration, you won't be able to separate\ninsert/delete/update from select though. So it can be interesting only\nif they negligible.\n\nNote that this is not possible in 8.x. You'll have to log the\nstatement to log the duration. I proposed a patch but it was refused\nas it complexified the log configuration.\n\n> Turning on logging isn't an option as it will create too much IO in\n> our enviornment.\n\nWhat we do here is logging on another machine via the network using\nsyslog. From our experience, it's not the fact to log that really\nslows down the db but the generated I/O load. So if you do that, you\nshould be able to log the statements without slowing down your\ndatabase too much.\n\nOn our production databases, we keep the log running all the time and\nwe generate reports daily.\n\nRegards,\n\n--\nGuillaume\n", "msg_date": "Tue, 23 May 2006 22:04:36 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" 
}, { "msg_contents": "On Tue, 2006-05-23 at 15:55 -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > OTOH ISTM it would be easy to modify Postgres so as to count statements\n> > in the stat collector, by turning pgstat_report_activity into a routine\n> > that sent a count (presumably always 1) instead of the query string, and\n> > then just add the count to a counter on receiving.\n> \n> You wouldn't have to change the backends at all, just modify the\n> collector to count the number of report_activity messages received.\n> Might have to play some games with ignoring \"<IDLE>\" messages, but\n> otherwise simple (and simplistic...)\n\nThe OP wanted statements/sec rather than just a total.\n\nHaving stats logged by time would be very useful, but I wouldn't limit\nthat just to numbers of statements in each time period.\n\nstats_logging_interval = 60 by default, 0 to disable, range 5-3600\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 23 May 2006 22:47:16 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "Alvaro Herrera wrote:\n> Yeah, the problem seems underspecified.\n\nSo, Dan, the question is, what are you trying to measure?\nThis might be a statistic that management has always been given,\nfor Oracle, and you need to produce the \"same\" number for PostgreSQL.\n\nIf not, it's hard to figure out what a statement counter actually can measure,\nto the extent that you can say, \"If that number does THIS, I should do THAT.\"\n\n-- \nEngineers think that equations approximate reality.\nPhysicists think that reality approximates the equations.\nMathematicians never make the connection.\n", "msg_date": "Tue, 23 May 2006 15:18:49 -0700", "msg_from": "Mischa Sandberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "On 5/23/06, Dan Gorman <[email protected]> wrote:\n>\n> In any other DB (oracle, mysql) I know how many queries (selects) per second\n> the database is executing. How do I get this\n> number out of postgres?\n\nMysql does AFAIR only count the number of queries and then uses the\n\"seconds since startup\" to estimate the number of queries per second.\nIf your server is hammered with queries 1 hour a day it's not giving\nyou a fair result.\n\n\n-- \n regards,\n Robin\n", "msg_date": "Wed, 24 May 2006 12:27:41 +0200", "msg_from": "\"Robin Ericsson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "On Wed, May 24, 2006 at 12:27:41PM +0200, Robin Ericsson wrote:\n> On 5/23/06, Dan Gorman <[email protected]> wrote:\n> >\n> >In any other DB (oracle, mysql) I know how many queries (selects) per \n> >second\n> >the database is executing. How do I get this\n> >number out of postgres?\n> \n> Mysql does AFAIR only count the number of queries and then uses the\n> \"seconds since startup\" to estimate the number of queries per second.\n> If your server is hammered with queries 1 hour a day it's not giving\n> you a fair result.\n\nSomehow that doesn't surprise me...\n\nIn any case, if we at least provide a raw counter, it's not that hard to\nturn that into selects per second over some period of time.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 24 May 2006 16:33:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "I am not sure if this is what the original poster was refering to, but I \nhave used an application called mtop that shows how many queries per second \nmysql is doing.\n\nIn my case this is helpfull because we have a number of machines running \npostfix and each incoming mail generates about 7 queries. Queries are all \nvery simmilar to each other in that scenario.\n\nHaving access to that query/second stat allowed me to improve the \nsettings in MysQL. Ultimately once we migrated to a new server I could see \nhow moving to the new machine increased the speed at which we could accept \nemails.\n\nI am, little by little, getting PostgreSQL to be used, but for now the \npostfix queries are tied to MySQL. By the time we hopefully do move to \nPostgreSQL for the Postfix backend it will be very helpfull to have some \nsort of way to measure queries/time period. \n\n", "msg_date": "Mon, 29 May 2006 23:52:35 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" }, { "msg_contents": "try pgtop. It is mytop clone for postgresql.\n\nRegards,\nalvis\n\nFrancisco Reyes wrote:\n> I am not sure if this is what the original poster was refering to, but I \n> have used an application called mtop that shows how many queries per \n> second mysql is doing.\n> \n> In my case this is helpfull because we have a number of machines running \n> postfix and each incoming mail generates about 7 queries. Queries are \n> all very simmilar to each other in that scenario.\n> \n> Having access to that query/second stat allowed me to improve the \n> settings in MysQL. Ultimately once we migrated to a new server I could \n> see how moving to the new machine increased the speed at which we could \n> accept emails.\n> \n> I am, little by little, getting PostgreSQL to be used, but for now the \n> postfix queries are tied to MySQL. By the time we hopefully do move to \n> PostgreSQL for the Postfix backend it will be very helpfull to have some \n> sort of way to measure queries/time period.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 30 May 2006 13:05:53 +0000", "msg_from": "Alvis Tunkelis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selects query stats?" } ]
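The workaround this thread converges on, sampling a raw counter and turning the difference into a rate, can be approximated today from the statistics collector, though only at transaction granularity. A minimal sketch, assuming the stats collector is running and that transactions per second are an acceptable stand-in for queries per second:

    SELECT datname, xact_commit, xact_rollback
    FROM pg_stat_database
    WHERE datname = current_database();
    -- run the same query again N seconds later;
    -- (new xact_commit - old xact_commit) / N gives committed transactions per second

This counts transactions rather than individual SELECTs and cannot separate reads from writes, which is exactly the gap the per-statement counter discussed above would fill.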
[ { "msg_contents": "Hi I'm a new postgresql user. I wrote ACO (ant colony optimazition) and \nwant to replace it with GEQO in postres/src/backend/optimizer but I don't know how \nto compile and run the source code :(\n \n I installed postgresql-8.1.3 and cygwin but I can not use them to \ncompile the source code. I want to compare GEQO and ACO optimizers performance using a small database\n \n Can you help me???????\n\n\t\t\n---------------------------------\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1&cent;/min.\nHi I'm a new postgresql user. I wrote ACO (ant colony optimazition) and want to replace it with GEQO in postres/src/backend/optimizer but I don't know how to compile and run the source code :(     I installed postgresql-8.1.3 and cygwin but I can not use them to compile the source code. I want to compare GEQO and ACO optimizers performance using a small database     Can you help me???????\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1¢/min.", "msg_date": "Wed, 24 May 2006 01:44:43 -0700 (PDT)", "msg_from": "sibel karaasma <[email protected]>", "msg_from_op": true, "msg_subject": "compiling source code!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" } ]
[ { "msg_contents": "\n\n\n\n\nI want to optimize this simple join:\n\nSELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )\n\nhuge_table has about 2.5 million records, can be assumed as fixed, and\nhas the following index:\n\nCREATE INDEX huge_table_index ON huge_table( UPPER( id ) );\n\n...while tiny_table changes with each user request, and typically will\ncontain on the order of 100-1000 records. For this analysis, I put\n300 records in tiny_table, resulting in 505 records in the join.\n\nI tried several approaches. In order of increasing speed of\nexecution:\n\n1. executed as shown above, with enable_seqscan on: about 100 s.\n\n2. executed as shown above, with enable_seqscan off: about 10 s.\n\n3. executed with a LIMIT 6000 clause added to the SELECT statement, and\n enable_seqscan on: about 5 s.\n\n4. executed with a LIMIT 600 clause added to the SELECT statement, and\n enable_seqscan on: less than 1 s.\n\n\n\nClearly, using LIMIT is the way to go. Unfortunately I *do* want all\nthe records that would have been produced without the LIMIT clause,\nand I don't have a formula for the limit that will guarantee this. I\ncould use a very large value (e.g. 20x the size of tiny_table, as in\napproach 3 above) which would make the probability of hitting the\nlimit very small, but unfortunately, the query plan in this case is\ndifferent from the query plan when the limit is just above the\nexpected number of results (approach 4 above).\n\nThe query plan for the fastest approach is this:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Limit (cost=0.01..2338.75 rows=600 width=84)\n -> Nested Loop (cost=0.01..14766453.89 rows=3788315 width=84)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n -> Index Scan using huge_table_index on huge_table h (cost=0.01..48871.80 rows=12628 width=46)\n Index Cond: (upper((\"outer\".id)::text) = upper((h.id)::text))\n\n\n\nHow can I *force* this query plan even with a higher limit value?\n\nI found, by dumb trial and error, that in this case the switch happens\nat LIMIT 5432, which, FWIW, is about 0.2% of the size of huge_table.\nIs there a simpler way to determine this limit (hopefully\nprogrammatically)?\n\n\nAlternatively, I could compute the value for LIMIT as 2x the number of\nrecords in tiny_table, and if the number of records found is *exactly*\nthis number, I would know that (most likely) some records were left\nout. In this case, I could use the fact that, according to the query\nplan above, the scan of tiny_table is sequential to infer which\nrecords in tiny_table were disregarded when the limit was reached, and\nthen repeat the query with only these left over records in tiny_table.\n\nWhat's your opinion of this strategy? 
Is there a good way to improve\nit?\n\nMany thanks in advance!\n\nkj\n\nPS: FWIW, the query plan for the query with LIMIT 6000 is this:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Limit (cost=19676.75..21327.99 rows=6000 width=84)\n -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n", "msg_date": "Wed, 24 May 2006 11:49:52 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing a huge_table/tiny_table join" } ]
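For what it is worth, approach 2 above (enable_seqscan off) can be confined to the single query so the rest of the session keeps normal planning. A sketch, using the table and index definitions quoted in the message:

    BEGIN;
    SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT or ROLLBACK
    SELECT *
    FROM huge_table h, tiny_table t
    WHERE upper(h.id) = upper(t.id);
    COMMIT;

This avoids guessing a safe LIMIT entirely, although by the timings reported above it was still roughly an order of magnitude slower than the nested-loop plan the small LIMIT produced.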
[ { "msg_contents": "\n\n\n\n[ I had a problem with my mailer when I first sent this. My apologies\n for any repeats. ]\n\n\nI want to optimize this simple join:\n\nSELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )\n\nhuge_table has about 2.5 million records, can be assumed as fixed, and\nhas the following index:\n\nCREATE INDEX huge_table_index ON huge_table( UPPER( id ) );\n\n...while tiny_table changes with each user request, and typically will\ncontain on the order of 100-1000 records. For this analysis, I put\n300 records in tiny_table, resulting in 505 records in the join.\n\nI tried several approaches. In order of increasing speed of\nexecution:\n\n1. executed as shown above, with enable_seqscan on: about 100 s.\n\n2. executed as shown above, with enable_seqscan off: about 10 s.\n\n3. executed with a LIMIT 6000 clause added to the SELECT statement, and\n enable_seqscan on: about 5 s.\n\n4. executed with a LIMIT 600 clause added to the SELECT statement, and\n enable_seqscan on: less than 1 s.\n\n\n\nClearly, using LIMIT is the way to go. Unfortunately I *do* want all\nthe records that would have been produced without the LIMIT clause,\nand I don't have a formula for the limit that will guarantee this. I\ncould use a very large value (e.g. 20x the size of tiny_table, as in\napproach 3 above) which would make the probability of hitting the\nlimit very small, but unfortunately, the query plan in this case is\ndifferent from the query plan when the limit is just above the\nexpected number of results (approach 4 above).\n\nThe query plan for the fastest approach is this:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Limit (cost=0.01..2338.75 rows=600 width=84)\n -> Nested Loop (cost=0.01..14766453.89 rows=3788315 width=84)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n -> Index Scan using huge_table_index on huge_table h (cost=0.01..48871.80 rows=12628 width=46)\n Index Cond: (upper((\"outer\".id)::text) = upper((h.id)::text))\n\n\n\nHow can I *force* this query plan even with a higher limit value?\n\nI found, by dumb trial and error, that in this case the switch happens\nat LIMIT 5432, which, FWIW, is about 0.2% of the size of huge_table.\nIs there a simpler way to determine this limit (hopefully\nprogrammatically)?\n\n\nAlternatively, I could compute the value for LIMIT as 2x the number of\nrecords in tiny_table, and if the number of records found is *exactly*\nthis number, I would know that (most likely) some records were left\nout. In this case, I could use the fact that, according to the query\nplan above, the scan of tiny_table is sequential to infer which\nrecords in tiny_table were disregarded when the limit was reached, and\nthen repeat the query with only these left over records in tiny_table.\n\nWhat's your opinion of this strategy? 
Is there a good way to improve\nit?\n\nMany thanks in advance!\n\nkj\n\nPS: FWIW, the query plan for the query with LIMIT 6000 is this:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Limit (cost=19676.75..21327.99 rows=6000 width=84)\n -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n", "msg_date": "Wed, 24 May 2006 13:42:46 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing a huge_table/tiny_table join" }, { "msg_contents": "\n> SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = \n> UPPER( t.id )\n\n\tWhat about :\n\nSELECT * FROM huge_table h WHERE UPPER(id) IN (SELECT upper(id) FROM \ntiny_table t)\n\n\tOr, try opening a cursor on your original query and using FETCH. It might \nresult in a different plan.\n\tOr lower random_page_cost.\n", "msg_date": "Wed, 31 May 2006 00:37:03 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" } ]
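PFC's cursor suggestion, sketched out; the cursor name and fetch size are made up. The planner optimizes a cursor for fast startup rather than lowest total cost, so it tends to choose the same index-driven nested loop a small LIMIT does, but without ever truncating the result set:

    BEGIN;
    DECLARE join_cur CURSOR FOR
        SELECT * FROM huge_table h, tiny_table t
        WHERE upper(h.id) = upper(t.id);
    FETCH 1000 FROM join_cur;  -- repeat until fewer than 1000 rows come back
    CLOSE join_cur;
    COMMIT;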
[ { "msg_contents": "I have a system that currently inserts ~ 250 million rows per day (I \nhave about 10k more raw data than that, but I'm at the limit of my \nability to get useful insert performance out of postgres).\n\nThings I've already done that have made a big difference:\n- modified postgresql.conf shared_buffers value\n- converted to COPY from individual insert statements\n- changed BLCKSZ to 32768\n\nI currently get ~35k/sec inserts on a table with one index (~70k/sec \ninserts if I don't have any indexes).\n\nThe indexed field is basically a time_t (seconds since the epoch), \nautovacuum is running (or postgres would stop choosing to use the \nindex). The other fields have relatively lower cardinality.\n\nEach days worth of data gets inserted into its own table so that I \ncan expire the data without too much effort (since drop table is much \nfaster than running a delete and then vacuum).\n\nI would really like to be able to have 1 (or 2) more indexes on the \ntable since it takes a while for a sequential scan of 250million rows \nto complete, but CPU time goes way up.\n\nIn fact, it looks like I'm not currently IO bound, but CPU-bound. I \nthink some sort of lazy-index generation (especially if it could be \nparallelized to use the other processors/cores that currently sit \nmostly idle) would be a solution. Is anyone working on something like \nthis? Any other ideas? Where should I look if I want to start to \nthink about creating a new index that would work this way (or am I \njust crazy)?\n\nThanks for any insight!\n\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 15:45:17 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": true, "msg_subject": "Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "\nIf you can live with possible database corruption, you could try turning\nFsync off. For example if you could just reinsert the data on the off\nchance a hardware failure corrupts the database, you might get a decent\nimprovement.\n\nAlso have you tried creating the index after you have inserted all your\ndata? (Or maybe copy already disables the indexes while inserting?)\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Daniel J. Luke\n> Sent: Wednesday, May 24, 2006 2:45 PM\n> To: [email protected]\n> Subject: [PERFORM] Getting even more insert performance \n> (250m+rows/day)\n> \n> \n> I have a system that currently inserts ~ 250 million rows per day (I \n> have about 10k more raw data than that, but I'm at the limit of my \n> ability to get useful insert performance out of postgres).\n> \n> Things I've already done that have made a big difference:\n> - modified postgresql.conf shared_buffers value\n> - converted to COPY from individual insert statements\n> - changed BLCKSZ to 32768\n> \n> I currently get ~35k/sec inserts on a table with one index (~70k/sec \n> inserts if I don't have any indexes).\n> \n> The indexed field is basically a time_t (seconds since the epoch), \n> autovacuum is running (or postgres would stop choosing to use the \n> index). 
The other fields have relatively lower cardinality.\n> \n> Each days worth of data gets inserted into its own table so that I \n> can expire the data without too much effort (since drop table \n> is much \n> faster than running a delete and then vacuum).\n> \n> I would really like to be able to have 1 (or 2) more indexes on the \n> table since it takes a while for a sequential scan of \n> 250million rows \n> to complete, but CPU time goes way up.\n> \n> In fact, it looks like I'm not currently IO bound, but CPU-bound. I \n> think some sort of lazy-index generation (especially if it could be \n> parallelized to use the other processors/cores that currently sit \n> mostly idle) would be a solution. Is anyone working on \n> something like \n> this? Any other ideas? Where should I look if I want to start to \n> think about creating a new index that would work this way (or am I \n> just crazy)?\n> \n> Thanks for any insight!\n> \n> --\n> Daniel J. Luke\n> +========================================================+\n> | *---------------- [email protected] ----------------* |\n> | *-------------- http://www.geeklair.net -------------* |\n> +========================================================+\n> | Opinions expressed are mine and do not necessarily |\n> | reflect the opinions of my employer. |\n> +========================================================+\n> \n> \n> \n\n", "msg_date": "Wed, 24 May 2006 15:02:44 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On Wed, May 24, 2006 at 03:45:17PM -0400, Daniel J. Luke wrote:\n> Things I've already done that have made a big difference:\n> - modified postgresql.conf shared_buffers value\n> - converted to COPY from individual insert statements\n> - changed BLCKSZ to 32768\n\nHave you tried fiddling with the checkpointing settings? Check your logs --\nif you get a warning about checkpoints being too close together, that should\ngive you quite some boost.\n\nApart from that, you should have quite a bit to go on -- somebody on this\nlist reported 2 billion rows/day earlier, but it might have been on beefier\nhardware, of course. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 24 May 2006 22:03:49 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 24, 2006, at 4:02 PM, Dave Dutcher wrote:\n> If you can live with possible database corruption, you could try \n> turning\n> Fsync off. For example if you could just reinsert the data on the off\n> chance a hardware failure corrupts the database, you might get a \n> decent\n> improvement.\n\nI tried, but I didn't see much of an improvement (and it's not really \nacceptable for this application).\n\n> Also have you tried creating the index after you have inserted all \n> your\n> data? (Or maybe copy already disables the indexes while inserting?)\n\nThe data gets inserted in batches every 5 minutes and I potentially \nhave people querying it constantly, so I can't remove and re-create \nthe index.\n\n--\nDaniel J. 
Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 16:08:06 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote:\n> Have you tried fiddling with the checkpointing settings? Check your \n> logs --\n> if you get a warning about checkpoints being too close together, \n> that should\n> give you quite some boost.\n\nno warnings in the log (I did change the checkpoint settings when I \nset up the database, but didn't notice an appreciable difference in \ninsert performance).\n\n> Apart from that, you should have quite a bit to go on -- somebody \n> on this\n> list reported 2 billion rows/day earlier, but it might have been on \n> beefier\n> hardware, of course. :-)\n\nProbably :) I'll keep searching the list archives and see if I find \nanything else (I did some searching and didn't find anything that I \nhadn't already tried).\n\nThanks!\n\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 16:09:54 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:\n> no warnings in the log (I did change the checkpoint settings when I \n> set up the database, but didn't notice an appreciable difference in \n> insert performance).\n\nHow about wal_buffers? Upping it might not help all that much if only one\nthread is writing, but you might give it a try...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 24 May 2006 22:13:50 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "> The data gets inserted in batches every 5 minutes and I potentially \n> have people querying it constantly, so I can't remove and re-create \n> the index.\n\nHow live does your data need to be? One possibility would be to use a\nseparate table for each batch instead of a separate table per day,\ncreate the indexes after the import and only after the indexes have been\ncreated make the table available for user queries.\n\nYou'd be trading latency for throughput in that case.\n\nAlso, you mentioned that you're CPU-bound, but that you have multiple\nCPU's. 
In that case, performing N concurrent imports (where N is the\nnumber of processor cores available) might be a win over a single-\nthreaded import.\n\n-- Mark Lewis\n", "msg_date": "Wed, 24 May 2006 13:18:32 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote:\n> On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:\n>> no warnings in the log (I did change the checkpoint settings when I\n>> set up the database, but didn't notice an appreciable difference in\n>> insert performance).\n>\n> How about wal_buffers? Upping it might not help all that much if \n> only one\n> thread is writing, but you might give it a try...\n\nI tried, but I didn't notice a difference.\n\nI should probably emphasize that I appear to be CPU bound (and I can \ndouble my # of rows inserted per second by removing the index on the \ntable, or half it by adding another index).\n\nI really should run gprof just to verify.\n\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 16:20:20 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:\n> On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote:\n> >Have you tried fiddling with the checkpointing settings? Check your \n> >logs --\n> >if you get a warning about checkpoints being too close together, \n> >that should\n> >give you quite some boost.\n> \n> no warnings in the log (I did change the checkpoint settings when I \n> set up the database, but didn't notice an appreciable difference in \n> insert performance).\n\nKeep in mind that the default warning time of 30 seconds is pretty\nconservative; you'd want to bump that up to 300 seconds or so, probably.\n\nAs for the suggestion of multiple insert runs at a time, I suspect that\nwould just result in a lot of contention for some mutex/semaphore in the\nindex.\n\nYour best bet really is to run gprof and post those results. It's also\npossible that this is fixed be a recent patch to HEAD that reduces the\namount of traffic on the index metapage, something gprof would probably\nconfirm.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 24 May 2006 16:32:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "We were able to achieve 2B (small) rows per day sustained with\nvery little latency. It is beefy hardware, but things that did\nhelp include WAL on its own I/O channel, XFS, binary copy,\nand tuning bgwriter and checkpoint settings for the application\nand hardware. Things that didn't help much were shared_buffers\nand wal_buffers. 
But our application is single-writer, and a\nsmall number of readers.\n\nAlthough there is tons of great advice in this and other forums,\nI think you just have to do a lot of experimentation with careful\nmeasurement to find what's right for your application/environment.\ni.e., YMMV.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Steinar H.\nGunderson\nSent: Wednesday, May 24, 2006 4:04 PM\nTo: Daniel J. Luke\nCc: [email protected]\nSubject: Re: [PERFORM] Getting even more insert performance\n(250m+rows/day)\n\n\nOn Wed, May 24, 2006 at 03:45:17PM -0400, Daniel J. Luke wrote:\n> Things I've already done that have made a big difference:\n> - modified postgresql.conf shared_buffers value\n> - converted to COPY from individual insert statements\n> - changed BLCKSZ to 32768\n\nHave you tried fiddling with the checkpointing settings? Check your logs --\nif you get a warning about checkpoints being too close together, that should\ngive you quite some boost.\n\nApart from that, you should have quite a bit to go on -- somebody on this\nlist reported 2 billion rows/day earlier, but it might have been on beefier\nhardware, of course. :-)\n\n/* Steinar */\n--\nHomepage: http://www.sesse.net/\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n", "msg_date": "Wed, 24 May 2006 23:20:24 -0400", "msg_from": "\"Ian Westmacott\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" } ]
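The per-day table scheme described at the top of this thread, written out as a sketch. The table and column names and types are invented for illustration, since the poster never shows the actual schema; the point is only the shape of the approach, one indexed table per day receiving the 5-minute COPY batches, expired later with DROP TABLE instead of DELETE plus VACUUM:

    CREATE TABLE flows_20060524 (
        capture_time integer NOT NULL,  -- the "basically a time_t" column
        src_addr     inet,
        dst_addr     inet,
        bytes        bigint
    );
    CREATE INDEX flows_20060524_time_idx ON flows_20060524 (capture_time);
    -- ... the 5-minute COPY batches land in the current day's table ...
    DROP TABLE flows_20060510;  -- expiry: dropping a whole day needs no DELETE or VACUUM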
[ { "msg_contents": "Daniel J. Luke wrote:\n> On May 24, 2006, at 4:02 PM, Dave Dutcher wrote:\n>> If you can live with possible database corruption, you could try\n>> turning Fsync off. For example if you could just reinsert the data\n>> on the off chance a hardware failure corrupts the database, you\n>> might get a decent improvement.\n> \n> I tried, but I didn't see much of an improvement (and it's not really\n> acceptable for this application).\n> \n>> Also have you tried creating the index after you have inserted all\n>> your data? (Or maybe copy already disables the indexes while\n>> inserting?) \n> \n> The data gets inserted in batches every 5 minutes and I potentially\n> have people querying it constantly, so I can't remove and re-create\n> the index.\n> \nare the batches single insert's, or within a big transaction?\n\nI.E., does the inserts look like:\nINSERT\nINSERT\nINSERT\n\nor\n\nBEGIN\nINSERT\nINSERT\nINSERT\nCOMMIT\n\nIf the former, the latter is a big win.\n\nAlso, what release(s) are you running?\n\nLER\n\n-- \nLarry Rosenman\t\t\nDatabase Support Engineer\n\nPERVASIVE SOFTWARE. INC.\n12365B RIATA TRACE PKWY\n3015\nAUSTIN TX 78727-6531 \n\nTel: 512.231.6173\nFax: 512.231.6597\nEmail: [email protected]\nWeb: www.pervasive.com \n", "msg_date": "Wed, 24 May 2006 15:12:26 -0500", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 24, 2006, at 4:12 PM, Larry Rosenman wrote:\n> are the batches single insert's, or within a big transaction?\n> If the former, the latter is a big win.\n\nOne big transaction every 5 minutes using 'COPY FROM' (instead of \ninserts).\n\n> Also, what release(s) are you running?\n\n8.1.x (I think we're upgrading from 8.1.3 to 8.1.4 today).\n\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 16:18:06 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "Hi, Daniel,\n\nDaniel J. Luke wrote:\n\n> One big transaction every 5 minutes using 'COPY FROM' (instead of \n> inserts).\n\nAre you using \"COPY table FROM '/path/to/file'\", having the file sitting\non the server, or \"COPY table FROM STDIN\" or psql \"/copy\", having the\nfile sitting on the client?\n\n From our tests, having the file on the server can speed up the things by\n factor 2 or 3 in some cases.\n\nAlso, using BINARY copy may give great benefits due to lower parsing\noverhead.\n\nAs you say you're I/O bound, spreading tables, indices, wal and input\nfile to different spindles won't help you much.\n\n\nHTH\nMarkus\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Mon, 29 May 2006 13:11:13 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 29, 2006, at 7:11 AM, Markus Schaber wrote:\n>> One big transaction every 5 minutes using 'COPY FROM' (instead of\n>> inserts).\n>\n> Are you using \"COPY table FROM '/path/to/file'\", having the file \n> sitting\n> on the server, or \"COPY table FROM STDIN\" or psql \"/copy\", having the\n> file sitting on the client?\n\nCOPY table FROM STDIN using psql on the server\n\nI should have gprof numbers on a similarly set up test machine soon ...\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Tue, 30 May 2006 15:59:19 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:\n> I should have gprof numbers on a similarly set up test machine \n> soon ...\n\ngprof output is available at http://geeklair.net/~dluke/ \npostgres_profiles/\n\n(generated from CVS HEAD as of today).\n\nAny ideas are welcome.\n\nThanks!\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Tue, 30 May 2006 18:03:11 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" } ]
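Markus's two suggestions above (a server-side file instead of STDIN, and binary format to cut parsing overhead) look roughly like this in SQL. The table name and file paths are placeholders, and COPY from a server-side file requires superuser rights plus a file readable by the postgres server process:

    BEGIN;
    COPY flows_20060530 FROM '/data/batches/batch_1535.copy';
    -- or, with a binary-format dump of the same rows:
    -- COPY flows_20060530 FROM '/data/batches/batch_1535.bin' WITH BINARY;
    COMMIT;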
[ { "msg_contents": "Daniel J. Luke wrote:\n> On May 24, 2006, at 4:12 PM, Larry Rosenman wrote:\n>> are the batches single insert's, or within a big transaction?\n>> If the former, the latter is a big win.\n> \n> One big transaction every 5 minutes using 'COPY FROM' (instead of\n> inserts).\n> \n>> Also, what release(s) are you running?\n> \n> 8.1.x (I think we're upgrading from 8.1.3 to 8.1.4 today).\n> \nHad to ask :) \n\nAlso, is pg_xlog on the same or different spindles from the rest of the\nPG Data directory?\n\nLER\n\n-- \nLarry Rosenman\t\t\nDatabase Support Engineer\n\nPERVASIVE SOFTWARE. INC.\n12365B RIATA TRACE PKWY\n3015\nAUSTIN TX 78727-6531 \n\nTel: 512.231.6173\nFax: 512.231.6597\nEmail: [email protected]\nWeb: www.pervasive.com \n", "msg_date": "Wed, 24 May 2006 15:24:20 -0500", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" }, { "msg_contents": "On May 24, 2006, at 4:24 PM, Larry Rosenman wrote:\n> Also, is pg_xlog on the same or different spindles from the rest of \n> the\n> PG Data directory?\n\nIt's sitting on the same disk array (but I'm doing 1 transaction \nevery 5 minutes, and I'm not near the array's sustained write \ncapacity, so I don't think that's currently limiting performance).\n\n--\nDaniel J. Luke\n+========================================================+\n| *---------------- [email protected] ----------------* |\n| *-------------- http://www.geeklair.net -------------* |\n+========================================================+\n| Opinions expressed are mine and do not necessarily |\n| reflect the opinions of my employer. |\n+========================================================+", "msg_date": "Wed, 24 May 2006 16:29:25 -0400", "msg_from": "\"Daniel J. Luke\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting even more insert performance (250m+rows/day)" } ]
[ { "msg_contents": "\n\n\n\nI want to optimize this simple join:\n\nSELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )\n\nhuge_table has about 2.5 million records, can be assumed as fixed, and\nhas the following index:\n\nCREATE INDEX huge_table_index ON huge_table( UPPER( id ) );\n\n...while tiny_table changes with each user request, and typically will\ncontain on the order of 100-1000 records. For this analysis, I put\n300 records in tiny_table, resulting in 505 records in the join.\n\nI tried several approaches. In order of increasing speed of\nexecution:\n\n1. executed as shown above, with enable_seqscan on: about 100 s.\n\n2. executed as shown above, with enable_seqscan off: about 10 s.\n\n3. executed with a LIMIT 6000 clause added to the SELECT statement, and\n enable_seqscan on: about 5 s.\n\n4. executed with a LIMIT 600 clause added to the SELECT statement, and\n enable_seqscan on: less than 1 s.\n\n\n\nClearly, using LIMIT is the way to go. Unfortunately I *do* want all\nthe records that would have been produced without the LIMIT clause,\nand I don't have a formula for the limit that will guarantee this. I\ncould use a very large value (e.g. 20x the size of tiny_table, as in\napproach 3 above) which would make the probability of hitting the\nlimit very small, but unfortunately, the query plan in this case is\ndifferent from the query plan when the limit is just above the\nexpected number of results (approach 4 above).\n\nThe query plan for the fastest approach is this:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Limit (cost=0.01..2338.75 rows=600 width=84)\n -> Nested Loop (cost=0.01..14766453.89 rows=3788315 width=84)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n -> Index Scan using huge_table_index on huge_table h (cost=0.01..48871.80 rows=12628 width=46)\n Index Cond: (upper((\"outer\".id)::text) = upper((h.id)::text))\n\n\n\nHow can I *force* this query plan even with a higher limit value?\n\nI found, by dumb trial and error, that in this case the switch happens\nat LIMIT 5432, which, FWIW, is about 0.2% of the size of huge_table.\nIs there a simpler way to determine this limit (hopefully\nprogrammatically)?\n\n\nAlternatively, I could compute the value for LIMIT as 2x the number of\nrecords in tiny_table, and if the number of records found is *exactly*\nthis number, I would know that (most likely) some records were left\nout. In this case, I could use the fact that, according to the query\nplan above, the scan of tiny_table is sequential to infer which\nrecords in tiny_table were disregarded when the limit was reached, and\nthen repeat the query with only these left over records in tiny_table.\n\nWhat's your opinion of this strategy? 
Is there a good way to improve\nit?\n\nMany thanks in advance!\n\nkj\n\nPS: FWIW, the query plan for the query with LIMIT 6000 is this:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Limit (cost=19676.75..21327.99 rows=6000 width=84)\n -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n\n------------=_1148485808-20617-3--\n\n", "msg_date": "Wed, 24 May 2006 20:52:53 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing a huge_table/tiny_table join" }, { "msg_contents": "\n> kj\n> \n> PS: FWIW, the query plan for the query with LIMIT 6000 is this:\n\nWhat is the explain analyze?\n\n> \n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Limit (cost=19676.75..21327.99 rows=6000 width=84)\n> -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n> Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n> -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n> -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n> -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n> \n> ------------=_1148485808-20617-3--\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n", "msg_date": "Wed, 24 May 2006 18:31:56 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" }, { "msg_contents": "<[email protected]> writes:\n> Limit (cost=19676.75..21327.99 rows=6000 width=84)\n> -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n> Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n> -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n> -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n> -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n\nUm, if huge_table is so much bigger than tiny_table, why are the cost\nestimates for seqscanning them only about 2.5x different? 
There's\nsomething wacko about your statistics, methinks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 May 2006 21:41:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join " }, { "msg_contents": "On 5/24/06, Tom Lane <[email protected]> wrote:\n>\n> <[email protected]> writes:\n> > Limit (cost=19676.75..21327.99 rows=6000 width=84)\n> > -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n> > Hash Cond: (upper((\"outer\".id)::text) =\n> upper((\"inner\".id)::text))\n> > -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543\n> width=46)\n> > -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n> > -> Seq Scan on tiny_table t (cost=0.00..19676.00rows=300 width=38)\n>\n> Um, if huge_table is so much bigger than tiny_table, why are the cost\n> estimates for seqscanning them only about 2.5x different? There's\n> something wacko about your statistics, methinks.\n\n\n\nYou mean there's a bug in explain? I agree that it makes no sense that the\ncosts don't differ as much as one would expect, but you can see right there\nthe numbers of rows for the two tables, which are exactly as I described.\nAt any rate, how would one go about finding an explanation for these strange\nstats?\n\nMore bewildering still (and infuriating as hell--because it means that all\nof my work for yesterday has been wasted) is that I can no longer reproduce\nthe best query plan, even though the tables have not changed at all. (Hence\nI can't post the explain analyze for the best query plan.) No matter what\nvalue I use for LIMIT, the query planner now insists on sequentially\nscanning huge_table and ignoring the available index. (If I turn off\nenable_seqscan, I get the second worst query plan I posted yesterday.)\n\nAnyway, I take it that there is no way to bypass the optimizer and instruct\nPostgreSQL exactly how one wants the search performed?\n\nThanks!\n\nkj\n\n\nOn 5/24/06, Tom Lane <[email protected]> wrote:\n<[email protected]> writes:>  Limit  (cost=19676.75..21327.99\n rows=6000 width=84)>    ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)>          Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))>          ->  Seq Scan on huge_table h  (cost=\n0.00..51292.43 rows=2525543 width=46)>          ->  Hash  (cost=19676.00..19676.00 rows=300 width=38)>                ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)Um, if huge_table is so much bigger than tiny_table, why are the cost\nestimates for seqscanning them only about 2.5x different?  There'ssomething wacko about your statistics, methinks.\n \n \nYou mean there's a bug in explain?  I agree that it makes no sense that the costs don't differ as much as one would expect, but you can see right there the numbers of rows for the two tables, which are exactly as I described.  At any rate, how would one go about finding an explanation for these strange stats?\n\n \nMore bewildering still (and infuriating as hell--because it means that all of my work for yesterday has been wasted) is that I can no longer reproduce the best query plan, even though the tables have not changed at all.  (Hence I can't post the explain analyze for the best query plan.)  No matter what value I use for LIMIT, the query planner now insists on sequentially scanning huge_table and ignoring the available index.  
(If I turn off enable_seqscan, I get the second worst query plan I posted yesterday.)\n\n \nAnyway, I take it that there is no way to bypass the optimizer and instruct PostgreSQL exactly how one wants the search performed?\n \nThanks!\n \nkj", "msg_date": "Thu, 25 May 2006 12:21:53 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" }, { "msg_contents": "Tom Lane wrote:\n> <[email protected]> writes:\n>> Limit (cost=19676.75..21327.99 rows=6000 width=84)\n>> -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n>> Hash Cond: (upper((\"outer\".id)::text) = upper((\"inner\".id)::text))\n>> -> Seq Scan on huge_table h (cost=0.00..51292.43 rows=2525543 width=46)\n>> -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n>> -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n> \n> Um, if huge_table is so much bigger than tiny_table, why are the cost\n> estimates for seqscanning them only about 2.5x different? There's\n> something wacko about your statistics, methinks.\n> \n\nThis suggests that tiny_table is very wide (i.e a lot of columns \ncompared to huge_table), or else has thousands of dead tuples.\n\nDo you want to post the descriptions for these tables?\n\nIf you are running 8.1.x, then the output of 'ANALYZE VERBOSE \ntiny_table' is of interest too.\n\nIf you are running a pre-8.1 release, then lets see 'VACUUM VERBOSE \ntiny_table'.\n\nNote that after either of these, your plans may be altered (as ANALYZE \nwill recompute your stats for tiny_table, and VACUUM may truncate pages \nfull of dead tuples at the end of it)!\n\n", "msg_date": "Fri, 26 May 2006 11:27:09 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" } ]
[ { "msg_contents": "Hi,\n\nI find this very helpful:\n\n Lowering the priority of a PostgreSQL query\n http://weblog.bignerdranch.com/?p=11\n\nNow I was wondering whether one could have a\n SELECT pg_setpriority(10);\nexecuted automatically each time a certain user\nconnects (not necessarily using psql)?\n\nAny ideas if and how this might be possible?\n\nRegards :)\nChris.\n\n\n", "msg_date": "Thu, 25 May 2006 18:16:24 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": true, "msg_subject": "lowering priority automatically at connection" }, { "msg_contents": "Chris Mair <[email protected]> writes:\n> I find this very helpful:\n> Lowering the priority of a PostgreSQL query\n> http://weblog.bignerdranch.com/?p=11\n\nThat guy doesn't actually have the foggiest idea what he's doing.\nThe reason there is no built-in capability to do that is that it *does\nnot work well*. Search the list archives for \"priority inversion\" to\nfind out why not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 May 2006 12:26:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lowering priority automatically at connection " }, { "msg_contents": "\n> > I find this very helpful:\n> > Lowering the priority of a PostgreSQL query\n> > http://weblog.bignerdranch.com/?p=11\n> \n> That guy doesn't actually have the foggiest idea what he's doing.\n> The reason there is no built-in capability to do that is that it *does\n> not work well*. Search the list archives for \"priority inversion\" to\n> find out why not.\n> \n> \t\t\tregards, tom lane\n\nOk,\nI've learned something new (*).\nI'll drop that idea :)\n\nBye,\nChris.\n\n(*) \nhttp://en.wikipedia.org/wiki/Priority_inversion\n\n\n\n", "msg_date": "Thu, 25 May 2006 18:35:28 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": true, "msg_subject": "Re: lowering priority automatically at connection" }, { "msg_contents": "On Thu, May 25, 2006 at 06:16:24PM +0200, Chris Mair wrote:\n> I find this very helpful:\n> \n> Lowering the priority of a PostgreSQL query\n> http://weblog.bignerdranch.com/?p=11\n> \n> Now I was wondering whether one could have a\n> SELECT pg_setpriority(10);\n> executed automatically each time a certain user\n> connects (not necessarily using psql)?\n\nBeware that setting priorities can have unintended, adverse effects.\nUse a search engine to find information about \"priority inversion\"\nbefore deciding that query priorities are a good idea.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 25 May 2006 10:54:56 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lowering priority automatically at connection" }, { "msg_contents": "> That guy doesn't actually have the foggiest idea what he's doing.\n> The reason there is no built-in capability to do that is that it *does\n> not work well*. Search the list archives for \"priority inversion\" to\n> find out why not.\n\nhttp://en.wikipedia.org/wiki/Priority_inversion\n\n", "msg_date": "Fri, 26 May 2006 09:32:56 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lowering priority automatically at connection" }, { "msg_contents": "Tom Lane wrote:\n> That guy doesn't actually have the foggiest idea what he's doing.\n> The reason there is no built-in capability to do that is that it *does\n> not work well*. 
Search the list archives for \"priority inversion\" to\n> find out why not.\n\nI agree that that particular author seems clueless, but better\nresearched papers do show benefits as well:\n\nThe CMU paper\n\"Priority Mechanisms for OLTP and Transactional Web Applications\" [1]\nstudied both TPC-C and TPC-W workloads on postgresql (as well as DB2).\nFor PostgreSQL they found that without priority inheritance they\nhad factor-of-2 benefits for high-priority transactions;\nand with priority inheritance they had factor-of-6 benefits\nfor high priority transactions -- both with negligible harm\nto the low priority transactions.\n\nUnless there's something wrong with that paper (and at first glance\nit looks like their methodologies apply at least to many workloads)\nit seems that \"it *does not work well*\" is a bit of a generalization;\nand that databases with TPC-C and TPC-W like workloads may indeed\nbe cases where this feature would be useful.\n\n[1] http://www.cs.cmu.edu/~harchol/Papers/actual-icde-submission.pdf\n\"\n ...This paper analyzes and proposes prioritization for\n transactional workloads in conventional DBMS...This paper\n provides a detailed resource utilization breakdown for\n OLTP workloads executing on a range of database platforms\n including IBM DB2[14], Shore[16], and PostgreSQL[17]....\n ...\n For DBMS using MVCC (with TPC-C or TPC-W workloads) and\n for TPC-W workloads (with any concurrency control mechanism),\n we find that lock scheduling is largely ineffective (even\n preemptive lock scheduling) and CPU scheduling is highly\n effective. For example, we find that for PostgreSQL\n running under TPC-C, the simplest CPU scheduling\n algorithm CPU-Prio provides a factor of 2 improvement\n for the high-priority transactions, and adding priority\n inheritance (CPU-Prio-Inherit) brings this up to a factor\n of near 6 improvement under high loads, while hardly\n penalizing low-priority transactions.\n\"\n\n Or am I missing something?\n Ron\n", "msg_date": "Tue, 06 Jun 2006 16:27:16 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lowering priority automatically at connection" }, { "msg_contents": "Hi, Chris,\n\nChris Mair wrote:\n\n> Now I was wondering whether one could have a\n> SELECT pg_setpriority(10);\n> executed automatically each time a certain user\n> connects (not necessarily using psql)?\n> \n> Any ideas if and how this might be possible?\n\nWhen using Java, most Datasource implementations (e. G. the JBoss one)\nallow to specify SQL statements that are executed on connection init.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 20 Jun 2006 17:12:16 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lowering priority automatically at connection" } ]
[ { "msg_contents": "\nOn 5/24/06, Tom Lane <[email protected]> wrote:\n>\n> <[email protected]> writes:\n> > Limit (cost=19676.75..21327.99 rows=6000 width=84)\n> > -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)\n> > Hash Cond: (upper((\"outer\".id)::text) upper((\"inner\".id)::text))\n> > -> Seq Scan on huge_table h (cost= 0.00..51292.43 rows=2525543 width=46)\n> > -> Hash (cost=19676.00..19676.00 rows=300 width=38)\n> > -> Seq Scan on tiny_table t (cost=0.00..19676.00 rows=300 width=38)\n>\n> Um, if huge_table is so much bigger than tiny_table, why are the cost\n> estimates for seqscanning them only about 2.5x different? There's\n> something wacko about your statistics, methinks.\n\n\n\nWell, they're not my statistics; they're explain's. You mean there's\na bug in explain? I agree that it makes no sense that the costs don't\ndiffer as much as one would expect, but you can see right there the\nnumbers of rows for the two tables. At any rate, how would one go\nabout finding an explanation for these strange stats?\n\nMore bewildering still (and infuriating as hell--because it means that\nall of my work for yesterday has been wasted) is that I can no longer\nreproduce the best query plan I posted earlier, even though the tables\nhave not changed at all. (Hence I can't post the explain analyze for\nthe best query plan, which Josh Drake asked for.) No matter what\nvalue I use for LIMIT, the query planner now insists on sequentially\nscanning huge_table and ignoring the available index. (If I turn off\nenable_seqscan, I get the second worst query plan I posted yesterday.)\n\nAnyway, I take it that there is no way to bypass the optimizer and\ninstruct PostgreSQL exactly how one wants the search performed?\n\nThanks!\n\nkj\n", "msg_date": "Thu, 25 May 2006 12:31:04 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" }, { "msg_contents": "On Thu, May 25, 2006 at 12:31:04PM -0400, [email protected] wrote:\n> Well, they're not my statistics; they're explain's. You mean there's\n\nExplain doesn't get them from nowhere. How often is the table being\nANALYSEd?\n\n> More bewildering still (and infuriating as hell--because it means that\n> all of my work for yesterday has been wasted) is that I can no longer\n> reproduce the best query plan I posted earlier, even though the tables\n> have not changed at all. (Hence I can't post the explain analyze for\n\nI find that very hard to believe. Didn't change _at all_? Are you\nsure no VACUUMs or anything are happening automatically?\n\n> Anyway, I take it that there is no way to bypass the optimizer and\n> instruct PostgreSQL exactly how one wants the search performed?\n\nNo, there isn't. \n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Thu, 25 May 2006 12:48:40 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" }, { "msg_contents": "On 5/25/06, [email protected] <[email protected]> wrote:\n> Well, they're not my statistics; they're explain's. You mean there's\n> a bug in explain? I agree that it makes no sense that the costs don't\n> differ as much as one would expect, but you can see right there the\n> numbers of rows for the two tables. 
At any rate, how would one go\n> about finding an explanation for these strange stats?\n\nWell, the query planner uses statistics to deduce the best plan\npossible. Explain includes this statistical data in its output.\nSee:\nhttp://www.postgresql.org/docs/8.1/interactive/planner-stats.html\n...for information about what it is all about.\n\nThe idea is that your statistics are probably not detailed enough\nto help the planner. See ALTER TABLE SET STATISTICS to change\nthat.\n\n> More bewildering still (and infuriating as hell--because it means that\n> all of my work for yesterday has been wasted) is that I can no longer\n> reproduce the best query plan I posted earlier, even though the tables\n> have not changed at all. (Hence I can't post the explain analyze for\n> the best query plan, which Josh Drake asked for.) No matter what\n> value I use for LIMIT, the query planner now insists on sequentially\n> scanning huge_table and ignoring the available index. (If I turn off\n> enable_seqscan, I get the second worst query plan I posted yesterday.)\n>\n> Anyway, I take it that there is no way to bypass the optimizer and\n> instruct PostgreSQL exactly how one wants the search performed?\n\nThere is no way to bypass. But there are many ways to tune it.\n\n\n\nHmm, there is a probability (though statistics are more probable\ngo) that you're using some older version of PostgreSQL, and you're\nhitting same problem as I did:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-07/msg00345.php\n\nTom has provided back then a patch, which fixed it:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-07/msg00352.php\n\n...but I don't remember when it made into release.\n\n Regfa\n", "msg_date": "Thu, 25 May 2006 19:07:11 +0200", "msg_from": "\"Dawid Kuroczko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" }, { "msg_contents": "On May 25, 2006, at 12:07 PM, Dawid Kuroczko wrote:\n> On 5/25/06, [email protected] <[email protected]> wrote:\n>> Well, they're not my statistics; they're explain's. You mean there's\n>> a bug in explain? I agree that it makes no sense that the costs \n>> don't\n>> differ as much as one would expect, but you can see right there the\n>> numbers of rows for the two tables. At any rate, how would one go\n>> about finding an explanation for these strange stats?\n>\n> Well, the query planner uses statistics to deduce the best plan\n> possible. Explain includes this statistical data in its output.\n> See:\n> http://www.postgresql.org/docs/8.1/interactive/planner-stats.html\n> ...for information about what it is all about.\n>\n> The idea is that your statistics are probably not detailed enough\n> to help the planner. 
See ALTER TABLE SET STATISTICS to change\n> that.\n\nhttp://www.pervasive-postgres.com/lp/newsletters/2006/ \nInsights_postgres_Mar.asp#4 might also be worth your time to read.\n\n> Hmm, there is a probability (though statistics are more probable\n> go) that you're using some older version of PostgreSQL, and you're\n> hitting same problem as I did:\n>\n> http://archives.postgresql.org/pgsql-performance/2005-07/msg00345.php\n>\n> Tom has provided back then a patch, which fixed it:\n>\n> http://archives.postgresql.org/pgsql-performance/2005-07/msg00352.php\n>\n> ...but I don't remember when it made into release.\n\nAccording to cvs, it's been in since 8.1 and 8.0.4:\n\nRevision 1.111.4.2: download - view: text, markup, annotated - select \nfor diffs\nFri Jul 22 19:12:33 2005 UTC (10 months ago) by tgl\nBranches: REL8_0_STABLE\nCVS tags: REL8_0_8, REL8_0_7, REL8_0_6, REL8_0_5, REL8_0_4\nDiff to: previous 1.111.4.1: preferred, colored; branchpoint 1.111: \npreferred, colored; next MAIN 1.112: preferred, colored\nChanges since revision 1.111.4.1: +18 -37 lines\n\nFix compare_fuzzy_path_costs() to behave a bit more sanely. The \noriginal\ncoding would ignore startup cost differences of less than 1% of the\nestimated total cost; which was OK for normal planning but highly not OK\nif a very small LIMIT was applied afterwards, so that startup cost \nbecomes\nthe name of the game. Instead, compare startup and total costs fuzzily\nbut independently. This changes the plan selected for two queries in \nthe\nregression tests; adjust expected-output files for resulting changes in\nrow order. Per reports from Dawid Kuroczko and Sam Mason.\n\nRevision 1.124: download - view: text, markup, annotated - select for \ndiffs\nFri Jul 22 19:12:01 2005 UTC (10 months ago) by tgl\nBranches: MAIN\nCVS tags: REL8_1_0BETA3, REL8_1_0BETA2, REL8_1_0BETA1\nDiff to: previous 1.123: preferred, colored\nChanges since revision 1.123: +18 -37 lines\n\nFix compare_fuzzy_path_costs() to behave a bit more sanely. The \noriginal\ncoding would ignore startup cost differences of less than 1% of the\nestimated total cost; which was OK for normal planning but highly not OK\nif a very small LIMIT was applied afterwards, so that startup cost \nbecomes\nthe name of the game. Instead, compare startup and total costs fuzzily\nbut independently. This changes the plan selected for two queries in \nthe\nregression tests; adjust expected-output files for resulting changes in\nrow order. Per reports from Dawid Kuroczko and Sam Mason.\n\n--\nJim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n\n", "msg_date": "Thu, 25 May 2006 17:13:07 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a huge_table/tiny_table join" } ]
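A practical sketch of the advice in this thread, in SQL. The table names, the id column, and the upper(id::text) join condition are taken from the plans quoted above; the statistics target of 500, the assumption that no expression index exists yet, and the final query (a stand-in for the original, which was never posted in full) are illustrative only.

-- Refresh statistics; a ~19676-cost seq scan over 300 rows suggests tiny_table
-- may also be badly bloated or have stale stats.
VACUUM ANALYZE tiny_table;
-- Raise the per-column statistics target on the join key (the upper limit in
-- releases of this era is 1000) and re-analyze:
ALTER TABLE huge_table ALTER COLUMN id SET STATISTICS 500;
ANALYZE huge_table;

-- The join compares upper(id::text), so an expression index on that value
-- gives the planner an indexed path into huge_table:
CREATE INDEX huge_table_upper_id_idx ON huge_table (upper(id::text));
ANALYZE huge_table;

-- Then compare the planner's estimates with what actually happens:
EXPLAIN ANALYZE
SELECT h.*
FROM huge_table h
JOIN tiny_table t ON upper(h.id::text) = upper(t.id::text)
LIMIT 6000;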
[ { "msg_contents": "been doing a lot of pgsql/mysql performance testing lately, and there\nis one query that mysql does much better than pgsql...and I see it a\nlot in normal development:\n\nselect a,b,max(c) from t group by a,b;\n\nt has an index on a,b,c.\n\nin my sample case with cardinality of 1000 for a, 2000 for b, and\n300000 records in t, pgsql does a seq. scan on dev box in about a\nsecond (returning 2000 records).\n\nrecent versions of mysql do much better, returning same set in < 20ms.\nmysql explain says it uses an index to optimize the group by somehow.\nis there a faster way to write this query?\n\nMerlin\n", "msg_date": "Thu, 25 May 2006 16:07:19 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "is it possible to make this faster?" }, { "msg_contents": "On Thu, May 25, 2006 at 16:07:19 -0400,\n Merlin Moncure <[email protected]> wrote:\n> been doing a lot of pgsql/mysql performance testing lately, and there\n> is one query that mysql does much better than pgsql...and I see it a\n> lot in normal development:\n> \n> select a,b,max(c) from t group by a,b;\n> \n> t has an index on a,b,c.\n> \n> in my sample case with cardinality of 1000 for a, 2000 for b, and\n> 300000 records in t, pgsql does a seq. scan on dev box in about a\n> second (returning 2000 records).\n> \n> recent versions of mysql do much better, returning same set in < 20ms.\n> mysql explain says it uses an index to optimize the group by somehow.\n> is there a faster way to write this query?\n\nSELECT DISTINCT ON (a, b) a, b, c FROM t ORDER BY a DESC, b DESC, c DESC;\n", "msg_date": "Thu, 25 May 2006 15:23:14 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On 5/25/06, Bruno Wolff III <[email protected]> wrote:\n> On Thu, May 25, 2006 at 16:07:19 -0400,\n> Merlin Moncure <[email protected]> wrote:\n> > been doing a lot of pgsql/mysql performance testing lately, and there\n> > is one query that mysql does much better than pgsql...and I see it a\n> > lot in normal development:\n> >\n> > select a,b,max(c) from t group by a,b;\n> >\n\n> SELECT DISTINCT ON (a, b) a, b, c FROM t ORDER BY a DESC, b DESC, c DESC;\n\nthat is actually slower than group by in my case...am i missing\nsomething? (both essentially resolved to seq_scan)\n\nmerlin\n", "msg_date": "Thu, 25 May 2006 16:31:40 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On Thu, May 25, 2006 at 04:07:19PM -0400, Merlin Moncure wrote:\n> been doing a lot of pgsql/mysql performance testing lately, and there\n> is one query that mysql does much better than pgsql...and I see it a\n> lot in normal development:\n> \n> select a,b,max(c) from t group by a,b;\n> \n> t has an index on a,b,c.\n\nThe planner _should_ TTBOMK be able to do it by itself in 8.1, but have you\ntried something along the following lines?\n\n select a,b,(select c from t t2 order by c desc where t1.a=t2.a and t1.b=t2.b)\n from t t1 group by a,b;\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 25 May 2006 22:36:30 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" 
}, { "msg_contents": "On May 25, 2006 01:31 pm, \"Merlin Moncure\" <[email protected]> wrote:\n> > SELECT DISTINCT ON (a, b) a, b, c FROM t ORDER BY a DESC, b DESC, c\n> > DESC;\n>\n> that is actually slower than group by in my case...am i missing\n> something? (both essentially resolved to seq_scan)\n\nTry it with an index on a,b,c.\n\n-- \nAlan\n", "msg_date": "Thu, 25 May 2006 13:36:30 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On Thu, May 25, 2006 at 16:31:40 -0400,\n Merlin Moncure <[email protected]> wrote:\n> On 5/25/06, Bruno Wolff III <[email protected]> wrote:\n> >On Thu, May 25, 2006 at 16:07:19 -0400,\n> > Merlin Moncure <[email protected]> wrote:\n> >> been doing a lot of pgsql/mysql performance testing lately, and there\n> >> is one query that mysql does much better than pgsql...and I see it a\n> >> lot in normal development:\n> >>\n> >> select a,b,max(c) from t group by a,b;\n> >>\n> \n> >SELECT DISTINCT ON (a, b) a, b, c FROM t ORDER BY a DESC, b DESC, c DESC;\n> \n> that is actually slower than group by in my case...am i missing\n> something? (both essentially resolved to seq_scan)\n\nIf there aren't many c's for each (a,b), then a sort might be the best way to\ndo this. I don't remember if skip scanning ever got done, but if it did, it\nwould have been 8.1 or later.\n", "msg_date": "Thu, 25 May 2006 15:47:46 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> been doing a lot of pgsql/mysql performance testing lately, and there\n> is one query that mysql does much better than pgsql...and I see it a\n> lot in normal development:\n\n> select a,b,max(c) from t group by a,b;\n\n> t has an index on a,b,c.\n\nThe index won't help, as per this comment from planagg.c:\n\n\t * We don't handle GROUP BY, because our current implementations of\n\t * grouping require looking at all the rows anyway, and so there's not\n\t * much point in optimizing MIN/MAX.\n\nGiven the numbers you mention (300k rows in 2000 groups) I'm not\nconvinced that an index-based implementation would help much; we'd\nstill need to fetch at least one record out of every 150, which is\ngoing to cost near as much as seqscanning all of them.\n\n> recent versions of mysql do much better, returning same set in < 20ms.\n\nWell, since they don't do MVCC they can answer this query from the\nindex without going to the heap at all. But that still seems remarkably\nfast for something that has to grovel through 300k index entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 May 2006 16:52:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On 5/25/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Thu, May 25, 2006 at 04:07:19PM -0400, Merlin Moncure wrote:\n> > been doing a lot of pgsql/mysql performance testing lately, and there\n> > is one query that mysql does much better than pgsql...and I see it a\n> > lot in normal development:\n> >\n> > select a,b,max(c) from t group by a,b;\n\n> select a,b,(select c from t t2 order by c desc where t1.a=t2.a and t1.b=t2.b)\n> from t t1 group by a,b;\n\nthis came out to a tie with the group by approach, although it\nproduced a different (but similar) plan. 
we are still orders of\nmagnitude behind mysql here.\n\nInterestingly, if I extract out the distinct values of a,b to a temp\ntable and rejoin to t using your approach, I get competitive times\nwith mysql. this means the essential problem is:\n\nselect a,b from t group by a,b\n\nis slow. This feels like the same penalty for mvcc we pay with count(*)...hm.\n\nmerlin\n", "msg_date": "Thu, 25 May 2006 16:54:09 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On Thu, May 25, 2006 at 04:54:09PM -0400, Merlin Moncure wrote:\n>> select a,b,(select c from t t2 order by c desc where t1.a=t2.a and \n>> t1.b=t2.b)\n>> from t t1 group by a,b;\n> this came out to a tie with the group by approach, although it\n> produced a different (but similar) plan. we are still orders of\n> magnitude behind mysql here.\n\nActually, it _should_ produce a syntax error -- it's missing a LIMIT 1 in the\nsubquery.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 25 May 2006 23:08:50 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> \"Merlin Moncure\" <[email protected]> writes:\n>> recent versions of mysql do much better, returning same set in < 20ms.\n\n> Well, since they don't do MVCC they can answer this query from the\n> index without going to the heap at all. But that still seems remarkably\n> fast for something that has to grovel through 300k index entries.\n\nAre you sure you measured that right? I tried to duplicate this using\nmysql 5.0.21, and I see runtimes of 0.45 sec without an index and\n0.15 sec with. This compares to psql times around 0.175 sec. Doesn't\nlook to me like we're hurting all that badly, even without using the\nindex.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 May 2006 17:11:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On Thu, 2006-05-25 at 15:52, Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > been doing a lot of pgsql/mysql performance testing lately, and there\n> > is one query that mysql does much better than pgsql...and I see it a\n> > lot in normal development:\n> \n> > select a,b,max(c) from t group by a,b;\n> \n> > t has an index on a,b,c.\n> \n> The index won't help, as per this comment from planagg.c:\n> \n> \t * We don't handle GROUP BY, because our current implementations of\n> \t * grouping require looking at all the rows anyway, and so there's not\n> \t * much point in optimizing MIN/MAX.\n> \n> Given the numbers you mention (300k rows in 2000 groups) I'm not\n> convinced that an index-based implementation would help much; we'd\n> still need to fetch at least one record out of every 150, which is\n> going to cost near as much as seqscanning all of them.\n> \n> > recent versions of mysql do much better, returning same set in < 20ms.\n> \n> Well, since they don't do MVCC they can answer this query from the\n> index without going to the heap at all. 
But that still seems remarkably\n> fast for something that has to grovel through 300k index entries.\n\nWell, they do, just with innodb tables.\n\nMerlin, have you tried this against innodb tables to see what you get?\n", "msg_date": "Thu, 25 May 2006 16:15:29 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On Thu, 2006-05-25 at 16:52 -0400, Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > been doing a lot of pgsql/mysql performance testing lately, and there\n> > is one query that mysql does much better than pgsql...and I see it a\n> > lot in normal development:\n> \n> > select a,b,max(c) from t group by a,b;\n> \n> > t has an index on a,b,c.\n> \n> The index won't help, as per this comment from planagg.c:\n> \n> \t * We don't handle GROUP BY, because our current implementations of\n> \t * grouping require looking at all the rows anyway, and so there's not\n> \t * much point in optimizing MIN/MAX.\n> \n> Given the numbers you mention (300k rows in 2000 groups) I'm not\n> convinced that an index-based implementation would help much; we'd\n> still need to fetch at least one record out of every 150, which is\n> going to cost near as much as seqscanning all of them.\n\nWell, if the MySQL server has enough RAM that the index is cached (or\nindex + relevant chunks of data file if using InnoDB?) then that would\nexplain how MySQL can use an index to get fast results.\n\n-- Mark Lewis\n", "msg_date": "Thu, 25 May 2006 14:26:23 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "On May 25, 2006, at 4:11 PM, Tom Lane wrote:\n> Tom Lane <[email protected]> writes:\n>> \"Merlin Moncure\" <[email protected]> writes:\n>>> recent versions of mysql do much better, returning same set in < \n>>> 20ms.\n>\n>> Well, since they don't do MVCC they can answer this query from the\n>> index without going to the heap at all. But that still seems \n>> remarkably\n>> fast for something that has to grovel through 300k index entries.\n>\n> Are you sure you measured that right? I tried to duplicate this using\n> mysql 5.0.21, and I see runtimes of 0.45 sec without an index and\n> 0.15 sec with. This compares to psql times around 0.175 sec. Doesn't\n> look to me like we're hurting all that badly, even without using the\n> index.\n\nWell, that would depend greatly on how wide the rows were, and I \ndon't believe the OP ever mentioned that. If he's got a nice, fat \nvarchar(1024) in that table, then it's not surprising that an index \nwould help things.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n", "msg_date": "Thu, 25 May 2006 17:17:43 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On May 25, 2006, at 4:11 PM, Tom Lane wrote:\n>> Are you sure you measured that right? I tried to duplicate this using\n>> mysql 5.0.21, and I see runtimes of 0.45 sec without an index and\n>> 0.15 sec with. This compares to psql times around 0.175 sec. 
Doesn't\n>> look to me like we're hurting all that badly, even without using the\n>> index.\n\n> Well, that would depend greatly on how wide the rows were, and I \n> don't believe the OP ever mentioned that. If he's got a nice, fat \n> varchar(1024) in that table, then it's not surprising that an index \n> would help things.\n\nWide rows might slow down the psql side of things somewhat (though\nprobably not as much as you think). That doesn't account for the\ndiscrepancy in our mysql results though.\n\nFor the record, I was testing with a table like\n\tcreate table t(a int, b int, c int);\n\tcreate index ti on t(a,b,c);\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 May 2006 18:30:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "Also, are you sure your numbers are not coming out of the mysql query \ncache?\n\nThat might explain some of it - also with Tom seeing comprable \nnumbers in his test.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n\n", "msg_date": "Thu, 25 May 2006 23:10:08 -0400", "msg_from": "Jeff - <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "Jeff - <[email protected]> writes:\n> Also, are you sure your numbers are not coming out of the mysql query \n> cache?\n> That might explain some of it - also with Tom seeing comprable \n> numbers in his test.\n\nIndeed, enabling the mysql query cache makes the timings drop to\nnil ... as long as I present a query that's strcmp-equal to the\nlast one (not different in whitespace for instance).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 May 2006 23:29:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On 5/25/06, Tom Lane <[email protected]> wrote:\n> Tom Lane <[email protected]> writes:\n> > \"Merlin Moncure\" <[email protected]> writes:\n> >> recent versions of mysql do much better, returning same set in < 20ms.\n\n> Are you sure you measured that right? I tried to duplicate this using\n> mysql 5.0.21, and I see runtimes of 0.45 sec without an index and\n> 0.15 sec with. This compares to psql times around 0.175 sec. Doesn't\n> look to me like we're hurting all that badly, even without using the\n> index.\n\nWell, my numbers were approximate, but I tested on a few different\nmachines. the times got closer as the cpu speed got faster. pg\nreally loves a quick cpu. on 600 mhz p3 I got 70ms on mysql and\n1050ms on pg. Mysql query cache is always off for my performance\ntesting.\n\nMy a and b columns were ID columns from another table, so I rewrote\nthe join and now pg is smoking mysql (again).\n\nTo quickly answer the other questions:\n\n1. no, not testing innodb\n2, rows are narrow\n\nMerlin\n", "msg_date": "Fri, 26 May 2006 00:47:35 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> On 5/25/06, Tom Lane <[email protected]> wrote:\n>> \"Merlin Moncure\" <[email protected]> writes:\n>>> recent versions of mysql do much better, returning same set in < 20ms.\n\n>> Are you sure you measured that right? I tried to duplicate this using\n>> mysql 5.0.21, and I see runtimes of 0.45 sec without an index and\n>> 0.15 sec with. 
This compares to psql times around 0.175 sec. Doesn't\n>> look to me like we're hurting all that badly, even without using the\n>> index.\n\n> Well, my numbers were approximate, but I tested on a few different\n> machines. the times got closer as the cpu speed got faster. pg\n> really loves a quick cpu. on 600 mhz p3 I got 70ms on mysql and\n> 1050ms on pg. Mysql query cache is always off for my performance\n> testing.\n\nWell, this bears looking into, because I couldn't get anywhere near 20ms\nwith mysql. I was using a dual Xeon 2.8GHz machine which ought to be\nquick enough, and the stock Fedora Core 5 RPM of mysql. (Well, actually\nthat SRPM built on FC4, because this machine is still on FC4.) I made a\nMyISAM table with three integer columns as mentioned, and filled it with\nabout 300000 rows with 2000 distinct values of (a,b) and random values\nof c. I checked the timing both in the mysql CLI, and with a trivial\ntest program that timed mysql_real_query() plus mysql_store_result(),\ngetting pretty near the same timings each way.\n\nBTW, in pgsql it helps a whole lot to raise work_mem a bit for this\nexample --- at default work_mem it wants to do sort + group_aggregate,\nwhile with work_mem 2000 or more it'll use a hash_aggregate plan which\nis quite a bit faster.\n\nIt seems possible that there is some equivalently simple tuning on the\nmysql side that you did and I didn't. This is an utterly stock mysql\ninstall, just \"rpm -i\" and \"service mysqld start\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 May 2006 10:22:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On 5/26/06, Tom Lane <[email protected]> wrote:\n> Well, this bears looking into, because I couldn't get anywhere near 20ms\n> with mysql. I was using a dual Xeon 2.8GHz machine which ought to be\n\ndid you have a key on a,b,c? if I include unimportant unkeyed field d\nthe query time drops from 70ms to ~ 1 second. mysql planner is\ntricky, it's full of special case optimizations...\n\nselect count(*) from (select a,b,max(c) group by a,b) q;\nblows the high performance case as does putting the query in a view.\n\nmysql> select version();\n+-----------+\n| version() |\n+-----------+\n| 5.0.16 |\n+-----------+\n1 row in set (0.00 sec)\n\nmysql> set global query_cache_size = 0;\nQuery OK, 0 rows affected (0.00 sec)\n\nmysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n[...]\n+---------+--------+------------------+\n939 rows in set (0.07 sec)\n\nmysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n[...]\n+---------+--------+------------------+--------------+\n939 rows in set (1.39 sec)\n\nmerlin\n", "msg_date": "Fri, 26 May 2006 12:56:44 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> did you have a key on a,b,c?\n\nYeah, I did\n\tcreate index t1i on t1 (a,b,c);\nDo I need to use some other syntax to get it to work?\n\n> select count(*) from (select a,b,max(c) group by a,b) q;\n> blows the high performance case as does putting the query in a view.\n\nI noticed that too, while trying to suppress the returning of the\nresults for timing purposes ... still a few bugs in their optimizer\nobviously. 
(Curiously, EXPLAIN still claims that the index is being\nused.)\n\n> mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> [...]\n> +---------+--------+------------------+\n> 939 rows in set (0.07 sec)\n\n> mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> [...]\n> +---------+--------+------------------+--------------+\n> 939 rows in set (1.39 sec)\n\nI don't understand what you did differently in those two cases?\nOr was there a DROP INDEX between?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 May 2006 13:07:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On 5/26/06, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > did you have a key on a,b,c?\n> Yeah, I did\n> create index t1i on t1 (a,b,c);\n> Do I need to use some other syntax to get it to work?\n\ncan't thing of anything, I'm running completely stock, did you do a\noptimize table foo? is the wind blowing in the right direction?\n\n> > select count(*) from (select a,b,max(c) group by a,b) q;\n> > blows the high performance case as does putting the query in a view.\n\n> I noticed that too, while trying to suppress the returning of the\n> results for timing purposes ... still a few bugs in their optimizer\n> obviously. (Curiously, EXPLAIN still claims that the index is being\n> used.)\n\nwell, they do some tricky things pg can't do for architectural reasons\nbut the special case is obviously hard to get right. I suppose this\nkinda agrues against doing all kinds of acrobatics to optimize mvcc\nweak cases like the above and count(*)...better to make heap access as\nquick as possible.\n\n> > mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> > 939 rows in set (0.07 sec)\n\n> > mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> > 939 rows in set (1.39 sec)\n\noops, pasted the wrong query..case 2 should have been\nselect user_id, acc_id, max(sample_date), disksize from usage_samples\ngroup by 1,2\nillustrating what going to the heap does to the time.\n\nmerlin\n", "msg_date": "Fri, 26 May 2006 13:46:30 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> can't thing of anything, I'm running completely stock, did you do a\n> optimize table foo?\n\nNope, never heard of that before. But I did it, and it doesn't seem to\nhave changed my results at all.\n\n> mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> 939 rows in set (0.07 sec)\n\n0.07 seconds is not impossibly out of line with my result of 0.15 sec,\nmaybe your machine is just 2X faster than mine. This is a 2.8GHz dual\nXeon EM64T, what are you testing? You said \"less than 20 msec\" before,\nwhat was that on?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 May 2006 13:55:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "On 5/26/06, Tom Lane <[email protected]> wrote:\n> > mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2\n> > 939 rows in set (0.07 sec)\n>\n> 0.07 seconds is not impossibly out of line with my result of 0.15 sec,\n> maybe your machine is just 2X faster than mine. 
This is a 2.8GHz dual\n> Xeon EM64T, what are you testing? You said \"less than 20 msec\" before,\n> what was that on?\n\n600 mhz p3: 70 ms, 1100 ms slow case\n1600 mhz p4: 10-30ms (mysql timer not very precise) 710ms slow case\nquad opteron 865: 0 :-)\ndual p3 1133 Mhz xeon, mysql 4.0.16: 500 ms\n\nusing steinar's 'substitute group by' for pg I get 40ms on the p3 and\nlow times on all else. your time of 150 ms is looking like the slow\ncase on my results.\n\nmerlin\n", "msg_date": "Fri, 26 May 2006 15:40:22 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is it possible to make this faster?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> your time of 150 ms is looking like the slow case on my results.\n\nYeah... so what's wrong with my test? Anyone else care to duplicate\nthe test and see what they get?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 May 2006 16:44:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster? " }, { "msg_contents": "Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n>> your time of 150 ms is looking like the slow case on my results.\n> \n> Yeah... so what's wrong with my test? Anyone else care to duplicate\n> the test and see what they get?\n\nUsing your test [generating c from int(rand(1000))], I get 230 ms using \n5.0.18 on a P3 1000 Mhz (doing optimize table on t made no difference at \nall).\n\nCheers\n\nMark\n", "msg_date": "Sun, 28 May 2006 13:11:47 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to make this faster?" } ]
[ { "msg_contents": "Hi There,\n\nI've got a situation where I need to pull profit information by product \ncategory, as well as the totals for each branch.\n\nBasically, something like\n\nSELECT branch_id, prod_cat_id, sum(prod_profit) as prod_cat_profit\n FROM () as b1\nWHERE x = y\nGROUP BY branch, prod_cat_id\n\n\nNow, I also need the branch total, effectively,\nSELECT branch_id, sum(prod_profit) as branch_total\n FROM () as b1\nWHERE x = y\nGROUP BY branch_id.\n\n\nSince the actual queries for generating prod_profit are non-trivial, how \ndo I combine them to get the following select list?\n\nOr is there a more efficient way?\n\nKind Regards,\nJames", "msg_date": "Fri, 26 May 2006 11:56:39 +0200", "msg_from": "James Neethling <[email protected]>", "msg_from_op": true, "msg_subject": "column totals" }, { "msg_contents": "James Neethling wrote:\n> Hi There,\n>\n> I've got a situation where I need to pull profit information by \n> product category, as well as the totals for each branch.\n>\n> Basically, something like\n>\n> SELECT branch_id, prod_cat_id, sum(prod_profit) as prod_cat_profit\n> FROM () as b1\n> WHERE x = y\n> GROUP BY branch, prod_cat_id\n>\n>\n> Now, I also need the branch total, effectively,\n> SELECT branch_id, sum(prod_profit) as branch_total\n> FROM () as b1\n> WHERE x = y\n> GROUP BY branch_id.\n>\n>\n> Since the actual queries for generating prod_profit are non-trivial, \n> how do I combine them to get the following select list?\nSELECT branch_id, prod_cat_id, sum(prod_profit) as prod_cat_profit, \nsum(prod_profit) as branch_total\n>\n> Or is there a more efficient way?\n>\n> Kind Regards,\n> James\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>", "msg_date": "Fri, 26 May 2006 15:19:51 +0200", "msg_from": "James Neethling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column totals" }, { "msg_contents": "On f�s, 2006-05-26 at 11:56 +0200, James Neethling wrote:\n\n> SELECT branch_id, prod_cat_id, sum(prod_profit) as prod_cat_profit\n> FROM () as b1\n> WHERE x = y\n> GROUP BY branch, prod_cat_id\n> \n> \n> Now, I also need the branch total, effectively,\n> SELECT branch_id, sum(prod_profit) as branch_total\n> FROM () as b1\n> WHERE x = y\n> GROUP BY branch_id.\n> \n> \n> Since the actual queries for generating prod_profit are non-trivial, how \n> do I combine them to get the following select list?\n\none simple way using temp table and 2 steps:\n\nCREATE TEMP TABLE foo AS\n SELECT branch_id, \n prod_cat_id, \n sum(prod_profit) as prod_cat_profit\n FROM () as b1\n WHERE x = y\n GROUP BY branch, prod_cat_id;\n\nSELECT branch_id, \n prod_cat_id, \n prod_cat_profit,\n branch_total\nFROM foo as foo1 \n JOIN \n (SELECT branch_id, \n sum(prod_cat_profit) as branch_total\n FROM foo\n GROUP BY branch_id \n ) as foo2 USING branch_id;\n\n\n(untested)\n\ngnari\n\n\n", "msg_date": "Fri, 26 May 2006 16:23:37 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column totals" } ]
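A runnable sketch of the temp-table approach suggested in the thread above. The base table sales(branch_id, prod_cat_id, prod_profit) is an assumption standing in for the non-trivial profit subquery that was never posted, and note that JOIN ... USING needs a parenthesized column list.

CREATE TEMP TABLE cat_profit AS
SELECT branch_id, prod_cat_id, sum(prod_profit) AS prod_cat_profit
FROM sales                  -- stands in for the real "() AS b1 ... WHERE x = y"
GROUP BY branch_id, prod_cat_id;

SELECT branch_id, c.prod_cat_id, c.prod_cat_profit, b.branch_total
FROM cat_profit AS c
JOIN (SELECT branch_id, sum(prod_cat_profit) AS branch_total
      FROM cat_profit
      GROUP BY branch_id) AS b USING (branch_id)
ORDER BY branch_id, c.prod_cat_id;

DROP TABLE cat_profit;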
[ { "msg_contents": "Hi, \n\nI have 2 servers, one of them has a 7.4 postgres and the other has a 8.1 \n\nI have this query: \n\nselect fagrempr,fagrdocr,fagrserr,fagrparr \nfrom arqcfat \nleft join arqfagr on fagrorig = 'CFAT' and fagrdocu = cfatdocu and fagrempe \n= cfatempe and fagrseri = cfatseri \nwhere cfatdata between '2006-01-01' and '2006-01-31' \nand cfattipo = 'VD' \nand cfatstat <> 'C' \nand fagrform = 'CT' \nand fagrtipr = 'REC' \ngroup by fagrempr,fagrdocr,fagrserr,fagrparr \n\nThe 8.1 give me this plan: \n\n HashAggregate (cost=59.07..59.08 rows=1 width=20) \n -> Nested Loop (cost=0.00..59.06 rows=1 width=20) \n -> Index Scan using arqfagr_arqfa3_key on arqfagr \n(cost=0.00..53.01 rows=1 width=36) \n Index Cond: ((fagrorig = 'CFAT'::bpchar) AND (fagrform = \n'CT'::bpchar)) \n Filter: (fagrtipr = 'REC'::bpchar) \n -> Index Scan using arqcfat_arqcfat1_key on arqcfat \n(cost=0.00..6.03 rows=1 width=16) \n Index Cond: ((\"outer\".fagrempe = arqcfat.cfatempe) AND \n(\"outer\".fagrdocu = arqcfat.cfatdocu) AND (\"outer\".fagrseri = \narqcfat.cfatseri)) \n Filter: ((cfatdata >= '01-01-2006'::date) AND (cfatdata <= \n'31-01-2006'::date) AND (cfattipo = 'VD'::bpchar) AND (cfatstat <> \n'C'::bpchar)) \n\nThe 7.4 give me this plan: \n\nHashAggregate (cost=2163.93..2163.93 rows=1 width=19) \n -> Nested Loop (cost=0.00..2163.92 rows=1 width=19) \n -> Index Scan using arqcfat_arqcfat2_key on arqcfat \n(cost=0.00..2145.78 rows=3 width=15) \n Index Cond: ((cfatdata >= '01-01-2006'::date) AND (cfatdata \n<= '31-01-2006'::date)) \n Filter: ((cfattipo = 'VD'::bpchar) AND (cfatstat <> \n'C'::bpchar)) \n -> Index Scan using arqfagr_arqfa1_key on arqfagr \n(cost=0.00..6.03 rows=1 width=34) \n Index Cond: ((arqfagr.fagrorig = 'CFAT'::bpchar) AND \n(arqfagr.fagrempe = \"outer\".cfatempe) AND (arqfagr.fagrdocu = \n\"outer\".cfatdocu) AND (arqfagr.fagrseri = \"outer\".cfatseri)) \n Filter: ((fagrform = 'CT'::bpchar) AND (fagrtipr = \n'REC'::bpchar)) \n\nWhy the plan is worst in postgres 8.1? \n\nI know the best plan is read fisrt the table which has a date index as the \n7.4 did, because in a few days I will have few lines too, so the query will \nbe faster. \n\nIs there some thing I have to change in 8.1 to make the plans as the 7.4? \n\nThanks , \n\nWaldomiro C. Neto. \n\n\n", "msg_date": "Fri, 26 May 2006 09:04:56 -0300", "msg_from": "[email protected] <[email protected]>", "msg_from_op": true, "msg_subject": "Why the 8.1 plan is worst than 7.4?" 
}, { "msg_contents": "What's explain analyze show?\n\nOn Fri, May 26, 2006 at 09:04:56AM -0300, [email protected] wrote:\n> Hi, \n> \n> I have 2 servers, one of them has a 7.4 postgres and the other has a 8.1 \n> \n> I have this query: \n> \n> select fagrempr,fagrdocr,fagrserr,fagrparr \n> from arqcfat \n> left join arqfagr on fagrorig = 'CFAT' and fagrdocu = cfatdocu and fagrempe \n> = cfatempe and fagrseri = cfatseri \n> where cfatdata between '2006-01-01' and '2006-01-31' \n> and cfattipo = 'VD' \n> and cfatstat <> 'C' \n> and fagrform = 'CT' \n> and fagrtipr = 'REC' \n> group by fagrempr,fagrdocr,fagrserr,fagrparr \n> \n> The 8.1 give me this plan: \n> \n> HashAggregate (cost=59.07..59.08 rows=1 width=20) \n> -> Nested Loop (cost=0.00..59.06 rows=1 width=20) \n> -> Index Scan using arqfagr_arqfa3_key on arqfagr \n> (cost=0.00..53.01 rows=1 width=36) \n> Index Cond: ((fagrorig = 'CFAT'::bpchar) AND (fagrform = \n> 'CT'::bpchar)) \n> Filter: (fagrtipr = 'REC'::bpchar) \n> -> Index Scan using arqcfat_arqcfat1_key on arqcfat \n> (cost=0.00..6.03 rows=1 width=16) \n> Index Cond: ((\"outer\".fagrempe = arqcfat.cfatempe) AND \n> (\"outer\".fagrdocu = arqcfat.cfatdocu) AND (\"outer\".fagrseri = \n> arqcfat.cfatseri)) \n> Filter: ((cfatdata >= '01-01-2006'::date) AND (cfatdata <= \n> '31-01-2006'::date) AND (cfattipo = 'VD'::bpchar) AND (cfatstat <> \n> 'C'::bpchar)) \n> \n> The 7.4 give me this plan: \n> \n> HashAggregate (cost=2163.93..2163.93 rows=1 width=19) \n> -> Nested Loop (cost=0.00..2163.92 rows=1 width=19) \n> -> Index Scan using arqcfat_arqcfat2_key on arqcfat \n> (cost=0.00..2145.78 rows=3 width=15) \n> Index Cond: ((cfatdata >= '01-01-2006'::date) AND (cfatdata \n> <= '31-01-2006'::date)) \n> Filter: ((cfattipo = 'VD'::bpchar) AND (cfatstat <> \n> 'C'::bpchar)) \n> -> Index Scan using arqfagr_arqfa1_key on arqfagr \n> (cost=0.00..6.03 rows=1 width=34) \n> Index Cond: ((arqfagr.fagrorig = 'CFAT'::bpchar) AND \n> (arqfagr.fagrempe = \"outer\".cfatempe) AND (arqfagr.fagrdocu = \n> \"outer\".cfatdocu) AND (arqfagr.fagrseri = \"outer\".cfatseri)) \n> Filter: ((fagrform = 'CT'::bpchar) AND (fagrtipr = \n> 'REC'::bpchar)) \n> \n> Why the plan is worst in postgres 8.1? \n> \n> I know the best plan is read fisrt the table which has a date index as the \n> 7.4 did, because in a few days I will have few lines too, so the query will \n> be faster. \n> \n> Is there some thing I have to change in 8.1 to make the plans as the 7.4? \n> \n> Thanks , \n> \n> Waldomiro C. Neto. \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 30 May 2006 19:38:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the 8.1 plan is worst than 7.4?" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Why the plan is worst in postgres 8.1? \n\n(1) you have not actually shown us that the plan is worse. If you are\ncomplaining that the planner is wrong, EXPLAIN output (which contains\nonly the planner's estimates) is useless for proving your point. Show\nEXPLAIN ANALYZE.\n\n(2) Have you ANALYZEd these tables recently in either database? 
The\ndiscrepancies in estimated rowcounts suggest that the two planners\nare working with different statistics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 May 2006 23:20:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the 8.1 plan is worst than 7.4? " } ]
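Concretely, the information asked for above can be gathered like this on both servers; the query text is taken from the first message, and pg_stats shows the statistics each planner is basing its row estimates on.

ANALYZE arqcfat;
ANALYZE arqfagr;

EXPLAIN ANALYZE
SELECT fagrempr, fagrdocr, fagrserr, fagrparr
FROM arqcfat
LEFT JOIN arqfagr ON fagrorig = 'CFAT'
                 AND fagrdocu = cfatdocu
                 AND fagrempe = cfatempe
                 AND fagrseri = cfatseri
WHERE cfatdata BETWEEN '2006-01-01' AND '2006-01-31'
  AND cfattipo = 'VD'
  AND cfatstat <> 'C'
  AND fagrform = 'CT'
  AND fagrtipr = 'REC'
GROUP BY fagrempr, fagrdocr, fagrserr, fagrparr;

-- Row-estimate inputs on each server:
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename IN ('arqcfat', 'arqfagr');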
[ { "msg_contents": "I've set up something similar the 'recommended' way to merge data into\nthe DB, i.e.\n\nhttp://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n\nhowever I did it with a trigger on insert, i.e. (not my schema :) ):\n\nCREATE TABLE db (a INT PRIMARY KEY, b TEXT, c INTEGER, d INET);\n\nCREATE FUNCTION merge_db() RETURNS TRIGGER AS\n$$\nBEGIN\n UPDATE db SET b = NEW.data\n WHERE a = NEW.key\n AND NOT (c IS DISTINCT FROM NEW.c)\n AND NOT (d IS DISTINCT FROM NEW.d);\n IF found THEN\n RETURN NULL;\n END IF;\n RETURN NEW;\nEND;\n$$\nLANGUAGE plpgsql;\n\nCREATE TRIGGER merge_db_tr BEFORE INSERT ON db\nFOR EACH ROW EXECUTE PROCEDURE merge_db();\n\nIs this the best/fastest way to do this sort of thing? I only get\nabout 50 records/second inserts, while without the trigger (inserting\nunmerged data) I can get more like 1000/second. I'm doing the whole\nNOT ... IS DISTINCT stuff to handle NULL values that might be in the\ncolumns ... I'm only considering two column keys equal if (a,c,d) are\nall the same (i.e. either the same value or both NULL).\n\nI read that there is a race condition with the above method as applied\nto a normal function ... does this apply to a trigger as well?\n\nOptimization Questions:\n-Can I do better with the trigger function itself?\n\n-I realize that I can create indexes on some of the lookup columns\n('key' in the above example). This would speed up the location of the\nupdate record but slow down the actual update insert, right? Would\nthis be a win? I tested an index on 10000 rows, and it beat out the\nnon-indexed by about 7% (3:31 with index, 3:45 without) ... is this\nall the benefit that I can expect?\n\n-Will moving pg_xlog to a different disk help all that much, if the\nwhole DB is currently on a 4 disk RAID10? What about moving the\nindexes? I've set up my postgresql.conf according to the docs and\nJosh Berkus' presentation, i.e. (16GB ram, quad Opteron moachine, not\nall settings are relevant):\nshared_buffers = 60000\ntemp_buffers = 10000\nwork_mem = 131072\nmaintenance_work_mem = 524288\neffective_cache_size = 120000\nrandom_page_cost = 2\nwal_buffers = 128\ncheckpoint_segments = 128\ncheckpoint_timeout = 3000\nmax_fsm_pages = 2000000\nmax_fsm_relations = 1000000\n\n-If I break up my dataset into smaller chunks and parallelize it,\ncould I get better total performance, or would I most likely be\nthrashing the disk?\n\n-If I sort the data in the COPY file by key (i.e. a,c,d) before\ninserting it into the database, will this help out the DB at all?\n\n-Its cleaner to just be able to insert everything into the database\nand let the DB aggregate the records, however I could use some of our\nextra hardware to do aggregation in perl and then output the already\naggregated records to the DB ... 
this has the advantage of being\neasily parallelizable but requires a bit of extra work to get right.\nDo you think that this is the best way to go?\n\nAlso, as a slight aside, without a trigger, COPY seems to process each\nrecord very quickly (using Perl DBI, about 7000 records/second)\nhowever there is a long pause once the last record has been delivered.\n Is this just the backend queuing up the insert commands given by\nperl, or is there extra processing that needs to be done at the end of\nthe COPY that could be taking a while (10s on 500K record COPY).\n\nThanks!\n", "msg_date": "Fri, 26 May 2006 14:48:20 -0400", "msg_from": "\"Worky Workerson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bulk loading/merging" }, { "msg_contents": "Another little question ... would using any sort of TEMP table help out, i.e.\nloading the unaggregated data into a TEMP table, aggregating the data via a\nSELECT INTO another TEMP table, and then finally INSERT ... SELECT into the\nmaster, aggregated, triggered table? It seems like this might be a win if\nA) the TEMP tables fit into memory, and B) the load data aggregates well.\nWorst case (i.e. all unique data in the load) seems like it might take much\nlonger, however, since I'm creating 2 new TEMP tables ....\n\nAnother little question ... would using any sort of TEMP table help out, i.e. loading the unaggregated data into a TEMP table, aggregating the data via a SELECT INTO another TEMP table, and then finally INSERT ... SELECT into the master, aggregated, triggered table?  It seems like this might be a win if A) the TEMP tables fit into memory, and B) the load data aggregates well.  Worst case (\ni.e. all unique data in the load) seems like it might take much longer, however, since I'm creating 2 new TEMP tables ....", "msg_date": "Fri, 26 May 2006 15:04:59 -0400", "msg_from": "\"Worky Workerson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bulk loading/merging" }, { "msg_contents": "Your best bet is to do this as a single, bulk operation if possible.\nThat way you can simply do an UPDATE ... WHERE EXISTS followed by an\nINSERT ... SELECT ... WHERE NOT EXISTS.\n\nOn Fri, May 26, 2006 at 02:48:20PM -0400, Worky Workerson wrote:\n> I've set up something similar the 'recommended' way to merge data into\n> the DB, i.e.\n> \n> http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\n> \n> however I did it with a trigger on insert, i.e. (not my schema :) ):\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 30 May 2006 20:09:22 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk loading/merging" }, { "msg_contents": "On 5/30/06, Jim C. Nasby <[email protected]> wrote:\n\n> Your best bet is to do this as a single, bulk operation if possible.\n> That way you can simply do an UPDATE ... WHERE EXISTS followed by an\n> INSERT ... SELECT ... WHERE NOT EXISTS.\n\n\n\n hmm, I don't quite understand what you are saying and I think my\nbasic misunderstanding is how to use the UPDATE ... WHERE EXISTS to merge\ndata in bulk. 
Assuming that I bulk COPYed the data into a temporary\ntable, I'd need to issue an UPDATE for each row in the newly created table,\nright?\n\nFor example, for a slightly different key,count schema:\n\nCREATE TABLE kc (key integer, count integer);\n\nand wanting to merge the following data by just updating the count for a\ngiven key to the equivalent of OLD.count + NEW.count:\n\n1,10\n2,15\n3,45\n1,30\n\nHow would I go about using UPDATE ... WHERE EXISTS to update the \"master\" kc\ntable from a (temporary) table loaded with the above data?\n\nOn 5/30/06, Jim C. Nasby <[email protected]> wrote:\n\nYour best bet is to do this as a single, bulk operation if possible.That way you can simply do an UPDATE ... WHERE EXISTS followed by an\nINSERT ... SELECT ... WHERE NOT EXISTS.\n \n \n\nhmm, I don't quite understand what you are saying and I think my basic misunderstanding is how to use the UPDATE ... WHERE EXISTS to merge data in bulk.  Assuming that I bulk COPYed the data into a temporary table, I'd need to issue an UPDATE for each row in the newly created table, right?  \n\n \nFor example, for a slightly different key,count schema:\n \nCREATE TABLE kc (key integer, count integer);\n \nand wanting to merge the following data by just updating the count for a given key to the equivalent of OLD.count + NEW.count:\n \n1,10\n2,15\n3,45\n1,30\n \nHow would I go about using UPDATE ... WHERE EXISTS to update the \"master\" kc table from a (temporary) table loaded with the above data?", "msg_date": "Thu, 1 Jun 2006 14:04:46 -0400", "msg_from": "\"Michael Artz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk loading/merging" }, { "msg_contents": "On 06/02/2006, Michael Artz wrote: \n\nhmm, I don't quite understand what you are saying and I think my basic\nmisunderstanding is how to use the UPDATE ... WHERE EXISTS to merge data in\nbulk. Assuming that I bulk COPYed the data into a temporary table, I'd need\nto issue an UPDATE for each row in the newly created table, right? \n\n \n\nFor example, for a slightly different key,count schema:\n\nCREATE TABLE kc (key integer, count integer);\n\nand wanting to merge the following data by just updating the count for a\ngiven key to the equivalent of OLD.count + NEW.count:\n\n1,10\n\n2,15\n\n3,45\n\n1,30\n\nHow would I go about using UPDATE ... WHERE EXISTS to update the \"master\" kc\ntable from a (temporary) table loaded with the above data?\n\n \n\nMay be, this method could help you:\n\nCREATE TEMP TABLE clip_temp (\n\n cids int8 NOT NULL,\n\n clip_id int8 NOT NULL,\n\n mentions int4 DEFAULT 0,\n\n CONSTRAINT pk_clip_temp PRIMARY KEY (cids, clip_id))\n\n)\n\ninsert data into this temporary table...\n\nthen do:\n\n \n\nUPDATE clip_category SET mentions=clip_temp.mentions\n\nFROM clip_temp\n\nWHERE clip_category.cids=clip_temp.cids\n\nAND clip_category.clip_id=clip_temp.clip_id\n\n \n\nDELETE FROM clip_temp USING clip_category\n\nWHERE clip_temp.cids=clip_category.cids\n\nAND clip_temp.clip_id=clip_category.clip_id\n\n \n\nINSERT INTO clip_category (cids, clip_id, mentions)\n\nSELECT * FROM clip_temp\n\n \n\nDROP TABLE clip_temp;\n\n \n\n \n\nBest regards,\n\nahmad fajar,\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\nOn 06/02/2006, Michael Artz wrote: \nhmm, I don't quite understand what\nyou are saying and I think my basic misunderstanding is how to use the\nUPDATE ... WHERE EXISTS to merge data in bulk.  Assuming that I bulk\nCOPYed the data into a temporary table, I'd need to issue an\nUPDATE for each row in the newly created table, right?  
\n \nFor example, for a slightly\ndifferent key,count schema:\nCREATE TABLE kc (key integer, count\ninteger);\nand wanting to merge the following\ndata by just updating the count for a given key to the equivalent of OLD.count\n+ NEW.count:\n1,10\n2,15\n3,45\n1,30\nHow would I go about using UPDATE\n... WHERE EXISTS to update the \"master\" kc table from a (temporary)\ntable loaded with the above data?\n \nMay be, this method could help you:\nCREATE TEMP TABLE clip_temp (\n  cids int8 NOT NULL,\n  clip_id int8 NOT NULL,\n  mentions int4 DEFAULT 0,\n  CONSTRAINT pk_clip_temp PRIMARY KEY\n(cids, clip_id))\n)\ninsert data into this temporary table...\nthen do:\n \nUPDATE clip_category SET\nmentions=clip_temp.mentions\nFROM clip_temp\nWHERE clip_category.cids=clip_temp.cids\nAND\nclip_category.clip_id=clip_temp.clip_id\n \nDELETE FROM clip_temp USING clip_category\nWHERE clip_temp.cids=clip_category.cids\nAND\nclip_temp.clip_id=clip_category.clip_id\n \nINSERT INTO clip_category (cids, clip_id,\nmentions)\nSELECT * FROM clip_temp\n \nDROP TABLE clip_temp;\n \n \nBest regards,\nahmad\nfajar,", "msg_date": "Sun, 4 Jun 2006 19:00:32 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk loading/merging" }, { "msg_contents": "On Thu, Jun 01, 2006 at 02:04:46PM -0400, Michael Artz wrote:\n> On 5/30/06, Jim C. Nasby <[email protected]> wrote:\n> \n> >Your best bet is to do this as a single, bulk operation if possible.\n> >That way you can simply do an UPDATE ... WHERE EXISTS followed by an\n> >INSERT ... SELECT ... WHERE NOT EXISTS.\n> \n> \n> \n> hmm, I don't quite understand what you are saying and I think my\n> basic misunderstanding is how to use the UPDATE ... WHERE EXISTS to merge\n> data in bulk. Assuming that I bulk COPYed the data into a temporary\n> table, I'd need to issue an UPDATE for each row in the newly created table,\n> right?\n> \n> For example, for a slightly different key,count schema:\n> \n> CREATE TABLE kc (key integer, count integer);\n> \n> and wanting to merge the following data by just updating the count for a\n> given key to the equivalent of OLD.count + NEW.count:\n> \n> 1,10\n> 2,15\n> 3,45\n> 1,30\n> \n> How would I go about using UPDATE ... WHERE EXISTS to update the \"master\" kc\n> table from a (temporary) table loaded with the above data?\n\nCREATE TEMP TABLE moo () LIKE kc;\nCOPY ... moo;\nBEGIN;\n UPDATE kc\n SET count=kc.count + moo.count\n FROM moo\n WHERE moo.key = kc.key\n ;\n INSERT INTO kc(key, count)\n SELECT key, count\n FROM moo\n WHERE NOT EXISTS (\n SELECT 1\n FROM kc\n WHERE kc.key = moo.key\n )\n ;\nCOMMIT;\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 5 Jun 2006 10:04:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk loading/merging" }, { "msg_contents": "\n\tHere are two ways to phrase a query... the planner choses very different \nplans as you will see. 
Everything is freshly ANALYZEd.\n\n\nEXPLAIN ANALYZE SELECT r.* FROM raw_annonces r LEFT JOIN annonces a ON \na.id=r.id LEFT JOIN archive_data d ON d.id=r.id WHERE a.id IS NULL AND \nd.id IS NULL AND r.id >1130306 order by id limit 1;\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.54 rows=1 width=627) (actual time=708.167..708.168 \nrows=1 loops=1)\n -> Merge Left Join (cost=0.00..128497.77 rows=50539 width=627) \n(actual time=708.165..708.165 rows=1 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Merge Left Join (cost=0.00..27918.92 rows=50539 width=627) \n(actual time=144.519..144.519 rows=1 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_annonces_pkey on raw_annonces r \n(cost=0.00..11222.32 rows=50539 width=627) (actual time=0.040..0.040 \nrows=1 loops=1)\n Index Cond: (id > 1130306)\n -> Index Scan using annonces_pkey on annonces a \n(cost=0.00..16118.96 rows=65376 width=4) (actual time=0.045..133.272 \nrows=65376 loops=1)\n -> Index Scan using archive_data_pkey on archive_data d \n(cost=0.00..98761.01 rows=474438 width=4) (actual time=0.060..459.995 \nrows=474438 loops=1)\n Total runtime: 708.316 ms\n\nEXPLAIN ANALYZE SELECT * FROM raw_annonces r WHERE r.id>1130306 AND NOT \nEXISTS( SELECT id FROM annonces WHERE id=r.id ) AND NOT EXISTS( SELECT id \n FROM archive_data WHERE id=r.id ) ORDER BY id LIMIT 1;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..38.12 rows=1 width=627) (actual time=0.040..0.041 \nrows=1 loops=1)\n -> Index Scan using raw_annonces_pkey on raw_annonces r \n(cost=0.00..481652.07 rows=12635 width=627) (actual time=0.039..0.039 \nrows=1 loops=1)\n Index Cond: (id > 1130306)\n Filter: ((NOT (subplan)) AND (NOT (subplan)))\n SubPlan\n -> Index Scan using archive_data_pkey on archive_data \n(cost=0.00..3.66 rows=1 width=4) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (id = $0)\n -> Index Scan using annonces_pkey on annonces \n(cost=0.00..5.65 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=1)\n Index Cond: (id = $0)\n Total runtime: 0.121 ms\n\n\n\tIdeas ?\n", "msg_date": "Mon, 12 Jun 2006 21:42:00 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Interesting slow query" }, { "msg_contents": "PFC <[email protected]> writes:\n> \tHere are two ways to phrase a query... the planner choses very different \n> plans as you will see. Everything is freshly ANALYZEd.\n\nUsually we get complaints the other way around (that the NOT EXISTS\napproach is a lot slower). You did not show any statistics, but I\nsuspect the key point here is that the condition id > 1130306 excludes\nmost or all of the A and D tables. 
The planner is not smart about\nmaking transitive inequality deductions, but you could help it along\nby adding the implied clauses yourself:\n\nEXPLAIN ANALYZE SELECT r.* FROM raw_annonces r\n LEFT JOIN annonces a ON (a.id=r.id AND a.id > 1130306)\n LEFT JOIN archive_data d ON (d.id=r.id AND d.id > 1130306)\n WHERE a.id IS NULL AND d.id IS NULL AND r.id > 1130306\n order by id limit 1;\n\nWhether this is worth doing in your app depends on how often you do\nsearches at the end of the ID range ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jun 2006 18:53:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting slow query " }, { "msg_contents": "\n> Usually we get complaints the other way around (that the NOT EXISTS\n> approach is a lot slower).\n\n\tYes, I know ;)\n\t(I rephrased the query this way to exploit the fact that the planner \nwould choose a nested loop)\n\n> You did not show any statistics, but I\n> suspect the key point here is that the condition id > 1130306 excludes\n> most or all of the A and D tables.\n\n\tRight.\n\n\tActually :\n\t- Table r (raw_annonces) contains raw data waiting to be processed\n\t- Table a (annonces) contains processed data ready for display on the \nwebsite (active data)\n\t- Table d (archive) contains old archived data which can be displayed on \nrequest but is normally excluded from the searches, which normally only \nhit recent records. This is to get speedy searches.\n\n\tSo, records are added into the \"raw\" table, these have a SERIAL primary \nkey.\n\tThen a script processes them and inserts the results into the active \ntable. 15 days of \"raw\" records are kept, then they are deleted.\n\tPeriodically old records from \"annonces\" are moved to the archive.\n\n\tThe promary key stays the same in the 3 tables.\n\n\tThe script knows at which id it stopped last time it was run, hence the \n(id > x) condition. 
Normally this excludes the entire \"annonces\" table, \nbecause we process only new records.\n\n> The planner is not smart about\n> making transitive inequality deductions, but you could help it along\n> by adding the implied clauses yourself:\n>\nEXPLAIN ANALYZE SELECT r.* FROM raw_annonces r LEFT JOIN annonces a ON \n(a.id=r.id AND a.id > 1180726) LEFT JOIN archive_data d ON (d.id=r.id AND \nd.id > 1180726) WHERE a.id IS NULL AND d.id IS NULL AND r.id > 1180726 \norder by id limit 1;\n>\n> Whether this is worth doing in your app depends on how often you do\n> searches at the end of the ID range ...\n\n\tQuite often actually, so I did the mod.\n\tThe interesting part is that, yesterday after ANALYZE the query plan was \nhorrible, and today, after adding new data I ANALYZED and retried the slow \nquery, and it was fast again :\n\nEXPLAIN ANALYZE SELECT r.* FROM raw_annonces r LEFT JOIN annonces a ON \n(a.id=r.id) LEFT JOIN archive_data d ON (d.id=r.id) WHERE a.id IS NULL AND \nd.id IS NULL AND r.id > 1180726 order by id limit 1;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..10.42 rows=1 width=631) (actual time=0.076..0.076 \nrows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..7129.11 rows=684 width=631) \n(actual time=0.074..0.074 rows=1 loops=1)\n Filter: (\"inner\".id IS NULL)\n -> Nested Loop Left Join (cost=0.00..4608.71 rows=684 \nwidth=631) (actual time=0.064..0.064 rows=1 loops=1)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_annonces_pkey on raw_annonces r \n(cost=0.00..667.56 rows=684 width=631) (actual time=0.013..0.013 rows=1 \nloops=1)\n Index Cond: (id > 1180726)\n -> Index Scan using annonces_pkey on annonces a \n(cost=0.00..5.75 rows=1 width=4) (actual time=0.046..0.046 rows=0 loops=1)\n Index Cond: (a.id = \"outer\".id)\n -> Index Scan using archive_data_pkey on archive_data d \n(cost=0.00..3.67 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (d.id = \"outer\".id)\n Total runtime: 0.197 ms\n\n\tSo I did a few tests...\n\nCREATE TABLE test.raw (id INTEGER PRIMARY KEY);\nCREATE TABLE test.active (id INTEGER PRIMARY KEY);\nCREATE TABLE test.archive (id INTEGER PRIMARY KEY);\nINSERT INTO test.archive SELECT * FROM generate_series( 1, 1000000 );\nINSERT INTO test.active SELECT * FROM generate_series( 1000001, 1100000 );\nINSERT INTO test.raw SELECT * FROM generate_series( 1050000, 1101000 );\nVACUUM ANALYZE;\n\nSo we have 1M archived records, 100K active, 51K in the \"raw\" table of \nwhich 1000 are new.\n\n\tQuery 1:\n\nEXPLAIN ANALYZE SELECT * FROM test.raw AS raw LEFT JOIN test.active AS \nactive ON (active.id=raw.id) LEFT JOIN test.archive AS archive ON \n(archive.id=raw.id) WHERE raw.id>1100000 AND active.id IS NULL AND \narchive.id IS NULL LIMIT 1;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..5.29 rows=1 width=12) (actual time=94.478..94.478 \nrows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..5400.09 rows=1021 width=12) \n(actual time=94.477..94.477 rows=1 loops=1)\n Filter: (\"inner\".id IS NULL)\n -> Merge Left Join (cost=0.00..2310.55 rows=1021 width=8) \n(actual time=94.458..94.458 rows=1 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_pkey on raw (cost=0.00..24.78 \nrows=1021 
width=4) (actual time=0.016..0.016 rows=1 loops=1)\n Index Cond: (id > 1100000)\n -> Index Scan using active_pkey on active \n(cost=0.00..2023.00 rows=100000 width=4) (actual time=0.005..76.572 \nrows=100000 loops=1)\n -> Index Scan using archive_pkey on archive (cost=0.00..3.01 \nrows=1 width=4) (actual time=0.013..0.013 rows=0 loops=1)\n Index Cond: (archive.id = \"outer\".id)\n Total runtime: 94.550 ms\n\n\tQuery 2:\n\nEXPLAIN ANALYZE SELECT * FROM test.raw AS raw LEFT JOIN test.active AS \nactive ON (active.id=raw.id AND active.id>1100000) LEFT JOIN test.archive \nAS archive ON (archive.id=raw.id AND archive.id > 1100000) WHERE \nraw.id>1100000 AND active.id IS NULL AND archive.id IS NULL LIMIT 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.04 rows=1 width=12) (actual time=0.035..0.035 rows=1 \nloops=1)\n -> Merge Left Join (cost=0.00..37.67 rows=1021 width=12) (actual \ntime=0.034..0.034 rows=1 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Merge Left Join (cost=0.00..30.51 rows=1021 width=8) (actual \ntime=0.026..0.026 rows=1 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_pkey on raw (cost=0.00..24.78 \nrows=1021 width=4) (actual time=0.016..0.016 rows=1 loops=1)\n Index Cond: (id > 1100000)\n -> Index Scan using active_pkey on active \n(cost=0.00..3.14 rows=10 width=4) (actual time=0.006..0.006 rows=0 loops=1)\n Index Cond: (id > 1100000)\n -> Index Scan using archive_pkey on archive (cost=0.00..4.35 \nrows=100 width=4) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (id > 1100000)\n Total runtime: 0.101 ms\n\n\tOK, you were right ;)\n\n\tQuery 3:\n\nEXPLAIN ANALYZE SELECT * FROM test.raw AS raw WHERE raw.id > 1100000 AND \nNOT EXISTS (SELECT 1 FROM test.active AS a WHERE a.id=raw.id) AND NOT \nEXISTS (SELECT 1 FROM test.archive AS a WHERE a.id=raw.id) LIMIT 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..24.23 rows=1 width=4) (actual time=0.036..0.036 rows=1 \nloops=1)\n -> Index Scan using raw_pkey on raw (cost=0.00..6178.35 rows=255 \nwidth=4) (actual time=0.035..0.035 rows=1 loops=1)\n Index Cond: (id > 1100000)\n Filter: ((NOT (subplan)) AND (NOT (subplan)))\n SubPlan\n -> Index Scan using archive_pkey on archive a \n(cost=0.00..3.01 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (id = $0)\n -> Index Scan using active_pkey on active a (cost=0.00..3.01 \nrows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (id = $0)\n Total runtime: 0.086 ms\n\n\tI see a problem with Query 1:\n\tThe Merge Join goes through tables \"raw\" and \"active\" in sorted order.\n\"archive\" contains values 1-1000000\n\"active\" contains values 1000001-1100000\n\"raw\" contains values 1050000-1101000\n\n\tHowever it starts at the beginning of \"active\" ; it would be smarter to \nstart the index scan of \"active\" at the lowest value in \"raw\", ie. to seek \ninto the right position into the index before beginning to scan it. 
This \nis achieved by your advice on manually adding the \"id > x\" conditions in \nthe query.\n\n\tHowever, if I want to join the full tables, dropping the id>x condition :\n\nEXPLAIN ANALYZE SELECT * FROM test.raw AS raw LEFT JOIN test.active AS \nactive ON (active.id=raw.id) LEFT JOIN test.archive AS archive ON \n(archive.id=raw.id) WHERE active.id IS NULL AND archive.id IS NULL;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..27305.04 rows=51001 width=12) (actual \ntime=837.196..838.099 rows=1000 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Merge Left Join (cost=0.00..3943.52 rows=51001 width=8) (actual \ntime=153.495..154.190 rows=1000 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_pkey on raw (cost=0.00..1033.01 \nrows=51001 width=4) (actual time=0.012..23.085 rows=51001 loops=1)\n -> Index Scan using active_pkey on active (cost=0.00..2023.00 \nrows=100000 width=4) (actual time=0.004..47.333 rows=100000 loops=1)\n -> Index Scan using archive_pkey on archive (cost=0.00..20224.00 \nrows=1000000 width=4) (actual time=0.043..501.953 rows=1000000 loops=1)\n Total runtime: 838.272 ms\n\n\tThis is very slow : the Index Scans on \"active\" and \"archive\" have to \nskip a huge number of rows before getting to the first interesting row. We \nknow that rows in \"active\" and \"archive\" will be of no use if their id is \n< (SELECT min(id) FROM test.raw) which is 1050000. Let's rephrase :\n\nEXPLAIN ANALYZE SELECT * FROM test.raw AS raw LEFT JOIN test.active AS \nactive ON (active.id=raw.id AND active.id >= 1050000) LEFT JOIN \ntest.archive AS archive ON (archive.id=raw.id AND archive.id >= 1050000) \nWHERE active.id IS NULL AND archive.id IS NULL;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..2837.93 rows=51001 width=12) (actual \ntime=114.590..115.451 rows=1000 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Merge Left Join (cost=0.00..2705.78 rows=51001 width=8) (actual \ntime=114.576..115.239 rows=1000 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n Filter: (\"inner\".id IS NULL)\n -> Index Scan using raw_pkey on raw (cost=0.00..1033.01 \nrows=51001 width=4) (actual time=0.012..51.505 rows=51001 loops=1)\n -> Index Scan using active_pkey on active (cost=0.00..1158.32 \nrows=50913 width=4) (actual time=0.009..22.312 rows=50001 loops=1)\n Index Cond: (id >= 1050000)\n -> Index Scan using archive_pkey on archive (cost=0.00..4.35 rows=100 \nwidth=4) (actual time=0.012..0.012 rows=0 loops=1)\n Index Cond: (id >= 1050000)\n Total runtime: 115.601 ms\n\n\tSo here's my point : the first operation in the Index Scan in a merge \njoin could be to seek to the right position in the index before scanning \nit. This value is known : it is the first value yielded by the index scan \non \"raw\".\n\n\tThis would remove the need for teaching the planner about transitivity, \nand also optimize this case where transitivity is useless.\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 13 Jun 2006 13:56:47 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting slow query " } ]
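Following up on the test case above (tables test.raw, test.active and test.archive as defined in the thread): since rows in "active" and "archive" with id below (SELECT min(id) FROM test.raw) can never match, the bound does not have to be hard-coded by the application. The variant below is only a sketch, not something run against the thread's data; the expectation that the uncorrelated min(id) subquery is evaluated once and then usable as an index bound is an assumption, not a verified plan.

-- Same full-table anti-join as above, but the lower bound is taken from
-- test.raw itself instead of being spliced in as a literal.  min(id) on a
-- primary-key column is a cheap index lookup.
SELECT r.*
  FROM test.raw r
  LEFT JOIN test.active  a ON a.id = r.id
                          AND a.id >= (SELECT min(id) FROM test.raw)
  LEFT JOIN test.archive x ON x.id = r.id
                          AND x.id >= (SELECT min(id) FROM test.raw)
 WHERE a.id IS NULL
   AND x.id IS NULL;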
[ { "msg_contents": "Hi,\n\nIs Postgres supposed to be able to handle concurrent requests while \ndoing large updates?\n\nThis morning I was executing the following simple update statement \nthat would affect 220,000 rows in my product table:\n\nupdate product set is_hungry = 'true' where date_modified > \ncurrent_date - 10;\n\nBut the application that accesses the product table for reading \nbecame very unresponsive while the update was happening.\n\nIs it just a matter of slow I/O? The CPU usage seemed very low (less \nthan 5%) and iostat showed less than 1 MB / sec throughput.\n\nI was doing the update in psql.\n\nAre there any settings that I could tweak that would help with this \nsort of thing?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\n\nHi,Is Postgres supposed to be able to handle concurrent requests while doing large updates?This morning I was executing the following simple update statement that would affect 220,000 rows in my product table:update product set is_hungry = 'true'  where date_modified > current_date - 10;But the application that accesses the product table for reading became very unresponsive while the update was happening.Is it just a matter of slow I/O? The CPU usage seemed very low (less than 5%) and iostat showed  less than 1 MB / sec throughput.I was doing the update in psql.Are there any settings that I could tweak that would help with this sort of thing?Thanks, ____________________________________________________________________Brendan Duddridge | CTO | 403-277-5591 x24 |  [email protected] ClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. SE Calgary, AB  T2G 0V9 http://www.clickspace.com", "msg_date": "Sun, 28 May 2006 03:37:57 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "App very unresponsive while performing simple update" }, { "msg_contents": "Further to my issue, the update never did finish. I received the \nfollowing message in psql:\n\nssprod=# update product set is_hungry = 'true' where date_modified > \ncurrent_date - 10;\nERROR: deadlock detected\nDETAIL: Process 18778 waits for ShareLock on transaction 711698780; \nblocked by process 15784.\nProcess 15784 waits for ShareLock on transaction 711697098; blocked \nby process 18778.\n\nThis is the second time I've tried to run this query without success.\n\nWould changing the isolation level to serializable in my psql session \nhelp with this?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 28, 2006, at 3:37 AM, Brendan Duddridge wrote:\n\n> Hi,\n>\n> Is Postgres supposed to be able to handle concurrent requests while \n> doing large updates?\n>\n> This morning I was executing the following simple update statement \n> that would affect 220,000 rows in my product table:\n>\n> update product set is_hungry = 'true' where date_modified > \n> current_date - 10;\n>\n> But the application that accesses the product table for reading \n> became very unresponsive while the update was happening.\n>\n> Is it just a matter of slow I/O? 
The CPU usage seemed very low \n> (less than 5%) and iostat showed less than 1 MB / sec throughput.\n>\n> I was doing the update in psql.\n>\n> Are there any settings that I could tweak that would help with this \n> sort of thing?\n>\n> Thanks,\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n\n\nFurther to my issue, the update never did finish. I received the following message in psql:ssprod=# update product set is_hungry = 'true'  where date_modified > current_date - 10;ERROR:  deadlock detectedDETAIL:  Process 18778 waits for ShareLock on transaction 711698780;  blocked by process 15784.Process 15784 waits for ShareLock on transaction 711697098; blocked by process 18778.This is the second time I've tried to run this query without success.Would changing the isolation level to serializable in my psql session help with this?Thanks, ____________________________________________________________________Brendan Duddridge | CTO | 403-277-5591 x24 |  [email protected] ClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. SE Calgary, AB  T2G 0V9 http://www.clickspace.com  On May 28, 2006, at 3:37 AM, Brendan Duddridge wrote:Hi,Is Postgres supposed to be able to handle concurrent requests while doing large updates?This morning I was executing the following simple update statement that would affect 220,000 rows in my product table:update product set is_hungry = 'true'  where date_modified > current_date - 10;But the application that accesses the product table for reading became very unresponsive while the update was happening.Is it just a matter of slow I/O? The CPU usage seemed very low (less than 5%) and iostat showed  less than 1 MB / sec throughput.I was doing the update in psql.Are there any settings that I could tweak that would help with this sort of thing?Thanks, ____________________________________________________________________Brendan Duddridge | CTO | 403-277-5591 x24 |  [email protected] ClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. SE Calgary, AB  T2G 0V9 http://www.clickspace.com", "msg_date": "Sun, 28 May 2006 03:43:23 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "Brendan Duddridge <[email protected]> writes:\n\n> Further to my issue, the update never did finish. I received the following\n> message in psql:\n> \n> ssprod=# update product set is_hungry = 'true' where date_modified >\n> current_date - 10;\n> ERROR: deadlock detected\n> DETAIL: Process 18778 waits for ShareLock on transaction 711698780; blocked\n> by process 15784.\n> Process 15784 waits for ShareLock on transaction 711697098; blocked by process\n> 18778.\n\nWhat queries are those two processes executing? And what foreign keys do you\nhave on the product table or elsewhere referring to the product table? And\nwhat indexes do you have on those columns?\n\nI think this indicates you have foreign keys causing the deadlock. One process\nis waiting until an update elsewhere finishes before modifying a record that\nother update refers to via a foreign key. But that other process is waiting\nsimilarly for the first one.\n\nDo you have any foreign keys in other tables referring to the product table?\nDo you have indexes on those other tables? 
The update needs to check those\nother tables to make sure there are no references to the records you're\nupdating. If there's no index it has to do a sequential scan.\n\nTo get a deadlock I think you would need another update running somewhere\nthough.\n\n\n\n-- \ngreg\n\n", "msg_date": "28 May 2006 09:11:21 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> What queries are those two processes executing? And what foreign keys do you\n> have on the product table or elsewhere referring to the product table? And\n> what indexes do you have on those columns?\n\nAnd what PG version is this? Alvaro fixed the\nforeign-keys-take-exclusive-locks problem in 8.1 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2006 12:04:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > What queries are those two processes executing? And what foreign keys do you\n> > have on the product table or elsewhere referring to the product table? And\n> > what indexes do you have on those columns?\n> \n> And what PG version is this? Alvaro fixed the\n> foreign-keys-take-exclusive-locks problem in 8.1 ...\n\nExcept I don't think this is taking an exclusive lock at all. The original\npost had the deadlock detection fire on a SharedLock. I think the other\nprocess is also an update and is holding an exclusive lock while also\ntrying to acquire a SharedLock for a foreign key column.\n\n-- \ngreg\n\n", "msg_date": "28 May 2006 13:55:54 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "Hi,\n\nThanks for your replies.\n\nWe are using PostgreSQL 8.1.3 on OS X Server.\n\nWe do have foreign keys on other tables that reference the product \ntable. Also, there will be updates going on at the same time as this \nupdate. When anyone clicks on a product details link, we issue an \nupdate statement to increment the click_count on the product. e.g. 
\nupdate product set click_count = click_count + 1;\n\nThere are 1.2 million rows in this table and my update will affect \n200,000 of them.\n\nWe do have indexes on all foreign keys that reference the product table.\n\nHere's what our product table looks like:\n\n Table \"public.product\"\n Column | Type | Modifiers\n------------------------------+-----------------------------+-----------\nclick_count | integer |\ndate_created | timestamp without time zone | not null\ndate_modified | timestamp without time zone |\ndate_of_last_keyphrase_match | timestamp without time zone |\nean | character varying(32) |\ngtin | character varying(32) |\nhome_category_id | integer |\nis_active | character varying(5) |\nis_featured | character varying(5) |\nis_hungry | character varying(5) |\nisbn | character varying(32) |\nmanufacturer_id | integer |\nmedia_for_clipboard_id | integer |\nmedia_for_detail_id | integer |\nmedia_for_thumbnail_id | integer |\nmpn | character varying(512) |\nproduct_id | integer | not null\nstatus_code | character varying(32) |\nunsps_code | bigint |\nupc | character varying(32) |\nriding_id | integer |\nname_en | character varying(512) |\nname_fr | character varying(512) |\nshort_description_en | character varying(2048) |\nshort_description_fr | character varying(2048) |\nlong_description_en | text |\nlong_description_fr | text |\nIndexes:\n \"product_pk\" PRIMARY KEY, btree (product_id)\n \"product__active_status_idx\" btree (is_active, status_code)\n \"product__additional_0__idx\" btree (riding_id)\n \"product__date_created_idx\" btree (date_created)\n \"product__date_modified_idx\" btree (date_modified)\n \"product__date_of_last_keyphrase_match_idx\" btree \n(date_of_last_keyphrase_match)\n \"product__home_category_id_fk_idx\" btree (home_category_id)\n \"product__hungry_idx\" btree (is_hungry)\n \"product__lower_name_en_idx\" btree (lower(name_en::text))\n \"product__lower_name_fr_idx\" btree (lower(name_fr::text))\n \"product__manufacturer_id_fk_idx\" btree (manufacturer_id)\n \"product__manufacturer_id_mpn_idx\" btree (manufacturer_id, mpn)\n \"product__media_for_clipboard_id_fk_idx\" btree \n(media_for_clipboard_id)\n \"product__media_for_detail_id_fk_idx\" btree (media_for_detail_id)\n \"product__media_for_thumbnail_id_fk_idx\" btree \n(media_for_thumbnail_id)\n \"product__upc_idx\" btree (upc)\n \"product_additional_2__idx\" btree (is_active, status_code) WHERE \nis_active::text = 'true'::text AND status_code::text = 'complete'::text\nForeign-key constraints:\n \"product_homecategory_fk\" FOREIGN KEY (home_category_id) \nREFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n \"product_manufacturer_fk\" FOREIGN KEY (manufacturer_id) \nREFERENCES manufacturer(manufacturer_id) DEFERRABLE INITIALLY DEFERRED\n \"product_mediaforclipboard_fk\" FOREIGN KEY \n(media_for_clipboard_id) REFERENCES media(media_id) DEFERRABLE \nINITIALLY DEFERRED\n \"product_mediafordetail_fk\" FOREIGN KEY (media_for_detail_id) \nREFERENCES media(media_id) DEFERRABLE INITIALLY DEFERRED\n \"product_mediaforthumbnail_fk\" FOREIGN KEY \n(media_for_thumbnail_id) REFERENCES media(media_id) DEFERRABLE \nINITIALLY DEFERRED\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. 
SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 28, 2006, at 10:04 AM, Tom Lane wrote:\n\n> Greg Stark <[email protected]> writes:\n>> What queries are those two processes executing? And what foreign \n>> keys do you\n>> have on the product table or elsewhere referring to the product \n>> table? And\n>> what indexes do you have on those columns?\n>\n> And what PG version is this? Alvaro fixed the\n> foreign-keys-take-exclusive-locks problem in 8.1 ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n", "msg_date": "Sun, 28 May 2006 13:17:07 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: App very unresponsive while performing simple update " }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> And what PG version is this? Alvaro fixed the\n>> foreign-keys-take-exclusive-locks problem in 8.1 ...\n\n> Except I don't think this is taking an exclusive lock at all. The original\n> post had the deadlock detection fire on a SharedLock.\n\nYeah, but it was a ShareLock on a transaction ID, which is the trace\nof something doing XactLockTableWait, which is only done if we're\nblocking on a locked or updated-but-uncommitted row.\n\nSince Brendan says he's using 8.1, the FK theory is out, and I think\nwhat this probably is is a garden-variety deadlock on tuple updates, ie,\ntwo concurrent transactions tried to update the same tuples in different\norders.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 May 2006 17:32:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update " }, { "msg_contents": "Brendan Duddridge <[email protected]> writes:\n\n> We do have foreign keys on other tables that reference the product table.\n> Also, there will be updates going on at the same time as this update. When\n> anyone clicks on a product details link, we issue an update statement to\n> increment the click_count on the product. e.g. update product set click_count\n> = click_count + 1;\n\nYou should realize this will produce a lot of garbage records and mean you'll\nhave to be running vacuum very frequently. You might consider instead of\nupdating the main table inserting into a separate clickstream table. That\ntrades off not getting instantaneous live totals with isolating the\nmaintenance headache in a single place. That table will grow large but you can\nprune it at your leisure without impacting query performance on your main\ntables.\n\n> There are 1.2 million rows in this table and my update will affect 200,000\n> of them.\n> \n> We do have indexes on all foreign keys that reference the product table.\n\nWell I suppose you had an update running concurrently against one of CATEGORY,\nMANUFACTURER, or MEDIA. Do any of those tables have a reference back to the\nproduct table? Is it possible to have a record with a reference back to the\nsame record that refers to it?\n\nI think you're seeing the problem because these foreign keys are all initially\ndeferred. 
That means you can update both tables and then can't commit either\none because it needs to obtain a shared lock on the other record which is\nalready locked for the update.\n\nI'm not certain that making them not deferred would actually eliminate the\ndeadlock. It might just make it less likely. \n\nThe deferred foreign key checks may also be related to the performance\ncomplaints. In my experience they're quite fast but I wonder what happens when\nyou do a large batch update and then need to perform a whole slew of deferred\nforeign key checks.\n\nMore likely you were blocking on some lock. Until that other query holding\nthat lock tries to commit Postgres won't actually detect a deadlock, it'll\njust sit waiting until the lock becomes available.\n\nAlso, you have a lot of indexes here. That alone will make updates pretty\nslow.\n\n-- \ngreg\n\n", "msg_date": "28 May 2006 19:20:59 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n>\n> > Except I don't think this is taking an exclusive lock at all. The original\n> > post had the deadlock detection fire on a SharedLock.\n> \n> Yeah, but it was a ShareLock on a transaction ID, which is the trace\n> of something doing XactLockTableWait, which is only done if we're\n> blocking on a locked or updated-but-uncommitted row.\n\nOops, didn't see this before I sent my last message. Brendan, in case it's not\nclear, in case of a conflict between my explanation and Tom's listen to Tom.\n\n:)\n\n\n-- \ngreg\n\n", "msg_date": "28 May 2006 19:24:14 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:\n> Brendan Duddridge <[email protected]> writes:\n> \n> > We do have foreign keys on other tables that reference the product table.\n> > Also, there will be updates going on at the same time as this update. When\n> > anyone clicks on a product details link, we issue an update statement to\n> > increment the click_count on the product. e.g. update product set click_count\n> > = click_count + 1;\n> \n> You should realize this will produce a lot of garbage records and mean you'll\n> have to be running vacuum very frequently. You might consider instead of\n> updating the main table inserting into a separate clickstream table. That\n> trades off not getting instantaneous live totals with isolating the\n> maintenance headache in a single place. That table will grow large but you can\n> prune it at your leisure without impacting query performance on your main\n> tables.\n \nActually, you can still get instant results, you just have to hit two\ntables to do it.\n\n> More likely you were blocking on some lock. Until that other query holding\n> that lock tries to commit Postgres won't actually detect a deadlock, it'll\n> just sit waiting until the lock becomes available.\n\nWow, are you sure that's how it works? I would think it would be able to\ndetect deadlocks as soon as both processes are waiting on each other's\nlocks.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 31 May 2006 01:23:07 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": ">> You should realize this will produce a lot of garbage records and \n>> mean you'll\n>> have to be running vacuum very frequently. You might consider \n>> instead of\n>> updating the main table inserting into a separate clickstream \n>> table. That\n>> trades off not getting instantaneous live totals with isolating the\n>> maintenance headache in a single place. That table will grow large \n>> but you can\n>> prune it at your leisure without impacting query performance on \n>> your main\n>> tables.\n\nWe actually already have a table for this purpose. product_click_history\n\n>\n> Actually, you can still get instant results, you just have to hit two\n> tables to do it.\n\nWell, not really for our situation. We use the click_count on product \nto sort our product listings by popularity. Joining with our \nproduct_click_history to get live counts would be very slow. Some \ncategories have many tens of thousands of products. Any joins outside \nour category_product table tend to be very slow.\n\nWe'll probably have to write a process to update the click_count from \nquerying our product_click_history table.\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 31, 2006, at 12:23 AM, Jim C. Nasby wrote:\n\n> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:\n>> Brendan Duddridge <[email protected]> writes:\n>>\n>>> We do have foreign keys on other tables that reference the \n>>> product table.\n>>> Also, there will be updates going on at the same time as this \n>>> update. When\n>>> anyone clicks on a product details link, we issue an update \n>>> statement to\n>>> increment the click_count on the product. e.g. update product \n>>> set click_count\n>>> = click_count + 1;\n>>\n>> You should realize this will produce a lot of garbage records and \n>> mean you'll\n>> have to be running vacuum very frequently. You might consider \n>> instead of\n>> updating the main table inserting into a separate clickstream \n>> table. That\n>> trades off not getting instantaneous live totals with isolating the\n>> maintenance headache in a single place. That table will grow large \n>> but you can\n>> prune it at your leisure without impacting query performance on \n>> your main\n>> tables.\n>\n> Actually, you can still get instant results, you just have to hit two\n> tables to do it.\n>\n>> More likely you were blocking on some lock. Until that other query \n>> holding\n>> that lock tries to commit Postgres won't actually detect a \n>> deadlock, it'll\n>> just sit waiting until the lock becomes available.\n>\n> Wow, are you sure that's how it works? I would think it would be \n> able to\n> detect deadlocks as soon as both processes are waiting on each other's\n> locks.\n> -- \n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Wed, 31 May 2006 00:29:50 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:\n> We'll probably have to write a process to update the click_count from  \n> querying our product_click_history table.\n\nHow about an insert trigger on product_click_history which updates click_count \nevery say 10000 transactions or so?\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Wed, 31 May 2006 08:34:44 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "On Wed, May 31, 2006 at 01:23:07 -0500,\n \"Jim C. Nasby\" <[email protected]> wrote:\n> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:\n> > Brendan Duddridge <[email protected]> writes:\n> > More likely you were blocking on some lock. Until that other query holding\n> > that lock tries to commit Postgres won't actually detect a deadlock, it'll\n> > just sit waiting until the lock becomes available.\n> \n> Wow, are you sure that's how it works? I would think it would be able to\n> detect deadlocks as soon as both processes are waiting on each other's\n> locks.\n\nI don't see how it could wait for a commit. If a command is blocked waiting for\na lock, how are you going to get a commit (you might get a rollback if the\nquery is aborted)?\n", "msg_date": "Wed, 31 May 2006 10:11:41 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n\n> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:\n> > Brendan Duddridge <[email protected]> writes:\n> > \n> > > We do have foreign keys on other tables that reference the product table.\n> > > Also, there will be updates going on at the same time as this update. When\n> > > anyone clicks on a product details link, we issue an update statement to\n> > > increment the click_count on the product. e.g. update product set click_count\n> > > = click_count + 1;\n> > \n> > You should realize this will produce a lot of garbage records and mean you'll\n> > have to be running vacuum very frequently. You might consider instead of\n> > updating the main table inserting into a separate clickstream table. That\n> > trades off not getting instantaneous live totals with isolating the\n> > maintenance headache in a single place. That table will grow large but you can\n> > prune it at your leisure without impacting query performance on your main\n> > tables.\n> \n> Actually, you can still get instant results, you just have to hit two\n> tables to do it.\n\nBut that defeats the purpose of moving this traffic out to the clickstream\ntable. 
The whole point is to avoid generating garbage records in your main\ntable that you're doing a lot of real-time queries against.\n\nI would probably keep the clickstream table, then once a day or perhaps more\noften perform an aggregate query against it to generate a summary table (and\nthen vacuum full or cluster it since it's half garbage). Then join from the\nmain product table to the summary table to sort by popularity.\n\nIf you need results that are more up-to-date than 24 hours and/or can't stand\nthe downtime of the daily vacuum full on the summary table it becomes a lot\nharder.\n\n> > More likely you were blocking on some lock. Until that other query holding\n> > that lock tries to commit Postgres won't actually detect a deadlock, it'll\n> > just sit waiting until the lock becomes available.\n> \n> Wow, are you sure that's how it works? I would think it would be able to\n> detect deadlocks as soon as both processes are waiting on each other's\n> locks.\n\nI didn't mean to describe the general situation, just what I suspected was\nhappening in this case. The user had a large batch update that was performing\npoorly. I suspect it may have been performing poorly because it was spending\ntime waiting to acquire an exclusive lock. There would be no deadlock yet,\njust very slow updates.\n\nHowever the other client updating the other table has deferred foreign key\nconstraints back to the table the big update is acquiring all these exclusive\nlocks. Locks for deferred constraints aren't taken until they're checked. So\nthe actual deadlock doesn't occur until the commit occurs.\n\nIn any case Tom said I was misunderstanding the deadlock message he posted.\nThe kind of situation I'm talking about would look something like this:\n\nstark=> begin; \nBEGIN \n stark=> begin; \n BEGIN \nstark=> update t1 set a = 0; \nUPDATE 1 \nstark=> update t1 set a = 1; \nUPDATE 1 \n \n stark=> update t2 set b = 0; \n UPDATE 1 \n stark=> update t2 set b = 2; \n UPDATE 1 \nstark=> commit; \n stark=> commit; \n ERROR: deadlock detected \n DETAIL: Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566\n Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531. \n CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"t1\" x WHERE \"a\" = $1 FOR SHARE OF x\"\n stark=> > \nCOMMIT \nstark=> \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers \n--------+---------+-----------\n a | integer | not null\n b | integer | \nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (a)\nForeign-key constraints:\n \"fk\" FOREIGN KEY (b) REFERENCES t2(b) DEFERRABLE INITIALLY DEFERRED\n\nstark=> \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers \n--------+---------+-----------\n a | integer | \n b | integer | not null\nIndexes:\n \"t2_pkey\" PRIMARY KEY, btree (b)\nForeign-key constraints:\n \"fk\" FOREIGN KEY (a) REFERENCES t1(a) DEFERRABLE INITIALLY DEFERRED\n\n", "msg_date": "31 May 2006 11:24:05 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "Hi Jan,\n\nThat sounds like a great idea! How would you control the update to \noccur only every 10,000 transactions?\n\nIs there a trigger setting for that somewhere?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. 
SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn May 31, 2006, at 6:34 AM, Jan de Visser wrote:\n\n> On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:\n>> We'll probably have to write a process to update the click_count from\n>> querying our product_click_history table.\n>\n> How about an insert trigger on product_click_history which updates \n> click_count\n> every say 10000 transactions or so?\n>\n> jan\n>\n> -- \n> --------------------------------------------------------------\n> Jan de Visser [email protected]\n>\n> Baruk Khazad! Khazad ai-menu!\n> --------------------------------------------------------------\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n", "msg_date": "Wed, 31 May 2006 11:34:38 -0600", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "On Wednesday 31 May 2006 13:34, Brendan Duddridge wrote:\n> Hi Jan,\n>\n> That sounds like a great idea! How would you control the update to\n> occur only every 10,000 transactions?\n>\n> Is there a trigger setting for that somewhere?\n\nI was thinking something like\n\nIF count(*) % 10000 = 0 then\n ... do stuff ...\nend if\n\nProblem may be that that may be a bit expensive; maybe better to have a \nsequence and use the sequence value. \n\nOr something like that.\n\nAlso, maybe you should do the actual update of click_count not in the trigger \nitself, but have the trigger do a NOTIFY and have another process do a \nLISTEN. Depends how long the update takes.\n\njan\n\n>\n> Thanks,\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n> On May 31, 2006, at 6:34 AM, Jan de Visser wrote:\n> > On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:\n> >> We'll probably have to write a process to update the click_count from\n> >> querying our product_click_history table.\n> >\n> > How about an insert trigger on product_click_history which updates\n> > click_count\n> > every say 10000 transactions or so?\n> >\n> > jan\n> >\n> > --\n> > --------------------------------------------------------------\n> > Jan de Visser [email protected]\n> >\n> > Baruk Khazad! Khazad ai-menu!\n> > --------------------------------------------------------------\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! 
Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Wed, 31 May 2006 13:43:40 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "On Wed, May 31, 2006 at 11:24:05AM -0400, Greg Stark wrote:\n> stark=> begin; \n> BEGIN \n> stark=> begin; \n> BEGIN \n> stark=> update t1 set a = 0; \n> UPDATE 1 \n> stark=> update t1 set a = 1; \n> UPDATE 1 \n> \n> stark=> update t2 set b = 0; \n> UPDATE 1 \n> stark=> update t2 set b = 2; \n> UPDATE 1 \n> stark=> commit; \n> stark=> commit; \n> ERROR: deadlock detected \n> DETAIL: Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566\n> Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531. \n> CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"t1\" x WHERE \"a\" = $1 FOR SHARE OF x\"\n> stark=> > \n> COMMIT \n\nI tried duplicating this but couldn't. What's the data in the tables?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 31 May 2006 22:18:05 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n\n> I tried duplicating this but couldn't. What's the data in the tables?\n\nSorry, I had intended to include the definition and data:\n\nstark=> create table t1 (a integer primary key, b integer);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"t1_pkey\" for table \"t1\"\nCREATE TABLE\n\nstark=> create table t2 (a integer, b integer primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"t2_pkey\" for table \"t2\"\nCREATE TABLE\n\nstark=> insert into t1 values (1,2);\nINSERT 0 1\n\nstark=> insert into t2 values (1,2);\nINSERT 0 1\n\nstark=> alter table t1 add constraint fk foreign key (b) references t2 deferrable initially deferred ;\nALTER TABLE\n\nstark=> alter table t2 add constraint fk foreign key (a) references t1 deferrable initially deferred ;\nALTER TABLE\n\nstark=> \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers \n--------+---------+-----------\n a | integer | not null\n b | integer | \nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (a)\nForeign-key constraints:\n \"fk\" FOREIGN KEY (b) REFERENCES t2(b) DEFERRABLE INITIALLY DEFERRED\n\nstark=> \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers \n--------+---------+-----------\n a | integer | \n b | integer | not null\nIndexes:\n \"t2_pkey\" PRIMARY KEY, btree (b)\nForeign-key constraints:\n \"fk\" FOREIGN KEY (a) REFERENCES t1(a) DEFERRABLE INITIALLY DEFERRED\n\n\nstark=> select * from t1;\n a | b \n---+---\n 1 | 2\n(1 row)\n\nstark=> select * from t2;\n a | b \n---+---\n 1 | 2\n(1 row)\n\n\n\n\n-- \ngreg\n\n", "msg_date": "01 Jun 2006 00:30:46 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: App very unresponsive while performing simple update" } ]
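To make the "every 10000 inserts" idea from this thread concrete, here is a rough PL/pgSQL sketch. It is untested: the columns of product_click_history are not shown in the thread, so a product_id column is assumed, and the sequence, function and trigger names are invented. Under READ COMMITTED the roll-up below could delete history rows that committed after its aggregate snapshot was taken (those clicks would be lost), so a production version should delete only the rows it actually aggregated, or run the roll-up from a single external job (for example via the NOTIFY/LISTEN route Jan mentions) instead of from a trigger.

CREATE SEQUENCE click_flush_seq;

CREATE OR REPLACE FUNCTION flush_click_counts() RETURNS trigger AS $$
BEGIN
    -- Cheap on every insert; the expensive part runs once per ~10000 rows.
    IF nextval('click_flush_seq') % 10000 = 0 THEN
        UPDATE product p
           SET click_count = COALESCE(p.click_count, 0) + h.n
          FROM (SELECT product_id, count(*) AS n
                  FROM product_click_history
                 GROUP BY product_id) h
         WHERE p.product_id = h.product_id;
        -- See the caveat above: this should really be limited to the rows
        -- that were visible to the UPDATE.
        DELETE FROM product_click_history;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_click_history_flush
    AFTER INSERT ON product_click_history
    FOR EACH ROW EXECUTE PROCEDURE flush_click_counts();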
[ { "msg_contents": "Hi.\n\nI have 2 tables - one with calls numbers and another with calls codes.\nThe structure almost like this:\nbilling=# \\d a_voip\n Table \"public.a_voip\"\n Column | Type |\n Modifiers\n--------------------+-----------------------------+-----------------------------------------------------\n id | integer | not null default\nnextval('a_voip_id_seq'::regclass)\n tm | timestamp without time zone | not null\n user_name | character varying(50) | not null\n...\n calling_station_id | character varying(20) | not null\n called_station_id | character varying(20) | not null\nIndexes:\n \"a_voip_pkey\" PRIMARY KEY, btree (id)\n \"a_voip_tm\" btree (tm)\n\nbilling=# \\d a_voip_codes\n Table \"public.a_voip_codes\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n code | integer | not null\n region | character varying(77) |\n tarif | numeric(13,7) |\nIndexes:\n \"a_voip_codes_pkey\" PRIMARY KEY, btree (code)\n\nI need to select longest codes from a_voip_codes which match with the\nthe called_station_id. Because codes (very rarely) changes I construct\nquery\n\nSELECT user_name, called_station_id,\n(SELECT code FROM a_voip_codes AS c where v.called_station_id like\nc.code || '%' order by code desc limit 1) AS code\n FROM a_voip AS v WHERE user_name = 'dixi' AND tm between '2006-04-01'\nand '2006-05-01' group by user_name, called_station_id;\n\nAnalyzed variant\nbilling=# explain analyze SELECT user_name, called_station_id, (SELECT\ncode FROM a_voip_codes AS c where v.called_station_id like c.code ||\n'%' order by code desc limit 1) AS code FROM a_voip AS v WHERE\nuser_name = 'dixi' AND tm between '2006-04-01' and '2006-05-01' group\nby user_name, called_station_id;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=11515.93..12106.26 rows=69 width=22) (actual\ntime=215.719..677.044 rows=130 loops=1)\n -> Bitmap Heap Scan on a_voip v (cost=1106.66..11513.16 rows=554\nwidth=22) (actual time=72.336..207.618 rows=848 loops=1)\n Recheck Cond: ((tm >= '2006-04-01 00:00:00'::timestamp\nwithout time zone) AND (tm <= '2006-05-01 00:00:00'::timestamp without\ntime zone))\n Filter: ((user_name)::text = 'dixi'::text)\n -> Bitmap Index Scan on a_voip_tm (cost=0.00..1106.66\nrows=90943 width=0) (actual time=69.441..69.441 rows=93594 loops=1)\n Index Cond: ((tm >= '2006-04-01 00:00:00'::timestamp\nwithout time zone) AND (tm <= '2006-05-01 00:00:00'::timestamp without\ntime zone))\n SubPlan\n -> Limit (cost=0.00..8.55 rows=1 width=4) (actual\ntime=3.565..3.567 rows=1 loops=130)\n -> Index Scan Backward using a_voip_codes_pkey on\na_voip_codes c (cost=0.00..85.45 rows=10 width=4) (actual\ntime=3.560..3.560 rows=1 loops=130)\n Filter: (($0)::text ~~ ((code)::text || '%'::text))\n Total runtime: 678.186 ms\n(11 rows)\n\nIt is ugly, however not so long (but only for 69 rows). 
If I want to\nselect for ALL users it goes veeeery long:\nbilling=# explain analyze SELECT user_name, called_station_id, (SELECT\ncode FROM a_voip_codes AS c where v.called_station_id like c.code ||\n'%' order by code desc limit 1) AS code FROM a_voip AS v WHERE tm\nbetween '2006-04-01' and '2006-05-01' group by user_name,\ncalled_station_id;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=11740.52..107543.85 rows=11198 width=22) (actual\ntime=779.488..75637.623 rows=20564 loops=1)\n -> Bitmap Heap Scan on a_voip v (cost=1106.66..11285.81\nrows=90943 width=22) (actual time=72.539..274.850 rows=90204 loops=1)\n Recheck Cond: ((tm >= '2006-04-01 00:00:00'::timestamp\nwithout time zone) AND (tm <= '2006-05-01 00:00:00'::timestamp without\ntime zone))\n -> Bitmap Index Scan on a_voip_tm (cost=0.00..1106.66\nrows=90943 width=0) (actual time=69.853..69.853 rows=93594 loops=1)\n Index Cond: ((tm >= '2006-04-01 00:00:00'::timestamp\nwithout time zone) AND (tm <= '2006-05-01 00:00:00'::timestamp without\ntime zone))\n SubPlan\n -> Limit (cost=0.00..8.55 rows=1 width=4) (actual\ntime=3.631..3.633 rows=1 loops=20564)\n -> Index Scan Backward using a_voip_codes_pkey on\na_voip_codes c (cost=0.00..85.45 rows=10 width=4) (actual\ntime=3.623..3.623 rows=1 loops=20564)\n Filter: (($0)::text ~~ ((code)::text || '%'::text))\n Total runtime: 75652.199 ms\n(10 rows)\n\nSo I want to ask, how can I reorganize query/structure for achieve\ngood performance?\n\n I experiment with additional column (matched_code) for a_voip table\nand think about RULE which will update that column \"matched_code\"\ndoing the (SELECT code FROM a_voip_codes AS c where\nv.called_station_id like c.code || '%' order by code desc limit 1) job\nwhen a_voip_codes updated. Or about TRIGGER. But this may also takes\nlong time, especially with short \"code\" numbers (like 1 digit). 
Look:\n\nbilling=# explain analyze UPDATE a_voip SET matched_code = (SELECT\ncode FROM a_voip_codes AS c WHERE a_voip.called_station_id like c.code\n|| '%' order by code desc limit 1) WHERE matched_code LIKE '1%';\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on a_voip (cost=20.34..20467.27 rows=2057\nwidth=168) (actual time=13.407..22201.369 rows=2028 loops=1)\n Filter: ((matched_code)::text ~~ '1%'::text)\n -> Bitmap Index Scan on a_voip_matched_code (cost=0.00..20.34\nrows=2057 width=0) (actual time=2.035..2.035 rows=2028 loops=1)\n Index Cond: (((matched_code)::text >= '1'::character varying)\nAND ((matched_code)::text < '2'::character varying))\n SubPlan\n -> Limit (cost=0.00..8.55 rows=1 width=4) (actual\ntime=10.909..10.911 rows=1 loops=2028)\n -> Index Scan Backward using a_voip_codes_pkey on\na_voip_codes c (cost=0.00..85.45 rows=10 width=4) (actual\ntime=10.923..10.923 rows=1 loops=2028)\n Filter: (($0)::text ~~ ((code)::text || '%'::text))\n Total runtime: 23216.770 ms\n(9 rows)\n\nIs there any other ways to connect longest \"code\" with \"called_station_id\"?\n-- \nengineer\n", "msg_date": "Mon, 29 May 2006 15:53:24 +0600", "msg_from": "\"Anton Maksimenkov\" <[email protected]>", "msg_from_op": true, "msg_subject": "select with \"like\" from another table" }, { "msg_contents": "On 5/29/06, Anton Maksimenkov <[email protected]> wrote:\n> Hi.\n>\n> I have 2 tables - one with calls numbers and another with calls codes.\n> The structure almost like this:\n...\n\nHow long does this query take?\n\nSELECT code FROM a_voip_codes c, a_voip v where v.called_station_id\nlike c.code ||\n'%' order by code desc limit 1\n\nI wonder if you'll benefit from an index on a_voip(called_station_id)\nto speed up this join.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Mon, 29 May 2006 20:36:43 +1000", "msg_from": "\"chris smith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select with \"like\" from another table" }, { "msg_contents": "> > I have 2 tables - one with calls numbers and another with calls codes.\n> > The structure almost like this:\n> ...\n> How long does this query take?\n>\n> SELECT code FROM a_voip_codes c, a_voip v where v.called_station_id\n> like c.code ||\n> '%' order by code desc limit 1\n\nbilling=# explain analyze SELECT code FROM a_voip_codes c, a_voip v\nwhere v.called_station_id like c.code || '%' order by code desc limit\n1;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..11.24 rows=1 width=4) (actual\ntime=15809.846..15809.848 rows=1 loops=1)\n -> Nested Loop (cost=0.00..35877212.61 rows=3192650 width=4)\n(actual time=15809.841..15809.841 rows=1 loops=1)\n Join Filter: ((\"inner\".called_station_id)::text ~~\n((\"outer\".code)::text || '%'::text))\n -> Index Scan Backward using a_voip_codes_pkey on\na_voip_codes c (cost=0.00..69.87 rows=2078 width=4) (actual\ntime=0.029..0.106 rows=6 loops=1)\n -> Seq Scan on a_voip v (cost=0.00..11887.81 rows=307281\nwidth=13) (actual time=1.696..935.368 rows=254472 loops=6)\n Total runtime: 15810.088 ms\n(6 rows)\n\n\n> I wonder if you'll benefit from an index on a_voip(called_station_id)\n> to speed up this join.\n\nYes, it's long. 
But index gives no help here:\n\nbilling=# CREATE INDEX a_voip_called_station_id ON a_voip(called_station_id);\nCREATE INDEX\nbilling=# vacuum analyze;\nVACUUM\nbilling=# explain analyze SELECT code FROM a_voip_codes c, a_voip v\nwhere v.called_station_id like c.code || '%' order by code desc limit\n1;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..11.27 rows=1 width=4) (actual\ntime=15254.783..15254.785 rows=1 loops=1)\n -> Nested Loop (cost=0.00..35767665.65 rows=3172732 width=4)\n(actual time=15254.778..15254.778 rows=1 loops=1)\n Join Filter: ((\"inner\".called_station_id)::text ~~\n((\"outer\".code)::text || '%'::text))\n -> Index Scan Backward using a_voip_codes_pkey on\na_voip_codes c (cost=0.00..69.87 rows=2078 width=4) (actual\ntime=0.021..0.097 rows=6 loops=1)\n -> Seq Scan on a_voip v (cost=0.00..11868.64 rows=305364\nwidth=13) (actual time=0.006..750.337 rows=254472 loops=6)\n Total runtime: 15255.066 ms\n(6 rows)\n\n\nThe main problem with first (main) query:\n\nSELECT user_name, called_station_id,\n(SELECT code FROM a_voip_codes AS c where v.called_station_id LIKE\nc.code || '%' order by code desc limit 1) AS code\n FROM a_voip AS v WHERE user_name = 'dixi' AND tm between '2006-04-01'\nand '2006-05-01' group by user_name, called_station_id;\n\nis that internal (SELECT... v.called_station_id LIKE c.code || '%'...)\nexecuted for each row, returned by external SELECT user_name... part.\nSo I looking how to avoid internal (SELECT ...) part of query.\n\n Terrible oracle gives something like \"over by (partition by ... order\nby code desc) rnum ... where rnum = 1\" which works like DISTINCT and\nnumerate similate rows, then we get just longest (rnum = 1) rows. But\nI can't imagine how to implement some kind of this algorithm with\npostgres.\n-- \nengineer\n", "msg_date": "Mon, 29 May 2006 21:39:51 +0600", "msg_from": "\"Anton Maksimenkov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select with \"like\" from another table" } ]
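The thread above ends without a resolution, so one more idea, strictly as a sketch: replace the per-row "code || '%'" LIKE subplan with a handful of equality probes, one per possible prefix length. This assumes called_station_id contains only digits, that codes are at most 10 digits long, and that no code needs a leading zero (one could not be stored in the integer "code" column anyway); none of it has been run against the poster's data.

-- Hypothetical expression index so each prefix probe is an index lookup:
CREATE INDEX a_voip_codes_code_text ON a_voip_codes ((code::text));

-- For every call, try prefix lengths 10 down to 1 and keep the longest
-- one that exists in a_voip_codes.
SELECT v.user_name, v.called_station_id,
       (SELECT c.code
          FROM a_voip_codes c,
               generate_series(1, 10) AS s(len)
         WHERE c.code::text = substr(v.called_station_id, 1, s.len)
         ORDER BY s.len DESC
         LIMIT 1) AS code
  FROM a_voip v
 WHERE v.tm BETWEEN '2006-04-01' AND '2006-05-01'
 GROUP BY v.user_name, v.called_station_id;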
[ { "msg_contents": "Can any one explain why the following query\n\nselect f(q) from\n(\n select * from times\n where '2006-03-01 00:00:00'<=q and q<'2006-03-08 00:00:00'\n order by q\n) v;\n\nnever completes, but splitting up the time span into single days does work.\n\nselect f(q) from\n(\n select * from times\n where '2006-03-01 00:00:00'<=q and q<'2006-03-02 00:00:00'\n order by q\n) v;\nselect f(q) from\n(\n select * from times\n where '2006-03-02 00:00:00'<=q and q<'2006-03-03 00:00:00'\n order by q\n) v;\n...\nselect f(q) from\n(\n select * from times\n where '2006-03-07 00:00:00'<=q and q<'2006-03-08 00:00:00'\n order by q\n) v;\n\nThe stored procedure f(q) take a timestamp and does a select and a \ncalculation and then an update of a results table. The times table \ncontaines only a 100 rows per day. It is also observed that the cpu \nstarts the query with 100% usage and then the slowly swings up and down \nfrom 100% to 20% over the first half hour, and then by the following \nmorning the query is still running and the cpu usage is 3-5%. IO bound \ni'm guessing as the hdd is in constant use at 5 to 15 MB per second usage.\nIn contrast the query that is split up into days has a 100% cpu usage \nall the way through to its completion, which only takes twenty minutes \neach. The computer is not being used for anything else, and is a dual \ncore Athlon 4400+ with 4GB of ram.\n\nThanks for any information you can give on this.\n", "msg_date": "Tue, 30 May 2006 10:26:43 +1000", "msg_from": "Anthony Ransley <[email protected]>", "msg_from_op": true, "msg_subject": "Split select completes, single select doesn't and becomes IO bound!" }, { "msg_contents": "On �ri, 2006-05-30 at 10:26 +1000, Anthony Ransley wrote:\n> Can any one explain why the following query\n> \n> select f(q) from\n> (\n> select * from times\n> where '2006-03-01 00:00:00'<=q and q<'2006-03-08 00:00:00'\n> order by q\n> ) v;\n> \n> never completes, but splitting up the time span into single days does work.\n> \n> select f(q) from\n> (\n> select * from times\n> where '2006-03-01 00:00:00'<=q and q<'2006-03-02 00:00:00'\n> order by q\n> ) v;\n\nfirst question: is f() relevant to your problem?\n\nI mean do you see the same effect with:\n select q from\n (\n select * from times\n where '2006-03-01 00:00:00'<=q and q<'2006-03-08 00:00:00'\n order by q\n ) v;\n\nor even:\n select q from times\n where '2006-03-01 00:00:00'<=q and q<'2006-03-08 00:00:00'\n order by q\n\n\nif f() is needed to make this happen show us f()\n\nif f() is not relevant, show us the simplest cases where\nyou see this. show us EXPLAIN on the query that does not\nfinish, show us EXPLAIN ANALYZE on the queries that do.\n\nsecond question: what indexes exist on the table \"times\" ?\n\nanother question: how many rows in the table ?\n\nnext question: is the table newly ANALYZED?\n\nfinally: what version of postgresql are you using?\n\n\nwhithout more info , it is difficult to guess what\nyour problem is, but possibly you need to increase\nthe statistics target of column \"q\"\n\ngnari\n\n\n\n", "msg_date": "Tue, 30 May 2006 21:53:20 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Split select completes, single select doesn't and" } ]
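Ragnar's checklist can be run almost verbatim; a short sketch, using the table and column names from the thread, with 100 as an arbitrary example statistics target:

-- Does the week-long range misbehave even without f()?
EXPLAIN ANALYZE
SELECT q FROM times
 WHERE '2006-03-01 00:00:00' <= q AND q < '2006-03-08 00:00:00'
 ORDER BY q;

-- Plan of the query that never completes (plain EXPLAIN does not execute
-- it, so this returns immediately):
EXPLAIN
SELECT f(q) FROM
( SELECT * FROM times
   WHERE '2006-03-01 00:00:00' <= q AND q < '2006-03-08 00:00:00'
   ORDER BY q ) v;

-- Refresh statistics, optionally with a larger target on q first:
ALTER TABLE times ALTER COLUMN q SET STATISTICS 100;
ANALYZE times;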