[ { "msg_contents": "Assuming we have 24 73G drives is it better to make one big metalun \nand carve it up and let the SAN manage the where everything is, or is \nit better to specify which spindles are where.\n\nCurrently we would require 3 separate disk arrays.\n\none for the main database, second one for WAL logs, third one we use \nfor the most active table.\n\nProblem with dedicating the spindles to each array is that we end up \nwasting space. Are the SAN's smart enough to do a better job if I \ncreate one large metalun and cut it up ?\n\nDave\n", "msg_date": "Wed, 11 Jul 2007 09:03:27 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "best use of an EMC SAN" }, { "msg_contents": "We do something similar here. We use Netapp and I carve one aggregate \nper data volume. I generally keep the pg_xlog on the same \"data\" LUN, \nbut I don't mix other databases on the same aggregate.\n\nIn the NetApp world because they use RAID DP (dual parity) you have a \nhigher wastage of drives, however, you are guaranteed that an \nerroneous query won't clobber the IO of another database.\n\nIn my experience, NetApp has utilities that set \"IO priority\" but \nit's not granular enough as it's more like using \"renice\" in unix. It \ndoesn't really make that big of a difference.\n\nMy recommendation, each database gets it's own aggregate unless the \nIO footprint is very low.\n\nLet me know if you need more details.\n\nRegards,\nDan Gorman\n\nOn Jul 11, 2007, at 6:03 AM, Dave Cramer wrote:\n\n> Assuming we have 24 73G drives is it better to make one big metalun \n> and carve it up and let the SAN manage the where everything is, or \n> is it better to specify which spindles are where.\n>\n> Currently we would require 3 separate disk arrays.\n>\n> one for the main database, second one for WAL logs, third one we \n> use for the most active table.\n>\n> Problem with dedicating the spindles to each array is that we end \n> up wasting space. Are the SAN's smart enough to do a better job if \n> I create one large metalun and cut it up ?\n>\n> Dave\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n", "msg_date": "Wed, 11 Jul 2007 06:18:10 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "\"Dave Cramer\" <[email protected]> writes:\n\n> Assuming we have 24 73G drives is it better to make one big metalun and carve\n> it up and let the SAN manage the where everything is, or is it better to\n> specify which spindles are where.\n\nThis is quite a controversial question with proponents of both strategies.\n\nI would suggest having one RAID-1 array for the WAL and throw the rest of the\ndrives at a single big array for the data files. That wastes space since the\nWAL isn't big but the benefit is big.\n\nIf you have a battery backed cache you might not need even that. 
Just throwing\nthem all into a big raid might work just as well.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 11 Jul 2007 15:05:37 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "\nOn 11-Jul-07, at 10:05 AM, Gregory Stark wrote:\n\n> \"Dave Cramer\" <[email protected]> writes:\n>\n>> Assuming we have 24 73G drives is it better to make one big \n>> metalun and carve\n>> it up and let the SAN manage the where everything is, or is it \n>> better to\n>> specify which spindles are where.\n>\n> This is quite a controversial question with proponents of both \n> strategies.\n>\n> I would suggest having one RAID-1 array for the WAL and throw the \n> rest of the\n\nThis is quite unexpected. Since the WAL is primarily all writes, \nisn't a RAID 1 the slowest of all for writing ?\n> drives at a single big array for the data files. That wastes space \n> since the\n> WAL isn't big but the benefit is big.\n>\n> If you have a battery backed cache you might not need even that. \n> Just throwing\n> them all into a big raid might work just as well.\nAny ideas on how to test this before we install the database ?\n>\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n", "msg_date": "Wed, 11 Jul 2007 10:14:41 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "\nIn my sporadic benchmark testing, the only consistent 'trick' I found\nwas that the best thing I could do for performance sequential\nperformance was allocating a bunch of mirrored pair LUNs and stripe\nthem with software raid. This made a huge difference (~2X) in sequential\nperformance, and a little boost in random i/o - at least in FLARE 19.\n\nOn our CX-500s, FLARE failed to fully utilize the secondary drives in\nRAID 1+0 configurations. FWIW, after several months of inquiries, EMC\neventually explained that this is due to their desire to ease the usage\nand thus wear on the secondaries in order to reduce the likelihood of a\nmirrored pair both failing. \n\nWe've never observed a difference using separate WAL LUNs - presumably\ndue to the write cache. That said, we continue to use them figuring it's\n\"cheap\" insurance against running out of space as well as performance\nunder conditions we didn't see while testing.\n\nWe also ended up using single large LUNs for data, but I must admit I\nwanted more time to benchmark splitting off heavily hit tables.\n\nMy advice would be to read the EMC performance white papers, remain\nskeptical, and then test everything yourself. :D\n\n\n\nOn Wed, 2007-07-11 at 09:03 -0400, Dave Cramer wrote:\n> Assuming we have 24 73G drives is it better to make one big metalun \n> and carve it up and let the SAN manage the where everything is, or is \n> it better to specify which spindles are where.\n> \n> Currently we would require 3 separate disk arrays.\n> \n> one for the main database, second one for WAL logs, third one we use \n> for the most active table.\n> \n> Problem with dedicating the spindles to each array is that we end up \n> wasting space. 
Are the SAN's smart enough to do a better job if I \n> create one large metalun and cut it up ?\n> \n> Dave\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 11 Jul 2007 07:40:01 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "On Wed, Jul 11, 2007 at 09:03:27AM -0400, Dave Cramer wrote:\n> Problem with dedicating the spindles to each array is that we end up \n> wasting space. Are the SAN's smart enough to do a better job if I \n> create one large metalun and cut it up ?\n\nIn my experience, this largely depends on your SAN and its hard- and\nfirm-ware, as well as its ability to interact with the OS. I think\nthe best answer is \"sometimes yes\".\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nHowever important originality may be in some fields, restraint and \nadherence to procedure emerge as the more significant virtues in a \ngreat many others. --Alain de Botton\n", "msg_date": "Wed, 11 Jul 2007 11:33:37 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "[email protected] (Dave Cramer) writes:\n> On 11-Jul-07, at 10:05 AM, Gregory Stark wrote:\n>\n>> \"Dave Cramer\" <[email protected]> writes:\n>>\n>>> Assuming we have 24 73G drives is it better to make one big\n>>> metalun and carve\n>>> it up and let the SAN manage the where everything is, or is it\n>>> better to\n>>> specify which spindles are where.\n>>\n>> This is quite a controversial question with proponents of both\n>> strategies.\n>>\n>> I would suggest having one RAID-1 array for the WAL and throw the\n>> rest of the\n>\n> This is quite unexpected. Since the WAL is primarily all writes,\n> isn't a RAID 1 the slowest of all for writing ?\n\nThe thing is, the disk array caches this LIKE CRAZY. I'm not quite\nsure how many batteries are in there to back things up; there seems to\nbe multiple levels of such, which means that as far as fsync() is\nconcerned, the data is committed very quickly even if it takes a while\nto physically hit disk.\n\nOne piece of the controversy will be that the disk being used for WAL\nis certain to be written to as heavily and continuously as your heavy\nload causes. A fallout of this is that those disks are likely to be\nworked harder than the disk used for storing \"plain old data,\" with\nthe result that if you devote disk to WAL, you'll likely burn thru\nreplacement drives faster there than you do for the \"POD\" disk.\n\nIt is not certain whether it is more desirable to:\na) Spread that wear and tear across the whole array, or\nb) Target certain disks for that wear and tear, and expect to need to\n replace them somewhat more frequently.\n\nAt some point, I'd like to do a test on a decent disk array where we\ntake multiple configurations. Assuming 24 drives:\n\n - Use all 24 to make \"one big filesystem\" as the base case\n - Split off a set (6?) for WAL\n - Split off a set (6? 9?) to have a second tablespace, and shift\n indices there\n\nMy suspicion is that the \"use all 24 for one big filesystem\" scenario\nis likely to be fastest by some small margin, and that the other cases\nwill lose a very little bit in comparison. Andrew Sullivan had a\nsomewhat similar finding a few years ago on some old Solaris hardware\nthat unfortunately isn't at all relevant today. 
He basically found\nthat moving WAL off to separate disk didn't affect performance\nmaterially.\n\nWhat's quite regrettable is that it is almost sure to be difficult to\nconstruct a test that, on a well-appointed modern disk array, won't\nbasically stay in cache.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/nonrdbms.html\n16-inch Rotary Debugger: A highly effective tool for locating problems\nin computer software. Available for delivery in most major\nmetropolitan areas. Anchovies contribute to poor coding style.\n", "msg_date": "Wed, 11 Jul 2007 13:39:39 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "On Jul 11, 2007, at 12:39 PM, Chris Browne wrote:\n> - Split off a set (6?) for WAL\n\nIn my limited testing, 6 drives for WAL would be complete overkill in \nalmost any case. The only example I've ever seen where WAL was able \nto swamp 2 drives was the DBT testing that Mark Wong was doing at \nOSDL; the only reason that was the case is because he had somewhere \naround 70 data drives. I suppose an entirely in-memory database might \nbe able to swamp a 2 drive WAL as well.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Wed, 11 Jul 2007 12:58:12 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "On Wed, Jul 11, 2007 at 01:39:39PM -0400, Chris Browne wrote:\n> load causes. A fallout of this is that those disks are likely to be\n> worked harder than the disk used for storing \"plain old data,\" with\n> the result that if you devote disk to WAL, you'll likely burn thru\n> replacement drives faster there than you do for the \"POD\" disk.\n\nThis is true, and in operation can really burn you when you start to\nblow out disks. In particular, remember to factor the cost of RAID\nre-build into your RAID plans. Because you're going to be doing it,\nand if your WAL is near to its I/O limits, the only way you're going\nto get your redundancy back is to go noticably slower :-(\n\n> will lose a very little bit in comparison. Andrew Sullivan had a\n> somewhat similar finding a few years ago on some old Solaris hardware\n> that unfortunately isn't at all relevant today. He basically found\n> that moving WAL off to separate disk didn't affect performance\n> materially.\n\nRight, but it's not only the hardware that isn't relevant there. It\nwas also using either 7.1 or 7.2, which means that the I/O pattern\nwas completely different. More recently, ISTR, we did analysis for\nat least one workload that tod us to use separate LUNs for WAL, with\nseparate I/O paths. This was with at least one kind of array\nsupported by Awful Inda eXtreme. Other tests, IIRC, came out\ndifferently -- the experience with one largish EMC array was I think\na dead heat between various strategies (so the additional flexibility\nof doing everything on the array was worth any cost we were able to\nmeasure). But the last time I had to be responsible for that sort of\ntest was again a couple years ago. 
On the whole, though, my feeling\nis that you can't make general recommendations on this topic: the\nadvances in storage are happening too fast to make generalisations,\nparticularly in the top classes of hardware.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Wed, 11 Jul 2007 14:27:01 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "On Wed, 11 Jul 2007, Jim Nasby wrote:\n\n> I suppose an entirely in-memory database might be able to swamp a 2 \n> drive WAL as well.\n\nYou can really generate a whole lot of WAL volume on an EMC SAN if you're \ndoing UPDATEs fast enough on data that is mostly in-memory. Takes a \nfairly specific type of application to do that though, and whether you'll \never find it outside of a benchmark is hard to say.\n\nThe main thing I would add as a consideration here is that you can \nconfigure PostgreSQL to write WAL data using the O_DIRECT path, bypassing \nthe OS buffer cache, and greatly improve performance into SAN-grade \nhardware like this. That can be a big win if you're doing writes that \ndirty lots of WAL, and the benefit is straightforward to measure if the \nWAL is a dedicated section of disk (just change the wal_sync_method and do \nbenchmarks with each setting). If the WAL is just another section on an \narray, how well those synchronous writes will mesh with the rest of the \nactivity on the system is not as straightforward to predict. Having the \nWAL split out provides a logical separation that makes figuring all this \nout easier.\n\nJust to throw out a slightly different spin on the suggestions going by \nhere: consider keeping the WAL separate, starting as a RAID-1 volume, but \nkeep 2 disks in reserve so that you could easily upgrade to a 0+1 set if \nyou end up discovering you need to double the write bandwidth. Since \nthere's never much actual data on the WAL disks that would a fairly short \ndowntime operation. If you don't reach a wall, the extra drives might \nserve as spares to help mitigate concerns about the WAL drives burning out \nfaster than average because of their high write volume.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 11 Jul 2007 14:35:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of an EMC SAN" }, { "msg_contents": "\nOn 11-Jul-07, at 2:35 PM, Greg Smith wrote:\n\n> On Wed, 11 Jul 2007, Jim Nasby wrote:\n>\n>> I suppose an entirely in-memory database might be able to swamp a \n>> 2 drive WAL as well.\n>\n> You can really generate a whole lot of WAL volume on an EMC SAN if \n> you're doing UPDATEs fast enough on data that is mostly in-memory. \n> Takes a fairly specific type of application to do that though, and \n> whether you'll ever find it outside of a benchmark is hard to say.\n>\nWell, this is such an application. The db fits entirely in memory, \nand the site is doing over 12M page views a day (I'm not exactly sure \nwhat this translates to in transactions) .\n> The main thing I would add as a consideration here is that you can \n> configure PostgreSQL to write WAL data using the O_DIRECT path, \n> bypassing the OS buffer cache, and greatly improve performance into \n> SAN-grade hardware like this. 
That can be a big win if you're \n> doing writes that dirty lots of WAL, and the benefit is \n> straightforward to measure if the WAL is a dedicated section of \n> disk (just change the wal_sync_method and do benchmarks with each \n> setting). If the WAL is just another section on an array, how well \n> those synchronous writes will mesh with the rest of the activity on \n> the system is not as straightforward to predict. Having the WAL \n> split out provides a logical separation that makes figuring all \n> this out easier.\n>\n> Just to throw out a slightly different spin on the suggestions \n> going by here: consider keeping the WAL separate, starting as a \n> RAID-1 volume, but keep 2 disks in reserve so that you could easily \n> upgrade to a 0+1 set if you end up discovering you need to double \n> the write bandwidth. Since there's never much actual data on the \n> WAL disks that would a fairly short downtime operation. If you \n> don't reach a wall, the extra drives might serve as spares to help \n> mitigate concerns about the WAL drives burning out faster than \n> average because of their high write volume.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 11 Jul 2007 14:45:06 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best use of an EMC SAN" } ]
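Chris Browne's third test case above — splitting some spindles off into a second tablespace and shifting indexes there — can be set up purely in SQL once the extra LUN is mounted. A minimal sketch, assuming a hypothetical mount point and placeholder table/index names rather than anything from this thread:

-- Assumption: /mnt/emc_lun2/pgdata sits on the second set of spindles;
-- orders and orders_pkey are placeholder object names.
CREATE TABLESPACE second_spindles LOCATION '/mnt/emc_lun2/pgdata';

-- Move an existing index onto the separate spindles...
ALTER INDEX orders_pkey SET TABLESPACE second_spindles;

-- ...or build new indexes there directly.
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE second_spindles;

Relocating pg_xlog itself, by contrast, is a filesystem-level operation (typically a symlinked pg_xlog directory) rather than something expressed in SQL on 8.x.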
[ { "msg_contents": "am I wrong or DB2 9.1 is faster on less powerfull hardware ?\n\nLe lundi 09 juillet 2007 à 11:57 -0400, Jignesh K. Shah a écrit :\n> Hello all,\n> \n> I think this result will be useful for performance discussions of \n> postgresql against other databases.\n> \n> http://www.spec.org/jAppServer2004/results/res2007q3/\n> \n> More on Josh Berkus's blog:\n> \n> http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470\n> \n> Regards,\n> Jignesh\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n", "msg_date": "Wed, 11 Jul 2007 16:16:38 +0200", "msg_from": "Philippe Amelant <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Its hard to do direct comparison since that one used a different \ncommercial application server than the PostgreSQL test.. As we do more \ntests with the help of the community, hopefully we can understand where \nthe CPU cycles are spent and see how to make them more efficient...\n\nStay tuned!!\n\n-Jignesh\n\n\nPhilippe Amelant wrote:\n> am I wrong or DB2 9.1 is faster on less powerfull hardware ?\n>\n> Le lundi 09 juillet 2007 à 11:57 -0400, Jignesh K. Shah a écrit :\n> \n>> Hello all,\n>>\n>> I think this result will be useful for performance discussions of \n>> postgresql against other databases.\n>>\n>> http://www.spec.org/jAppServer2004/results/res2007q3/\n>>\n>> More on Josh Berkus's blog:\n>>\n>> http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470\n>>\n>> Regards,\n>> Jignesh\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n> \n", "msg_date": "Wed, 11 Jul 2007 11:48:42 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\n\n\n\n\n\nI don't this so, because DB2 is running on a Sun Sparc T1 processor\n(http://www.sun.com/processors/UltraSPARC-T1/) that's implements much\nmore features, like thread level parelelism, than AMD Opteron.\n\nthe DB2 installation:\n\nJ2EE AppServer HW (SUT hardware)\n Hardware Vendor: Sun Microsystems, Inc.\n Model Name: Sun Fire T2000 Server\n Processor: Sun UltraSPARC T1\n MHz: 1400\n # of CPUs: 8 cores, 1 chip, 8 cores/chip (4 threads/core)\n Memory (MB): 65536\n L1 Cache: 16KB(I)+8KB(D) per core\n L2 Cache: 3MB per chip\nthe postgres installation:\nJ2EE AppServer HW (SUT hardware)\n Hardware Vendor: Sun Microsystems, Inc.\n Model Name: Sun Fire X4200 M2\n Processor: AMD Opteron 2220 SE\n MHz: 2800\n # of CPUs: 4 cores, 2 chips, 2 cores/chip\n Memory (MB): 8192\n L1 Cache: 64KB(I)+16KB(D) per core\n L2 Cache: 1MB per core\n\npostgres with a inferior hardware has better performance than DB2 :-) \n\n\nBest Regards,\nAndré\n\nPhilippe Amelant escreveu:\n\nam I wrong or DB2 9.1 is faster on less powerfull hardware ?\n\nLe lundi 09 juillet 2007 à 11:57 -0400, Jignesh K. 
Shah a écrit :\n \n\nHello all,\n\nI think this result will be useful for performance discussions of \npostgresql against other databases.\n\nhttp://www.spec.org/jAppServer2004/results/res2007q3/\n\nMore on Josh Berkus's blog:\n\nhttp://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470\n\nRegards,\nJignesh\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n \n\n\n\n", "msg_date": "Wed, 11 Jul 2007 13:37:27 -0300", "msg_from": "=?UTF-8?B?QW5kcsOpIEdvbWVzIExhbWFzIE90ZXJv?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\n\nLe mercredi 11 juillet 2007 à 13:37 -0300, André Gomes Lamas Otero a\nécrit :\n> I don't this so, because DB2 is running on a Sun Sparc T1 processor\n> (http://www.sun.com/processors/UltraSPARC-T1/) that's implements much\n> more features, like thread level parelelism, than AMD Opteron.\n> \n> the DB2 installation:\n> \n\nThis is J2EE server not DB server\n\n> J2EE AppServer HW (SUT hardware)\n> Hardware Vendor: Sun Microsystems, Inc.\n> Model Name: Sun Fire T2000 Server\n> Processor: Sun UltraSPARC T1\n> MHz: 1400\n> # of CPUs: 8 cores, 1 chip, 8 cores/chip (4 threads/core)\n> Memory (MB): 65536\n> L1 Cache: 16KB(I)+8KB(D) per core\n> L2 Cache: 3MB per chip\n> the postgres installation:\n> J2EE AppServer HW (SUT hardware)\n> Hardware Vendor: Sun Microsystems, Inc.\n> Model Name: Sun Fire X4200 M2\n> Processor: AMD Opteron 2220 SE\n> MHz: 2800\n> # of CPUs: 4 cores, 2 chips, 2 cores/chip\n> Memory (MB): 8192\n> L1 Cache: 64KB(I)+16KB(D) per core\n> L2 Cache: 1MB per core\n> \n\n", "msg_date": "Fri, 13 Jul 2007 11:56:29 +0200", "msg_from": "Philippe Amelant <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL publishes first real benchmark" } ]
[ { "msg_contents": "Setting spec for a postgresql server.\nThe hard drive distribution is going to be\n8 x 750GB Seagate on a 3ware 9650SE RAID 6\n2 x 160GB Seagate on a 3ware 2 port\n\nThe question is, would I be better off putting WAL on the second, OS, \ncontroller or in the 8 port controller? Specially since the 2 port will not \nhave battery (3ware does not have 2 ports with battery).\nThe two port controller is primary for the operating system, but I was \nwondering if there would be any benefit to putting WAL on that 2 port \ncontroller.\n\nThe machine will have 8Gb of RAM and will be running Postgresql 8.2 on \nFreeBSD 6.2 Stable.\n\nDuring peak operation there will be about 5 to 20 updates per second with a \nhandfull of reads.\n", "msg_date": "Wed, 11 Jul 2007 10:48:04 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "WALL on controller without battery?" }, { "msg_contents": "On Wed, Jul 11, 2007 at 10:48:04AM -0400, Francisco Reyes wrote:\n> The question is, would I be better off putting WAL on the second, OS, \n> controller or in the 8 port controller? Specially since the 2 port will not \n> have battery (3ware does not have 2 ports with battery).\n\nPut the WAL where the battery is. Even if it's slower (and I don't\nknow whether it will be), I assume that having the right data more\nslowly is better than maybe not having the data at all, quickly.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Wed, 11 Jul 2007 11:36:20 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "On Wednesday 11 July 2007 08:36, Andrew Sullivan <[email protected]> \nwrote:\n> Put the WAL where the battery is. Even if it's slower (and I don't\n> know whether it will be), I assume that having the right data more\n> slowly is better than maybe not having the data at all, quickly.\n>\n\nPresumably he'll have the 2-port configured for write-through operation.\n\nI would spring for a 4-port with a BBU, though, and then put the WAL on the \ndrives with the OS.\n\n-- \n\"Bad laws are the worst sort of tyranny.\" -- Edmund Burke (1729-1797)\n\n", "msg_date": "Wed, 11 Jul 2007 08:47:44 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "On Wed, 11 Jul 2007, Alan Hodgson wrote:\n\n> Presumably he'll have the 2-port configured for write-through operation.\n\nThis is the real key to his question. In order to get acceptable \nperformance for the operating system, Francisco may very well need the OS \ndisks to be configured in write-back mode. If that's the case, then he \ncan't put the WAL there; it has to go onto the array with the BBU.\n\n> I would spring for a 4-port with a BBU, though, and then put the WAL on the\n> drives with the OS.\n\nThis is certainly worth considering. When putting multiple RAID \ncontrollers into a system, I always try to keep them of a similar grade \nbecause it improves the possibility of data recovery in case of a \ncontroller failure. For example, if he had a 4-port with BBU and an \n8-port with BBU, the 8-port could be split into two 4-disk RAID-6 volumes, \nand then in an emergency or for troubleshooting isolation you could always \nget any data you needed off any 4-disk set with either controller. 
The \nlittle 2-disk unit is providing no such redundancy.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 11 Jul 2007 14:53:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "Alan Hodgson writes:\n\n> I would spring for a 4-port with a BBU, though, and then put the WAL on the \n> drives with the OS.\n\nThe machine is already over budget. :-(\nI will check the price difference but unlikely I will get approval.\n", "msg_date": "Wed, 11 Jul 2007 15:39:32 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "Francisco Reyes wrote:\n> Alan Hodgson writes:\n> \n>> I would spring for a 4-port with a BBU, though, and then put the WAL \n>> on the drives with the OS.\n> \n> The machine is already over budget. :-(\n> I will check the price difference but unlikely I will get approval.\n\nWithout a BBU you are guaranteed at some point to have catastrophic \nfailure unless you turn off write cache, which would then destroy your \nperformance.\n\nJoshua D. Drake\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 11 Jul 2007 12:45:29 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "Joshua D. Drake writes:\n\n> Without a BBU you are guaranteed at some point to have catastrophic \n> failure unless you turn off write cache, which would then destroy your \n> performance.\n\nI am re-working the specs of the machine to try and get a 4port 3ware to \nhave the battery backup.\n", "msg_date": "Wed, 11 Jul 2007 16:34:46 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "On Wed, 11 Jul 2007, Francisco Reyes wrote:\n\n> I am re-working the specs of the machine to try and get a 4port 3ware to have \n> the battery backup.\n\nThat's really not necessary, it just would be better (and obviously more \nexpensive). The warnings you've been getting here have been to let you \nknow that you absolutely can't put the WAL on the controller with the OS \ndisks attached without making compromises you probably won't be happy \nwith.\n\n> During peak operation there will be about 5 to 20 updates per second \n> with a handfull of reads.\n\nThere really is no reason you need to be concerned about WAL from a \nperformance perspective if this is your expected workload. If you're \nworking with a tight budget, the original design you had was perfectly \nfine. Just use all the disks on the big controller as a large volume, put \nboth the database and the WAL on there, and don't even bother trying to \nseparate out the WAL. 
If you expected hundreds of updates per second, \nthat's where you need to start thinking about a separate WAL disk, and \neven then with 8 disks to spread the load out and a good caching \ncontroller you might still be fine.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 11 Jul 2007 19:01:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WALL on controller without battery?" }, { "msg_contents": "Greg Smith writes:\n\n>> During peak operation there will be about 5 to 20 updates per second \n>> with a handfull of reads.\n> \n> There really is no reason you need to be concerned about WAL from a \n> performance perspective if this is your expected workload.\n\n\nI was able to get the second controller with battery backup.\nThis machine is the backup so if the primary fails it would get higher \nvolumes.\n\nIt is also easier to throw more work at a good machine than to find myself\n with an underperformer.\n\n> both the database and the WAL on there, and don't even bother trying to \n> separate out the WAL.\n\nThanks for the feedback.\nI wish there was a place with hardware guide where people could get feedback \nlike the one you gave me. In particular actual numbers like x to y number of \ntransactions per second you don't need WAL no separate disk.. etc.. \n", "msg_date": "Wed, 11 Jul 2007 20:26:56 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WALL on controller without battery?" } ]
[ { "msg_contents": "\nHow can I get the time it takes a query to execute - explain analyze is\ntaking over 5 hours to complete...can I use \\timing??? I don't get any time\nwhen using the \\timing option...\n\nThanks...Marsha\n-- \nView this message in context: http://www.nabble.com/TIMING-A-QUERY-----tf4062567.html#a11542393\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 11 Jul 2007 08:21:40 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "TIMING A QUERY ???" }, { "msg_contents": "On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:\n> \n> How can I get the time it takes a query to execute - explain analyze is\n> taking over 5 hours to complete\n\nYou can't get it any faster than what explain analyse does: it runs\nthe query. How else would you get the answer?\n\n> ...can I use \\timing??? I don't get any time when using the\n> \\timing option...\n\nHow so? It returns Time: N ms at the end of output for me.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Wed, 11 Jul 2007 11:39:11 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TIMING A QUERY ???" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:\n>> How can I get the time it takes a query to execute - explain analyze is\n>> taking over 5 hours to complete\n\n> You can't get it any faster than what explain analyse does: it runs\n> the query. How else would you get the answer?\n\nWell, on some platforms (ie consumer-grade PCs) explain analyze can be a\nlot slower than just running the query, because of the overhead of all\nthose gettimeofday() calls it does. El cheapo clock hardware is slow\nto read. (I think the problem is actually that the PC-standard hardware\nAPI for clocks was designed back when taking a whole microsecond to read\nthe clock didn't seem like a problem.)\n\n>> ...can I use \\timing??? I don't get any time when using the\n>> \\timing option...\n\n> How so? It returns Time: N ms at the end of output for me.\n\nWorks for me too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 12:10:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TIMING A QUERY ??? " }, { "msg_contents": "\"Andrew Sullivan\" <[email protected]> writes:\n\n> On Wed, Jul 11, 2007 at 08:21:40AM -0700, smiley2211 wrote:\n>> \n>> How can I get the time it takes a query to execute - explain analyze is\n>> taking over 5 hours to complete\n>\n> You can't get it any faster than what explain analyse does: it runs\n> the query. How else would you get the answer?\n\nexplain analyze does actually run slower than the actual query because it has\nto check the time before and after each step of the query, potentially\nthousands of times. If it's a disk-bound query that doesn't matter much but if\nit's a cpu-bound query it can.\n\n>> ...can I use \\timing??? I don't get any time when using the\n>> \\timing option...\n\nYes you can use \\timing. 
You'll have to provide more information of what\nyou're doing before anyone can help you.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 11 Jul 2007 17:11:09 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TIMING A QUERY ???" }, { "msg_contents": "On Wed, Jul 11, 2007 at 12:10:55PM -0400, Tom Lane wrote:\n> Well, on some platforms (ie consumer-grade PCs) explain analyze can be a\n> lot slower than just running the query, \n\nYes, I suppose I exaggerated when I said \"can't get any faster\", but\ngiven that the OP was talking on the order of hours for the EXPLAIN\nANALYSE to return, I assumed that the problem is one of impatience and\nnot clock cycles. After all, the gettimeofday() additional overhead\nis still not going to come in on the order of minutes without a\n_bursting_ huge query plan.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n", "msg_date": "Wed, 11 Jul 2007 13:01:57 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TIMING A QUERY ???" }, { "msg_contents": "\nThanks all...\\timing works.\n\n\n-- \nView this message in context: http://www.nabble.com/TIMING-A-QUERY-----tf4062567.html#a11559115\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 12 Jul 2007 05:39:40 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TIMING A QUERY ???" } ]
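For reference, a sketch of what the \timing workflow looks like in a psql session; the table name, predicate and numbers below are invented, only the mechanics are real:

db=# \timing
Timing is on.
db=# SELECT count(*) FROM orders WHERE status = 'shipped';  -- placeholder query
  count
---------
 1483220
(1 row)

Time: 243519.402 ms

Unlike EXPLAIN ANALYZE, this executes the statement once with no per-row gettimeofday() overhead; plain EXPLAIN (without ANALYZE) is the variant that only plans and never runs the query.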
[ { "msg_contents": "Hi,\n\nI'm having a weird problem on a query :\nI've simplified it to get the significant part (see end of message).\nThe point is I've got a simple\nSELECT field FROM table WHERE 'condition1'\nEstimated returned rows : 5453\nThen\nSELECT field FROM table WHERE 'condition2'\nEstimated returned rows : 705\nThen\nSELECT field FROM table WHERE 'condition1' OR 'condition2'\nEstimated returned rows : 143998\n\nCondition2 is a bit complicated (it's a subquery).\nNevertheless, shouldn't the third estimate be smaller or equal to the sum of the two others ?\n\n\nPostgresql is 8.2.4 on Linux, stats are up to date,\nshow default_statistics_target;\n default_statistics_target\n---------------------------\n 1000\n\n\n\nAny ideas ?\n\n\n\nexplain analyze \nSELECT stc.CMD_ID\n FROM STOL_STC stc\n WHERE (stc.STC_DATE>='2007-07-05' AND stc.STC_DATEPLAN<='2007-07-05');\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Seq Scan on stol_stc stc (cost=0.00..24265.15 rows=5453 width=8) (actual time=17.186..100.941 rows=721 loops=1)\n Filter: ((stc_date >= '2007-07-05'::date) AND (stc_dateplan <= '2007-07-05'::date))\n Total runtime: 101.656 ms\n(3 rows)\n\n\nexplain analyze \nSELECT stc.CMD_ID\n FROM STOL_STC stc\n WHERE stc.STC_ID IN \n (SELECT STC_ID FROM STOL_TRJ \n WHERE TRJ_DATEARRT>='2007-07-05' \n AND TRJ_DATEDEPT>=TRJ_DATEARRT \n AND (TRJ_DATEDEPT<='2007-07-05' \n OR TRJ_DATECREAT<='2007-07-05') );\n\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=4649.62..10079.52 rows=705 width=8) (actual time=6.266..13.037 rows=640 loops=1)\n -> HashAggregate (cost=4649.62..4657.13 rows=751 width=8) (actual time=6.242..6.975 rows=648 loops=1)\n -> Index Scan using stol_trj_fk5 on stol_trj (cost=0.00..4647.61 rows=803 width=8) (actual time=0.055..4.901 rows=688 loops=1)\n Index Cond: (trj_datearrt >= '2007-07-05'::date)\n Filter: ((trj_datedept >= trj_datearrt) AND ((trj_datedept <= '2007-07-05'::date) OR (trj_datecreat <= '2007-07-05'::date)))\n -> Index Scan using stol_stc_pk on stol_stc stc (cost=0.00..7.21 rows=1 width=16) (actual time=0.004..0.005 rows=1 loops=648)\n Index Cond: (stc.stc_id = stol_trj.stc_id)\n Total runtime: 13.765 ms\n(8 rows)\n\nexplain analyze\nSELECT stc.CMD_ID\n FROM STOL_STC stc\n WHERE (stc.STC_DATE>='2007-07-05' AND stc.STC_DATEPLAN<='2007-07-05')\n OR\n (stc.STC_ID IN \n (SELECT STC_ID FROM STOL_TRJ \n WHERE TRJ_DATEARRT>='2007-07-05' \n AND TRJ_DATEDEPT>=TRJ_DATEARRT \n AND (TRJ_DATEDEPT<='2007-07-05' \n OR TRJ_DATECREAT<='2007-07-05') ));\n\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on stol_stc stc (cost=4649.62..29621.12 rows=143998 width=8) (actual time=21.564..146.365 rows=1048 loops=1)\n Filter: (((stc_date >= '2007-07-05'::date) AND (stc_dateplan <= '2007-07-05'::date)) OR (hashed subplan))\n SubPlan\n -> Index Scan using stol_trj_fk5 on stol_trj (cost=0.00..4647.61 rows=803 width=8) (actual time=0.054..4.941 rows=688 loops=1)\n Index Cond: (trj_datearrt >= '2007-07-05'::date)\n Filter: ((trj_datedept >= trj_datearrt) AND ((trj_datedept <= '2007-07-05'::date) OR (trj_datecreat <= '2007-07-05'::date)))\n Total runtime: 147.407 ms\n\n\nSELECT count(*) from stol_stc ;\n count\n--------\n 140960\n(1 row)", "msg_date": "Wed, 11 Jul 2007 17:28:43 +0200", "msg_from": "Marc Cousin 
<[email protected]>", "msg_from_op": true, "msg_subject": "Weird row estimate" }, { "msg_contents": "Marc Cousin <[email protected]> writes:\n> Nevertheless, shouldn't the third estimate be smaller or equal to the sum of the two others ?\n\nThe planner's estimation for subplan conditions is pretty primitive\ncompared to joinable conditions. When you add the OR, it's no longer\npossible to treat the IN like a join, and everything gets an order of\nmagnitude dumber :-(\n\nIt might be worth trying this as a UNION of the two simple queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 16:35:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird row estimate " }, { "msg_contents": "Le Wednesday 11 July 2007 22:35:31 Tom Lane, vous avez écrit :\n> Marc Cousin <[email protected]> writes:\n> > Nevertheless, shouldn't the third estimate be smaller or equal to the sum\n> > of the two others ?\n>\n> The planner's estimation for subplan conditions is pretty primitive\n> compared to joinable conditions. When you add the OR, it's no longer\n> possible to treat the IN like a join, and everything gets an order of\n> magnitude dumber :-(\n>\n> It might be worth trying this as a UNION of the two simple queries.\n\nYes, it's much better on this query with a UNION.\nThe problem is that this is a small set of the query, and there are several \nnested IN with an OR condition... But at least now I understand where it \ncomes from.\nThanks a lot.\n", "msg_date": "Thu, 12 Jul 2007 10:55:07 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird row estimate" } ]
[ { "msg_contents": "Hi,\n I've two questions for which I not really found answers in the web.\n\n Intro:\n I've a Website with some traffic.\n 2 Million queries a day, during daylight.\n Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.\n Mainly updates on 1 tuple. And more or less complex SELECT statements.\n I noticed that the overall performance of postgres is decreasing \nwhen one or more long\n readers are present. Where a long reader here is already a Select \ncount(*) from table.\n\n As postgres gets slower an slower, and users still hammering on the \nreload button to get their\n page loaded. Postgres begins to reach max connections, and web site \nis stuck.\n It's not because of a bad schema or bad select statements. As I said, \na select count(*) on big table is already\n triggering this behaviour.\n\n Why do long readers influence the rest of the transactions in such a \nheavy way?\n Any configuration changes which can help here?\n Is it a disc-IO bottleneck thing?\n \n Second question. What is the right choice for the shared_buffers size?\n On a dedicated postgres server with 4 Giga RAM. Is there any rule of \nthumb?\n Actually I set it to +-256M.\n\n \nthanks for any suggestions.\n\nPatric\n\n\nMy Setup:\n\nDebian Etch\nPSQL: 8.1.4\n\nWAL files are located on another disc than the dbase itself.\n\nmax_connections = 190\nshared_buffers = 30000 \ntemp_buffers = 3000 \nwork_mem = 4096 \nmaintenance_work_mem = 16384 \nfsync = on \nwal_buffers = 16 \neffective_cache_size = 5000 \n\n", "msg_date": "Wed, 11 Jul 2007 17:35:33 +0200", "msg_from": "Patric de Waha <[email protected]>", "msg_from_op": true, "msg_subject": "Two questions.. shared_buffers and long reader issue" }, { "msg_contents": "We have a few tables that we need to pull relatively accurate aggregate\ncounts from, and we found the performance of SELECT COUNT(*) to be\nunacceptable. We solved this by creating triggers on insert and delete to\nupdate counts in a secondary table which we join to when we need the count\ninformation.\n\nThis may or may not work in your scenario, but it was a reasonable trade off\nfor us.\n\nBryan\n\nOn 7/11/07, Patric de Waha <[email protected]> wrote:\n>\n> Hi,\n> I've two questions for which I not really found answers in the web.\n>\n> Intro:\n> I've a Website with some traffic.\n> 2 Million queries a day, during daylight.\n> Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.\n> Mainly updates on 1 tuple. And more or less complex SELECT statements.\n> I noticed that the overall performance of postgres is decreasing\n> when one or more long\n> readers are present. Where a long reader here is already a Select\n> count(*) from table.\n>\n> As postgres gets slower an slower, and users still hammering on the\n> reload button to get their\n> page loaded. Postgres begins to reach max connections, and web site\n> is stuck.\n> It's not because of a bad schema or bad select statements. As I said,\n> a select count(*) on big table is already\n> triggering this behaviour.\n>\n> Why do long readers influence the rest of the transactions in such a\n> heavy way?\n> Any configuration changes which can help here?\n> Is it a disc-IO bottleneck thing?\n>\n> Second question. What is the right choice for the shared_buffers size?\n> On a dedicated postgres server with 4 Giga RAM. 
Is there any rule of\n> thumb?\n> Actually I set it to +-256M.\n>\n>\n> thanks for any suggestions.\n>\n> Patric\n>\n>\n> My Setup:\n>\n> Debian Etch\n> PSQL: 8.1.4\n>\n> WAL files are located on another disc than the dbase itself.\n>\n> max_connections = 190\n> shared_buffers = 30000\n> temp_buffers = 3000\n> work_mem = 4096\n> maintenance_work_mem = 16384\n> fsync = on\n> wal_buffers = 16\n> effective_cache_size = 5000\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>", "msg_date": "Wed, 11 Jul 2007 10:51:51 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two questions.. shared_buffers and long reader issue" }, { "msg_contents": "Patric de Waha <[email protected]> writes:\n> Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.\n\nWhen you don't even mention your disk hardware, that's a bad sign.\nIn a database server the disk is usually more important than the CPU.\n\n> Why do long readers influence the rest of the transactions in such a \n> heavy way?\n> Any configuration changes which can help here?\n> Is it a disc-IO bottleneck thing?\n\nVery possibly. 
Have you spent any time watching \"vmstat 1\" output\nto get a sense of whether your I/O is saturated?\n\n> WAL files are located on another disc than the dbase itself.\n\nThat's good, but it only relates to update performance not SELECT\nperformance.\n\n> effective_cache_size = 5000 \n\nThat's way too small for a 4G machine. You could probably stand to\nboost maintenance_work_mem too. However, neither of these have any\nimmediate relationship to your problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 12:57:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two questions.. shared_buffers and long reader issue " }, { "msg_contents": "On Wed, Jul 11, 2007 at 05:35:33PM +0200, Patric de Waha wrote:\n> Mainly updates on 1 tuple. \n\nAre you vacuuming that table enough?\n\n> And more or less complex SELECT statements.\n> I noticed that the overall performance of postgres is decreasing \n> when one or more long\n> readers are present. Where a long reader here is already a Select \n> count(*) from table.\n\nSELECT count(*) is expensive in Postgres. Do you really need it? \nUnqualified count() in PostgreSQL is just a bad thing to do, so if\nyou can work around it (by doing limited subselects, for instance,\nwhere you never scan more than 50 rows, or by keeping counts using\ntriggers, or various other tricks), it's a good idea.\n\n> Why do long readers influence the rest of the transactions in such a \n> heavy way?\n\nIt could be because of all those updated tuples not getting vacuumed\n(which results in a bad plan). Or it could be that your connection\npool is exhausted: note that when someone hits \"reload\", that doesn't\nmean your old query goes away. It is still crunching through\nwhatever work it was doing.\n\n> Second question. What is the right choice for the shared_buffers size?\n> On a dedicated postgres server with 4 Giga RAM. Is there any rule of \n> thumb?\n> Actually I set it to +-256M.\n\nThere has been Much Discussion of this lately on this list. I\nsuggest you have a look through the recent archives on that topic.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n", "msg_date": "Wed, 11 Jul 2007 12:58:26 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two questions.. shared_buffers and long reader issue" }, { "msg_contents": "Ok thanks.\n\niostat confirmed it's an IO bottleneck.\nWill add some discs to the RAID unit.\n\nUsed 4 Raptor discs in Raid 10 until now.\n\n\nbest regards,\n patric\n\n\nTom Lane wrote:\n> Patric de Waha <[email protected]> writes:\n> \n>> Postgres is running on a dedicated server P4 DualCore, 4 Gig Ram.\n>> \n>\n> When you don't even mention your disk hardware, that's a bad sign.\n> In a database server the disk is usually more important than the CPU.\n>\n> \n>> Why do long readers influence the rest of the transactions in such a \n>> heavy way?\n>> Any configuration changes which can help here?\n>> Is it a disc-IO bottleneck thing?\n>> \n>\n> Very possibly. Have you spent any time watching \"vmstat 1\" output\n> to get a sense of whether your I/O is saturated?\n>\n> \n>> WAL files are located on another disc than the dbase itself.\n>> \n>\n> That's good, but it only relates to update performance not SELECT\n> performance.\n>\n> \n>> effective_cache_size = 5000 \n>> \n>\n> That's way too small for a 4G machine. 
You could probably stand to\n> boost maintenance_work_mem too. However, neither of these have any\n> immediate relationship to your problem.\n>\n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Wed, 11 Jul 2007 22:48:51 +0200", "msg_from": "Patric de Waha <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two questions.. shared_buffers and long reader issue" } ]
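The trigger-maintained count Bryan describes earlier in the thread can look roughly like this. Everything here is a placeholder sketch (big_table, row_counts and the function name are invented, and plpgsql is assumed to be installed); note also that a single hot counter row becomes a serialization point when many sessions insert or delete concurrently:

-- One row per table whose count we want cheaply.
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts SELECT 'big_table', count(*) FROM big_table;

CREATE OR REPLACE FUNCTION big_table_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = 'big_table';
        RETURN NEW;
    END IF;
    -- only INSERT and DELETE fire this trigger, so this is the DELETE path
    UPDATE row_counts SET n = n - 1 WHERE table_name = 'big_table';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_count_trig
    AFTER INSERT OR DELETE ON big_table
    FOR EACH ROW EXECUTE PROCEDURE big_table_count();

-- Readers then ask the side table instead of scanning:
SELECT n FROM row_counts WHERE table_name = 'big_table';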
[ { "msg_contents": "Hi,\n\nOkay, i know, not really a recent version:\nPostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n\nI have a fresh ANALYZED table with some indexes.\n\nscholl=*# set enable_bitmapscan=1;\nSET\nscholl=*# explain analyse select sum(flaeche) from bde_meldungen where maschine=1200 and ab = 347735;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1371.95..1371.96 rows=1 width=8) (actual time=163.788..163.790 rows=1 loops=1)\n -> Bitmap Heap Scan on bde_meldungen (cost=1217.69..1371.85 rows=39 width=8) (actual time=163.702..163.758 rows=2 loops=1)\n Recheck Cond: ((ab = 347735) AND (maschine = 1200))\n -> BitmapAnd (cost=1217.69..1217.69 rows=39 width=0) (actual time=163.681..163.681 rows=0 loops=1)\n -> Bitmap Index Scan on idx_ab (cost=0.00..5.95 rows=558 width=0) (actual time=0.078..0.078 rows=109 loops=1)\n Index Cond: (ab = 347735)\n -> Bitmap Index Scan on idx_maschine (cost=0.00..1211.49 rows=148997 width=0) (actual time=163.459..163.459 rows=164760 loops=1)\n Index Cond: (maschine = 1200)\n Total runtime: 163.901 ms\n(9 rows)\n\n\nOkay, 163.901 ms with Bitmap Index Scan.\n\nAnd now i disable this and runs the same select:\n\nscholl=*# set enable_bitmapscan=0;\nSET\nscholl=*# explain analyse select sum(flaeche) from bde_meldungen where maschine=1200 and ab = 347735;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2142.77..2142.78 rows=1 width=8) (actual time=0.229..0.231 rows=1 loops=1)\n -> Index Scan using idx_ab on bde_meldungen (cost=0.00..2142.67 rows=39 width=8) (actual time=0.046..0.209 rows=2 loops=1)\n Index Cond: (ab = 347735)\n Filter: (maschine = 1200)\n Total runtime: 0.326 ms\n(5 rows)\n\nOkay, i got a really different plan, but i expected _NOT_ a\nperformance-boost like this. I expected the opposite.\n\n\nIt's not a really problem, i just played with this. But i'm confused\nabout this...\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n", "msg_date": "Wed, 11 Jul 2007 20:36:38 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": true, "msg_subject": "bitmap-index-scan slower than normal index scan" }, { "msg_contents": "On 7/11/07, Andreas Kretschmer <[email protected]> wrote:\n> Hi,\n>\n> Okay, i know, not really a recent version:\n> PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n>\n> I have a fresh ANALYZED table with some indexes.\n>\n> scholl=*# set enable_bitmapscan=1;\n> SET\n> scholl=*# explain analyse select sum(flaeche) from bde_meldungen where maschine=1200 and ab = 347735;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1371.95..1371.96 rows=1 width=8) (actual time=163.788..163.790 rows=1 loops=1)\n> -> Bitmap Heap Scan on bde_meldungen (cost=1217.69..1371.85 rows=39 width=8) (actual time=163.702..163.758 rows=2 loops=1)\n> Recheck Cond: ((ab = 347735) AND (maschine = 1200))\n> -> BitmapAnd (cost=1217.69..1217.69 rows=39 width=0) (actual time=163.681..163.681 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_ab (cost=0.00..5.95 rows=558 width=0) (actual time=0.078..0.078 rows=109 loops=1)\n> Index Cond: (ab = 347735)\n> -> Bitmap Index Scan on idx_maschine (cost=0.00..1211.49 rows=148997 width=0) (actual time=163.459..163.459 rows=164760 loops=1)\n> Index Cond: (maschine = 1200)\n> Total runtime: 163.901 ms\n> (9 rows)\n>\n>\n> Okay, 163.901 ms with Bitmap Index Scan.\n>\n> And now i disable this and runs the same select:\n>\n> scholl=*# set enable_bitmapscan=0;\n> SET\n> scholl=*# explain analyse select sum(flaeche) from bde_meldungen where maschine=1200 and ab = 347735;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2142.77..2142.78 rows=1 width=8) (actual time=0.229..0.231 rows=1 loops=1)\n> -> Index Scan using idx_ab on bde_meldungen (cost=0.00..2142.67 rows=39 width=8) (actual time=0.046..0.209 rows=2 loops=1)\n> Index Cond: (ab = 347735)\n> Filter: (maschine = 1200)\n> Total runtime: 0.326 ms\n> (5 rows)\n>\n> Okay, i got a really different plan, but i expected _NOT_ a\n> performance-boost like this. I expected the opposite.\n>\n>\n> It's not a really problem, i just played with this. But i'm confused\n> about this...\n>\n\nyour results are getting cached. try two queries in a row with the same plan.\n\nAlex\n", "msg_date": "Wed, 11 Jul 2007 14:52:01 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" }, { "msg_contents": "am Wed, dem 11.07.2007, um 14:52:01 -0400 mailte Alex Deucher folgendes:\n> >Okay, i got a really different plan, but i expected _NOT_ a\n> >performance-boost like this. I expected the opposite.\n> >\n> >\n> >It's not a really problem, i just played with this. But i'm confused\n> >about this...\n> >\n> \n> your results are getting cached. try two queries in a row with the same \n> plan.\n\nThanks for the response, but I've done this, no difference.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 11 Jul 2007 21:31:58 +0200", "msg_from": "\"A. 
Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" }, { "msg_contents": "On 7/11/07, A. Kretschmer <[email protected]> wrote:\n> am Wed, dem 11.07.2007, um 14:52:01 -0400 mailte Alex Deucher folgendes:\n> > >Okay, i got a really different plan, but i expected _NOT_ a\n> > >performance-boost like this. I expected the opposite.\n> > >\n> > >\n> > >It's not a really problem, i just played with this. But i'm confused\n> > >about this...\n> > >\n> >\n> > your results are getting cached. try two queries in a row with the same\n> > plan.\n>\n> Thanks for the response, but I've done this, no difference.\n>\n\ntry bumping up the default stats target on the table in question and\nsee if that helps the planner choose a better plan.\n\nAlex\n", "msg_date": "Wed, 11 Jul 2007 16:01:30 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" }, { "msg_contents": "On 7/11/07, Alex Deucher <[email protected]> wrote:\n> On 7/11/07, A. Kretschmer <[email protected]> wrote:\n> > am Wed, dem 11.07.2007, um 14:52:01 -0400 mailte Alex Deucher folgendes:\n> > > >Okay, i got a really different plan, but i expected _NOT_ a\n> > > >performance-boost like this. I expected the opposite.\n> > > >\n> > > >\n> > > >It's not a really problem, i just played with this. But i'm confused\n> > > >about this...\n> > > >\n> > >\n> > > your results are getting cached. try two queries in a row with the same\n> > > plan.\n> >\n> > Thanks for the response, but I've done this, no difference.\n> >\n>\n> try bumping up the default stats target on the table in question and\n> see if that helps the planner choose a better plan.\n>\n\nand be sure to run analyze again.\n\nAlex\n", "msg_date": "Wed, 11 Jul 2007 16:01:58 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" }, { "msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> Okay, i know, not really a recent version:\n> PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n\nYou need a newer one.\n\n> -> BitmapAnd (cost=1217.69..1217.69 rows=39 width=0) (actual time=163.681..163.681 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_ab (cost=0.00..5.95 rows=558 width=0) (actual time=0.078..0.078 rows=109 loops=1)\n> Index Cond: (ab = 347735)\n> -> Bitmap Index Scan on idx_maschine (cost=0.00..1211.49 rows=148997 width=0) (actual time=163.459..163.459 rows=164760 loops=1)\n> Index Cond: (maschine = 1200)\n\nThis is simply a stupid choice on the part of choose_bitmap_and() ---\nit's adding on a second index to try to filter on maschine when that\nscan will actually just increase the cost.\n\nI've revisited choose_bitmap_and() a couple times since then; try\n8.1.9 and see if it gets this right.\n\nAlso, part of the problem here looks to be an overestimate of the number\nof rows matching ab = 347735. 
It might help to increase the statistics\ntarget for that column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 16:04:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan " }, { "msg_contents": "Tom Lane <[email protected]> schrieb:\n\n\nThanks you and Alex for the response.\n\n> > PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n> \n> You need a newer one.\n\nI know ;-)\n\n> \n> This is simply a stupid choice on the part of choose_bitmap_and() ---\n> it's adding on a second index to try to filter on maschine when that\n> scan will actually just increase the cost.\n> \n> I've revisited choose_bitmap_and() a couple times since then; try\n> 8.1.9 and see if it gets this right.\n\nOkay, but later.\n\n> \n> Also, part of the problem here looks to be an overestimate of the number\n> of rows matching ab = 347735. It might help to increase the statistics\n> target for that column.\n\nI will try this tomorrow and inform you about the result. I've never\ndone this before, i need to read the docs about this.\n\nThank you again.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Wed, 11 Jul 2007 22:19:58 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" }, { "msg_contents": "am Wed, dem 11.07.2007, um 22:19:58 +0200 mailte Andreas Kretschmer folgendes:\n> > Also, part of the problem here looks to be an overestimate of the number\n> > of rows matching ab = 347735. It might help to increase the statistics\n> > target for that column.\n> \n> I will try this tomorrow and inform you about the result. I've never\n> done this before, i need to read the docs about this.\n\nOkay, done, setting to 100 (thanks to mastermind) and now i got an Index\nScan using idx_ab with a total runtime: 0.330 ms.\n\nGreat, thanks. And yes, i will update soon...\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 12 Jul 2007 07:06:31 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap-index-scan slower than normal index scan" } ]
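For anyone else who, like the poster, has never adjusted a per-column statistics target: the fix that resolved this thread boils down to two statements, shown here with the table and column named above and the value of 100 the poster settled on.

ALTER TABLE bde_meldungen ALTER COLUMN ab SET STATISTICS 100;
ANALYZE bde_meldungen;

The larger sample only takes effect once the table has been re-ANALYZEd, which is why both steps are needed.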
[ { "msg_contents": "Recently, I have been doing extensive profiling of a version 8.1.4 Postgres DB with about 175 \ntables and 5 GB of data (the server running on Fedora Linux and the clients on Windows XP). \nSurprisingly, one of the bottlenecks is TRUNCATE TABLE and that command is really slow as compared \nto other operations. For example, we have operations like:\n\nTRUNCATE TABLE my_temporary_table\nCOPY my_temporary_table ... FROM STDIN BINARY\ndo_something\n\nwhere do_something is using the data in my_temporary_table to do something like a JOIN or a mass \nUPDATE or whatever.\n\nNow, it turns out that typically most time is lost in TRUNCATE TABLE, in fact it spoils the \nperformance of most operations on the DB !\n\nI read in a mailing list archive that TRUNCATE TABLE is slow since it was made transaction-safe \nsomewhere in version 7, but for operations on a temporary table (with data coming from the outside \nworld) that is irrelevant, at least for my application, in casu, a middleware software package.\n\nSo, my questions are\n\n1. Why is TRUNCATE TABLE so slow (even if transaction-safe)\n2. Is there is way to dig up in the source code somewhere a quick-and-dirty TRUNCATE TABLE \nalternative for operations on temporary tables that need not be transaction-safe (because the \nmiddleware itself can easily restore anything that goes wrong there).\n\nI noticed, by the way, that removing records in general is painfully slow, but I didn't do a \ndetailed analysis of that issue yet.\n\nAs an alternative to TRUNCATE TABLE I tried to CREATE and DROP a table, but that wasn't any faster.\n\nSincerely,\n\nAdriaan van Os\n", "msg_date": "Wed, 11 Jul 2007 22:10:49 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "TRUNCATE TABLE" }, { "msg_contents": "Adriaan van Os <[email protected]> writes:\n> Surprisingly, one of the bottlenecks is TRUNCATE TABLE and that\n> command is really slow as compared to other operations.\n\nWhen you don't quantify that statement at all, it's hard to make an\nintelligent comment on it, but TRUNCATE per se shouldn't be slow.\nAre you sure you are not measuring a delay to obtain exclusive lock\non the table before it can be truncated (ie, waiting for other\ntransactions to finish with it)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 17:54:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " }, { "msg_contents": "\"Adriaan van Os\" <[email protected]> writes:\n\n> Recently, I have been doing extensive profiling of a version 8.1.4 Postgres DB\n> with about 175 tables and 5 GB of data (the server running on Fedora Linux and\n> the clients on Windows XP). Surprisingly, one of the bottlenecks is TRUNCATE\n> TABLE and that command is really slow as compared to other operations. For\n> example, we have operations like:\n\nWhat filesystem is this? Some filesystems are notoriously slow at deleting\nlarge files. The mythtv folk who face this problem regularly recommend either\nJFS or XFS for this purpose.\n\nPostgres generally doesn't really need to be able to delete large files\nquickly. The only times files are deleted which come to mind are when you DROP\nor TRUNCATE or possibly when you VACUUM a table.\n\n> I noticed, by the way, that removing records in general is painfully slow, but\n> I didn't do a detailed analysis of that issue yet.\n\nThat's strange. Deleting should be the *quickest* operation in Postgres. 
Do\nyou perchance have foreign key references referencing this table? Do you have\nany triggers?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 11 Jul 2007 23:15:52 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Gregory Stark wrote:\n> That's strange. Deleting should be the *quickest* operation in Postgres. Do\n> you perchance have foreign key references referencing this table?\n\nNo.\n\n> Do you have any triggers?\n\nNo.\n\nTom Lane wrote:\n> Adriaan van Os <[email protected]> writes:\n>> Surprisingly, one of the bottlenecks is TRUNCATE TABLE and that\n>> command is really slow as compared to other operations.\n> \n> When you don't quantify that statement at all, it's hard to make an\n> intelligent comment on it, but TRUNCATE per se shouldn't be slow.\n> Are you sure you are not measuring a delay to obtain exclusive lock\n> on the table before it can be truncated (ie, waiting for other\n> transactions to finish with it)?\n\nDuring the tests, there is only one connection to the database server. No other transactions are \nrunning.\n\n> When you don't quantify that statement at all, it's hard to make an\n> intelligent comment on it, but TRUNCATE per se shouldn't be slow.\n\nBelow are some timings, in milliseconds.\n\n> TRUNCATE TABLE my_temporary_table\n> COPY my_temporary_table ... FROM STDIN BINARY\n> do_something\n\nThe temporary table has one INT4 field and no indices.\n\nNumrows\t\tTRUNCATE (ms)\t\t\tCOPY (ms)\t\tSELECT (ms)\n 5122\t\t\t\t 80,6\t\t\t\t\t16,1\t\t\t\t51,2\n 3910\t\t\t\t 79,5\t\t\t\t\t12,9\t\t\t\t39,9\n 2745\t\t\t\t 90,4\t\t\t\t\t10,7\t\t\t\t32,4\n 1568\t\t\t\t 99,5\t\t\t\t\t 7,6\t\t\t\t24,7\n 398\t\t\t\t 161,1\t\t\t\t\t 4,0\t\t\t\t22,1\n 200\t\t\t\t 79,5\t\t\t\t\t 3,3\t\t\t\t22,0\n 200\t\t\t\t 87,9 \t\t\t\t\t 3,1\t\t\t\t22,0\n222368 \t\t\t\t4943,5\t\t\t\t 728,6\t\t\t7659,5\n222368\t\t\t\t1685,7\t\t\t\t 512,2\t\t\t2883,1\n\nNote how fast the COPY is (which is nice). The SELECT statement uses the temporary table.\n\nRegards,\n\nAdriaan van Os\n\n", "msg_date": "Thu, 12 Jul 2007 09:37:40 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Adriaan van Os <[email protected]> writes:\n> Tom Lane wrote:\n>> When you don't quantify that statement at all, it's hard to make an\n>> intelligent comment on it, but TRUNCATE per se shouldn't be slow.\n\n> Below are some timings, in milliseconds.\n\nI can only conclude that you're using a seriously bad filesystem :-(\n\nI tried to replicate your results on a fairly old and slow HPUX box.\nI get a fairly repeatable time of around 40msec to truncate a table;\nthis is presumably mostly filesystem time to create one file and delete\nanother. 
I used CVS HEAD for this because the devel version of psql\nsupports reporting \\timing for \\copy commands, but I'm quite sure that\nTRUNCATE isn't any faster than it was in 8.2:\n\nregression=# create table tab(f1 int);\nCREATE TABLE\nTime: 63.775 ms\nregression=# insert into tab select random()*10000 from generate_series(1,5000);\nINSERT 0 5000\nTime: 456.011 ms\nregression=# \\copy tab to 'tab.data' binary\nTime: 80.343 ms\nregression=# truncate table tab;\nTRUNCATE TABLE\nTime: 35.825 ms\nregression=# \\copy tab from 'tab.data' binary\nTime: 391.928 ms\nregression=# select count(*) from tab;\n count \n-------\n 5000\n(1 row)\n\nTime: 21.457 ms\nregression=# truncate table tab;\nTRUNCATE TABLE\nTime: 47.867 ms\nregression=# \\copy tab from 'tab.data' binary\nTime: 405.074 ms\nregression=# select count(*) from tab;\n count \n-------\n 5000\n(1 row)\n\nTime: 20.247 ms\n\nIf I increase the test size to 200K rows, I get a proportional increase\nin the copy and select times, but truncate stays about the same:\n\nregression=# truncate table tab;\nTRUNCATE TABLE\nTime: 40.196 ms\nregression=# \\copy tab from 'tab.data' binary\nTime: 15779.689 ms\nregression=# select count(*) from tab;\n count \n--------\n 200000\n(1 row)\n\nTime: 642.965 ms\n\nYour numbers are not making any sense to me. In particular there is no\nreason in the Postgres code for it to take longer to truncate a 200K-row\ntable than a 5K-row table. (I would expect some increment at the point\nof having 1GB in the table, where we'd create a second table segment\nfile, but you are nowhere near that.)\n\nThe bottom line seems to be that you have a filesystem that takes a\nlong time to delete a file, with the cost rising rapidly as the file\ngets bigger. Can you switch to a different filesystem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Jul 2007 15:20:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " }, { "msg_contents": "Gregory Stark wrote:\n> What filesystem is this?\n\nExt3 on Fedora Linux.\n\n> Some filesystems are notoriously slow at deleting\n> large files. The mythtv folk who face this problem regularly recommend either\n> JFS or XFS for this purpose.\n\nThat's a remarkable advice, because XFS is known to be slow at creating and deleting files, see \n<http://en.wikipedia.org/wiki/XFS> and <http://everything2.com/index.pl?node_id=1479435>.\n\nRegards,\n\nAdriaan van Os\n\n", "msg_date": "Fri, 13 Jul 2007 09:47:06 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "\n\"Adriaan van Os\" <[email protected]> writes:\n\n> That's a remarkable advice, because XFS is known to be slow at creating and\n> deleting files, see <http://en.wikipedia.org/wiki/XFS> and\n> <http://everything2.com/index.pl?node_id=1479435>.\n\nI think this is a case of \"you're both right\". XFS may have to do more work\nthan other filesystems for meta-information updates. However It still only has\nto do a constant or nearly constant amount of work. So it may be slower at\nmanaging a large directory of thousands of small files than ext3, but it's\nfaster at deleting a single 1G file than ext3.\n\nOn mythtv the experience is that if you use ext3 and delete a large file while\nrecording another program you can expect the new recording to lose stutter at\nthat point. 
The large delete will lock out the recording from writing to the\nfilesystem for several seconds.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 13 Jul 2007 10:02:33 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Adriaan van Os a �crit :\n> That's a remarkable advice, because XFS is known to be slow at creating \n> and deleting files, see <http://en.wikipedia.org/wiki/XFS> and \n> <http://everything2.com/index.pl?node_id=1479435>.\n> \n\ndate of article: Fri Jul 25 2003 !\n", "msg_date": "Fri, 13 Jul 2007 11:05:53 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "On 7/13/07, Jean-Max Reymond <[email protected]> wrote:\n> Adriaan van Os a écrit :\n> > That's a remarkable advice, because XFS is known to be slow at creating\n> > and deleting files, see <http://en.wikipedia.org/wiki/XFS> and\n> > <http://everything2.com/index.pl?node_id=1479435>.\n> >\n>\n> date of article: Fri Jul 25 2003 !\n>\n\nEven at this date, the article end with :\n\n\"More interestingly, my delete performance has actually superseded\nthat of ext3, for\nboth random and sequential deletes! The most major weakness of XFS has been\neliminated, and my spankin' new filesystem is ready to rock. Cheers!\"\n\n-- \nThomas SAMSON\nI came, I saw, I deleted all your files.\n", "msg_date": "Fri, 13 Jul 2007 11:21:02 +0200", "msg_from": "\"Thomas Samson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "On Fri, Jul 13, 2007 at 09:47:06AM +0200, Adriaan van Os wrote:\n>That's a remarkable advice, because XFS is known to be slow at creating and \n>deleting files, see <http://en.wikipedia.org/wiki/XFS> and \n><http://everything2.com/index.pl?node_id=1479435>.\n\nxfs' slowness is proportional to the *number* rather than the *size* of \nthe files. In postgres you'll tend to have fewer, larger, files than you \nwould in (e.g.) a source code repository, so it is generally more \nimportant to have a filesystem that deletes large files quickly than a \nfilesystem that deletes lots of files quickly. I'd suspect that the same \nis true for mythtv.\n\nMike Stone\n", "msg_date": "Fri, 13 Jul 2007 08:44:38 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Michael Stone <[email protected]> writes:\n> xfs' slowness is proportional to the *number* rather than the *size* of \n> the files. In postgres you'll tend to have fewer, larger, files than you \n> would in (e.g.) a source code repository, so it is generally more \n> important to have a filesystem that deletes large files quickly than a \n> filesystem that deletes lots of files quickly.\n\nThe weird thing is that the files in question were hardly \"large\".\nIIRC his test case used a single int4 column, so the rows were probably\n36 bytes apiece allowing for all overhead. 
So the test cases with about\n5K rows were less than 200K in the file, and the ones with 200K rows\nwere still only a few megabytes.\n\nI tried the test on my Linux machine (which I couldn't do when I\nresponded earlier because it was tied up with another test), and\nsaw truncate times of a few milliseconds for both table sizes.\nThis is ext3 on Fedora 6.\n\nSo I'm still of the opinion that there's something broken about\nAdriaan's infrastructure, but maybe we have to look to an even\nlower level than the filesystem. Perhaps he should try getting\nsome bonnie++ benchmark numbers to see if his disk is behaving\nproperly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Jul 2007 10:40:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " }, { "msg_contents": "Tom Lane wrote:\n> Michael Stone <[email protected]> writes:\n>> xfs' slowness is proportional to the *number* rather than the *size* of \n>> the files. In postgres you'll tend to have fewer, larger, files than you \n>> would in (e.g.) a source code repository, so it is generally more \n>> important to have a filesystem that deletes large files quickly than a \n>> filesystem that deletes lots of files quickly.\n> \n> The weird thing is that the files in question were hardly \"large\".\n> IIRC his test case used a single int4 column, so the rows were probably\n> 36 bytes apiece allowing for all overhead. So the test cases with about\n> 5K rows were less than 200K in the file, and the ones with 200K rows\n> were still only a few megabytes.\n\nRight.\n\n> I tried the test on my Linux machine (which I couldn't do when I\n> responded earlier because it was tied up with another test), and\n> saw truncate times of a few milliseconds for both table sizes.\n> This is ext3 on Fedora 6.\n> \n> So I'm still of the opinion that there's something broken about\n> Adriaan's infrastructure, but maybe we have to look to an even\n> lower level than the filesystem. Perhaps he should try getting\n> some bonnie++ benchmark numbers to see if his disk is behaving\n> properly.\n\nWell, I can hardly believe that something is broken with the infrastructure, because I have seen \nthe same behaviour on other hardware (or it must be that I am using the standard postgresql.conf).\n\nI started another test. I copied an existing database (not very large, 35 tables, typically a few \nhundred up to a few thousand records) with CREATE DATABASE testdb TEMPLATE mydb and started to \nremove random tables from testdb with DROP TABLE and TRUNCATE TABLE. I did this with the query tool \nof pgAdmin III, to exclude any doubts about my own software (that uses pqlib). The hardware is an \nIntel dual-core 17-inch MacBook Pro running Mac OS X 10.4.\n\nI can not make any sense of the results. Truncating or dropping a table typically takes 1-2 ms or \n30-70 ms or 200-500 ms. I have seen that truncating the *same* table with the *same* data takes 1 \nms in one test and takes 532 ms in another one. 
The database has no foreign keys.\n\nBased on these results, I still believe there is a problem in Postgres.\n\nRegards,\n\nAdriaan van Os\n", "msg_date": "Fri, 13 Jul 2007 18:17:18 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "On Fri, Jul 13, 2007 at 06:17:18PM +0200, Adriaan van Os wrote:\n> The hardware is an Intel dual-core 17-inch MacBook Pro running Mac \n> OS X 10.4.\n\nTo isolate things, have you tried testing a different operating system?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 13 Jul 2007 18:29:47 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Adriaan van Os <[email protected]> writes:\n> I started another test. I copied an existing database (not very large,\n> 35 tables, typically a few hundred up to a few thousand records) with\n> CREATE DATABASE testdb TEMPLATE mydb and started to remove random\n> tables from testdb with DROP TABLE and TRUNCATE TABLE. I did this with\n> the query tool of pgAdmin III, to exclude any doubts about my own\n> software (that uses pqlib).\n\nCan you try it with plain psql? pgAdmin is a variable that wasn't\naccounted for in my tests.\n\n> The hardware is an Intel dual-core 17-inch\n> MacBook Pro running Mac OS X 10.4.\n\nHmm. I thought you said Fedora before. However, I'd done a few tests\nyesterday on my own Mac laptop (Al G4) and not gotten results that were\nout of line with HPUX or Fedora.\n\nDoes anyone else want to try replicating these tests?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Jul 2007 12:30:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " }, { "msg_contents": "On Fri, Jul 13, 2007 at 12:30:46PM -0400, Tom Lane wrote:\n> Adriaan van Os <[email protected]> writes:\n> > I started another test. I copied an existing database (not very large,\n> > 35 tables, typically a few hundred up to a few thousand records) with\n> > CREATE DATABASE testdb TEMPLATE mydb and started to remove random\n> > tables from testdb with DROP TABLE and TRUNCATE TABLE. I did this with\n> > the query tool of pgAdmin III, to exclude any doubts about my own\n> > software (that uses pqlib).\n> \n> Can you try it with plain psql? pgAdmin is a variable that wasn't\n> accounted for in my tests.\n> \n> > The hardware is an Intel dual-core 17-inch\n> > MacBook Pro running Mac OS X 10.4.\n> \n> Hmm. I thought you said Fedora before. However, I'd done a few tests\n> yesterday on my own Mac laptop (Al G4) and not gotten results that were\n> out of line with HPUX or Fedora.\n> \n> Does anyone else want to try replicating these tests?\n\nThe following is consistently between 1 and 3 ms:\ndecibel=# create table i as select * from generate_series(1,20000) i; drop table i;\nSELECT\nTime: 42.413 ms\nDROP TABLE\nTime: 1.415 ms\ndecibel=# select version();\n version \n--------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.3devel on i386-apple-darwin8.10.1, compiled by GCC i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5363)\n(1 row)\n\nTime: 46.870 ms\ndecibel=# \\! uname -a\nDarwin platter.local 8.10.1 Darwin Kernel Version 8.10.1: Wed May 23 16:33:00 PDT 2007; root:xnu-792.22.5~1/RELEASE_I386 i386 i386\ndecibel=# \n\nTruncate is a different story... 
this is consistently either 6 something ms or\n17 something ms:\n\ndecibel=# insert into i select generate_series(1,20000); truncate i;\nINSERT 0 20000\nTime: 600.940 ms\nTRUNCATE TABLE\nTime: 6.313 ms\ndecibel=# \n\nThis is on a 17\" MBP, fsync turned on.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Fri, 13 Jul 2007 12:12:47 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Tom Lane wrote:\n> Adriaan van Os <[email protected]> writes:\n> > I started another test. I copied an existing database (not very large,\n> > 35 tables, typically a few hundred up to a few thousand records) with\n> > CREATE DATABASE testdb TEMPLATE mydb and started to remove random\n> > tables from testdb with DROP TABLE and TRUNCATE TABLE. I did this with\n> > the query tool of pgAdmin III, to exclude any doubts about my own\n> > software (that uses pqlib).\n> \n> Can you try it with plain psql? pgAdmin is a variable that wasn't\n> accounted for in my tests.\n> \n> > The hardware is an Intel dual-core 17-inch\n> > MacBook Pro running Mac OS X 10.4.\n> \n> Hmm. I thought you said Fedora before. However, I'd done a few tests\n> yesterday on my own Mac laptop (Al G4) and not gotten results that were\n> out of line with HPUX or Fedora.\n> \n> Does anyone else want to try replicating these tests?\n\nI notice that the times are sometimes different when the table is TEMP.\nDROP TABLE times are sometimes in the vicinity of 13ms and at other\ntimes 200ms. My test is\n\nvacuum pg_class; vacuum pg_type; vacuum pg_attribute;\ncreate temp table van_os (a int);\ninsert into van_os select * from generate_series(1, 200000); drop table van_os;\n\npassed as a single line to psql (no -c).\n\nTimes are closer to 2ms when the table has only 5000 tuples.\n\n\n\n\nDoing this\ninsert into van_os select * from generate_series(1, 200000); truncate van_os;\n\nI get about 200ms on the truncate step.\n\nWhereas if I do this\ninsert into van_os select * from generate_series(1, 5000); truncate van_os;\ntimes are closer to 8-13 ms.\n\nI guess the difference is the amount of data that ext3 is logging on its\njournal. 
My ext3 journal settings are default.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\n\"�Que diferencia tiene para los muertos, los hu�rfanos, y aquellos que han\nperdido su hogar, si la loca destrucci�n ha sido realizada bajo el nombre\ndel totalitarismo o del santo nombre de la libertad y la democracia?\" (Gandhi)\n", "msg_date": "Fri, 13 Jul 2007 13:35:01 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Hello,\n\nI tested speed difference between TRUNCATE TABLE and DROP TABLE\n(tested on my notebook ext3 and Linux fedora 7):\n\nCREATE OR REPLACE FUNCTION test01() RETURNS SETOF double precision\nAS $$\nDECLARE t1 timestamp with time zone;\nBEGIN\n CREATE TEMP TABLE foo(a integer);\n FOR i IN 1..1000 LOOP\n INSERT INTO foo SELECT 1 FROM generate_series(1,10000);\n t1 := clock_timestamp();\n TRUNCATE TABLE foo;\n RETURN NEXT EXTRACT('ms' FROM clock_timestamp()-t1);\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION test02() RETURNS SETOF double precision\nAS $$\nDECLARE t1 timestamp with time zone;\nBEGIN\n FOR i IN 1..1000 LOOP\n EXECUTE 'CREATE TEMP TABLE foo(a integer);';\n EXECUTE 'INSERT INTO foo SELECT 1 FROM generate_series(1,10000);';\n t1 := clock_timestamp();\n EXECUTE 'DROP TABLE foo;';\n RETURN NEXT EXTRACT('ms' FROM clock_timestamp()-t1);\n END LOOP;\n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n\nvacuum pg_class; vacuum pg_type; vacuum pg_attribute;\n\npostgres=# select count(*), min(t), max(t), avg(t), stddev_samp(t),\nstddev_pop(t) from test01() t(t);\n count | min | max | avg | stddev_samp | stddev_pop\n-------+-------+---------+----------+------------------+------------------\n 1000 | 0.295 | 803.971 | 3.032483 | 30.0036729610037 | 29.9886673721876\n(1 row)\n\nTime: 33826,841 ms\npostgres=# select count(*), min(t), max(t), avg(t), stddev_samp(t),\nstddev_pop(t) from test02() t(t);\n count | min | max | avg | stddev_samp | stddev_pop\n-------+-------+--------+----------+------------------+-------------------\n 1000 | 0.418 | 20.792 | 0.619168 | 0.81550718804297 | 0.815099332459549\n(1 row)\n\nTime: 33568,818 ms\n\nIt's true, stddev_samp(TRUNCATE) >> stddev_samp(DROP)\n\nRegards\nPavel Stehule\n", "msg_date": "Fri, 13 Jul 2007 21:12:34 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Tom Lane wrote:\n> Adriaan van Os <[email protected]> writes:\n>> I started another test. I copied an existing database (not very large,\n>> 35 tables, typically a few hundred up to a few thousand records) with\n>> CREATE DATABASE testdb TEMPLATE mydb and started to remove random\n>> tables from testdb with DROP TABLE and TRUNCATE TABLE. I did this with\n>> the query tool of pgAdmin III, to exclude any doubts about my own\n>> software (that uses pqlib).\n> \n> Can you try it with plain psql? pgAdmin is a variable that wasn't\n> accounted for in my tests.\n\nWill do that and report the results.\n\n>> The hardware is an Intel dual-core 17-inch\n>> MacBook Pro running Mac OS X 10.4.\n> \n> Hmm. 
I thought you said Fedora before.\n\nYes, the test that I mentioned yesterday was on Fedora, but as you were \"of the opinion that \nthere's something broken about\nAdriaan's infrastructure\" I tried the new test on a completely different system today.\n\n However, I'd done a few tests\n> yesterday on my own Mac laptop (Al G4) and not gotten results that were\n> out of line with HPUX or Fedora.\n> \n> Does anyone else want to try replicating these tests?\n\nThanks,\n\nAdriaan van Os\n", "msg_date": "Fri, 13 Jul 2007 21:32:09 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "On Fri, Jul 13, 2007 at 09:12:34PM +0200, Pavel Stehule wrote:\n> Hello,\n> \n> I tested speed difference between TRUNCATE TABLE and DROP TABLE\n> (tested on my notebook ext3 and Linux fedora 7):\n> \n> CREATE OR REPLACE FUNCTION test01() RETURNS SETOF double precision\n> AS $$\n> DECLARE t1 timestamp with time zone;\n> BEGIN\n> CREATE TEMP TABLE foo(a integer);\n> FOR i IN 1..1000 LOOP\n> INSERT INTO foo SELECT 1 FROM generate_series(1,10000);\n> t1 := clock_timestamp();\n> TRUNCATE TABLE foo;\n> RETURN NEXT EXTRACT('ms' FROM clock_timestamp()-t1);\n> END LOOP;\n> RETURN;\n> END;\n> $$ LANGUAGE plpgsql;\n> \n> CREATE OR REPLACE FUNCTION test02() RETURNS SETOF double precision\n> AS $$\n> DECLARE t1 timestamp with time zone;\n> BEGIN\n> FOR i IN 1..1000 LOOP\n> EXECUTE 'CREATE TEMP TABLE foo(a integer);';\n> EXECUTE 'INSERT INTO foo SELECT 1 FROM generate_series(1,10000);';\n> t1 := clock_timestamp();\n> EXECUTE 'DROP TABLE foo;';\n> RETURN NEXT EXTRACT('ms' FROM clock_timestamp()-t1);\n> END LOOP;\n> RETURN;\n> END;\n> $$ LANGUAGE plpgsql;\n\nAre you sure you can ignore the added cost of an EXECUTE? I tried the following as a test, but my repeatability sucks... 
:/\n\nCREATE OR REPLACE FUNCTION test02() RETURNS SETOF double precision AS $$\nDECLARE t1 timestamp with time zone; \nBEGIN \n CREATE TEMP TABLE foo(a integer); \n FOR i IN 1..1000 LOOP \n EXECUTE 'INSERT INTO foo SELECT 1 FROM generate_series(1,10000)'; \n t1 := clock_timestamp(); \n EXECUTE 'TRUNCATE TABLE foo'; \n RETURN NEXT EXTRACT('ms' FROM clock_timestamp()-t1); \n END LOOP; \n RETURN; \nEND; \n$$ LANGUAGE plpgsql;\n\ndecibel=# drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test01() t(t);drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test03() t(t);drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test01() t(t);drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test03() t(t);drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test01() t(t);drop table foo;select count(*), min(t), max(t), avg(t), stddev_samp(t),stddev_pop(t) from test03() t(t);\nERROR: table \"foo\" does not exist\n count | min | max | avg | stddev_samp | stddev_pop \n-------+-------+----------+----------+------------------+------------------\n 1000 | 0.533 | 1405.747 | 3.444874 | 44.4166419484871 | 44.3944280726548\n(1 row)\n\nTime: 44945.101 ms\nDROP TABLE\nTime: 11.204 ms\n count | min | max | avg | stddev_samp | stddev_pop \n-------+-------+----------+----------+------------------+------------------\n 1000 | 0.446 | 1300.168 | 7.611269 | 79.7606049935278 | 79.7207147159672\n(1 row)\n\nTime: 44955.870 ms\nDROP TABLE\nTime: 148.186 ms\n count | min | max | avg | stddev_samp | stddev_pop \n-------+------+--------+----------+-----------------+------------------\n 1000 | 0.46 | 21.585 | 1.991845 | 1.2259573313755 | 1.22534419938848\n(1 row)\n\nTime: 47566.985 ms\nDROP TABLE\nTime: 5.065 ms\n count | min | max | avg | stddev_samp | stddev_pop \n-------+-------+----------+----------+------------------+------------------\n 1000 | 0.479 | 1907.865 | 5.368207 | 73.8576562901696 | 73.8207182251985\n(1 row)\n\nTime: 48681.777 ms\nDROP TABLE\nTime: 7.863 ms\n count | min | max | avg | stddev_samp | stddev_pop \n-------+-------+----------+----------+-----------------+-----------------\n 1000 | 0.562 | 1009.578 | 2.998867 | 31.874023877249 | 31.858082879064\n(1 row)\n\nTime: 37426.441 ms\nDROP TABLE\nTime: 4.935 ms\n count | min | max | avg | stddev_samp | stddev_pop \n-------+------+--------+----------+------------------+------------------\n 1000 | 0.42 | 20.721 | 2.064845 | 1.24241007069275 | 1.24178871027844\n(1 row)\n\nTime: 47906.628 ms\ndecibel=# \n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 16 Jul 2007 16:39:45 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Fri, Jul 13, 2007 at 09:12:34PM +0200, Pavel Stehule wrote:\n>> I tested speed difference between TRUNCATE TABLE and DROP TABLE\n>> (tested on my notebook ext3 and Linux fedora 7):\n\n> Are you sure you can ignore the added cost of an EXECUTE? I tried the\n> following as a test, but my repeatability sucks... :/\n\nThe repeatability was sucky for me too, until I turned off autovacuum.\nI am not sure why autovac is interfering with truncate more than with\ncreate/drop, but that seems to be what's happening. 
Note that all the\ntables involved are temp, so autovac really shouldn't be touching them\nat all, so this is a bit surprising.\n\n[ investigates awhile... ] There is a fairly serious mistake in Pavel's\ntest script, which is that it is testing 1000 iterations *within a\nsingle transaction*. We do not drop removable tables until end of\ntransaction, and that means that the actual filesystem effects of drop\nor truncate are not being timed by the script. The part that he's\nreally timing is:\n\n* for DROP: mark a bunch of system catalog tuples as deleted\n\n* for TRUNCATE: create one new, empty disk file, then mark one pg_class\ntuple as deleted and insert a replacement one.\n\nThus the timing issue (at least as exhibited by this script) has nothing\nwhatever to do with the time to delete a file, but with the time to\ncreate one. Since the part of DROP being timed has probably got no I/O\ninvolved at all (the tuples being touched are almost surely still in\nshared buffers), it's unsurprising that it is consistently fast.\n\nI tried strace -T on the backend while running the TRUNCATE script, and\ngot a smoking gun: most of the open(O_CREAT) calls take only 130 to 150\nmicroseconds, but the tail of the distribution is awful:\n\n0.000186\n0.000187\n0.000188\n0.000190\n0.000193\n0.000194\n0.000204\n0.000208\n0.000235\n0.000265\n0.000274\n0.000289\n0.000357\n0.000387\n0.000410\n0.000434\n0.000435\n0.000488\n0.000563\n0.065674\n0.583236\n\nSomehow, autovac is doing something that makes the filesystem go nuts\nevery so often, and take an astonishingly long time to create an empty\nfile. But autovac itself doesn't create or delete any files, so what's\nup here?\n\nAlso, I was able to reproduce the variability in timing on HPUX and\nDarwin as well as Linux, so we can't put all the blame on ext3.\n(I didn't drill down to the strace level on the other two machines,\nthough, so it's possible that there is a different mechanism at work\nthere.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jul 2007 20:18:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " }, { "msg_contents": "Tom Lane wrote:\n\n> Thus the timing issue (at least as exhibited by this script) has nothing\n> whatever to do with the time to delete a file, but with the time to\n> create one. Since the part of DROP being timed has probably got no I/O\n> involved at all (the tuples being touched are almost surely still in\n> shared buffers), it's unsurprising that it is consistently fast.\n\nIn my original profiling, CREATE TEMPORARY TABLE/DROP TABLE wasn't much faster than TRUNCATE TABLE. \nWhen I try it again now, I see that DROP TABLE is consistently fast, while the timings of CREATE \nTEMPORARY TABLE vary as much as those of TRUNCATE TABLE. Your observations on the time needed to \nopen a file confirm that, I think.\n\nIn my test databases, autovacuum is off.\n\nRegards,\n\nAdriaan van Os\n", "msg_date": "Wed, 18 Jul 2007 09:36:45 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Tom Lane wrote:\n\n> Somehow, autovac is doing something that makes the filesystem go nuts\n> every so often, and take an astonishingly long time to create an empty\n> file. 
But autovac itself doesn't create or delete any files, so what's\n> up here?\n> \n> Also, I was able to reproduce the variability in timing on HPUX and\n> Darwin as well as Linux, so we can't put all the blame on ext3.\n> (I didn't drill down to the strace level on the other two machines,\n> though, so it's possible that there is a different mechanism at work\n> there.)\n\nAny news since this message ? Should I file a bug report ?\n\nRegards,\n\nAdriaan van Os\n", "msg_date": "Wed, 01 Aug 2007 10:27:10 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": ">>> On Mon, Jul 16, 2007 at 7:18 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> Somehow, autovac is doing something that makes the filesystem go nuts\n> every so often, and take an astonishingly long time to create an empty\n> file. But autovac itself doesn't create or delete any files, so what's\n> up here?\n \nHave you ruled out checkpoints as the culprit?\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 01 Aug 2007 09:22:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Adriaan van Os wrote:\n> Tom Lane wrote:\n>\n>> Somehow, autovac is doing something that makes the filesystem go nuts\n>> every so often, and take an astonishingly long time to create an empty\n>> file. But autovac itself doesn't create or delete any files, so what's\n>> up here?\n>> Also, I was able to reproduce the variability in timing on HPUX and\n>> Darwin as well as Linux, so we can't put all the blame on ext3.\n>> (I didn't drill down to the strace level on the other two machines,\n>> though, so it's possible that there is a different mechanism at work\n>> there.)\n>\n> Any news since this message ? Should I file a bug report ?\n\nWere you able to show that turning off autovacuum removes the\nperformance problem?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 1 Aug 2007 11:11:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Kevin Grittner wrote:\n>>>> On Mon, Jul 16, 2007 at 7:18 PM, in message <[email protected]>,\n> Tom Lane <[email protected]> wrote: \n>> Somehow, autovac is doing something that makes the filesystem go nuts\n>> every so often, and take an astonishingly long time to create an empty\n>> file. But autovac itself doesn't create or delete any files, so what's\n>> up here?\n> \n> Have you ruled out checkpoints as the culprit?\n\nThat's a good question. I will do some more tests, but I also suspect fsync \"cascading\"\n<http://www.uwsg.iu.edu/hypermail/linux/kernel/0708.0/1435.html>.\n\nRegards,\n\nAdriaan van Os\n\n", "msg_date": "Sat, 04 Aug 2007 23:39:31 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "On Sat, Aug 04, 2007 at 11:39:31PM +0200, Adriaan van Os wrote:\n> Kevin Grittner wrote:\n> >>>>On Mon, Jul 16, 2007 at 7:18 PM, in message \n> >>>><[email protected]>,\n> >Tom Lane <[email protected]> wrote: \n> >>Somehow, autovac is doing something that makes the filesystem go nuts\n> >>every so often, and take an astonishingly long time to create an empty\n> >>file. 
But autovac itself doesn't create or delete any files, so what's\n> >>up here?\n> > \n> >Have you ruled out checkpoints as the culprit?\n> \n> That's a good question. I will do some more tests, but I also suspect fsync \n> \"cascading\"\n> <http://www.uwsg.iu.edu/hypermail/linux/kernel/0708.0/1435.html>.\n\nInteresting. I'm guessing that ext3 has to sync out the entire journal\nup to the point in time that fsync() is called, regardless of what\nfiles/information the journal contains. Fortunately I think it's common\nknowledge to mount PostgreSQL filesystems with data=writeback, which\nhopefully eliminates much of that bottleneck... but if you don't do\nnoatime you're probably still spewing a lot out to the drive.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 6 Aug 2007 00:48:54 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE" }, { "msg_contents": "Decibel! <[email protected]> writes:\n> Interesting. I'm guessing that ext3 has to sync out the entire journal\n> up to the point in time that fsync() is called, regardless of what\n> files/information the journal contains. Fortunately I think it's common\n> knowledge to mount PostgreSQL filesystems with data=3Dwriteback, which\n> hopefully eliminates much of that bottleneck... but if you don't do\n> noatime you're probably still spewing a lot out to the drive.\n\nFWIW, I tried to test the above by running Pavel's script on an ext3\npartition mounted noatime,data=writeback. This didn't seem to make any\ndifference --- still very large deviations in the time to do a TRUNCATE.\nHowever the problem seems harder to reproduce now than it was three weeks\nago. In the meantime I installed a 2.6.22-based kernel instead of the\n2.6.20 one that Fedora was using before; I wonder whether the kernel\nguys tweaked something related ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2007 14:46:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE TABLE " } ]
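Regarding the ext3 mount options Decibel! mentions near the end of this thread (data=writeback and noatime): these are set per filesystem, typically in /etc/fstab. The entry below is only an illustration -- the device and mount point are placeholders -- and data=writeback gives up ordering guarantees between data and metadata, which is generally considered acceptable for a PostgreSQL data directory (the WAL provides crash safety) but should not be applied blindly to other filesystems.

# /etc/fstab -- illustrative entry; device and mount point are placeholders
/dev/sdb1   /var/lib/pgsql   ext3   noatime,data=writeback   0   2

# ext3 normally refuses to switch the data= mode on a simple remount,
# so the change takes effect on the next full mount; verify with:
mount | grep pgsql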
[ { "msg_contents": "All this talk of WAL writing lately has me wondering something I haven't \nspent enough time looking at the source to figure out myself this \nweek...any good rules of thumb out there for estimating WAL volume? I'm \nused to just measuring it via benchmarking but it strikes me a formula \nwould be nice to have for pre-planning.\n\nFor example, if I have a table where a typical row is X bytes wide, and \nI'm updating Y of those per second, what's the expected write rate of WAL \nvolume? Some % of those writes are going to be full pages; what's \ntypical? How much does the number and complexity of indexes factor into \nthings--just add the width of the index in bytes to the size of the \nrecord, or is it worse than that?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 11 Jul 2007 23:10:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Estimating WAL volume" }, { "msg_contents": "On Wed, 2007-07-11 at 23:10 -0400, Greg Smith wrote:\n> All this talk of WAL writing lately has me wondering something I haven't \n> spent enough time looking at the source to figure out myself this \n> week...any good rules of thumb out there for estimating WAL volume? I'm \n> used to just measuring it via benchmarking but it strikes me a formula \n> would be nice to have for pre-planning.\n> \n> For example, if I have a table where a typical row is X bytes wide, and \n> I'm updating Y of those per second, what's the expected write rate of WAL \n> volume? Some % of those writes are going to be full pages; what's \n> typical? How much does the number and complexity of indexes factor into \n> things--just add the width of the index in bytes to the size of the \n> record, or is it worse than that?\n\nI published an analysis of WAL traffic from Greg Stark earlier, based\nupon xlogdump.\n\nhttp://archives.postgresql.org/pgsql-hackers/2007-03/msg01589.php\n\nOther details of WAL volumes are in the code. Further analysis would be\nwelcome, to assist discussions of where to optimize next.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 16 Jul 2007 09:02:08 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimating WAL volume" } ]
[ { "msg_contents": "Hello.\n\nI've googled a bit but I think I can't match the keywords, so I thought\nI'll ask here:\n\nLet's say I've got a view with 100 columns and 1mln rows; some of them are\ncalculated \"on the fly\". For some reason I want only one column from\nthis view:\n\nselect col1 from huge_view;\n\nNow, does PostgreSQL skip all the calculations from other columns and\nexecutes this query faster then select * from huge_view?\n\n-- \n| And Do What You Will be the challenge | http://apcoln.linuxpl.org\n| So be it in love that harms none | http://biznes.linux.pl\n| For this is the only commandment. | http://www.juanperon.info\n`---* JID: [email protected] *---' http://www.naszedzieci.org\n\n", "msg_date": "Thu, 12 Jul 2007 08:33:45 +0000 (UTC)", "msg_from": "=?iso-8859-2?q?Marcin_St=EApnicki?= <[email protected]>", "msg_from_op": true, "msg_subject": "one column from huge view" }, { "msg_contents": "Marcin Stępnicki wrote:\n> Hello.\n> \n> I've googled a bit but I think I can't match the keywords, so I thought\n> I'll ask here:\n> \n> Let's say I've got a view with 100 columns and 1mln rows; some of them are\n> calculated \"on the fly\". For some reason I want only one column from\n> this view:\n> \n> select col1 from huge_view;\n> \n> Now, does PostgreSQL skip all the calculations from other columns and\n> executes this query faster then select * from huge_view?\n\nIn simple cases, yes. But for example, if you have a LEFT OUTER JOIN in \nthe view, the join is performed even if your query doesn't return any \ncolumns from the outer relation. Also, if the calculation contains \nimmutable functions, it's not skipped.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 12 Jul 2007 09:50:42 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: one column from huge view" }, { "msg_contents": "On Thu, Jul 12, 2007 at 09:50:42AM +0100, Heikki Linnakangas wrote:\n> Marcin Stępnicki wrote:\n> >Let's say I've got a view with 100 columns and 1mln rows; some of them are\n> >calculated \"on the fly\". For some reason I want only one column from\n> >this view:\n> >\n> >select col1 from huge_view;\n> >\n> >Now, does PostgreSQL skip all the calculations from other columns and\n> >executes this query faster then select * from huge_view?\n> \n> In simple cases, yes. But for example, if you have a LEFT OUTER JOIN in \n> the view, the join is performed even if your query doesn't return any \n> columns from the outer relation. Also, if the calculation contains \n> immutable functions, it's not skipped.\n\nDon't you mean \"if the calculation contains VOLATILE functions,\nit's not skipped\"?\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 12 Jul 2007 06:51:36 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: one column from huge view" }, { "msg_contents": "Michael Fuhr wrote:\n> On Thu, Jul 12, 2007 at 09:50:42AM +0100, Heikki Linnakangas wrote:\n>> Marcin Stępnicki wrote:\n>>> Let's say I've got a view with 100 columns and 1mln rows; some of them are\n>>> calculated \"on the fly\". For some reason I want only one column from\n>>> this view:\n>>>\n>>> select col1 from huge_view;\n>>>\n>>> Now, does PostgreSQL skip all the calculations from other columns and\n>>> executes this query faster then select * from huge_view?\n>> In simple cases, yes. 
But for example, if you have a LEFT OUTER JOIN in \n>> the view, the join is performed even if your query doesn't return any \n>> columns from the outer relation. Also, if the calculation contains \n>> immutable functions, it's not skipped.\n> \n> Don't you mean \"if the calculation contains VOLATILE functions,\n> it's not skipped\"?\n\nYes, thanks for the correction.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 12 Jul 2007 13:55:43 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: one column from huge view" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Marcin Stępnicki wrote:\n>> Now, does PostgreSQL skip all the calculations from other columns and\n>> executes this query faster then select * from huge_view?\n\n> In simple cases, yes.\n\nA rule of thumb is that it's been optimized if you don't see a \"Subquery\nScan\" node in the plan. As an example:\n\nregression=# create view v1 as select * from tenk1;\nCREATE VIEW\nregression=# create view v2 as select *,random() from tenk1;\nCREATE VIEW\nregression=# explain select unique1 from v1;\n QUERY PLAN\n-----------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=4)\n(1 row)\n\nregression=# explain select unique1 from v2;\n QUERY PLAN\n-------------------------------------------------------------------\n Subquery Scan v2 (cost=0.00..583.00 rows=10000 width=4)\n -> Seq Scan on tenk1 (cost=0.00..483.00 rows=10000 width=244)\n(2 rows)\n\nIf you want to look closer you can use EXPLAIN VERBOSE and count the\nTARGETENTRY nodes in the targetlist for each plan node. In the above\nexample, it's possible to see in the EXPLAIN VERBOSE output that the\nSeq Scan node in the first plan is computing only the single variable\nrequested, whereas in the second plan the Seq Scan node is computing\nall the outputs of the view (including the random() function call)\nand then the Subquery Scan is projecting only a single column from\nthat result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Jul 2007 10:48:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: one column from huge view " } ]
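To connect Heikki's point about volatility with Tom's Subquery Scan test: whether the unused view column can be thrown away often comes down to how the function behind it is declared. The sketch below is illustrative only -- the table, view and function names are invented -- but shows the pattern to look for.

CREATE TABLE base_table (id int PRIMARY KEY, x int);

-- SQL and PL functions are VOLATILE unless declared otherwise
CREATE FUNCTION expensive_calc(int) RETURNS bigint AS $$
    SELECT sum(x) FROM base_table WHERE id <= $1
$$ LANGUAGE sql;

CREATE VIEW v AS SELECT id, expensive_calc(id) AS calc FROM base_table;

EXPLAIN SELECT id FROM v;   -- expect a Subquery Scan: calc is still evaluated

-- if the function genuinely has no side effects, declaring it STABLE
-- should let the planner flatten the view and drop the unreferenced column
CREATE OR REPLACE FUNCTION expensive_calc(int) RETURNS bigint AS $$
    SELECT sum(x) FROM base_table WHERE id <= $1
$$ LANGUAGE sql STABLE;

EXPLAIN SELECT id FROM v;   -- the Subquery Scan node should be gone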
[ { "msg_contents": "\nHello all,\n\nI am a bit confused...I have a database which was performing very POORLY\nselecting from a view (posted earlier) on one server but extremely fast on\nanother server...\n\nI just backed up the database from the FAST server and loaded to the SLOW\nserver and it ran just as fast as it originally did...my questions are:\n\nAre STATISTICS some how saved with the database?? if so, how do I UPDATE\nview or update them?\n\nShould I backup the data \\ drop the database and reload it to make it get\nnew stats?? (vacuum analyze does nothing for this poor performing database)\n\nThanks-a-bunch.\n-- \nView this message in context: http://www.nabble.com/Database-Statistics----tf4075655.html#a11583450\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 13 Jul 2007 09:53:29 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Database Statistics???" }, { "msg_contents": "smiley2211 wrote:\n> Hello all,\n>\n> I am a bit confused...I have a database which was performing very POORLY\n> selecting from a view (posted earlier) on one server but extremely fast on\n> another server...\n>\n> I just backed up the database from the FAST server and loaded to the SLOW\n> server and it ran just as fast as it originally did...my questions are:\n>\n> Are STATISTICS some how saved with the database?? if so, how do I UPDATE\n> view or update them?\n>\n> Should I backup the data \\ drop the database and reload it to make it get\n> new stats?? (vacuum analyze does nothing for this poor performing database)\n>\n> Thanks-a-bunch.\n> \nYou can update statistics with the analyze or vacuum analyze command, \nbut I'd bet what you are seeing here is the effect of recreating the \nindices that replaying a backup does.\n", "msg_date": "Fri, 13 Jul 2007 10:39:19 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Statistics???" }, { "msg_contents": "On 2007-07-13 smiley2211 wrote:\n> I am a bit confused...I have a database which was performing very\n> POORLY selecting from a view (posted earlier) on one server but\n> extremely fast on another server...\n\nEXPLAIN ANALYZE'ing the query will show you the planner's estimates. The\nquery plans should give you an idea of what the problem actually is. Did\nyou already run ANALYZE on the database?\n\n> I just backed up the database from the FAST server and loaded to the\n> SLOW server and it ran just as fast as it originally did...my\n> questions are:\n> \n> Are STATISTICS some how saved with the database??\n\nNot with the database, but in the pg_statistic catalog, AFAIK.\n\n> if so, how do I UPDATE view or update them?\n\nYou collect statistics by ANALYZE'ing either particular tables or the\nentire database. They can be viewed in the pg_catalog.pg_statistic\ntable. However, viewing the query plans for your queries will probably\nbe more telling.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Fri, 13 Jul 2007 19:52:49 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Statistics???" 
}, { "msg_contents": "Am Freitag 13 Juli 2007 schrieb smiley2211:\n> Hello all,\n>\n> I am a bit confused...I have a database which was performing very POORLY\n> selecting from a view (posted earlier) on one server but extremely fast on\n> another server...\n>\n> I just backed up the database from the FAST server and loaded to the SLOW\n> server and it ran just as fast as it originally did...my questions are:\n>\n> Are STATISTICS some how saved with the database?? if so, how do I UPDATE\n> view or update them?\n>\n> Should I backup the data \\ drop the database and reload it to make it get\n> new stats?? (vacuum analyze does nothing for this poor performing database)\n>\n> Thanks-a-bunch.\n\nTry this on both machines:\nselect relname, relpages, reltuples\n from pg_class\n where relkind='i'\n order by relpages desc limit 20;\n\nCompare the results, are relpages much higher on the slow machine?\n\nIf so, REINDEX DATABASE slow_database;\n\n", "msg_date": "Fri, 13 Jul 2007 20:19:51 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Statistics???" }, { "msg_contents": "\nThanks Tom and Scott...that worked for a NEW database but not on the original\nSLOW database...meaning - I backed up the SLOW database and restored it to a\nNEW database and the query ran EXTREMELY FAST :clap:\n\nScott - (your question - What was the size of the slow databases data store\ncompared to the\nfast database? --- I am new, how do I know the size of the database (OS file\nsize ??))...is there an sp_helpdb equivalent command??\n\nMy EXPLAINS are under a previous thread:\n\nQuery is taking 5 HOURS to Complete on 8.1 version\n\nThanks...Michelle\n\n\nTom Arthurs wrote:\n> \n> smiley2211 wrote:\n>> Hello all,\n>>\n>> I am a bit confused...I have a database which was performing very POORLY\n>> selecting from a view (posted earlier) on one server but extremely fast\n>> on\n>> another server...\n>>\n>> I just backed up the database from the FAST server and loaded to the SLOW\n>> server and it ran just as fast as it originally did...my questions are:\n>>\n>> Are STATISTICS some how saved with the database?? if so, how do I UPDATE\n>> view or update them?\n>>\n>> Should I backup the data \\ drop the database and reload it to make it get\n>> new stats?? (vacuum analyze does nothing for this poor performing\n>> database)\n>>\n>> Thanks-a-bunch.\n>> \n> You can update statistics with the analyze or vacuum analyze command, \n> but I'd bet what you are seeing here is the effect of recreating the \n> indices that replaying a backup does.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Database-Statistics----tf4075655.html#a11585080\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 13 Jul 2007 11:27:50 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database Statistics???" 
}, { "msg_contents": "smiley2211 wrote:\n> \n> Thanks Tom and Scott...that worked for a NEW database but not on the original\n> SLOW database...meaning - I backed up the SLOW database and restored it to a\n> NEW database and the query ran EXTREMELY FAST :clap:\n\nHave you ever vacuumed the DB?\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"Aprender sin pensar es in�til; pensar sin aprender, peligroso\" (Confucio)\n", "msg_date": "Fri, 13 Jul 2007 17:05:02 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Statistics???" } ]
[ { "msg_contents": "Joseph wrote:\n> We just got a DELL POWEREDGE 2950. So I was tasked with putting\n> Linux Redhat and dumped our software/packages on it. Contrary to\n> common sense, I didn't bother reading the manuals that came with te\n> 2950. I went right ahead and installed Redhat server on it, then went\n> and loaded the backups software/data etc onto it and started having\n> the team use it.\n\nAnd this has to do with pgsql.performance exactly what?\n\nAnyway, as someone who seems to administrates a PostgreSQL production\nbox, you sure have a good backup plan. So just call DELL's support, fix\nyou RAID and restore form backup.\n\n From the DELL site it seems this `PERC 5/i' on board controller\n(assuming that's what you have) doesn't even have a BBU. If you don't\nplan to post here in a few weeks again about data corruption, go out and\nshop a serious controller.\n\nAnd before you move that box in production, check:\n\nIs my hardware and software setup fsync/fua clean?\nIs my backup plan working?\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Sat, 14 Jul 2007 10:29:05 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FORGOT TO CONFIGURE RAID! DELL POWEREDGE 2950" }, { "msg_contents": "On Sat, Jul 14, 2007 at 10:29:05AM +0200, Hannes Dorbath wrote:\n> From the DELL site it seems this `PERC 5/i' on board controller\n> (assuming that's what you have) doesn't even have a BBU. If you don't\n> plan to post here in a few weeks again about data corruption, go out and\n> shop a serious controller.\n\nWe have a 2950 with a PERC, and it has a BBU.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 14 Jul 2007 10:34:40 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FORGOT TO CONFIGURE RAID! DELL POWEREDGE 2950" }, { "msg_contents": "\n\"Hannes Dorbath\" <[email protected]> writes:\n\n> From the DELL site it seems this `PERC 5/i' on board controller\n> (assuming that's what you have) doesn't even have a BBU. If you don't\n> plan to post here in a few weeks again about data corruption, go out and\n> shop a serious controller.\n\nThis is a bit of a strange comment. A BBU will improve performance but\nPostgres doesn't require one to guarantee data integrity.\n\nIf your drives have write caching disabled (ie write-through) and your\ncontroller does write-through caching and you leave fsync=on and\nfull_page_writes=on which is the default then you shouldn't have any data\nintegrity issues.\n\nNote that many drives, especially IDE drives ship with write caching enabled\n(ie, write-back).\n\nAnd without a BBU many people are tempted to set fsync=off which improves\nperformance at the cost of data loss on a system crash or power failure. With\na BBU there's no advantage to fsync=off so that temptation to risk data loss\nis removed.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Sat, 14 Jul 2007 10:57:55 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FORGOT TO CONFIGURE RAID! DELL POWEREDGE 2950" }, { "msg_contents": "Gregory Stark wrote:\n>> From the DELL site it seems this `PERC 5/i' on board controller\n>> (assuming that's what you have) doesn't even have a BBU. If you don't\n>> plan to post here in a few weeks again about data corruption, go out and\n>> shop a serious controller.\n> \n> This is a bit of a strange comment. 
A BBU will improve performance but\n> Postgres doesn't require one to guarantee data integrity.\n> \n> If your drives have write caching disabled (ie write-through) and your\n> controller does write-through caching and you leave fsync=on and\n> full_page_writes=on which is the default then you shouldn't have any data\n> integrity issues.\n\nThat was my point, controllers without BBU usually leave drive caches\nturned on, as with drive caches off performance would be unbearable bad.\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Sat, 14 Jul 2007 12:19:51 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FORGOT TO CONFIGURE RAID! DELL POWEREDGE 2950" }, { "msg_contents": "On Sat, Jul 14, 2007 at 12:19:51PM +0200, Hannes Dorbath wrote:\n> Gregory Stark wrote:\n> >> From the DELL site it seems this `PERC 5/i' on board controller\n> >> (assuming that's what you have) doesn't even have a BBU. If you don't\n> >> plan to post here in a few weeks again about data corruption, go out and\n> >> shop a serious controller.\n> > \n> > This is a bit of a strange comment. A BBU will improve performance but\n> > Postgres doesn't require one to guarantee data integrity.\n> > \n> > If your drives have write caching disabled (ie write-through) and your\n> > controller does write-through caching and you leave fsync=on and\n> > full_page_writes=on which is the default then you shouldn't have any data\n> > integrity issues.\n> \n> That was my point, controllers without BBU usually leave drive caches\n> turned on, as with drive caches off performance would be unbearable bad.\n\nWow, are you sure about that? I've never heard it before, but that'd be\npretty disturbing if it's true...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 16 Jul 2007 16:02:11 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FORGOT TO CONFIGURE RAID! DELL POWEREDGE 2950" } ]
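As a quick check of the two server-side settings Gregory refers to, the values can be read from any psql session (this says nothing about the drive or controller write cache itself, which has to be verified at the RAID BIOS or operating-system level):

    SHOW fsync;
    SHOW full_page_writes;

Both should report "on" if the defaults have not been changed.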
[ { "msg_contents": "Hi,\n Something I'd like to share. \n\n I switched to postgres about 4 months ago.\n The perfomance after a while got worse.\n I posted a message here, where the result was that my IO was the \nproblem.\n\n I run vacuum every night. I never used vacuum full because it is not\n explicitly recommended and I read somewhere in the archives a mail that\n the consistency of the db suffered after a vacuum full run.\n \n Yesterday I switched from 8.1 to 8.2. So I needed to dump the dbase\n and reimport it. The dbase after 4 months of running without \"vacuum \nfull\"\n reached 60 gigabyte of diskspace. Now after a fresh import it only \nhas 5 gigabyte!\n\n No wonder, I got IO problems with such a fragmentation.\n\n For people not very familiar with postgres especially those coming \nfrom mysql,\n i'd recommend paying attention to this.\n\nregards,\n patric de waha\n \n", "msg_date": "Sat, 14 Jul 2007 17:50:39 +0200", "msg_from": "Patric de Waha <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum full considered useful ;)" }, { "msg_contents": "\n> No wonder, I got IO problems with such a fragmentation.\n> \n> For people not very familiar with postgres especially those coming \n> from mysql,\n> i'd recommend paying attention to this.\n\nDefinitely. The problem here is that you just aren't vacuuming enough, \nnot that you didn't vacuum full. I would suggest reviewing autovacuum \nand seeing if that will help you.\n\nJoshua D. Drake\n\n> \n> regards,\n> patric de waha\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Sat, 14 Jul 2007 09:25:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full considered useful ;)" }, { "msg_contents": "Patric de Waha <[email protected]> writes:\n> Yesterday I switched from 8.1 to 8.2. So I needed to dump the dbase\n> and reimport it. The dbase after 4 months of running without \"vacuum \n> full\"\n> reached 60 gigabyte of diskspace. Now after a fresh import it only \n> has 5 gigabyte!\n\n> No wonder, I got IO problems with such a fragmentation.\n\nIndeed, but routine VACUUM FULL is not the best answer. What this\nsuggests is that you don't have the FSM size (max_fsm_pages and possibly\nmax_fsm_relations) set high enough for your DB size. If it isn't\nbig enough then you'll \"leak\" reusable space over time. Also, if\nyou are using manual rather than autovacuum you might need to be\nvacuuming more often.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Jul 2007 12:32:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full considered useful ;) " }, { "msg_contents": "Joshua D. Drake a �crit :\n> \n>> No wonder, I got IO problems with such a fragmentation.\n>>\n>> For people not very familiar with postgres especially those coming \n>> from mysql,\n>> i'd recommend paying attention to this.\n> \n> Definitely. The problem here is that you just aren't vacuuming enough, \n> not that you didn't vacuum full. 
I would suggest reviewing autovacuum \n> and seeing if that will help you.\n> \n\nAnd paying attention to the max_fsm_pages setting. A value too low won't \nhelp vacuum's work.\n\nRegards.\n\n\n-- \nGuillaume.\nhttp://www.postgresqlfr.org\nhttp://docs.postgresqlfr.org\n", "msg_date": "Sat, 14 Jul 2007 18:34:18 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full considered useful ;)" }, { "msg_contents": "\nOn Jul 14, 2007, at 11:50 AM, Patric de Waha wrote:\n\n> Yesterday I switched from 8.1 to 8.2. So I needed to dump the \n> dbase\n> and reimport it. The dbase after 4 months of running without \n> \"vacuum full\"\n> reached 60 gigabyte of diskspace. Now after a fresh import it \n> only has 5 gigabyte!\n\nAfter a couple more months running 8.2, compare your index sizes to \nwhat they are now relative to the table sizes. My bet is that if you \njust reindexed some of your tables that would have cleared out much \nof that bloat.\n\nA short while back I reindexed some tables on my primary production \nserver and shaved off about 20Gb of disk space. The table itself was \nnot bloated. A dump/reload to another server resulted in a table of \nroughly the same size.\n\n", "msg_date": "Sat, 14 Jul 2007 22:26:22 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full considered useful ;)" } ]
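A rough way to check the points Tom and Guillaume raise, sketched here against the system catalogs: VACUUM VERBOSE prints, at the end of a database-wide run, how many free-space-map pages it actually needs, which can then be compared with the configured limits.

    SHOW max_fsm_pages;
    SHOW max_fsm_relations;
    VACUUM VERBOSE;
    -- Largest tables and indexes by on-disk pages ('r' = table, 'i' = index):
    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    WHERE relkind IN ('r', 'i')
    ORDER BY relpages DESC
    LIMIT 20;

If relpages is far larger than for a freshly reloaded copy of the same table, the bloat Patric describes has crept back in.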
[ { "msg_contents": "hi, all...\ni would like to replicate my dbase over LAN and VPN,could you help me please,give the tutorial or url that i can follow it step-by-step..especially about the configuration in admin conninfo...\nthank you very much,\nregards,\nBayu\n\n \n---------------------------------\nGot a little couch potato? \nCheck out fun summer activities for kids.\nhi, all...i would like to replicate my dbase over LAN and VPN,could you help me please,give the tutorial or url that i can follow it step-by-step..especially about the configuration in admin conninfo...thank you very much,regards,Bayu\nGot a little couch potato? \nCheck out fun summer activities for kids.", "msg_date": "Sun, 15 Jul 2007 09:37:58 -0700 (PDT)", "msg_from": "angga erwina <[email protected]>", "msg_from_op": true, "msg_subject": "slony over LAN and VPN" } ]
[ { "msg_contents": "hi, all...\ni would like to replicate my dbase over LAN and VPN,could you help me please,give the tutorial or url that i can follow it step-by-step..especially about the configuration in admin conninfo...\nthank you very much,\nregards,\nBayu\n \n---------------------------------\nWe won't tell. Get more on shows you hate to love\n(and love to hate): Yahoo! TV's Guilty Pleasures list.\nhi, all...i would like to replicate my dbase over LAN and VPN,could you help me please,give the tutorial or url that i can follow it step-by-step..especially about the configuration in admin conninfo...thank you very much,regards,Bayu\nWe won't tell. Get more on shows you hate to love(and love to hate): Yahoo! TV's Guilty Pleasures list.", "msg_date": "Sun, 15 Jul 2007 09:41:53 -0700 (PDT)", "msg_from": "angga erwina <[email protected]>", "msg_from_op": true, "msg_subject": "slony over VPN and LAN" } ]
[ { "msg_contents": "Postgres configuration for 64 CPUs, 128 GB RAM...\n\nHello,\n\nWe have the oppotunity to benchmark our application on a large server. I\nhave to prepare the Postgres configuration and I'd appreciate some\ncomments on it as I am not experienced with servers of such a scale.\nMoreover the configuration should be fail-proof as I won't be able to\nattend the tests. \n\nOur application (java + perl) and Postgres will run on the same server,\nwhereas the application activity is low when Postgres has large\ntransactions to process.\n\nThere is a large gap between our current produtcion server (Linux, 4GB\nRAM, 4 cpus) and the benchmark server; one of the target of this\nbenchmark is to verify the scalability of our application. \n\n\nAnd you have no reason to be envious as the server doesn't belong us :-)\n\n\nThanks for your comments,\n\nMarc Mamin\n\n\n\n\n\nPosgres version: 8.2.1\n\n\n\nServer Specifications:\n----------------------\n\nSun SPARC Enterprise M8000 Server:\n\nhttp://www.sun.com/servers/highend/m8000/specs.xml\n\nFile system:\n\nhttp://en.wikipedia.org/wiki/ZFS\n\n\n\nPlanned configuration:\n--------------------------------\n\n# we don't expect more than 150 parallel connections, \n# but I suspect a leak in our application that let some idle connections\nopen\n\nmax_connections=2000\n\nssl = off \n\n#maximum allowed\nshared_buffers= 262143\n\n# on our current best production server with 4GB RAM (not dedicated to\nPostgres), work_mem is set to 600 MB\n# this limitation is probably the bottleneck for our application as the\nfiles in pgsql_tmp grows up to 15 GB \n# during large aggregations (we have a locking mechanismus to avoid\nparallel processing of such transactions)\nwork_mem = 31457280 # (30 GB)\n\n# index creation time is also an issue for us; the process is locking\nother large processes too.\n# our largest table so far is 13 GB + 11 GB indexes\nmaintenance_work_mem = 31457280 # (30 GB)\n\n# more than the max number of tables +indexes expected during the\nbenchmark\nmax_fsm_relations = 100000\n\nmax_fsm_pages = 1800000\n\n# don't know if I schoud modify this.\n# seems to be sufficient on our production servers\nmax_stack_depth = 2MB\n\n# vacuum will be done per hand between each test session\nautovacuum = off \n\n\n\n# required to analyse the benchmark\nlog_min_duration_statement = 1000\n\n\nmax_prepared_transaction = 100\n\n\n# seems to be required to drop schema/roles containing large number of\nobjects\nmax_locks_per_transaction = 128 \n\n\n\n\n# I use the default for the bgwriter as I couldnt find recommendation on\nthose\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\nscanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max\nwritten/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers\nscanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max\nwritten/round\n\n\n#WAL\n\nfsync = on\n\n#use default\n#wal_sync_method\n\n# we are using 32 on our production system\nwal_buffers=64\n\n\n# we didn't make any testing with this parameter until now, but this\nshould'nt be a relevant\n# point as our performance focus is on large transactions\ncommit_delay = 0 \n\n#CHECKPOINT\n\n# xlog will be on a separate disk\ncheckpoint_segments=256\n\ncheckpoint_timeout = 5min\n\n\n\n\n\nPostgres configuration for 64 CPUs, 128 GB RAM...\n\n\n\n\nPostgres configuration for 64 CPUs, 128 GB RAM...\n\nHello,\n\nWe have the oppotunity to benchmark our application on a large server. 
I have to prepare the Postgres configuration and I'd appreciate some comments on it as I am not experienced with servers of such a scale. Moreover the configuration should be fail-proof as I won't be able to attend the tests. \nOur application (java + perl) and Postgres will run on the same server, whereas the application activity is low when Postgres has large transactions to process.\nThere is a large gap between our current produtcion server (Linux, 4GB RAM, 4 cpus) and the benchmark server; one of the target of this  benchmark is to verify the scalability of our application. \n\nAnd you have no reason to be envious as the server doesn't belong us :-)\n\n\nThanks for your comments,\n\nMarc Mamin\n\n\n\n\n\nPosgres version: 8.2.1\n\n\n\nServer Specifications:\n----------------------\n\nSun SPARC Enterprise M8000 Server:\n\nhttp://www.sun.com/servers/highend/m8000/specs.xml\n\nFile system:\n\nhttp://en.wikipedia.org/wiki/ZFS\n\n\n\nPlanned configuration:\n--------------------------------\n\n# we don't expect more than 150 parallel connections, \n# but I suspect a leak in our application that let some idle connections open\n\nmax_connections=2000\n\nssl = off \n\n#maximum allowed\nshared_buffers= 262143\n\n# on our current best production server with 4GB RAM (not dedicated to Postgres), work_mem is set to 600 MB\n# this limitation is probably the bottleneck for our application as the files in pgsql_tmp grows up to 15 GB \n# during large aggregations (we have a locking mechanismus to avoid parallel processing of such transactions)\nwork_mem = 31457280  # (30 GB)\n\n# index creation time is also an issue for us; the process is locking other large processes too.\n# our largest table so far is 13 GB + 11 GB indexes\nmaintenance_work_mem = 31457280  # (30 GB)\n\n# more than the max number of tables +indexes expected during the benchmark\nmax_fsm_relations = 100000\n\nmax_fsm_pages = 1800000\n\n# don't know if I schoud modify this.\n# seems to be sufficient on our production servers\nmax_stack_depth = 2MB\n\n# vacuum will be done per hand between each test session\nautovacuum = off \n\n\n\n# required to analyse the benchmark\nlog_min_duration_statement = 1000\n\n\nmax_prepared_transaction = 100\n\n\n# seems to be required to drop schema/roles containing large number of objects\nmax_locks_per_transaction = 128 \n\n\n\n\n# I use the default for the bgwriter as I couldnt find recommendation on those\n\n#bgwriter_delay = 200ms                 # 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0             # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5              # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333           # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5              # 0-1000 buffers max written/round\n\n\n#WAL\n\nfsync = on\n\n#use default\n#wal_sync_method\n\n# we are using 32 on our production system\nwal_buffers=64\n\n\n# we didn't make any testing with this parameter until now, but this should'nt be a relevant\n# point as our performance focus is on large transactions\ncommit_delay = 0 \n\n#CHECKPOINT\n\n# xlog will be  on a separate disk\ncheckpoint_segments=256\n\ncheckpoint_timeout = 5min", "msg_date": "Tue, 17 Jul 2007 16:10:30 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres configuration for 64 CPUs, 128 GB RAM..." 
}, { "msg_contents": "Marc Mamin wrote:\n> \n> Postgres configuration for 64 CPUs, 128 GB RAM...\n\nthere are probably not that much installation out there that large - \ncomments below\n\n> \n> Hello,\n> \n> We have the oppotunity to benchmark our application on a large server. I \n> have to prepare the Postgres configuration and I'd appreciate some \n> comments on it as I am not experienced with servers of such a scale. \n> Moreover the configuration should be fail-proof as I won't be able to \n> attend the tests.\n> \n> Our application (java + perl) and Postgres will run on the same server, \n> whereas the application activity is low when Postgres has large \n> transactions to process.\n> \n> There is a large gap between our current produtcion server (Linux, 4GB \n> RAM, 4 cpus) and the benchmark server; one of the target of this \n> benchmark is to verify the scalability of our application.\n> \n\n[...]\n> Posgres version: 8.2.1\n\nupgrade to 8.2.4\n\n> File system:\n> \n> _http://en.wikipedia.org/wiki/ZFS_\n\nway more important is what kind of disk-IO subsystem you have attached ...\n\n> \n> \n> \n> Planned configuration:\n> --------------------------------\n> \n> # we don't expect more than 150 parallel connections,\n> # but I suspect a leak in our application that let some idle connections \n> open\n> \n> max_connections=2000\n> \n> ssl = off\n> \n> #maximum allowed\n> shared_buffers= 262143\n\nthis is probably on the lower side for a 128GB box\n\n> \n> # on our current best production server with 4GB RAM (not dedicated to \n> Postgres), work_mem is set to 600 MB\n> # this limitation is probably the bottleneck for our application as the \n> files in pgsql_tmp grows up to 15 GB\n> # during large aggregations (we have a locking mechanismus to avoid \n> parallel processing of such transactions)\n> work_mem = 31457280 # (30 GB)\n\nthis is simply ridiculous - work_mem is PER SORT - so if your query \nrequires 8 sorts it will feel free to use 8x30GB and needs to be \nmultiplied by the number of concurrent connections.\n\n> \n> # index creation time is also an issue for us; the process is locking \n> other large processes too.\n> # our largest table so far is 13 GB + 11 GB indexes\n> maintenance_work_mem = 31457280 # (30 GB)\n\nthis is ridiculous too - testing has shown that there is not much point \nin going beyond 1GB or so\n\n> \n> # more than the max number of tables +indexes expected during the benchmark\n> max_fsm_relations = 100000\n> \n> max_fsm_pages = 1800000\n\nthis is probably way to low for a database the size of yours - watch the \noputput of VACUUM VERBOSE on a database wide vacuum for some stats on that.\n\n> \n> # don't know if I schoud modify this.\n> # seems to be sufficient on our production servers\n> max_stack_depth = 2MB\n> \n> # vacuum will be done per hand between each test session\n> autovacuum = off\n> \n> \n> \n> # required to analyse the benchmark\n> log_min_duration_statement = 1000\n> \n> \n> max_prepared_transaction = 100\n> \n> \n> # seems to be required to drop schema/roles containing large number of \n> objects\n> max_locks_per_transaction = 128\n> \n> \n> \n> \n> # I use the default for the bgwriter as I couldnt find recommendation on \n> those\n> \n> #bgwriter_delay = 200ms # 10-10000ms between rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers \n> scanned/round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers \n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 
buffers max written/round\n> \n> \n> #WAL\n> \n> fsync = on\n> \n> #use default\n> #wal_sync_method\n> \n> # we are using 32 on our production system\n> wal_buffers=64\n\nvalues up to 512 or so have been reported to help on systems with very \nhigh concurrency\n\n\nwhat is missing here is your settings for:\n\neffective_cache_size\n\nand\n\nrandom_page_cost\n\n\n\nStefan\n", "msg_date": "Tue, 17 Jul 2007 17:06:03 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:\n> shared_buffers= 262143\n \nYou should at least try some runs with this set far, far larger. At\nleast 10% of memory, but it'd be nice to see what happens with this set\nto 50% or higher as well (though don't set it larger than the database\nsince it'd be a waste).\n\nHow big is the database, anyway?\n\n> # on our current best production server with 4GB RAM (not dedicated to\n> Postgres), work_mem is set to 600 MB\n> # this limitation is probably the bottleneck for our application as the\n> files in pgsql_tmp grows up to 15 GB \n> # during large aggregations (we have a locking mechanismus to avoid\n> parallel processing of such transactions)\n\nKeep in mind that a good filesystem will be caching most of pgsql_tmp if\nit can.\n\n> max_prepared_transaction = 100\n \nAre you using 2PC? If not, there's no reason to touch this (could could\njust set it to 0).\n\n> # I use the default for the bgwriter as I couldnt find recommendation on\n> those\n> \n> #bgwriter_delay = 200ms # 10-10000ms between rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\n> scanned/round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max\n> written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers\n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 buffers max\n> written/round\n \nYou'll probably want to increase both maxpages parameters substantially,\nassuming that you've got good IO hardware.\n \n> #CHECKPOINT\n> \n> # xlog will be on a separate disk\n> checkpoint_segments=256\n> \n> checkpoint_timeout = 5min\n\nThe further apart your checkpoints, the better. Might want to look at 10\nminutes. I'd also set checkpoint_warning to just a bit below\ncheckpoint_timeout and watch for warnings to make sure you're not\ncheckpointing a lot more frequently than you're expecting.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 17 Jul 2007 10:48:00 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "\n\"Marc Mamin\" <[email protected]> writes:\n\n> We have the oppotunity to benchmark our application on a large server. I\n> have to prepare the Postgres configuration and I'd appreciate some\n> comments on it as I am not experienced with servers of such a scale.\n> Moreover the configuration should be fail-proof as I won't be able to\n> attend the tests. \n\nI really think that's a recipe for disaster. Even on a regular machine you\nneed to treat tuning as an on-going feedback process. 
There's no such thing as\na fail-proof configuration since every application is different.\n\nOn an exotic machine like this you're going to run into unique problems that\nnobody here can anticipate with certainty.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 17 Jul 2007 17:18:28 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "On Tue, 17 Jul 2007, Marc Mamin wrote:\n\n> Moreover the configuration should be fail-proof as I won't be able to \n> attend the tests.\n\nThis is unreasonable. The idea that you'll get a magic perfect \nconfiguration in one shot suggests a fundamental misunderstanding of how \nwork like this is done. If there's any way you could adjust things so \nthat, say, you were allowed to give at least 4 different tuning setups and \nyou got a report back with each of the results for them, that would let \nyou design a much better test set.\n\n> Posgres version: 8.2.1\n\nThis has already been mentioned, but it really is critical for your type \nof test to run 8.2.4 instead so I wanted to emphasize it. There is a \nmajor scalability bug in 8.2.1. I'm going to ignore the other things that \nother people have already commented on (all the suggestions Stephan and \nJim already made are good ones you should heed) and try to fill in the \nremaining gaps instead.\n\n> # I use the default for the bgwriter as I couldnt find recommendation on\n> those\n\nThe defaults are so small that it will barely do anything on a server of \nyour size. Tuning it properly so that it's effective but doesn't waste a \nlot of resources is tricky, which is why you haven't found such \nrecommendations--they're fairly specific to what you're doing and require \nsome testing to get right. If you want to see an example from a big \nserver, look at\n\nhttp://www.spec.org/jAppServer2004/results/res2007q3/jAppServer2004-20070606-00065.html#DBDatabase_SW_Config0\n\nThat's tuned for a very specific benchmark though. Here's a fairly \ngeneric set of parameters that would be much more aggressive than the \ndefaults, while not going so far as to waste too many resources if the \nwriter is just getting in the way on your server:\n\nbgwriter_delay = 200ms\nbgwriter_lru_percent = 3.0\nbgwriter_lru_maxpages = 500\nbgwriter_all_percent = 1.0\nbgwriter_all_maxpages = 250\n\n> #WAL\n> fsync = on\n> #use default\n> #wal_sync_method\n\nI'd expect wal_sync_method=open_datasync would outperfom the default, but \nyou'd really want to test both ways here to be sure. The fact that the \nSun results I referenced above use the default of fdatasync makes me \nhesitate to recommend that change too strongly, as I haven't worked with \nthis particular piece of hardware. See \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm for more \ninformation about this parameter.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Jul 2007 12:46:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": ">\"Marc Mamin\" <[email protected]> writes:\n>\n>> We have the oppotunity to benchmark our application on a large\nserver. 
I\n>> have to prepare the Postgres configuration and I'd appreciate some\n>> comments on it as I am not experienced with servers of such a scale.\n>> Moreover the configuration should be fail-proof as I won't be able to\n>> attend the tests. \n>\n>I really think that's a recipe for disaster. Even on a regular machine\nyou\n>need to treat tuning as an on-going feedback process. There's no such\nthing >as\n>a fail-proof configuration since every application is different.\n>\n>On an exotic machine like this you're going to run into unique problems\n>that\n>nobody here can anticipate with certainty.\n>\n>-- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nMarc,\n\nYou're getting a lot of good advice for your project. Let me be another\nto reiterate that upgrading to Postgres 8.2.4 will bring added\nperformance and scalability benefits.\n\nOthers have mentioned that you do have to be data driven and\nunfortunately that is true. All you can really do is pick a reasonable\nstarting point and run a test to create a baseline number. Then,\nmonitor, make small changes and test again. That's the only way you're\ngoing to find the best configuration for your system. This will take\ntime and effort.\n\nIn addition, everything involved in your testing must scale - not just\nPostgres. For example, if your driver hardware or driver software does\nnot scale, you won't be able to generate enough throughput for your\napplication or Postgres. The same goes for your all of your networking\nequipment and any other hardware servers/software that might be involved\nin the test environment. So, you really have to monitor at all levels\ni.e. don't just focus on the database platform.\n\nI took a quick look at the Sun M8000 server link you provided. I don't\nknow the system specifically so I might be mistaken, but it looks like\nit is configured with 4 sockets per CPU board and 4 CPU boards per\nsystem. Each CPU board looks like it has the ability to take 128GB RAM.\nIn this case, you will have to keep an eye on how Solaris is binding\n(affinitizing) processes to CPU cores and/or boards. Any time a process\nis bound to a new CPU board it's likely that there will be a number of\ncache invalidations to move data the process was working on from the old\nboard to the new board. In addition, the moved process may still\ncontinue to refer to memory it allocated on the old board. This can be\nquite expensive. Typically, the more CPU cores/CPU boards you have, the\nmore likely this will happen. I'm no Solaris expert so I don't know if\nthere is a better way of doing this, but you might consider using the\npsrset or pbind commands to bind Postgres backend processes to a\nspecific CPU core or range of cores. If choosing a range of cores, these\nshould be on the same CPU board. Again, through monitoring, you'll have\nto determine how many CPU cores each backend really needs and then\nyou'll have to determine how best to spread the backends out over each\nof the CPU boards.\n\nGood luck.\n\nDavid\n", "msg_date": "Tue, 17 Jul 2007 10:43:44 -0700", "msg_from": "\"Strong, David\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "On Tue, 17 > We have the oppotunity to benchmark our application on a \nlarge server. 
I\n> have to prepare the Postgres configuration and I'd appreciate some\n> comments on it as I am not experienced with servers of such a scale.\n> Moreover the configuration should be fail-proof as I won't be able to\n> attend the tests.\n>\n> Our application (java + perl) and Postgres will run on the same server,\n> whereas the application activity is low when Postgres has large\n> transactions to process.\n\n\tPlease, can you be more specific about your application :\n\n\t- what does it do ?\n\t- what kind of workload does it generate ?\n\t[ie: many concurrent small queries (website) ; few huge queries, \nreporting, warehousing, all of the above, something else ?]\n\t- percentage and size of update queries ?\n\t- how many concurrent threads / connections / clients do you serve on a \nbusy day ?\n\t(I don't mean online users on a website, but ACTIVE concurrent database \nconnections)\n\n\tI assume you find your current server is too slow or foresee it will \nbecome too slow soon and want to upgrade, so :\n\n\t- what makes the current server's performance inadequate ? is it IO, CPU, \nRAM, a mix ? which proportions in the mix ?\n\n\tThis is very important. If you go to the dealer and ask \"I need a better \nvehicle\", he'll sell you a Porsche. But if you say \"I need a better vehcle \nto carry two tons of cinderblocks\" he'll sell you something else I guess. \nSame with database servers. You could need some humongous CPU power, but \nyou might as well not. Depends.\n\n> There is a large gap between our current produtcion server (Linux, 4GB\n> RAM, 4 cpus) and the benchmark server; one of the target of this\n> benchmark is to verify the scalability of our application.\n\n\tDefine scalability. (no this isn't a joke, I mean, you know your \napplication, how would you like it to \"scale\" ? How do you think it will \nscale ? Why ? What did you do so it would scale well ? etc.)\n\n\n\n", "msg_date": "Tue, 17 Jul 2007 20:04:28 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Marc,\n\n> Server Specifications:\n> ----------------------\n>\n> Sun SPARC Enterprise M8000 Server:\n>\n> http://www.sun.com/servers/highend/m8000/specs.xml\n>\n> File system:\n>\n> http://en.wikipedia.org/wiki/ZFS\n\nThere are some specific tuning parameters you need for ZFS or performance \nis going to suck.\n\nhttp://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n(scroll down to \"PostgreSQL\")\nhttp://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\nhttp://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n\nYou also don't say anything about what kind of workload you're running.\n\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 20 Jul 2007 16:26:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Having done something similar recently, I would recommend that you look at\nadding connection pooling using pgBouncer transaction pooling between your\nbenchmark app and PgSQL. In our application we have about 2000 clients\nfunneling down to 30 backends and are able to sustain large transaction per\nsecond volume. 
This has been the #1 key to success for us in running on\nmonster hardware.\nRegards,\n\nGavin\n\nOn 7/17/07, Marc Mamin <[email protected]> wrote:\n>\n>\n> Postgres configuration for 64 CPUs, 128 GB RAM...\n>\n> Hello,\n>\n> We have the oppotunity to benchmark our application on a large server. I\n> have to prepare the Postgres configuration and I'd appreciate some comments\n> on it as I am not experienced with servers of such a scale. Moreover the\n> configuration should be fail-proof as I won't be able to attend the tests.\n>\n> Our application (java + perl) and Postgres will run on the same server,\n> whereas the application activity is low when Postgres has large transactions\n> to process.\n>\n> There is a large gap between our current produtcion server (Linux, 4GB\n> RAM, 4 cpus) and the benchmark server; one of the target of this benchmark\n> is to verify the scalability of our application.\n>\n> And you have no reason to be envious as the server doesn't belong us :-)\n>\n> Thanks for your comments,\n>\n> Marc Mamin\n>\n>\n>\n>\n> Posgres version: 8.2.1\n>\n>\n> Server Specifications:\n> ----------------------\n>\n> Sun SPARC Enterprise M8000 Server:\n>\n> *http://www.sun.com/servers/highend/m8000/specs.xml*<http://www.sun.com/servers/highend/m8000/specs.xml>\n>\n> File system:\n>\n> *http://en.wikipedia.org/wiki/ZFS* <http://en.wikipedia.org/wiki/ZFS>\n>\n>\n> Planned configuration:\n> --------------------------------\n>\n> # we don't expect more than 150 parallel connections,\n> # but I suspect a leak in our application that let some idle connections\n> open\n>\n> max_connections=2000\n>\n> ssl = off\n>\n> #maximum allowed\n> shared_buffers= 262143\n>\n> # on our current best production server with 4GB RAM (not dedicated to\n> Postgres), work_mem is set to 600 MB\n> # this limitation is probably the bottleneck for our application as the\n> files in pgsql_tmp grows up to 15 GB\n> # during large aggregations (we have a locking mechanismus to avoid\n> parallel processing of such transactions)\n> work_mem = 31457280 # (30 GB)\n>\n> # index creation time is also an issue for us; the process is locking\n> other large processes too.\n> # our largest table so far is 13 GB + 11 GB indexes\n> maintenance_work_mem = 31457280 # (30 GB)\n>\n> # more than the max number of tables +indexes expected during the\n> benchmark\n> max_fsm_relations = 100000\n>\n> max_fsm_pages = 1800000\n>\n> # don't know if I schoud modify this.\n> # seems to be sufficient on our production servers\n> max_stack_depth = 2MB\n>\n> # vacuum will be done per hand between each test session\n> autovacuum = off\n>\n>\n> # required to analyse the benchmark\n> log_min_duration_statement = 1000\n>\n> max_prepared_transaction = 100\n>\n> # seems to be required to drop schema/roles containing large number of\n> objects\n> max_locks_per_transaction = 128\n>\n>\n>\n> # I use the default for the bgwriter as I couldnt find recommendation on\n> those\n>\n> #bgwriter_delay = 200ms # 10-10000ms between rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\n> scanned/round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers\n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n>\n> #WAL\n>\n> fsync = on\n>\n> #use default\n> #wal_sync_method\n>\n> # we are using 32 on our production system\n> wal_buffers=64\n>\n> # we didn't make any testing with this parameter until now, but this\n> should'nt be a relevant\n> # point as our 
performance focus is on large transactions\n> commit_delay = 0\n>\n> #CHECKPOINT\n>\n> # xlog will be on a separate disk\n> checkpoint_segments=256\n>\n> checkpoint_timeout = 5min\n>\n\nHaving done something similar recently, I would recommend that you look at adding connection pooling using pgBouncer transaction pooling between your benchmark app and PgSQL.  In our application we have about 2000 clients funneling down to 30 backends and are able to sustain large transaction per second volume.  This has been the #1 key to success for us in running on monster hardware.\nRegards,GavinOn 7/17/07, Marc Mamin\n <[email protected]> wrote:\n\n\nPostgres configuration for 64 CPUs, 128 GB RAM...\n\nHello,\n\nWe have the oppotunity to benchmark our application on a large server. I have to prepare the Postgres configuration and I'd appreciate some comments on it as I am not experienced with servers of such a scale. Moreover the configuration should be fail-proof as I won't be able to attend the tests. \n\nOur application (java + perl) and Postgres will run on the same server, whereas the application activity is low when Postgres has large transactions to process.\nThere is a large gap between our current produtcion server (Linux, 4GB RAM, 4 cpus) and the benchmark server; one of the target of this  benchmark is to verify the scalability of our application. \n\n\nAnd you have no reason to be envious as the server doesn't belong us :-)\n\n\nThanks for your comments,\n\nMarc Mamin\n\n\n\n\n\nPosgres version: 8.2.1\n\n\n\nServer Specifications:\n----------------------\n\nSun SPARC Enterprise M8000 Server:\n\nhttp://www.sun.com/servers/highend/m8000/specs.xml\n\n\nFile system:\n\nhttp://en.wikipedia.org/wiki/ZFS\n\n\n\nPlanned configuration:\n--------------------------------\n\n# we don't expect more than 150 parallel connections, \n# but I suspect a leak in our application that let some idle connections open\n\nmax_connections=2000\n\nssl = off \n\n#maximum allowed\nshared_buffers= 262143\n\n# on our current best production server with 4GB RAM (not dedicated to Postgres), work_mem is set to 600 MB\n# this limitation is probably the bottleneck for our application as the files in pgsql_tmp grows up to 15 GB \n# during large aggregations (we have a locking mechanismus to avoid parallel processing of such transactions)\nwork_mem = 31457280  # (30 GB)\n\n# index creation time is also an issue for us; the process is locking other large processes too.\n# our largest table so far is 13 GB + 11 GB indexes\nmaintenance_work_mem = 31457280  # (30 GB)\n\n# more than the max number of tables +indexes expected during the benchmark\nmax_fsm_relations = 100000\n\nmax_fsm_pages = 1800000\n\n# don't know if I schoud modify this.\n# seems to be sufficient on our production servers\nmax_stack_depth = 2MB\n\n# vacuum will be done per hand between each test session\nautovacuum = off \n\n\n\n# required to analyse the benchmark\nlog_min_duration_statement = 1000\n\n\nmax_prepared_transaction = 100\n\n\n# seems to be required to drop schema/roles containing large number of objects\nmax_locks_per_transaction = 128 \n\n\n\n\n# I use the default for the bgwriter as I couldnt find recommendation on those\n\n#bgwriter_delay = 200ms                 # 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0             # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5              # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333           # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5   
           # 0-1000 buffers max written/round\n\n\n#WAL\n\nfsync = on\n\n#use default\n#wal_sync_method\n\n# we are using 32 on our production system\nwal_buffers=64\n\n\n# we didn't make any testing with this parameter until now, but this should'nt be a relevant\n# point as our performance focus is on large transactions\ncommit_delay = 0 \n\n#CHECKPOINT\n\n# xlog will be  on a separate disk\ncheckpoint_segments=256\n\ncheckpoint_timeout = 5min", "msg_date": "Fri, 20 Jul 2007 20:18:53 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Josh,\n\nOn 7/20/07 4:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> There are some specific tuning parameters you need for ZFS or performance\n> is going to suck.\n> \n> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n> (scroll down to \"PostgreSQL\")\n> http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> http://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n> \n> You also don't say anything about what kind of workload you're running.\n\n\nI think we're assuming that the workload is OLTP when putting these tuning\nguidelines forward. Note that the ZFS tuning guidance referred to in this\nbug article recommend \"turning vdev prefetching off\" for \"random I/O\n(databases)\". This is exactly the opposite of what we should do for OLAP\nworkloads.\n\nAlso, the lore that setting recordsize on ZFS is mandatory for good database\nperformance is similarly not appropriate for OLAP work.\n\nIf the workload is OLAP / Data Warehousing, I'd suggest ignoring all of the\ntuning information from Sun that refers generically to \"database\". The\nuntuned ZFS performance should be far better in those cases. Specifically,\nthese three should be ignored:\n- (ignore this) limit ARC memory use\n- (ignore this) set recordsize to 8K\n- (ignore this) turn off vdev prefetch\n\n- Luke\n\n\n", "msg_date": "Sun, 22 Jul 2007 09:43:47 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": " \nHello,\n\nthank you for all your comments and recommendations.\n\nI'm aware that the conditions for this benchmark are not ideal, mostly\ndue to the lack of time to prepare it. We will also need an additional\nbenchmark on a less powerful - more realistic - server to better\nunderstand the scability of our application.\n\n\nOur application is based on java and is generating dynamic reports from\nlog files content. Dynamic means here that a repor will be calculated\nfrom the postgres data the first time it is requested (it will then be\ncached). Java is used to drive the data preparation and to\nhandle/generate the reports requests.\n\nThis is much more an OLAP system then an OLTP, at least for our\nperformance concern.\n\n\n\n\nData preparation:\n\n1) parsing the log files with a heavy use of perl (regular expressions)\nto generate csv files. Prepared statements also maintain reference\ntables in the DB. Postgres performance is not an issue for this first\nstep.\n\n2) loading the csv files with COPY. As around 70% of the data to load\ncome in a single daily table, we don't allow concurrent jobs for this\nstep. We have between a few and a few hundreds files to load into a\nsingle table; they are processed one after the other. 
A primary key is\nalways defined; for the case when the required indexes are alreay built\nand when the new data are above a given size, we are using a \"shadow\" \ntable instead (without the indexes) , build the index after the import\nand then replace the live table with the shadow one. \nFor example, we a have a table of 13 GB + 11 GB indexes (5 pieces).\n\nPerformances :\n\n a) is there an \"ideal\" size to consider for our csv files (100 x 10\nMB or better 1 x 1GB ?)\n b) maintenance_work_mem: I'll use around 1 GB as recommended by\nStefan\n \n3) Data agggregation. This is the heaviest part for Postgres. On our\ncurrent system some queries need above one hour, with phases of around\n100% cpu use, alterning with times of heavy i/o load when temporary\nresults are written/read to the plate (pgsql_tmp). During the\naggregation, other postgres activities are low (at least should be) as\nthis should take place at night. Currently we have a locking mechanism\nto avoid having more than one of such queries running concurently. This\nmay be to strict for the benchmark server but better reflect our current\nhardware capabilities.\n\nPerformances : Here we should favorise a single huge transaction and\nconsider a low probability to have another transaction requiring large\nsort space. Considering this, is it reasonable to define work_mem being\n3GB (I guess I should raise this parameter dynamically before running\nthe aggregation queries)\n\n4) Queries (report generation)\n\nWe have only few requests which are not satisfying while requiring large\nsort operations. The data are structured in different aggregation levels\n(minutes, hours, days) with logical time based partitions in oder to\nlimit the data size to compute for a given report. Moreover we can scale\nour infrastrucure while using different or dedicated Postgres servers\nfor different customers. Smaller customers may share a same instance,\neach of them having its own schema (The lock mechanism for large\naggregations apply to a whole Postgres instance, not to a single\ncustomer) . The benchmark will help us to plan such distribution.\n\nDuring the benchmark, we will probably not have more than 50 not idle\nconnections simultaneously. It is a bit too early for us to fine tune\nthis part. The benchmark will mainly focus on the steps 1 to 3\n\nDuring the benchmark, the Db will reach a size of about 400 GB,\nsimulating 3 different customers, also with data quite equally splitted\nin 3 scheemas.\n\n\n\nI will post our configuration(s) later on.\n\n\n\nThanks again for all your valuable input.\n\nMarc Mamin\n", "msg_date": "Tue, 24 Jul 2007 16:38:43 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Luke,\n\nZFS tuning is not coming from general suggestion ideas, but from real\npractice...\n\nSo,\n - limit ARC is the MUST for the moment to keep your database running\ncomfortable (specially DWH!)\n - 8K blocksize is chosen to read exactly one page when PG ask to\nread one page - don't mix it with prefetch! 
when prefetch is detected,\nZFS will read next blocks without any demand from PG; but otherwise\nwhy you need to read more pages each time PG asking only one?...\n - prefetch of course not needed for OLTP, but helps on OLAP/DWH, agree :)\n\nRgds,\n-Dimitri\n\n\nOn 7/22/07, Luke Lonergan <[email protected]> wrote:\n> Josh,\n>\n> On 7/20/07 4:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n>\n> > There are some specific tuning parameters you need for ZFS or performance\n> > is going to suck.\n> >\n> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n> > (scroll down to \"PostgreSQL\")\n> > http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> > http://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n> >\n> > You also don't say anything about what kind of workload you're running.\n>\n>\n> I think we're assuming that the workload is OLTP when putting these tuning\n> guidelines forward. Note that the ZFS tuning guidance referred to in this\n> bug article recommend \"turning vdev prefetching off\" for \"random I/O\n> (databases)\". This is exactly the opposite of what we should do for OLAP\n> workloads.\n>\n> Also, the lore that setting recordsize on ZFS is mandatory for good database\n> performance is similarly not appropriate for OLAP work.\n>\n> If the workload is OLAP / Data Warehousing, I'd suggest ignoring all of the\n> tuning information from Sun that refers generically to \"database\". The\n> untuned ZFS performance should be far better in those cases. Specifically,\n> these three should be ignored:\n> - (ignore this) limit ARC memory use\n> - (ignore this) set recordsize to 8K\n> - (ignore this) turn off vdev prefetch\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Mon, 30 Jul 2007 23:26:03 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Marc,\n\nYou should expect that for the kind of OLAP workload you describe in steps 2\nand 3 you will have exactly one CPU working for you in Postgres.\n\nIf you want to accelerate the speed of this processing by a factor of 100 or\nmore on this machine, you should try Greenplum DB which is Postgres 8.2\ncompatible. Based on the overall setup you describe, you may have a hybrid\ninstallation with GPDB doing the reporting / OLAP workload and the other\nPostgres databases handling the customer workloads.\n\n- Luke\n\n\nOn 7/24/07 7:38 AM, \"Marc Mamin\" <[email protected]> wrote:\n\n> \n> Hello,\n> \n> thank you for all your comments and recommendations.\n> \n> I'm aware that the conditions for this benchmark are not ideal, mostly\n> due to the lack of time to prepare it. We will also need an additional\n> benchmark on a less powerful - more realistic - server to better\n> understand the scability of our application.\n> \n> \n> Our application is based on java and is generating dynamic reports from\n> log files content. Dynamic means here that a repor will be calculated\n> from the postgres data the first time it is requested (it will then be\n> cached). 
Java is used to drive the data preparation and to\n> handle/generate the reports requests.\n> \n> This is much more an OLAP system then an OLTP, at least for our\n> performance concern.\n> \n> \n> \n> \n> Data preparation:\n> \n> 1) parsing the log files with a heavy use of perl (regular expressions)\n> to generate csv files. Prepared statements also maintain reference\n> tables in the DB. Postgres performance is not an issue for this first\n> step.\n> \n> 2) loading the csv files with COPY. As around 70% of the data to load\n> come in a single daily table, we don't allow concurrent jobs for this\n> step. We have between a few and a few hundreds files to load into a\n> single table; they are processed one after the other. A primary key is\n> always defined; for the case when the required indexes are alreay built\n> and when the new data are above a given size, we are using a \"shadow\"\n> table instead (without the indexes) , build the index after the import\n> and then replace the live table with the shadow one.\n> For example, we a have a table of 13 GB + 11 GB indexes (5 pieces).\n> \n> Performances :\n> \n> a) is there an \"ideal\" size to consider for our csv files (100 x 10\n> MB or better 1 x 1GB ?)\n> b) maintenance_work_mem: I'll use around 1 GB as recommended by\n> Stefan\n> \n> 3) Data agggregation. This is the heaviest part for Postgres. On our\n> current system some queries need above one hour, with phases of around\n> 100% cpu use, alterning with times of heavy i/o load when temporary\n> results are written/read to the plate (pgsql_tmp). During the\n> aggregation, other postgres activities are low (at least should be) as\n> this should take place at night. Currently we have a locking mechanism\n> to avoid having more than one of such queries running concurently. This\n> may be to strict for the benchmark server but better reflect our current\n> hardware capabilities.\n> \n> Performances : Here we should favorise a single huge transaction and\n> consider a low probability to have another transaction requiring large\n> sort space. Considering this, is it reasonable to define work_mem being\n> 3GB (I guess I should raise this parameter dynamically before running\n> the aggregation queries)\n> \n> 4) Queries (report generation)\n> \n> We have only few requests which are not satisfying while requiring large\n> sort operations. The data are structured in different aggregation levels\n> (minutes, hours, days) with logical time based partitions in oder to\n> limit the data size to compute for a given report. Moreover we can scale\n> our infrastrucure while using different or dedicated Postgres servers\n> for different customers. Smaller customers may share a same instance,\n> each of them having its own schema (The lock mechanism for large\n> aggregations apply to a whole Postgres instance, not to a single\n> customer) . The benchmark will help us to plan such distribution.\n> \n> During the benchmark, we will probably not have more than 50 not idle\n> connections simultaneously. It is a bit too early for us to fine tune\n> this part. 
The benchmark will mainly focus on the steps 1 to 3\n> \n> During the benchmark, the Db will reach a size of about 400 GB,\n> simulating 3 different customers, also with data quite equally splitted\n> in 3 scheemas.\n> \n> \n> \n> I will post our configuration(s) later on.\n> \n> \n> \n> Thanks again for all your valuable input.\n> \n> Marc Mamin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n", "msg_date": "Wed, 01 Aug 2007 11:10:14 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." } ]
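[Editorial note on the thread above: the "shadow table" load Marc describes in step 2 can be sketched roughly as below. Every object and file name here is invented for illustration, the memory values are simply the figures discussed in the thread, and the '1GB'/'3GB' unit syntax assumes 8.2-style GUC units; treat it as a sketch, not Marc's actual scripts.]

    -- Load into an index-less shadow table, index it afterwards, then swap it in.
    CREATE TABLE daily_data_shadow (LIKE daily_data);

    SET maintenance_work_mem = '1GB';   -- speeds up the index builds, as recommended
    COPY daily_data_shadow FROM '/data/incoming/batch_0001.csv' WITH CSV;

    CREATE INDEX daily_data_shadow_ts_idx ON daily_data_shadow (log_time);
    -- build the remaining indexes and the primary key the same way

    BEGIN;
    ALTER TABLE daily_data RENAME TO daily_data_old;
    ALTER TABLE daily_data_shadow RENAME TO daily_data;
    DROP TABLE daily_data_old;
    COMMIT;

    -- For the step-3 aggregation, work_mem can be raised for just that session,
    -- as Marc suggests (subject to platform limits):
    SET work_mem = '3GB';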
[ { "msg_contents": "It appears my multi-thread application (100 connections every 5 seconds) \nis stalled when working with postgresql database server. I have limited \nnumber of connections in my connection pool to postgresql to 20. At the \nbegining, connection is allocated and released from connection pool as \npostgres serves data request. The pool can recover from exhaustion. But \nvery quickly (after about 400 client requests), it seems postgres server \nstops serving and connection to postgres server is not released any more \nresulting a resource exhausting for clients.\n\nAnyone have experience with the performance aspect of this?\n\nFei\nunix 3 [ ] STREAM CONNECTED 1693655 \n31976/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693654 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693653 \n31975/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693652 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693651 \n31974/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693650 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693649 \n31973/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693648 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693647 \n31972/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693646 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693645 \n31971/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693644 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693641 \n31969/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693640 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693639 \n31968/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693638 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693637 \n31967/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693636 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693585 \n31941/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693584 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693583 \n31940/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693582 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693581 \n31939/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693580 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693579 \n31938/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693578 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693577 \n31937/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693576 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693575 \n31936/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693574 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693573 \n31935/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693572 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693571 \n31934/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693570 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693427 \n31851/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693426 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693425 \n31777/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693424 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693419 \n31764/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693418 \n31740/ns_ge_classif\n\n\nunix 3 [ ] STREAM CONNECTED 1693655 \n31976/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693654 \n31740/ns_ge_classif\nunix 3 [ ] 
STREAM CONNECTED 1693653 \n31975/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693652 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693651 \n31974/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693650 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693649 \n31973/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693648 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693647 \n31972/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693646 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693645 \n31971/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693644 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693641 \n31969/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693640 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693639 \n31968/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693638 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693637 \n31967/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693636 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693585 \n31941/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693584 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693583 \n31940/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693582 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693581 \n31939/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693580 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693579 \n31938/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693578 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693577 \n31937/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693576 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693575 \n31936/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693574 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693573 \n31935/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693572 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693571 \n31934/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693570 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693427 \n31851/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693426 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693425 \n31777/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693424 \n31740/ns_ge_classif\nunix 3 [ ] STREAM CONNECTED 1693419 \n31764/postgres: pos /tmp/.s.PGSQL.5583\nunix 3 [ ] STREAM CONNECTED 1693418 \n31740/ns_ge_classif\n\n", "msg_date": "Tue, 17 Jul 2007 14:51:05 -0400", "msg_from": "Fei Liu <[email protected]>", "msg_from_op": true, "msg_subject": "large number of connected connections to postgres database (v8.0)" }, { "msg_contents": "Fei Liu,\n\n> It appears my multi-thread application (100 connections every 5 seconds)\n> is stalled when working with postgresql database server. I have limited\n> number of connections in my connection pool to postgresql to 20. At the\n> begining, connection is allocated and released from connection pool as\n> postgres serves data request. The pool can recover from exhaustion. But\n> very quickly (after about 400 client requests), it seems postgres server\n> stops serving and connection to postgres server is not released any more\n> resulting a resource exhausting for clients.\n\nThis sounds more like a problem with your connection pool. 
Unless \nPostgreSQL is slowing down due to CPU/RAM/I/O saturation?\n\nIn any case, I doubt the problem is too many connections, or you'd just get \nan error message ...\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 20 Jul 2007 16:28:27 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large number of connected connections to postgres database (v8.0)" } ]
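[Editorial note on the thread above: a server-side way to check Josh's suspicion is to look at what the pooled backends are actually doing. A sketch using the 8.0/8.1 column names (procpid and current_query were renamed in much later releases, and current_query is only populated when stats_command_string is enabled):]

    -- Backends sitting as '<IDLE> in transaction' for a long time usually mean
    -- the client or pool is holding connections open, not that the server has
    -- stopped serving requests.
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;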
[ { "msg_contents": "Hi\n\nI was doing some testing on \"insert\" compared to \"select into\". I \ninserted 100 000 rows (with 8 column values) into a table, which took 14 \nseconds, compared to a select into, which took 0.8 seconds.\n(fyi, the inserts where batched, autocommit was turned off and it all \nhappend on the local machine)\n\nNow I am wondering why the select into is that much faster?\nDoes the select into translate into a specially optimised function in c \nthat can cut corners which a insert can not do (e.g. lazy copying), or \nis it some other reason?\n\nThe reason I am asking is that select into shows that a number of rows \ncan be inserted into a table quite a lot faster than one would think was \npossible with ordinary sql. If that is the case, it means that if I \nwrite an pl-pgsql insert function in C instead of sql, then I can have \nmy db perform order of magnitude faster.\n\nAny comments?\n\nregards\n\nthomas\n", "msg_date": "Tue, 17 Jul 2007 21:38:59 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "insert vs select into performance" }, { "msg_contents": "Have you also tried the COPY-statement? Afaik select into is similar to \nwhat happens in there.\n\nBest regards,\n\nArjen\n\nOn 17-7-2007 21:38 Thomas Finneid wrote:\n> Hi\n> \n> I was doing some testing on \"insert\" compared to \"select into\". I \n> inserted 100 000 rows (with 8 column values) into a table, which took 14 \n> seconds, compared to a select into, which took 0.8 seconds.\n> (fyi, the inserts where batched, autocommit was turned off and it all \n> happend on the local machine)\n> \n> Now I am wondering why the select into is that much faster?\n> Does the select into translate into a specially optimised function in c \n> that can cut corners which a insert can not do (e.g. lazy copying), or \n> is it some other reason?\n> \n> The reason I am asking is that select into shows that a number of rows \n> can be inserted into a table quite a lot faster than one would think was \n> possible with ordinary sql. If that is the case, it means that if I \n> write an pl-pgsql insert function in C instead of sql, then I can have \n> my db perform order of magnitude faster.\n> \n> Any comments?\n> \n> regards\n> \n> thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Tue, 17 Jul 2007 21:53:51 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\nOn Jul 17, 2007, at 14:38 , Thomas Finneid wrote:\n\n> I was doing some testing on \"insert\" compared to \"select into\". I \n> inserted 100 000 rows (with 8 column values) into a table, which \n> took 14 seconds, compared to a select into, which took 0.8 seconds.\n> (fyi, the inserts where batched, autocommit was turned off and it \n> all happend on the local machine)\n>\n> Now I am wondering why the select into is that much faster?\n\nIt would be helpful if you included the actual queries you're using, \nas there are a number of variables:\n\n1) If there are any constraints on the original table, the INSERT \nwill be checking those constraints. AIUI, SELECT INTO does not \ngenerate any table constraints.\n\n2a) Are you using INSERT INTO foo (foo1, foo2, foo2) SELECT foo1, \nfoo2, foo3 FROM pre_foo or individual inserts for each row? 
The \nformer would be faster than the latter.\n\n2b) If you are doing individual inserts, are you wrapping them in a \ntransaction? The latter would be faster.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 17 Jul 2007 14:54:55 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "Michael Glaesemann <[email protected]> writes:\n> It would be helpful if you included the actual queries you're using, \n> as there are a number of variables:\n\nNot to mention which PG version he's testing. Since (I think) 8.1,\nSELECT INTO knows that it can substitute one fsync for WAL-logging\nthe individual row inserts, since if there's a crash the new table\nwill disappear anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jul 2007 16:31:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance " }, { "msg_contents": "\n\nMichael Glaesemann wrote:\n> \n> On Jul 17, 2007, at 14:38 , Thomas Finneid wrote:\n> \n>> I was doing some testing on \"insert\" compared to \"select into\". I \n>> inserted 100 000 rows (with 8 column values) into a table, which took \n>> 14 seconds, compared to a select into, which took 0.8 seconds.\n>> (fyi, the inserts where batched, autocommit was turned off and it all \n>> happend on the local machine)\n>>\n>> Now I am wondering why the select into is that much faster?\n> \n> It would be helpful if you included the actual queries you're using, as \n> there are a number of variables:\n\ncreate table ciu_data_type\n(\n\tid\t\tinteger,\n\tloc_id\t \tinteger,\n\tvalue1\t\tinteger,\n\tvalue2\t\treal,\n\tvalue3\t\tinteger,\n\tvalue4\t\treal,\n\tvalue5\t\treal,\n\tvalue6\t\tchar(2),\n\tvalue7\t\tchar(3),\n\tvalue8\t\tbigint,\n\tvalue9\t\tbigint,\n\tvalue10\t\treal,\n\tvalue11\t\tbigint,\n\tvalue12\t\tsmallint,\n\tvalue13\t\tdouble precision,\n\tvalue14\t\treal,\n\tvalue15\t\treal,\n\tvalue16\t\tchar(1),\n\tvalue17\t\tvarchar(18),\n\tvalue18\t\tbigint,\n\tvalue19\t\tchar(4)\n);\n\nperformed with JDBC\n\ninsert into ciu_data_type (id, loc_id, value3, value5, value8, value9, \nvalue10, value11 ) values (?,?,?,?,?,?,?,?)\n\nselect * into ciu_data_type_copy from ciu_data_type\n\n> 1) If there are any constraints on the original table, the INSERT will \n> be checking those constraints. AIUI, SELECT INTO does not generate any \n> table constraints.\n\nNo constraints in this test.\n\n> 2a) Are you using INSERT INTO foo (foo1, foo2, foo2) SELECT foo1, foo2, \n> foo3 FROM pre_foo or individual inserts for each row? The former would \n> be faster than the latter.\n> \n> 2b) If you are doing individual inserts, are you wrapping them in a \n> transaction? The latter would be faster.\n\ndisabling autocommit, but nothing more than that\n\n\nI havent done this test in a stored function yet, nor have I tried it \nwith a C client so far, so there is the chance that it is java/jdbc that \nmakes the insert so slow. 
I'll get to that test soon if there is any \nchance my theory makes sence.\n\nregards\n\nthomas\n\n", "msg_date": "Tue, 17 Jul 2007 22:50:22 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n\nTom Lane wrote:\n> Michael Glaesemann <[email protected]> writes:\n>> It would be helpful if you included the actual queries you're using, \n>> as there are a number of variables:\n> \n> Not to mention which PG version he's testing. \n\nIts pg 8.1, for now, I'll be upgrading to a compile optimised 8.2 when I \ndo the real test on the real server.\n\n(its on kubuntu 6.10 running on a Thinkpad T60 with dual core 1.5,GB RAM \nand 100GB SATA, just in case anybody feels that is of any interrest.)\n\n\n> Since (I think) 8.1,\n> SELECT INTO knows that it can substitute one fsync for WAL-logging\n> the individual row inserts, since if there's a crash the new table\n> will disappear anyway.\n\nI am not sure I understand you correctly here, are you saying that \nSELECT INTO in 8.1 disables WAL logging and uses just a single fsync at \nthe end? in that case it means that I could disable WAL as well and \nachieve the same performance, does it not?\n\nregards\n\nthomas\n\n", "msg_date": "Tue, 17 Jul 2007 22:58:01 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "On Tue, Jul 17, 2007 at 10:50:22PM +0200, Thomas Finneid wrote:\n>I havent done this test in a stored function yet, nor have I tried it \n>with a C client so far, so there is the chance that it is java/jdbc that \n>makes the insert so slow. I'll get to that test soon if there is any \n>chance my theory makes sence.\n\nWhat you're seeing is perfectly normal. Switch to COPY for fast inserts. \n(When you use inserts you need to wait for a round-trip for each row, \ninstead of sending data to the server as fast as possible.)\n\nMike Stone\n", "msg_date": "Tue, 17 Jul 2007 16:58:35 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "If you're performing via JDBC, are you using addBatch/executeBatch, or\nare you directly executing each insert? If you directly execute each\ninsert, then your code will wait for a server round-trip between each\ninsert.\n\nThat still won't get you to the speed of select into, but it should\nhelp. You could also look at the pgsql-jdbc archives for the JDBC\ndriver patches which allow you to use COPY-style bulk loading, which\nshould get you to the performance level of COPY, which should be\nreasonably close to the performance of select into.\n\n-- Mark Lewis\n\nOn Tue, 2007-07-17 at 22:50 +0200, Thomas Finneid wrote:\n> \n> Michael Glaesemann wrote:\n> > \n> > On Jul 17, 2007, at 14:38 , Thomas Finneid wrote:\n> > \n> >> I was doing some testing on \"insert\" compared to \"select into\". 
I \n> >> inserted 100 000 rows (with 8 column values) into a table, which took \n> >> 14 seconds, compared to a select into, which took 0.8 seconds.\n> >> (fyi, the inserts where batched, autocommit was turned off and it all \n> >> happend on the local machine)\n> >>\n> >> Now I am wondering why the select into is that much faster?\n> > \n> > It would be helpful if you included the actual queries you're using, as \n> > there are a number of variables:\n> \n> create table ciu_data_type\n> (\n> \tid\t\tinteger,\n> \tloc_id\t \tinteger,\n> \tvalue1\t\tinteger,\n> \tvalue2\t\treal,\n> \tvalue3\t\tinteger,\n> \tvalue4\t\treal,\n> \tvalue5\t\treal,\n> \tvalue6\t\tchar(2),\n> \tvalue7\t\tchar(3),\n> \tvalue8\t\tbigint,\n> \tvalue9\t\tbigint,\n> \tvalue10\t\treal,\n> \tvalue11\t\tbigint,\n> \tvalue12\t\tsmallint,\n> \tvalue13\t\tdouble precision,\n> \tvalue14\t\treal,\n> \tvalue15\t\treal,\n> \tvalue16\t\tchar(1),\n> \tvalue17\t\tvarchar(18),\n> \tvalue18\t\tbigint,\n> \tvalue19\t\tchar(4)\n> );\n> \n> performed with JDBC\n> \n> insert into ciu_data_type (id, loc_id, value3, value5, value8, value9, \n> value10, value11 ) values (?,?,?,?,?,?,?,?)\n> \n> select * into ciu_data_type_copy from ciu_data_type\n> \n> > 1) If there are any constraints on the original table, the INSERT will \n> > be checking those constraints. AIUI, SELECT INTO does not generate any \n> > table constraints.\n> \n> No constraints in this test.\n> \n> > 2a) Are you using INSERT INTO foo (foo1, foo2, foo2) SELECT foo1, foo2, \n> > foo3 FROM pre_foo or individual inserts for each row? The former would \n> > be faster than the latter.\n> > \n> > 2b) If you are doing individual inserts, are you wrapping them in a \n> > transaction? The latter would be faster.\n> \n> disabling autocommit, but nothing more than that\n> \n> \n> I havent done this test in a stored function yet, nor have I tried it \n> with a C client so far, so there is the chance that it is java/jdbc that \n> makes the insert so slow. I'll get to that test soon if there is any \n> chance my theory makes sence.\n> \n> regards\n> \n> thomas\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 17 Jul 2007 13:59:28 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n\nArjen van der Meijden wrote:\n> Have you also tried the COPY-statement? Afaik select into is similar to \n> what happens in there.\n\nNo, because it only works on file to db or vice versa not table to table.\n\nregards\n\nthoams\n", "msg_date": "Tue, 17 Jul 2007 23:01:15 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\nOn Jul 17, 2007, at 15:50 , Thomas Finneid wrote:\n\n> Michael Glaesemann wrote:\n\n>> 2a) Are you using INSERT INTO foo (foo1, foo2, foo2) SELECT foo1, \n>> foo2, foo3 FROM pre_foo or individual inserts for each row? The \n>> former would be faster than the latter.\n\n> performed with JDBC\n>\n> insert into ciu_data_type (id, loc_id, value3, value5, value8, \n> value9, value10, value11 ) values (?,?,?,?,?,?,?,?)\n\nAs they're individual inserts, I think what you're seeing is overhead \nfrom calling this statement 100,000 times, not just on the server but \nalso the overhead through JDBC. 
For comparison, try\n\nCREATE TABLE ciu_data_type_copy LIKE ciu_data_type;\n\nINSERT INTO ciu_data_type_copy (id, loc_id, value3, value5, value8, \nvalue9, value10, value11)\nSELECT id, loc_id, value3, value5, value8, value9, value10, value11\nFROM ciu_data_type;\n\nI think this would be more comparable to what you're seeing.\n\n> I havent done this test in a stored function yet, nor have I tried \n> it with a C client so far, so there is the chance that it is java/ \n> jdbc that makes the insert so slow. I'll get to that test soon if \n> there is any chance my theory makes sence.\n\nJust testing in psql with \\timing should be fairly easy.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 17 Jul 2007 16:07:04 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n\nMark Lewis wrote:\n> If you're performing via JDBC, are you using addBatch/executeBatch, or\n> are you directly executing each insert? If you directly execute each\n> insert, then your code will wait for a server round-trip between each\n> insert.\n\nI tested both and I found almost no difference in the time it took to \nperform it. Mind you this was on a local machine, but I still thought \nthat it was a bit strange.\n\n> That still won't get you to the speed of select into, but it should\n> help. You could also look at the pgsql-jdbc archives for the JDBC\n> driver patches which allow you to use COPY-style bulk loading, which\n> should get you to the performance level of COPY, which should be\n> reasonably close to the performance of select into.\n\nYes, someone else on the list suggested this a couple of weeks ago. I \nhavent had a chance to test it yet, but I am hopeful that I can use it.\n\nThe only issue I have is that the test I have done are rather \nsimplistic, because they are just speed trials. The real system will \nprobably use 5-10 tables, with up to 100 columns for all tables, that \nmeans I need a stored function which goes through all bulked data and \nreinserts them into their real tables. I am worried that this might hurt \nthe performance so much so that almost the entire bulk copy advantage \ndiasappears. This is why I am wondering about the details of SELECT INTO \nand C functions etc.\n\nregards\n\nthomas\n", "msg_date": "Tue, 17 Jul 2007 23:10:50 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n> I was doing some testing on \"insert\" compared to \"select into\". I \n> inserted 100 000 rows (with 8 column values) into a table, which took 14 \n> seconds, compared to a select into, which took 0.8 seconds.\n> (fyi, the inserts where batched, autocommit was turned off and it all \n> happend on the local machine)\n\n\tDid you use prepared statements ?\n\tDid you use INSERT INTO ... VALUES () with a long list of values, or just \n100K insert statements ?\n\n\tIt's the time to parse statements, plan, execute, roundtrips with the \nclient, context switches, time for your client library to escape the data \nand encode it and for postgres to decode it, etc. In a word : OVERHEAD.\n\n\tBy the way which language and client library are you using ?\n\n\tFYI 14s / 100k = 140 microseconds per individual SQL query. That ain't \nslow at all.\n\n> Does the select into translate into a specially optimised function in c \n> that can cut corners which a insert can not do (e.g. 
lazy copying), or \n> is it some other reason?\n\n\tYeah : instead of your client having to encode 100K * 8 values, send it \nover a socket, and postgres decoding it, INSERT INTO SELECT just takes the \ndata, and writes the data. Same thing as writing a file a byte at a time \nversus using a big buffer.\n\n> The reason I am asking is that select into shows that a number of rows \n> can be inserted into a table quite a lot faster than one would think was \n> possible with ordinary sql. If that is the case, it means that if I \n> write an pl-pgsql insert function in C instead of sql, then I can have \n> my db perform order of magnitude faster.\n\n\tFortunately this is already done for you : there is the PREPARE \nstatement, which will remove the parsing overhead. If you must insert many \nrows, use VALUES (),(),()...\n", "msg_date": "Tue, 17 Jul 2007 23:14:36 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n\nPFC wrote:\n> \n>> I was doing some testing on \"insert\" compared to \"select into\". I \n>> inserted 100 000 rows (with 8 column values) into a table, which took \n>> 14 seconds, compared to a select into, which took 0.8 seconds.\n>> (fyi, the inserts where batched, autocommit was turned off and it all \n>> happend on the local machine)\n> \n> Did you use prepared statements ?\n> Did you use INSERT INTO ... VALUES () with a long list of values, or \n> just 100K insert statements ?\n\nIt was prepared statements and I tried it both batched and non-batched \n(not much difference on a local machine)\n\n> It's the time to parse statements, plan, execute, roundtrips with \n> the client, context switches, time for your client library to escape the \n> data and encode it and for postgres to decode it, etc. In a word : \n> OVERHEAD.\n\nI know there is some overhead, but that much when running it batched...?\n\n> By the way which language and client library are you using ?\n> \n> FYI 14s / 100k = 140 microseconds per individual SQL query. That \n> ain't slow at all.\n\nUnfortunately its not fast enough, it needs to be done in no more than \n1-2 seconds, ( and in production it will be maybe 20-50 columns of data, \nperhaps divided over 5-10 tables.)\nAdditionally it needs to scale to perhaps three times as many columns \nand perhaps 2 - 3 times as many rows in some situation within 1 seconds.\nFurther on it needs to allow for about 20 - 50 clients reading much of \nthat data before the next batch of data arrives.\n\nI know the computer is going to be a much faster one than the one I am \ntesting with, but I need to make sure the solution scales well.\n\n\nregars\n\nthomas\n", "msg_date": "Tue, 17 Jul 2007 23:27:07 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n>> It's the time to parse statements, plan, execute, roundtrips with \n>> the client, context switches, time for your client library to escape \n>> the data and encode it and for postgres to decode it, etc. 
In a word : \n>> OVERHEAD.\n>\n> I know there is some overhead, but that much when running it batched...?\n\n\tWell, yeah ;)\n\n> Unfortunately its not fast enough, it needs to be done in no more than \n> 1-2 seconds, ( and in production it will be maybe 20-50 columns of data, \n> perhaps divided over 5-10 tables.)\n> Additionally it needs to scale to perhaps three times as many columns \n> and perhaps 2 - 3 times as many rows in some situation within 1 seconds.\n> Further on it needs to allow for about 20 - 50 clients reading much of \n> that data before the next batch of data arrives.\n\n\tWow. What is the application ?\n\n\tTest run on a desktop PC, Athlon 64 3200+, 2 IDE disks in RAID1 (pretty \nslow) :\n\ntest=> CREATE TABLE test (a INT, b INT, c INT, d INT, e INT, f INT);\nCREATE TABLE\nTemps : 11,463 ms\n\ntest=> INSERT INTO test SELECT 1,2,3,4,5,a FROM generate_series( 1, 100000 \n) as a;\nINSERT 0 100000\nTemps : 721,579 ms\n\n\tOK, so you see, insert speed is pretty fast. With a better CPU and faster \ndisks, you can get a lot more.\n\ntest=> TRUNCATE TABLE test;\nTRUNCATE TABLE\nTemps : 30,010 ms\n\ntest=> ALTER TABLE test ADD PRIMARY KEY (f);\nINFO: ALTER TABLE / ADD PRIMARY KEY créera un index implicite «test_pkey» \npour la table «test»\nALTER TABLE\nTemps : 100,577 ms\n\ntest=> INSERT INTO test SELECT 1,2,3,4,5,a FROM generate_series( 1, 100000 \n) as a;\nINSERT 0 100000\nTemps : 1915,928 ms\n\n\tThis includes the time to update the index.\n\ntest=> DROP TABLE test;\nDROP TABLE\nTemps : 28,804 ms\n\ntest=> CREATE TABLE test (a INT, b INT, c INT, d INT, e INT, f INT);\nCREATE TABLE\nTemps : 1,626 ms\n\ntest=> CREATE OR REPLACE FUNCTION test_insert( )\n RETURNS VOID\n LANGUAGE plpgsql\n AS\n$$\nDECLARE\n _i INTEGER;\nBEGIN\n FOR _i IN 0..100000 LOOP\n INSERT INTO test (a,b,c,d,e,f) VALUES (1,2,3,4,5, _i);\n END LOOP;\nEND;\n$$;\nCREATE FUNCTION\nTemps : 51,948 ms\n\ntest=> SELECT test_insert();\n test_insert\n-------------\n\n(1 ligne)\n\nTemps : 1885,382 ms\n\n\tNow you see, performing 100K individual inserts inside a plpgsql function \nis also fast.\n\tThe postgres engine is pretty damn fast ; it's the communication overhead \nthat you feel, especially switching between client and server processes.\n\n\tAnother example :\n\n=> INSERT INTO test (a,b,c,d,e,f) VALUES (... 100000 integer tuples)\nINSERT 0 100000\nTemps : 1836,458 ms\n\n\tVALUES is actually pretty fast. Here, there is no context switch, \neverything is done in 1 INSERT.\n\n\tHowever COPY is much faster because the parsing overhead and de-escaping \nof data is faster. COPY is optimized for throughput.\n\n\tSo, advice :\n\n\tFor optimum throughput, have your application build chunks of data into \ntext files and use COPY. Or if your client lib supports the copy \ninterface, use it.\n\tYou will need a fast disk system with xlog and data on separate disks, \nseveral CPU cores (1 insert thread will max out 1 core, use the others for \nselects), lots of RAM so index updates don't need to seek, and tuning of \nbgwriter and checkpoints to avoid load spikes.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 18 Jul 2007 13:07:18 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "On Tue, Jul 17, 2007 at 10:58:01PM +0200, Thomas Finneid wrote:\n>I am not sure I understand you correctly here, are you saying that \n>SELECT INTO in 8.1 disables WAL logging and uses just a single fsync at \n>the end? 
in that case it means that I could disable WAL as well and \n>achieve the same performance, does it not?\n\nYes. The difference is that the select into optimization just means that \nif the system crashes the data you're inserting is invalid (and is \nproperly cleaned up), and disabling the WAL means that if the system \ncrashes everything is invalid (and can't be cleaned up). \n\nMike Stone\n", "msg_date": "Wed, 18 Jul 2007 07:31:43 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "On Tue, Jul 17, 2007 at 11:01:15PM +0200, Thomas Finneid wrote:\n>Arjen van der Meijden wrote:\n>>Have you also tried the COPY-statement? Afaik select into is similar to \n>>what happens in there.\n>\n>No, because it only works on file to db or vice versa not table to table.\n\nI don't understand how the insert you described is table to table?\n\nMike Stone\n", "msg_date": "Wed, 18 Jul 2007 07:32:50 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\nPFC wrote:\n>> Unfortunately its not fast enough, it needs to be done in no more than \n>> 1-2 seconds, ( and in production it will be maybe 20-50 columns of \n>> data, perhaps divided over 5-10 tables.)\n>> Additionally it needs to scale to perhaps three times as many columns \n>> and perhaps 2 - 3 times as many rows in some situation within 1 seconds.\n>> Further on it needs to allow for about 20 - 50 clients reading much of \n>> that data before the next batch of data arrives.\n> \n> Wow. What is the application ?\n\nGeological surveys, where they perform realtime geo/hydro-phone shots of \nareas of the size of 10x10km every 3-15 seconds.\n\n\n> test=> CREATE OR REPLACE FUNCTION test_insert( )\n> RETURNS VOID\n> LANGUAGE plpgsql\n> AS\n> $$\n> DECLARE\n> _i INTEGER;\n> BEGIN\n> FOR _i IN 0..100000 LOOP\n> INSERT INTO test (a,b,c,d,e,f) VALUES (1,2,3,4,5, _i);\n> END LOOP;\n> END;\n> $$;\n> CREATE FUNCTION\n> Temps : 51,948 ms\n> \n> test=> SELECT test_insert();\n> test_insert\n> -------------\n> \n> (1 ligne)\n> \n> Temps : 1885,382 ms\n\nI tested this one and it took 4 seconds, compared to the jdbc insert \nwhich took 14 seconds, so its a lot faster. but not as fast as the \nSELECT INTO.\n\nI also tested an INSERT INTO FROM SELECT, which took 1.8 seconds, now we \nare starting to talk about real performance.\n\n\n> However COPY is much faster because the parsing overhead and \n> de-escaping of data is faster. COPY is optimized for throughput.\n> \n> So, advice :\n> \n> For optimum throughput, have your application build chunks of data \n> into text files and use COPY. Or if your client lib supports the copy \n> interface, use it.\n\nI did test COPY, i.e. the jdbc COPY patch for pg 8.1, it performs at \napprox 1.8 seconds :) The test was done with text input, I am going to \ntest it with binary input, which I expect will increase the performance \nwith 20-50%.\n\nAll these test have ben performed on a laptop with a Kubuntu 6.10 \nversion of pg 8.1 without any special pg performance tuning. 
So I expect \nthat compiling lates pg and doing some tuning on it and testing it on \nthe a representative server will give it an additional boost in performance.\n\nThe key here is that with abundance in performance, I can experiment \nwith the solution in a completely different way than if I had any \n\"artificial\" restrictions.\n\n> You will need a fast disk system with xlog and data on separate \n> disks, several CPU cores (1 insert thread will max out 1 core, use the \n> others for selects), lots of RAM so index updates don't need to seek, \n> and tuning of bgwriter and checkpoints to avoid load spikes.\n\nwill have a look at it.\n\nregards\n\nthomas\n", "msg_date": "Wed, 18 Jul 2007 21:08:08 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "Michael Glaesemann wrote:\n> \n> As they're individual inserts, I think what you're seeing is overhead \n> from calling this statement 100,000 times, not just on the server but \n> also the overhead through JDBC. For comparison, try\n> \n> CREATE TABLE ciu_data_type_copy LIKE ciu_data_type;\n> \n> INSERT INTO ciu_data_type_copy (id, loc_id, value3, value5, value8, \n> value9, value10, value11)\n> SELECT id, loc_id, value3, value5, value8, value9, value10, value11\n> FROM ciu_data_type;\n> \n> I think this would be more comparable to what you're seeing.\n\nThis is much faster than my previous solution, but, I also tested two \nother solutions\n- a stored function with array arguments and it performed 3 times better.\n- jdbc with COPY patch performed 8.4 times faster with text input, \nexpect binary input to be even faster.\n\nregards\n\nthomas\n", "msg_date": "Wed, 18 Jul 2007 21:11:20 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "\n\nMichael Stone wrote:\n> On Tue, Jul 17, 2007 at 11:01:15PM +0200, Thomas Finneid wrote:\n>> Arjen van der Meijden wrote:\n>>> Have you also tried the COPY-statement? Afaik select into is similar \n>>> to what happens in there.\n>>\n>> No, because it only works on file to db or vice versa not table to table.\n> \n> I don't understand how the insert you described is table to table?\n\nSELECT INTO is table to table, so is INSERT INTO SELECT FROM.\n\nregards\n\nthomas\n", "msg_date": "Wed, 18 Jul 2007 21:13:14 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "Michael Stone wrote:\n> On Tue, Jul 17, 2007 at 10:58:01PM +0200, Thomas Finneid wrote:\n>> I am not sure I understand you correctly here, are you saying that \n>> SELECT INTO in 8.1 disables WAL logging and uses just a single fsync \n>> at the end? in that case it means that I could disable WAL as well and \n>> achieve the same performance, does it not?\n> \n> Yes. The difference is that the select into optimization just means that \n> if the system crashes the data you're inserting is invalid (and is \n> properly cleaned up), and disabling the WAL means that if the system \n> crashes everything is invalid (and can't be cleaned up).\n\nSo, how does one (temporarily) disable WAL logging ? 
Or, for example, disable WAL logging for a \ntemporary table ?\n\nRegards,\n\nAdriaan van Os\n", "msg_date": "Wed, 18 Jul 2007 21:28:40 +0200", "msg_from": "Adriaan van Os <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "Adriaan van Os wrote:\n> So, how does one (temporarily) disable WAL logging ? Or, for example,\n> disable WAL logging for a temporary table ?\n\nOperations on temporary tables are never WAL logged. Operations on other\ntables are, and there's no way to disable it.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Jul 2007 20:58:02 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" }, { "msg_contents": "On Wed, Jul 18, 2007 at 09:13:14PM +0200, Thomas Finneid wrote:\n>Michael Stone wrote:\n>>I don't understand how the insert you described is table to table?\n>\n>SELECT INTO is table to table, so is INSERT INTO SELECT FROM.\n\nI could have sworn that at least one of the examples you gave didn't \nhave any select. Doesn't really matter.\n\nMike Stone\n", "msg_date": "Mon, 23 Jul 2007 07:51:50 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert vs select into performance" } ]
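[Editorial note on the thread above: for reference, the COPY route recommended several times in this thread, written against the ciu_data_type table defined earlier. The file path is made up, and the multi-row VALUES form shown last needs 8.2 or later.]

    -- One COPY replaces 100 000 individual INSERT round trips.
    COPY ciu_data_type (id, loc_id, value3, value5, value8, value9, value10, value11)
    FROM '/tmp/ciu_data.txt';
    -- server-side path, tab-separated by default; from psql, \copy reads a client-side file

    -- On 8.2+ many rows can also be sent in a single INSERT statement:
    INSERT INTO ciu_data_type (id, loc_id, value3, value5, value8, value9, value10, value11)
    VALUES (1, 10, 3, 0.5, 8, 9, 1.0, 11),
           (2, 10, 3, 0.6, 8, 9, 1.1, 12);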
[ { "msg_contents": "Hi\n\nDuring the somes I did I noticed that it does not necessarily seem to be \ntrue that one needs the fastest disks to have a pg system that is fast.\n\nIt seems to me that its more important to:\n- choose the correct methods to use for the operation\n- tune the pg memory settings\n- tune/disable pg xlog/wal etc\n\nIt also seems to me that fast disks are more important for db systems of \nthe OLTP type applications with real concurrency of both readers and \nwrites across many, possibly larger, tables etc.\n\nAre the above statements close to having any truth in them?\n\nregards\n\nthomas\n", "msg_date": "Tue, 17 Jul 2007 23:44:14 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "importance of fast disks with pg" }, { "msg_contents": "Thomas Finneid wrote:\n> Hi\n> \n> During the somes I did I noticed that it does not necessarily seem to be \n> true that one needs the fastest disks to have a pg system that is fast.\n> \n> It seems to me that its more important to:\n> - choose the correct methods to use for the operation\n> - tune the pg memory settings\n> - tune/disable pg xlog/wal etc\n> \n> It also seems to me that fast disks are more important for db systems of \n> the OLTP type applications with real concurrency of both readers and \n> writes across many, possibly larger, tables etc.\n> \n> Are the above statements close to having any truth in them?\n> \n> regards\n> \n> thomas\n\nI'd say that \"it depends\". We run an OLAP workload on 350+ gigs of database on \na system with 64GB of RAM. I can tell you for certain that fetching non-cached \ndata is very sensitive to disk throughput!\n\nDifferent types of workloads will find different bottlenecks in the system..\n\n-Dan\n", "msg_date": "Tue, 17 Jul 2007 15:54:27 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: importance of fast disks with pg" }, { "msg_contents": "Thomas Finneid wrote:\n> During the somes I did I noticed that it does not necessarily seem to be\n> true that one needs the fastest disks to have a pg system that is fast.\n> \n> It seems to me that its more important to:\n> - choose the correct methods to use for the operation\n> - tune the pg memory settings\n> - tune/disable pg xlog/wal etc\n> \n> It also seems to me that fast disks are more important for db systems of\n> the OLTP type applications with real concurrency of both readers and\n> writes across many, possibly larger, tables etc.\n> \n> Are the above statements close to having any truth in them?\n\nIt depends.\n\nThe key to performance is to identify the bottleneck. If your CPU is\nrunning at 50%, and spends 50% of the time waiting for I/O, a faster\ndisk will help. But only up to a point. After you add enough I/O\ncapability that the CPU is running at 100%, getting faster disks doesn't\nhelp anymore. At that point you need to get more CPU power.\n\nHere's the algorithm for increasing application throughput:\n\nwhile throughput is not high enough\n{\n identify bottleneck\n resolve bottleneck, by faster/more hardware, or by optimizing application\n}\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Jul 2007 09:57:15 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: importance of fast disks with pg" } ]
[ { "msg_contents": "Seems Linux has IO scheduling through a program called ionice.\n\nHas anyone here experimented with using it rather than\nvacuum sleep settings?\n\nhttp://linux.die.net/man/1/ionice\n This program sets the io scheduling class and priority\n for a program. As of this writing, Linux supports 3 scheduling\n classes:\n\n Idle. A program running with idle io priority will only get disk\n time when no other program has asked for disk io for a defined\n grace period. The impact of idle io processes on normal system\n activity should be zero.[...]\n\n Best effort. This is the default scheduling class for any process\n that hasn't asked for a specific io priority. Programs inherit the\n CPU nice setting for io priorities. [...]\n\nhttp://friedcpu.wordpress.com/2007/07/17/why-arent-you-using-ionice-yet/\n", "msg_date": "Tue, 17 Jul 2007 15:29:32 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "ionice to make vacuum friendier?" }, { "msg_contents": "Ron Mayer wrote:\n> Seems Linux has IO scheduling through a program called ionice.\n> \n> Has anyone here experimented with using it rather than\n> vacuum sleep settings?\n\nI looked at that briefly for smoothing checkpoints, but it was\nunsuitable for that purpose because it only prioritizes reads, not writes.\n\nIt maybe worth trying for vacuum, though vacuum too can do a lot of\nwrites. In the worst case, the OS cache is saturated with dirty pages,\nwhich blocks all writes in the system.\n\nIf it did prioritize writes as well, that would be *excellent*. Any\nkernel hackers out there looking for a project?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Jul 2007 10:03:00 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ionice to make vacuum friendier?" }, { "msg_contents": "On Wed, Jul 18, 2007 at 10:03:00AM +0100, Heikki Linnakangas wrote:\n> Ron Mayer wrote:\n> > Seems Linux has IO scheduling through a program called ionice.\n> > \n> > Has anyone here experimented with using it rather than\n> > vacuum sleep settings?\n> \n> I looked at that briefly for smoothing checkpoints, but it was\n> unsuitable for that purpose because it only prioritizes reads, not writes.\n> \n> It maybe worth trying for vacuum, though vacuum too can do a lot of\n> writes. In the worst case, the OS cache is saturated with dirty pages,\n> which blocks all writes in the system.\n> \n> If it did prioritize writes as well, that would be *excellent*. Any\n> kernel hackers out there looking for a project?\n\nMy understanding is that FreeBSD will prioritize IO based on process\npriority, though I have no idea how it's actually accomplished or how\neffective it is. But if we put in special support for this for Linux we\nshould consider FBSD as well.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Wed, 18 Jul 2007 13:48:16 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ionice to make vacuum friendier?" } ]
[ { "msg_contents": "Hi All,\n\nI am trying to find out how to use a trigger function on a table to copy any\ninserted row to a remote PG server.\n\nie:\n\nRow X is inserted into TableX in DB1 on server1....TableX trigger function\nfires and contacts DB2 on server2 and inserts the row into TableY on\nserver2.\n\nI've looked around and can't see to find this. Essentially I need to know\nhow to write to a remote DB server from within a trigger function.\n\nThis is not replication, I'm not interested in a full blown trigger based\nreplication solution.\n\nAny Help is greatly appreciated!\n\nThanks\n\nMike\n\nHi All,I am trying to find out how to use a trigger function on a table to copy any inserted row to a remote PG server.ie:Row X is inserted into TableX in DB1 on server1....TableX trigger function fires and contacts DB2 on server2 and inserts the row into TableY on server2.\nI've looked around and can't see to find this. Essentially I need to know how to write to a remote DB server from within a trigger function. This is not replication, I'm not interested in a full blown trigger based replication solution. \nAny Help is greatly appreciated!ThanksMike", "msg_date": "Wed, 18 Jul 2007 09:36:33 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to use a trigger to write rows to a remote server" }, { "msg_contents": "Michael Dengler wrote:\n> I am trying to find out how to use a trigger function on a table to copy\n> any\n> inserted row to a remote PG server.\n\nHave a look at contrib/dblink.\n\nYou'll have to think what you want to happen in error scenarios. For\nexample, if the connection is down, or it brakes just after inserting\nthe row to the other db, but before committing. Or if the insert on the\nother server succeeds, but the local transaction aborts.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Jul 2007 14:46:05 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server" }, { "msg_contents": "On Wed, 2007-07-18 at 15:36, Michael Dengler wrote:\n> Row X is inserted into TableX in DB1 on server1....TableX trigger\n> function fires and contacts DB2 on server2 and inserts the row into\n> TableY on server2. \n\nThis kind of problem is usually solved more robustly by inserting the\n\"change\" into a local table and let the remote server (or some external\nprogram) poll that periodically, and make the necessary changes to the\nremote server. 
This method does not have the problems Heikki mentions in\nhis reply with disconnections and transaction rollbacks, as the external\nprogram/remote server will only see committed transactions and it can\napply the accumulated changes after connection is recovered in case of\nfailure, without blocking the activity on the \"master\".\n\nThis is also covered in a few past posts on the postgres lists (I guess\nyou should look in the \"general\" list for that), in particular you could\nbe interested in the possibility of notifications if you want your\npoller to be notified immediately when a change occurs.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 18 Jul 2007 16:02:12 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server" }, { "msg_contents": "\"Michael Dengler\" <[email protected]> writes:\n> I am trying to find out how to use a trigger function on a table to copy any\n> inserted row to a remote PG server.\n> ...\n> This is not replication, I'm not interested in a full blown trigger based\n> replication solution.\n\nTo be blunt, you're nuts. You *are* building a trigger based\nreplication system, and the fact that you think you can cut corners\njust shows how little you know about the problems involved.\n\nUse Slony, or some other solution that someone else has already gotten\nthe bugs out of.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Jul 2007 10:51:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server " }, { "msg_contents": "On Wed, 2007-07-18 at 16:02 +0200, Csaba Nagy wrote:\n> On Wed, 2007-07-18 at 15:36, Michael Dengler wrote:\n> > Row X is inserted into TableX in DB1 on server1....TableX trigger\n> > function fires and contacts DB2 on server2 and inserts the row into\n> > TableY on server2. \n> This kind of problem is usually solved more robustly by inserting the\n> \"change\" into a local table and let the remote server (or some external\n\nIf you don't want to build your own push/pull system [actually hard to\ndo well] then use something like xmlBlaster or some other MOM. 
You get\nlogging, transactions, and other features thrown in.\n\nhttp://www.xmlblaster.org/xmlBlaster/doc/requirements/contrib.replication.html\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Wed, 18 Jul 2007 11:43:42 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server" }, { "msg_contents": "Hmm..I was hoping to avoid personal insults....\n\nAnyway, Nuts or not...what I am attempting is to simply have row from one\ntable inserted into another servers DB I don't see it as replication\nbecause:\n\na) The destination table will have a trigger that modifies the arriving data\nto fit its table scheme.\nb) It is not critical that the data be synchronous (ie a lost row on the\ndestination DB is not a big deal)\nc) I see as more of a provision of data to the destination DB NOT A\nREPLICATION OF DATA.\n\nEssentially the remote server just wants to know when some record arrives at\nthe source server and wants to know some of the info contained in the new\nrecord.\n\nAnd yes it may be that I know little about the myriad of problems involved\nwith replication...but I do know how to carry on a civil, adult\nconversation....maybe we can have a knowledge exchange.\n\nCheers\n\nMike\n\n\n\nOn 7/18/07, Tom Lane <[email protected]> wrote:\n>\n> \"Michael Dengler\" <[email protected]> writes:\n> > I am trying to find out how to use a trigger function on a table to copy\n> any\n> > inserted row to a remote PG server.\n> > ...\n> > This is not replication, I'm not interested in a full blown trigger\n> based\n> > replication solution.\n>\n> To be blunt, you're nuts. You *are* building a trigger based\n> replication system, and the fact that you think you can cut corners\n> just shows how little you know about the problems involved.\n\n\n\n\nUse Slony, or some other solution that someone else has already gotten\n> the bugs out of.\n>\n> regards, tom lane\n>\n\nHmm..I was hoping to avoid personal insults....Anyway, Nuts or not...what I am attempting is to simply have row from one table inserted into another servers DB I don't see it as replication because:a) The destination table will have a trigger that modifies the arriving data to fit its table scheme.\nb) It is not critical that the data be synchronous (ie a lost row on the destination DB is not a big deal)c) I see as more of a provision of data to the destination DB NOT A REPLICATION OF DATA.Essentially the remote server just wants to know when some record arrives at the source server and wants to know some of the info contained in the new record.\nAnd yes it may be that I know little about the myriad of problems involved with replication...but I do know how to carry on a civil, adult conversation....maybe we can have a knowledge exchange.Cheers\nMikeOn 7/18/07, Tom Lane <[email protected]> wrote:\n\"Michael Dengler\" <[email protected]> writes:> I am trying to find out how to use a trigger function on a table to copy any> inserted row to a remote PG server.\n> ...> This is not replication, I'm not interested in a full blown trigger based> replication solution.To be blunt, you're nuts.  You *are* building a trigger basedreplication system, and the fact that you think you can cut corners\njust shows how little you know about the problems involved. 
\nUse Slony, or some other solution that someone else has already gottenthe bugs out of.                        regards, tom lane", "msg_date": "Wed, 18 Jul 2007 12:30:56 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to use a trigger to write rows to a remote server" }, { "msg_contents": "\nOn Jul 18, 2007, at 11:30 AM, Michael Dengler wrote:\n\n> Hmm..I was hoping to avoid personal insults....\n>\n> Anyway, Nuts or not...what I am attempting is to simply have row \n> from one table inserted into another servers DB I don't see it as \n> replication because:\n>\n> a) The destination table will have a trigger that modifies the \n> arriving data to fit its table scheme.\n> b) It is not critical that the data be synchronous (ie a lost row \n> on the destination DB is not a big deal)\n> c) I see as more of a provision of data to the destination DB NOT A \n> REPLICATION OF DATA.\n>\n> Essentially the remote server just wants to know when some record \n> arrives at the source server and wants to know some of the info \n> contained in the new record.\n>\n> And yes it may be that I know little about the myriad of problems \n> involved with replication...but I do know how to carry on a civil, \n> adult conversation....maybe we can have a knowledge exchange.\n>\n> Cheers\n>\n> Mike\n\nMike,\n\nIf all you need is for your trigger to make a simple query on another \ndb then you can use dblink or an untrusted version of one of the \navailable procedural languages such as plperlu or plpythonu.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Wed, 18 Jul 2007 12:06:34 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server" }, { "msg_contents": "On 7/18/07, Michael Dengler <[email protected]> wrote:\n> Hmm..I was hoping to avoid personal insults....\n>\n> Anyway, Nuts or not...what I am attempting is to simply have row from one\n> table inserted into another servers DB I don't see it as replication\n> because:\n\nI think you took Tom's comments the wrong way. He is suggesting you\nare nuts to attempt trigger based data transfer to remote server when\nthere are two clearly better options, slony and dblink. You took as a\npersonal insult which was just some frank (and frankly good) advice...\n\nSlony is in fact the _solution_ to the problem of transferring data\nbetween servers with triggers. If your tables are well designed and\nyou are reasonably proficient with stored procedures, and you\nrequirements of transfer are very specific and not extremely time\nsensitive, a poll based system over dblink is also a good solution.\n\n> a) The destination table will have a trigger that modifies the arriving data\n> to fit its table scheme.\n> b) It is not critical that the data be synchronous (ie a lost row on the\n> destination DB is not a big deal)\n> c) I see as more of a provision of data to the destination DB NOT A\n> REPLICATION OF DATA.\n\nbased on this you may want to rig dblink/poll. 
3rd option is pitr\nshipping to warm standby, depending on your requirements.\n\n> Essentially the remote server just wants to know when some record arrives at\n> the source server and wants to know some of the info contained in the new\n> record.\n>\n> And yes it may be that I know little about the myriad of problems involved\n> with replication...but I do know how to carry on a civil, adult\n> conversation....maybe we can have a knowledge exchange.\n\nnow that's a bit dramatic :-)\n\nmerlin\n", "msg_date": "Fri, 27 Jul 2007 14:27:00 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use a trigger to write rows to a remote server" } ]
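Erik Jones and Merlin Moncure both point at dblink for this kind of one-way push. A minimal sketch of what such a trigger could look like follows; the connection string, table names, and column names are placeholders invented for illustration (nothing here comes from the thread), and error handling is omitted.

    -- Hypothetical local table "local_events" pushing new rows to a
    -- hypothetical remote table "remote_events" via contrib/dblink.
    CREATE OR REPLACE FUNCTION push_row_to_remote() RETURNS trigger AS $$
    BEGIN
        -- dblink_exec(connstr, sql) opens a connection, runs the statement,
        -- and raises an error if the remote insert fails.
        PERFORM dblink_exec(
            'host=remotehost dbname=otherdb user=app password=secret',
            'INSERT INTO remote_events (src_id, payload) VALUES ('
                || quote_literal(NEW.id::text) || ', '
                || quote_literal(NEW.payload) || ')');
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER push_row AFTER INSERT ON local_events
        FOR EACH ROW EXECUTE PROCEDURE push_row_to_remote();

Note that this is synchronous: if the remote server is unreachable, the local insert fails too, which is exactly why the thread suggests a poll-based dblink job or Slony when that coupling is unacceptable.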
[ { "msg_contents": "Hi,\n\nIf I have a query such as:\n\nSELECT * FROM (SELECT * FROM A) UNION ALL (SELECT * FROM B) WHERE\nblah='food';\n\nAssuming the table A and B both have the same attributes and the data\nbetween the table is not partitioned in any special way, does Postgresql\nexecute WHERE blah=\"food\" on both table simultaiously or what? If not, is\nthere a way to execute the query on both in parrallel then aggregate the\nresults?\n\nTo give some context, I have a very large amount of new data being loaded\neach week. Currently I am partitioning the data into a new table every\nmonth which is working great from a indexing standpoint. But I want to\nparrallelize searches if possible to reduce the perofrmance loss of having\nmultiple tables.\n\nBenjamin\n\n", "msg_date": "Wed, 18 Jul 2007 09:14:35 -0700 (PDT)", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Parrallel query execution for UNION ALL Queries" }, { "msg_contents": "On 7/18/07, Benjamin Arai <[email protected]> wrote:\n> But I want to parrallelize searches if possible to reduce\n> the perofrmance loss of having multiple tables.\n\nPostgreSQL does not support parallel query. Parallel query on top of\nPostgreSQL is provided by ExtenDB and PGPool-II.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 18 Jul 2007 12:21:49 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Parrallel query execution for UNION ALL Queries" }, { "msg_contents": "On 7/18/07, Benjamin Arai <[email protected]> wrote:\n> Hi,\n>\n> If I have a query such as:\n>\n> SELECT * FROM (SELECT * FROM A) UNION ALL (SELECT * FROM B) WHERE\n> blah='food';\n>\n> Assuming the table A and B both have the same attributes and the data\n> between the table is not partitioned in any special way, does Postgresql\n> execute WHERE blah=\"food\" on both table simultaiously or what? If not, is\n> there a way to execute the query on both in parrallel then aggregate the\n> results?\n>\n> To give some context, I have a very large amount of new data being loaded\n> each week. Currently I am partitioning the data into a new table every\n> month which is working great from a indexing standpoint. 
But I want to\n> parrallelize searches if possible to reduce the perofrmance loss of having\n> multiple tables.\n\nMost of the time, the real issue would be the I/O throughput for such\nqueries, not the CPU capability.\n\nIf you have only one disk for your data storage, you're likely to get\nWORSE performance if you have two queries running at once, since the\nheads would not be going back and forth from one data set to the\nother.\n\nEnterpriseDB, a commercially enhanced version of PostgreSQL can do\nquery parallelization, but it comes at a cost, and that cost is making\nsure you have enough spindles / I/O bandwidth that you won't be\nactually slowing your system down.\n", "msg_date": "Wed, 18 Jul 2007 11:30:48 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Parrallel query execution for UNION ALL Queries" }, { "msg_contents": "On Wed, Jul 18, 2007 at 11:30:48AM -0500, Scott Marlowe wrote:\n> EnterpriseDB, a commercially enhanced version of PostgreSQL can do\n> query parallelization, but it comes at a cost, and that cost is making\n> sure you have enough spindles / I/O bandwidth that you won't be\n> actually slowing your system down.\n\nI think you're thinking ExtendDB. :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Wed, 18 Jul 2007 13:50:07 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Parrallel query execution for UNION ALL Queries" }, { "msg_contents": "Hi,\n\nLe mercredi 18 juillet 2007, Jonah H. Harris a écrit :\n> On 7/18/07, Benjamin Arai <[email protected]> wrote:\n> > But I want to parrallelize searches if possible to reduce\n> > the perofrmance loss of having multiple tables.\n>\n> PostgreSQL does not support parallel query. Parallel query on top of\n> PostgreSQL is provided by ExtenDB and PGPool-II.\n\nSeems to me that : \n - GreenPlum provides some commercial parallel query engine on top of\n PostgreSQL,\n \n - plproxy could be a solution to the given problem.\n https://developer.skype.com/SkypeGarage/DbProjects/PlProxy\n\nHope this helps,\n-- \ndim", "msg_date": "Thu, 19 Jul 2007 12:14:29 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Parrallel query execution for UNION ALL Queries" }, { "msg_contents": "Dimitri,\n\n> Seems to me that : \n> - GreenPlum provides some commercial parallel query engine on top of\n> PostgreSQL,\n\nI certainly think so and so do our customers in production with 100s of\nterabytes :-)\n \n> - plproxy could be a solution to the given problem.\n> https://developer.skype.com/SkypeGarage/DbProjects/PlProxy\n\nThis is solving real world problems at Skype of a different kind than\nGreenplum, well worth checking out.\n\n- Luke\n\n", "msg_date": "Thu, 19 Jul 2007 10:30:43 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Parrallel query execution for UNION ALL\n Queries" }, { "msg_contents": "On Jul 18, 11:50 am, [email protected] (\"Jim C. Nasby\") wrote:\n> On Wed, Jul 18, 2007 at 11:30:48AM -0500, Scott Marlowe wrote:\n> > EnterpriseDB, a commercially enhanced version of PostgreSQL can do\n> > query parallelization, but it comes at a cost, and that cost is making\n> > sure you have enough spindles / I/O bandwidth that you won't be\n> > actually slowing your system down.\n>\n> I think you're thinking ExtendDB. 
:)\n\nWell, now they are one and the same - seems that EnterpriseDB bought\nExtenDB and are calling it GridSQL.\n\nNow that it's a commercial endeavor competing with Greenplum, Netezza\nand Teradata I'd be very interested in some real world examples of\nExtenDB/GridSQL.\n\n- Luke\n\n", "msg_date": "Thu, 09 Aug 2007 20:41:29 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Parrallel query execution for UNION ALL Queries" } ]
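On the original question of how the WHERE clause interacts with the UNION ALL: the subselect form in the first message is not quite valid as written (a subquery in FROM needs an alias), but with a plain UNION ALL view the planner should normally push the condition down into each branch, so each partition's index can be used -- the branches are simply scanned one after another rather than in parallel, which is the gap ExtenDB/GridSQL, PGPool-II, plproxy and Greenplum are meant to fill. A sketch with invented table names:

    -- Hypothetical monthly partitions exposed through a UNION ALL view.
    CREATE VIEW archive_all AS
        SELECT * FROM archive_2007_06
        UNION ALL
        SELECT * FROM archive_2007_07;

    -- The blah = 'food' condition is applied inside each branch, so an
    -- index on blah in each partition can be used; execution is still serial.
    SELECT * FROM archive_all WHERE blah = 'food';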
[ { "msg_contents": "We're using Postgres 8.2.4.\n\nI'm trying to decide whether it's worthwhile to implement a process that\ndoes periodic reindexing. In a few ad hoc tests, where I've tried to set up\ndata similar to how our application does it, I've noticed decent performance\nincreases after doing a reindex as well as the planner being more likely to\nchoose an index scan.\n\nSome background: we make extensive use of partitioned tables. In fact, I'm\nreally only considering reindexing partitions that have \"just closed\". In\nour simplest/most general case, we have a table partitioned by a timestamp\ncolumn, each partition 24 hours wide. The partition will have an index on\nthe timestamp column as well as a few other indexes including a primary key\nindex (all b-tree). Is there a programmatic way I can decide, upon the\n\"closing\" of a partition, which, if any, of these indexes will benefit from\na reindex? Can I determine things like average node density, node depth, or\nany other indication as to the quality of an index? Will pg_class.relpages\nbe any help here?\n\nIs it a simple matter of running some queries, reindexing the table, then\nrunning the queries again to determine overall performance change? If so,\nwhat queries would exercise this best?\n\nJust trying to determine if the extra cost of reindexing newly closed\npartitions will be worth the performance benefit of querying the data.\nReindexing a table with a day's worth of data is taking on the order of a\nfew hours (10s of millions of rows).\n\nThe docs say that:\n\n\"...for B-tree indexes a freshly-constructed index is somewhat faster to\naccess than one that has been updated many times, because logically adjacent\npages are usually also physically adjacent in a newly built index... It\nmight be worthwhile to reindex periodically just to improve access speed.\"\n\nThanks,\nSteve\n\nWe're using Postgres 8.2.4.\n \nI'm trying to decide whether it's worthwhile to implement a process that does periodic reindexing.  In a few ad hoc tests, where I've tried to set up data similar to how our application does it, I've noticed decent performance increases after doing a reindex as well as the planner being more likely to choose an index scan.\n\n \nSome background: we make extensive use of partitioned tables.  In fact, I'm really only considering reindexing partitions that have \"just closed\".  In our simplest/most general case, we have a table partitioned by a timestamp column, each partition 24 hours wide.  The partition will have an index on the timestamp column as well as a few other indexes including a primary key index (all b-tree).  Is there a programmatic way I can decide, upon the \"closing\" of a partition, which, if any, of these indexes will benefit from a reindex?  Can I determine things like average node density, node depth, or any other indication as to the quality of an index?  Will pg_class.relpages be any help here?\n\n \nIs it a simple matter of running some queries, reindexing the table, then running the queries again to determine overall performance change?  If so, what queries would exercise this best?\n \nJust trying to determine if the extra cost of reindexing newly closed partitions will be worth the performance benefit of querying the data.  
Reindexing a table with a day's worth of data is taking on the order of a few hours (10s of millions of rows).\n\n \nThe docs say that:\n \n\"...for B-tree indexes a freshly-constructed index is somewhat faster to access than one that has been updated many times, because logically adjacent pages are usually also physically adjacent in a newly built index... It might be worthwhile to reindex periodically just to improve access speed.\"\n\n \nThanks,\nSteve", "msg_date": "Wed, 18 Jul 2007 13:08:30 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "When/if to Reindex" }, { "msg_contents": "On Wed, Jul 18, 2007 at 01:08:30PM -0400, Steven Flatt wrote:\n> We're using Postgres 8.2.4.\n> \n> I'm trying to decide whether it's worthwhile to implement a process that\n> does periodic reindexing. In a few ad hoc tests, where I've tried to set up\n> data similar to how our application does it, I've noticed decent performance\n> increases after doing a reindex as well as the planner being more likely to\n> choose an index scan.\n> \n> Some background: we make extensive use of partitioned tables. In fact, I'm\n> really only considering reindexing partitions that have \"just closed\". In\n> our simplest/most general case, we have a table partitioned by a timestamp\n> column, each partition 24 hours wide. The partition will have an index on\n> the timestamp column as well as a few other indexes including a primary key\n> index (all b-tree). Is there a programmatic way I can decide, upon the\n> \"closing\" of a partition, which, if any, of these indexes will benefit from\n> a reindex? Can I determine things like average node density, node depth, or\n> any other indication as to the quality of an index? Will pg_class.relpages\n> be any help here?\n\nLooking at that stuff will help determine if the index is bloated, or if\nit's just bigger than optimal. Once you're done writing to an index, it\nmight be worth reindexing with a fillfactor of 100% to shrink things\ndown a bit.\n\n> Is it a simple matter of running some queries, reindexing the table, then\n> running the queries again to determine overall performance change? If so,\n> what queries would exercise this best?\n> \n> Just trying to determine if the extra cost of reindexing newly closed\n> partitions will be worth the performance benefit of querying the data.\n> Reindexing a table with a day's worth of data is taking on the order of a\n> few hours (10s of millions of rows).\n> \n> The docs say that:\n> \n> \"...for B-tree indexes a freshly-constructed index is somewhat faster to\n> access than one that has been updated many times, because logically adjacent\n> pages are usually also physically adjacent in a newly built index... It\n> might be worthwhile to reindex periodically just to improve access speed.\"\n\nThat's the other consideration, though if you're seeing a big difference\nI suspect it's an issue of indexes fitting in cache or not.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Wed, 18 Jul 2007 13:52:51 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\nOn Jul 18, 2007, at 1:08 PM, Steven Flatt wrote:\n\n> Some background: we make extensive use of partitioned tables. In \n> fact, I'm\n> really only considering reindexing partitions that have \"just \n> closed\". 
In\n> our simplest/most general case, we have a table partitioned by a \n> timestamp\n> column, each partition 24 hours wide. The partition will have an \n> index on\n> the timestamp column as well as a few other indexes including a \n> primary key\n\nIf all you ever did was insert into that table, then you probably \ndon't need to reindex. If you did mass updates/deletes mixed with \nyour inserts, then perhaps you do.\n\nDo some experiments comparing pg_class.relpages for your table and \nits indexes before and after a reindex. Decide if the number of \npages you save on the index is worth the trouble. If it shaves off \njust a handful of pages, I'd vote no...\n", "msg_date": "Wed, 8 Aug 2007 13:42:34 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "On 8/8/07, Vivek Khera <[email protected]> wrote:\n>\n> If all you ever did was insert into that table, then you probably\n> don't need to reindex. If you did mass updates/deletes mixed with\n> your inserts, then perhaps you do.\n>\n> Do some experiments comparing pg_class.relpages for your table and\n> its indexes before and after a reindex. Decide if the number of\n> pages you save on the index is worth the trouble. If it shaves off\n> just a handful of pages, I'd vote no...\n\n\nWhat's interesting is that an insert-only table can benefit significantly\nfrom reindexing after the table is fully loaded. I had done experiments\nexactly as you suggest (looking at pg_class.relpages), and determined that\nreindexing results in about a 30% space savings for all indexes except the\nPK index. The PK index (integer based on a sequence) does not benefit at\nall. By setting fillfactor=100 on the index prior to reindexing, I get\nanother 10% space savings on all the indexes.\n\nNot to mention the general performance improvements when reading from the\ntable...\n\nSo, we decided that reindexing partitions after they're fully loaded *was*\nworth it.\n\nSteve\n\nOn 8/8/07, Vivek Khera <[email protected]> wrote:\nIf all you ever did was insert into that table, then you probablydon't need to reindex.  If you did mass updates/deletes mixed with\nyour inserts, then perhaps you do.Do some experiments comparing pg_class.relpages for your table andits indexes before and after a reindex.  Decide if the number ofpages you save on the index is worth the trouble.  If it shaves off\njust a handful of pages, I'd vote no...\n \nWhat's interesting is that an insert-only table can benefit significantly from reindexing after the table is fully loaded.  I had done experiments exactly as you suggest (looking at pg_class.relpages), and determined that reindexing results in about a 30% space savings for all indexes except the PK index.  The PK index (integer based on a sequence) does not benefit at all.  By setting fillfactor=100 on the index prior to reindexing, I get another 10% space savings on all the indexes.\n\n \nNot to mention the general performance improvements when reading from the table...\n \nSo, we decided that reindexing partitions after they're fully loaded *was* worth it.\n \nSteve", "msg_date": "Wed, 8 Aug 2007 15:12:44 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "In response to \"Steven Flatt\" <[email protected]>:\n\n> On 8/8/07, Vivek Khera <[email protected]> wrote:\n> >\n> > If all you ever did was insert into that table, then you probably\n> > don't need to reindex. 
If you did mass updates/deletes mixed with\n> > your inserts, then perhaps you do.\n> >\n> > Do some experiments comparing pg_class.relpages for your table and\n> > its indexes before and after a reindex. Decide if the number of\n> > pages you save on the index is worth the trouble. If it shaves off\n> > just a handful of pages, I'd vote no...\n> \n> \n> What's interesting is that an insert-only table can benefit significantly\n> from reindexing after the table is fully loaded. I had done experiments\n> exactly as you suggest (looking at pg_class.relpages), and determined that\n> reindexing results in about a 30% space savings for all indexes except the\n> PK index. The PK index (integer based on a sequence) does not benefit at\n> all. By setting fillfactor=100 on the index prior to reindexing, I get\n> another 10% space savings on all the indexes.\n> \n> Not to mention the general performance improvements when reading from the\n> table...\n> \n> So, we decided that reindexing partitions after they're fully loaded *was*\n> worth it.\n\nI've had similar experience. One thing you didn't mention that I've noticed\nis that VACUUM FULL often bloats indexes. I've made it SOP that\nafter application upgrades (which usually includes lots of ALTER TABLES and\nother massive schema and data changes) I VACUUM FULL and REINDEX (in that\norder).\n\nLots of ALTER TABLEs seem to bloat the database size considerably, beyond\nwhat normal VACUUM seems to fix. A FULL seems to fix that, but it appears\nto bloat the indexes, thus a REINDEX helps.\n\nI would expect that setting fillfactor to 100 will encourage indexs to bloat\nfaster, and would only be recommended if you didn't expect the index contents\nto change?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 8 Aug 2007 15:27:57 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "On Wed, Aug 08, 2007 at 03:27:57PM -0400, Bill Moran wrote:\n> I've had similar experience. One thing you didn't mention that I've noticed\n> is that VACUUM FULL often bloats indexes. I've made it SOP that\n> after application upgrades (which usually includes lots of ALTER TABLES and\n> other massive schema and data changes) I VACUUM FULL and REINDEX (in that\n> order).\n\nYou'd be better off with a CLUSTER in that case. It'll be faster, and\nyou'll ensure that the table has optimal ordering.\n\n> Lots of ALTER TABLEs seem to bloat the database size considerably, beyond\n> what normal VACUUM seems to fix. A FULL seems to fix that, but it appears\n> to bloat the indexes, thus a REINDEX helps.\n\nHrm, are you sure that's still true? I just did an ALTER TABLE ... TYPE\nand it created a new file, meaning no bloating.\n\n> I would expect that setting fillfactor to 100 will encourage indexs to bloat\n> faster, and would only be recommended if you didn't expect the index contents\n> to change?\n\nYes.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Wed, 8 Aug 2007 15:35:54 -0500", "msg_from": "Decibel! 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n> In response to \"Steven Flatt\" <[email protected]>:\n>> What's interesting is that an insert-only table can benefit significantly\n>> from reindexing after the table is fully loaded.\n\n> I've had similar experience. One thing you didn't mention that I've noticed\n> is that VACUUM FULL often bloats indexes. I've made it SOP that\n> after application upgrades (which usually includes lots of ALTER TABLES and\n> other massive schema and data changes) I VACUUM FULL and REINDEX (in that\n> order).\n\nActually, if that is your intent then the best plan is: drop indexes,\nVACUUM FULL, create indexes from scratch. A huge proportion of VACUUM\nFULL's time goes into updating the indexes, and that work is basically\nwasted if you are going to reindex afterwards.\n\nCLUSTER is a good substitute for V.F. partly because it doesn't try to\nupdate the indexes incrementally, but just does the equivalent of\nREINDEX after it's reordered the heap.\n\nI'd make the same remark about Steven's case: if possible, don't create\nthe indexes at all until you've loaded the table fully.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Aug 2007 21:51:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex " }, { "msg_contents": "In response to \"Decibel!\" <[email protected]>:\n\n> On Wed, Aug 08, 2007 at 03:27:57PM -0400, Bill Moran wrote:\n> > I've had similar experience. One thing you didn't mention that I've noticed\n> > is that VACUUM FULL often bloats indexes. I've made it SOP that\n> > after application upgrades (which usually includes lots of ALTER TABLES and\n> > other massive schema and data changes) I VACUUM FULL and REINDEX (in that\n> > order).\n> \n> You'd be better off with a CLUSTER in that case. It'll be faster, and\n> you'll ensure that the table has optimal ordering.\n\nPoint taken.\n\n> > Lots of ALTER TABLEs seem to bloat the database size considerably, beyond\n> > what normal VACUUM seems to fix. A FULL seems to fix that, but it appears\n> > to bloat the indexes, thus a REINDEX helps.\n> \n> Hrm, are you sure that's still true? I just did an ALTER TABLE ... TYPE\n> and it created a new file, meaning no bloating.\n\nNo, I'm not. This isn't something I've analyzed or investigated in detail.\nDuring upgrades, a lot happens: ATLER TABLES, tables are dropped, new tables\nare created, massive amounts of data may be altered in a short period, stored\nprocedures are replaced, etc, etc.\n\nI don't remember what led me to believe that the ALTER TABLES were causing the\nworst of the problem, but it's entirely possible that I was off-base. 
(I seem\nto remember being concerned about too many DROP COLUMN and ADD COLUMNs) In any\nevent, my original statement (that it's a good idea to REINDEX after VACUUM\nFULL) still seems to be correct.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 9 Aug 2007 09:04:11 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "On 8/8/07, Tom Lane <[email protected]> wrote:\n>\n> I'd make the same remark about Steven's case: if possible, don't create\n> the indexes at all until you've loaded the table fully.\n\n\nWe considered this, however in some of our 12-hour partitions, there are\nupwards of 50 or 60 million rows near the end of the 12 hours so read\nperformance gets bad on the current partition very quickly if there are no\nindexes.\n\nIt makes more sense for us to have ~1 hour's worth of reindexing afterwards\nduring which read performance on that partition is \"compromised\".\n\nSteve\n\nOn 8/8/07, Tom Lane <[email protected]> wrote:\nI'd make the same remark about Steven's case: if possible, don't createthe indexes at all until you've loaded the table fully.\n\n \nWe considered this, however in some of our 12-hour partitions, there are upwards of 50 or 60 million rows near the end of the 12 hours so read performance gets bad on the current partition very quickly if there are no indexes.\n\n \nIt makes more sense for us to have ~1 hour's worth of reindexing afterwards during which read performance on that partition is \"compromised\".\n \nSteve", "msg_date": "Thu, 9 Aug 2007 09:51:42 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": ">\n> It makes more sense for us to have ~1 hour's worth of reindexing\n> afterwards during which read performance on that partition is \"compromised\".\n>\n\nSo, based on the docs, I was expecting read performance to be compromised\nduring a reindex, specifically reads would not be allowed to use the index:\n\n\"REINDEX locks out writes but not reads of the index's parent table. It also\ntakes an exclusive lock on the specific index being processed, which will\nblock reads that attempt to use that index.\"\n\nHowever I'm seeing that all readers of that table are blocked until the\nreindex finishes, even reads that do not attempt to use the index. Is this\na problem with the docs or a bug?\n\nI'm considering creating a new index with the same definition as the first\n(different name), so while that index is being created, read access to the\ntable, and the original index, is not blocked. When the new index is\ncreated, drop the original index and rename the new index to the original,\nand we've essentially accomplished the same thing. In fact, why isn't\nreindex doing this sort of thing in the background anways?\n\nThanks,\nSteve\n\n\n\n\nIt makes more sense for us to have ~1 hour's worth of reindexing afterwards during which read performance on that partition is \"compromised\".\n \nSo, based on the docs, I was expecting read performance to be compromised during a reindex, specifically reads would not be allowed to use the index:\n \n\"REINDEX locks out writes but not reads of the index's parent table. 
It also takes an exclusive lock on the specific index being processed, which will block reads that attempt to use that index.\"\nHowever I'm seeing that all readers of that table are blocked until the reindex finishes, even reads that do not attempt to use the index.  Is this a problem with the docs or a bug?\n \nI'm considering creating a new index with the same definition as the first (different name), so while that index is being created, read access to the table, and the original index, is not blocked.  When the new index is created, drop the original index and rename the new index to the original, and we've essentially accomplished the same thing.  In fact, why isn't reindex doing this sort of thing in the background anways?\n\n \nThanks,\nSteve", "msg_date": "Wed, 22 Aug 2007 12:55:28 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n\n> However I'm seeing that all readers of that table are blocked until the\n> reindex finishes, even reads that do not attempt to use the index. Is this\n> a problem with the docs or a bug?\n\nYou'll have to describe in more detail what you're doing so we can see what's\ncausing it to not work for you because \"works for me\":\n\npostgres=# create table test (i integer);\nCREATE TABLE\npostgres=# insert into test select generate_series(1,1000);\nINSERT 0 1000\npostgres=# create or replace function slow(integer) returns integer as 'begin perform pg_sleep(0); return $1; end' language plpgsql immutable strict;\nCREATE FUNCTION\npostgres=# create index slowi on test (slow(i));\nCREATE INDEX\npostgres=# create or replace function slow(integer) returns integer as 'begin perform pg_sleep(1); return $1; end' language plpgsql immutable strict;\nCREATE FUNCTION\npostgres=# reindex index slowi;\n\nWhile that's running I ran:\n\npostgres=# select count(*) from test;\n count \n-------\n 1000\n(1 row)\n\n\n> I'm considering creating a new index with the same definition as the first\n> (different name), so while that index is being created, read access to the\n> table, and the original index, is not blocked. When the new index is\n> created, drop the original index and rename the new index to the original,\n> and we've essentially accomplished the same thing. In fact, why isn't\n> reindex doing this sort of thing in the background anways?\n\nIt is but one level lower down. But the locks which block people from using\nthe index must be at this level. Consider for example that one of the\noperations someone might be doing is creating a foreign key which depends on\nthis index. If we created a new index and then tried to drop this one the drop\nwould fail because of the foreign key which needs it. 
It's possible these\nproblems could all be worked out but it would still take quite a bit of work\nto do so.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 23 Aug 2007 00:50:53 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "On 8/22/07, Gregory Stark <[email protected]> wrote:\n\n> postgres=# create table test (i integer);\n> CREATE TABLE\n> postgres=# insert into test select generate_series(1,1000);\n> INSERT 0 1000\n> postgres=# create or replace function slow(integer) returns integer as\n> 'begin perform pg_sleep(0); return $1; end' language plpgsql immutable\n> strict;\n> CREATE FUNCTION\n> postgres=# create index slowi on test (slow(i));\n> CREATE INDEX\n> postgres=# create or replace function slow(integer) returns integer as\n> 'begin perform pg_sleep(1); return $1; end' language plpgsql immutable\n> strict;\n> CREATE FUNCTION\n> postgres=# reindex index slowi;\n>\n> While that's running I ran:\n>\n> postgres=# select count(*) from test;\n> count\n> -------\n> 1000\n> (1 row)\n\n\nInterestingly enough, the example you've given does not work for me either.\nThe select count(*) from test blocks until the reindex completes. Are we\nusing the same pg version?\n\n# select version();\n\n version\n\n--------------------------------------------------------------------------------\n----------------\n PostgreSQL 8.2.4 on i386-portbld-freebsd6.1, compiled by GCC cc (GCC)\n3.4.4[FreeBSD] 20050518\n(1 row)\nLooking at the pg_locks table, I see:\n\n\n# select locktype,relation,mode,granted from pg_locks where not granted;\n locktype | relation | mode | granted\n----------+----------+-----------------+---------\n relation | 69293 | AccessShareLock | f\n(1 row)\n\n# select relname from pg_class where oid = 69293;\n relname\n---------\n slowi\n(1 row)\n\n# select locktype,relation,mode,granted from pg_locks where relation =\n69293;\n locktype | relation | mode | granted\n----------+----------+---------------------+---------\n relation | 69293 | AccessShareLock | f\n relation | 69293 | AccessExclusiveLock | t\n(2 rows)\nSo the reindex statement has an AccessExclusiveLock on the index, which\nseems right, and this blocks the select count(*) from getting an\nAccessShareLock on the index. Why does the select count(*) need a lock on\nthe index? Is there some Postgres setting that could cause this behaviour?\nI can't even do an \"explain select count(*) from test\" without blocking.\n\nAny ideas?\n\nSteve\n\nOn 8/22/07, Gregory Stark <[email protected]> wrote:\npostgres=# create table test (i integer);CREATE TABLEpostgres=# insert into test select generate_series(1,1000);\nINSERT 0 1000postgres=# create or replace function slow(integer) returns integer as 'begin perform pg_sleep(0); return $1; end' language plpgsql immutable strict;CREATE FUNCTIONpostgres=# create index slowi on test (slow(i));\nCREATE INDEXpostgres=# create or replace function slow(integer) returns integer as 'begin perform pg_sleep(1); return $1; end' language plpgsql immutable strict;CREATE FUNCTIONpostgres=# reindex index slowi;\nWhile that's running I ran:postgres=# select count(*) from test;count-------1000(1 row)\n \nInterestingly enough, the example you've given does not work for me either.  The select count(*) from test blocks until the reindex completes.  
Are we using the same pg version?\n \n# select version();\n\n                                            version\n------------------------------------------------------------------------------------------------ PostgreSQL 8.2.4 on i386-portbld-freebsd6.1, compiled by GCC cc (GCC) 3.4.4 [FreeBSD] 20050518(1 row)\nLooking at the pg_locks table, I see:\n \n# select locktype,relation,mode,granted from pg_locks where not granted; locktype | relation |      mode       | granted----------+----------+-----------------+--------- relation |    69293 | AccessShareLock | f\n(1 row)\n# select relname from pg_class where oid = 69293; relname--------- slowi(1 row)\n# select locktype,relation,mode,granted from pg_locks where relation = 69293; locktype | relation |        mode         | granted----------+----------+---------------------+--------- relation |    69293 | AccessShareLock     | f\n relation |    69293 | AccessExclusiveLock | t(2 rows)\nSo the reindex statement has an AccessExclusiveLock on the index, which seems right, and this blocks the select count(*) from getting an AccessShareLock on the index.  Why does the select count(*) need a lock on the index?  Is there some Postgres setting that could cause this behaviour?  I can't even do an \"explain select count(*) from test\" without blocking.\n\n \nAny ideas?\n \nSteve", "msg_date": "Thu, 23 Aug 2007 18:25:25 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Interestingly enough, the example you've given does not work for me either.\n> The select count(*) from test blocks until the reindex completes. Are we\n> using the same pg version?\n\nSeems like a fair question, because Greg's example blocks for me too,\nin plancat.c where the planner is trying to acquire information on each\nindex. This seems to be an unwanted side effect of this 8.2-era patch\nhttp://archives.postgresql.org/pgsql-committers/2006-07/msg00356.php\nspecifically, note here\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/util/plancat.c.diff?r1=1.121;r2=1.122;f=h\nhow the new planner coding takes at least AccessShareLock on each index,\nwhere the old coding took no lock at all.\n\nI think that the new coding rule of \"you *must* take some lock when\nopening the relation\" is essential for tables, but it might not be\nnecessary for indexes if you've got a lock on the parent table.\nWe don't allow any schema changes on an index to be made without holding\nexclusive lock on the parent, so plancat.c's basic purpose of finding\nout the properties of the index could be done safely without any index\nlock.\n\nThe fly in the ointment is that after collecting the pg_index definition\nof the index, plancat.c also wants to know how big it is --- it calls\nRelationGetNumberOfBlocks. And that absolutely does look at the\nphysical storage, which means it absolutely is unsafe to do in parallel\nwith a REINDEX that will be dropping the old physical storage at some\npoint.\n\nSo maybe we are stuck and we have to say \"that doesn't work anymore\".\nBut it feels like we might not be too far away from letting it still\nwork. 
Thoughts, ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2007 23:59:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex " }, { "msg_contents": "Tom Lane wrote:\n>\n> The fly in the ointment is that after collecting the pg_index definition\n> of the index, plancat.c also wants to know how big it is --- it calls\n> RelationGetNumberOfBlocks. And that absolutely does look at the\n> physical storage, which means it absolutely is unsafe to do in parallel\n> with a REINDEX that will be dropping the old physical storage at some\n> point.\n>\n> So maybe we are stuck and we have to say \"that doesn't work anymore\".\n> But it feels like we might not be too far away from letting it still\n> work. Thoughts, ideas?\n> \n\nA suggestion that seems a bit like a leap backwards in time - maybe just \nuse the pg_class.relpages entry for the index size?\n\nI'm punting that with autovacuum being enabled by default now, the \nrelpages entries for all relations will be more representative than they \nused to in previous releases.\n\nCheers\n\nMark\n\n", "msg_date": "Thu, 23 Aug 2007 22:11:31 -0700", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n\n> On 8/22/07, Gregory Stark <[email protected]> wrote:\n>\n> Interestingly enough, the example you've given does not work for me either.\n> The select count(*) from test blocks until the reindex completes. Are we\n> using the same pg version?\n\nI was using CVS head but given Tom's explanation I wouldn't expect to see any\ndifferent behaviour here.\n\nI just retried it and it did block. I can't think of anything I could have\ndone wrong last time to make it appear not to block. If I had missed an error\nat some point along the way I would have expected the reindex to complete\nquickly or fail or something but it was definitely just blocked. I remember\nnoting (much) later that it had finished.\n\nStrange.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 24 Aug 2007 07:49:16 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "On 8/24/07, Mark Kirkwood <[email protected]> wrote:\n>\n> Tom Lane wrote:\n> >\n> > The fly in the ointment is that after collecting the pg_index definition\n> > of the index, plancat.c also wants to know how big it is --- it calls\n> > RelationGetNumberOfBlocks. And that absolutely does look at the\n> > physical storage, which means it absolutely is unsafe to do in parallel\n> > with a REINDEX that will be dropping the old physical storage at some\n> > point.\n>\n> A suggestion that seems a bit like a leap backwards in time - maybe just\n> use the pg_class.relpages entry for the index size?\n\n\nJust throwing this out there (looking from a higher level)...\n\nWhy do we even need to consider calling RelationGetNumberOfBlocks or looking\nat the pg_class.relpages entry? My understanding of the expected behaviour\nis that while a reindex is happening, all queries run against the parent\ntable are planned as though the index isn't there (i.e. it's unusable).\nThis may/will result in sub-optimal query plans, but the point is that\nreindex never blocks readers. 
Not sure if from an implementation standpoint\nit's easy to mark an index as \"being reindexed\" in which case the planner\nshould just skip it.\n\nSteve\n\nOn 8/24/07, Mark Kirkwood <[email protected]> wrote:\nTom Lane wrote:>> The fly in the ointment is that after collecting the pg_index definition\n> of the index, plancat.c also wants to know how big it is --- it calls> RelationGetNumberOfBlocks.  And that absolutely does look at the> physical storage, which means it absolutely is unsafe to do in parallel\n> with a REINDEX that will be dropping the old physical storage at some> point.A suggestion that seems a bit like a leap backwards in time - maybe justuse the pg_class.relpages entry for the index size?\n\n \nJust throwing this out there (looking from a higher level)...\n \nWhy do we even need to consider calling RelationGetNumberOfBlocks or looking at the pg_class.relpages entry?  My understanding of the expected behaviour is that while a reindex is happening, all queries run against the parent table are planned as though the index isn't there (\ni.e. it's unusable).  This may/will result in sub-optimal query plans, but the point is that reindex never blocks readers.  Not sure if from an implementation standpoint it's easy to mark an index as \"being reindexed\" in which case the planner should just skip it.\n\n \nSteve", "msg_date": "Fri, 24 Aug 2007 10:22:38 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n>> The fly in the ointment is that after collecting the pg_index definition\n>> of the index, plancat.c also wants to know how big it is --- it calls\n>> RelationGetNumberOfBlocks.\n\n> Why do we even need to consider calling RelationGetNumberOfBlocks or looking\n> at the pg_class.relpages entry? My understanding of the expected behaviour\n> is that while a reindex is happening, all queries run against the parent\n> table are planned as though the index isn't there (i.e. it's unusable).\n\nWhere in the world did you get that idea?\n\nIf we had a REINDEX CONCURRENTLY it might work that way. A normal\nREINDEX cannot \"mark\" anything because it runs within a single\ntransaction; there is no way that it can emit any catalog changes\nthat will be visible before it's over.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2007 12:15:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex " }, { "msg_contents": "On 8/24/07, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > Why do we even need to consider calling RelationGetNumberOfBlocks or\n> looking\n> > at the pg_class.relpages entry? My understanding of the expected\n> behaviour\n> > is that while a reindex is happening, all queries run against the parent\n> > table are planned as though the index isn't there (i.e. it's unusable).\n>\n> Where in the world did you get that idea?\n\n\nMaybe that's what I was *hoping* the behaviour would be. :)\n\n From the docs:\n\"REINDEX locks out writes but not reads of the index's parent table.\"\n\"It also takes an exclusive lock on the specific index being processed...\"\n\nI believe those two statements imply that reads of the parent table don't\ntake any lock whatsoever on the index being processed, i.e. they ignore it.\n\nIf we had a REINDEX CONCURRENTLY it might work that way. 
A normal\n> REINDEX cannot \"mark\" anything because it runs within a single\n> transaction; there is no way that it can emit any catalog changes\n> that will be visible before it's over.\n>\n... but I understand this difficulty.\n\nSo, can we simply trust what's in pg_class.relpages and ignore looking\ndirectly at the index? This is a fairly serious concern for us, that\nreindex is blocking all readers of the parent table.\n\nThanks,\nSteve\n\nOn 8/24/07, Tom Lane <[email protected]> wrote:\n\"Steven Flatt\" <[email protected]> writes:\n> Why do we even need to consider calling RelationGetNumberOfBlocks or looking> at the pg_class.relpages entry?  My understanding of the expected behaviour> is that while a reindex is happening, all queries run against the parent\n> table are planned as though the index isn't there (i.e. it's unusable).Where in the world did you get that idea?\n \nMaybe that's what I was *hoping* the behaviour would be. :)\n \nFrom the docs:\n\"REINDEX locks out writes but not reads of the index's parent table.\"\n\"It also takes an exclusive lock on the specific index being processed...\"\n \nI believe those two statements imply that reads of the parent table don't take any lock whatsoever on the index being processed, i.e. they ignore it.\nIf we had a REINDEX CONCURRENTLY it might work that way.  A normalREINDEX cannot \"mark\" anything because it runs within a single\ntransaction; there is no way that it can emit any catalog changesthat will be visible before it's over.\n... but I understand this difficulty.\n \nSo, can we simply trust what's in pg_class.relpages and ignore looking directly at the index?  This is a fairly serious concern for us, that reindex is blocking all readers of the parent table.\n \nThanks,\nSteve", "msg_date": "Fri, 24 Aug 2007 12:56:20 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> So, can we simply trust what's in pg_class.relpages and ignore looking\n> directly at the index?\n\nNo, we can't. In the light of morning I remember more about the reason\nfor the aforesaid patch: it's actually unsafe to read the pg_class row\nat all if you have not got lock on the index. We are reading with\nSnapshotNow in order to be sure we see up-to-date info, and that means\nthat a concurrent update of the row (eg, for REINDEX to report the new\nrelfilenode) can have the following behavior:\n\n1. REINDEX inserts the new modified version of the index's pg_class row.\n\n2. Would-be reader process visits the new version of the pg_class row.\n It's not committed yet, so we ignore it and continue scanning.\n\n3. REINDEX commits.\n\n4. Reader process visits the old version of the pg_class row. It's\n now committed dead, so we ignore it and continue scanning.\n\n5. 
Reader process bombs out with a complaint about no pg_class row for\n the index.\n\nSo we really have to have the lock.\n\n> This is a fairly serious concern for us, that\n> reindex is blocking all readers of the parent table.\n\nI'm afraid you're kinda stuck: I don't see any fix that would be\npractical to put into 8.2, or even 8.3 considering that it's way too\nlate to be thinking of implementing REINDEX CONCURRENTLY for 8.3.\n\nYou might be able to work around it for now by faking such a reindex\n\"by hand\"; that is, create a duplicate new index under a different\nname using CREATE INDEX CONCURRENTLY, then exclusive-lock the table\nfor just long enough to drop the old index and rename the new one\nto match.\n\nIt's probably worth asking also how badly you really need routine\nreindexing. Are you certain your app still needs that with 8.2,\nor is it a hangover from a few releases back? Could more aggressive\n(auto)vacuuming provide a better solution?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2007 13:28:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex " }, { "msg_contents": "On 8/24/07, Tom Lane <[email protected]> wrote:\n\n> You might be able to work around it for now by faking such a reindex\n> \"by hand\"; that is, create a duplicate new index under a different\n> name using CREATE INDEX CONCURRENTLY, then exclusive-lock the table\n> for just long enough to drop the old index and rename the new one\n> to match.\n\n\nThis is a good suggestion, one that we had thought of earlier. Looks like\nit might be time to try it out and observe system impact.\n\n\n\n\n> It's probably worth asking also how badly you really need routine\n> reindexing. Are you certain your app still needs that with 8.2,\n> or is it a hangover from a few releases back? Could more aggressive\n> (auto)vacuuming provide a better solution?\n\n\nRoutine reindexing was added (recently, since moving to 8.2) as more of an\noptimization than a necessity. If the idea above doesn't work for us or\ncauses locking issues, then we could always do away with the periodic\nreindexing. That would be unfortunate, because reindexing serves to be\nquite a nice optimization for us. We've observed up to 40% space savings\n(after setting the fillfactor to 100, then reindexing) along with general\nimprovement in read performance (although hard to quantify).\n\nAs mentioned earlier in this thread, we're only reindexing insert-only\npartitioned tables, once they're fully loaded.\n\nThanks for your help.\n\nSteve\n\nOn 8/24/07, Tom Lane <[email protected]> wrote:\nYou might be able to work around it for now by faking such a reindex\"by hand\"; that is, create a duplicate new index under a different\nname using CREATE INDEX CONCURRENTLY, then exclusive-lock the tablefor just long enough to drop the old index and rename the new oneto match.\n \nThis is a good suggestion, one that we had thought of earlier.  Looks like it might be time to try it out and observe system impact.\n \n \nIt's probably worth asking also how badly you really need routinereindexing.  Are you certain your app still needs that with \n8.2,or is it a hangover from a few releases back?  Could more aggressive(auto)vacuuming provide a better solution?\n \nRoutine reindexing was added (recently, since moving to 8.2) as more of an optimization than a necessity.  If the idea above doesn't work for us or causes locking issues, then we could always do away with the periodic reindexing.  
That would be unfortunate, because reindexing serves to be quite a nice optimization for us.  We've observed up to 40% space savings (after setting the fillfactor to 100, then reindexing) along with general improvement in read performance (although hard to quantify).\n\n \nAs mentioned earlier in this thread, we're only reindexing insert-only partitioned tables, once they're fully loaded.\n \nThanks for your help.\n \nSteve", "msg_date": "Fri, 24 Aug 2007 13:49:01 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> writes:\n\n> \"Steven Flatt\" <[email protected]> writes:\n>> So, can we simply trust what's in pg_class.relpages and ignore looking\n>> directly at the index?\n>\n> No, we can't. In the light of morning I remember more about the reason\n> for the aforesaid patch: it's actually unsafe to read the pg_class row\n> at all if you have not got lock on the index. We are reading with\n> SnapshotNow in order to be sure we see up-to-date info, and that means\n> that a concurrent update of the row (eg, for REINDEX to report the new\n> relfilenode) can have the following behavior:\n\nShould reindex be doing an in-place update? Don't we have to do in-place\nupdates for other system catalogs which are read in snapshotnow for precisely\nthe same reasons?\n\nAlternatively, why does the planner need access to the pg_class entry and not\njust the pg_index record?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 24 Aug 2007 21:06:53 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Should reindex be doing an in-place update?\n\nNot if you'd like it to be crash-safe.\n\n> Alternatively, why does the planner need access to the pg_class entry and not\n> just the pg_index record?\n\nFor one thing, to find out how big the index is ... though if we could\nget around that problem, it might indeed be possible to treat the\npg_index records as property of the parent table not the index itself,\nwhich would give us license to read them without locking the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2007 17:43:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex " }, { "msg_contents": "\nThis has been saved for the 8.4 release:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches_hold\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Steven Flatt\" <[email protected]> writes:\n> > So, can we simply trust what's in pg_class.relpages and ignore looking\n> > directly at the index?\n> \n> No, we can't. In the light of morning I remember more about the reason\n> for the aforesaid patch: it's actually unsafe to read the pg_class row\n> at all if you have not got lock on the index. We are reading with\n> SnapshotNow in order to be sure we see up-to-date info, and that means\n> that a concurrent update of the row (eg, for REINDEX to report the new\n> relfilenode) can have the following behavior:\n> \n> 1. REINDEX inserts the new modified version of the index's pg_class row.\n> \n> 2. Would-be reader process visits the new version of the pg_class row.\n> It's not committed yet, so we ignore it and continue scanning.\n> \n> 3. REINDEX commits.\n> \n> 4. 
Reader process visits the old version of the pg_class row. It's\n> now committed dead, so we ignore it and continue scanning.\n> \n> 5. Reader process bombs out with a complaint about no pg_class row for\n> the index.\n> \n> So we really have to have the lock.\n> \n> > This is a fairly serious concern for us, that\n> > reindex is blocking all readers of the parent table.\n> \n> I'm afraid you're kinda stuck: I don't see any fix that would be\n> practical to put into 8.2, or even 8.3 considering that it's way too\n> late to be thinking of implementing REINDEX CONCURRENTLY for 8.3.\n> \n> You might be able to work around it for now by faking such a reindex\n> \"by hand\"; that is, create a duplicate new index under a different\n> name using CREATE INDEX CONCURRENTLY, then exclusive-lock the table\n> for just long enough to drop the old index and rename the new one\n> to match.\n> \n> It's probably worth asking also how badly you really need routine\n> reindexing. Are you certain your app still needs that with 8.2,\n> or is it a hangover from a few releases back? Could more aggressive\n> (auto)vacuuming provide a better solution?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 14 Sep 2007 00:23:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When/if to Reindex" } ]
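For readers landing on this thread, the "reindex by hand" that Tom Lane describes can be spelled out roughly as below; the index and table names are invented, and this only works for a plain index -- an index backing a primary key or foreign key constraint cannot simply be dropped and swapped, as noted earlier in the thread.

    -- Step 1: build the replacement without blocking readers or writers
    -- (CREATE INDEX CONCURRENTLY is available from 8.2 on). The fillfactor
    -- echoes the extra space savings mentioned above for insert-only data.
    CREATE INDEX CONCURRENTLY part_20070824_ts_new
        ON part_20070824 (ts) WITH (fillfactor = 100);

    -- Step 2: swap it in; the exclusive lock is held only for the
    -- instant needed to drop the old index and rename the new one.
    BEGIN;
    DROP INDEX part_20070824_ts;
    ALTER INDEX part_20070824_ts_new RENAME TO part_20070824_ts;
    COMMIT;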
[ { "msg_contents": "I am planning to add a tags (as in the \"web 2.0\" thing) feature to my web\nbased application. I would like some feedback from the experts here on\nwhat the best database design for that would be.\n\nThe possibilities I have come up with are:\n* A tags table containing the tag and id number of what it links to.\nselect pid from tags where tag='bla'\nselect tag from tags where pid=xxx.\n\n* a tags table where each tag exists only once, and a table with the tag\nID and picture ID to link them together.\n\nselect pid from tags inner join picture_tags using(tag_id) where tag='bla'\nselect tag from tags inner join picture_tags using(tag_id) where pid='xxx'\n\n* A full text index in the picture table containing the tags\n\nselect pid from pictures where tags @@ to_tsquery('bla')\n(or the non-fti version)\nselect pid from pictures where tags ~* '.*bla.*'\n\nselect tags from pictures where pid=xxx;\n", "msg_date": "Wed, 18 Jul 2007 14:26:00 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Optmal tags design?" }, { "msg_contents": "\n\nOn Wed, 2007-07-18 at 14:26 -0700, [email protected] wrote:\n> I am planning to add a tags (as in the \"web 2.0\" thing) feature to my web\n> based application. I would like some feedback from the experts here on\n> what the best database design for that would be.\n> \n> The possibilities I have come up with are:\n> * A tags table containing the tag and id number of what it links to.\n> select pid from tags where tag='bla'\n> select tag from tags where pid=xxx.\n\nProperly indexed, this schema can handle common lookups such as 'show me\nall pictures with tag X'.\n\nThe problem here is that any operation involving all tags (for example,\n'show me a list of all tags in the database') may be slow and/or\nawkward.\n\n> * a tags table where each tag exists only once, and a table with the tag\n> ID and picture ID to link them together.\n\nThis sounds the most reasonable, and is the \"right way\" to do it in the\nrelational model. Can handle common queries such as 'show me all\npictures with tag X'. Can also easily perform queries such as 'show me\na list of all tags in the database'.\n\nThis also gives you a logical place to store additional information for\neach tag, such as the user and timestamp of the first usage of the tag,\nor a cache of the approximate number of pictures with that tag (for a\nfuture performance optimization, maybe), or whatever else you can think\nup that might be useful to store on a per-tag level.\n\n> select pid from tags inner join picture_tags using(tag_id) where tag='bla'\n> select tag from tags inner join picture_tags using(tag_id) where pid='xxx'\n> \n> * A full text index in the picture table containing the tags\n> \n> select pid from pictures where tags @@ to_tsquery('bla')\n> (or the non-fti version)\n> select pid from pictures where tags ~* '.*bla.*'\n> \n> select tags from pictures where pid=xxx;\n\nI'm not experienced with full text indexing so perhaps I'm wrong about\nthis, but it seems like it would give you approximately the same\nflexibility as #1 in terms of your data model. 
The only reason I can\nthink of why you might want this over #1 would be for a performance\nimprovement, but if there's a reasonably small number of distinct tags\nand/or distinct tags per picture I can't imagine it being much faster\nthan #1.\n\n-- Mark Lewis\n", "msg_date": "Wed, 18 Jul 2007 14:51:52 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optmal tags design?" }, { "msg_contents": "We store tags on our items like this like this:\n\nTag.ID INT NOT NULL PRIMARY KEY\nTag.Value TEXT LCASE NOT NULL UNIQUE\n\nItem.ID INT NOT NULL PRIMARY KEY\n\nItemTagBinding.ItemID INT NOT NULL REFERENCES Item.ID\nItemTagBinding.TagID INT NOT NULL REFERENCES Tag.ID\nItemTagBinding.ItemID + ItemTagBinding.TagID UNIQUE\n\nwith appropriate indexes on the columns we need to frequently query.\n\nWe have about 3 million tag bindings right now, and have not run into any\nperformance issues related to tagging other than generating tag clouds\n(which we pre-calculate anyway).\n\nI'll have to get back to you when we get up to 10's, or even 100's of\nmillions and let you know how it scaled.\n\nBryan\n\nOn 7/18/07, [email protected] <[email protected]> wrote:\n>\n> I am planning to add a tags (as in the \"web 2.0\" thing) feature to my web\n> based application. I would like some feedback from the experts here on\n> what the best database design for that would be.\n>\n> The possibilities I have come up with are:\n> * A tags table containing the tag and id number of what it links to.\n> select pid from tags where tag='bla'\n> select tag from tags where pid=xxx.\n>\n> * a tags table where each tag exists only once, and a table with the tag\n> ID and picture ID to link them together.\n>\n> select pid from tags inner join picture_tags using(tag_id) where tag='bla'\n> select tag from tags inner join picture_tags using(tag_id) where pid='xxx'\n>\n> * A full text index in the picture table containing the tags\n>\n> select pid from pictures where tags @@ to_tsquery('bla')\n> (or the non-fti version)\n> select pid from pictures where tags ~* '.*bla.*'\n>\n> select tags from pictures where pid=xxx;\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Wed, 18 Jul 2007 16:54:07 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optmal tags design?" } ]
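A minimal DDL sketch of the normalized design the thread converges on (a single tags table plus a binding table), using hypothetical names and assuming an existing pictures table keyed by pid. The composite primary key provides the uniqueness Bryan mentions, and the extra index covers the tag-to-pictures lookup:

-- hypothetical schema; types and names are illustrative only
CREATE TABLE tags (
    tag_id  serial PRIMARY KEY,
    tag     text NOT NULL UNIQUE
);

CREATE TABLE picture_tags (
    pid     integer NOT NULL REFERENCES pictures (pid),
    tag_id  integer NOT NULL REFERENCES tags (tag_id),
    PRIMARY KEY (pid, tag_id)
);

CREATE INDEX picture_tags_tag_id_idx ON picture_tags (tag_id);

-- all pictures carrying a given tag
SELECT pid FROM picture_tags JOIN tags USING (tag_id) WHERE tag = 'bla';
-- all tags on a given picture
SELECT tag FROM picture_tags JOIN tags USING (tag_id) WHERE pid = 42;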
[ { "msg_contents": "Folks,\n\nI've run into this a number of times with various PostgreSQL users, so we \ntested it at Sun. What seems to be happening is that at some specific number \nof connections average throughput drops 30% and response time quadruples or \nworse. The amount seems to vary per machine; I've seen it as variously 95, \n1050, 1700 or 2800 connections. Tinkering with postgresql.conf parameters \ndoesn't seem to affect this threshold.\n\nAs an example of this behavior:\n\nUsers\tTxn/User Resp. Time\n50\t105.38\t0.01\n100\t113.05\t0.01\n150\t114.05\t0.01\n200\t113.51\t0.01\n250\t113.38\t0.01\n300\t112.14\t0.01\n350\t112.26\t0.01\n400\t111.43\t0.01\n450\t110.72\t0.01\n500\t110.44\t0.01\n550\t109.36\t0.01\n600\t107.01\t0.02\n650\t105.71\t0.02\n700\t106.95\t0.02\n750\t107.69\t0.02\n800\t106.78\t0.02\n850\t108.59\t0.02\n900\t106.03\t0.02\n950\t106.13\t0.02\n1000\t64.58\t0.15\n1050\t52.32\t0.23\n1100\t49.79\t0.25\n\nTinkering with shared_buffers has had no effect on this threholding (the above \nwas with 3gb to 6gb of shared_buffers). Any ideas on where we should look \nfor the source of the bottleneck?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 19 Jul 2007 08:28:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "User concurrency thresholding: where do I look?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n\n> 650\t105.71\t0.02\n> 700\t106.95\t0.02\n> 750\t107.69\t0.02\n> 800\t106.78\t0.02\n> 850\t108.59\t0.02\n> 900\t106.03\t0.02\n> 950\t106.13\t0.02\n> 1000\t64.58\t0.15\n> 1050\t52.32\t0.23\n> 1100\t49.79\t0.25\n> \n> Tinkering with shared_buffers has had no effect on this threholding (the above \n> was with 3gb to 6gb of shared_buffers). Any ideas on where we should look \n> for the source of the bottleneck?\n\nI have seen this as well. I always knocked it up to PG having to \nmanaging so many connections but there are some interesting evidences to \nreview.\n\nThe amount of memory \"each\" connection takes up. Consider 4-11 meg per \nconnection depending on various things like number of prepared queries.\n\nNumber of CPUs. Obviously 500 connections over 4 CPUS isn't the same as \n500 connections over 8 CPUS.\n\nThat number of connections generally means a higher velocity, a higher \nvelocity means more checkpoint segments. Wrong settings with your \ncheckpoint segments, bgwriter and checkpoint will cause you to start \nfalling down.\n\nI would also note that our experience is that PG falls down a little \nhigher, more toward 2500 connections last time I checked, but this was \nlikely on different hardware.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 19 Jul 2007 08:44:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> I've run into this a number of times with various PostgreSQL users, so we \n> tested it at Sun. What seems to be happening is that at some specific number \n> of connections average throughput drops 30% and response time quadruples or \n> worse. 
The amount seems to vary per machine; I've seen it as variously 95, \n> 1050, 1700 or 2800 connections. Tinkering with postgresql.conf parameters \n> doesn't seem to affect this threshold.\n> \n> As an example of this behavior:\n> \n> Users\tTxn/User Resp. Time\n> 50\t105.38\t0.01\n> 100\t113.05\t0.01\n> 150\t114.05\t0.01\n> 200\t113.51\t0.01\n> 250\t113.38\t0.01\n> 300\t112.14\t0.01\n> 350\t112.26\t0.01\n> 400\t111.43\t0.01\n> 450\t110.72\t0.01\n> 500\t110.44\t0.01\n> 550\t109.36\t0.01\n> 600\t107.01\t0.02\n> 650\t105.71\t0.02\n> 700\t106.95\t0.02\n> 750\t107.69\t0.02\n> 800\t106.78\t0.02\n> 850\t108.59\t0.02\n> 900\t106.03\t0.02\n> 950\t106.13\t0.02\n> 1000\t64.58\t0.15\n> 1050\t52.32\t0.23\n> 1100\t49.79\t0.25\n> \n> Tinkering with shared_buffers has had no effect on this threholding (the above \n> was with 3gb to 6gb of shared_buffers). Any ideas on where we should look \n> for the source of the bottleneck?\n\nHave you messed with max_connections and/or max_locks_per_transaction\nwhile testing this? The lock table is sized to max_locks_per_xact times\nmax_connections, and shared memory hash tables get slower when they are\nfull. Of course, the saturation point would depend on the avg number of\nlocks acquired per user, which would explain why you are seeing a lower\nnumber for some users and higher for others (simpler/more complex\nqueries).\n\nThis is just a guess though. No profiling or measuring at all, really.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n", "msg_date": "Thu, 19 Jul 2007 11:49:12 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Alvaro,\n\n> Have you messed with max_connections and/or max_locks_per_transaction\n> while testing this? The lock table is sized to max_locks_per_xact times\n> max_connections, and shared memory hash tables get slower when they are\n> full. Of course, the saturation point would depend on the avg number of\n> locks acquired per user, which would explain why you are seeing a lower\n> number for some users and higher for others (simpler/more complex\n> queries).\n\nThat's an interesting thought. Let me check lock counts and see if this is \npossibly the case.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 19 Jul 2007 09:22:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Alvaro,\n>> Have you messed with max_connections and/or max_locks_per_transaction\n>> while testing this? The lock table is sized to max_locks_per_xact times\n>> max_connections, and shared memory hash tables get slower when they are\n>> full. Of course, the saturation point would depend on the avg number of\n>> locks acquired per user, which would explain why you are seeing a lower\n>> number for some users and higher for others (simpler/more complex\n>> queries).\n\n> That's an interesting thought. 
Let me check lock counts and see if this is \n> possibly the case.\n\nAFAIK you'd get hard failures, not slowdowns, if you ran out of lock\nspace entirely; and the fact that you can continue the curve upwards\nsays that you're not on the edge of running out. However I agree that\nit's worth experimenting with those two parameters to see if the curve\nmoves around at all.\n\nAnother resource that might be interesting is the number of open files.\n\nAlso, have you tried watching vmstat or local equivalent to confirm that\nthe machine's not starting to swap?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jul 2007 13:25:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > Alvaro,\n> >> Have you messed with max_connections and/or max_locks_per_transaction\n> >> while testing this? The lock table is sized to max_locks_per_xact times\n> >> max_connections, and shared memory hash tables get slower when they are\n> >> full. Of course, the saturation point would depend on the avg number of\n> >> locks acquired per user, which would explain why you are seeing a lower\n> >> number for some users and higher for others (simpler/more complex\n> >> queries).\n> \n> > That's an interesting thought. Let me check lock counts and see if this is \n> > possibly the case.\n> \n> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock\n> space entirely;\n\nWell, if there still is shared memory available, the lock hash can\ncontinue to grow, but it would slow down according to this comment in\nShmemInitHash:\n\n * max_size is the estimated maximum number of hashtable entries. This is\n * not a hard limit, but the access efficiency will degrade if it is\n * exceeded substantially (since it's used to compute directory size and\n * the hash table buckets will get overfull).\n\nFor the lock hash tables this max_size is\n(MaxBackends+max_prepared_xacts) * max_locks_per_xact.\n\nSo maybe this does not make much sense in normal operation, thus not\napplicable to what Josh Berkus is reporting.\n\nHowever I was talking to Josh Drake yesterday and he told me that\npg_dump was spending some significant amount of time in LOCK TABLE when\nthere are lots of tables (say 300k).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 19 Jul 2007 13:37:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock\n>> space entirely;\n\n> Well, if there still is shared memory available, the lock hash can\n> continue to grow, but it would slow down according to this comment in\n> ShmemInitHash:\n\nRight, but there's not an enormous amount of headroom in shared memory\nbeyond the intended size of the hash tables. I'd think that you'd start\nseeing hard failures not very far beyond the point at which performance\nimpacts became visible. 
Of course this is all speculation; I quite\nagree with varying the table-size parameters to see if it makes a\ndifference.\n\nJosh, what sort of workload is being tested here --- read-mostly,\nwrite-mostly, a mixture?\n\n> However I was talking to Josh Drake yesterday and he told me that\n> pg_dump was spending some significant amount of time in LOCK TABLE when\n> there are lots of tables (say 300k).\n\nI wouldn't be too surprised if there's some O(N^2) effects when a single\ntransaction holds that many locks, because of the linked-list proclock\ndata structures. This would not be relevant to Josh's case though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jul 2007 13:45:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "Alvaro Herrera wrote:\n> Tom Lane wrote:\n>> Josh Berkus <[email protected]> writes:\n\n> So maybe this does not make much sense in normal operation, thus not\n> applicable to what Josh Berkus is reporting.\n> \n> However I was talking to Josh Drake yesterday and he told me that\n> pg_dump was spending some significant amount of time in LOCK TABLE when\n> there are lots of tables (say 300k).\n\nLess, 128k\n\nJoshua D. Drake\n\n\n> \n\n", "msg_date": "Thu, 19 Jul 2007 11:27:45 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Thu, 19 Jul 2007, Josh Berkus wrote:\n\n> What seems to be happening is that at some specific number of \n> connections average throughput drops 30% and response time quadruples or \n> worse.\n\nCould you characterize what each connection is doing and how you're \ngenerating the load? I don't know how productive speculating about the \ncause here will be until there's a test script available so other people \ncan see where the tipping point is on their system.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 19 Jul 2007 15:04:44 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Tom, all:\n\n> Also, have you tried watching vmstat or local equivalent to confirm that\n> the machine's not starting to swap?\n\nWe're not swapping.\n\n> Josh, what sort of workload is being tested here --- read-mostly,\n> write-mostly, a mixture?\n\nIt's a TPCC-like workload, so heavy single-row updates, and the \nupdates/inserts are what's being measured. For that matter, when I've seen \nthis before it was with heavy-write workloads and we were measuring the \nnumber of updates/inserts and not the number of reads.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 19 Jul 2007 18:19:13 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Josh, what sort of workload is being tested here --- read-mostly,\n>> write-mostly, a mixture?\n\n> It's a TPCC-like workload, so heavy single-row updates, and the \n> updates/inserts are what's being measured. 
For that matter, when I've seen \n> this before it was with heavy-write workloads and we were measuring the \n> number of updates/inserts and not the number of reads.\n\nWell, if the load is a lot of short writing transactions then you'd\nexpect the throughput to depend on how fast stuff can be pushed down to\nWAL. What have you got wal_buffers set to? Are you using a commit\ndelay? What's the I/O system anyway (any BB write cache on the WAL\ndisk?) and what wal sync method are you using?\n\nWhile I'm asking questions, exactly what were the data columns you\npresented? Txn/User doesn't make much sense to me, and I'm not sure\nwhat \"response time\" you were measuring either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jul 2007 21:45:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "Tom,\n\n> Well, if the load is a lot of short writing transactions then you'd\n> expect the throughput to depend on how fast stuff can be pushed down to\n> WAL. What have you got wal_buffers set to? Are you using a commit\n> delay? What's the I/O system anyway (any BB write cache on the WAL\n> disk?) and what wal sync method are you using?\n\nYou know, I think Jignesh needs to me on this list so I can stop relaying \nquestions on a workload I didn't design. Let me get him.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 19 Jul 2007 21:38:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> writes:\n\n> Josh Berkus <[email protected]> writes:\n>\n>> That's an interesting thought. Let me check lock counts and see if this is \n>> possibly the case.\n>\n> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock\n> space entirely\n\nI assume you've checked the server logs and are sure that you aren't in fact\ngetting errors. I could, for example, envision a situation where a fraction of\nthe transactions are getting some error and those transactions are therefore\nnot being counted against the txn/s result.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 20 Jul 2007 12:46:07 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Thu, 19 Jul 2007, Josh Berkus wrote:\n\n> It's a TPCC-like workload, so heavy single-row updates, and the \n> updates/inserts are what's being measured.\n\nThere's so much going on with a TPC-C kind of workload. Has anyone ever \nlooked into quantifying scaling for more fundamental operations? There \nare so many places a complicated workload could get caught up that \nstarting there is hard. I've found it's helpful to see the breaking \npoints for simpler operations, then compare how things change as each new \ntransaction element is introduced.\n\nAs an example, take a look at the MySQL SysBench tool:\nhttp://sysbench.sourceforge.net/docs/\n\nSpecifically their \"oltp\" tests. Note how you can get a handle on how \nsimple selects scale, then simple inserts, then updates, and so on. The \nonly thing I've thought of they missed is testing a trivial operation that \ndoesn't even touch the buffer cache ('SELECT 1'?) 
that would let you \nquantify just general connection scaling issues.\n\nIt seems to me that you could narrow the list of possible causes here much \nmore quickly if you had a good handle on the upper concurrency of \nlower-level operations.\n\n[Note: it's possible to run SysBench against a PG database, but the code \nis very immature. Last time I tried there were plenty of crashes and \nthere seemed to be some transaction wrapping issues that caused deadlocks \nwith some tests.]\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 20 Jul 2007 13:05:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Greg,\n\n> There's so much going on with a TPC-C kind of workload. Has anyone ever \n> looked into quantifying scaling for more fundamental operations? There \n> are so many places a complicated workload could get caught up that \n> starting there is hard. I've found it's helpful to see the breaking \n> points for simpler operations, then compare how things change as each \n> new transaction element is introduced.\n\n... eagerly awaiting Michael Doilson's PgUnitTest ....\n\n--Josh\n", "msg_date": "Fri, 20 Jul 2007 10:17:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Awww Josh,\n\nI was just enjoying the chat on the picket fence! :-)\n\nAnyway the workload is mixed (reads,writes) with simple to medium \nqueries. The workload is known to scale well. But inorder to provide \nsubstantial input I am still trying to eliminate things that can \nbottleneck. Currently I have eliminated CPU (plenty free) , RAM \n(memory is 48GB RAM in this server for a 32-bit postgresql instance), \nIO Storage (used the free ram to do /tmp database to eliminate IO) and \nam still trying to eliminate any network bottlenecks to say for sure we \nhave a problem in PostgreSQL. But yes till that final thing is confirmed \n(network which can very well be the case) it could be a problem \nsomewhere else. However the thing that worries me is more of the big \ndrop instead of remaining constant out there..\n\nAnyway more on this within a day or two once I add more network nics \nbetween the systems to eliminate network problems (even though stats \ndont show them as problems right now) and also reduce malloc lock \npenalties if any.\n\nAs for other questions:\n\nmax_locks_per_transactions is set to default (10 I believe) increasing \nit still seems to degrade overall throughput number.\n\nmax_connections is set to 1500 for now till I get decent scaling till \n1400-1500 users.\n\nThere are no hard failures reported anywhere. Log min durations does \nshow that queries are now slowing down and taking longer.\n\nOS is not swapping and also eliminated IO by putting the whole database \non /tmp\n\nSo while I finish adding more network connections between the two \nsystems (need to get cards) do enjoy the following URL :-)\n\nhttp://www.spec.org/jAppServer2004/results/res2007q3/jAppServer2004-20070703-00073.html\n\nOf course all postgresql.conf still remains from the old test so no \nflames on that one again :-)\n\nRegards,\nJignesh\n\n\n\n\nJosh Berkus wrote:\n> Tom,\n>\n> \n>> Well, if the load is a lot of short writing transactions then you'd\n>> expect the throughput to depend on how fast stuff can be pushed down to\n>> WAL. 
What have you got wal_buffers set to? Are you using a commit\n>> delay? What's the I/O system anyway (any BB write cache on the WAL\n>> disk?) and what wal sync method are you using?\n>> \n>\n> You know, I think Jignesh needs to me on this list so I can stop relaying \n> questions on a workload I didn't design. Let me get him.\n>\n> \n", "msg_date": "Fri, 20 Jul 2007 14:08:49 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "I forgot to add one more piece of information.. I also tried the same \ntest with 64-bit postgresql with 6GB shared_buffers and results are the \nsame it drops around the same point which to me sounds like a bottleneck..\n\nMore later\n\n-Jignesh\n\n\nJignesh K. Shah wrote:\n> Awww Josh,\n>\n> I was just enjoying the chat on the picket fence! :-)\n>\n> Anyway the workload is mixed (reads,writes) with simple to medium \n> queries. The workload is known to scale well. But inorder to provide \n> substantial input I am still trying to eliminate things that can \n> bottleneck. Currently I have eliminated CPU (plenty free) , RAM \n> (memory is 48GB RAM in this server for a 32-bit postgresql \n> instance), IO Storage (used the free ram to do /tmp database to \n> eliminate IO) and am still trying to eliminate any network \n> bottlenecks to say for sure we have a problem in PostgreSQL. But yes \n> till that final thing is confirmed (network which can very well be the \n> case) it could be a problem somewhere else. However the thing that \n> worries me is more of the big drop instead of remaining constant out \n> there..\n>\n> Anyway more on this within a day or two once I add more network nics \n> between the systems to eliminate network problems (even though stats \n> dont show them as problems right now) and also reduce malloc lock \n> penalties if any.\n>\n> As for other questions:\n>\n> max_locks_per_transactions is set to default (10 I believe) increasing \n> it still seems to degrade overall throughput number.\n>\n> max_connections is set to 1500 for now till I get decent scaling till \n> 1400-1500 users.\n>\n> There are no hard failures reported anywhere. Log min durations does \n> show that queries are now slowing down and taking longer.\n>\n> OS is not swapping and also eliminated IO by putting the whole \n> database on /tmp\n>\n> So while I finish adding more network connections between the two \n> systems (need to get cards) do enjoy the following URL :-)\n>\n> http://www.spec.org/jAppServer2004/results/res2007q3/jAppServer2004-20070703-00073.html \n>\n>\n> Of course all postgresql.conf still remains from the old test so no \n> flames on that one again :-)\n>\n> Regards,\n> Jignesh\n>\n>\n>\n>\n> Josh Berkus wrote:\n>> Tom,\n>>\n>> \n>>> Well, if the load is a lot of short writing transactions then you'd\n>>> expect the throughput to depend on how fast stuff can be pushed down to\n>>> WAL. What have you got wal_buffers set to? Are you using a commit\n>>> delay? What's the I/O system anyway (any BB write cache on the WAL\n>>> disk?) and what wal sync method are you using?\n>>> \n>>\n>> You know, I think Jignesh needs to me on this list so I can stop \n>> relaying questions on a workload I didn't design. 
Let me get him.\n>>\n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n", "msg_date": "Fri, 20 Jul 2007 14:18:05 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> There are no hard failures reported anywhere. Log min durations does \n> show that queries are now slowing down and taking longer.\n> OS is not swapping and also eliminated IO by putting the whole database \n> on /tmp\n\nHmm. Do you see any evidence of a context swap storm (ie, a drastic\nincrease in the context swaps/second reading reported by vmstat)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 14:23:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "Yes I did see increase in context switches and CPU migrations at that \npoint using mpstat.\n\nRegards,\nJignesh\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> There are no hard failures reported anywhere. Log min durations does \n>> show that queries are now slowing down and taking longer.\n>> OS is not swapping and also eliminated IO by putting the whole database \n>> on /tmp\n>> \n>\n> Hmm. Do you see any evidence of a context swap storm (ie, a drastic\n> increase in the context swaps/second reading reported by vmstat)?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Fri, 20 Jul 2007 14:56:31 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> Yes I did see increase in context switches and CPU migrations at that \n> point using mpstat.\n\nSo follow that up --- try to determine which lock is being contended\nfor. There's some very crude code in the sources that you can enable\nwith -DLWLOCK_STATS, but probably DTrace would be a better tool.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 15:13:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>> Yes I did see increase in context switches and CPU migrations at that \n>> point using mpstat.\n>\n> So follow that up --- try to determine which lock is being contended\n> for. 
There's some very crude code in the sources that you can enable\n> with -DLWLOCK_STATS, but probably DTrace would be a better tool.\n>\n> \t\t\tregards, tom lane\n\nUsing plockstat -A -s 5 -p $pid\n\non bgwriter: doesnt report anything\n\nOn one of the many connections:\n\nThis one is hard to read easily\nBy default, plockstat monitors all lock con-\ntention events, gathers frequency and timing data about\nthose events, and displays the data in decreasing frequency\norder, so that the most common events appear first.\n\n\n^Cbash-3.00# plockstat -A -s 5 -p 6401\n^C\nMutex hold\n\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 59 186888 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 16384 | | 1 libumem.so.1`process_free+0x12c\n 32768 |@@@@@ | 14 postgres`AllocSetDelete+0x98\n 65536 |@@ | 5 \npostgres`MemoryContextDelete+0x78\n 131072 | | 0 postgres`CommitTransaction+0x240\n 262144 |@@@@@@@@@@@@@@@ | 39\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 530 12226 0x10059e280 \nlibumem.so.1`umem_cache_alloc+0x200\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@ | 338 \nlibumem.so.1`umem_cache_alloc+0x200\n 8192 |@ | 24 libumem.so.1`umem_alloc+0x5c\n 16384 |@ | 37 libumem.so.1`malloc+0x40\n 32768 |@@@@@ | 131 postgres`AllocSetAlloc+0x1c4\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 324 10214 0x100578030 libumem.so.1`vmem_xfree+0x164\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@ | 192 libumem.so.1`vmem_xfree+0x164\n 8192 |@@@@ | 56 libumem.so.1`process_free+0x12c\n 16384 |@ | 26 postgres`AllocSetDelete+0x98\n 32768 |@@@ | 50 \npostgres`MemoryContextDelete+0x78\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 161 13585 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@ | 118 libumem.so.1`process_free+0x12c\n 8192 | | 4 postgres`AllocSetDelete+0x98\n 16384 |@ | 10 \npostgres`MemoryContextDelete+0x78\n 32768 |@@@ | 24 postgres`PortalDrop+0x160\n 65536 | | 3\n 131072 | | 0\n 262144 | | 2\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 6081 libumem.so.1`vmem0+0xc38 libumem.so.1`vmem_xalloc+0x630\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@ | 170 libumem.so.1`vmem_xalloc+0x630\n 8192 |@@@@@@@@@@@ | 155 libumem.so.1`vmem_alloc+0x1f8\n 16384 | | 1 libumem.so.1`vmem_xalloc+0x524\n libumem.so.1`vmem_alloc+0x1f8\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 5867 libumem.so.1`vmem0+0x30 libumem.so.1`vmem_alloc+0x248\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@ | 185 libumem.so.1`vmem_alloc+0x248\n 8192 |@@@@@@@@@@ | 141 \nlibumem.so.1`vmem_sbrk_alloc+0x30\n libumem.so.1`vmem_xalloc+0x524\n libumem.so.1`vmem_alloc+0x1f8\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 318 5873 0x100578030 libumem.so.1`vmem_alloc+0x1d0\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@ | 228 libumem.so.1`vmem_alloc+0x1d0\n 8192 |@@@@@ | 78 libumem.so.1`umem_alloc+0xec\n 16384 | | 6 libumem.so.1`malloc+0x40\n 32768 | | 6 
postgres`AllocSetAlloc+0x1c4\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 5591 0x100578030 libumem.so.1`vmem_xalloc+0x630\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@ | 213 libumem.so.1`vmem_xalloc+0x630\n 8192 |@@@@@@@@ | 112 libumem.so.1`vmem_alloc+0x1f8\n 16384 | | 0 libumem.so.1`umem_alloc+0xec\n 32768 | | 1 libumem.so.1`malloc+0x40\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 324 5208 libumem.so.1`vmem0+0xc38 libumem.so.1`vmem_xfree+0x164\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@ | 236 libumem.so.1`vmem_xfree+0x164\n 8192 |@@@@@@ | 88 libumem.so.1`process_free+0x12c\n postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 4108 libumem.so.1`vmem0+0xc38 libumem.so.1`vmem_alloc+0x1d0\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@ | 325 libumem.so.1`vmem_alloc+0x1d0\n 8192 | | 1 libumem.so.1`vmem_xalloc+0x524\n libumem.so.1`vmem_alloc+0x1f8\n libumem.so.1`umem_alloc+0xec\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 4108 0x100578030 libumem.so.1`vmem_xalloc+0x50c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@ | 325 libumem.so.1`vmem_xalloc+0x50c\n 8192 | | 1 libumem.so.1`vmem_alloc+0x1f8\n libumem.so.1`umem_alloc+0xec\n libumem.so.1`malloc+0x40\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 326 4096 libumem.so.1`vmem0+0xc38 libumem.so.1`vmem_xalloc+0x50c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 326 libumem.so.1`vmem_xalloc+0x50c\n libumem.so.1`vmem_alloc+0x1f8\n libumem.so.1`vmem_xalloc+0x524\n libumem.so.1`vmem_alloc+0x1f8\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 240 5444 libumem.so.1`vmem0+0x30 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@ | 167 libumem.so.1`process_free+0x12c\n 8192 |@@@@@@@ | 72 postgres`AllocSetDelete+0x98\n 16384 | | 0 \npostgres`MemoryContextDelete+0x78\n 32768 | | 1 postgres`ExecutorEnd+0x40\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 123 9057 0x10059e1d0 \nlibumem.so.1`umem_depot_alloc+0xb8\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@ | 60 \nlibumem.so.1`umem_depot_alloc+0xb8\n 8192 |@@@@ | 24 \nlibumem.so.1`umem_cache_free+0xc4\n 16384 |@@@@@@@ | 37 libumem.so.1`process_free+0x12c\n 32768 | | 2 postgres`AllocSetDelete+0x98\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 200 4935 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@ | 185 libumem.so.1`process_free+0x12c\n 8192 | | 4 postgres`AllocSetDelete+0x98\n 16384 |@ | 10 \npostgres`MemoryContextDelete+0x78\n 32768 | | 1 postgres`ExecutorEnd+0x40\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 164 5219 0x100595700 \nlibumem.so.1`umem_cache_alloc+0x200\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@ | 121 \nlibumem.so.1`umem_cache_alloc+0x200\n 8192 |@@@@@@ | 42 
libumem.so.1`umem_alloc+0x5c\n 16384 | | 1 libumem.so.1`malloc+0x40\n postgres`AllocSetAlloc+0x1c4\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 122 6748 0x10059e1d0 \nlibumem.so.1`umem_depot_alloc+0xb8\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@ | 43 \nlibumem.so.1`umem_depot_alloc+0xb8\n 8192 |@@@@@@@@@@@@@@@ | 79 \nlibumem.so.1`umem_cache_alloc+0xa0\n libumem.so.1`umem_alloc+0x5c\n libumem.so.1`malloc+0x40\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 163 4146 0x100595700 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@ | 161 libumem.so.1`process_free+0x12c\n 8192 | | 2 postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n postgres`PortalDrop+0x160\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 50 12615 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@ | 28 libumem.so.1`process_free+0x12c\n 8192 |@ | 3 postgres`AllocSetDelete+0x98\n 16384 |@@@ | 8 \npostgres`MemoryContextDelete+0x78\n 32768 |@@@@@ | 11 postgres`FreeExecutorState+0x6c\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 123 4096 0x10059e1d0 \nlibumem.so.1`umem_cache_free+0xec\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 123 \nlibumem.so.1`umem_cache_free+0xec\n libumem.so.1`process_free+0x12c\n postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 122 4096 0x10059e1d0 \nlibumem.so.1`umem_cache_alloc+0xc4\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 122 \nlibumem.so.1`umem_cache_alloc+0xc4\n libumem.so.1`umem_alloc+0x5c\n libumem.so.1`malloc+0x40\n postgres`AllocSetAlloc+0x1c4\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 37 7970 libumem.so.1`vmem0+0x30 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@ | 2 libumem.so.1`process_free+0x12c\n 8192 |@@@@@@@@@@@@@@@@@@@@@@ | 35 postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n \npostgres`exec_parse_message+0x130\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 37 5867 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@ | 33 libumem.so.1`process_free+0x12c\n 8192 |@ | 2 postgres`AllocSetDelete+0x98\n 16384 | | 0 \npostgres`MemoryContextDelete+0x78\n 32768 |@ | 2 \npostgres`exec_parse_message+0x130\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 39 4516 libumem.so.1`vmem0+0x30 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@ | 35 libumem.so.1`process_free+0x12c\n 8192 |@@ | 4 postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n postgres`PortalDrop+0x160\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 37 4428 0x10058b700 \nlibumem.so.1`umem_cache_alloc+0x200\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@ | 34 
\nlibumem.so.1`umem_cache_alloc+0x200\n 8192 |@ | 3 libumem.so.1`umem_alloc+0x5c\n libumem.so.1`malloc+0x40\n \npostgres`base_yy_scan_buffer+0x38\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 37 4206 0x10058b700 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@ | 36 libumem.so.1`process_free+0x12c\n 8192 | | 1 postgres`scanner_finish+0x50\n postgres`raw_parser+0x3c\n postgres`pg_parse_query+0x54\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 11 10426 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@ | 6 libumem.so.1`process_free+0x12c\n 8192 |@@@@@@ | 3 postgres`AllocSetDelete+0x98\n 16384 | | 0 \npostgres`MemoryContextDelete+0x78\n 32768 |@@@@ | 2 postgres`ExecEndAgg+0x68\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 8 5120 libumem.so.1`vmem0+0x30 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@ | 6 libumem.so.1`process_free+0x12c\n 8192 |@@@@@@ | 2 postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n postgres`ExecEndSort+0x24\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 8 4096 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 8 libumem.so.1`process_free+0x12c\n postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n postgres`ExecEndSort+0x24\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 8 4096 0x100578030 libumem.so.1`vmem_alloc+0x1d0\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 8 libumem.so.1`vmem_alloc+0x1d0\n libumem.so.1`umem_alloc+0xec\n libumem.so.1`malloc+0x40\n postgres`AllocSetAlloc+0x314\n-------------------------------------------------------------------------------\nCount nsec Lock Caller\n 3 4096 0x10059e280 libumem.so.1`process_free+0x12c\n\n nsec ---- Time Distribution --- count Stack\n 4096 |@@@@@@@@@@@@@@@@@@@@@@@@| 3 libumem.so.1`process_free+0x12c\n postgres`AllocSetDelete+0x98\n \npostgres`MemoryContextDelete+0x78\n postgres`tbm_free+0x10\nbash-3.00#\n\n\n\n\n", "msg_date": "Fri, 20 Jul 2007 15:46:30 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> Tom Lane wrote:\n>> So follow that up --- try to determine which lock is being contended\n>> for. There's some very crude code in the sources that you can enable\n>> with -DLWLOCK_STATS, but probably DTrace would be a better tool.\n\n> Using plockstat -A -s 5 -p $pid\n\nI don't know what that is, but it doesn't appear to have anything to do\nwith Postgres LWLocks or spinlocks, which are the locks I was thinking of.\nTry asking Robert Lor about this --- IIRC he had some dtrace probes to\nwork with our locks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 16:16:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? 
" }, { "msg_contents": "sorry..\n\nThe are solaris mutex locks used by the postgresql process.\n\nWhat its saying is that there are holds/waits in trying to get locks \nwhich are locked at Solaris user library levels called from the \npostgresql functions:\nFor example both the following functions are hitting on the same mutex \nlock 0x10059e280 in Solaris Library call:\npostgres`AllocSetDelete+0x98\npostgres`AllocSetAlloc+0x1c4\n\n\nI need to enable the DTrace probes on my builds\n\n-Jignesh\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> Tom Lane wrote:\n>> \n>>> So follow that up --- try to determine which lock is being contended\n>>> for. There's some very crude code in the sources that you can enable\n>>> with -DLWLOCK_STATS, but probably DTrace would be a better tool.\n>>> \n>\n> \n>> Using plockstat -A -s 5 -p $pid\n>> \n>\n> I don't know what that is, but it doesn't appear to have anything to do\n> with Postgres LWLocks or spinlocks, which are the locks I was thinking of.\n> Try asking Robert Lor about this --- IIRC he had some dtrace probes to\n> work with our locks.\n>\n> \t\t\tregards, tom lane\n> \n", "msg_date": "Fri, 20 Jul 2007 16:51:39 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> What its saying is that there are holds/waits in trying to get locks \n> which are locked at Solaris user library levels called from the \n> postgresql functions:\n> For example both the following functions are hitting on the same mutex \n> lock 0x10059e280 in Solaris Library call:\n> postgres`AllocSetDelete+0x98\n> postgres`AllocSetAlloc+0x1c4\n\nThat's a perfect example of the sort of useless overhead that I was\ncomplaining of just now in pgsql-patches. Having malloc/free use\nan internal mutex is necessary in multi-threaded programs, but the\nbackend isn't multi-threaded. And yet, apparently you can't turn\nthat off in Solaris.\n\n(Fortunately, the palloc layer is probably insulating us from malloc's\nperformance enough that this isn't a huge deal. But it's annoying.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 16:57:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "True you cant switch off the locks since libthread has been folded into \nlibc in Solaris 10.\n\nAnyway just to give you an idea of the increase in context switching at \nthe break point here are the mpstat (taken at 10 second interval) on \nthis 8-socket Sun Fire V890.\n\nThe low icsw (Involuntary Context Switches) is about 950-1000 user mark \nafter which a context switch storm starts at users above 1000-1050 mark \nand drops in total throughput drops about 30% instantaneously.. I will \ntry rebuilding the postgresql with dtrace probes to get more clues. \n(NOTE you will see 1 cpu (cpuid:22) doing more system work... 
thats the \none doing handling the network interrupts)\n\n\nCPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl\n 0 57 0 27 108 6 4072 98 1749 416 1 7763 47 13 0 40\n 1 46 0 24 22 6 4198 11 1826 427 0 7547 45 13 0 42\n 2 42 0 34 104 8 4103 91 1682 424 1 7797 46 13 0 41\n 3 51 0 22 21 6 4125 10 1734 435 0 7399 45 13 0 43\n 4 65 0 27 19 6 4015 8 1706 411 0 7292 44 15 0 41\n 5 54 0 21 21 6 4297 10 1702 464 0 7708 45 13 0 42\n 6 36 0 16 66 47 4218 12 1713 426 0 7685 47 11 0 42\n 7 40 0 100 318 206 3699 10 1534 585 0 6851 45 14 0 41\n 16 41 0 30 87 5 3780 78 1509 401 1 7604 45 13 0 42\n 17 39 0 24 22 5 3970 12 1631 408 0 7265 44 12 0 44\n 18 42 0 24 99 5 3829 89 1519 401 1 7343 45 12 0 43\n 19 39 0 31 78830 5 3588 8 1509 400 0 6629 43 13 0 44\n 20 22 0 20 19 6 3925 9 1577 419 0 7364 44 12 0 44\n 21 38 0 31 23 5 3792 13 1566 407 0 7133 45 12 0 44\n 22 8 0 110 7053 7045 1641 8 728 838 0 2917 16 50 0 33\n 23 62 0 29 21 5 3985 10 1579 449 0 7368 44 12 0 44\nCPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl\n 0 13 0 27 123 6 4228 113 1820 433 1 8084 49 13 0 38\n 1 16 0 63 26 6 4253 15 1875 420 0 7754 47 14 0 39\n 2 11 0 31 110 8 4178 97 1741 425 1 8095 48 14 0 38\n 3 8 0 24 20 6 4257 9 1818 444 0 7807 47 13 0 40\n 4 13 0 54 28 6 4145 17 1774 426 1 7732 46 16 0 38\n 5 12 0 35 23 6 4412 12 1775 447 0 8249 48 13 0 39\n 6 8 0 24 38 15 4323 14 1760 422 0 8016 49 11 0 39\n 7 8 0 120 323 206 3801 15 1599 635 0 7290 47 15 0 38\n 16 11 0 44 107 5 3896 98 1582 393 1 7997 47 15 0 39\n 17 15 0 29 24 5 4120 14 1716 416 0 7648 46 13 0 41\n 18 9 0 35 113 5 3933 103 1594 399 1 7714 47 13 0 40\n 19 8 0 34 83271 5 3702 12 1564 403 0 7010 45 14 0 41\n 20 7 0 28 27 6 3997 16 1624 400 0 7676 46 13 0 41\n 21 8 0 28 25 5 3997 15 1664 402 0 7658 47 12 0 41\n 22 4 0 97 7741 7731 1586 11 704 906 0 2933 17 51 0 32\n 23 13 0 28 25 5 4144 15 1658 437 0 7810 47 12 0 41\nCPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl\n 0 0 0 141 315 6 9262 301 2812 330 0 10905 49 16 0 35\n 1 1 0 153 199 6 9400 186 2808 312 0 11066 48 16 0 37\n 2 0 0 140 256 8 8798 242 2592 310 0 10111 47 15 0 38\n 3 1 0 141 189 6 8803 172 2592 314 0 10171 47 15 0 39\n 4 0 0 120 214 6 9540 207 2801 322 0 10531 46 17 0 36\n 5 1 0 152 180 6 8764 161 2564 342 0 9904 47 15 0 38\n 6 1 0 107 344 148 8180 181 2512 290 0 9314 51 14 0 35\n 7 0 0 665 443 204 8733 153 2574 404 0 9892 43 21 0 37\n 16 0 0 113 217 5 6446 201 1975 265 0 7552 45 12 0 44\n 17 0 0 107 153 5 6568 140 2021 274 0 7586 44 11 0 45\n 18 0 0 121 215 5 6072 201 1789 276 1 7690 44 12 0 44\n 19 1 0 102 47142 5 6123 126 1829 262 0 7185 43 12 0 45\n 20 0 0 102 143 6 6451 129 1939 262 0 7450 43 13 0 44\n 21 1 0 106 150 5 6538 133 1997 285 0 7425 44 11 0 44\n 22 0 0 494 5949 5876 3586 73 1040 399 0 4058 26 39 0 34\n 23 0 0 102 159 5 6393 142 1942 324 0 7226 43 12 0 46\nCPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl\n 0 0 0 217 441 7 10763 426 3234 363 0 12449 47 18 \n0 35\n 1 0 0 210 322 7 11113 309 3273 351 0 12527 46 17 \n0 37\n 2 1 0 212 387 8 10306 370 2977 354 0 11320 45 16 \n0 38\n 3 0 0 230 276 7 10332 257 2947 341 0 11901 43 16 \n0 40\n 4 0 0 234 306 7 11324 290 3265 352 0 12805 45 18 \n0 37\n 5 0 0 212 284 7 10590 262 3042 388 0 11789 44 17 \n0 39\n 6 1 0 154 307 48 9583 241 2903 324 0 10564 50 15 0 35\n 7 0 0 840 535 206 10354 247 3035 428 0 11700 42 22 \n0 37\n 16 0 0 169 303 5 7446 286 2250 290 0 8361 42 13 0 45\n 17 0 0 173 240 5 7640 225 2288 295 0 8674 41 13 0 47\n 18 0 0 170 289 5 7445 270 2108 286 0 8167 
41 12 0 47\n 19 0 0 176 51118 5 7365 197 2138 288 0 7934 40 13 0 47\n 20 1 0 172 222 6 7835 204 2323 298 0 8759 40 14 0 46\n 21 0 0 167 233 5 7749 218 2339 326 0 8264 42 13 0 46\n 22 0 0 749 6612 6516 4173 97 1166 421 0 4741 23 44 0 33\n 23 0 0 181 239 6 7709 219 2258 383 0 8402 41 12 0 47\nCPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl\n 0 0 0 198 439 6 10364 417 3113 327 0 11962 49 17 \n0 34\n 1 0 0 210 299 6 10655 282 3135 346 0 12463 47 17 \n0 36\n 2 0 0 202 352 8 9960 332 2890 320 0 11261 47 16 0 37\n 3 0 0 182 276 6 9950 255 2857 334 0 11021 46 16 0 38\n 4 0 0 200 305 6 10841 286 3127 325 0 12440 48 18 \n0 35\n 5 0 0 240 286 6 9983 272 2912 358 0 11450 46 16 0 37\n 6 0 0 153 323 81 9062 233 2767 300 0 9675 49 18 0 33\n 7 0 0 850 556 206 10027 271 2910 415 0 11048 43 22 \n0 35\n 16 0 0 152 306 5 7261 291 2216 266 0 8055 44 12 0 44\n 17 0 0 151 236 5 7193 217 2170 283 0 8099 43 12 0 45\n 18 0 0 170 263 5 7008 246 2009 254 0 7836 43 12 0 46\n 19 0 0 165 47738 5 6824 197 1989 273 0 7663 42 12 0 46\n 20 0 0 188 217 6 7496 197 2222 280 0 8435 43 13 0 44\n 21 0 0 179 248 5 7352 234 2233 309 0 8237 43 12 0 44\n 22 0 0 813 6041 5963 4006 82 1125 448 0 4442 25 42 0 33\n 23 0 0 162 241 5 7364 225 2170 355 0 7720 43 11 0 45\n\n\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> What its saying is that there are holds/waits in trying to get locks \n>> which are locked at Solaris user library levels called from the \n>> postgresql functions:\n>> For example both the following functions are hitting on the same mutex \n>> lock 0x10059e280 in Solaris Library call:\n>> postgres`AllocSetDelete+0x98\n>> postgres`AllocSetAlloc+0x1c4\n>> \n>\n> That's a perfect example of the sort of useless overhead that I was\n> complaining of just now in pgsql-patches. Having malloc/free use\n> an internal mutex is necessary in multi-threaded programs, but the\n> backend isn't multi-threaded. And yet, apparently you can't turn\n> that off in Solaris.\n>\n> (Fortunately, the palloc layer is probably insulating us from malloc's\n> performance enough that this isn't a huge deal. But it's annoying.)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Fri, 20 Jul 2007 17:24:33 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Tom Lane wrote:\n> Having malloc/free use\n> an internal mutex is necessary in multi-threaded programs, but the\n> backend isn't multi-threaded. \n> \nHmm...confused. I'm not following why then there is contention for the \nmutex.\nSurely this has to be some other mutex that is in contention, not a heap \nlock ?\n\nIt'd be handy to see the call stack for the wait state -- if the thing \nis spending\na significant proportion of its time in contention it should be easy to \nget that with\na simple tool such as pstack or a debugger.\n\n\n", "msg_date": "Fri, 20 Jul 2007 21:31:14 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" 
}, { "msg_contents": "David Boreham <[email protected]> writes:\n> Tom Lane wrote:\n>> Having malloc/free use\n>> an internal mutex is necessary in multi-threaded programs, but the\n>> backend isn't multi-threaded. \n\n> Hmm...confused. I'm not following why then there is contention for the \n> mutex.\n\nThere isn't any contention for that mutex; Jignesh's results merely show\nthat it was taken and released a lot of times.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Jul 2007 00:26:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Fri, 2007-07-20 at 16:57 -0400, Tom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> > What its saying is that there are holds/waits in trying to get locks \n> > which are locked at Solaris user library levels called from the \n> > postgresql functions:\n> > For example both the following functions are hitting on the same mutex \n> > lock 0x10059e280 in Solaris Library call:\n> > postgres`AllocSetDelete+0x98\n> > postgres`AllocSetAlloc+0x1c4\n> \n> That's a perfect example of the sort of useless overhead that I was\n> complaining of just now in pgsql-patches. Having malloc/free use\n> an internal mutex is necessary in multi-threaded programs, but the\n> backend isn't multi-threaded. And yet, apparently you can't turn\n> that off in Solaris.\n> \n> (Fortunately, the palloc layer is probably insulating us from malloc's\n> performance enough that this isn't a huge deal. But it's annoying.)\n\nThere is one thing that the palloc layer doesn't handle: EState. All\nother memory contexts have a very well chosen initial allocation that\nprevents mallocs during low-medium complexity OLTP workloads.\n\nEState is about 8300 bytes, so just above the large allocation limit.\nThis means that every time we request an EState, i.e. at least once per\nstatement we need to malloc() and then later free().\n\nWould it be worth a special case in the palloc system to avoid having to\nrepeatedly issue external memory allocation calls?\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 09:51:08 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> EState is about 8300 bytes,\n\nWhat?\n\n(gdb) p sizeof(EState)\n$1 = 112\n\nThis is on a 32-bit machine, but even on 64-bit it wouldn't be more than\ndouble that.\n\n> Would it be worth a special case in the palloc system to avoid having to\n> repeatedly issue external memory allocation calls?\n\nThe appropriate hack would be to change the AllocSetContextCreate\ninitial-size parameter for the containing context. But I really have\nno idea what you're on about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jul 2007 10:11:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? 
" }, { "msg_contents": "On Mon, 2007-07-23 at 10:11 -0400, Tom Lane wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > EState is about 8300 bytes,\n> \n> What?\n> \n> (gdb) p sizeof(EState)\n> $1 = 112\n> \n> This is on a 32-bit machine, but even on 64-bit it wouldn't be more than\n> double that.\n> \n> > Would it be worth a special case in the palloc system to avoid having to\n> > repeatedly issue external memory allocation calls?\n> \n> The appropriate hack would be to change the AllocSetContextCreate\n> initial-size parameter for the containing context. But I really have\n> no idea what you're on about.\n\nI looked at this last May and my notes say \"ExecutorState\". I guess that\nwas wrong, but my analysis showed there was a single malloc of 8228\nbytes happening once per query during my tests. \n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 15:39:16 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> I looked at this last May and my notes say \"ExecutorState\". I guess that\n> was wrong, but my analysis showed there was a single malloc of 8228\n> bytes happening once per query during my tests. \n\nWell, if you can track down where it's coming from, we could certainly\nhack the containing context's parameters. But EState's not it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jul 2007 10:54:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Mon, 2007-07-23 at 10:54 -0400, Tom Lane wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > I looked at this last May and my notes say \"ExecutorState\". I guess that\n> > was wrong, but my analysis showed there was a single malloc of 8228\n> > bytes happening once per query during my tests. \n> \n> Well, if you can track down where it's coming from, we could certainly\n> hack the containing context's parameters. But EState's not it.\n\nWell, I discover there is an allocation of 8232 (inflation...) made once\nper statement by a memory context called... ExecutorState. Still not\nsure exactly which allocation this is, but its definitely once per\nstatement on pgbench, which should narrow it down. Plan, query etc?\n\nI don't see a way to hack the allocation, since the max chunk size is\n8K.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 16:47:43 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> Well, I discover there is an allocation of 8232 (inflation...) made once\n> per statement by a memory context called... ExecutorState. Still not\n> sure exactly which allocation this is, but its definitely once per\n> statement on pgbench, which should narrow it down. Plan, query etc?\n\nAre you working with stock sources? The only allocation exceeding 1K\nthat I can see during pgbench is BTScanOpaqueData, which is 6600 bytes.\n(Checked by setting a conditional breakpoint on AllocSetAlloc.) 
The\npath that allocates a single-chunk block is never taken at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jul 2007 12:35:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Mon, 2007-07-23 at 16:48 +0100, Simon Riggs wrote:\n> On Mon, 2007-07-23 at 10:54 -0400, Tom Lane wrote:\n> > \"Simon Riggs\" <[email protected]> writes:\n> > > I looked at this last May and my notes say \"ExecutorState\". I guess that\n> > > was wrong, but my analysis showed there was a single malloc of 8228\n> > > bytes happening once per query during my tests. \n> > \n> > Well, if you can track down where it's coming from, we could certainly\n> > hack the containing context's parameters. But EState's not it.\n> \n> Well, I discover there is an allocation of 8232 (inflation...) made once\n> per statement by a memory context called... ExecutorState. Still not\n> sure exactly which allocation this is, but its definitely once per\n> statement on pgbench, which should narrow it down. Plan, query etc?\n> \n> I don't see a way to hack the allocation, since the max chunk size is\n> 8K.\n\nIt is the allocation of BTScanOpaqueData called from btrescan() in\nnbtree.c\n\ncurrPos and markPos are defined as BTScanPosData, which is an array of\nBTScanPosItems. That makes BTScanOpaqueData up to 8232 bytes, which\nseems wasteful since markPos is only ever used during merge joins. Most\nof that space isn't even used during merge joins either, we just do that\nto slightly optimise the speed of the restore during merge joins.\n\nSeems like we should allocate the memory when we do the first mark.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 18:37:41 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Mon, 2007-07-23 at 12:35 -0400, Tom Lane wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > Well, I discover there is an allocation of 8232 (inflation...) made once\n> > per statement by a memory context called... ExecutorState. Still not\n> > sure exactly which allocation this is, but its definitely once per\n> > statement on pgbench, which should narrow it down. Plan, query etc?\n> \n> Are you working with stock sources? The only allocation exceeding 1K\n> that I can see during pgbench is BTScanOpaqueData, which is 6600 bytes.\n> (Checked by setting a conditional breakpoint on AllocSetAlloc.) The\n> path that allocates a single-chunk block is never taken at all.\n\nI do have the bitmap patch currently applied, but it doesn't touch that\npart of the code.\n\n(gdb) p size\n$1 = 8232\n\n(gdb) p sizeof(int)\n$2 = 4\n\n(gdb) p sizeof(BTScanPosData)\n$3 = 4104\n\nSince my notes say I got 8228 last year, seems reasonable.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 18:50:53 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> currPos and markPos are defined as BTScanPosData, which is an array of\n> BTScanPosItems. That makes BTScanOpaqueData up to 8232 bytes, which\n> seems wasteful since markPos is only ever used during merge joins. 
Most\n> of that space isn't even used during merge joins either, we just do that\n> to slightly optimise the speed of the restore during merge joins.\n\nAh. I was seeing it as 6600 bytes on HPPA and 6608 on x86_64, but\nI forgot that both of those architectures have MAXALIGN = 8. On a\nMAXALIGN = 4 machine, MaxIndexTuplesPerPage will be significantly\nlarger, leading to larger BTScanPosData.\n\nNot sure it's worth fooling with, given that these days almost everyone\nwho's seriously concerned about performance is probably using 64bit\nhardware. One less malloc cycle per indexscan is never going to be a\nmeasurable savings anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jul 2007 14:19:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Mon, 2007-07-23 at 14:19 -0400, Tom Lane wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > currPos and markPos are defined as BTScanPosData, which is an array of\n> > BTScanPosItems. That makes BTScanOpaqueData up to 8232 bytes, which\n> > seems wasteful since markPos is only ever used during merge joins. Most\n> > of that space isn't even used during merge joins either, we just do that\n> > to slightly optimise the speed of the restore during merge joins.\n> \n> Ah. I was seeing it as 6600 bytes on HPPA and 6608 on x86_64, but\n> I forgot that both of those architectures have MAXALIGN = 8. On a\n> MAXALIGN = 4 machine, MaxIndexTuplesPerPage will be significantly\n> larger, leading to larger BTScanPosData.\n> \n> Not sure it's worth fooling with, given that these days almost everyone\n> who's seriously concerned about performance is probably using 64bit\n> hardware. One less malloc cycle per indexscan is never going to be a\n> measurable savings anyway...\n\nOh sure, I was thinking to avoid Solaris' mutex by avoiding malloc()\ncompletely.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 23 Jul 2007 19:30:48 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Here is how I got the numbers..\nI had about 1600 users login into postgresql. Then started the run with \n500 users and using DTrace I started tracking Postgresql Locking \"as \nviewed from one user/connection\". Echo statements indicate how many \nusers were active at that point and how was throughput performing. 
All \nIO is done on /tmp which means on a RAM disk.\n\nbash-3.00# echo 500 users - baseline number\n500 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n FirstLockMgrLock Exclusive 1\n RelCacheInitLock Exclusive 2\n SInvalLock Exclusive 2\n WALInsertLock Exclusive 10\n BufMappingLock Exclusive 12\n CheckpointLock Shared 29\n CheckpointStartLock Shared 29\n OidGenLock Exclusive 29\n XidGenLock Exclusive 29\n FirstLockMgrLock Shared 33\n CheckpointStartLock Exclusive 78\n FreeSpaceLock Exclusive 114\n OidGenLock Shared 126\n XidGenLock Shared 152\n ProcArrayLock Shared 482\n\n Lock Id Combined Time (ns)\n SInvalLock 29800\n RelCacheInitLock 30300\n BufMappingLock 168800\n FirstLockMgrLock 414300\n FreeSpaceLock 1281700\n ProcArrayLock 7869900\n WALInsertLock 11113200\n CheckpointStartLock 13494700\n OidGenLock 25719100\n XidGenLock 26443300\n CheckpointLock 194267800\n\nbash-3.00# echo 600 users - Throughput rising\n600 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n RelCacheInitLock Exclusive 1\n SInvalLock Exclusive 1\n BufMappingLock Exclusive 2\n CLogControlLock Exclusive 2\n WALInsertLock Exclusive 11\n FirstLockMgrLock Shared 20\n CheckpointLock Shared 24\n CheckpointStartLock Shared 24\n OidGenLock Exclusive 24\n XidGenLock Exclusive 24\n CheckpointStartLock Exclusive 72\n FreeSpaceLock Exclusive 102\n OidGenLock Shared 106\n XidGenLock Shared 128\n ProcArrayLock Shared 394\n\n Lock Id Combined Time (ns)\n SInvalLock 15600\n RelCacheInitLock 15700\n BufMappingLock 31000\n CLogControlLock 41500\n FirstLockMgrLock 289000\n FreeSpaceLock 3045400\n CheckpointStartLock 7371800\n WALInsertLock 9383200\n ProcArrayLock 10457900\n OidGenLock 20005900\n XidGenLock 20331500\n CheckpointLock 187067900\n\nbash-3.00# echo 700 users - Throughput rising\n700 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n RelCacheInitLock Exclusive 1\n SInvalLock Exclusive 1\n BufMappingLock Exclusive 2\n WALInsertLock Exclusive 17\n CheckpointLock Shared 33\n CheckpointStartLock Shared 33\n OidGenLock Exclusive 33\n XidGenLock Exclusive 33\n FirstLockMgrLock Shared 81\n CheckpointStartLock Exclusive 87\n FreeSpaceLock Exclusive 124\n OidGenLock Shared 125\n XidGenLock Shared 150\n ProcArrayLock Shared 500\n\n Lock Id Combined Time (ns)\n RelCacheInitLock 15100\n SInvalLock 15400\n BufMappingLock 47400\n FirstLockMgrLock 3021000\n FreeSpaceLock 3794300\n WALInsertLock 7567300\n XidGenLock 18427400\n ProcArrayLock 20884000\n CheckpointStartLock 24084900\n OidGenLock 26399500\n CheckpointLock 256549800\n\nbash-3.00# echo 800 users - Throughput rising\n800 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n BufMappingLock Exclusive 1\n RelCacheInitLock Exclusive 1\n SInvalLock Exclusive 1\n WALWriteLock Exclusive 1\n WALInsertLock Exclusive 11\n CheckpointLock Shared 27\n CheckpointStartLock Shared 27\n OidGenLock Exclusive 27\n XidGenLock Exclusive 27\n FirstLockMgrLock Shared 32\n CheckpointStartLock Exclusive 73\n FreeSpaceLock Exclusive 110\n OidGenLock Shared 118\n XidGenLock Shared 140\n ProcArrayLock Shared 442\n\n Lock Id Combined Time (ns)\n WALWriteLock 13900\n SInvalLock 15000\n RelCacheInitLock 15500\n BufMappingLock 18600\n FirstLockMgrLock 391100\n WALInsertLock 3953700\n FreeSpaceLock 4801300\n CheckpointStartLock 13131800\n ProcArrayLock 14480500\n OidGenLock 17736500\n XidGenLock 21723100\n CheckpointLock 206423500\n\nbash-3.00# echo 850 users - SLIGHT DROP in throughput\n850 users\nbash-3.00# ./3_lwlock_acquires.d 
19178\n\n Lock Id Mode Count\n FirstLockMgrLock Exclusive 1\n RelCacheInitLock Exclusive 1\n SInvalLock Exclusive 1\n WALWriteLock Exclusive 1\n BufMappingLock Exclusive 3\n WALInsertLock Exclusive 7\n CheckpointLock Shared 39\n CheckpointStartLock Shared 39\n OidGenLock Exclusive 39\n XidGenLock Exclusive 39\n FirstLockMgrLock Shared 47\n CheckpointStartLock Exclusive 113\n FreeSpaceLock Exclusive 152\n OidGenLock Shared 162\n XidGenLock Shared 194\n ProcArrayLock Shared 621\n\n Lock Id Combined Time (ns)\n WALWriteLock 14200\n RelCacheInitLock 15100\n SInvalLock 15600\n BufMappingLock 64100\n WALInsertLock 2073200\n FirstLockMgrLock 3040300\n FreeSpaceLock 7329500\n OidGenLock 21619100\n CheckpointStartLock 23261300\n ProcArrayLock 23917500\n XidGenLock 24873100\n CheckpointLock 309221200\n\nbash-3.00# echo 900 users - ANOTHER SLIGHT DROP IN THROUGPUT\n900 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n WALWriteLock Exclusive 1\n WALInsertLock Exclusive 7\n CheckpointStartLock Shared 13\n OidGenLock Exclusive 13\n CheckpointLock Shared 14\n XidGenLock Exclusive 14\n FirstLockMgrLock Shared 15\n FreeSpaceLock Exclusive 51\n OidGenLock Shared 51\n XidGenLock Shared 62\n CheckpointStartLock Exclusive 170\n ProcArrayLock Shared 202\n\n Lock Id Combined Time (ns)\n WALWriteLock 16800\n FirstLockMgrLock 170300\n FreeSpaceLock 601500\n ProcArrayLock 3971300\n WALInsertLock 7757200\n OidGenLock 8261900\n XidGenLock 18450900\n CheckpointStartLock 39155100\n CheckpointLock 143751500\n\nbash-3.00# echo 950 users - BIG DROP IN THROUGHPUT\n950 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n WALInsertLock Exclusive 3\n FirstLockMgrLock Shared 4\n CheckpointLock Shared 7\n CheckpointStartLock Shared 7\n OidGenLock Exclusive 7\n XidGenLock Exclusive 7\n FreeSpaceLock Exclusive 29\n OidGenLock Shared 30\n XidGenLock Shared 36\n ProcArrayLock Shared 115\n CheckpointStartLock Exclusive 134\n\n Lock Id Combined Time (ns)\n FirstLockMgrLock 64400\n FreeSpaceLock 342300\n WALInsertLock 1759600\n OidGenLock 4276900\n ProcArrayLock 6234300\n XidGenLock 6865000\n CheckpointStartLock 37590800\n CheckpointLock 58994300\n\nbash-3.00# echo 1000 users - STEADY AT PREVIOUS LOW VALUE\n1000 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n BufMappingLock Exclusive 1\n RelCacheInitLock Exclusive 1\n SInvalLock Exclusive 1\n WALInsertLock Exclusive 3\n CheckpointLock Shared 9\n CheckpointStartLock Shared 9\n OidGenLock Exclusive 9\n XidGenLock Exclusive 9\n FirstLockMgrLock Shared 14\n FreeSpaceLock Exclusive 33\n OidGenLock Shared 37\n XidGenLock Shared 44\n CheckpointStartLock Exclusive 122\n ProcArrayLock Shared 145\n\n Lock Id Combined Time (ns)\n RelCacheInitLock 14300\n SInvalLock 15600\n BufMappingLock 21400\n FirstLockMgrLock 184000\n FreeSpaceLock 366200\n WALInsertLock 1769500\n ProcArrayLock 5076500\n XidGenLock 5898400\n OidGenLock 9244800\n CheckpointStartLock 31077500\n CheckpointLock 91861900\n\nbash-3.00# echo 1050 users - SMALL INCREASE\n1050 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n BufMappingLock Exclusive 2\n WALInsertLock Exclusive 9\n CheckpointLock Shared 24\n XidGenLock Exclusive 24\n CheckpointStartLock Shared 25\n OidGenLock Exclusive 25\n FirstLockMgrLock Shared 30\n FreeSpaceLock Exclusive 100\n OidGenLock Shared 107\n XidGenLock Shared 129\n CheckpointStartLock Exclusive 153\n ProcArrayLock Shared 400\n\n Lock Id Combined Time (ns)\n BufMappingLock 36600\n FirstLockMgrLock 420600\n FreeSpaceLock 
2998400\n WALInsertLock 3818300\n ProcArrayLock 8986900\n OidGenLock 18127200\n XidGenLock 18569200\n CheckpointStartLock 44795700\n CheckpointLock 206488400\n\nbash-3.00# echo 1100 users - SMALL DROP AGAIN\n1100 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n BufMappingLock Exclusive 1\n WALInsertLock Exclusive 6\n CheckpointLock Shared 11\n XidGenLock Exclusive 11\n CheckpointStartLock Shared 12\n OidGenLock Exclusive 12\n FirstLockMgrLock Shared 24\n FreeSpaceLock Exclusive 39\n OidGenLock Shared 44\n XidGenLock Shared 51\n CheckpointStartLock Exclusive 88\n ProcArrayLock Shared 171\n\n Lock Id Combined Time (ns)\n BufMappingLock 19500\n FirstLockMgrLock 302700\n FreeSpaceLock 511200\n ProcArrayLock 5042300\n WALInsertLock 5592800\n CheckpointStartLock 25009900\n OidGenLock 25231600\n XidGenLock 108045300\n CheckpointLock 379734000\n\nbash-3.00# echo 1150 users - STEADY AT LOW VALUE\n1150 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n WALWriteLock Exclusive 1\n WALInsertLock Exclusive 2\n CheckpointLock Shared 5\n CheckpointStartLock Shared 6\n OidGenLock Exclusive 6\n XidGenLock Exclusive 6\n FirstLockMgrLock Shared 8\n FreeSpaceLock Exclusive 21\n OidGenLock Shared 26\n XidGenLock Shared 31\n ProcArrayLock Shared 93\n CheckpointStartLock Exclusive 122\n\n Lock Id Combined Time (ns)\n WALWriteLock 14900\n WALInsertLock 116900\n FirstLockMgrLock 120600\n FreeSpaceLock 2177800\n XidGenLock 4899200\n ProcArrayLock 20721700\n CheckpointStartLock 27805200\n CheckpointLock 76369300\n OidGenLock 470145800\n\nbash-3.00# echo 1250 users - STEADY AT LOW VALUE\n1250 users\nbash-3.00# ./3_lwlock_acquires.d 19178\n\n Lock Id Mode Count\n CheckpointLock Shared 2\n CheckpointStartLock Shared 2\n OidGenLock Exclusive 2\n WALInsertLock Exclusive 2\n XidGenLock Exclusive 2\n FreeSpaceLock Exclusive 9\n OidGenLock Shared 10\n XidGenLock Shared 12\n ProcArrayLock Shared 36\n CheckpointStartLock Exclusive 135\n\n Lock Id Combined Time (ns)\n WALInsertLock 39500\n FreeSpaceLock 98600\n ProcArrayLock 318800\n XidGenLock 1379900\n OidGenLock 3437700\n CheckpointLock 9565200\n CheckpointStartLock 56547900\n\nbash-3.00#\n\n\n\n", "msg_date": "Wed, 25 Jul 2007 18:58:53 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> Here is how I got the numbers..\n> I had about 1600 users login into postgresql. Then started the run with \n> 500 users and using DTrace I started tracking Postgresql Locking \"as \n> viewed from one user/connection\". Echo statements indicate how many \n> users were active at that point and how was throughput performing. All \n> IO is done on /tmp which means on a RAM disk.\n\n> bash-3.00# echo 500 users - baseline number\n> 500 users\n> bash-3.00# ./3_lwlock_acquires.d 19178\n\n> Lock Id Mode Count\n> FirstLockMgrLock Exclusive 1\n> RelCacheInitLock Exclusive 2\n> SInvalLock Exclusive 2\n> WALInsertLock Exclusive 10\n> BufMappingLock Exclusive 12\n> CheckpointLock Shared 29\n> CheckpointStartLock Shared 29\n> OidGenLock Exclusive 29\n> XidGenLock Exclusive 29\n> FirstLockMgrLock Shared 33\n> CheckpointStartLock Exclusive 78\n> FreeSpaceLock Exclusive 114\n> OidGenLock Shared 126\n> XidGenLock Shared 152\n> ProcArrayLock Shared 482\n\nI don't think I believe these numbers. For one thing, CheckpointLock\nis simply not ever taken in shared mode. 
The ratios of counts for\ndifferent locks seems pretty improbable too, eg there is no way on\nearth that the LockMgrLocks are taken more often shared than\nexclusive (I would expect no shared acquires at all in the sort of\ntest you are running). Not to mention that the absolute number of\ncounts seems way too low. So I think the counting tool is broken.\n\n> Lock Id Combined Time (ns)\n> SInvalLock 29800\n> RelCacheInitLock 30300\n> BufMappingLock 168800\n> FirstLockMgrLock 414300\n> FreeSpaceLock 1281700\n> ProcArrayLock 7869900\n> WALInsertLock 11113200\n> CheckpointStartLock 13494700\n> OidGenLock 25719100\n> XidGenLock 26443300\n> CheckpointLock 194267800\n\nCombined time of what exactly? It looks like this must be the total\nduration the lock is held, at least assuming that the time for\nCheckpointLock is correctly reported. It'd be much more useful to see\nthe total time spent waiting to acquire the lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Jul 2007 19:24:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "The count is only for a 10-second snapshot.. Plus remember there are \nabout 1000 users running so the connection being profiled only gets \n0.01 of the period on CPU.. And the count is for that CONNECTION only.\n\nAnyway using the lock wait script it shows the real picture as you \nrequested. Here the combined time means time \"spent waiting\" for the lock.\n\nbash-3.00# echo 500 users\n500 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Exclusive 1\n CheckpointStartLock Shared 3\n OidGenLock Shared 4\n WALInsertLock Exclusive 7\n FreeSpaceLock Exclusive 8\n XidGenLock Exclusive 15\n CheckpointStartLock Exclusive 16\n\n Lock Id Combined Time (ns)\n XidGenLock 3825000\n OidGenLock 5307100\n WALInsertLock 6317800\n FreeSpaceLock 7244100\n CheckpointStartLock 22199200\n\nbash-3.00# echo 600 users\n600 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Exclusive 1\n WALInsertLock Exclusive 1\n CheckpointStartLock Shared 4\n CheckpointStartLock Exclusive 11\n XidGenLock Exclusive 21\n\n Lock Id Combined Time (ns)\n OidGenLock 1728200\n WALInsertLock 2040700\n XidGenLock 19878500\n CheckpointStartLock 24156500\n\nbash-3.00# echo 700 users\n700 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Shared 1\n XidGenLock Shared 1\n CheckpointStartLock Shared 2\n WALInsertLock Exclusive 4\n CheckpointStartLock Exclusive 6\n FreeSpaceLock Exclusive 6\n XidGenLock Exclusive 13\n\n Lock Id Combined Time (ns)\n OidGenLock 1730000\n WALInsertLock 7253400\n FreeSpaceLock 10977700\n CheckpointStartLock 13356800\n XidGenLock 38220500\n\nbash-3.00# echo 800 users\n800 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n CheckpointStartLock Shared 1\n WALInsertLock Exclusive 2\n XidGenLock Shared 2\n CheckpointStartLock Exclusive 5\n FreeSpaceLock Exclusive 8\n XidGenLock Exclusive 12\n\n Lock Id Combined Time (ns)\n WALInsertLock 3746800\n FreeSpaceLock 7628900\n CheckpointStartLock 11297500\n XidGenLock 16649000\n\nbash-3.00# echo 900 users - BIG DROP IN THROUGHPUT OCCURS...\n900 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Exclusive 1\n OidGenLock Shared 1\n XidGenLock Shared 1\n FreeSpaceLock Exclusive 2\n WALInsertLock Exclusive 2\n CheckpointStartLock Shared 6\n XidGenLock Exclusive 12\n CheckpointStartLock Exclusive 121\n\n Lock Id 
Combined Time (ns)\n OidGenLock 1968100\n FreeSpaceLock 2076300\n WALInsertLock 2190400\n XidGenLock 20259400\n CheckpointStartLock 1407294300\n\nbash-3.00# echo 950 users\n950 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Exclusive 1\n OidGenLock Shared 2\n CheckpointStartLock Shared 3\n WALInsertLock Exclusive 4\n FreeSpaceLock Exclusive 5\n XidGenLock Exclusive 11\n CheckpointStartLock Exclusive 50\n\n Lock Id Combined Time (ns)\n WALInsertLock 5577100\n FreeSpaceLock 9115900\n XidGenLock 13765800\n OidGenLock 50155500\n CheckpointStartLock 759872200\n\nbash-3.00# echo 1000 users\n1000 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n OidGenLock Shared 1\n WALInsertLock Exclusive 1\n XidGenLock Exclusive 5\n CheckpointStartLock Shared 6\n CheckpointStartLock Exclusive 102\n\n Lock Id Combined Time (ns)\n OidGenLock 21900\n WALInsertLock 82500\n XidGenLock 3313400\n CheckpointStartLock 1448289900\n\nbash-3.00# echo 1050 users\n1050 users\nbash-3.00# ./4_lwlock_waits.d 20764\n\n Lock Id Mode Count\n FreeSpaceLock Exclusive 1\n CheckpointStartLock Shared 3\n XidGenLock Exclusive 3\n CheckpointStartLock Exclusive 146\n\n Lock Id Combined Time (ns)\n FreeSpaceLock 18400\n XidGenLock 1900900\n CheckpointStartLock 2392893700\n\nbash-3.00#\n\n\n\n\n\n\n-Jignesh\n\n\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> Here is how I got the numbers..\n>> I had about 1600 users login into postgresql. Then started the run with \n>> 500 users and using DTrace I started tracking Postgresql Locking \"as \n>> viewed from one user/connection\". Echo statements indicate how many \n>> users were active at that point and how was throughput performing. All \n>> IO is done on /tmp which means on a RAM disk.\n>> \n>\n> \n>> bash-3.00# echo 500 users - baseline number\n>> 500 users\n>> bash-3.00# ./3_lwlock_acquires.d 19178\n>> \n> I don't think I believe these numbers. For one thing, CheckpointLock\n> is simply not ever taken in shared mode. The ratios of counts for\n> different locks seems pretty improbable too, eg there is no way on\n> earth that the LockMgrLocks are taken more often shared than\n> exclusive (I would expect no shared acquires at all in the sort of\n> test you are running). Not to mention that the absolute number of\n> counts seems way too low. So I think the counting tool is broken.\n>\n> \n> Combined time of what exactly? It looks like this must be the total\n> duration the lock is held, at least assuming that the time for\n> CheckpointLock is correctly reported. It'd be much more useful to see\n> the total time spent waiting to acquire the lock.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Thu, 26 Jul 2007 10:29:51 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> The count is only for a 10-second snapshot.. Plus remember there are \n> about 1000 users running so the connection being profiled only gets \n> 0.01 of the period on CPU.. And the count is for that CONNECTION only.\n\nOK, that explains the low absolute levels of the counts, but if the\ncounts are for a regular backend then there's still a lot of bogosity\nhere. 
Backends won't be taking the CheckpointLock at all, nor do they\ntake CheckpointStartLock in exclusive mode. The bgwriter would do that\nbut it'd not be taking most of these other locks. So I think the script\nis mislabeling the locks somehow.\n\nAlso, elementary statistics should tell you that a sample taken as above\nis going to have enormous amounts of noise. You should be sampling over\na much longer period, say on the order of a minute of runtime, to have\nnumbers that are trustworthy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jul 2007 10:40:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Thu, 2007-07-26 at 10:29 -0400, Jignesh K. Shah wrote:\n> The count is only for a 10-second snapshot.. Plus remember there are \n> about 1000 users running so the connection being profiled only gets \n> 0.01 of the period on CPU.. And the count is for that CONNECTION only.\n\nIs that for one process, or all processes aggregated in some way?\n\n> CheckpointStartLock Shared 6\n> CheckpointStartLock Exclusive 102\n\nThat's definitely whacked. Surely we didn't start 102 checkpoints yet\nattempt to commit 6 times?\n \n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 26 Jul 2007 16:25:31 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "I will look for runs with longer samples..\n\nNow the script could have mislabeled lock names.. Anyway digging into \nthe one that seems to increase over time... I did stack profiles on how \nthat increases... and here are some numbers..\n\n\nFor 600-850 Users: that potential mislabeled CheckPointStartLock or \nLockID==12 comes from various sources where the top source (while system \nis still doing great) comes from:\n\n\n\n postgres`LWLockAcquire+0x1c8\n postgres`SimpleLruReadPage_ReadOnly+0xc\n postgres`TransactionIdGetStatus+0x14\n postgres`TransactionLogFetch+0x58\n postgres`TransactionIdDidCommit+0x4\n postgres`HeapTupleSatisfiesSnapshot+0x234\n postgres`heap_release_fetch+0x1a8\n postgres`index_getnext+0xf4\n postgres`IndexNext+0x7c\n postgres`ExecScan+0x8c\n postgres`ExecProcNode+0xb4\n postgres`ExecutePlan+0xdc\n postgres`ExecutorRun+0xb0\n postgres`PortalRunSelect+0x9c\n postgres`PortalRun+0x244\n postgres`exec_execute_message+0x3a0\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n 8202100\n\n postgres`LWLockAcquire+0x1c8\n postgres`TransactionIdSetStatus+0x1c\n postgres`RecordTransactionCommit+0x2a8\n postgres`CommitTransaction+0xc8\n postgres`CommitTransactionCommand+0x90\n postgres`finish_xact_command+0x60\n postgres`exec_execute_message+0x3d8\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 30822100\n\n\nHowever at 900 Users where the big drop in throughput occurs:\nIt gives a different top \"consumer\" of time:\n\n\n\n postgres`LWLockAcquire+0x1c8\n postgres`TransactionIdSetStatus+0x1c\n postgres`RecordTransactionCommit+0x2a8\n postgres`CommitTransaction+0xc8\n postgres`CommitTransactionCommand+0x90\n postgres`finish_xact_command+0x60\n postgres`exec_execute_message+0x3d8\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n 
postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 406601300\n\n postgres`LWLockAcquire+0x1c8\n postgres`SimpleLruReadPage+0x1ac\n postgres`TransactionIdGetStatus+0x14\n postgres`TransactionLogFetch+0x58\n postgres`TransactionIdDidCommit+0x4\n postgres`HeapTupleSatisfiesUpdate+0x360\n postgres`heap_lock_tuple+0x27c\n postgres`ExecutePlan+0x33c\n postgres`ExecutorRun+0xb0\n postgres`PortalRunSelect+0x9c\n postgres`PortalRun+0x244\n postgres`exec_execute_message+0x3a0\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 444523100\n\n postgres`LWLockAcquire+0x1c8\n postgres`SimpleLruReadPage+0x1ac\n postgres`TransactionIdGetStatus+0x14\n postgres`TransactionLogFetch+0x58\n postgres`TransactionIdDidCommit+0x4\n postgres`HeapTupleSatisfiesSnapshot+0x234\n postgres`heap_release_fetch+0x1a8\n postgres`index_getnext+0xf4\n postgres`IndexNext+0x7c\n postgres`ExecScan+0x8c\n postgres`ExecProcNode+0xb4\n postgres`ExecutePlan+0xdc\n postgres`ExecutorRun+0xb0\n postgres`PortalRunSelect+0x9c\n postgres`PortalRun+0x244\n postgres`exec_execute_message+0x3a0\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n 1661300000\n\n\n\nMaybe you all will understand more than I do on what it does here.. \nLooks like IndexNext has a problem at high number of users to me.. but I \ncould be wrong..\n\n-Jignesh\n\n\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> The count is only for a 10-second snapshot.. Plus remember there are \n>> about 1000 users running so the connection being profiled only gets \n>> 0.01 of the period on CPU.. And the count is for that CONNECTION only.\n>> \n>\n> OK, that explains the low absolute levels of the counts, but if the\n> counts are for a regular backend then there's still a lot of bogosity\n> here. Backends won't be taking the CheckpointLock at all, nor do they\n> take CheckpointStartLock in exclusive mode. The bgwriter would do that\n> but it'd not be taking most of these other locks. So I think the script\n> is mislabeling the locks somehow.\n>\n> Also, elementary statistics should tell you that a sample taken as above\n> is going to have enormous amounts of noise. You should be sampling over\n> a much longer period, say on the order of a minute of runtime, to have\n> numbers that are trustworthy.\n>\n> \t\t\tregards, tom lane\n> \n", "msg_date": "Thu, 26 Jul 2007 11:27:30 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> For 600-850 Users: that potential mislabeled CheckPointStartLock or \n> LockID==12 comes from various sources where the top source (while system \n> is still doing great) comes from:\n\n> postgres`LWLockAcquire+0x1c8\n> postgres`SimpleLruReadPage_ReadOnly+0xc\n> postgres`TransactionIdGetStatus+0x14\n\nThat path would be taking CLogControlLock ... so you're off by at least\none. Compare the script to src/include/storage/lwlock.h.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jul 2007 11:42:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look? " }, { "msg_contents": "On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. 
Shah wrote:\n\n> However at 900 Users where the big drop in throughput occurs:\n> It gives a different top \"consumer\" of time:\n\n\n postgres`LWLockAcquire+0x1c8\n> postgres`SimpleLruReadPage+0x1ac\n> postgres`TransactionIdGetStatus+0x14\n> postgres`TransactionLogFetch+0x58\n\nTransactionIdGetStatus doesn't directly call SimpleLruReadPage().\nPresumably the compiler has been rearranging things??\n\nLooks like you're out of clog buffers. It seems like the clog buffers\naren't big enough to hold clog pages for long enough and the SELECT FOR\nSHARE processing is leaving lots of additional read locks that are\nincreasing the number of clog requests for older xids.\n\nTry the enclosed patch.\n \n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com", "msg_date": "Thu, 26 Jul 2007 16:51:16 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "BEAUTIFUL!!!\n\nUsing the Patch I can now go upto 1300 users without dropping.. But now \nit still repeats at 1300-1350 users..\n\nI corrected the Lock Descriptions based on what I got from lwlock.h and \nretried the whole scalability again with profiling again.. This time it \nlooks like the ProcArrayLock\n\n\nbash-3.00# echo 600 users\n600 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n XidGenLock Exclusive 1\n CLogControlLock Shared 2\n XidGenLock Shared 2\n WALWriteLock Exclusive 4\n WALInsertLock Exclusive 8\n CLogControlLock Exclusive 9\n ProcArrayLock Exclusive 9\n\n Lock Id Combined Time (ns)\n WALWriteLock 2842300\n XidGenLock 4951000\n CLogControlLock 11151800\n WALInsertLock 13035600\n ProcArrayLock 20040000\n\nbash-3.00# echo 700 users\n700 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n XidGenLock Exclusive 1\n WALWriteLock Exclusive 2\n XidGenLock Shared 2\n CLogControlLock Shared 3\n CLogControlLock Exclusive 8\n WALInsertLock Exclusive 9\n ProcArrayLock Exclusive 22\n\n Lock Id Combined Time (ns)\n XidGenLock 4093300\n WALWriteLock 4914800\n WALInsertLock 7389100\n ProcArrayLock 10248200\n CLogControlLock 11989400\n\nbash-3.00# echo 800 users\n800 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n WALWriteLock Exclusive 1\n XidGenLock Shared 2\n CLogControlLock Shared 3\n CLogControlLock Exclusive 7\n WALInsertLock Exclusive 7\n ProcArrayLock Exclusive 31\n\n Lock Id Combined Time (ns)\n WALWriteLock 319100\n XidGenLock 5388700\n WALInsertLock 9901400\n CLogControlLock 13465000\n ProcArrayLock 42979700\n\nbash-3.00# echo 900 users\n900 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n CLogControlLock Shared 1\n XidGenLock Exclusive 1\n WALWriteLock Exclusive 2\n CLogControlLock Exclusive 6\n WALInsertLock Exclusive 9\n ProcArrayLock Exclusive 25\n\n Lock Id Combined Time (ns)\n XidGenLock 3197700\n WALWriteLock 3887100\n CLogControlLock 15774500\n WALInsertLock 38268700\n ProcArrayLock 162073100\n\nbash-3.00# ./6_lwlock_stack.d 4 7056\n\n Lock Id Mode Count\n ProcArrayLock Shared 1\n ProcArrayLock Exclusive 67\n\n Lock Id Combined Time (ns)\n ProcArrayLock 216773800\n\n Lock Id Combined Time (ns)\n\n\n postgres`LWLockAcquire+0x1c8\n postgres`GetSnapshotData+0x118\n postgres`GetTransactionSnapshot+0x5c\n postgres`PortalStart+0x150\n postgres`exec_bind_message+0x81c\n postgres`PostgresMain+0x12b8\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 2779000\n\n 
postgres`LWLockAcquire+0x1c8\n postgres`CommitTransaction+0xe0\n postgres`CommitTransactionCommand+0x90\n postgres`finish_xact_command+0x60\n postgres`exec_execute_message+0x3d8\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 213994800\n\nbash-3.00# echo 1000 users - SLIGHT DROP\n1000 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n WALWriteLock Exclusive 1\n CLogControlLock Exclusive 2\n XidGenLock Shared 2\n CLogControlLock Shared 3\n WALInsertLock Exclusive 4\n ProcArrayLock Exclusive 26\n\n Lock Id Combined Time (ns)\n WALWriteLock 1807400\n XidGenLock 2024000\n WALInsertLock 2177500\n CLogControlLock 9064700\n ProcArrayLock 199216000\n\nbash-3.00# ./6_lwlock_stack.d 4 7056\n\n Lock Id Mode Count\n ProcArrayLock Shared 3\n ProcArrayLock Exclusive 38\n\n Lock Id Combined Time (ns)\n ProcArrayLock 858238600\n\n Lock Id Combined Time (ns)\n\n\n postgres`LWLockAcquire+0x1c8\n postgres`TransactionIdIsInProgress+0x50\n postgres`HeapTupleSatisfiesVacuum+0x2ec\n postgres`_bt_check_unique+0x2a0\n postgres`_bt_doinsert+0x98\n postgres`btinsert+0x54\n postgres`FunctionCall6+0x44\n postgres`index_insert+0x90\n postgres`ExecInsertIndexTuples+0x1bc\n postgres`ExecUpdate+0x500\n postgres`ExecutePlan+0x704\n postgres`ExecutorRun+0x60\n postgres`PortalRunMulti+0x2a0\n postgres`PortalRun+0x310\n postgres`exec_execute_message+0x3a0\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n 167100\n\n postgres`LWLockAcquire+0x1c8\n postgres`GetSnapshotData+0x118\n postgres`GetTransactionSnapshot+0x5c\n postgres`PortalRunMulti+0x22c\n postgres`PortalRun+0x310\n postgres`exec_execute_message+0x3a0\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 7125900\n\n postgres`LWLockAcquire+0x1c8\n postgres`CommitTransaction+0xe0\n postgres`CommitTransactionCommand+0x90\n postgres`finish_xact_command+0x60\n postgres`exec_execute_message+0x3d8\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 850945600\n\nbash-3.00# echo 1100 users - DROP ....\n1100 users\nbash-3.00# ./4_lwlock_waits.d 7056\n\n Lock Id Mode Count\n CLogControlLock Shared 1\n WALWriteLock Exclusive 1\n XidGenLock Exclusive 1\n ProcArrayLock Shared 2\n WALInsertLock Exclusive 2\n XidGenLock Shared 2\n CLogControlLock Exclusive 4\n ProcArrayLock Exclusive 20\n\n Lock Id Combined Time (ns)\n WALWriteLock 4179500\n XidGenLock 6249400\n CLogControlLock 20411100\n WALInsertLock 29707600\n ProcArrayLock 207923700\n\nbash-3.00# ./6_lwlock_stack.d 4 7056\n\n Lock Id Mode Count\n ProcArrayLock Exclusive 40\n\n Lock Id Combined Time (ns)\n ProcArrayLock 692242100\n\n Lock Id Combined Time (ns)\n\n\n postgres`LWLockAcquire+0x1c8\n postgres`CommitTransaction+0xe0\n postgres`CommitTransactionCommand+0x90\n postgres`finish_xact_command+0x60\n postgres`exec_execute_message+0x3d8\n postgres`PostgresMain+0x1300\n postgres`BackendRun+0x278\n postgres`ServerLoop+0x63c\n postgres`PostmasterMain+0xc40\n postgres`main+0x394\n postgres`_start+0x108\n 692242100\n\nbash-3.00#\n\n\n\n\nLockID for ProcArrayLock is 4 or the 5 entry in lwlock.h which seems to \nindicate it is lwlock.h\nAny tweaks for that?\n\n-Jignesh\n\n\nSimon Riggs 
wrote:\n> On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:\n>\n> \n>> However at 900 Users where the big drop in throughput occurs:\n>> It gives a different top \"consumer\" of time:\n>> \n>\n>\n> postgres`LWLockAcquire+0x1c8\n> \n>> postgres`SimpleLruReadPage+0x1ac\n>> postgres`TransactionIdGetStatus+0x14\n>> postgres`TransactionLogFetch+0x58\n>> \n>\n> TransactionIdGetStatus doesn't directly call SimpleLruReadPage().\n> Presumably the compiler has been rearranging things??\n>\n> Looks like you're out of clog buffers. It seems like the clog buffers\n> aren't big enough to hold clog pages for long enough and the SELECT FOR\n> SHARE processing is leaving lots of additional read locks that are\n> increasing the number of clog requests for older xids.\n>\n> Try the enclosed patch.\n> \n> \n> ------------------------------------------------------------------------\n>\n> Index: src/include/access/clog.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/access/clog.h,v\n> retrieving revision 1.19\n> diff -c -r1.19 clog.h\n> *** src/include/access/clog.h\t5 Jan 2007 22:19:50 -0000\t1.19\n> --- src/include/access/clog.h\t26 Jul 2007 15:44:58 -0000\n> ***************\n> *** 29,35 ****\n> \n> \n> /* Number of SLRU buffers to use for clog */\n> ! #define NUM_CLOG_BUFFERS\t8\n> \n> \n> extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);\n> --- 29,35 ----\n> \n> \n> /* Number of SLRU buffers to use for clog */\n> ! #define NUM_CLOG_BUFFERS\t64\t\n> \n> \n> extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);\n> \n> ------------------------------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Thu, 26 Jul 2007 15:44:42 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Tom Lane wrote:\n> \n> That path would be taking CLogControlLock ... so you're off by at least\n> one. Compare the script to src/include/storage/lwlock.h.\n> \n\nIndeed, the indexing was off by one due to the removal of \nBuffMappingLock in src/include/storage/lwlock.h between 8.1 and 8.2 \nwhich was not updated in the DTrace script.\n\nThanks,\nRobert\n\n", "msg_date": "Thu, 26 Jul 2007 14:56:08 -0500", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Thu, 2007-07-26 at 15:44 -0400, Jignesh K. Shah wrote:\n> BEAUTIFUL!!!\n> \n> Using the Patch I can now go upto 1300 users without dropping.. But now \n> it still repeats at 1300-1350 users..\n\nOK, can you try again with 16 and 32 buffers please? We need to know\nhow many is enough and whether this number needs to be variable via a\nparameter, or just slightly larger by default. Thanks.\n\n> I corrected the Lock Descriptions based on what I got from lwlock.h and \n> retried the whole scalability again with profiling again.. 
This time it \n> looks like the ProcArrayLock\n\nThat's what I would expect with that many users.\n\n> Lock Id Mode Count\n> XidGenLock Exclusive 1\n> CLogControlLock Shared 2\n> XidGenLock Shared 2\n> WALWriteLock Exclusive 4\n> WALInsertLock Exclusive 8\n> CLogControlLock Exclusive 9\n> ProcArrayLock Exclusive 9\n\n...but as Tom mentioned, we need to do longer runs now so these counts\nget to somewhere in the hundreds so we have some statistical validity.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 26 Jul 2007 21:04:50 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Will try 16 and 32 CLOGBUFFER tomorrow:\n\nBut here is locks data again with about increased time profiling (about \n2 minutes) for the connection with about 2000 users:\n\nbash-3.00# time ./4_lwlock_waits.d 13583\n^C\n\n Lock Id Mode Count\n ProcArrayLock Shared 5\n XidGenLock Exclusive 13\n CLogControlLock Shared 14\n XidGenLock Shared 15\n CLogControlLock Exclusive 21\n WALInsertLock Exclusive 77\n WALWriteLock Exclusive 175\n ProcArrayLock Exclusive 275\n\n Lock Id Combined Time (ns)\n XidGenLock 194966200\n WALInsertLock 517955000\n CLogControlLock 679665100\n WALWriteLock 2838716200\n ProcArrayLock 44181002600\n\n\nTop Wait time seems to come from the following code path for \nProcArrayLock:\n\n Lock Id Mode Count\n ProcArrayLock Exclusive 21\n\n Lock Id Combined Time (ns)\n ProcArrayLock 5255937500\n\n Lock Id Combined Time (ns)\n\n\n postgres`LWLockAcquire+0x1f0\n postgres`CommitTransaction+0x104\n postgres`CommitTransactionCommand+0xbc\n postgres`finish_xact_command+0x78\n postgres`exec_execute_message+0x42c\n postgres`PostgresMain+0x1838\n postgres`BackendRun+0x2f8\n postgres`ServerLoop+0x680\n postgres`PostmasterMain+0xda8\n postgres`main+0x3d0\n postgres`_start+0x17c\n 5255937500\n\n\n\nRegards,\nJignesh\n\n\nSimon Riggs wrote:\n> On Thu, 2007-07-26 at 15:44 -0400, Jignesh K. Shah wrote:\n> \n>> BEAUTIFUL!!!\n>>\n>> Using the Patch I can now go upto 1300 users without dropping.. But now \n>> it still repeats at 1300-1350 users..\n>> \n>\n> OK, can you try again with 16 and 32 buffers please? We need to know\n> how many is enough and whether this number needs to be variable via a\n> parameter, or just slightly larger by default. Thanks.\n>\n> \n>> I corrected the Lock Descriptions based on what I got from lwlock.h and \n>> retried the whole scalability again with profiling again.. This time it \n>> looks like the ProcArrayLock\n>> \n>\n> That's what I would expect with that many users.\n>\n> \n>> Lock Id Mode Count\n>> XidGenLock Exclusive 1\n>> CLogControlLock Shared 2\n>> XidGenLock Shared 2\n>> WALWriteLock Exclusive 4\n>> WALInsertLock Exclusive 8\n>> CLogControlLock Exclusive 9\n>> ProcArrayLock Exclusive 9\n>> \n>\n> ...but as Tom mentioned, we need to do longer runs now so these counts\n> get to somewhere in the hundreds so we have some statistical validity.\n>\n> \n", "msg_date": "Thu, 26 Jul 2007 17:17:55 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Jignesh K. 
Shah wrote:\n\n> Top Wait time seems to come from the following code path for \n> ProcArrayLock:\n>\n> Lock Id Mode Count\n> ProcArrayLock Exclusive 21\n>\n> Lock Id Combined Time (ns)\n> ProcArrayLock 5255937500\n>\n> Lock Id Combined Time (ns)\n>\n>\n> postgres`LWLockAcquire+0x1f0\n> postgres`CommitTransaction+0x104\n\nYeah, ProcArrayLock is pretty contended. I think it would be kinda neat\nif we could split it up in partitions. This lock is quite particular\nthough.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 27 Jul 2007 04:58:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Fri, 2007-07-27 at 04:58 -0400, Alvaro Herrera wrote:\n> Jignesh K. Shah wrote:\n> \n> > Top Wait time seems to come from the following code path for \n> > ProcArrayLock:\n> >\n> > Lock Id Mode Count\n> > ProcArrayLock Exclusive 21\n> >\n> > Lock Id Combined Time (ns)\n> > ProcArrayLock 5255937500\n> >\n> > Lock Id Combined Time (ns)\n> >\n> >\n> > postgres`LWLockAcquire+0x1f0\n> > postgres`CommitTransaction+0x104\n> \n> Yeah, ProcArrayLock is pretty contended. I think it would be kinda neat\n> if we could split it up in partitions. This lock is quite particular\n> though.\n\nMaybe, if we did we should set the partitions according to numbers of\nusers, so lower numbers of users are all in one partition.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 27 Jul 2007 10:15:13 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "On Thu, 2007-07-26 at 17:17 -0400, Jignesh K. Shah wrote:\n> Lock Id Combined Time (ns)\n> XidGenLock 194966200\n> WALInsertLock 517955000\n> CLogControlLock 679665100\n> WALWriteLock 2838716200\n> ProcArrayLock 44181002600\n\nIs this the time the lock is held for or the time that we wait for that\nlock? It would be good to see the break down of time separately for\nshared and exclusive.\n\nCan we have a table like this:\n\tLockId,LockMode,SumTimeLockHeld,SumTimeLockWait\n\n> Top Wait time seems to come from the following code path for \n> ProcArrayLock:\n> \n> Lock Id Mode Count\n> ProcArrayLock Exclusive 21\n> \n> Lock Id Combined Time (ns)\n> ProcArrayLock 5255937500\n> \n> Lock Id Combined Time (ns)\n> \n> \n> postgres`LWLockAcquire+0x1f0\n> postgres`CommitTransaction+0x104\n> postgres`CommitTransactionCommand+0xbc\n> postgres`finish_xact_command+0x78\n\nWell thats pretty weird. That code path clearly only happens once per\ntransaction and ought to be fast. The other code paths that take\nProcArrayLock like TransactionIdIsInProgress() and GetSnapshotData()\nought to spend more time holding the lock. Presumably you are running\nwith a fair number of SERIALIZABLE transactions? \n\nAre you running with commit_delay > 0? Its possible that the call to\nCountActiveBackends() is causing pinging of the procarray by other\nbackends while we're trying to read it during CommitTransaction(). If\nso, try the attached patch.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com", "msg_date": "Fri, 27 Jul 2007 10:15:27 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" 
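A sketch of how the LockId / LockMode / SumTimeLockHeld / SumTimeLockWait table asked for above could be collected, modelled on the scripts used earlier in the thread. It assumes the 8.2 DTrace probes lwlock-startwait/lwlock-endwait and lwlock-acquire/lwlock-release with arg0 = lock id and arg1 = mode; the probe names and arguments are an assumption, so verify them against the installed provider, and map the numeric lock ids using src/include/storage/lwlock.h:

    #!/usr/sbin/dtrace -qs
    /* sketch only -- usage: ./lwlock_times.d <backend pid>, Ctrl-C to print */

    postgresql$1:::lwlock-startwait
    {
        self->wts[arg0] = timestamp;       /* wait started on lock arg0 */
        self->wmode[arg0] = arg1;          /* remember the requested mode */
    }

    postgresql$1:::lwlock-endwait
    /self->wts[arg0]/
    {
        @wait[arg0, self->wmode[arg0]] = sum(timestamp - self->wts[arg0]);
        self->wts[arg0] = 0;
        self->wmode[arg0] = 0;
    }

    postgresql$1:::lwlock-acquire
    {
        self->hts[arg0] = timestamp;       /* lock arg0 now held */
        self->hmode[arg0] = arg1;
    }

    postgresql$1:::lwlock-release
    /self->hts[arg0]/
    {
        @held[arg0, self->hmode[arg0]] = sum(timestamp - self->hts[arg0]);
        self->hts[arg0] = 0;
        self->hmode[arg0] = 0;
    }

    END
    {
        printf("\nwait time (ns) by lock id, mode:\n");
        printa(@wait);
        printf("\nheld time (ns) by lock id, mode:\n");
        printa(@held);
    }

Letting it run for a minute or more before Ctrl-C also addresses the sampling-noise point raised earlier, since the counts then reach the hundreds. Tracing every acquire/release does add probe overhead, so it is best kept to one or two backends at a time.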
}, { "msg_contents": "Yes I can try to breakup the Shared and exclusive time..\n\nAlso yes I use commit delays =10, it helps out a lot in reducing IO load..\n\nI will try out the patch soon.\n\n-Jignesh\n\n\nSimon Riggs wrote:\n> On Thu, 2007-07-26 at 17:17 -0400, Jignesh K. Shah wrote:\n> \n>> Lock Id Combined Time (ns)\n>> XidGenLock 194966200\n>> WALInsertLock 517955000\n>> CLogControlLock 679665100\n>> WALWriteLock 2838716200\n>> ProcArrayLock 44181002600\n>> \n>\n> Is this the time the lock is held for or the time that we wait for that\n> lock? It would be good to see the break down of time separately for\n> shared and exclusive.\n>\n> Can we have a table like this:\n> \tLockId,LockMode,SumTimeLockHeld,SumTimeLockWait\n>\n> \n>> Top Wait time seems to come from the following code path for \n>> ProcArrayLock:\n>>\n>> Lock Id Mode Count\n>> ProcArrayLock Exclusive 21\n>>\n>> Lock Id Combined Time (ns)\n>> ProcArrayLock 5255937500\n>>\n>> Lock Id Combined Time (ns)\n>>\n>>\n>> postgres`LWLockAcquire+0x1f0\n>> postgres`CommitTransaction+0x104\n>> postgres`CommitTransactionCommand+0xbc\n>> postgres`finish_xact_command+0x78\n>> \n>\n> Well thats pretty weird. That code path clearly only happens once per\n> transaction and ought to be fast. The other code paths that take\n> ProcArrayLock like TransactionIdIsInProgress() and GetSnapshotData()\n> ought to spend more time holding the lock. Presumably you are running\n> with a fair number of SERIALIZABLE transactions? \n>\n> Are you running with commit_delay > 0? Its possible that the call to\n> CountActiveBackends() is causing pinging of the procarray by other\n> backends while we're trying to read it during CommitTransaction(). If\n> so, try the attached patch.\n>\n> \n", "msg_date": "Fri, 27 Jul 2007 08:49:57 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "I tried CLOG Buffers 32 and the performance is as good as 64 bit.. (I \nhavent tried 16 yet though.. ) I am going to try your second patch now..\n\nAlso here is the breakup by Mode. The combined time is the total time it \nwaits for all counts.\n\n\n Lock Id Mode Count\n ProcArrayLock Shared 1\n CLogControlLock Exclusive 4\n CLogControlLock Shared 4\n XidGenLock Shared 4\n XidGenLock Exclusive 7\n WALInsertLock Exclusive 21\n WALWriteLock Exclusive 62\n ProcArrayLock Exclusive 79\n\n Lock Id Mode Combined Time (ns)\n CLogControlLock Exclusive 325200\n CLogControlLock Shared 4509200\n XidGenLock Exclusive 11839600\n ProcArrayLock Shared 40506600\n XidGenLock Shared 119013700\n WALInsertLock Exclusive 148063100\n WALWriteLock Exclusive 347052100\n ProcArrayLock Exclusive 1054780600\n\nHere is another one at higher user count 1600:\n\nbash-3.00# ./4_lwlock_waits.d 9208\n\n Lock Id Mode Count\n CLogControlLock Exclusive 1\n CLogControlLock Shared 2\n XidGenLock Shared 7\n WALInsertLock Exclusive 12\n WALWriteLock Exclusive 50\n ProcArrayLock Exclusive 82\n\n Lock Id Mode Combined Time (ns)\n CLogControlLock Exclusive 27300\n XidGenLock Shared 14689300\n CLogControlLock Shared 72664900\n WALInsertLock Exclusive 101431300\n WALWriteLock Exclusive 534357400\n ProcArrayLock Exclusive 4110350300\n\nNow I will try with your second patch.\n\nRegards,\nJignesh\n\nSimon Riggs wrote:\n> On Thu, 2007-07-26 at 17:17 -0400, Jignesh K. 
Shah wrote:\n> \n>> Lock Id Combined Time (ns)\n>> XidGenLock 194966200\n>> WALInsertLock 517955000\n>> CLogControlLock 679665100\n>> WALWriteLock 2838716200\n>> ProcArrayLock 44181002600\n>> \n>\n> Is this the time the lock is held for or the time that we wait for that\n> lock? It would be good to see the break down of time separately for\n> shared and exclusive.\n>\n> Can we have a table like this:\n> \tLockId,LockMode,SumTimeLockHeld,SumTimeLockWait\n>\n> \n>> Top Wait time seems to come from the following code path for \n>> ProcArrayLock:\n>>\n>> Lock Id Mode Count\n>> ProcArrayLock Exclusive 21\n>>\n>> Lock Id Combined Time (ns)\n>> ProcArrayLock 5255937500\n>>\n>> Lock Id Combined Time (ns)\n>>\n>>\n>> postgres`LWLockAcquire+0x1f0\n>> postgres`CommitTransaction+0x104\n>> postgres`CommitTransactionCommand+0xbc\n>> postgres`finish_xact_command+0x78\n>> \n>\n> Well thats pretty weird. That code path clearly only happens once per\n> transaction and ought to be fast. The other code paths that take\n> ProcArrayLock like TransactionIdIsInProgress() and GetSnapshotData()\n> ought to spend more time holding the lock. Presumably you are running\n> with a fair number of SERIALIZABLE transactions? \n>\n> Are you running with commit_delay > 0? Its possible that the call to\n> CountActiveBackends() is causing pinging of the procarray by other\n> backends while we're trying to read it during CommitTransaction(). If\n> so, try the attached patch.\n>\n> \n> ------------------------------------------------------------------------\n>\n> Index: src/backend/access/transam/xact.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/xact.c,v\n> retrieving revision 1.245\n> diff -c -r1.245 xact.c\n> *** src/backend/access/transam/xact.c\t7 Jun 2007 21:45:58 -0000\t1.245\n> --- src/backend/access/transam/xact.c\t27 Jul 2007 09:09:08 -0000\n> ***************\n> *** 820,827 ****\n> \t\t\t * are fewer than CommitSiblings other backends with active\n> \t\t\t * transactions.\n> \t\t\t */\n> ! \t\t\tif (CommitDelay > 0 && enableFsync &&\n> ! \t\t\t\tCountActiveBackends() >= CommitSiblings)\n> \t\t\t\tpg_usleep(CommitDelay);\n> \n> \t\t\tXLogFlush(recptr);\n> --- 820,826 ----\n> \t\t\t * are fewer than CommitSiblings other backends with active\n> \t\t\t * transactions.\n> \t\t\t */\n> ! \t\t\tif (CommitDelay > 0 && enableFsync)\n> \t\t\t\tpg_usleep(CommitDelay);\n> \n> \t\t\tXLogFlush(recptr);\n> \n> ------------------------------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Fri, 27 Jul 2007 15:11:35 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" 
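The per-stack wait breakdowns shown in the surrounding messages (the 6_lwlock_stack.d output) can be approximated the same way. Below is a sketch that aggregates user stacks by wait time for a single lock id, again assuming the lwlock-startwait/lwlock-endwait probes with arg0 = lock id and arg1 = mode, and taking the lock id and pid on the command line like the two-argument invocation used above (e.g. lock id 4 for ProcArrayLock, per lwlock.h as noted earlier):

    #!/usr/sbin/dtrace -qs
    /* sketch only -- usage: ./lwlock_stack_sketch.d <lock id> <backend pid> */

    postgresql$2:::lwlock-startwait
    /arg0 == $1/
    {
        self->ts = timestamp;
        self->mode = arg1;
    }

    postgresql$2:::lwlock-endwait
    /self->ts && arg0 == $1/
    {
        @waits[self->mode, ustack()] = sum(timestamp - self->ts);
        self->ts = 0;
    }

    END
    {
        printa(@waits);
    }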
}, { "msg_contents": "Using CLOG Buffers 32 and the commit sibling check patch I still see a \ndrop at 1200-1300 users..\n\n\n\nbash-3.00# ./4_lwlock_waits.d 18250\n\n Lock Id Mode Count\n XidGenLock Shared 1\n CLogControlLock Shared 2\n ProcArrayLock Shared 2\n XidGenLock Exclusive 4\n CLogControlLock Exclusive 15\n WALInsertLock Exclusive 18\n WALWriteLock Exclusive 38\n ProcArrayLock Exclusive 77\n\n Lock Id Mode Combined Time (ns)\n XidGenLock Shared 88700\n WALInsertLock Exclusive 69556000\n ProcArrayLock Shared 95656800\n XidGenLock Exclusive 139634100\n CLogControlLock Exclusive 148822200\n CLogControlLock Shared 161630000\n WALWriteLock Exclusive 332781800\n ProcArrayLock Exclusive 5688265500\n\nbash-3.00# ./4_lwlock_waits.d 18599\n\n Lock Id Mode Count\n ProcArrayLock Shared 2\n XidGenLock Exclusive 3\n XidGenLock Shared 4\n CLogControlLock Shared 5\n WALInsertLock Exclusive 10\n CLogControlLock Exclusive 21\n WALWriteLock Exclusive 28\n ProcArrayLock Exclusive 54\n\n Lock Id Mode Combined Time (ns)\n XidGenLock Exclusive 5688800\n WALInsertLock Exclusive 11424700\n CLogControlLock Shared 55589100\n ProcArrayLock Shared 135220400\n WALWriteLock Exclusive 177906900\n XidGenLock Shared 524146500\n CLogControlLock Exclusive 524563900\n ProcArrayLock Exclusive 5828744500\n\nbash-3.00#\nbash-3.00# ./6_lwlock_stack.d 4 18599\n\n Lock Id Mode Count\n ProcArrayLock Shared 1\n ProcArrayLock Exclusive 52\n\n Lock Id Mode Combined Time (ns)\n ProcArrayLock Shared 41428300\n ProcArrayLock Exclusive 3858386500\n\n Lock Id Combined Time (ns)\n\n\n postgres`LWLockAcquire+0x1f0\n postgres`GetSnapshotData+0x120\n postgres`GetTransactionSnapshot+0x80\n postgres`PortalStart+0x198\n postgres`exec_bind_message+0x84c\n postgres`PostgresMain+0x17f8\n postgres`BackendRun+0x2f8\n postgres`ServerLoop+0x680\n postgres`PostmasterMain+0xda8\n postgres`main+0x3d0\n postgres`_start+0x17c\n Shared 41428300\n\n postgres`LWLockAcquire+0x1f0\n postgres`CommitTransaction+0x104\n postgres`CommitTransactionCommand+0xbc\n postgres`finish_xact_command+0x78\n postgres`exec_execute_message+0x42c\n postgres`PostgresMain+0x1838\n postgres`BackendRun+0x2f8\n postgres`ServerLoop+0x680\n postgres`PostmasterMain+0xda8\n postgres`main+0x3d0\n postgres`_start+0x17c\n Exclusive 3858386500\n\n\n-Jignesh\n\n", "msg_date": "Fri, 27 Jul 2007 16:04:57 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "With CLOG 16 the drp[s comes at about 1150 users with the following lock \nstats\nbash-3.00# ./4_lwlock_waits.d 16404\n\n Lock Id Mode Count\n ProcArrayLock Shared 2\n XidGenLock Exclusive 2\n XidGenLock Shared 4\n WALInsertLock Exclusive 7\n CLogControlLock Shared 8\n WALWriteLock Exclusive 46\n ProcArrayLock Exclusive 64\n CLogControlLock Exclusive 263\n\n Lock Id Mode Combined Time (ns)\n XidGenLock Exclusive 528300\n ProcArrayLock Shared 968800\n WALInsertLock Exclusive 4090900\n XidGenLock Shared 73987600\n WALWriteLock Exclusive 86200700\n ProcArrayLock Exclusive 130756000\n CLogControlLock Shared 240471000\n CLogControlLock Exclusive 4115158500\n\nSo I think 32 is a better option for CLogs before ProcArrayLock becomes \nthe bottleneck.\n\nThough I havent seen what we can do with ProcArrayLock problem.\n\n\nRegards,\nJignesh\n\n\n\nJignesh K. 
Shah wrote:\n> Using CLOG Buffers 32 and the commit sibling check patch I still see a \n> drop at 1200-1300 users..\n>\n>\n>\n> bash-3.00# ./4_lwlock_waits.d 18250\n>\n> Lock Id Mode Count\n> XidGenLock Shared 1\n> CLogControlLock Shared 2\n> ProcArrayLock Shared 2\n> XidGenLock Exclusive 4\n> CLogControlLock Exclusive 15\n> WALInsertLock Exclusive 18\n> WALWriteLock Exclusive 38\n> ProcArrayLock Exclusive 77\n>\n> Lock Id Mode Combined Time (ns)\n> XidGenLock Shared 88700\n> WALInsertLock Exclusive 69556000\n> ProcArrayLock Shared 95656800\n> XidGenLock Exclusive 139634100\n> CLogControlLock Exclusive 148822200\n> CLogControlLock Shared 161630000\n> WALWriteLock Exclusive 332781800\n> ProcArrayLock Exclusive 5688265500\n>\n> bash-3.00# ./4_lwlock_waits.d 18599\n>\n> Lock Id Mode Count\n> ProcArrayLock Shared 2\n> XidGenLock Exclusive 3\n> XidGenLock Shared 4\n> CLogControlLock Shared 5\n> WALInsertLock Exclusive 10\n> CLogControlLock Exclusive 21\n> WALWriteLock Exclusive 28\n> ProcArrayLock Exclusive 54\n>\n> Lock Id Mode Combined Time (ns)\n> XidGenLock Exclusive 5688800\n> WALInsertLock Exclusive 11424700\n> CLogControlLock Shared 55589100\n> ProcArrayLock Shared 135220400\n> WALWriteLock Exclusive 177906900\n> XidGenLock Shared 524146500\n> CLogControlLock Exclusive 524563900\n> ProcArrayLock Exclusive 5828744500\n>\n> bash-3.00#\n> bash-3.00# ./6_lwlock_stack.d 4 18599\n>\n> Lock Id Mode Count\n> ProcArrayLock Shared 1\n> ProcArrayLock Exclusive 52\n>\n> Lock Id Mode Combined Time (ns)\n> ProcArrayLock Shared 41428300\n> ProcArrayLock Exclusive 3858386500\n>\n> Lock Id Combined Time (ns)\n>\n>\n> postgres`LWLockAcquire+0x1f0\n> postgres`GetSnapshotData+0x120\n> postgres`GetTransactionSnapshot+0x80\n> postgres`PortalStart+0x198\n> postgres`exec_bind_message+0x84c\n> postgres`PostgresMain+0x17f8\n> postgres`BackendRun+0x2f8\n> postgres`ServerLoop+0x680\n> postgres`PostmasterMain+0xda8\n> postgres`main+0x3d0\n> postgres`_start+0x17c\n> Shared 41428300\n>\n> postgres`LWLockAcquire+0x1f0\n> postgres`CommitTransaction+0x104\n> postgres`CommitTransactionCommand+0xbc\n> postgres`finish_xact_command+0x78\n> postgres`exec_execute_message+0x42c\n> postgres`PostgresMain+0x1838\n> postgres`BackendRun+0x2f8\n> postgres`ServerLoop+0x680\n> postgres`PostmasterMain+0xda8\n> postgres`main+0x3d0\n> postgres`_start+0x17c\n> Exclusive 3858386500\n>\n>\n> -Jignesh\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Mon, 30 Jul 2007 15:23:42 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Simon,\n\n> Well thats pretty weird. That code path clearly only happens once per\n> transaction and ought to be fast. The other code paths that take\n> ProcArrayLock like TransactionIdIsInProgress() and GetSnapshotData()\n> ought to spend more time holding the lock. Presumably you are running\n> with a fair number of SERIALIZABLE transactions?\n\nGiven that this is TPCC-analog, I'd assume that we are.\n\nJignesh?\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 31 Jul 2007 14:33:20 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Yep quite a bit of transactions .. 
But the piece that's slow is where it \nis clearing it up in CommitTransaction().\nI am not sure of how ProcArrayLock is designed to work and hence not \nclear what we are seeing is what we expect.\n\nDo we have some design doc on ProcArrayLock to understand its purpose?\n\nThanks.\nRegards,\nJignesh\n\n\nJosh Berkus wrote:\n> Simon,\n>\n> \n>> Well thats pretty weird. That code path clearly only happens once per\n>> transaction and ought to be fast. The other code paths that take\n>> ProcArrayLock like TransactionIdIsInProgress() and GetSnapshotData()\n>> ought to spend more time holding the lock. Presumably you are running\n>> with a fair number of SERIALIZABLE transactions?\n>> \n>\n> Given that this is TPCC-analog, I'd assume that we are.\n>\n> Jignesh?\n>\n> \n", "msg_date": "Tue, 31 Jul 2007 17:46:26 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User concurrency thresholding: where do I look?" }, { "msg_contents": "Hi Simon,\n\nThis patch seems to work well (both with 32 and 64 value but not with 16 \nand the default 8). Is there a way we can integrate this in 8.3?\n\nThis will improve out of box performance quite a bit for high number of \nusers (atleat 30% in my OLTP test)\n\nRegards,\nJignesh\n\n\nSimon Riggs wrote:\n> On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:\n>\n> \n>> However at 900 Users where the big drop in throughput occurs:\n>> It gives a different top \"consumer\" of time:\n>> \n>\n>\n> postgres`LWLockAcquire+0x1c8\n> \n>> postgres`SimpleLruReadPage+0x1ac\n>> postgres`TransactionIdGetStatus+0x14\n>> postgres`TransactionLogFetch+0x58\n>> \n>\n> TransactionIdGetStatus doesn't directly call SimpleLruReadPage().\n> Presumably the compiler has been rearranging things??\n>\n> Looks like you're out of clog buffers. It seems like the clog buffers\n> aren't big enough to hold clog pages for long enough and the SELECT FOR\n> SHARE processing is leaving lots of additional read locks that are\n> increasing the number of clog requests for older xids.\n>\n> Try the enclosed patch.\n> \n> \n> ------------------------------------------------------------------------\n>\n> Index: src/include/access/clog.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/access/clog.h,v\n> retrieving revision 1.19\n> diff -c -r1.19 clog.h\n> *** src/include/access/clog.h\t5 Jan 2007 22:19:50 -0000\t1.19\n> --- src/include/access/clog.h\t26 Jul 2007 15:44:58 -0000\n> ***************\n> *** 29,35 ****\n> \n> \n> /* Number of SLRU buffers to use for clog */\n> ! #define NUM_CLOG_BUFFERS\t8\n> \n> \n> extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);\n> --- 29,35 ----\n> \n> \n> /* Number of SLRU buffers to use for clog */\n> ! #define NUM_CLOG_BUFFERS\t64\t\n> \n> \n> extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);\n> \n> ------------------------------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Fri, 03 Aug 2007 16:09:39 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "CLOG Patch" }, { "msg_contents": "On Fri, 2007-08-03 at 16:09 -0400, Jignesh K. Shah wrote:\n\n> This patch seems to work well (both with 32 and 64 value but not with 16 \n> and the default 8). \n\nCould you test at 24 please also? 
Tom has pointed out the additional\ncost of setting this higher, even in workloads that don't benefit from\nthe I/O-induced contention reduction.\n\n> Is there a way we can integrate this in 8.3?\n\nI just replied to Josh's thread on -hackers about this.\n\n> This will improve out of box performance quite a bit for high number of \n> users (atleat 30% in my OLTP test)\n\nYes, thats good. Will this have a dramatic effect on a particular\nbenchmark, or for what reason might we need this? Tom has questioned the\nuse case here, so I think it would be good to explain a little more for\neveryone. Thanks.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 03 Aug 2007 21:29:47 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLOG Patch" }, { "msg_contents": "I tried with CLOG 24 also and I got linear performance upto 1250 users \nafter which it started to tank. 32 got us to 1350 users before some \nother bottleneck overtook it.\n\n\nBased on what Tom said earlier, it might then make sense to make it a \ntunable with the default of 8 but something one can change for high \nnumber of users.\n\n\nThanks.\nRegards,\nJignesh\n\n\nSimon Riggs wrote:\n> On Fri, 2007-08-03 at 16:09 -0400, Jignesh K. Shah wrote:\n>\n> \n>> This patch seems to work well (both with 32 and 64 value but not with 16 \n>> and the default 8). \n>> \n>\n> Could you test at 24 please also? Tom has pointed out the additional\n> cost of setting this higher, even in workloads that don't benefit from\n> the I/O-induced contention reduction.\n>\n> \n>> Is there a way we can integrate this in 8.3?\n>> \n>\n> I just replied to Josh's thread on -hackers about this.\n>\n> \n>> This will improve out of box performance quite a bit for high number of \n>> users (atleat 30% in my OLTP test)\n>> \n>\n> Yes, thats good. Will this have a dramatic effect on a particular\n> benchmark, or for what reason might we need this? Tom has questioned the\n> use case here, so I think it would be good to explain a little more for\n> everyone. Thanks.\n>\n> \n", "msg_date": "Fri, 10 Aug 2007 13:54:41 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLOG Patch" }, { "msg_contents": "On Fri, 2007-08-10 at 13:54 -0400, Jignesh K. Shah wrote:\n> I tried with CLOG 24 also and I got linear performance upto 1250 users \n> after which it started to tank. 32 got us to 1350 users before some \n> other bottleneck overtook it.\n\nJignesh,\n\nThanks for testing that.\n\nIt's not very clear to everybody why an extra 100 users is useful and it\nwould certainly help your case if you can explain.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 10 Aug 2007 21:03:19 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLOG Patch" } ]
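A minimal SQL sketch of the commit_delay experiment discussed in the thread above, for anyone who wants to repeat it. The values are simply the ones mentioned in the thread, not recommendations, and the assumption here is that commit_delay and commit_siblings are still session-settable GUCs on 8.2/8.3 (they were at the time):

  SET commit_delay = 10;     -- microseconds to sleep before the WAL flush at commit
  SET commit_siblings = 5;   -- the active-backend check that the quoted patch bypasses
  SHOW commit_delay;         -- confirm the session setting took effect

The NUM_CLOG_BUFFERS change itself is compile-time only (src/include/access/clog.h, as shown in the quoted patch) and cannot be changed from SQL.
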
[ { "msg_contents": "Hi all,\n\n I have a serious problem with a server. This server holds severals \nDB, the problem is thet the CPU's spend most of the time waiting:\n\nCpu0: 4.0% us, 2.3% sy, 0.0% ni, 61.5% id, 32.1% wa, 0.0% hi, 0.0% si\nCpu1: 2.3% us, 0.3% sy, 0.0% ni, 84.1% id, 13.3% wa, 0.0% hi, 0.0% si\nCpu2: 1.3% us, 0.3% sy, 0.0% ni, 68.6% id, 29.8% wa, 0.0% hi, 0.0% si\nCpu3: 4.6% us, 3.3% sy, 0.0% ni, 2.6% id, 88.4% wa, 0.3% hi, 0.7% si\n\n The iostat -c says about 8% of time waiting for IO. I'm afraid this \nis due to locks between concurrent queries, is there anyway to have more \ninfo about?\n\nThanks all\n-- \nArnau\n", "msg_date": "Thu, 19 Jul 2007 18:44:48 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Is it possible to know where is the \"deadlock\"" }, { "msg_contents": "In response to Arnau <[email protected]>:\n\n> Hi all,\n> \n> I have a serious problem with a server. This server holds severals \n> DB, the problem is thet the CPU's spend most of the time waiting:\n> \n> Cpu0: 4.0% us, 2.3% sy, 0.0% ni, 61.5% id, 32.1% wa, 0.0% hi, 0.0% si\n> Cpu1: 2.3% us, 0.3% sy, 0.0% ni, 84.1% id, 13.3% wa, 0.0% hi, 0.0% si\n> Cpu2: 1.3% us, 0.3% sy, 0.0% ni, 68.6% id, 29.8% wa, 0.0% hi, 0.0% si\n> Cpu3: 4.6% us, 3.3% sy, 0.0% ni, 2.6% id, 88.4% wa, 0.3% hi, 0.7% si\n> \n> The iostat -c says about 8% of time waiting for IO. I'm afraid this \n> is due to locks between concurrent queries, is there anyway to have more \n> info about?\n\nThis looks perfectly normal for a medium-load server.\n\nAlthough you don't state your problem (you state what you think is a\nsymptom, and call it the problem) I'm guessing you have queries that\nare executing slower than you would like? If that's the case, I would\nsuggest investigating the slow queries directly. Check for indexes and\nensure your vacuum/analyze schedule is acceptable. If you get\nstumped, post details of the queries here asking for help.\n\nAnother thing that (I'm guessing) may be confusing you is if this \nsystem has multiple CPUs, each query can only execute on a single\nCPU. So a single query at full throttle on a 8-way system will\nonly use 12.5% max.\n\nIf you have reason to believe that locks are an issue, the pg_locks\nview can help you prove/disprove that theory:\nhttp://www.postgresql.org/docs/8.2/interactive/view-pg-locks.html\n\nIf none of those are the case, then please describe the actual problem\nyou are having.\n\nHTH.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 19 Jul 2007 13:03:26 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to know where is the \"deadlock\"" }, { "msg_contents": "> The iostat -c says about 8% of time waiting for IO. I'm afraid this\n> is due to locks between concurrent queries, is there anyway to have more\n> info about?\n\nI do believe that if you told what OS you're running, what pg-version\nyou're running, what type of sql-statements you perform the list can\nprovide some help.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 19 Jul 2007 19:05:24 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to know where is the \"deadlock\"" } ]
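A small sketch of the pg_locks check suggested above, in case it helps the next person who sees high iowait and suspects lock contention instead. Column names are the ones in the 8.2 catalogs (procpid/current_query were renamed in later releases), so adjust as needed:

  SELECT a.procpid, a.waiting, a.current_query,
         l.locktype, l.mode, l.granted
    FROM pg_stat_activity a
    JOIN pg_locks l ON l.pid = a.procpid
   WHERE NOT l.granted;

If this comes back empty while the CPUs sit in iowait, the time is going to disk rather than to lock waits, and the slow queries themselves are the place to look.
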
[ { "msg_contents": "I'd like any advice you have on my postgres.conf. The machine in\nquestion is a 2.4 Ghz Xeon with 2 gigs of ram running freebsd 6.2 and\npostgres 8.24. There are 16 concurrent users. This machine is used\nonly for the database. Usage is split out pretty evenly between reads\nand writes.\n\nThanks,\nPat\n\n\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the server.\n#\n# Any option can also be given as a command line switch to the server,\n# e.g., 'postgres -c log_connections=on'. Some options can be changed at\n# run-time with the 'SET' SQL command.\n#\n# This file is read on server startup and when the server receives a\n# SIGHUP. If you edit the file on a running system, you have to SIGHUP the\n# server for the changes to take effect, or use \"pg_ctl reload\". Some\n# settings, which are marked below, require a server shutdown and restart\n# to take effect.\n#\n# Memory units: kB = kilobytes MB = megabytes GB = gigabytes\n# Time units: ms = milliseconds s = seconds min = minutes h = hours d = days\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory\n # (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n # (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file\n # (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = '(none)' # write an extra PID file\n # (change requires restart)\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = 'localhost' # what IP address(es) to listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\n # (change requires restart)\n#port = 5432 # (change requires restart)\nmax_connections = 20 # (change requires restart)\n# Note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 3 # (change requires restart)\n#unix_socket_directory = '' # (change requires restart)\n#unix_socket_group = '' # (change requires restart)\n#unix_socket_permissions = 0777 # octal\n # (change requires restart)\n#bonjour_name = '' # defaults to the computer name\n # (change requires restart)\n\n# - Security & Authentication -\n\n#authentication_timeout = 1min # 1s-600s\n#ssl = off # (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = '' # (change requires restart)\n#krb_srvname = 'postgres' # (change requires restart)\n#krb_server_hostname = '' # empty string matches any keytab entry\n # (change requires restart)\n#krb_caseins_users = off # (change requires restart)\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 256MB # min 128kB or max_connections*16kB\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10MB # min 64kB\n#maintenance_work_mem = 16MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 400000 # min max_fsm_relations*16, 6 bytes each\n # (change requires restart)\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n # (change requires restart)\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or off\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 64kB # min 32kB\n # (change requires restart)\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB 
each\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_warning = 30s # 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile segment\n#archive_timeout = 0 # force a logfile segment switch after this\n # many seconds; 0 is off\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 650MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\nlog_destination = 'syslog'\n#log_destination = 'stderr' # Valid values are combinations of\n # stderr, syslog and eventlog,\n # depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off # Enable capturing of stderr into log\n # files\n # (change requires restart)\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log' # Directory where log files are written\n # Can be absolute or relative to PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n # Can include strftime() escapes\n#log_truncate_on_rotation = off # If on, any existing log file of the same\n # name as the new log file will be\n # truncated rather than appended to. But\n # such truncation only occurs on\n # time-driven rotation, not on restarts\n # or size-driven rotation. Default is\n # off, meaning append to existing files\n # in all cases.\n#log_rotation_age = 1d # Automatic rotation of logfiles will\n # happen after that time. 0 to\n # disable.\n#log_rotation_size = 10MB # Automatic rotation of logfiles will\n # happen after that much log\n # output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\n#log_min_messages = notice # Values, in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = error # Values in order of\nincreasing severity:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # fatal\n # panic (effectively off)\n\n#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements\n # and their durations.\n\nsilent_mode = on\n#silent_mode = off # DO NOT USE without syslog or\n # redirect_stderr\n # (change requires restart)\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_line_prefix = '' # Special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = PID\n # %t = timestamp (no milliseconds)\n # %m = timestamp with milliseconds\n # %i = command tag\n # %c = session id\n # %l = session line number\n # %s = session start timestamp\n # %x = transaction id\n # %q = stop here in non-session\n # processes\n # %% = '%'\n # e.g. '<%u%%%d> '\n#log_statement = 'none' # none, ddl, mod, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#stats_command_string = on\n#update_process_title = on\n\nstats_start_collector = on # needed for block or row stats\n # (change requires restart)\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off # (change requires restart)\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on\n\n#autovacuum = off # enable autovacuum subprocess?\n # 'on' requires stats_start_collector\n # and stats_row_level to also be on\n#autovacuum_naptime = 1min # time between autovacuum runs\n#autovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\n#autovacuum_analyze_threshold = 250 # min # of tuple updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\n # vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\n # analyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n # (change requires restart)\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovacuum, -1 means use\n # vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovacuum, -1 means use\n # vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION 
DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public' # schema names\n#default_tablespace = '' # a tablespace name, '' uses\n # the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0 # 0 is disabled\n#vacuum_freeze_min_age = 100000000\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ\n # environment setting\n#timezone_abbreviations = 'Default' # select the set of available timezone\n # abbreviations. Currently, there are\n # Default\n # Australia\n # India\n # However you can also create your own\n # file in share/timezonesets/.\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database\n # encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'C' # locale for system error message\n # strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\n#max_locks_per_transaction = 64 # min 10\n # (change requires restart)\n# Note: each lock table slot uses ~270 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding # on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#standard_conforming_strings = off\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable class names\n", "msg_date": "Thu, 19 Jul 2007 11:33:30 -0600", "msg_from": "\"Pat Maddox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Trying to tune postgres, how is this config?" } ]
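When reviewing a configuration like the one posted above, it is usually quicker to list only the settings that differ from the built-in defaults rather than reading the whole file. A minimal sketch against the pg_settings view (present in 8.2):

  SELECT name, setting, source
    FROM pg_settings
   WHERE source NOT IN ('default', 'override')
   ORDER BY name;

Against the file above this surfaces shared_buffers, work_mem, max_fsm_pages, effective_cache_size and the logging/autovacuum switches, which are the usual starting points when tuning a dedicated 2 GB machine.
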
[ { "msg_contents": "Hi,\n\nOne of our end users was complaining about a report that was taking too much\ntime to execute and I�ve discovered that the following SQL statement was the\nresponsible for it.\n\nI would appreciate any suggestions to improve performance of it.\n\nThank you very much in advance!\n\n____________________________________________________________________________\n_________________________________________________\n\nexplain analyze select (VEN.DOCUME)::varchar(13) as COLUNA0,\n (VENCODPGT.APEPGT)::varchar(9) as COLUNA1,\n (COALESCE(COALESCE(VEN.VLRLIQ,0) * (CASE VEN.VLRNOT WHEN 0\nTHEN 0 ELSE IVE.VLRMOV / VEN.VLRNOT END),0)) as COLUNA2,\n (COALESCE(IVE.QTDMOV,0)) as COLUNA3,\n (VIPR.NOMPRO)::varchar(83) as COLUNA4,\n (VIPR.REFPRO)::varchar(20) as COLUNA5\n from TV_VEN VEN\n inner join TT_IVE IVE ON IVE.SEQUEN = VEN.SEQUEN and\n IVE.CODFIL = VEN.CODFIL\n inner join TV_IPR VIPR ON VIPR.FILMAT = IVE.FILMAT and\n VIPR.CODMAT = IVE.CODMAT and\n VIPR.CODCOR = IVE.CODCOR and\n VIPR.CODTAM = IVE.CODTAM\n\n left join TT_PLA VENCODPGT ON VEN.FILPGT = VENCODPGT.FILPGT AND\nVEN.CODPGT = VENCODPGT.CODPGT\n where ('001' = VEN.CODFIL)\n and VEN.DATHOR between '07/12/2007 00:00:00' and '07/12/2007\n23:59:59'\n and (VEN.CODNAT = '-3')\n and IVE.SITMOV <> 'C'\n and ('1' = VIPR.DEPART) ;\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Nested Loop Left Join (cost=995.52..75661.01 rows=1 width=195) (actual\ntime=4488.166..1747121.374 rows=256 loops=1)\n -> Nested Loop (cost=995.52..75660.62 rows=1 width=199) (actual\ntime=4481.323..1747105.903 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.11..3906.12 rows=1 width=151) (actual\ntime=15.626..128.934 rows=414 loops=1)\n Join Filter: (div.coddiv = ddiv.codtab)\n -> Nested Loop (cost=1.11..3905.05 rows=1 width=160)\n(actual time=15.611..121.455 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.11..3903.99 rows=1 width=169)\n(actual time=15.593..113.866 rows=414 loops=1)\n Join Filter: ((gra.codcor)::text =\n((div.codite)::text || ''::text))\n -> Hash Join (cost=1.11..3888.04 rows=11\nwidth=146) (actual time=15.560..85.376 rows=414 loops=1)\n Hash Cond: ((gra.codtam)::text =\n((sub.codite)::text || ''::text))\n -> Nested Loop (cost=0.00..3883.64\nrows=423 width=123) (actual time=15.376..81.482 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..149.65 rows=516 width=77) (actual\ntime=15.244..30.586 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.22 rows=1 width=46) (actual time=0.104..0.110 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.05..1.05 rows=5 width=32)\n(actual time=0.048..0.048 rows=5 loops=1)\n -> Seq Scan on tt_sub sub\n(cost=0.00..1.05 rows=5 width=32) (actual time=0.016..0.024 rows=5 loops=1)\n -> Seq Scan on tt_div div (cost=0.00..1.15\nrows=15 width=32) (actual time=0.004..0.022 rows=15 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\nwidth=9) (actual time=0.003..0.007 rows=3 loops=414)\n -> Seq Scan on td_div ddiv (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.002..0.007 rows=3 loops=414)\n -> Hash Join (cost=994.41..71746.74 rows=388 width=114) 
(actual\ntime=5.298..4218.486 rows=857 loops=414)\n         Hash Cond: (ive.sequen = ven.sequen)\n         -> Nested Loop (cost=0.00..68318.52 rows=647982 width=85)\n(actual time=0.026..3406.170 rows=643739 loops=414)\n               -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\nwidth=9) (actual time=0.004..0.014 rows=1 loops=414)\n                     Filter: (-3::numeric = codtab)\n               -> Seq Scan on tt_ive ive (cost=0.00..61837.46\nrows=647982 width=76) (actual time=0.017..1926.983 rows=643739 loops=414)\n                     Filter: ((sitmov <> 'C'::bpchar) AND\n('001'::bpchar = codfil))\n         -> Hash (cost=992.08..992.08 rows=186 width=89) (actual\ntime=33.234..33.234 rows=394 loops=1)\n               -> Hash Left Join (cost=3.48..992.08 rows=186\nwidth=89) (actual time=13.163..32.343 rows=394 loops=1)\n                     Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n(ven.codcli = cfg.vc_codcli))\n                     -> Hash Join (cost=2.45..989.65 rows=186\nwidth=106) (actual time=13.131..31.060 rows=394 loops=1)\n                           Hash Cond: ((ven.filpgt = pla.filpgt) AND\n(ven.codpgt = pla.codpgt))\n                           -> Index Scan using i_lc_ven_dathor on\ntt_ven ven (cost=0.00..983.95 rows=186 width=106) (actual\ntime=13.026..29.634 rows=394 loops=1)\n                                 Index Cond: ((dathor >= '2007-07-12\n00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n23:59:59'::timestamp without time zone))\n                                 Filter: (('001'::bpchar = codfil) AND\n(codnat = -3::numeric))\n                           -> Hash (cost=2.18..2.18 rows=18\nwidth=14) (actual time=0.081..0.081 rows=18 loops=1)\n                                 -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=0.013..0.043 rows=18\nloops=1)\n                     -> Hash (cost=1.01..1.01 rows=1 width=17)\n(actual time=0.017..0.017 rows=1 loops=1)\n                           -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.010..0.011 rows=1 loops=1)\n   -> Index Scan using pk_pla on tt_pla vencodpgt (cost=0.00..0.31 rows=1\nwidth=24) (actual time=0.037..0.040 rows=1 loops=256)\n         Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n Total runtime: 1747122.219 ms\n(43 rows)\n\n____________________________________________________________________________\n_________________________________________________________\n\nTable and view definitions can be accessed at:\nhttp://www.opendb.com.br/v1/problem0707.txt\n\nReimer", "msg_date": "Thu, 19 Jul 2007 21:41:27 -0300", "msg_from": "\"Carlos H. 
Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Improving select peformance" }, { "msg_contents": "\"Carlos H. Reimer\" <[email protected]> writes:\n> One of our end users was complaining about a report that was taking too much\n> time to execute and I�ve discovered that the following SQL statement was the\n> responsible for it.\n\nHere's part of the problem:\n\n> Join Filter: ((gra.codcor)::text =\n> ((div.codite)::text || ''::text))\n> -> Hash Join (cost=1.11..3888.04 rows=11\n> width=146) (actual time=15.560..85.376 rows=414 loops=1)\n> Hash Cond: ((gra.codtam)::text =\n> ((sub.codite)::text || ''::text))\n\nWhy such bizarre join conditions? Why don't you lose the useless\nconcatenations of empty strings and have just a plain equality\ncomparison? This technique completely destroys any chance of the\nplanner making good estimates of the join result sizes (and the bad\nestimates it's coming out with are part of the problem).\n\n> -> Nested Loop (cost=0.00..68318.52 rows=647982 width=85)\n> (actual time=0.026..3406.170 rows=643739 loops=414)\n> -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\n> width=9) (actual time=0.004..0.014 rows=1 loops=414)\n> Filter: (-3::numeric = codtab)\n> -> Seq Scan on tt_ive ive (cost=0.00..61837.46\n> rows=647982 width=76) (actual time=0.017..1926.983 rows=643739 loops=414)\n> Filter: ((sitmov <> 'C'::bpchar) AND\n> ('001'::bpchar = codfil))\n\nThe other big problem seems to be that it's choosing to do this\nunconstrained join first. I'm not sure about the cause of that,\nbut maybe you need to increase join_collapse_limit. What PG version\nis this anyway?\n\nA more general comment, if you are open to schema changes, is that you\nshould change all the \"numeric(n,0)\" fields to integer (or possibly\nsmallint or bigint as needed). 
Particularly the ones that are used as\njoin keys, primary keys, foreign keys.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jul 2007 21:31:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving select peformance " }, { "msg_contents": "Hi,\n\nI have changed the view to eliminate the bizarre concatenation conditions\nbut even so the response time did not change.\n\nChanging the join_collapse_limit from 8 to 1 caused the decrease in response\ntime.\n\nHere is the explain analyze with the join_collapse_limit set to 1:\n\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------\n Nested Loop Left Join (cost=969.53..20638.03 rows=1 width=194) (actual\ntime=10.309..5405.701 rows=256 loops=1)\n Join Filter: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n -> Nested Loop (cost=969.53..20635.51 rows=1 width=198) (actual\ntime=10.211..5391.358 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.34..3410.10 rows=1 width=150) (actual\ntime=0.248..38.966 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.34..3409.04 rows=1 width=159)\n(actual time=0.237..32.520 rows=414 loops=1)\n Join Filter: ((gra.codtam)::text = ((sub.codite)::text\n|| ''::text))\n -> Nested Loop (cost=1.34..3376.84 rows=28 width=136)\n(actual time=0.226..20.978 rows=414 loops=1)\n -> Hash Join (cost=1.34..3356.99 rows=28\nwidth=145) (actual time=0.215..15.225 rows=414 loops=1)\n Hash Cond: ((gra.codcor)::text =\n((div.codite)::text || ''::text))\n -> Nested Loop (cost=0.00..3352.55\nrows=377 width=122) (actual time=0.139..12.115 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..123.83 rows=437 width=76) (actual time=0.092..1.212\nrows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.37 rows=1 width=46) (actual time=0.016..0.018 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.15..1.15 rows=15\nwidth=32) (actual time=0.060..0.060 rows=15 loops=1)\n -> Seq Scan on tt_div div\n(cost=0.00..1.15 rows=15 width=32) (actual time=0.005..0.021 rows=15\nloops=1)\n -> Index Scan using pk_ddiv on td_div ddiv\n(cost=0.00..0.70 rows=1 width=9) (actual time=0.006..0.009 rows=1 loops=414)\n Index Cond: (div.coddiv = ddiv.codtab)\n -> Seq Scan on tt_sub sub (cost=0.00..1.05 rows=5\nwidth=32) (actual time=0.003..0.007 rows=5 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.002..0.006 rows=3 loops=414)\n -> Nested Loop (cost=968.19..17218.15 rows=363 width=114) (actual\ntime=0.040..12.019 rows=857 loops=414)\n -> Nested Loop (cost=968.19..974.85 rows=174 width=80)\n(actual time=0.022..3.149 rows=394 loops=414)\n -> Merge Join (cost=966.95..970.13 rows=174 width=89)\n(actual time=0.019..1.317 rows=394 loops=414)\n Merge Cond: ((pla.codpgt = ven.codpgt) AND\n(pla.filpgt = ven.filpgt))\n -> Sort (cost=2.56..2.60 rows=18 width=14)\n(actual time=0.001..0.007 rows=8 loops=414)\n Sort Key: pla.codpgt, pla.filpgt\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=0.005..0.031 rows=18\nloops=1)\n -> Sort (cost=964.39..964.83 rows=174 width=89)\n(actual 
time=0.013..0.328 rows=394 loops=414)\n Sort Key: ven.codpgt, ven.filpgt\n -> Nested Loop Left Join\n(cost=1.01..957.92 rows=174 width=89) (actual time=0.068..4.212 rows=394\nloops=1)\n Join Filter: ((ven.filcli =\ncfg.vc_filcli) AND (ven.codcli = cfg.vc_codcli))\n -> Index Scan using i_lc_ven_dathor\non tt_ven ven (cost=0.00..952.56 rows=174 width=106) (actual\ntime=0.054..2.079 rows=394 loops=1)\n Index Cond: ((dathor >=\n'2007-07-12 00:00:00'::timestamp without time zone) AND (dathor <=\n'2007-07-12 23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar =\ncodfil) AND (codnat = -3::numeric))\n -> Materialize (cost=1.01..1.02\nrows=1 width=17) (actual time=0.001..0.002 rows=1 loops=394)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.004..0.006 rows=1 loops=1)\n -> Materialize (cost=1.24..1.25 rows=1 width=9)\n(actual time=0.001..0.002 rows=1 loops=163116)\n -> Seq Scan on td_nat nat (cost=0.00..1.24\nrows=1 width=9) (actual time=0.010..0.015 rows=1 loops=1)\n Filter: (-3::numeric = codtab)\n -> Index Scan using pk_ive on tt_ive ive (cost=0.00..93.04\nrows=25 width=76) (actual time=0.012..0.017 rows=2 loops=163116)\n Index Cond: (('001'::bpchar = ive.codfil) AND\n(ive.sequen = ven.sequen))\n Filter: (sitmov <> 'C'::bpchar)\n -> Seq Scan on tt_pla vencodpgt (cost=0.00..2.18 rows=18 width=24)\n(actual time=0.003..0.018 rows=18 loops=256)\n Total runtime: 5406.470 ms\n(46 rows)\n\n\n\nWhen the join_collapse_limit is set to 8:\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------\n Nested Loop Left Join (cost=995.52..75661.01 rows=1 width=195) (actual\ntime=4488.166..1747121.374 rows=256 loops=1)\n -> Nested Loop (cost=995.52..75660.62 rows=1 width=199) (actual\ntime=4481.323..1747105.903 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.11..3906.12 rows=1 width=151) (actual\ntime=15.626..128.934 rows=414 loops=1)\n Join Filter: (div.coddiv = ddiv.codtab)\n -> Nested Loop (cost=1.11..3905.05 rows=1 width=160)\n(actual time=15.611..121.455 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.11..3903.99 rows=1 width=169)\n(actual time=15.593..113.866 rows=414 loops=1)\n Join Filter: ((gra.codcor)::text =\n((div.codite)::text || ''::text))\n -> Hash Join (cost=1.11..3888.04 rows=11\nwidth=146) (actual time=15.560..85.376 rows=414 loops=1)\n Hash Cond: ((gra.codtam)::text =\n((sub.codite)::text || ''::text))\n -> Nested Loop (cost=0.00..3883.64\nrows=423 width=123) (actual time=15.376..81.482 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..149.65 rows=516 width=77) (actual\ntime=15.244..30.586 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.22 rows=1 width=46) (actual time=0.104..0.110 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.05..1.05 rows=5 width=32)\n(actual time=0.048..0.048 rows=5 loops=1)\n -> Seq Scan on tt_sub sub\n(cost=0.00..1.05 rows=5 width=32) (actual time=0.016..0.024 rows=5 loops=1)\n -> Seq Scan on tt_div div (cost=0.00..1.15\nrows=15 width=32) (actual time=0.004..0.022 rows=15 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\nwidth=9) (actual 
time=0.003..0.007 rows=3 loops=414)\n -> Seq Scan on td_div ddiv (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.002..0.007 rows=3 loops=414)\n -> Hash Join (cost=994.41..71746.74 rows=388 width=114) (actual\ntime=5.298..4218.486 rows=857 loops=414)\n Hash Cond: (ive.sequen = ven.sequen)\n -> Nested Loop (cost=0.00..68318.52 rows=647982 width=85)\n(actual time=0.026..3406.170 rows=643739 loops=414)\n -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\nwidth=9) (actual time=0.004..0.014 rows=1 loops=414)\n Filter: (-3::numeric = codtab)\n -> Seq Scan on tt_ive ive (cost=0.00..61837.46\nrows=647982 width=76) (actual time=0.017..1926.983 rows=643739 loops=414)\n Filter: ((sitmov <> 'C'::bpchar) AND\n('001'::bpchar = codfil))\n -> Hash (cost=992.08..992.08 rows=186 width=89) (actual\ntime=33.234..33.234 rows=394 loops=1)\n -> Hash Left Join (cost=3.48..992.08 rows=186\nwidth=89) (actual time=13.163..32.343 rows=394 loops=1)\n Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n(ven.codcli = cfg.vc_codcli))\n -> Hash Join (cost=2.45..989.65 rows=186\nwidth=106) (actual time=13.131..31.060 rows=394 loops=1)\n Hash Cond: ((ven.filpgt = pla.filpgt) AND\n(ven.codpgt = pla.codpgt))\n -> Index Scan using i_lc_ven_dathor on\ntt_ven ven (cost=0.00..983.95 rows=186 width=106) (actual\ntime=13.026..29.634 rows=394 loops=1)\n Index Cond: ((dathor >= '2007-07-12\n00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar = codfil) AND\n(codnat = -3::numeric))\n -> Hash (cost=2.18..2.18 rows=18\nwidth=14) (actual time=0.081..0.081 rows=18 loops=1)\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=0.013..0.043 rows=18\nloops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=17)\n(actual time=0.017..0.017 rows=1 loops=1)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.010..0.011 rows=1 loops=1)\n -> Index Scan using pk_pla on tt_pla vencodpgt (cost=0.00..0.31 rows=1\nwidth=24) (actual time=0.037..0.040 rows=1 loops=256)\n Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n Total runtime: 1747122.219 ms\n(43 rows)\n\nThe PG version is the 8.2.3.\n\nApparently the planner is not doing the correct choice by default, correct?\n\nI could change the application and insert the set join_collapse_limit to 1\nbefore the select, but can this solution be considered or the problem is in\nanother place?\n\nThank you in advance!\n\nReimer\n\n> -----Mensagem original-----\n> De: [email protected]\n> [mailto:[email protected]]Em nome de Tom Lane\n> Enviada em: quinta-feira, 19 de julho de 2007 22:31\n> Para: [email protected]\n> Cc: [email protected]\n> Assunto: Re: [PERFORM] Improving select peformance\n>\n>\n> \"Carlos H. Reimer\" <[email protected]> writes:\n> > One of our end users was complaining about a report that was\n> taking too much\n> > time to execute and I�ve discovered that the following SQL\n> statement was the\n> > responsible for it.\n>\n> Here's part of the problem:\n>\n> > Join Filter: ((gra.codcor)::text =\n> > ((div.codite)::text || ''::text))\n> > -> Hash Join (cost=1.11..3888.04 rows=11\n> > width=146) (actual time=15.560..85.376 rows=414 loops=1)\n> > Hash Cond: ((gra.codtam)::text =\n> > ((sub.codite)::text || ''::text))\n>\n> Why such bizarre join conditions? Why don't you lose the useless\n> concatenations of empty strings and have just a plain equality\n> comparison? 
This technique completely destroys any chance of the\n> planner making good estimates of the join result sizes (and the bad\n> estimates it's coming out with are part of the problem).\n>\n> > -> Nested Loop (cost=0.00..68318.52\n> rows=647982 width=85)\n> > (actual time=0.026..3406.170 rows=643739 loops=414)\n> > -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\n> > width=9) (actual time=0.004..0.014 rows=1 loops=414)\n> > Filter: (-3::numeric = codtab)\n> > -> Seq Scan on tt_ive ive (cost=0.00..61837.46\n> > rows=647982 width=76) (actual time=0.017..1926.983 rows=643739\n> loops=414)\n> > Filter: ((sitmov <> 'C'::bpchar) AND\n> > ('001'::bpchar = codfil))\n>\n> The other big problem seems to be that it's choosing to do this\n> unconstrained join first. I'm not sure about the cause of that,\n> but maybe you need to increase join_collapse_limit. What PG version\n> is this anyway?\n>\n> A more general comment, if you are open to schema changes, is that you\n> should change all the \"numeric(n,0)\" fields to integer (or possibly\n> smallint or bigint as needed). Particularly the ones that are used as\n> join keys, primary keys, foreign keys.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 1 Aug 2007 12:12:39 -0300", "msg_from": "\"Carlos H. Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: Improving select peformance " }, { "msg_contents": "Carlos H. Reimer wrote:\n> Hi,\n> \n> I have changed the view to eliminate the bizarre concatenation conditions\n> but even so the response time did not change.\n\nAre you sure you did that? In the EXPLAIN it's still possible to see\nthem, for example\n\n> -> Nested Loop (cost=1.34..3409.04 rows=1 width=159)\n> (actual time=0.237..32.520 rows=414 loops=1)\n> Join Filter: ((gra.codtam)::text = ((sub.codite)::text\n> || ''::text))\n> -> Nested Loop (cost=1.34..3376.84 rows=28 width=136)\n> (actual time=0.226..20.978 rows=414 loops=1)\n> -> Hash Join (cost=1.34..3356.99 rows=28\n> width=145) (actual time=0.215..15.225 rows=414 loops=1)\n> Hash Cond: ((gra.codcor)::text =\n> ((div.codite)::text || ''::text))\n\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"Uno combate cuando es necesario... �no cuando est� de humor!\nEl humor es para el ganado, o para hacer el amor, o para tocar el\nbaliset. 
No para combatir.\" (Gurney Halleck)\n", "msg_date": "Wed, 1 Aug 2007 12:52:48 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Improving select peformance" }, { "msg_contents": "Yes, but as the change did not alter the response time I used the original\nview.\n\nAnyway here are the response times using the changed view (without the\nconcatenation conditions):\n\nwith join_collapse_limit set to 8:\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Nested Loop Left Join (cost=963.68..76116.63 rows=1 width=194) (actual\ntime=8219.028..1316669.201 rows=256 loops=1)\n -> Nested Loop (cost=963.68..76116.23 rows=1 width=198) (actual\ntime=8196.502..1316638.186 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.11..3370.95 rows=1 width=150) (actual\ntime=33.058..255.428 rows=414 loops=1)\n Join Filter: (div.coddiv = ddiv.codtab)\n -> Nested Loop (cost=1.11..3369.89 rows=1 width=159)\n(actual time=33.043..249.609 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.11..3368.82 rows=1 width=168)\n(actual time=33.026..243.603 rows=414 loops=1)\n Join Filter: ((gra.codcor)::text =\n(div.codite)::text)\n -> Hash Join (cost=1.11..3356.11 rows=9\nwidth=145) (actual time=33.004..222.375 rows=414 loops=1)\n Hash Cond: ((gra.codtam)::text =\n(sub.codite)::text)\n -> Nested Loop (cost=0.00..3352.55\nrows=377 width=122) (actual time=32.810..219.046 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..123.83 rows=437 width=76) (actual\ntime=25.199..118.851 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.37 rows=1 width=46) (actual time=0.225..0.231 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.05..1.05 rows=5 width=32)\n(actual time=0.039..0.039 rows=5 loops=1)\n -> Seq Scan on tt_sub sub\n(cost=0.00..1.05 rows=5 width=32) (actual time=0.009..0.015 rows=5 loops=1)\n -> Seq Scan on tt_div div (cost=0.00..1.15\nrows=15 width=32) (actual time=0.003..0.015 rows=15 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\nwidth=9) (actual time=0.002..0.005 rows=3 loops=414)\n -> Seq Scan on td_div ddiv (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.002..0.005 rows=3 loops=414)\n -> Hash Join (cost=962.57..72738.01 rows=363 width=114) (actual\ntime=0.588..3178.606 rows=857 loops=414)\n Hash Cond: (ive.sequen = ven.sequen)\n -> Nested Loop (cost=0.00..69305.21 rows=657761 width=85)\n(actual time=0.041..2623.627 rows=656152 loops=414)\n -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\nwidth=9) (actual time=0.004..0.012 rows=1 loops=414)\n Filter: (-3::numeric = codtab)\n -> Seq Scan on tt_ive ive (cost=0.00..62726.36\nrows=657761 width=76) (actual time=0.034..1685.506 rows=656152 loops=414)\n Filter: ((sitmov <> 'C'::bpchar) AND\n('001'::bpchar = codfil))\n -> Hash (cost=960.39..960.39 rows=174 width=89) (actual\ntime=41.542..41.542 rows=394 loops=1)\n -> Hash Left Join (cost=3.48..960.39 rows=174\nwidth=89) (actual time=16.936..40.693 rows=394 loops=1)\n Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n(ven.codcli = cfg.vc_codcli))\n -> Hash Join (cost=2.45..958.05 rows=174\nwidth=106) (actual 
time=16.895..39.747 rows=394 loops=1)\n Hash Cond: ((ven.filpgt = pla.filpgt) AND\n(ven.codpgt = pla.codpgt))\n -> Index Scan using i_lc_ven_dathor on\ntt_ven ven (cost=0.00..952.56 rows=174 width=106) (actual\ntime=16.797..38.626 rows=394 loops=1)\n Index Cond: ((dathor >= '2007-07-12\n00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar = codfil) AND\n(codnat = -3::numeric))\n -> Hash (cost=2.18..2.18 rows=18\nwidth=14) (actual time=0.073..0.073 rows=18 loops=1)\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=0.017..0.039 rows=18\nloops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=17)\n(actual time=0.020..0.020 rows=1 loops=1)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.010..0.011 rows=1 loops=1)\n -> Index Scan using pk_pla on tt_pla vencodpgt (cost=0.00..0.31 rows=1\nwidth=24) (actual time=0.099..0.101 rows=1 loops=256)\n Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n Total runtime: 1316670.331 ms\n(43 rows)\n\n\n\nwith join_collapse_limit set to 1:\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------\n Nested Loop Left Join (cost=1106.16..25547.95 rows=1 width=195) (actual\ntime=2363.202..9534.955 rows=256 loops=1)\n Join Filter: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n -> Nested Loop (cost=1106.16..25545.43 rows=1 width=199) (actual\ntime=2363.117..9521.704 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.34..2576.72 rows=1 width=151) (actual\ntime=154.268..1054.391 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.34..2575.65 rows=1 width=160)\n(actual time=138.588..1032.830 rows=414 loops=1)\n Join Filter: ((gra.codtam)::text = (sub.codite)::text)\n -> Nested Loop (cost=1.34..2551.77 rows=21 width=137)\n(actual time=134.262..1018.756 rows=414 loops=1)\n -> Hash Join (cost=1.34..2533.88 rows=21\nwidth=146) (actual time=116.724..996.297 rows=414 loops=1)\n Hash Cond: ((gra.codcor)::text =\n(div.codite)::text)\n -> Nested Loop (cost=0.00..2530.60\nrows=278 width=123) (actual time=106.879..983.761 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..108.20 rows=318 width=77) (actual\ntime=44.303..286.618 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.60 rows=1 width=46) (actual time=1.674..1.676 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.15..1.15 rows=15\nwidth=32) (actual time=9.824..9.824 rows=15 loops=1)\n -> Seq Scan on tt_div div\n(cost=0.00..1.15 rows=15 width=32) (actual time=9.774..9.788 rows=15\nloops=1)\n -> Index Scan using pk_ddiv on td_div ddiv\n(cost=0.00..0.84 rows=1 width=9) (actual time=0.047..0.049 rows=1 loops=414)\n Index Cond: (div.coddiv = ddiv.codtab)\n -> Seq Scan on tt_sub sub (cost=0.00..1.05 rows=5\nwidth=32) (actual time=0.013..0.017 rows=5 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.040..0.043 rows=3 loops=414)\n -> Nested Loop (cost=1104.83..22960.29 rows=421 width=114)\n(actual time=0.727..19.609 rows=857 loops=414)\n -> Nested 
Loop (cost=1104.83..1112.46 rows=200 width=80)\n(actual time=0.559..3.497 rows=394 loops=414)\n -> Merge Join (cost=1103.59..1107.22 rows=200\nwidth=89) (actual time=0.532..1.751 rows=394 loops=414)\n Merge Cond: ((pla.codpgt = ven.codpgt) AND\n(pla.filpgt = ven.filpgt))\n -> Sort (cost=2.56..2.60 rows=18 width=14)\n(actual time=0.019..0.025 rows=8 loops=414)\n Sort Key: pla.codpgt, pla.filpgt\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=7.430..7.613 rows=18\nloops=1)\n -> Sort (cost=1101.03..1101.53 rows=200\nwidth=89) (actual time=0.508..0.805 rows=394 loops=414)\n Sort Key: ven.codpgt, ven.filpgt\n -> Nested Loop Left Join\n(cost=1.01..1093.39 rows=200 width=89) (actual time=39.399..209.096 rows=394\nloops=1)\n Join Filter: ((ven.filcli =\ncfg.vc_filcli) AND (ven.codcli = cfg.vc_codcli))\n -> Index Scan using i_lc_ven_dathor\non tt_ven ven (cost=0.00..1087.38 rows=200 width=106) (actual\ntime=39.378..207.111 rows=394 loops=1)\n Index Cond: ((dathor >=\n'2007-07-12 00:00:00'::timestamp without time zone) AND (dathor <=\n'2007-07-12 23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar =\ncodfil) AND (codnat = -3::numeric))\n -> Materialize (cost=1.01..1.02\nrows=1 width=17) (actual time=0.001..0.001 rows=1 loops=394)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.006..0.008 rows=1 loops=1)\n -> Materialize (cost=1.24..1.25 rows=1 width=9)\n(actual time=0.001..0.002 rows=1 loops=163116)\n -> Seq Scan on td_nat nat (cost=0.00..1.24\nrows=1 width=9) (actual time=9.994..10.001 rows=1 loops=1)\n Filter: (-3::numeric = codtab)\n -> Index Scan using pk_ive on tt_ive ive (cost=0.00..108.86\nrows=30 width=76) (actual time=0.020..0.036 rows=2 loops=163116)\n Index Cond: (('001'::bpchar = ive.codfil) AND\n(ive.sequen = ven.sequen))\n Filter: (sitmov <> 'C'::bpchar)\n -> Seq Scan on tt_pla vencodpgt (cost=0.00..2.18 rows=18 width=24)\n(actual time=0.002..0.017 rows=18 loops=256)\n Total runtime: 9546.971 ms\n(46 rows)\n\n\n> -----Mensagem original-----\n> De: Alvaro Herrera [mailto:[email protected]]\n> Enviada em: quarta-feira, 1 de agosto de 2007 13:53\n> Para: Carlos H. Reimer\n> Cc: Tom Lane; [email protected]\n> Assunto: Re: RES: [PERFORM] Improving select peformance\n>\n>\n> Carlos H. Reimer wrote:\n> > Hi,\n> >\n> > I have changed the view to eliminate the bizarre concatenation\n> conditions\n> > but even so the response time did not change.\n>\n> Are you sure you did that? In the EXPLAIN it's still possible to see\n> them, for example\n>\n> > -> Nested Loop (cost=1.34..3409.04 rows=1 width=159)\n> > (actual time=0.237..32.520 rows=414 loops=1)\n> > Join Filter: ((gra.codtam)::text =\n> ((sub.codite)::text\n> > || ''::text))\n> > -> Nested Loop (cost=1.34..3376.84\n> rows=28 width=136)\n> > (actual time=0.226..20.978 rows=414 loops=1)\n> > -> Hash Join (cost=1.34..3356.99 rows=28\n> > width=145) (actual time=0.215..15.225 rows=414 loops=1)\n> > Hash Cond: ((gra.codcor)::text =\n> > ((div.codite)::text || ''::text))\n>\n>\n> --\n> Alvaro Herrera\n> http://www.amazon.com/gp/registry/CTMLCN8V17R4\n> \"Uno combate cuando es necesario... �no cuando est� de humor!\n> El humor es para el ganado, o para hacer el amor, o para tocar el\n> baliset. No para combatir.\" (Gurney Halleck)\n\n", "msg_date": "Wed, 1 Aug 2007 21:26:24 -0300", "msg_from": "\"Carlos H. 
Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: Improving select peformance" }, { "msg_contents": "Hi,\n\nIn this case, I believe the best choice to improve the performance of this\nparticular SQL statement is adding the 'set join_collapse_limit = 1;' just\nbefore the join statement, correct?\n\nIt there anything else we could do to, in this case, make the planner choose\nbetter paths using the default join_collapse_limit?\n\nThank you in advance!\n\nReimer\n\n> -----Mensagem original-----\n> De: [email protected]\n> [mailto:[email protected]]Em nome de Carlos H.\n> Reimer\n> Enviada em: quarta-feira, 1 de agosto de 2007 21:26\n> Para: Alvaro Herrera\n> Cc: Tom Lane; [email protected]\n> Assunto: RES: RES: [PERFORM] Improving select peformance\n>\n>\n> Yes, but as the change did not alter the response time I used the original\n> view.\n>\n> Anyway here are the response times using the changed view (without the\n> concatenation conditions):\n>\n> with join_collapse_limit set to 8:\n> ------------------------------------------------------------------\n> ----------\n> ------------------------------------------------------------------\n> ----------\n> -------------------------------\n> Nested Loop Left Join (cost=963.68..76116.63 rows=1 width=194) (actual\n> time=8219.028..1316669.201 rows=256 loops=1)\n> -> Nested Loop (cost=963.68..76116.23 rows=1 width=198) (actual\n> time=8196.502..1316638.186 rows=256 loops=1)\n> Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\n> ive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n> -> Nested Loop (cost=1.11..3370.95 rows=1 width=150) (actual\n> time=33.058..255.428 rows=414 loops=1)\n> Join Filter: (div.coddiv = ddiv.codtab)\n> -> Nested Loop (cost=1.11..3369.89 rows=1 width=159)\n> (actual time=33.043..249.609 rows=414 loops=1)\n> Join Filter: (sub.codsub = dsub.codtab)\n> -> Nested Loop (cost=1.11..3368.82 rows=1\n> width=168)\n> (actual time=33.026..243.603 rows=414 loops=1)\n> Join Filter: ((gra.codcor)::text =\n> (div.codite)::text)\n> -> Hash Join (cost=1.11..3356.11 rows=9\n> width=145) (actual time=33.004..222.375 rows=414 loops=1)\n> Hash Cond: ((gra.codtam)::text =\n> (sub.codite)::text)\n> -> Nested Loop (cost=0.00..3352.55\n> rows=377 width=122) (actual time=32.810..219.046 rows=414 loops=1)\n> -> Index Scan using\n> i_fk_pro_ddep on\n> tt_pro pro (cost=0.00..123.83 rows=437 width=76) (actual\n> time=25.199..118.851 rows=414 loops=1)\n> Index Cond: (1::numeric =\n> depart)\n> -> Index Scan using\n> pk_gra on tt_gra\n> gra (cost=0.00..7.37 rows=1 width=46) (actual time=0.225..0.231 rows=1\n> loops=414)\n> Index Cond: ((pro.filmat =\n> gra.filmat) AND (pro.codmat = gra.codmat))\n> -> Hash (cost=1.05..1.05\n> rows=5 width=32)\n> (actual time=0.039..0.039 rows=5 loops=1)\n> -> Seq Scan on tt_sub sub\n> (cost=0.00..1.05 rows=5 width=32) (actual time=0.009..0.015\n> rows=5 loops=1)\n> -> Seq Scan on tt_div div (cost=0.00..1.15\n> rows=15 width=32) (actual time=0.003..0.015 rows=15 loops=414)\n> -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\n> width=9) (actual time=0.002..0.005 rows=3 loops=414)\n> -> Seq Scan on td_div ddiv (cost=0.00..1.03\n> rows=3 width=9)\n> (actual time=0.002..0.005 rows=3 loops=414)\n> -> Hash Join (cost=962.57..72738.01 rows=363 width=114) (actual\n> time=0.588..3178.606 rows=857 loops=414)\n> Hash Cond: (ive.sequen = ven.sequen)\n> -> Nested Loop (cost=0.00..69305.21 rows=657761 width=85)\n> (actual time=0.041..2623.627 rows=656152 loops=414)\n> -> Seq Scan 
on td_nat nat (cost=0.00..1.24 rows=1\n> width=9) (actual time=0.004..0.012 rows=1 loops=414)\n> Filter: (-3::numeric = codtab)\n> -> Seq Scan on tt_ive ive (cost=0.00..62726.36\n> rows=657761 width=76) (actual time=0.034..1685.506 rows=656152 loops=414)\n> Filter: ((sitmov <> 'C'::bpchar) AND\n> ('001'::bpchar = codfil))\n> -> Hash (cost=960.39..960.39 rows=174 width=89) (actual\n> time=41.542..41.542 rows=394 loops=1)\n> -> Hash Left Join (cost=3.48..960.39 rows=174\n> width=89) (actual time=16.936..40.693 rows=394 loops=1)\n> Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n> (ven.codcli = cfg.vc_codcli))\n> -> Hash Join (cost=2.45..958.05 rows=174\n> width=106) (actual time=16.895..39.747 rows=394 loops=1)\n> Hash Cond: ((ven.filpgt = pla.filpgt) AND\n> (ven.codpgt = pla.codpgt))\n> -> Index Scan using i_lc_ven_dathor on\n> tt_ven ven (cost=0.00..952.56 rows=174 width=106) (actual\n> time=16.797..38.626 rows=394 loops=1)\n> Index Cond: ((dathor >= '2007-07-12\n> 00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n> 23:59:59'::timestamp without time zone))\n> Filter: (('001'::bpchar =\n> codfil) AND\n> (codnat = -3::numeric))\n> -> Hash (cost=2.18..2.18 rows=18\n> width=14) (actual time=0.073..0.073 rows=18 loops=1)\n> -> Seq Scan on tt_pla pla\n> (cost=0.00..2.18 rows=18 width=14) (actual time=0.017..0.039 rows=18\n> loops=1)\n> -> Hash (cost=1.01..1.01 rows=1 width=17)\n> (actual time=0.020..0.020 rows=1 loops=1)\n> -> Seq Scan on tt_cfg cfg\n> (cost=0.00..1.01 rows=1 width=17) (actual time=0.010..0.011\n> rows=1 loops=1)\n> -> Index Scan using pk_pla on tt_pla vencodpgt\n> (cost=0.00..0.31 rows=1\n> width=24) (actual time=0.099..0.101 rows=1 loops=256)\n> Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\n> vencodpgt.codpgt))\n> Total runtime: 1316670.331 ms\n> (43 rows)\n>\n>\n>\n> with join_collapse_limit set to 1:\n>\n>\n> QUERY PLAN\n> ------------------------------------------------------------------\n> ----------\n> ------------------------------------------------------------------\n> ----------\n> -------------------------------------\n> Nested Loop Left Join (cost=1106.16..25547.95 rows=1 width=195) (actual\n> time=2363.202..9534.955 rows=256 loops=1)\n> Join Filter: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\n> vencodpgt.codpgt))\n> -> Nested Loop (cost=1106.16..25545.43 rows=1 width=199) (actual\n> time=2363.117..9521.704 rows=256 loops=1)\n> Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\n> ive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n> -> Nested Loop (cost=1.34..2576.72 rows=1 width=151) (actual\n> time=154.268..1054.391 rows=414 loops=1)\n> Join Filter: (sub.codsub = dsub.codtab)\n> -> Nested Loop (cost=1.34..2575.65 rows=1 width=160)\n> (actual time=138.588..1032.830 rows=414 loops=1)\n> Join Filter: ((gra.codtam)::text =\n> (sub.codite)::text)\n> -> Nested Loop (cost=1.34..2551.77 rows=21\n> width=137)\n> (actual time=134.262..1018.756 rows=414 loops=1)\n> -> Hash Join (cost=1.34..2533.88 rows=21\n> width=146) (actual time=116.724..996.297 rows=414 loops=1)\n> Hash Cond: ((gra.codcor)::text =\n> (div.codite)::text)\n> -> Nested Loop (cost=0.00..2530.60\n> rows=278 width=123) (actual time=106.879..983.761 rows=414 loops=1)\n> -> Index Scan using\n> i_fk_pro_ddep on\n> tt_pro pro (cost=0.00..108.20 rows=318 width=77) (actual\n> time=44.303..286.618 rows=414 loops=1)\n> Index Cond: (1::numeric =\n> depart)\n> -> Index Scan using\n> pk_gra on tt_gra\n> gra (cost=0.00..7.60 rows=1 width=46) (actual 
time=1.674..1.676 rows=1\n> loops=414)\n> Index Cond: ((pro.filmat =\n> gra.filmat) AND (pro.codmat = gra.codmat))\n> -> Hash (cost=1.15..1.15 rows=15\n> width=32) (actual time=9.824..9.824 rows=15 loops=1)\n> -> Seq Scan on tt_div div\n> (cost=0.00..1.15 rows=15 width=32) (actual time=9.774..9.788 rows=15\n> loops=1)\n> -> Index Scan using pk_ddiv on td_div ddiv\n> (cost=0.00..0.84 rows=1 width=9) (actual time=0.047..0.049 rows=1\n> loops=414)\n> Index Cond: (div.coddiv = ddiv.codtab)\n> -> Seq Scan on tt_sub sub (cost=0.00..1.05 rows=5\n> width=32) (actual time=0.013..0.017 rows=5 loops=414)\n> -> Seq Scan on td_sub dsub (cost=0.00..1.03\n> rows=3 width=9)\n> (actual time=0.040..0.043 rows=3 loops=414)\n> -> Nested Loop (cost=1104.83..22960.29 rows=421 width=114)\n> (actual time=0.727..19.609 rows=857 loops=414)\n> -> Nested Loop (cost=1104.83..1112.46 rows=200 width=80)\n> (actual time=0.559..3.497 rows=394 loops=414)\n> -> Merge Join (cost=1103.59..1107.22 rows=200\n> width=89) (actual time=0.532..1.751 rows=394 loops=414)\n> Merge Cond: ((pla.codpgt = ven.codpgt) AND\n> (pla.filpgt = ven.filpgt))\n> -> Sort (cost=2.56..2.60 rows=18 width=14)\n> (actual time=0.019..0.025 rows=8 loops=414)\n> Sort Key: pla.codpgt, pla.filpgt\n> -> Seq Scan on tt_pla pla\n> (cost=0.00..2.18 rows=18 width=14) (actual time=7.430..7.613 rows=18\n> loops=1)\n> -> Sort (cost=1101.03..1101.53 rows=200\n> width=89) (actual time=0.508..0.805 rows=394 loops=414)\n> Sort Key: ven.codpgt, ven.filpgt\n> -> Nested Loop Left Join\n> (cost=1.01..1093.39 rows=200 width=89) (actual\n> time=39.399..209.096 rows=394\n> loops=1)\n> Join Filter: ((ven.filcli =\n> cfg.vc_filcli) AND (ven.codcli = cfg.vc_codcli))\n> -> Index Scan using\n> i_lc_ven_dathor\n> on tt_ven ven (cost=0.00..1087.38 rows=200 width=106) (actual\n> time=39.378..207.111 rows=394 loops=1)\n> Index Cond: ((dathor >=\n> '2007-07-12 00:00:00'::timestamp without time zone) AND (dathor <=\n> '2007-07-12 23:59:59'::timestamp without time zone))\n> Filter: (('001'::bpchar =\n> codfil) AND (codnat = -3::numeric))\n> -> Materialize (cost=1.01..1.02\n> rows=1 width=17) (actual time=0.001..0.001 rows=1 loops=394)\n> -> Seq Scan on tt_cfg cfg\n> (cost=0.00..1.01 rows=1 width=17) (actual time=0.006..0.008\n> rows=1 loops=1)\n> -> Materialize (cost=1.24..1.25 rows=1 width=9)\n> (actual time=0.001..0.002 rows=1 loops=163116)\n> -> Seq Scan on td_nat nat (cost=0.00..1.24\n> rows=1 width=9) (actual time=9.994..10.001 rows=1 loops=1)\n> Filter: (-3::numeric = codtab)\n> -> Index Scan using pk_ive on tt_ive ive\n> (cost=0.00..108.86\n> rows=30 width=76) (actual time=0.020..0.036 rows=2 loops=163116)\n> Index Cond: (('001'::bpchar = ive.codfil) AND\n> (ive.sequen = ven.sequen))\n> Filter: (sitmov <> 'C'::bpchar)\n> -> Seq Scan on tt_pla vencodpgt (cost=0.00..2.18 rows=18 width=24)\n> (actual time=0.002..0.017 rows=18 loops=256)\n> Total runtime: 9546.971 ms\n> (46 rows)\n>\n>\n> > -----Mensagem original-----\n> > De: Alvaro Herrera [mailto:[email protected]]\n> > Enviada em: quarta-feira, 1 de agosto de 2007 13:53\n> > Para: Carlos H. Reimer\n> > Cc: Tom Lane; [email protected]\n> > Assunto: Re: RES: [PERFORM] Improving select peformance\n> >\n> >\n> > Carlos H. Reimer wrote:\n> > > Hi,\n> > >\n> > > I have changed the view to eliminate the bizarre concatenation\n> > conditions\n> > > but even so the response time did not change.\n> >\n> > Are you sure you did that? 
In the EXPLAIN it's still possible to see\n> > them, for example\n> >\n> > > -> Nested Loop (cost=1.34..3409.04 rows=1 width=159)\n> > > (actual time=0.237..32.520 rows=414 loops=1)\n> > > Join Filter: ((gra.codtam)::text =\n> > ((sub.codite)::text\n> > > || ''::text))\n> > > -> Nested Loop (cost=1.34..3376.84\n> > rows=28 width=136)\n> > > (actual time=0.226..20.978 rows=414 loops=1)\n> > > -> Hash Join (cost=1.34..3356.99 rows=28\n> > > width=145) (actual time=0.215..15.225 rows=414 loops=1)\n> > > Hash Cond: ((gra.codcor)::text =\n> > > ((div.codite)::text || ''::text))\n> >\n> >\n> > --\n> > Alvaro Herrera\n> > http://www.amazon.com/gp/registry/CTMLCN8V17R4\n> > \"Uno combate cuando es necesario... �no cuando est� de humor!\n> > El humor es para el ganado, o para hacer el amor, o para tocar el\n> > baliset. No para combatir.\" (Gurney Halleck)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Thu, 2 Aug 2007 22:47:25 -0300", "msg_from": "\"Carlos H. Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: Improving select peformance" }, { "msg_contents": "\"Carlos H. Reimer\" <[email protected]> writes:\n> In this case, I believe the best choice to improve the performance of this\n> particular SQL statement is adding the 'set join_collapse_limit = 1;' just\n> before the join statement, correct?\n\nThat's a mighty blunt instrument. The real problem with your query is\nthe misestimation of the join sizes --- are you sure the table\nstatistics are up to date? Maybe you'd get better estimates with more\nstatistics (ie, increase the stats target for these tables).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Aug 2007 22:13:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: RES: Improving select peformance " }, { "msg_contents": "Hi,\n\nThanks for the suggestions but apparently the problem in another place.\n\nI have changed the default_statistics_target from to 1000 but the result is\npretty much the same as with when it was 10.\n\nAfter the change the database was vacuumed and analyzed.\n\nLet me know if I miss anything.\n\nIs there anything else we could try to identify why the planner is making\nthis choice?\n\nThank you in advance!\n\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Nested Loop Left Join (cost=1293.06..77117.75 rows=1 width=193) (actual\ntime=8623.464..1317305.299 rows=256 loops=1)\n -> Nested Loop (cost=1293.06..77117.36 rows=1 width=197) (actual\ntime=8607.108..1317280.517 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.11..3223.73 rows=1 width=149) (actual\ntime=127.296..1592.118 rows=414 loops=1)\n Join Filter: (div.coddiv = ddiv.codtab)\n -> Nested Loop (cost=1.11..3222.67 rows=1 width=158)\n(actual time=113.482..1572.752 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.11..3221.60 rows=1 width=167)\n(actual time=108.122..1561.692 rows=414 loops=1)\n Join Filter: ((gra.codcor)::text =\n(div.codite)::text)\n -> Hash Join (cost=1.11..3208.89 rows=9\nwidth=144) (actual time=99.794..1532.498 rows=414 loops=1)\n Hash Cond: ((gra.codtam)::text 
=\n(sub.codite)::text)\n -> Nested Loop (cost=0.00..3205.49\nrows=351 width=121) (actual time=80.811..1510.179 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..128.18 rows=414 width=75) (actual\ntime=35.525..353.854 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.42 rows=1 width=46) (actual time=2.776..2.782 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.05..1.05 rows=5 width=32)\n(actual time=6.510..6.510 rows=5 loops=1)\n -> Seq Scan on tt_sub sub\n(cost=0.00..1.05 rows=5 width=32) (actual time=6.479..6.485 rows=5 loops=1)\n -> Seq Scan on tt_div div (cost=0.00..1.15\nrows=15 width=32) (actual time=0.023..0.034 rows=15 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\nwidth=9) (actual time=0.015..0.018 rows=3 loops=414)\n -> Seq Scan on td_div ddiv (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.035..0.038 rows=3 loops=414)\n -> Hash Join (cost=1291.94..73883.89 rows=487 width=114) (actual\ntime=1.241..3176.965 rows=857 loops=414)\n Hash Cond: (ive.sequen = ven.sequen)\n -> Nested Loop (cost=0.00..70110.52 rows=660415 width=85)\n(actual time=0.038..2621.327 rows=658236 loops=414)\n -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\nwidth=9) (actual time=0.006..0.015 rows=1 loops=414)\n Filter: (-3::numeric = codtab)\n -> Seq Scan on tt_ive ive (cost=0.00..63505.13\nrows=660415 width=76) (actual time=0.029..1681.597 rows=658236 loops=414)\n Filter: ((sitmov <> 'C'::bpchar) AND\n('001'::bpchar = codfil))\n -> Hash (cost=1289.03..1289.03 rows=233 width=89) (actual\ntime=307.760..307.760 rows=394 loops=1)\n -> Hash Left Join (cost=3.48..1289.03 rows=233\nwidth=89) (actual time=61.851..306.897 rows=394 loops=1)\n Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n(ven.codcli = cfg.vc_codcli))\n -> Hash Join (cost=2.45..1286.25 rows=233\nwidth=106) (actual time=61.802..305.928 rows=394 loops=1)\n Hash Cond: ((ven.filpgt = pla.filpgt) AND\n(ven.codpgt = pla.codpgt))\n -> Index Scan using i_lc_ven_dathor on\ntt_ven ven (cost=0.00..1279.72 rows=233 width=106) (actual\ntime=53.539..296.648 rows=394 loops=1)\n Index Cond: ((dathor >= '2007-07-12\n00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar = codfil) AND\n(codnat = -3::numeric))\n -> Hash (cost=2.18..2.18 rows=18\nwidth=14) (actual time=8.237..8.237 rows=18 loops=1)\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=8.162..8.205 rows=18\nloops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=17)\n(actual time=0.029..0.029 rows=1 loops=1)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.018..0.019 rows=1 loops=1)\n -> Index Scan using pk_pla on tt_pla vencodpgt (cost=0.00..0.30 rows=1\nwidth=24) (actual time=0.075..0.076 rows=1 loops=256)\n Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n Total runtime: 1317306.468 ms\n(43 rows)\n\n\nReimer\n\n> -----Mensagem original-----\n> De: [email protected]\n> [mailto:[email protected]]Em nome de Tom Lane\n> Enviada em: quinta-feira, 2 de agosto de 2007 23:13\n> Para: [email protected]\n> Cc: Alvaro Herrera; [email protected]\n> Assunto: Re: RES: RES: [PERFORM] Improving select peformance\n>\n>\n> \"Carlos H. 
Reimer\" <[email protected]> writes:\n> > In this case, I believe the best choice to improve the\n> performance of this\n> > particular SQL statement is adding the 'set join_collapse_limit\n> = 1;' just\n> > before the join statement, correct?\n>\n> That's a mighty blunt instrument. The real problem with your query is\n> the misestimation of the join sizes --- are you sure the table\n> statistics are up to date? Maybe you'd get better estimates with more\n> statistics (ie, increase the stats target for these tables).\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Sat, 4 Aug 2007 11:02:26 -0300", "msg_from": "\"Carlos H. Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: RES: Improving select peformance " } ]
[ { "msg_contents": "Hi, I'm trying to post the following message to the performance group but\nthe message does not appears in the list.\n\nCan someone help to solve this issue?\n\nThanks in advance!\n\n____________________________________________________________________________\n___________________________________________________\n\nHi,\n\nOne of our end users was complaining about a report that was taking too much\ntime to execute and I�ve discovered that the following SQL statement was the\nresponsible for it.\n\nI would appreciate any suggestions to improve performance of it.\n\nThank you very much in advance!\n\n____________________________________________________________________________\n_________________________________________________\n\nexplain analyze select (VEN.DOCUME)::varchar(13) as COLUNA0,\n (VENCODPGT.APEPGT)::varchar(9) as COLUNA1,\n (COALESCE(COALESCE(VEN.VLRLIQ,0) * (CASE VEN.VLRNOT WHEN 0\nTHEN 0 ELSE IVE.VLRMOV / VEN.VLRNOT END),0)) as COLUNA2,\n (COALESCE(IVE.QTDMOV,0)) as COLUNA3,\n (VIPR.NOMPRO)::varchar(83) as COLUNA4,\n (VIPR.REFPRO)::varchar(20) as COLUNA5\n from TV_VEN VEN\n inner join TT_IVE IVE ON IVE.SEQUEN = VEN.SEQUEN and\n IVE.CODFIL = VEN.CODFIL\n inner join TV_IPR VIPR ON VIPR.FILMAT = IVE.FILMAT and\n VIPR.CODMAT = IVE.CODMAT and\n VIPR.CODCOR = IVE.CODCOR and\n VIPR.CODTAM = IVE.CODTAM\n\n left join TT_PLA VENCODPGT ON VEN.FILPGT = VENCODPGT.FILPGT AND\nVEN.CODPGT = VENCODPGT.CODPGT\n where ('001' = VEN.CODFIL)\n and VEN.DATHOR between '07/12/2007 00:00:00' and '07/12/2007\n23:59:59'\n and (VEN.CODNAT = '-3')\n and IVE.SITMOV <> 'C'\n and ('1' = VIPR.DEPART) ;\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Nested Loop Left Join (cost=995.52..75661.01 rows=1 width=195) (actual\ntime=4488.166..1747121.374 rows=256 loops=1)\n -> Nested Loop (cost=995.52..75660.62 rows=1 width=199) (actual\ntime=4481.323..1747105.903 rows=256 loops=1)\n Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat =\nive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))\n -> Nested Loop (cost=1.11..3906.12 rows=1 width=151) (actual\ntime=15.626..128.934 rows=414 loops=1)\n Join Filter: (div.coddiv = ddiv.codtab)\n -> Nested Loop (cost=1.11..3905.05 rows=1 width=160)\n(actual time=15.611..121.455 rows=414 loops=1)\n Join Filter: (sub.codsub = dsub.codtab)\n -> Nested Loop (cost=1.11..3903.99 rows=1 width=169)\n(actual time=15.593..113.866 rows=414 loops=1)\n Join Filter: ((gra.codcor)::text =\n((div.codite)::text || ''::text))\n -> Hash Join (cost=1.11..3888.04 rows=11\nwidth=146) (actual time=15.560..85.376 rows=414 loops=1)\n Hash Cond: ((gra.codtam)::text =\n((sub.codite)::text || ''::text))\n -> Nested Loop (cost=0.00..3883.64\nrows=423 width=123) (actual time=15.376..81.482 rows=414 loops=1)\n -> Index Scan using i_fk_pro_ddep on\ntt_pro pro (cost=0.00..149.65 rows=516 width=77) (actual\ntime=15.244..30.586 rows=414 loops=1)\n Index Cond: (1::numeric =\ndepart)\n -> Index Scan using pk_gra on tt_gra\ngra (cost=0.00..7.22 rows=1 width=46) (actual time=0.104..0.110 rows=1\nloops=414)\n Index Cond: ((pro.filmat =\ngra.filmat) AND (pro.codmat = gra.codmat))\n -> Hash (cost=1.05..1.05 rows=5 width=32)\n(actual time=0.048..0.048 rows=5 loops=1)\n -> Seq Scan on tt_sub sub\n(cost=0.00..1.05 rows=5 width=32) (actual time=0.016..0.024 rows=5 loops=1)\n -> Seq Scan on tt_div div (cost=0.00..1.15\nrows=15 width=32) 
(actual time=0.004..0.022 rows=15 loops=414)\n -> Seq Scan on td_sub dsub (cost=0.00..1.03 rows=3\nwidth=9) (actual time=0.003..0.007 rows=3 loops=414)\n -> Seq Scan on td_div ddiv (cost=0.00..1.03 rows=3 width=9)\n(actual time=0.002..0.007 rows=3 loops=414)\n -> Hash Join (cost=994.41..71746.74 rows=388 width=114) (actual\ntime=5.298..4218.486 rows=857 loops=414)\n Hash Cond: (ive.sequen = ven.sequen)\n -> Nested Loop (cost=0.00..68318.52 rows=647982 width=85)\n(actual time=0.026..3406.170 rows=643739 loops=414)\n -> Seq Scan on td_nat nat (cost=0.00..1.24 rows=1\nwidth=9) (actual time=0.004..0.014 rows=1 loops=414)\n Filter: (-3::numeric = codtab)\n -> Seq Scan on tt_ive ive (cost=0.00..61837.46\nrows=647982 width=76) (actual time=0.017..1926.983 rows=643739 loops=414)\n Filter: ((sitmov <> 'C'::bpchar) AND\n('001'::bpchar = codfil))\n -> Hash (cost=992.08..992.08 rows=186 width=89) (actual\ntime=33.234..33.234 rows=394 loops=1)\n -> Hash Left Join (cost=3.48..992.08 rows=186\nwidth=89) (actual time=13.163..32.343 rows=394 loops=1)\n Hash Cond: ((ven.filcli = cfg.vc_filcli) AND\n(ven.codcli = cfg.vc_codcli))\n -> Hash Join (cost=2.45..989.65 rows=186\nwidth=106) (actual time=13.131..31.060 rows=394 loops=1)\n Hash Cond: ((ven.filpgt = pla.filpgt) AND\n(ven.codpgt = pla.codpgt))\n -> Index Scan using i_lc_ven_dathor on\ntt_ven ven (cost=0.00..983.95 rows=186 width=106) (actual\ntime=13.026..29.634 rows=394 loops=1)\n Index Cond: ((dathor >= '2007-07-12\n00:00:00'::timestamp without time zone) AND (dathor <= '2007-07-12\n23:59:59'::timestamp without time zone))\n Filter: (('001'::bpchar = codfil) AND\n(codnat = -3::numeric))\n -> Hash (cost=2.18..2.18 rows=18\nwidth=14) (actual time=0.081..0.081 rows=18 loops=1)\n -> Seq Scan on tt_pla pla\n(cost=0.00..2.18 rows=18 width=14) (actual time=0.013..0.043 rows=18\nloops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=17)\n(actual time=0.017..0.017 rows=1 loops=1)\n -> Seq Scan on tt_cfg cfg\n(cost=0.00..1.01 rows=1 width=17) (actual time=0.010..0.011 rows=1 loops=1)\n -> Index Scan using pk_pla on tt_pla vencodpgt (cost=0.00..0.31 rows=1\nwidth=24) (actual time=0.037..0.040 rows=1 loops=256)\n Index Cond: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt =\nvencodpgt.codpgt))\n Total runtime: 1747122.219 ms\n(43 rows)\n\n____________________________________________________________________________\n_________________________________________________________\n\nTable and view definitions can be accessed at:\nhttp://www.opendb.com.br/v1/problem0707.txt\n\nReimer\n\n\n\n\n\n\nHi, I'm trying to post the following message to the \nperformance group but the message does not appears in the list. \n\n \nCan someone help to solve this \nissue?\n \nThanks in advance!\n \n_______________________________________________________________________________________________________________________________\n \nHi,\n \nOne of our end \nusers was complaining about a report that was taking too much time to execute \nand I´ve discovered that the following SQL statement was the responsible for it. 
\n\nReimer", "msg_date": "Thu, 19 Jul 2007 22:19:35 -0300", "msg_from": "\"Carlos H. 
Reimer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with posting" } ]
[ { "msg_contents": "Sorry for the cross-post, but this is performance and advocacy \nrelated...\n\nHas anyone benchmarked HEAD against 8.2? I'd like some numbers to use \nin my OSCon lightning talk. Numbers for both with and without HOT \nwould be even better (I know we've got HOT-specific benchmarks, but I \nwant complete 8.2 -> 8.3 numbers).\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Fri, 20 Jul 2007 00:58:35 -0400", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "8.2 -> 8.3 performance numbers" }, { "msg_contents": "Jim,\n\n> Has anyone benchmarked HEAD against 8.2? I'd like some numbers to use in \n> my OSCon lightning talk. Numbers for both with and without HOT would be \n> even better (I know we've got HOT-specific benchmarks, but I want \n> complete 8.2 -> 8.3 numbers).\n\nWe've done it on TPCE, which is a hard benchmark for PostgreSQL. On \nthat it's +9% without HOT and +13% with HOT. I think SpecJ would show a \ngreater difference, but we're still focussed on benchmarks we can \npublish (i.e. 8.2.4) right now.\n\n--Josh\n", "msg_date": "Fri, 20 Jul 2007 10:03:31 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On Jul 20, 2007, at 1:03 PM, Josh Berkus wrote:\n> Jim,\n>\n>> Has anyone benchmarked HEAD against 8.2? I'd like some numbers to \n>> use in my OSCon lightning talk. Numbers for both with and without \n>> HOT would be even better (I know we've got HOT-specific \n>> benchmarks, but I want complete 8.2 -> 8.3 numbers).\n>\n> We've done it on TPCE, which is a hard benchmark for PostgreSQL. \n> On that it's +9% without HOT and +13% with HOT. I think SpecJ \n> would show a greater difference, but we're still focussed on \n> benchmarks we can publish (i.e. 8.2.4) right now.\n\nBleh, that's not a very impressive number.\n\nAnyone else have something better?\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Fri, 20 Jul 2007 14:32:34 -0400", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On 7/20/07, Josh Berkus <[email protected]> wrote:\n> Jim,\n>\n> > Has anyone benchmarked HEAD against 8.2? I'd like some numbers to use in\n> > my OSCon lightning talk. Numbers for both with and without HOT would be\n> > even better (I know we've got HOT-specific benchmarks, but I want\n> > complete 8.2 -> 8.3 numbers).\n>\n> We've done it on TPCE, which is a hard benchmark for PostgreSQL. On\n> that it's +9% without HOT and +13% with HOT. I think SpecJ would show a\n> greater difference, but we're still focussed on benchmarks we can\n> publish (i.e. 8.2.4) right now.\n\nAre there any industry standard benchmarks that you know of which\nPostgreSQL excels at?\n\nmerlin\n", "msg_date": "Wed, 25 Jul 2007 06:58:00 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On Fri, 2007-07-20 at 10:03 -0700, Josh Berkus wrote:\n> Jim,\n> \n> > Has anyone benchmarked HEAD against 8.2? I'd like some numbers to use in \n> > my OSCon lightning talk. 
Numbers for both with and without HOT would be \n> > even better (I know we've got HOT-specific benchmarks, but I want \n> > complete 8.2 -> 8.3 numbers).\n> \n> We've done it on TPCE, which is a hard benchmark for PostgreSQL. On \n> that it's +9% without HOT and +13% with HOT. I think SpecJ would show a \n> greater difference, but we're still focussed on benchmarks we can \n> publish (i.e. 8.2.4) right now.\n\nJosh,\n\nShould you get the chance I would appreciate a comparative test for\nTPC-E.\n\n1. Normal TPC-E versus \n2. TPC-E with all FKs against Fixed tables replaced with CHECK( col IN\n(VALUES(x,x,x,...))) constraints on the referencing tables.\n\nI have reasonable evidence that Referential Integrity is the major\nperformance bottleneck and would like some objective evidence that this\nis the case.\n\nNo rush, since it will be an 8.4 thing to discuss and improve this\nsubstantially in any of the ways I envisage.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 25 Jul 2007 13:52:13 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "Am Mittwoch 25 Juli 2007 schrieb Simon Riggs:\n> I have reasonable evidence that Referential Integrity is the major\n> performance bottleneck and would like some objective evidence that this\n> is the case.\n\nJust curious, will 8.3 still check FK constraints (and use locks) even if the \nreferencing column value does not change?\n", "msg_date": "Wed, 25 Jul 2007 15:07:23 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On Wed, 2007-07-25 at 15:07 +0200, Mario Weilguni wrote:\n> Am Mittwoch 25 Juli 2007 schrieb Simon Riggs:\n> > I have reasonable evidence that Referential Integrity is the major\n> > performance bottleneck and would like some objective evidence that this\n> > is the case.\n> \n> Just curious, will 8.3 still check FK constraints (and use locks) even if the \n> referencing column value does not change?\n\nThat is optimised away in 8.0+\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 25 Jul 2007 15:09:09 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On 7/25/07, Simon Riggs <[email protected]> wrote:\n> On Fri, 2007-07-20 at 10:03 -0700, Josh Berkus wrote:\n> > Jim,\n> >\n> > > Has anyone benchmarked HEAD against 8.2? I'd like some numbers to use in\n> > > my OSCon lightning talk. Numbers for both with and without HOT would be\n> > > even better (I know we've got HOT-specific benchmarks, but I want\n> > > complete 8.2 -> 8.3 numbers).\n> >\n> > We've done it on TPCE, which is a hard benchmark for PostgreSQL. On\n> > that it's +9% without HOT and +13% with HOT. I think SpecJ would show a\n> > greater difference, but we're still focussed on benchmarks we can\n> > publish (i.e. 8.2.4) right now.\n>\n> Josh,\n>\n> Should you get the chance I would appreciate a comparative test for\n> TPC-E.\n>\n> 1. Normal TPC-E versus\n> 2. 
TPC-E with all FKs against Fixed tables replaced with CHECK( col IN\n> (VALUES(x,x,x,...))) constraints on the referencing tables.\n>\n> I have reasonable evidence that Referential Integrity is the major\n> performance bottleneck and would like some objective evidence that this\n> is the case.\n>\n> No rush, since it will be an 8.4 thing to discuss and improve this\n> substantially in any of the ways I envisage.\n\njust a small 'me too' here, the RI penalty seems higher than it should\nbe...especially when the foreign key table is very small, and I can\nsee how this would impact benchmarks.\n\nmerlin\n", "msg_date": "Wed, 25 Jul 2007 10:09:50 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "On Wed, 2007-07-25 at 10:09 -0400, Merlin Moncure wrote:\n> On 7/25/07, Simon Riggs <[email protected]> wrote:\n> >\n> > Should you get the chance I would appreciate a comparative test for\n> > TPC-E.\n> >\n> > 1. Normal TPC-E versus\n> > 2. TPC-E with all FKs against Fixed tables replaced with CHECK( col IN\n> > (VALUES(x,x,x,...))) constraints on the referencing tables.\n> >\n> > I have reasonable evidence that Referential Integrity is the major\n> > performance bottleneck and would like some objective evidence that this\n> > is the case.\n> >\n> > No rush, since it will be an 8.4 thing to discuss and improve this\n> > substantially in any of the ways I envisage.\n> \n> just a small 'me too' here, the RI penalty seems higher than it should\n> be...especially when the foreign key table is very small, and I can\n> see how this would impact benchmarks.\n\nAny measurements to back that up would be appreciated. \"Turning it off\"\nisn't really a valid comparison because we do want to make the checks\nand expect there to be some cost to that. We just want to quantify the\ncost to allow prioritising our efforts to improve performance on that.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 25 Jul 2007 15:22:25 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n\n> On Wed, 2007-07-25 at 10:09 -0400, Merlin Moncure wrote:\n>> \n>> just a small 'me too' here, the RI penalty seems higher than it should\n>> be...especially when the foreign key table is very small, and I can\n>> see how this would impact benchmarks.\n>\n> Any measurements to back that up would be appreciated. \"Turning it off\"\n> isn't really a valid comparison because we do want to make the checks\n> and expect there to be some cost to that. We just want to quantify the\n> cost to allow prioritising our efforts to improve performance on that.\n\nIf anyone's interested in this I would be very interested in seeing the\nresults of your application benchmarks with various parts of the RI checking\ncode turned off.\n\nAttached is a patch which adds three gucs for profiling purposes which cut off\nthe RI checks at various stages. To use them you would want to benchmark your\napplication five times in comparable conditions:\n\nall variables set to 'no'\nskip_ri_locks set to 'yes'\nskip_ri_queries set to 'yes'\nskip_ri_triggers set to 'yes'\nno RI constraints at all \n\nThe last ought to be nearly identical to the fourth case. 
Note that it's\nreally important to repeat your benchmarks several times to ensure that you're\nseeing repeatable results. Measuring CPU overhead is pretty tricky since a\nsingle checkpoint or autovacuum run can completely throw off your results.\n\nIn my limited testing I found a *huge* effect for batch loads where many\ninserts are done in a single transaction. I only see about a 20% hit on\npgbench with RI checks half of which comes from the trigger overhead and about\na quarter of which comes from each of the SPI queries and the locks. I have\nsome ideas for tackling the SPI queries which would help the batch loading\ncase but I'm not sure how much resources it makes sense to expend to save 5%\nin the OLTP case.\n\n\n\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com", "msg_date": "Wed, 25 Jul 2007 15:43:29 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] 8.2 -> 8.3 performance numbers" } ]
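
For anyone who wants to try Simon's comparison without applying the profiling patch, the substitution he describes looks roughly like the following. The table, constraint and value names are illustrative only (loosely modelled on a TPC-E style status lookup), not the real benchmark schema.

-- Variant 1: ordinary referential integrity against a small, fixed lookup table
ALTER TABLE trade
  ADD CONSTRAINT trade_status_fk
  FOREIGN KEY (status_id) REFERENCES status_type (st_id);

-- Variant 2: the same domain enforced as a CHECK constraint on the
-- referencing table, avoiding the per-row RI trigger, its SPI query and
-- the lock taken on the referenced row:
ALTER TABLE trade DROP CONSTRAINT trade_status_fk;
ALTER TABLE trade
  ADD CONSTRAINT trade_status_chk
  CHECK (status_id IN ('ACTV', 'CMPT', 'CNCL', 'PNDG', 'SBMT'));

The obvious trade-off is that the CHECK list has to be maintained by hand if the lookup table ever changes, which is only reasonable for the genuinely fixed tables Simon restricts the comparison to.
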
[ { "msg_contents": "Today, I looked at 'top' on my PG server and saw a pid that reported 270 hours \nof CPU time. Considering this is a very simple query, I was surprised to say \nthe least. I was about to just kill the pid, but I figured I'd try and see \nexactly what it was stuck doing for so long.\n\nHere's the strace summary as run for a few second sample:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 97.25 0.671629 92 7272 semop\n 1.76 0.012171 406 30 recvfrom\n 0.57 0.003960 66 60 gettimeofday\n 0.36 0.002512 28 90 sendto\n 0.05 0.000317 10 32 lseek\n 0.01 0.000049 1 48 select\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.690638 7532 total\n\nHere's the query:\n\nselect id from eventkeywords where word = '00003322'\n\nIf I run the query manually, it completes in about 500ms, which is very reasonable.\n\nThere are 408563 rows in this table. I just noticed there is no index on word ( \nthere should be! ). Would this have caused the problem?\n\nThis is 8.0.12\n\nLinux sunrise 2.6.15-26-amd64-server #1 SMP Fri Sep 8 20:33:15 UTC 2006 x86_64 \nGNU/Linux\n\nAny idea what might have set it into this loop?\n\n-Dan\n", "msg_date": "Fri, 20 Jul 2007 09:43:50 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Simple query showing 270 hours of CPU time" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> Here's the strace summary as run for a few second sample:\n\n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 97.25 0.671629 92 7272 semop\n> 1.76 0.012171 406 30 recvfrom\n> 0.57 0.003960 66 60 gettimeofday\n> 0.36 0.002512 28 90 sendto\n> 0.05 0.000317 10 32 lseek\n> 0.01 0.000049 1 48 select\n> ------ ----------- ----------- --------- --------- ----------------\n> 100.00 0.690638 7532 total\n\n> Here's the query:\n> select id from eventkeywords where word = '00003322'\n\nHow sure are you that (a) that's really what it's doing and (b) you are\nnot observing multiple executions of the query? There are no recvfrom\ncalls in the inner loops of the backend AFAIR, so this looks to me like\nthe execution of 30 different queries. The number of semops is\ndistressingly high, but that's a contention issue not an\namount-of-runtime issue. I think you're looking at a backend that has\nsimply executed one heckuva lot of queries on behalf of its client,\nand that inquiring into what the client thinks it's doing might be the\nfirst order of business.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 12:42:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query showing 270 hours of CPU time " }, { "msg_contents": "\n> Today, I looked at 'top' on my PG server and saw a pid that reported 270 \n> hours of CPU time. Considering this is a very simple query, I was \n> surprised to say the least. 
I was about to just kill the pid, but I \n> figured I'd try and see exactly what it was stuck doing for so long.\n\n\tIf you are using connection pooling, or if your client keeps the \nconnections for a long time, this backend could be very old...\n\tWith PHP's persistent connections, for instance, backends restart when \nyou restart the webserver, which isn't usually very often.\n", "msg_date": "Fri, 20 Jul 2007 18:55:02 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query showing 270 hours of CPU time" }, { "msg_contents": "Tom Lane wrote:\n> Dan Harris <[email protected]> writes:\n>> Here's the strace summary as run for a few second sample:\n> \n>> % time seconds usecs/call calls errors syscall\n>> ------ ----------- ----------- --------- --------- ----------------\n>> 97.25 0.671629 92 7272 semop\n>> 1.76 0.012171 406 30 recvfrom\n>> 0.57 0.003960 66 60 gettimeofday\n>> 0.36 0.002512 28 90 sendto\n>> 0.05 0.000317 10 32 lseek\n>> 0.01 0.000049 1 48 select\n>> ------ ----------- ----------- --------- --------- ----------------\n>> 100.00 0.690638 7532 total\n> \n>> Here's the query:\n>> select id from eventkeywords where word = '00003322'\n> \n> How sure are you that (a) that's really what it's doing and (b) you are\n> not observing multiple executions of the query? There are no recvfrom\n> calls in the inner loops of the backend AFAIR, so this looks to me like\n> the execution of 30 different queries. The number of semops is\n> distressingly high, but that's a contention issue not an\n> amount-of-runtime issue. \n\nYou were absolutely right. This is one connection that is doing a whole lot of \n( slow ) queries. I jumped the gun on this and assumed it was a single query \ntaking this long. Sorry to waste time and bandwidth.\n\nSince you mentioned the number of semops is distressingly high, does this \nindicate a tuning problem? The machine has 64GB of RAM and as far as I can tell \nabout 63GB is all cache. I wonder if this is a clue to an undervalued \nmemory-related setting somewhere?\n\n-Dan\n", "msg_date": "Fri, 20 Jul 2007 11:18:35 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple query showing 270 hours of CPU time" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> Since you mentioned the number of semops is distressingly high, does this \n> indicate a tuning problem?\n\nMore like an old-version problem. We've done a lot of work on\nconcurrent performance since 8.0.x, and most likely you are hitting\none of the bottlenecks that have been solved since then.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jul 2007 14:12:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query showing 270 hours of CPU time " } ]
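As a footnote to the diagnosis above: the quickest way to see what the client behind a busy backend is actually running is pg_stat_activity rather than strace. A sketch using the column names of the 8.0/8.1 era discussed in this thread (current_query is only populated when stats_command_string is on; recent releases call the columns pid and query instead):

    -- What is backend 19478 running right now, and since when?
    SELECT procpid, usename, datname, query_start, current_query
    FROM pg_stat_activity
    WHERE procpid = 19478;

    -- Which backends have been on their current statement the longest?
    SELECT procpid, now() - query_start AS running_for, current_query
    FROM pg_stat_activity
    ORDER BY query_start
    LIMIT 10;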
[ { "msg_contents": "I'm having this very disturbing problem. I got a table with about\n100,000 rows in it. Our software deletes the majority of these rows and\nthen bulk loads another 100,000 rows into the same table. All this is\nhappening within a single transaction. I then perform a simple \"select\ncount(*) from ...\" statement that never returns. In the mean time, the\nbackend Postgres process is taking close to 100% of the CPU. The hang-up\ndoes not always happen on the same statement but eventually it happens 2\nout of 3 times. If I dump and then restore the schema where this table\nresides the problem is gone until the next time we run through the whole\nprocess of deleting, loading and querying the table.\n\n \n\nThere is no other activity in the database. All requested locks are\ngranted. \n\n \n\nHas anyone seen similar behavior? \n\n \n\nSome details:\n\n \n\nPostgres v 8.1.2\n\nLinux Fedora 3\n\n \n\nshared_buffers = 65536\n\ntemp_buffers = 32768\n\nwork_mem = 131072 \n\nmaintenance_work_mem = 131072\n\nmax_stack_depth = 8192\n\nmax_fsm_pages = 40000\n\nwal_buffers = 16\n\ncheckpoint_segments = 16\n\n \n\n \n\ntop reports\n\n \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16 postmaster\n\n \n\nps -ef | grep postgres reports \n\n \n\npostgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase\n[local] SELECT\n\n \n\nstrace -p 19478 \n\nno system calls reported\n\n \n\n \n\nThanks for the help!\n\nJozsef\n\n\n\n\n\n\n\n\n\n\nI’m having this very disturbing problem. I got a table\nwith about 100,000 rows in it. Our software deletes the majority of these rows\nand then bulk loads another 100,000 rows into the same table. All this is\nhappening within a single transaction. I then perform a simple “select\ncount(*) from …” statement that never returns. In the mean time, the\nbackend Postgres process is taking close to 100% of the CPU. The hang-up does\nnot always happen on the same statement but eventually it happens 2 out of 3\ntimes. If I dump and then restore the schema where this table resides the\nproblem is gone until the next time we run through the whole process of\ndeleting, loading and querying the table.\n \nThere is no other activity in the database. All requested\nlocks are granted. \n \nHas anyone seen similar behavior? \n \nSome details:\n \nPostgres v 8.1.2\nLinux Fedora 3\n \nshared_buffers = 65536\ntemp_buffers = 32768\nwork_mem = 131072 \nmaintenance_work_mem = 131072\nmax_stack_depth = 8192\nmax_fsm_pages = 40000\nwal_buffers = 16\ncheckpoint_segments = 16\n \n \ntop reports\n \n  PID USER      PR \nNI  VIRT  RES  SHR S %CPU %MEM    TIME+ \nCOMMAND\n19478 postgres  25   0  740m 721m 536m R\n99.7  4.4 609:41.16 postmaster\n \nps –ef | grep postgres reports \n \npostgres 19478  8061 99 00:11\n?        10:13:03 postgres: user dbase\n[local] SELECT\n \nstrace –p 19478 \nno system calls reported\n \n \nThanks for the help!\nJozsef", "msg_date": "Sun, 22 Jul 2007 10:29:04 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Simple select hangs while CPU close to 100%" }, { "msg_contents": "Hello\n\ndid you vacuum?\n\nIt's good technique do vacuum table after remove bigger number of rows.\n\nRegards\nPavel Stehule\n\n2007/7/22, Jozsef Szalay <[email protected]>:\n>\n>\n>\n>\n> I'm having this very disturbing problem. I got a table with about 100,000\n> rows in it. 
Our software deletes the majority of these rows and then bulk\n> loads another 100,000 rows into the same table. All this is happening within\n> a single transaction. I then perform a simple \"select count(*) from …\"\n> statement that never returns. In the mean time, the backend Postgres process\n> is taking close to 100% of the CPU. The hang-up does not always happen on\n> the same statement but eventually it happens 2 out of 3 times. If I dump and\n> then restore the schema where this table resides the problem is gone until\n> the next time we run through the whole process of deleting, loading and\n> querying the table.\n>\n>\n>\n> There is no other activity in the database. All requested locks are granted.\n>\n>\n>\n> Has anyone seen similar behavior?\n>\n>\n>\n> Some details:\n>\n>\n>\n> Postgres v 8.1.2\n>\n> Linux Fedora 3\n>\n>\n>\n> shared_buffers = 65536\n>\n> temp_buffers = 32768\n>\n> work_mem = 131072\n>\n> maintenance_work_mem = 131072\n>\n> max_stack_depth = 8192\n>\n> max_fsm_pages = 40000\n>\n> wal_buffers = 16\n>\n> checkpoint_segments = 16\n>\n>\n>\n>\n>\n> top reports\n>\n>\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>\n> 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16 postmaster\n>\n>\n>\n> ps –ef | grep postgres reports\n>\n>\n>\n> postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase [local]\n> SELECT\n>\n>\n>\n> strace –p 19478\n>\n> no system calls reported\n>\n>\n>\n>\n>\n> Thanks for the help!\n>\n> Jozsef\n", "msg_date": "Sun, 22 Jul 2007 17:52:34 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "\n> I’m having this very disturbing problem. I got a table with about\n> 100,000 rows in it. Our software deletes the majority of these rows\n> and then bulk loads another 100,000 rows into the same table. All this\n> is happening within a single transaction. I then perform a simple\n> “select count(*) from …” statement that never returns. In the mean\n\nCOUNT(*) is always slow; but either way if the process is deleting and\nthen adding records, can't you just keep track of how may records you\nloaded [aka count++] rather than turning around and asking the database\nbefore any statistics have had a chance to be updated.\n\n> time, the backend Postgres process is taking close to 100% of the\n> CPU. The hang-up does not always happen on the same statement but\n> eventually it happens 2 out of 3 times. If I dump and then restore the\n> schema where this table resides the problem is gone until the next\n> time we run through the whole process of deleting, loading and\n> querying the table.\n> There is no other activity in the database. All requested locks are\n> granted. \n\n\n", "msg_date": "Sun, 22 Jul 2007 14:32:45 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "Hi Pavel,\n\n\nYes I did vacuum. In fact the only way to \"fix\" this problem is\nexecuting a \"full\" vacuum. 
The plain vacuum did not help.\n\n\nRegards,\nJozsef\n\n\n-----Original Message-----\nFrom: Pavel Stehule [mailto:[email protected]] \nSent: Sunday, July 22, 2007 10:53 AM\nTo: Jozsef Szalay\nCc: [email protected]\nSubject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n\nHello\n\ndid you vacuum?\n\nIt's good technique do vacuum table after remove bigger number of rows.\n\nRegards\nPavel Stehule\n\n2007/7/22, Jozsef Szalay <[email protected]>:\n>\n>\n>\n>\n> I'm having this very disturbing problem. I got a table with about\n100,000\n> rows in it. Our software deletes the majority of these rows and then\nbulk\n> loads another 100,000 rows into the same table. All this is happening\nwithin\n> a single transaction. I then perform a simple \"select count(*) from\n...\"\n> statement that never returns. In the mean time, the backend Postgres\nprocess\n> is taking close to 100% of the CPU. The hang-up does not always happen\non\n> the same statement but eventually it happens 2 out of 3 times. If I\ndump and\n> then restore the schema where this table resides the problem is gone\nuntil\n> the next time we run through the whole process of deleting, loading\nand\n> querying the table.\n>\n>\n>\n> There is no other activity in the database. All requested locks are\ngranted.\n>\n>\n>\n> Has anyone seen similar behavior?\n>\n>\n>\n> Some details:\n>\n>\n>\n> Postgres v 8.1.2\n>\n> Linux Fedora 3\n>\n>\n>\n> shared_buffers = 65536\n>\n> temp_buffers = 32768\n>\n> work_mem = 131072\n>\n> maintenance_work_mem = 131072\n>\n> max_stack_depth = 8192\n>\n> max_fsm_pages = 40000\n>\n> wal_buffers = 16\n>\n> checkpoint_segments = 16\n>\n>\n>\n>\n>\n> top reports\n>\n>\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>\n> 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\npostmaster\n>\n>\n>\n> ps -ef | grep postgres reports\n>\n>\n>\n> postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase\n[local]\n> SELECT\n>\n>\n>\n> strace -p 19478\n>\n> no system calls reported\n>\n>\n>\n>\n>\n> Thanks for the help!\n>\n> Jozsef\n\n", "msg_date": "Wed, 25 Jul 2007 12:37:23 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "2007/7/25, Jozsef Szalay <[email protected]>:\n> Hi Pavel,\n>\n>\n> Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> executing a \"full\" vacuum. The plain vacuum did not help.\n>\n>\n> Regards,\n> Jozsef\n\nIt's question if vacuum was done.\n\nTry vacuum verbose;\n\nMaybe your max_fsm_pages is too low and you have to up it.\n\nRegards\nPavel Stehule\n", "msg_date": "Wed, 25 Jul 2007 19:42:11 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "\nThe actual application does not have to perform this statement since, as\nyou suggested; it keeps track of what got loaded. However, the table has\nto go thru a de-duplication process because bulk load is utilized to\nload the potentially large number (millions) of rows. All indexes were\ndropped for the bulk load. This de-duplication procedure starts with a\nSELECT statement that identifies duplicate rows. This is the original\nSELECT that never returned. 
Later on I used the SELECT COUNT(*) to see\nif somehow my original SELECT had something to do with the hang and I\nfound that this simple query hung as well.\n\nThe only way I could avoid getting into this stage was to perform a\nVACUUM FULL on the table before the bulk load. I would prefer not using\na full vacuum every time due to the exclusive access to the table and\ntime it requires. The plain VACUUM did not work.\n\nRegards,\nJozsef\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adam Tauno\nWilliams\nSent: Sunday, July 22, 2007 1:33 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n\n\n> I'm having this very disturbing problem. I got a table with about\n> 100,000 rows in it. Our software deletes the majority of these rows\n> and then bulk loads another 100,000 rows into the same table. All this\n> is happening within a single transaction. I then perform a simple\n> \"select count(*) from ...\" statement that never returns. In the mean\n\nCOUNT(*) is always slow; but either way if the process is deleting and\nthen adding records, can't you just keep track of how may records you\nloaded [aka count++] rather than turning around and asking the database\nbefore any statistics have had a chance to be updated.\n\n> time, the backend Postgres process is taking close to 100% of the\n> CPU. The hang-up does not always happen on the same statement but\n> eventually it happens 2 out of 3 times. If I dump and then restore the\n> schema where this table resides the problem is gone until the next\n> time we run through the whole process of deleting, loading and\n> querying the table.\n> There is no other activity in the database. All requested locks are\n> granted. \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n", "msg_date": "Wed, 25 Jul 2007 12:51:39 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "In response to \"Jozsef Szalay\" <[email protected]>:\n\n> Hi Pavel,\n> \n> Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> executing a \"full\" vacuum. The plain vacuum did not help.\n\nBased on the information, I would expect that this problem is the result\nof improper PG tuning, or inadequate server sizing (RAM, etc).\n\nA table changing 100,000 rows shouldn't cause enough bloat to hurt\ncount(*)'s performance significantly, unless something else is wrong.\n\nSome quick piddling around shows that tables with over a million rows\ncan count(*) the whole table in about 1/2 second on a system 2G of RAM.\nA table with 13 mil takes a min and a half. We haven't specifically\ntuned this server for count(*) performance, as it's not a priority\nfor this database, so I expect the performance drop for the 13 mil\ndatabase is a result of exhausting shared_buffers and hitting the\ndisks.\n\n> -----Original Message-----\n> From: Pavel Stehule [mailto:[email protected]] \n> Sent: Sunday, July 22, 2007 10:53 AM\n> To: Jozsef Szalay\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> \n> Hello\n> \n> did you vacuum?\n> \n> It's good technique do vacuum table after remove bigger number of rows.\n> \n> Regards\n> Pavel Stehule\n> \n> 2007/7/22, Jozsef Szalay <[email protected]>:\n> >\n> >\n> >\n> >\n> > I'm having this very disturbing problem. 
I got a table with about\n> 100,000\n> > rows in it. Our software deletes the majority of these rows and then\n> bulk\n> > loads another 100,000 rows into the same table. All this is happening\n> within\n> > a single transaction. I then perform a simple \"select count(*) from\n> ...\"\n> > statement that never returns. In the mean time, the backend Postgres\n> process\n> > is taking close to 100% of the CPU. The hang-up does not always happen\n> on\n> > the same statement but eventually it happens 2 out of 3 times. If I\n> dump and\n> > then restore the schema where this table resides the problem is gone\n> until\n> > the next time we run through the whole process of deleting, loading\n> and\n> > querying the table.\n> >\n> >\n> >\n> > There is no other activity in the database. All requested locks are\n> granted.\n> >\n> >\n> >\n> > Has anyone seen similar behavior?\n> >\n> >\n> >\n> > Some details:\n> >\n> >\n> >\n> > Postgres v 8.1.2\n> >\n> > Linux Fedora 3\n> >\n> >\n> >\n> > shared_buffers = 65536\n> >\n> > temp_buffers = 32768\n> >\n> > work_mem = 131072\n> >\n> > maintenance_work_mem = 131072\n> >\n> > max_stack_depth = 8192\n> >\n> > max_fsm_pages = 40000\n> >\n> > wal_buffers = 16\n> >\n> > checkpoint_segments = 16\n> >\n> >\n> >\n> >\n> >\n> > top reports\n> >\n> >\n> >\n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> >\n> > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> postmaster\n> >\n> >\n> >\n> > ps -ef | grep postgres reports\n> >\n> >\n> >\n> > postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase\n> [local]\n> > SELECT\n> >\n> >\n> >\n> > strace -p 19478\n> >\n> > no system calls reported\n> >\n> >\n> >\n> >\n> >\n> > Thanks for the help!\n> >\n> > Jozsef\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 25 Jul 2007 13:59:20 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "In response to \"Jozsef Szalay\" <[email protected]>:\n\n> Hi Pavel,\n> \n> \n> Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> executing a \"full\" vacuum. 
The plain vacuum did not help.\n\nI read over my previous reply and picked up on something else ...\n\nWhat is your vacuum _policy_? i.e. how often do you vacuum/analyze?\nThe fact that you had to do a vacuum full to get things back under\ncontrol tends to suggest that your current vacuum schedule is not\naggressive enough.\n\nAn explicit vacuum of this table after the large delete/insert may\nbe helpful.\n\n> -----Original Message-----\n> From: Pavel Stehule [mailto:[email protected]] \n> Sent: Sunday, July 22, 2007 10:53 AM\n> To: Jozsef Szalay\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> \n> Hello\n> \n> did you vacuum?\n> \n> It's good technique do vacuum table after remove bigger number of rows.\n> \n> Regards\n> Pavel Stehule\n> \n> 2007/7/22, Jozsef Szalay <[email protected]>:\n> >\n> >\n> >\n> >\n> > I'm having this very disturbing problem. I got a table with about\n> 100,000\n> > rows in it. Our software deletes the majority of these rows and then\n> bulk\n> > loads another 100,000 rows into the same table. All this is happening\n> within\n> > a single transaction. I then perform a simple \"select count(*) from\n> ...\"\n> > statement that never returns. In the mean time, the backend Postgres\n> process\n> > is taking close to 100% of the CPU. The hang-up does not always happen\n> on\n> > the same statement but eventually it happens 2 out of 3 times. If I\n> dump and\n> > then restore the schema where this table resides the problem is gone\n> until\n> > the next time we run through the whole process of deleting, loading\n> and\n> > querying the table.\n> >\n> >\n> >\n> > There is no other activity in the database. All requested locks are\n> granted.\n> >\n> >\n> >\n> > Has anyone seen similar behavior?\n> >\n> >\n> >\n> > Some details:\n> >\n> >\n> >\n> > Postgres v 8.1.2\n> >\n> > Linux Fedora 3\n> >\n> >\n> >\n> > shared_buffers = 65536\n> >\n> > temp_buffers = 32768\n> >\n> > work_mem = 131072\n> >\n> > maintenance_work_mem = 131072\n> >\n> > max_stack_depth = 8192\n> >\n> > max_fsm_pages = 40000\n> >\n> > wal_buffers = 16\n> >\n> > checkpoint_segments = 16\n> >\n> >\n> >\n> >\n> >\n> > top reports\n> >\n> >\n> >\n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> >\n> > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> postmaster\n> >\n> >\n> >\n> > ps -ef | grep postgres reports\n> >\n> >\n> >\n> > postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase\n> [local]\n> > SELECT\n> >\n> >\n> >\n> > strace -p 19478\n> >\n> > no system calls reported\n> >\n> >\n> >\n> >\n> >\n> > Thanks for the help!\n> >\n> > Jozsef\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 25 Jul 2007 14:12:20 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "Our application is such that any update to the database is done by a\nsingle session in a batch process using bulk load. The frequency of\nthese usually larger scale updates is variable but an update runs every\n2-3 days on average.\n\nOriginally a plain VACUUM ANALYZE was executed on every affected table\nafter every load. \n\nVACUUM FULL ANALYZE is scheduled to run on a weekly basis.\n\nI do understand the need for vacuuming. Nevertheless I expect Postgres\nto return data eventually even if I do not vacuum. In my case, the\nsimple SELECT COUNT(*) FROM table; statement on a table that had around\n100K \"live\" rows has not returned the result for more than 6 hours after\nwhich I manually killed it.\n \nJozsef\n\n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]] \nSent: Wednesday, July 25, 2007 1:12 PM\nTo: Jozsef Szalay\nCc: Pavel Stehule; [email protected]\nSubject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n\nIn response to \"Jozsef Szalay\" <[email protected]>:\n\n> Hi Pavel,\n> \n> \n> Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> executing a \"full\" vacuum. The plain vacuum did not help.\n\nI read over my previous reply and picked up on something else ...\n\nWhat is your vacuum _policy_? i.e. how often do you vacuum/analyze?\nThe fact that you had to do a vacuum full to get things back under\ncontrol tends to suggest that your current vacuum schedule is not\naggressive enough.\n\nAn explicit vacuum of this table after the large delete/insert may\nbe helpful.\n\n> -----Original Message-----\n> From: Pavel Stehule [mailto:[email protected]] \n> Sent: Sunday, July 22, 2007 10:53 AM\n> To: Jozsef Szalay\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> \n> Hello\n> \n> did you vacuum?\n> \n> It's good technique do vacuum table after remove bigger number of\nrows.\n> \n> Regards\n> Pavel Stehule\n> \n> 2007/7/22, Jozsef Szalay <[email protected]>:\n> >\n> >\n> >\n> >\n> > I'm having this very disturbing problem. I got a table with about\n> 100,000\n> > rows in it. Our software deletes the majority of these rows and then\n> bulk\n> > loads another 100,000 rows into the same table. All this is\nhappening\n> within\n> > a single transaction. I then perform a simple \"select count(*) from\n> ...\"\n> > statement that never returns. In the mean time, the backend Postgres\n> process\n> > is taking close to 100% of the CPU. The hang-up does not always\nhappen\n> on\n> > the same statement but eventually it happens 2 out of 3 times. 
If I\n> dump and\n> > then restore the schema where this table resides the problem is gone\n> until\n> > the next time we run through the whole process of deleting, loading\n> and\n> > querying the table.\n> >\n> >\n> >\n> > There is no other activity in the database. All requested locks are\n> granted.\n> >\n> >\n> >\n> > Has anyone seen similar behavior?\n> >\n> >\n> >\n> > Some details:\n> >\n> >\n> >\n> > Postgres v 8.1.2\n> >\n> > Linux Fedora 3\n> >\n> >\n> >\n> > shared_buffers = 65536\n> >\n> > temp_buffers = 32768\n> >\n> > work_mem = 131072\n> >\n> > maintenance_work_mem = 131072\n> >\n> > max_stack_depth = 8192\n> >\n> > max_fsm_pages = 40000\n> >\n> > wal_buffers = 16\n> >\n> > checkpoint_segments = 16\n> >\n> >\n> >\n> >\n> >\n> > top reports\n> >\n> >\n> >\n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> >\n> > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> postmaster\n> >\n> >\n> >\n> > ps -ef | grep postgres reports\n> >\n> >\n> >\n> > postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user dbase\n> [local]\n> > SELECT\n> >\n> >\n> >\n> > strace -p 19478\n> >\n> > no system calls reported\n> >\n> >\n> >\n> >\n> >\n> > Thanks for the help!\n> >\n> > Jozsef\n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n\n", "msg_date": "Wed, 25 Jul 2007 13:47:58 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "hello\n\nshow me, please, output from vacuum verbose table\n\nPavel\n\n2007/7/25, Jozsef Szalay <[email protected]>:\n> Our application is such that any update to the database is done by a\n> single session in a batch process using bulk load. The frequency of\n> these usually larger scale updates is variable but an update runs every\n> 2-3 days on average.\n>\n> Originally a plain VACUUM ANALYZE was executed on every affected table\n> after every load.\n>\n> VACUUM FULL ANALYZE is scheduled to run on a weekly basis.\n>\n> I do understand the need for vacuuming. Nevertheless I expect Postgres\n> to return data eventually even if I do not vacuum. 
In my case, the\n> simple SELECT COUNT(*) FROM table; statement on a table that had around\n> 100K \"live\" rows has not returned the result for more than 6 hours after\n> which I manually killed it.\n>\n> Jozsef\n>\n>\n> -----Original Message-----\n> From: Bill Moran [mailto:[email protected]]\n> Sent: Wednesday, July 25, 2007 1:12 PM\n> To: Jozsef Szalay\n> Cc: Pavel Stehule; [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n>\n> In response to \"Jozsef Szalay\" <[email protected]>:\n>\n> > Hi Pavel,\n> >\n> >\n> > Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> > executing a \"full\" vacuum. The plain vacuum did not help.\n>\n> I read over my previous reply and picked up on something else ...\n>\n> What is your vacuum _policy_? i.e. how often do you vacuum/analyze?\n> The fact that you had to do a vacuum full to get things back under\n> control tends to suggest that your current vacuum schedule is not\n> aggressive enough.\n>\n> An explicit vacuum of this table after the large delete/insert may\n> be helpful.\n>\n> > -----Original Message-----\n> > From: Pavel Stehule [mailto:[email protected]]\n> > Sent: Sunday, July 22, 2007 10:53 AM\n> > To: Jozsef Szalay\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> >\n> > Hello\n> >\n> > did you vacuum?\n> >\n> > It's good technique do vacuum table after remove bigger number of\n> rows.\n> >\n> > Regards\n> > Pavel Stehule\n> >\n> > 2007/7/22, Jozsef Szalay <[email protected]>:\n> > >\n> > >\n> > >\n> > >\n> > > I'm having this very disturbing problem. I got a table with about\n> > 100,000\n> > > rows in it. Our software deletes the majority of these rows and then\n> > bulk\n> > > loads another 100,000 rows into the same table. All this is\n> happening\n> > within\n> > > a single transaction. I then perform a simple \"select count(*) from\n> > ...\"\n> > > statement that never returns. In the mean time, the backend Postgres\n> > process\n> > > is taking close to 100% of the CPU. The hang-up does not always\n> happen\n> > on\n> > > the same statement but eventually it happens 2 out of 3 times. If I\n> > dump and\n> > > then restore the schema where this table resides the problem is gone\n> > until\n> > > the next time we run through the whole process of deleting, loading\n> > and\n> > > querying the table.\n> > >\n> > >\n> > >\n> > > There is no other activity in the database. All requested locks are\n> > granted.\n> > >\n> > >\n> > >\n> > > Has anyone seen similar behavior?\n> > >\n> > >\n> > >\n> > > Some details:\n> > >\n> > >\n> > >\n> > > Postgres v 8.1.2\n> > >\n> > > Linux Fedora 3\n> > >\n> > >\n> > >\n> > > shared_buffers = 65536\n> > >\n> > > temp_buffers = 32768\n> > >\n> > > work_mem = 131072\n> > >\n> > > maintenance_work_mem = 131072\n> > >\n> > > max_stack_depth = 8192\n> > >\n> > > max_fsm_pages = 40000\n> > >\n> > > wal_buffers = 16\n> > >\n> > > checkpoint_segments = 16\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > top reports\n> > >\n> > >\n> > >\n> > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> > >\n> > > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> > postmaster\n> > >\n> > >\n> > >\n> > > ps -ef | grep postgres reports\n> > >\n> > >\n> > >\n> > > postgres 19478 8061 99 00:11 ? 
10:13:03 postgres: user dbase\n> > [local]\n> > > SELECT\n> > >\n> > >\n> > >\n> > > strace -p 19478\n> > >\n> > > no system calls reported\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > Thanks for the help!\n> > >\n> > > Jozsef\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> >\n> >\n> >\n> >\n> >\n>\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n> ****************************************************************\n> IMPORTANT: This message contains confidential information and is\n> intended only for the individual named. If the reader of this\n> message is not an intended recipient (or the individual\n> responsible for the delivery of this message to an intended\n> recipient), please be advised that any re-use, dissemination,\n> distribution or copying of this message is prohibited. Please\n> notify the sender immediately by e-mail if you have received\n> this e-mail by mistake and delete this e-mail from your system.\n> E-mail transmission cannot be guaranteed to be secure or\n> error-free as information could be intercepted, corrupted, lost,\n> destroyed, arrive late or incomplete, or contain viruses. The\n> sender therefore does not accept liability for any errors or\n> omissions in the contents of this message, which arise as a\n> result of e-mail transmission.\n> ****************************************************************\n>\n>\n", "msg_date": "Wed, 25 Jul 2007 21:06:02 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "In response to \"Jozsef Szalay\" <[email protected]>:\n\n> Our application is such that any update to the database is done by a\n> single session in a batch process using bulk load. The frequency of\n> these usually larger scale updates is variable but an update runs every\n> 2-3 days on average.\n> \n> Originally a plain VACUUM ANALYZE was executed on every affected table\n> after every load.\n\nAny other insert/update activity outside of the bulk loads? What's\nthe vacuum policy outside the bulk loads? You say originally, does\nit still do so?\n\nI agree with Pavel that the output of vacuum verbose when the problem\nis occurring would be helpful.\n\n> VACUUM FULL ANALYZE is scheduled to run on a weekly basis.\n\nIf you need to do this, then other settings are incorrect.\n\n> I do understand the need for vacuuming. Nevertheless I expect Postgres\n> to return data eventually even if I do not vacuum. In my case, the\n> simple SELECT COUNT(*) FROM table; statement on a table that had around\n> 100K \"live\" rows has not returned the result for more than 6 hours after\n> which I manually killed it.\n\nIt should, 6 hours is too long for that process, unless you're running\na 486dx2. You didn't mention your hardware or your postgresql.conf\nsettings. 
What other activity is occurring during this long count()?\nCan you give us a shot of the iostat output and/or top during this\nphenomenon?\n\n> \n> Jozsef\n> \n> \n> -----Original Message-----\n> From: Bill Moran [mailto:[email protected]] \n> Sent: Wednesday, July 25, 2007 1:12 PM\n> To: Jozsef Szalay\n> Cc: Pavel Stehule; [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> \n> In response to \"Jozsef Szalay\" <[email protected]>:\n> \n> > Hi Pavel,\n> > \n> > \n> > Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> > executing a \"full\" vacuum. The plain vacuum did not help.\n> \n> I read over my previous reply and picked up on something else ...\n> \n> What is your vacuum _policy_? i.e. how often do you vacuum/analyze?\n> The fact that you had to do a vacuum full to get things back under\n> control tends to suggest that your current vacuum schedule is not\n> aggressive enough.\n> \n> An explicit vacuum of this table after the large delete/insert may\n> be helpful.\n> \n> > -----Original Message-----\n> > From: Pavel Stehule [mailto:[email protected]] \n> > Sent: Sunday, July 22, 2007 10:53 AM\n> > To: Jozsef Szalay\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> > \n> > Hello\n> > \n> > did you vacuum?\n> > \n> > It's good technique do vacuum table after remove bigger number of\n> rows.\n> > \n> > Regards\n> > Pavel Stehule\n> > \n> > 2007/7/22, Jozsef Szalay <[email protected]>:\n> > >\n> > >\n> > >\n> > >\n> > > I'm having this very disturbing problem. I got a table with about\n> > 100,000\n> > > rows in it. Our software deletes the majority of these rows and then\n> > bulk\n> > > loads another 100,000 rows into the same table. All this is\n> happening\n> > within\n> > > a single transaction. I then perform a simple \"select count(*) from\n> > ...\"\n> > > statement that never returns. In the mean time, the backend Postgres\n> > process\n> > > is taking close to 100% of the CPU. The hang-up does not always\n> happen\n> > on\n> > > the same statement but eventually it happens 2 out of 3 times. If I\n> > dump and\n> > > then restore the schema where this table resides the problem is gone\n> > until\n> > > the next time we run through the whole process of deleting, loading\n> > and\n> > > querying the table.\n> > >\n> > >\n> > >\n> > > There is no other activity in the database. All requested locks are\n> > granted.\n> > >\n> > >\n> > >\n> > > Has anyone seen similar behavior?\n> > >\n> > >\n> > >\n> > > Some details:\n> > >\n> > >\n> > >\n> > > Postgres v 8.1.2\n> > >\n> > > Linux Fedora 3\n> > >\n> > >\n> > >\n> > > shared_buffers = 65536\n> > >\n> > > temp_buffers = 32768\n> > >\n> > > work_mem = 131072\n> > >\n> > > maintenance_work_mem = 131072\n> > >\n> > > max_stack_depth = 8192\n> > >\n> > > max_fsm_pages = 40000\n> > >\n> > > wal_buffers = 16\n> > >\n> > > checkpoint_segments = 16\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > top reports\n> > >\n> > >\n> > >\n> > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> > >\n> > > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> > postmaster\n> > >\n> > >\n> > >\n> > > ps -ef | grep postgres reports\n> > >\n> > >\n> > >\n> > > postgres 19478 8061 99 00:11 ? 
10:13:03 postgres: user dbase\n> > [local]\n> > > SELECT\n> > >\n> > >\n> > >\n> > > strace -p 19478\n> > >\n> > > no system calls reported\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > Thanks for the help!\n> > >\n> > > Jozsef\n> > \n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> > \n> > \n> > \n> > \n> > \n> \n> \n> -- \n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n> \n> [email protected]\n> Phone: 412-422-3463x4023\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 25 Jul 2007 16:28:34 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100%" }, { "msg_contents": "With the limited time I had, I could not produce a test case that I\ncould have submitted to this forum. \n\nI have found another issue in production though. In addition to the \n\nSELECT COUNT(*) FROM table; \n\ntaking forever (>6 hours on a table with 100,000 rows and with no\nindexes on a system with Linux Fedora 3 and with two 3.2GHz Xeon\nprocessors plus hyperthreading), the \n\n\"SELECT column_1 FROM table GROUP BY column_1 HAVING COUNT(*) > 1;\n\nstatement actually ran out of memory (on a table with 300,000 rows and\nwith no indexes, while the OS reported >3.5 GB virtual memory for the\npostgres backend process).\n\nTo make it short, we found that both problems could be solved with a\nsingle magic bullet, namely by calling ANALYZE every time after large\namount of changes were introduced to the table. (Earlier, we called\nANALYZE only after we did some serious post-processing on freshly\nbulk-loaded data.)\n\nI don't know why ANALYZE would have any effect on a sequential scan of a\ntable but it does appear to impact both performance and memory usage\nsignificantly.\n\nBoth of our production issues have vanished after this simple change! We\ndo not have to call a FULL VACUUM on the table anymore. The plain VACUUM\nis satisfactory.\n\nThanks for all the responses!\nJozsef \n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]] \nSent: Wednesday, July 25, 2007 3:29 PM\nTo: Jozsef Szalay\nCc: Pavel Stehule; [email protected]\nSubject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n\nIn response to \"Jozsef Szalay\" <[email protected]>:\n\n> Our application is such that any update to the database is done by a\n> single session in a batch process using bulk load. The frequency of\n> these usually larger scale updates is variable but an update runs\nevery\n> 2-3 days on average.\n> \n> Originally a plain VACUUM ANALYZE was executed on every affected table\n> after every load.\n\nAny other insert/update activity outside of the bulk loads? What's\nthe vacuum policy outside the bulk loads? You say originally, does\nit still do so?\n\nI agree with Pavel that the output of vacuum verbose when the problem\nis occurring would be helpful.\n\n> VACUUM FULL ANALYZE is scheduled to run on a weekly basis.\n\nIf you need to do this, then other settings are incorrect.\n\n> I do understand the need for vacuuming. Nevertheless I expect Postgres\n> to return data eventually even if I do not vacuum. 
In my case, the\n> simple SELECT COUNT(*) FROM table; statement on a table that had\naround\n> 100K \"live\" rows has not returned the result for more than 6 hours\nafter\n> which I manually killed it.\n\nIt should, 6 hours is too long for that process, unless you're running\na 486dx2. You didn't mention your hardware or your postgresql.conf\nsettings. What other activity is occurring during this long count()?\nCan you give us a shot of the iostat output and/or top during this\nphenomenon?\n\n> \n> Jozsef\n> \n> \n> -----Original Message-----\n> From: Bill Moran [mailto:[email protected]] \n> Sent: Wednesday, July 25, 2007 1:12 PM\n> To: Jozsef Szalay\n> Cc: Pavel Stehule; [email protected]\n> Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> \n> In response to \"Jozsef Szalay\" <[email protected]>:\n> \n> > Hi Pavel,\n> > \n> > \n> > Yes I did vacuum. In fact the only way to \"fix\" this problem is\n> > executing a \"full\" vacuum. The plain vacuum did not help.\n> \n> I read over my previous reply and picked up on something else ...\n> \n> What is your vacuum _policy_? i.e. how often do you vacuum/analyze?\n> The fact that you had to do a vacuum full to get things back under\n> control tends to suggest that your current vacuum schedule is not\n> aggressive enough.\n> \n> An explicit vacuum of this table after the large delete/insert may\n> be helpful.\n> \n> > -----Original Message-----\n> > From: Pavel Stehule [mailto:[email protected]] \n> > Sent: Sunday, July 22, 2007 10:53 AM\n> > To: Jozsef Szalay\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%\n> > \n> > Hello\n> > \n> > did you vacuum?\n> > \n> > It's good technique do vacuum table after remove bigger number of\n> rows.\n> > \n> > Regards\n> > Pavel Stehule\n> > \n> > 2007/7/22, Jozsef Szalay <[email protected]>:\n> > >\n> > >\n> > >\n> > >\n> > > I'm having this very disturbing problem. I got a table with about\n> > 100,000\n> > > rows in it. Our software deletes the majority of these rows and\nthen\n> > bulk\n> > > loads another 100,000 rows into the same table. All this is\n> happening\n> > within\n> > > a single transaction. I then perform a simple \"select count(*)\nfrom\n> > ...\"\n> > > statement that never returns. In the mean time, the backend\nPostgres\n> > process\n> > > is taking close to 100% of the CPU. The hang-up does not always\n> happen\n> > on\n> > > the same statement but eventually it happens 2 out of 3 times. If\nI\n> > dump and\n> > > then restore the schema where this table resides the problem is\ngone\n> > until\n> > > the next time we run through the whole process of deleting,\nloading\n> > and\n> > > querying the table.\n> > >\n> > >\n> > >\n> > > There is no other activity in the database. 
All requested locks\nare\n> > granted.\n> > >\n> > >\n> > >\n> > > Has anyone seen similar behavior?\n> > >\n> > >\n> > >\n> > > Some details:\n> > >\n> > >\n> > >\n> > > Postgres v 8.1.2\n> > >\n> > > Linux Fedora 3\n> > >\n> > >\n> > >\n> > > shared_buffers = 65536\n> > >\n> > > temp_buffers = 32768\n> > >\n> > > work_mem = 131072\n> > >\n> > > maintenance_work_mem = 131072\n> > >\n> > > max_stack_depth = 8192\n> > >\n> > > max_fsm_pages = 40000\n> > >\n> > > wal_buffers = 16\n> > >\n> > > checkpoint_segments = 16\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > top reports\n> > >\n> > >\n> > >\n> > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n> > >\n> > > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16\n> > postmaster\n> > >\n> > >\n> > >\n> > > ps -ef | grep postgres reports\n> > >\n> > >\n> > >\n> > > postgres 19478 8061 99 00:11 ? 10:13:03 postgres: user\ndbase\n> > [local]\n> > > SELECT\n> > >\n> > >\n> > >\n> > > strace -p 19478\n> > >\n> > > no system calls reported\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > Thanks for the help!\n> > >\n> > > Jozsef\n> > \n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> > \n> > \n> > \n> > \n> > \n> \n> \n> -- \n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n> \n> [email protected]\n> Phone: 412-422-3463x4023\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n", "msg_date": "Fri, 17 Aug 2007 17:54:29 -0500", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple select hangs while CPU close to 100% - Analyze" }, { "msg_contents": "Jozsef Szalay escribi�:\n\n> I don't know why ANALYZE would have any effect on a sequential scan of a\n> table but it does appear to impact both performance and memory usage\n> significantly.\n\nIt doesn't. What it does is provide the query optimizer with the\ninformation that it needs to know that the table contains many different\nvalues, which makes it turn the initial hashed aggregation into a sort\nplus group aggregation. This allows the aggregation to use less memory.\n\nAs an exercise, see an EXPLAIN of the query, both before and after the\nanalyze, and study the difference.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 17 Aug 2007 23:03:49 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple select hangs while CPU close to 100% - Analyze" } ]
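A condensed sketch of the cycle this thread converges on: plain VACUUM to reclaim the space freed by the large DELETE, then ANALYZE so the planner sees the new data distribution before the de-duplication query runs. The table name, columns, and COPY file below are invented placeholders, not the schema from the report:

    CREATE TABLE staging (column_1 text, column_2 integer);

    BEGIN;
    DELETE FROM staging WHERE column_2 < 100000;
    COPY staging FROM '/tmp/staging_load.csv' WITH CSV;
    COMMIT;

    -- Reclaim dead space and refresh statistics after every bulk change;
    -- with stale statistics the planner assumes few distinct values and
    -- picks a hashed aggregate that can balloon in memory.
    VACUUM staging;
    ANALYZE staging;

    -- The de-duplication query from the thread; EXPLAIN it before and after
    -- the ANALYZE to watch the plan switch from HashAggregate to
    -- Sort + GroupAggregate, as described in the last reply above.
    EXPLAIN
    SELECT column_1
    FROM staging
    GROUP BY column_1
    HAVING count(*) > 1;

The ANALYZE takes seconds on a table of this size, which is cheap insurance against the runaway plans described above.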
[ { "msg_contents": "Hello all,\n\nhow to build an multicolumn index with one column order ASCENDING and\nanother column order DESCENDING?\n\nThe use case that I have is that I use 2 column index where the first\ncolumn is kind of flag and the second column is an actual ordering\ncolumn. The flag should be always ordered DESCENDING, but the second\ncolumn is ordered DESCENDING when it is a numeric column, and\nASCENDING when it is a text column.\n\nCREATE TABLE storage (id int, flag int, numeric_data int, text_data\ntext);\n\nSELECT * FROM storage\n ORDER BY flag DESC, numeric_column DESC\nLIMIT 20 OFFSET 0;\n\nSELECT * FROM storage\n ORDER BY flag DESC, text_column ASC\nLIMIT 20 OFFSET 0;\n\nDefinitely the multicolumn index on (flag, numeric_column) is being\nused.\n\nBut how to create an index on (flag, text_column DESC)?\n\nI will try to index by ((-flag), text_column) and sort by (-flag) ASC,\nbut it, to say the truth, does not really look like a nice solution.\n\n", "msg_date": "Mon, 23 Jul 2007 16:47:03 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": true, "msg_subject": "multicolumn index column order" }, { "msg_contents": "valgog <[email protected]> writes:\n> how to build an multicolumn index with one column order ASCENDING and\n> another column order DESCENDING?\n\nUse 8.3 ;-)\n\nIn existing releases you could fake it with a custom reverse-sorting\noperator class, but it's a pain in the neck to create one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jul 2007 13:00:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multicolumn index column order " }, { "msg_contents": "On Jul 23, 7:00 pm, [email protected] (Tom Lane) wrote:\n> valgog <[email protected]> writes:\n> > how to build an multicolumn index with one column order ASCENDING and\n> > another column order DESCENDING?\n>\n> Use 8.3 ;-)\n>\n> In existing releases you could fake it with a custom reverse-sorting\n> operator class, but it's a pain in the neck to create one.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\nok, thanks for a rapid answer, can live with the ((-flag),\ntext_column) functional multicolumn index by now.\n\nWaiting for 8.3 :-)\n\n", "msg_date": "Mon, 23 Jul 2007 17:13:16 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multicolumn index column order" }, { "msg_contents": "the manual somewhere states \"... if archiving is enabled...\" To me \nthis implies that archiving can be disabled. However I cannot find \nthe parameter to use to get this result. Or should I enable archiving \nand use a backup script like\n\n#!/usr/bin/bash\nexit 0\n\n\n\nWould appreciate a hint. And yes I know I put my database in danger \netc. This is for some benchmarks where I do not want the overhead of \narchiving. Jus a file system that will not fill with zillions of \nthese 16MB WAL files ;^)\n\nThanks\nPaul.\n\n\n", "msg_date": "Mon, 23 Jul 2007 19:24:48 +0200", "msg_from": "Paul van den Bogaard <[email protected]>", "msg_from_op": false, "msg_subject": "disable archiving" }, { "msg_contents": "Paul van den Bogaard wrote:\n> the manual somewhere states \"... if archiving is enabled...\" To me this \n> implies that archiving can be disabled. 
However I cannot find the parameter \n> to use to get this result.\n\nArchiving is disabled by not setting archive_command.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 23 Jul 2007 13:34:20 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disable archiving" }, { "msg_contents": "am Mon, dem 23.07.2007, um 19:24:48 +0200 mailte Paul van den Bogaard folgendes:\n> the manual somewhere states \"... if archiving is enabled...\" To me \n\nPlease don't hijack other threads...\n\n(don't edit a mail-subject to create a new thread. Create a NEW mail!)\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 23 Jul 2007 19:37:49 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disable archiving" }, { "msg_contents": "Perhaps you should've read the configuration-manual-page more carefully. ;)\nBesides, WAL-archiving is turned off by default, so if you see them \nbeing archived you actually enabled it earlier\n\nThe \"archive_command\" is empty by default: \"If this is an empty string \n(the default), WAL archiving is disabled.\"\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-wal.html\n\nBest regards,\n\nArjen\n\nOn 23-7-2007 19:24 Paul van den Bogaard wrote:\n> the manual somewhere states \"... if archiving is enabled...\" To me this \n> implies that archiving can be disabled. However I cannot find the \n> parameter to use to get this result. Or should I enable archiving and \n> use a backup script like\n> \n> #!/usr/bin/bash\n> exit 0\n> \n> \n> \n> Would appreciate a hint. And yes I know I put my database in danger etc. \n> This is for some benchmarks where I do not want the overhead of \n> archiving. Jus a file system that will not fill with zillions of these \n> 16MB WAL files ;^)\n> \n> Thanks\n> Paul.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Mon, 23 Jul 2007 19:39:10 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disable archiving" }, { "msg_contents": "Alvaro,\n\nthanks for the quick reply. Just to make sure: I do not set this \ncommand. This results in the database cycling through a finite set \n(hopefully small) set of WAL files. So old WAL files are reused once \nthe engine thinks this can be done.\n\nThanks\nPaul\n\n\nOn 23-jul-2007, at 19:34, Alvaro Herrera wrote:\n\n> Paul van den Bogaard wrote:\n>> the manual somewhere states \"... if archiving is enabled...\" To me \n>> this\n>> implies that archiving can be disabled. 
However I cannot find the \n>> parameter\n>> to use to get this result.\n>\n> Archiving is disabled by not setting archive_command.\n>\n> -- \n> Alvaro Herrera http:// \n> www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n\n------------------------------------------------------------------------ \n---------------------\nPaul van den Bogaard \[email protected]\nCIE -- Collaboration and ISV Engineering, Opensource Engineering group\n\nSun Microsystems, Inc phone: +31 \n334 515 918\nSaturnus 1 \nextentsion: x (70)15918\n3824 ME Amersfoort mobile: +31 \n651 913 354\nThe Netherlands \nfax: +31 334 515 001\n\n", "msg_date": "Mon, 23 Jul 2007 19:42:06 +0200", "msg_from": "Paul van den Bogaard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disable archiving" }, { "msg_contents": "On Jul 23, 7:24 pm, [email protected] (Paul van den Bogaard)\nwrote:\n> the manual somewhere states \"... if archiving is enabled...\" To me\n> this implies that archiving can be disabled. However I cannot find\n> the parameter to use to get this result. Or should I enable archiving\n> and use a backup script like\n>\n> #!/usr/bin/bash\n> exit 0\n>\n> Would appreciate a hint. And yes I know I put my database in danger\n> etc. This is for some benchmarks where I do not want the overhead of\n> archiving. Jus a file system that will not fill with zillions of\n> these 16MB WAL files ;^)\n>\n> Thanks\n> Paul.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\nIs it normal to spoil other threads? or is it a bug?\n\nIf it is not a bug, please change the subject of the topic back to\nwhat it was!\n\nWith best regards,\n\nValentine Gogichashvili\n\n", "msg_date": "Tue, 24 Jul 2007 07:33:55 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": true, "msg_subject": "Re: disable archiving" } ]
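Spelled out against the example schema from the start of this thread, the pre-8.3 workaround that valgog settles on looks roughly like this (an untested sketch; the index names are invented):

    -- Index the negated flag so that a plain ascending index yields
    -- flag DESC, text_data ASC order.
    CREATE INDEX storage_negflag_text_idx ON storage ((-flag), text_data);

    -- The query must order by the same expression for the index to match.
    SELECT id, flag, text_data
    FROM storage
    ORDER BY (-flag), text_data
    LIMIT 20 OFFSET 0;

    -- From 8.3 onwards the direction can be declared per column instead:
    -- CREATE INDEX storage_flag_text_idx ON storage (flag DESC, text_data ASC);

This trick only works because integers have a cheap reversal (negation); as the follow-up messages below note, text has no such operator, which is why 8.3's per-column ASC/DESC, or a custom reverse-sorting operator class, is needed when the column to be reversed is text.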
[ { "msg_contents": "http://blogs.sun.com/jkshah/entry/specjappserver2004_with_glassfish_v2_and\n\nThis time with 33% less App Server hardware but same setup for \nPostgreSQL 8.2.4 with 4.5% better score .. There has been reduction in \nCPU utilization by postgresql with the new app server which means there \nis potential to do more JOPS. But looks like Simon Rigg added another \nproject for us to look into maxalign side effect with Solaris on SPARC \nbefore doing more benchmarks now.\n\nCheers,\nJignesh\n\n", "msg_date": "Tue, 24 Jul 2007 00:09:00 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Second SpecJAppserver2004 with PostgreSQL" } ]
[ { "msg_contents": "valgog <[email protected]> wrote ..\n> On Jul 23, 7:00 pm, [email protected] (Tom Lane) wrote:\n> > valgog <[email protected]> writes:\n> > > how to build an multicolumn index with one column order ASCENDING and\n> > > another column order DESCENDING?\n> >\n> > Use 8.3 ;-)\n> >\n> > In existing releases you could fake it with a custom reverse-sorting\n> > operator class, but it's a pain in the neck to create one.\n\nI've often gotten what I want by using a calculated index on (f1, -f2). ORDER BY will take an expression, e.g. ORDER BY f1, -f2. Simpler than a custom operator.\n", "msg_date": "Tue, 24 Jul 2007 00:10:30 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: multicolumn index column order" }, { "msg_contents": "On 7/24/07, [email protected] <[email protected]> wrote:\n>\n> valgog <[email protected]> wrote ..\n> > On Jul 23, 7:00 pm, [email protected] (Tom Lane) wrote:\n> > > valgog <[email protected]> writes:\n> > > > how to build an multicolumn index with one column order ASCENDING\n> and\n> > > > another column order DESCENDING?\n> > >\n> > > Use 8.3 ;-)\n> > >\n> > > In existing releases you could fake it with a custom reverse-sorting\n> > > operator class, but it's a pain in the neck to create one.\n>\n> I've often gotten what I want by using a calculated index on (f1, -f2).\n> ORDER BY will take an expression, e.g. ORDER BY f1, -f2. Simpler than a\n> custom operator.\n>\n\n\nYes, this is true, but I do now know how to make text order be reversible?\nThere is no - (minus) operator for text value. By now it is not a problem\nfor me, but theoretically I do not see other chance to reverse text fields\norder...\n\nOn 7/24/07, [email protected] <[email protected]> wrote:\nvalgog <[email protected]> wrote ..> On Jul 23, 7:00 pm, [email protected] (Tom Lane) wrote:> > valgog <\[email protected]> writes:> > > how to build an multicolumn index with one column order ASCENDING and> > > another column order DESCENDING?> >> > Use 8.3 ;-)> >\n> > In existing releases you could fake it with a custom reverse-sorting> > operator class, but it's a pain in the neck to create one.I've often gotten what I want by using a calculated index on (f1, -f2). ORDER BY will take an expression, \ne.g. ORDER BY f1, -f2. Simpler than a custom operator.Yes, this is true, but I do now know how to make text order be reversible? There is no - (minus) operator for text value. By now it is not a problem for me, but theoretically I do not see other chance to reverse text fields order...", "msg_date": "Tue, 24 Jul 2007 09:40:43 +0200", "msg_from": "\"Valentine Gogichashvili\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multicolumn index column order" }, { "msg_contents": "On Jul 25, 2:14 am, Lew <[email protected]> wrote:\n>\n> How about two indexes, one on each column? Then the indexes will cooperate\n> when combined in a WHERE clause.\n> <http://www.postgresql.org/docs/8.2/interactive/indexes-bitmap-scans.html>\n>\n> I don't believe the index makes a semantic difference with regard to ascending\n> or descending. 
An index is used to locate records in the selection phase of a\n> query or modification command.\n>\n> --\n> Lew\n\nOrdered indexes (b-tree in this case) are also used to get the needed\nrecord order and it is absolutely not necessary to have a WHARE clause\nin your select statement to use them when you are using ORDER BY.\n\n--\nValentine\n\n", "msg_date": "Wed, 25 Jul 2007 08:34:00 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multicolumn index column order" }, { "msg_contents": "valgog wrote:\n> On Jul 25, 2:14 am, Lew <[email protected]> wrote:\n>> How about two indexes, one on each column? Then the indexes will cooperate\n>> when combined in a WHERE clause.\n>> <http://www.postgresql.org/docs/8.2/interactive/indexes-bitmap-scans.html>\n>>\n>> I don't believe the index makes a semantic difference with regard to ascending\n>> or descending. An index is used to locate records in the selection phase of a\n>> query or modification command.\n>>\n>> --\n>> Lew\n> \n> Ordered indexes (b-tree in this case) are also used to get the needed\n> record order and it is absolutely not necessary to have a WHARE clause\n> in your select statement to use them when you are using ORDER BY.\n\nBut does that affect anything when you \"ORDER BY foo ASC\" vs. when you \"ORDER \nBY foo DESC\"?\n\nFor use by ORDER BY, separate column indexes are an even better idea.\n\n-- \nLew\n", "msg_date": "Fri, 27 Jul 2007 16:03:11 -0400", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multicolumn index column order" } ]
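To make the workaround discussed above concrete, here is a short sketch of both approaches; the table and column names are invented for illustration:

    -- hypothetical table
    CREATE TABLE items (category integer, price numeric, name text);

    -- 8.3 and later: per-column sort order directly in the index definition
    CREATE INDEX items_cat_price_idx ON items (category ASC, price DESC);
    SELECT * FROM items ORDER BY category ASC, price DESC LIMIT 20;

    -- 8.2 and earlier: fake the descending column with an expression index on
    -- the negated value, and write the ORDER BY the same way so the planner
    -- can match it
    CREATE INDEX items_cat_negprice_idx ON items (category, (-price));
    SELECT * FROM items ORDER BY category, -price LIMIT 20;

The negation trick only works for types with a meaningful minus, which is why the text case raised above still needs a reverse-sorting operator class (or 8.3's DESC support); in either variant it is worth confirming with EXPLAIN that the sort is actually satisfied by the index.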
[ { "msg_contents": "We're experiencing a query performance problem related to the planner and\nits ability to perform a specific type of merge.\n\n \n\nWe have created a test case (as attached, or here:\nhttp://www3.streamy.com/postgres/indextest.sql) which involves a\nhypothetical customer ordering system, with customers, orders, and customer\ngroups.\n\n \n\nIf we want to retrieve a single customers 10 most recent orders, sorted by\ndate, we can use a double index on (customer,date); Postgres's query planner\nwill use the double index with a backwards index scan on the second indexed\ncolumn (date).\n\n \n\nHowever, if we want to retrieve a \"customer class's\" 10 most recent orders,\nsorted by date, we are not able to get Postgres to use double indexes.\n\n \n\nWe have come to the conclusion that the fastest way to accomplish this type\nof query is to merge, in sorted order, each customers set of orders (for\nwhich we can use the double index). Using a heap to merge these ordered\nlists (until we reach the limit) seems the most algorithmically efficient\nway we are able to find. This is implemented in the attachment as a\npl/pythonu function.\n\n \n\nAnother less algorithmically efficient solution, but faster in practice for\nmany cases, is to fetch the full limit of orders from each customer, sort\nthese by date, and return up to the limit.\n\n \n\nWe are no masters of reading query plans, but for straight SQL queries the\nplanner seems to yield two different types of plan. They are fast in\ncertain cases but breakdown in our typical use cases, where the number of\norders per customer is sparse compared to the total number of orders across\nthe date range.\n\n \n\nWe are interested in whether a mechanism internal to Postgres can accomplish\nthis type of merging of indexed columns in sorted order.\n\n \n\nIf this cannot currently be accomplished (or if there is something we are\nmissing about why it shouldn't be) we would appreciate any pointers to be\nable to translate our python heap approach into C functions integrated more\nclosely with Postgres. 
The python function incurs large constant costs\nbecause of type conversions and repeated queries to the database.\n\n \n\nThanks for any help or direction.\n\n \n\nJonathan Gray / Miguel Simon", "msg_date": "Tue, 24 Jul 2007 00:48:07 -0700", "msg_from": "\"Jonathan Gray\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance issue" }, { "msg_contents": "Jonathan Gray wrote:\n> We�re experiencing a query performance problem related to the planner \n> and its ability to perform a specific type of merge.\n> \n> \n> \n> We have created a test case (as attached, or here: \n> http://www3.streamy.com/postgres/indextest.sql) which involves a \n> hypothetical customer ordering system, with customers, orders, and \n> customer groups.\n> \n> \n> \n> If we want to retrieve a single customers 10 most recent orders, sorted \n> by date, we can use a double index on (customer,date); Postgres�s query \n> planner will use the double index with a backwards index scan on the \n> second indexed column (date).\n> \n> \n> \n> However, if we want to retrieve a �customer class�s� 10 most recent \n> orders, sorted by date, we are not able to get Postgres to use double \n> indexes.\n\nYou don't have any indexes on the 'customerclass' table.\n\nCreating a foreign key doesn't create an index, you need to do that \nseparately.\n\nTry\n\ncreate index cc_customerid_class on indextest.customerclass(classid, \ncustomerid);\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Tue, 24 Jul 2007 18:44:04 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Chris wrote:\n> Jonathan Gray wrote:\n>> We�re experiencing a query performance problem related to the planner \n>> and its ability to perform a specific type of merge.\n>>\n>> \n>>\n>> We have created a test case (as attached, or here: \n>> http://www3.streamy.com/postgres/indextest.sql) which involves a \n>> hypothetical customer ordering system, with customers, orders, and \n>> customer groups.\n>>\n>> \n>>\n>> If we want to retrieve a single customers 10 most recent orders, \n>> sorted by date, we can use a double index on (customer,date); \n>> Postgres�s query planner will use the double index with a backwards \n>> index scan on the second indexed column (date).\n>>\n>> \n>>\n>> However, if we want to retrieve a �customer class�s� 10 most recent \n>> orders, sorted by date, we are not able to get Postgres to use double \n>> indexes.\n> \n> You don't have any indexes on the 'customerclass' table.\n> \n> Creating a foreign key doesn't create an index, you need to do that \n> separately.\n> \n> Try\n> \n> create index cc_customerid_class on indextest.customerclass(classid, \n> customerid);\n> \n\nIt could also be that since you don't have very much data (10,000) rows \n- postgres is ignoring the indexes because it'll be quicker to scan the \ntables.\n\nIf you bump it up to say 100k rows, what happens?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Tue, 24 Jul 2007 18:50:55 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Chris,\n\nCreating indexes on the customerclass table does speed up the queries but\nstill does not create the plan we are looking for (using the double index\nwith a backward index scan on the orders table).\n\nThe plans we now get, with times on par or slightly better than with the\nplpgsql hack, 
are:\n\n EXPLAIN ANALYZE\n SELECT o.orderid,o.orderstamp FROM indextest.orders o \n INNER JOIN indextest.customerclass cc ON (cc.classid = 2) \n WHERE o.customerid = cc.customerid ORDER BY o.orderstamp DESC LIMIT 5;\n\n \nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------\n Limit (cost=0.00..176.65 rows=5 width=12) (actual time=0.930..3.675 rows=5\nloops=1)\n -> Nested Loop (cost=0.00..46388.80 rows=1313 width=12) (actual\ntime=0.927..3.664 rows=5 loops=1)\n -> Index Scan Backward using orders_orderstamp_idx on orders o\n(cost=0.00..6225.26 rows=141001 width=16) (actual time=0.015..0.957 rows=433\nloops=1)\n -> Index Scan using customerclass_customerid_idx on customerclass\ncc (cost=0.00..0.27 rows=1 width=4) (actual time=0.004..0.004 rows=0\nloops=433)\n Index Cond: (o.customerid = cc.customerid)\n Filter: (classid = 2)\n\nAnd\n\n EXPLAIN ANALYZE\n SELECT o.orderid,o.orderstamp FROM indextest.orders o \n INNER JOIN indextest.customerclass cc ON (cc.classid = 2) \n WHERE o.customerid = cc.customerid ORDER BY o.orderstamp DESC LIMIT 100;\n\n QUERY\nPLAN\n----------------------------------------------------------------------------\n---------------------------------------------------------------------------\n Limit (cost=1978.80..1979.05 rows=100 width=12) (actual time=6.167..6.448\nrows=100 loops=1)\n -> Sort (cost=1978.80..1982.09 rows=1313 width=12) (actual\ntime=6.165..6.268 rows=100 loops=1)\n Sort Key: o.orderstamp\n -> Nested Loop (cost=3.99..1910.80 rows=1313 width=12) (actual\ntime=0.059..4.576 rows=939 loops=1)\n -> Bitmap Heap Scan on customerclass cc (cost=3.99..55.16\nrows=95 width=4) (actual time=0.045..0.194 rows=95 loops=1)\n Recheck Cond: (classid = 2)\n -> Bitmap Index Scan on customerclass_classid_idx\n(cost=0.00..3.96 rows=95 width=0) (actual time=0.032..0.032 rows=95 loops=1)\n Index Cond: (classid = 2)\n -> Index Scan using orders_customerid_idx on orders o\n(cost=0.00..19.35 rows=15 width=16) (actual time=0.006..0.025 rows=10\nloops=95)\n Index Cond: (o.customerid = cc.customerid)\n\n\nAs I said, this is a hypothetical test case we have arrived at that\ndescribes our situation as best as we can given a simple case. We're\ninterested in potential issues with the approach, why postgres would not\nattempt something like it, and how we might go about implementing it\nourselves at a lower level than we currently have (in SPI, libpq, etc). 
\n\nIf it could be generalized then we could use it in cases where we aren't\npulling from just one table (the orders table) but rather trying to merge,\nin sorted order, results from different conditions on different tables.\nRight now we use something like the plpgsql or plpythonu functions in the\nexample and they outperform our regular SQL queries by a fairly significant\nmargin.\n\nAn example might be:\n\nSELECT * FROM ( \n(SELECT orderid,stamp FROM indextest.orders_usa WHERE customerid = <setof\ncustomerids> ORDER BY stamp DESC LIMIT 5) UNION \n(SELECT orderid,stamp FROM indextest.orders_can WHERE customerid = <setoff\ncustomerids> ORDER BY stamp DESC LIMIT 5) \n) as x ORDER BY x.stamp DESC\n\nAgain, that's a general example but some of my queries contain between 5 and\n10 different sorted joins of this kind and it would be helpful to have\nsomething internal in postgres to efficiently handle it (do something just\nlike the above query but not have to do the full LIMIT 5 for each set, some\nkind of in order merge/heap join?)\n \nJonathan Gray\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Chris\nSent: Tuesday, July 24, 2007 1:51 AM\nTo: Jonathan Gray\nCc: [email protected]\nSubject: Re: [PERFORM] Query performance issue\n\nChris wrote:\n> Jonathan Gray wrote:\n>> We're experiencing a query performance problem related to the planner \n>> and its ability to perform a specific type of merge.\n>>\n>> \n>>\n>> We have created a test case (as attached, or here: \n>> http://www3.streamy.com/postgres/indextest.sql) which involves a \n>> hypothetical customer ordering system, with customers, orders, and \n>> customer groups.\n>>\n>> \n>>\n>> If we want to retrieve a single customers 10 most recent orders, \n>> sorted by date, we can use a double index on (customer,date); \n>> Postgres's query planner will use the double index with a backwards \n>> index scan on the second indexed column (date).\n>>\n>> \n>>\n>> However, if we want to retrieve a \"customer class's\" 10 most recent \n>> orders, sorted by date, we are not able to get Postgres to use double \n>> indexes.\n> \n> You don't have any indexes on the 'customerclass' table.\n> \n> Creating a foreign key doesn't create an index, you need to do that \n> separately.\n> \n> Try\n> \n> create index cc_customerid_class on indextest.customerclass(classid, \n> customerid);\n> \n\nIt could also be that since you don't have very much data (10,000) rows \n- postgres is ignoring the indexes because it'll be quicker to scan the \ntables.\n\nIf you bump it up to say 100k rows, what happens?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Tue, 24 Jul 2007 02:18:53 -0700", "msg_from": "\"Jonathan Gray\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Jonathan Gray wrote:\n> Chris,\n> \n> Creating indexes on the customerclass table does speed up the queries but\n> still does not create the plan we are looking for (using the double index\n> with a backward index scan on the orders table).\n\nStupid question - why is that particular plan your \"goal\" plan?\n\n> The plans we now get, with times on par or slightly better than with the\n> plpgsql hack, are:\n> \n> EXPLAIN ANALYZE\n> SELECT o.orderid,o.orderstamp FROM indextest.orders o \n> INNER JOIN 
indextest.customerclass cc ON (cc.classid = 2) \n> WHERE o.customerid = cc.customerid ORDER BY o.orderstamp DESC LIMIT 5;\n\nDidn't notice this before...\n\nShouldn't this be:\n\nINNER JOIN indextest.customerclass cc ON (o.customerid = cc.customerid)\nWHERE cc.classid = 2\n\nie join on the common field not the classid one which doesn't appear in \nthe 2nd table?\n\n> As I said, this is a hypothetical test case we have arrived at that\n> describes our situation as best as we can given a simple case. We're\n> interested in potential issues with the approach, why postgres would not\n> attempt something like it, and how we might go about implementing it\n> ourselves at a lower level than we currently have (in SPI, libpq, etc). \n> \n> If it could be generalized then we could use it in cases where we aren't\n> pulling from just one table (the orders table) but rather trying to merge,\n> in sorted order, results from different conditions on different tables.\n> Right now we use something like the plpgsql or plpythonu functions in the\n> example and they outperform our regular SQL queries by a fairly significant\n> margin.\n\nI'm sure if you posted the queries you are running with relevant info \nyou'd get some help ;)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Tue, 24 Jul 2007 19:36:25 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "That particular plan is our goal because we've \"hacked\" it together to\nperform better than the normal sql plans. Analytically it makes sense to\napproach this particular problem in this way because it is relatively\ninvariant to the distributions and sizes of the tables (with only having to\ndeal with increased index size).\n\nAlso, changing around the query doesn't change the query plan at all. The\nplanner is intelligent enough to figure out what it really needs to join on\ndespite my poor query writing. I originally had it this way to ensure my\n(customerid,orderstamp) conditions were in the correct order but again\nappears to not matter.\n\nI will try to get a more complex/sophisticated test case running. I'm not\nable to post my actual structure or queries but I'll try to produce a better\nexample of the other (multiple table) case tomorrow. 
\n\nThanks.\n\nJonathan Gray\n\n\n-----Original Message-----\nFrom: Chris [mailto:[email protected]] \nSent: Tuesday, July 24, 2007 2:36 AM\nTo: Jonathan Gray\nCc: [email protected]\nSubject: Re: [PERFORM] Query performance issue\n\nJonathan Gray wrote:\n> Chris,\n> \n> Creating indexes on the customerclass table does speed up the queries but\n> still does not create the plan we are looking for (using the double index\n> with a backward index scan on the orders table).\n\nStupid question - why is that particular plan your \"goal\" plan?\n\n> The plans we now get, with times on par or slightly better than with the\n> plpgsql hack, are:\n> \n> EXPLAIN ANALYZE\n> SELECT o.orderid,o.orderstamp FROM indextest.orders o \n> INNER JOIN indextest.customerclass cc ON (cc.classid = 2) \n> WHERE o.customerid = cc.customerid ORDER BY o.orderstamp DESC LIMIT 5;\n\nDidn't notice this before...\n\nShouldn't this be:\n\nINNER JOIN indextest.customerclass cc ON (o.customerid = cc.customerid)\nWHERE cc.classid = 2\n\nie join on the common field not the classid one which doesn't appear in \nthe 2nd table?\n\n> As I said, this is a hypothetical test case we have arrived at that\n> describes our situation as best as we can given a simple case. We're\n> interested in potential issues with the approach, why postgres would not\n> attempt something like it, and how we might go about implementing it\n> ourselves at a lower level than we currently have (in SPI, libpq, etc). \n> \n> If it could be generalized then we could use it in cases where we aren't\n> pulling from just one table (the orders table) but rather trying to merge,\n> in sorted order, results from different conditions on different tables.\n> Right now we use something like the plpgsql or plpythonu functions in the\n> example and they outperform our regular SQL queries by a fairly\nsignificant\n> margin.\n\nI'm sure if you posted the queries you are running with relevant info \nyou'd get some help ;)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Tue, 24 Jul 2007 02:50:18 -0700", "msg_from": "\"Jonathan Gray\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" } ]
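For readers following this thread, here is roughly what the "fetch the limit per customer, then merge" workaround looks like as a set-returning plpgsql function against the indextest schema from the attached test case. It is only a sketch of the application-side approach being discussed, not the internal sorted merge the posters are asking Postgres for. The function name and arguments are invented, and it assumes the composite (customerid, orderstamp) index from the test case exists:

    CREATE OR REPLACE FUNCTION indextest.top_orders_for_class(p_classid integer, p_limit integer)
    RETURNS SETOF indextest.orders AS $$
    DECLARE
        cust RECORD;
        o    indextest.orders%ROWTYPE;
    BEGIN
        -- one cheap backward scan of the (customerid, orderstamp) index per customer
        FOR cust IN
            SELECT customerid FROM indextest.customerclass WHERE classid = p_classid
        LOOP
            FOR o IN
                SELECT * FROM indextest.orders
                WHERE customerid = cust.customerid
                ORDER BY orderstamp DESC
                LIMIT p_limit
            LOOP
                RETURN NEXT o;
            END LOOP;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql STABLE;

    -- sort the (customers x limit) candidate rows and keep the global top N
    SELECT * FROM indextest.top_orders_for_class(2, 20)
    ORDER BY orderstamp DESC
    LIMIT 20;

A heap-based merge, as in the plpythonu version, does strictly less work than sorting all of the candidate rows, but when the per-customer limit is small the simple sort above is usually close in practice, as the original post already notes.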
[ { "msg_contents": "I have installed pgAdmin III 1.6. In the tool when you click on a\nparticular table you can select a tab called \"Statistics\". This tab has\nall kinds of info on your table. For some reason the only info I see is\nfor table size, toast table size and indexes size. Is there a reason\nthat the other 15 fields have zeros in them? I was thinking that maybe\nI needed to turn on a setting within my database in order to get\nstatistics reported.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI have installed pgAdmin III 1.6.  In the tool when you\nclick on a particular table you can select a tab called “Statistics”. \nThis tab has all kinds of info on your table.  For some reason the only info I\nsee is for table size, toast table size and indexes size.  Is there a reason\nthat the other 15 fields have zeros in them?  I was thinking that maybe I\nneeded to turn on a setting within my database in order to get statistics\nreported.\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Tue, 24 Jul 2007 12:06:13 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Table Statistics with pgAdmin III" }, { "msg_contents": "Campbell, Lance a �crit :\n> I have installed pgAdmin III 1.6. In the tool when you click on a \n> particular table you can select a tab called �Statistics�. This tab has \n> all kinds of info on your table. For some reason the only info I see is \n> for table size, toast table size and indexes size. Is there a reason \n> that the other 15 fields have zeros in them? I was thinking that maybe \n> I needed to turn on a setting within my database in order to get \n> statistics reported.\n\nit seems that the module pgstattuple is needed\n\n-- \nJean-Max Reymond\nCKR Solutions http://www.ckr-solutions.com\n", "msg_date": "Tue, 24 Jul 2007 19:23:53 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Statistics with pgAdmin III" } ]
[ { "msg_contents": "\n\n> ------- Original Message -------\n> From: Jean-Max Reymond <[email protected]>\n> To: [email protected]\n> Sent: 24/07/07, 18:23:53\n> Subject: Re: [PERFORM] Table Statistics with pgAdmin III\n> \n> Campbell, Lance a �crit :\n> > I have installed pgAdmin III 1.6. In the tool when you click on a \n> > particular table you can select a tab called �Statistics�. This tab has \n> > all kinds of info on your table. For some reason the only info I see is \n> > for table size, toast table size and indexes size. Is there a reason \n> > that the other 15 fields have zeros in them? I was thinking that maybe \n> > I needed to turn on a setting within my database in order to get \n> > statistics reported.\n> \n> it seems that the module pgstattuple is needed\n\nThat'll allow you to see extra stats in 1.8, but won't alter what you already see, in fact 1.6 won't use it at all. What values are at zero?\n\nRegards, Dave\n", "msg_date": "Tue, 24 Jul 2007 18:49:47 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table Statistics with pgAdmin III" }, { "msg_contents": "All of the fields are zero except for the three I listed in my posting.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Dave Page\nSent: Tuesday, July 24, 2007 12:50 PM\nTo: Jean-Max Reymond\nCc: [email protected]\nSubject: Re: [PERFORM] Table Statistics with pgAdmin III\n\n\n\n> ------- Original Message -------\n> From: Jean-Max Reymond <[email protected]>\n> To: [email protected]\n> Sent: 24/07/07, 18:23:53\n> Subject: Re: [PERFORM] Table Statistics with pgAdmin III\n> \n> Campbell, Lance a écrit :\n> > I have installed pgAdmin III 1.6. In the tool when you click on a \n> > particular table you can select a tab called \"Statistics\". This tab has \n> > all kinds of info on your table. For some reason the only info I see is \n> > for table size, toast table size and indexes size. Is there a reason \n> > that the other 15 fields have zeros in them? I was thinking that maybe \n> > I needed to turn on a setting within my database in order to get \n> > statistics reported.\n> \n> it seems that the module pgstattuple is needed\n\nThat'll allow you to see extra stats in 1.8, but won't alter what you already see, in fact 1.6 won't use it at all. What values are at zero?\n\nRegards, Dave\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n", "msg_date": "Tue, 24 Jul 2007 13:41:22 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Statistics with pgAdmin III" }, { "msg_contents": "Campbell, Lance wrote:\n> All of the fields are zero except for the three I listed in my posting.\n\nDo you have the stats collector enabled, and row & block level stats \nturned on?\n\nRegards, Dave.\n", "msg_date": "Wed, 25 Jul 2007 09:59:54 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Statistics with pgAdmin III" } ]
[ { "msg_contents": "Hi all,\n\n I've got the following two tables running on postgresql 8.1.4\n\n transactions\n Column | Type | Modifiers\n----------------------+-----------------------------+---------------\ntransaction_id | character varying(32) | not null\nuser_id | bigint | not null\ntimestamp_in | timestamp without time zone | default now()\ntype_id | integer |\ntechnology_id | integer |\nIndexes:\n \"pk_phusrtrans_transid\" PRIMARY KEY, btree (transaction_id)\n \"idx_phusrtrans_paytyptech\" btree (type_id, technology_id)\n \"idx_putrnsctns_tstampin\" btree (timestamp_in)\n\n\n\n statistics\n Column | Type | Modifiers\n----------------------+-----------------------------+-------------------\nstatistic_id | bigint | not null\nduration | bigint |\ntransaction_id | character varying(32) |\nIndexes:\n \"pk_phstat_statid\" PRIMARY KEY, btree (statistic_id)\n \"idx_phstat_transid\" btree (transaction_id)\n\n\nthe idea is to have a summary of how many transactions, duration, and\ntype for every date. To do so, I've done the following query:\n\n\nSELECT\n count(t.transaction_id) AS num_transactions\n , SUM(s.duration) AS duration\n , date(t.timestamp_in) as date\n , t.type_id\nFROM\n transactions t\n LEFT OUTER JOIN statistics s ON t.transaction_id = s.transaction_id\nWHERE\n t.timestamp_in >= to_timestamp('20070101', 'YYYYMMDD')\nGROUP BY date, t.type_id;\n\nI think this could be speed up if the index idx_putrnsctns_tstampin\n(index over the timestamp) could be used, but I haven't been able to do\nit. Any suggestion?\n\nThanks all\n-- \nArnau\n", "msg_date": "Tue, 24 Jul 2007 20:27:01 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "index over timestamp not being used" }, { "msg_contents": "Arnau <[email protected]> writes:\n> timestamp_in | timestamp without time zone | default now()\n\n> SELECT ...\n> FROM\n> transactions t\n> LEFT OUTER JOIN statistics s ON t.transaction_id = s.transaction_id\n> WHERE\n> t.timestamp_in >= to_timestamp('20070101', 'YYYYMMDD')\n> GROUP BY date, t.type_id;\n\nto_timestamp() produces timestamp *with* timezone, so your WHERE query\nis effectively\n t.timestamp_in::timestamptz >= to_timestamp('20070101', 'YYYYMMDD')\nwhich doesn't match the index.\n\nThe first question you should ask yourself is whether you picked the\nright datatype for the column. IMHO timestamp with tz is the more\nappropriate choice in the majority of cases.\n\nIf you do want to stick with timestamp without tz, you'll need to cast\nthe result of to_timestamp to that.\n\nAlternatively, do you really need to_timestamp at all? The standard\ntimestamp input routine won't have any problem with that format:\n t.timestamp_in >= '20070101'\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jul 2007 15:24:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index over timestamp not being used " }, { "msg_contents": "Hi Tom,\n\n> \n> Alternatively, do you really need to_timestamp at all? The standard\n> timestamp input routine won't have any problem with that format:\n> t.timestamp_in >= '20070101'\n\nThis is always I think I'm worried, what happens if one day the internal \nformat in which the DB stores the date/timestamps changes. I mean, if \ninstead of being stored as YYYYMMDD is stored as DDMMYYYY, should we \nhave to change all the queries? 
I thought the \nto_char/to_date/to_timestamp functions were intented for this purposes\n\n\n-- \nArnau\n", "msg_date": "Tue, 24 Jul 2007 21:31:07 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index over timestamp not being used" }, { "msg_contents": "Arnau <[email protected]> writes:\n>> Alternatively, do you really need to_timestamp at all? The standard\n>> timestamp input routine won't have any problem with that format:\n>> t.timestamp_in >= '20070101'\n\n> This is always I think I'm worried, what happens if one day the internal \n> format in which the DB stores the date/timestamps changes. I mean, if \n> instead of being stored as YYYYMMDD is stored as DDMMYYYY, should we \n> have to change all the queries?\n\nYou are confusing internal storage format with the external\nrepresentation.\n\n> I thought the \n> to_char/to_date/to_timestamp functions were intented for this purposes\n\nNo, they're intended for dealing with wacky formats that the regular\ninput/output routines can't understand or produce.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jul 2007 15:43:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index over timestamp not being used " }, { "msg_contents": "Am Dienstag 24 Juli 2007 schrieb Tom Lane:\n> > I thought the\n> > to_char/to_date/to_timestamp functions were intented for this purposes\n>\n> No, they're intended for dealing with wacky formats that the regular\n> input/output routines can't understand or produce.\n\nReally? I use them alot, because of possible problems with different date \nformats. 20070503 means May 3, 2007 for germans, I don't know what it means \nto US citizens, but following the strange logic of having the month first \n(8/13/2005) it might mean March 5, 2007 too. Therefore, using to_timestamp \nseems to be a safe choice for me, working in any environment regardless of \nthe \"date_style\" setting.\n\n", "msg_date": "Wed, 25 Jul 2007 15:05:10 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index over timestamp not being used" }, { "msg_contents": "On 2007-07-25 Mario Weilguni wrote:\n> Am Dienstag 24 Juli 2007 schrieb Tom Lane:\n>>> I thought the to_char/to_date/to_timestamp functions were intented\n>>> for this purposes\n>>\n>> No, they're intended for dealing with wacky formats that the regular\n>> input/output routines can't understand or produce.\n> \n> Really? I use them alot, because of possible problems with different\n> date formats. 20070503 means May 3, 2007 for germans,\n\nActually, no. 20070503 is the condensed form of the ISO international\ncalendar date format (see ISO 8601). German formats would be 03.05.2007\nor 3. Mai 2007.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Wed, 25 Jul 2007 15:35:12 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index over timestamp not being used" } ]
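Putting Tom's two fixes into query form against Arnau's schema, either of these keeps the comparison in timestamp without time zone, so the idx_putrnsctns_tstampin index can at least be considered (EXPLAIN is the quick way to confirm it is actually chosen):

    -- plain literal: the standard timestamp input routine parses this format
    SELECT count(*) FROM transactions t
    WHERE  t.timestamp_in >= '20070101';

    -- or keep to_timestamp() and cast its timestamptz result back to the
    -- column's type so the comparison matches the index
    SELECT count(*) FROM transactions t
    WHERE  t.timestamp_in >= to_timestamp('20070101', 'YYYYMMDD')::timestamp without time zone;

The count(*) form is only a stand-in for the original join; the WHERE clause is the part that changes.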
[ { "msg_contents": "I've got an interesting issue here that I'm running into with 8.2.3\n\nThis is an application that has run quite well for a long time, and has \nbeen operating without significant changes (other than recompilation) \nsince back in the early 7.x Postgres days. But now we're seeing a LOT \nmore load than we used to with it, and suddenly, we're seeing odd \nperformance issues.\n\nIt APPEARS that the problem isn't query performance per-se. That is, \nwhile I can find a few processes here and there in a run state when I \nlook with a PS, I don't see them consistently churning.\n\nBut.... here's the query that has a habit of taking the most time....\n\nselect forum, * from post where toppost = 1 and (replied > (select \nlastview from forumlog where login='theuser' and forum=post.forum and \nnumber is null)) is not false AND (replied > (select lastview from \nforumlog where login='theuser' and forum=post.forum and \nnumber=post.number)) is not f\nalse order by pinned desc, replied desc offset 0 limit 20\n\nNow the numeric and \"login\" fields may change; when I plug it into \nexplain what I get back is:\n\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------\n Limit (cost=57270.22..57270.27 rows=20 width=757)\n -> Sort (cost=57270.22..57270.50 rows=113 width=757)\n Sort Key: pinned, replied\n -> Index Scan using post_top on post (cost=0.00..57266.37 \nrows=113 width=757)\n Index Cond: (toppost = 1)\n Filter: (((replied > (subplan)) IS NOT FALSE) AND \n((replied > (subplan)) IS NOT FALSE))\n SubPlan\n -> Index Scan using forumlog_composite on forumlog \n(cost=0.00..8.29 rows=1 width\n Index Cond: ((\"login\" = 'theuser'::text) AND \n(forum = $0) AND (number = $1))\n -> Bitmap Heap Scan on forumlog (cost=4.39..47.61 \nrows=1 width=8)\n Recheck Cond: ((\"login\" = 'theuser'::text) AND \n(forum = $0))\n Filter: (number IS NULL)\n -> Bitmap Index Scan on forumlog_composite \n(cost=0.00..4.39 rows=12 width=0)\n Index Cond: ((\"login\" = 'theuser'::text) \nAND (forum = $0))\n(14 rows)\n\nAnd indeed, it returns a fairly reasonable number of rows.\n\nThis takes a second or two to return - not all that bad - although this \nis one that people hit a LOT. \n\nOne thing that DOES bother me is this line from the EXPLAIN output:\n-> Index Scan using post_top on post (cost=0.00..57266.53 rows=113 \nwidth=757)\n\nThis is indexed using:\n\n \"post_top\" btree (toppost)\n\nAin't nothing fancy there. So how come the planner thinks this is going \nto take that long?!?\n\nMore interesting, if I do a simple query on that line, I get....\n\nticker=> explain select forum from post where toppost='1';\n QUERY PLAN \n---------------------------------------------------------------------------\n Index Scan using post_top on post (cost=0.00..632.03 rows=1013 width=11)\n Index Cond: (toppost = 1)\n\nHmmmmmm; that's a bit more reasonable. So what's up with the above line?\n\nWhat I'm seeing is that as concurrency increases, I see the CPU load \nspike. Disk I/O is low to moderate at less than 10% of maximum \naccording to systat -vm, no swap in use, 300mb dedicated to shared \nmemory buffers for Postgres (machine has 1GB of RAM and is a P4/3G/HT \nrunning FreeBSD 6.2-STABLE) It does not swap at all, so it does not \nappear I've got a problem with running out of physical memory. 
shmem is \npinned to physical memory via the sysctl tuning parameter to prevent \npage table thrashing.\n\nHowever, load goes up substantially and under moderate to high \nconcurrency gets into the 4-5 range with response going somewhat to \ncrap. The application is still usable, but its not \"crisp\". If I do a \n\"ps\" during times that performance is particularly bad, I don't see any \nparticular overrepresentation of this query .vs. others (I have the \napplication doing a \"setproctitle\" so I know what command - and thus \nwhat sets of queries - it is executing.)\n\nNot sure where to start here. It appears that I'm CPU limited and the \nproblem may be that this is a web-served application that must connect \nto the Postgres backend for each transaction, perform its queries, and \nthen close the connection down - in other words the load may be coming \nnot from Postgres but rather from places I can't fix at the application \nlayer (e.g. fork() overhead, etc). The DBMS and Apache server are on \nthe same machine, so there's no actual network overhead involved. \n\nIf that's the case the only solution is to throw more hardware at it. I \ncan do that, but before I go tossing more CPU at the problem I'd like to \nknow I'm not just wasting money.\n\nThe application uses the \"C\" language interface and just calls \n\"Connectdb\" - the only parameter is the dbname, so it should be \ndefaulting to the local socket. It appears that this is the case.\n\nThe system load is typically divided up as about 2:1 system .vs. user \ntime on the CPU meter.\n\nIdeas on where to start in trying to profile where the bottlenecks are?\n\nThe indices are btrees - I'm wondering if perhaps I should be using \nsomething different here?\n\nThanks in advance.....\n\n-- \nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\n%SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n", "msg_date": "Tue, 24 Jul 2007 20:20:09 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "Karl Denninger <[email protected]> writes:\n> But.... here's the query that has a habit of taking the most time....\n\n> select forum, * from post where toppost = 1 and (replied > (select \n> lastview from forumlog where login='theuser' and forum=post.forum and \n> number is null)) is not false AND (replied > (select lastview from \n> forumlog where login='theuser' and forum=post.forum and \n> number=post.number)) is not f\n> alse order by pinned desc, replied desc offset 0 limit 20\n\nDid that ever perform well for you? It's the sub-selects that are\nlikely to hurt ... 
in particular,\n\n> -> Index Scan using post_top on post (cost=0.00..57266.37 \n> rows=113 width=757)\n> Index Cond: (toppost = 1)\n> Filter: (((replied > (subplan)) IS NOT FALSE) AND \n> ((replied > (subplan)) IS NOT FALSE))\n\nversus\n\n> Index Scan using post_top on post (cost=0.00..632.03 rows=1013 width=11)\n> Index Cond: (toppost = 1)\n\nThe planner thinks that the two subplan filter conditions will eliminate\nabout 90% of the rows returned by the bare indexscan (IIRC this is\npurely a rule of thumb, not based on any statistics) and that testing\nthem 1013 times will add over 50000 cost units to the basic indexscan.\nThat part I believe --- correlated subqueries are expensive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jul 2007 22:45:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application " }, { "msg_contents": "Yeah, the problem doesn't appear to be there. As I said, if I look at \nthe PS of the system when its bogging, there aren't a whole bunch of \nprocesses stuck doing these, so while this does take a second or two to \ncome back, that's not that bad.\n\nIts GENERAL performance that just bites - the system is obviously out of \nCPU, but what I can't get a handle on is WHY. It does not appear to be \naccumulating large amounts of runtime in processes I can catch, but the \nload average is quite high.\n\nThis is why I'm wondering if what I'm taking here is a hit on the \nfork/exec inside the portmaster, in the setup internally in there, in \nthe IPC between my process via libPQ, etc - and how I can profile what's \ngoing on.\n\nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\nTom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> \n>> But.... here's the query that has a habit of taking the most time....\n>> \n>\n> \n>> select forum, * from post where toppost = 1 and (replied > (select \n>> lastview from forumlog where login='theuser' and forum=post.forum and \n>> number is null)) is not false AND (replied > (select lastview from \n>> forumlog where login='theuser' and forum=post.forum and \n>> number=post.number)) is not f\n>> alse order by pinned desc, replied desc offset 0 limit 20\n>> \n>\n> Did that ever perform well for you? It's the sub-selects that are\n> likely to hurt ... in particular,\n>\n> \n>> -> Index Scan using post_top on post (cost=0.00..57266.37 \n>> rows=113 width=757)\n>> Index Cond: (toppost = 1)\n>> Filter: (((replied > (subplan)) IS NOT FALSE) AND \n>> ((replied > (subplan)) IS NOT FALSE))\n>> \n>\n> versus\n>\n> \n>> Index Scan using post_top on post (cost=0.00..632.03 rows=1013 width=11)\n>> Index Cond: (toppost = 1)\n>> \n>\n> The planner thinks that the two subplan filter conditions will eliminate\n> about 90% of the rows returned by the bare indexscan (IIRC this is\n> purely a rule of thumb, not based on any statistics) and that testing\n> them 1013 times will add over 50000 cost units to the basic indexscan.\n> That part I believe --- correlated subqueries are expensive.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n> %SPAMBLOCK-SYS: Matched [hub.org+], message ok\n> \n\n\n\n\n\n\nYeah, the problem doesn't appear to be there.  
As I said, if I look at\nthe PS of the system when its bogging, there aren't a whole bunch of\nprocesses stuck doing these, so while this does take a second or two to\ncome back, that's not that bad.\n\nIts GENERAL performance that just bites - the system is obviously out\nof CPU, but what I can't get a handle on is WHY.  It does not appear to\nbe accumulating large amounts of runtime in processes I can catch, but\nthe load average is quite high.\n\nThis is why I'm wondering if what I'm taking here is a hit on the\nfork/exec inside the portmaster, in the setup internally in there, in\nthe IPC between my process via libPQ, etc - and how I can profile\nwhat's going on.\nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\nTom Lane wrote:\n\nKarl Denninger <[email protected]> writes:\n \n\nBut.... here's the query that has a habit of taking the most time....\n \n\n\n \n\nselect forum, * from post where toppost = 1 and (replied > (select \nlastview from forumlog where login='theuser' and forum=post.forum and \nnumber is null)) is not false AND (replied > (select lastview from \nforumlog where login='theuser' and forum=post.forum and \nnumber=post.number)) is not f\nalse order by pinned desc, replied desc offset 0 limit 20\n \n\n\nDid that ever perform well for you? It's the sub-selects that are\nlikely to hurt ... in particular,\n\n \n\n -> Index Scan using post_top on post (cost=0.00..57266.37 \nrows=113 width=757)\n Index Cond: (toppost = 1)\n Filter: (((replied > (subplan)) IS NOT FALSE) AND \n((replied > (subplan)) IS NOT FALSE))\n \n\n\nversus\n\n \n\n Index Scan using post_top on post (cost=0.00..632.03 rows=1013 width=11)\n Index Cond: (toppost = 1)\n \n\n\nThe planner thinks that the two subplan filter conditions will eliminate\nabout 90% of the rows returned by the bare indexscan (IIRC this is\npurely a rule of thumb, not based on any statistics) and that testing\nthem 1013 times will add over 50000 cost units to the basic indexscan.\nThat part I believe --- correlated subqueries are expensive.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n%SPAMBLOCK-SYS: Matched [hub.org+], message ok", "msg_date": "Tue, 24 Jul 2007 21:55:45 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "On 7/25/07, Karl Denninger <[email protected]> wrote:\n>\n> Yeah, the problem doesn't appear to be there. As I said, if I look at the\n> PS of the system when its bogging, there aren't a whole bunch of processes\n> stuck doing these, so while this does take a second or two to come back,\n> that's not that bad.\n>\n> Its GENERAL performance that just bites - the system is obviously out of\n> CPU, but what I can't get a handle on is WHY. It does not appear to be\n> accumulating large amounts of runtime in processes I can catch, but the load\n> average is quite high.\n\n8.2.3 has the 'stats collector bug' (fixed in 8.2.4) which increased\nload in high concurrency conditions. on a client's machine after\npatching the postmaster load drop from the 4-5 range to 1-2 range on a\n500 tps server. maybe this is biting you? 
symptoms are high load avg\nand high cpu usage of stats collector process.\n\nmerlin\n", "msg_date": "Wed, 25 Jul 2007 08:45:19 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "Hmmmmm..... now that's interesting. Stats collector IS accumulating \nquite a bit of runtime..... me thinks its time to go grab 8.2.4.\n\nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\nMerlin Moncure wrote:\n> On 7/25/07, Karl Denninger <[email protected]> wrote:\n>>\n>> Yeah, the problem doesn't appear to be there. As I said, if I look \n>> at the\n>> PS of the system when its bogging, there aren't a whole bunch of \n>> processes\n>> stuck doing these, so while this does take a second or two to come back,\n>> that's not that bad.\n>>\n>> Its GENERAL performance that just bites - the system is obviously \n>> out of\n>> CPU, but what I can't get a handle on is WHY. It does not appear to be\n>> accumulating large amounts of runtime in processes I can catch, but \n>> the load\n>> average is quite high.\n>\n> 8.2.3 has the 'stats collector bug' (fixed in 8.2.4) which increased\n> load in high concurrency conditions. on a client's machine after\n> patching the postmaster load drop from the 4-5 range to 1-2 range on a\n> 500 tps server. maybe this is biting you? symptoms are high load avg\n> and high cpu usage of stats collector process.\n>\n> merlin\n>\n>\n> %SPAMBLOCK-SYS: Matched [google.com+], message ok\n\n\n%SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n", "msg_date": "Tue, 24 Jul 2007 22:16:16 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "Karl Denninger <[email protected]> writes:\n> Hmmmmm..... now that's interesting. Stats collector IS accumulating \n> quite a bit of runtime..... me thinks its time to go grab 8.2.4.\n\nI think Merlin might have nailed it --- the \"stats collector bug\" is\nthat it tries to write out the stats file way more often than it\nshould. So any excessive userland CPU time you see is just the tip\nof the iceberg compared to the system and I/O costs incurred.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jul 2007 23:25:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application " }, { "msg_contents": "Aha!\n\nBIG difference. I won't know for sure until the biz day tomorrow but \nthe \"first blush\" look is that it makes a HUGE difference in system \nload, and I no longer have the stats collector process on the top of the \n\"top\" list......\n\nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\nTom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> \n>> Hmmmmm..... now that's interesting. Stats collector IS accumulating \n>> quite a bit of runtime..... me thinks its time to go grab 8.2.4.\n>> \n>\n> I think Merlin might have nailed it --- the \"stats collector bug\" is\n> that it tries to write out the stats file way more often than it\n> should. 
So any excessive userland CPU time you see is just the tip\n> of the iceberg compared to the system and I/O costs incurred.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> %SPAMBLOCK-SYS: Matched [hub.org+], message ok\n> \n\n\n%SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n", "msg_date": "Tue, 24 Jul 2007 22:35:41 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "Karl Denninger skrev:\n> I've got an interesting issue here that I'm running into with 8.2.3\n> \n> This is an application that has run quite well for a long time, and has\n> been operating without significant changes (other than recompilation)\n> since back in the early 7.x Postgres days. But now we're seeing a LOT\n> more load than we used to with it, and suddenly, we're seeing odd\n> performance issues.\n> \n> It APPEARS that the problem isn't query performance per-se. That is,\n> while I can find a few processes here and there in a run state when I\n> look with a PS, I don't see them consistently churning.\n> \n> But.... here's the query that has a habit of taking the most time....\n> \n> select forum, * from post where toppost = 1 and (replied > (select\n> lastview from forumlog where login='theuser' and forum=post.forum and\n> number is null)) is not false AND (replied > (select lastview from\n> forumlog where login='theuser' and forum=post.forum and\n> number=post.number)) is not false order by pinned desc, replied desc offset 0 limit 20\n\nSince I can do little to help you with anything else, here is a little\nhelp from a guy with a hammer. It seems you may be able to convert the\nsubqueries into a left join. Not sure whether this helps, nor whether I\ngot some bits of the logic wrong, but something like this might help the\nplanner find a better plan:\n\nSELECT forum, *\nFROM post\nLEFT JOIN forumlog\nON post.forum = forumlog.forum\nAND forumlog.login = 'theuser'\nAND (post.number = forumlog.number OR forumlog.number IS NULL)\nAND post.replied <= lastview\nWHERE forumlog.forum IS NULL\nAND forum.toppost = 1\nORDER BY pinned DESC, replied DESC OFFSET 0 LIMIT 20 ;\n\n\nNis\n\n", "msg_date": "Wed, 25 Jul 2007 08:17:58 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "\"Karl Denninger\" <[email protected]> writes:\n\n> Not sure where to start here. It appears that I'm CPU limited and the problem\n> may be that this is a web-served application that must connect to the Postgres\n> backend for each transaction, perform its queries, and then close the\n> connection down - in other words the load may be coming not from Postgres but\n> rather from places I can't fix at the application layer (e.g. fork() overhead,\n> etc). The DBMS and Apache server are on the same machine, so there's no actual\n> network overhead involved.\n>\n> If that's the case the only solution is to throw more hardware at it. 
I can do\n> that, but before I go tossing more CPU at the problem I'd like to know I'm not\n> just wasting money.\n\nI know you found the proximate cause of your current problems, but it sounds\nlike you have something else you should consider looking at here. There are\ntechniques for avoiding separate database connections for each request.\n\nIf you're using Apache you can reduce the CPU usage a lot by writing your\nmodule as an Apache module instead of a CGI or whatever type of program it is\nnow. Then your module would live as long as a single Apache instance which you\ncan configure to be hours or days instead of a single request. It can keep\naround the database connection for that time.\n\nIf that's impossible there are still techniques that can help. You can set up\nPGPool or PGBouncer or some other connection aggregating tool to handle the\nconnections. This is a pretty low-impact change which shouldn't require making\nany application changes aside from changing the database connection string.\nEffectively this is a just a connection pool that lives in a separate\nprocess.\n \n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 25 Jul 2007 08:58:40 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" }, { "msg_contents": "Looks like that was the problem - got a day under the belt now with the \n8.2.4 rev and all is back to normal.\n\nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\nKarl Denninger wrote:\n> Aha!\n>\n> BIG difference. I won't know for sure until the biz day tomorrow but \n> the \"first blush\" look is that it makes a HUGE difference in system \n> load, and I no longer have the stats collector process on the top of \n> the \"top\" list......\n>\n> Karl Denninger ([email protected])\n> http://www.denninger.net\n>\n>\n>\n>\n> Tom Lane wrote:\n>> Karl Denninger <[email protected]> writes:\n>> \n>>> Hmmmmm..... now that's interesting. Stats collector IS accumulating \n>>> quite a bit of runtime..... me thinks its time to go grab 8.2.4.\n>>> \n>>\n>> I think Merlin might have nailed it --- the \"stats collector bug\" is\n>> that it tries to write out the stats file way more often than it\n>> should. So any excessive userland CPU time you see is just the tip\n>> of the iceberg compared to the system and I/O costs incurred.\n>>\n>> regards, tom lane\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>>\n>> %SPAMBLOCK-SYS: Matched [hub.org+], message ok\n>> \n>\n>\n> %SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n>\n> %SPAMBLOCK-SYS: Matched [hub.org+], message ok\n\n\n%SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n", "msg_date": "Wed, 25 Jul 2007 16:01:28 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with 8.2.3 - \"C\" application" } ]
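To make the pooling suggestion a bit more concrete, here is a minimal pgbouncer.ini sketch for a setup like the one in this thread, where the web server and the postmaster share one box. Every name and number is illustrative, and the authentication settings (auth_type, auth_file) are left out and need to follow local policy:

    [databases]
    ticker = host=127.0.0.1 port=5432 dbname=ticker

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    ; one server connection per transaction; avoid this mode if the application
    ; relies on session state such as temp tables or session prepared statements
    pool_mode = transaction
    ; many short-lived CGI connections fan in over a small set of reusable backends
    max_client_conn = 500
    default_pool_size = 20

The only application change is the connect string, for example PQconnectdb("host=127.0.0.1 port=6432 dbname=ticker") instead of the bare database name, so the fork-per-request CGI keeps its code but stops paying for a fresh backend on every hit.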
[ { "msg_contents": "I am wondering if reindexing heavily used tables can have an impact on\nvacuum times. If it does, will the impact be noticeable the next time I\nvacuum? Please note that I am doing vacuum, not vacuum full.\n\nI am on a FreeBSD 6.1 Release, Postgresql is 8.09\n\nCurrently I seeing a phenomenon where vacuum times go up beyond 1 hour.\nAfter I re-index 3 tables, heavily used, the vacuum times stay up for the\nnext 3 daily vacuums and then come down to 30 to 40 minutes. I am trying to\nsee if there is a relationship between re-indexinf and vacuum times. All\nother things remain the same. Which means the only change I am performing is\nre-indexing.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\nhttp://theurbanturban.blogspot.com/\nhttp://ysidhu.googlepages.com/\n\nI am wondering if reindexing heavily used tables can have an impact on vacuum times. If it does, will the impact be noticeable the next time I vacuum? Please note that I am doing vacuum, not vacuum full. I am on a FreeBSD \n6.1 Release, Postgresql is 8.09Currently I seeing a phenomenon where vacuum times go up beyond 1 hour. After I re-index 3 tables, heavily used, the vacuum times stay up for the next 3 daily vacuums and then come down to 30 to 40 minutes. I am trying to see if there is a relationship between re-indexinf and vacuum times. All other things remain the same. Which means the only change I am performing is re-indexing.\n-- Yudhvir Singh Sidhu408 375 3134 cellhttp://theurbanturban.blogspot.com/http://ysidhu.googlepages.com/", "msg_date": "Wed, 25 Jul 2007 11:53:16 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Affect of Reindexing on Vacuum Times" }, { "msg_contents": "On Jul 25, 2007, at 11:53 AM, Y Sidhu wrote:\n> I am wondering if reindexing heavily used tables can have an impact \n> on vacuum times. If it does, will the impact be noticeable the next \n> time I vacuum? Please note that I am doing vacuum, not vacuum full.\n>\n> I am on a FreeBSD 6.1 Release, Postgresql is 8.09\n>\n> Currently I seeing a phenomenon where vacuum times go up beyond 1 \n> hour. After I re-index 3 tables, heavily used, the vacuum times \n> stay up for the next 3 daily vacuums and then come down to 30 to 40 \n> minutes. I am trying to see if there is a relationship between re- \n> indexinf and vacuum times. All other things remain the same. Which \n> means the only change I am performing is re-indexing.\n\nReindex will shrink index sizes, which will speed up vacuuming. But \nthat alone doesn't explain what you're seeing, which is rather odd.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Thu, 26 Jul 2007 16:27:59 -0700", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Affect of Reindexing on Vacuum Times" } ]
[ { "msg_contents": "Hi,\n\n I am having problems with some of the Insert statements in the prod\ndatabase. Our client application is trying into insert some of the\nrecords and it is not going through , they are just hanging. They are\nrunning in a transaction and some how it is not telling us what is it\nwaiting on . Here is the output from pg_stat_activity\n\nselect current_query from pg_stat_activity where current_query <>\n'<IDLE>' order by query_start;\n\ncurrent_query\n------------------\n insert into provisioning.accountnote (fkserviceinstanceid, fkaccountid,\nfknoteid, accountnoteid) values ($1, $2, $3, $4)\n\nAs soon as I kill the process id for this insert statement I see these\nstatements in the postgres log file\n\n FATAL: terminating connection due to administrator command\n CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"provisioning\".\"account\" x\nWHERE \"accountid\" = $1 FOR UPDATE OF x\"\n\nI am hoping \"SELECT 1 FROM ONLY \"provisioning\".\"account\" x WHERE\n\"accountid\" = $1 FOR UPDATE OF x\" is causing the problem. If that is the\ncase why doesnt it show in the pg_stat_activity view ? or am I missing\nsomething here ? what would be the reason for insert statement to hang\nlike that ? \n\nPostgres version: 8.0.12, vacuums and analyze are done regularly.\n\nHere are table structures\n\n\\d provisioning.accountnote\n Table \"provisioning.accountnote\"\n Column | Type | Modifiers\n---------------------+---------+---------------------------------------------------------------\n accountnoteid | integer | not null default\nnextval('provisioning.AccountNoteSeq'::text)\n fkaccountid | integer | not null\n fknoteid | integer | not null\n fkserviceinstanceid | integer |\nIndexes:\n \"pk_accountnote_accountnoteid\" PRIMARY KEY, btree (accountnoteid)\n \"idx_accountnote_fkaccountid\" btree (fkaccountid)\n \"idx_accountnote_fknoteid\" btree (fknoteid)\n \"idx_accountnote_fkserviceinstanceid\" btree (fkserviceinstanceid)\nForeign-key constraints:\n \"fk_accountnote_serviceinstance\" FOREIGN KEY (fkserviceinstanceid)\nREFERENCES provisioning.serviceinstance(serviceinstanceid)\n \"fk_accountnote_note\" FOREIGN KEY (fknoteid) REFERENCES\ncommon.note(noteid)\n \"fk_accountnote_account\" FOREIGN KEY (fkaccountid) REFERENCES\nprovisioning.account(accountid)\n\n\n\n\\d provisioning.account\n Table \"provisioning.account\"\n Column | Type \n| Modifiers\n--------------------------+-----------------------------+----------------------------------------------------------------\n accountid | integer | not null\ndefault nextval('provisioning.AccountSeq'::text)\n createdate | timestamp without time zone | not null\ndefault ('now'::text)::timestamp(6) without time zone\n fkcontactid | integer |\n login | text | not null\n password | text |\n fkserviceproviderid | integer |\n serviceproviderreference | text |\nIndexes:\n \"pk_account_accountid\" PRIMARY KEY, btree (accountid)\n \"idx_account_fkcontactid\" btree (fkcontactid)\n \"idx_account_login\" btree (login)\n \"idx_account_serviceproviderreference\" btree (serviceproviderreference)\nForeign-key constraints:\n \"fk_account_serviceprovider\" FOREIGN KEY (fkserviceproviderid)\nREFERENCES provisioning.serviceprovider(serviceproviderid)\n \"fk_account_contact\" FOREIGN KEY (fkcontactid) REFERENCES\ncommon.contact(contactid)\n\nThanks!\nPallav.\n\n", "msg_date": "Wed, 25 Jul 2007 16:27:02 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Insert Statements Hanging " }, { "msg_contents": "On Wednesday 25 July 
2007 13:27, Pallav Kalva <[email protected]> \nwrote:\n> I am hoping \"SELECT 1 FROM ONLY \"provisioning\".\"account\" x WHERE\n> \"accountid\" = $1 FOR UPDATE OF x\" is causing the problem. If that is the\n> case why doesnt it show in the pg_stat_activity view ? or am I missing\n> something here ? what would be the reason for insert statement to hang\n> like that ?\n\nIt's waiting for a lock, probably on one of the tables that it references \nfor foreign keys.\n\n8.1 or later would have that happen a lot less; they altered the locking \nrequirements for foreign key lookups.\n\n-- \n\"It is a besetting vice of democracies to substitute public opinion for\nlaw.\" - James Fenimore Cooper \n\n", "msg_date": "Wed, 25 Jul 2007 13:55:13 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert Statements Hanging" } ]
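When a statement hangs like this, the ungranted rows in pg_locks usually tell the story even though pg_stat_activity only shows the stuck INSERT. Below is a sketch written against the 8.0 catalogs the poster is on, where the backend pid column is procpid and the transaction column is named simply transaction (both were renamed in later releases); current_query is only populated when stats_command_string is enabled.

-- which backends are waiting, and on what
SELECT l.pid, l.relation::regclass AS rel, l.transaction, l.mode, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;

-- a row-level / foreign-key wait usually shows up as an ungranted ShareLock on
-- another backend's transaction id; this lists who holds that transaction
SELECT h.pid AS blocking_pid, a.current_query AS blocking_query
  FROM pg_locks h
  JOIN pg_stat_activity a ON a.procpid = h.pid
 WHERE h.granted
   AND h.transaction IN (SELECT transaction FROM pg_locks WHERE NOT granted);

In this case the blocker is most likely another session holding a long transaction that touched the same provisioning.account row, since 8.0 takes the FOR UPDATE row lock for the foreign key check; killing the waiting INSERT, as was done here, removes the symptom but not that open transaction.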
[ { "msg_contents": "Hi all,\nwhats the benefits of replication by using slony in\npostgresql??\nMy office is separate in several difference place..its\nabout hundreds branch office in the difference\nplace..so any one can help me to replicate our dbase\nby using slony?? and why slony??\n\nthanks,\nAlgebra.corp\nBayu\n\n\n \n____________________________________________________________________________________\nPinpoint customers who are looking for what you sell. \nhttp://searchmarketing.yahoo.com/\n", "msg_date": "Thu, 26 Jul 2007 01:44:19 -0700 (PDT)", "msg_from": "angga erwina <[email protected]>", "msg_from_op": true, "msg_subject": "performance of postgresql in replication using slony" }, { "msg_contents": "On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:\n> Hi all,\n> whats the benefits of replication by using slony in\n> postgresql??\n> My office is separate in several difference place..its\n> about hundreds branch office in the difference\n> place..so any one can help me to replicate our dbase\n> by using slony?? and why slony??\n> \n\nThis question should be asked on the slony1-general list, you'll get\nmore responses there.\n\nThe benefit of using slony is that you can read from many servers rather\nthan just one.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 26 Jul 2007 10:21:45 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of postgresql in replication using slony" }, { "msg_contents": "[email protected] (Jeff Davis) writes:\n> On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:\n>> Hi all,\n>> whats the benefits of replication by using slony in\n>> postgresql??\n>> My office is separate in several difference place..its\n>> about hundreds branch office in the difference\n>> place..so any one can help me to replicate our dbase\n>> by using slony?? and why slony??\n>> \n>\n> This question should be asked on the slony1-general list, you'll get\n> more responses there.\n>\n> The benefit of using slony is that you can read from many servers rather\n> than just one.\n\nIndeed.\n\nIt would be worth taking a peek at the documentation, notably the\nintroductory material, as that will give some idea as to whether\nSlony-I is suitable at all for the desired purpose.\n\nOne thing that \"tweaks\" my antennae a bit is the mention of having\n\"hundreds branch office\"; there are two things worth mentioning that\nwould be relevant to that:\n\n- Slony-I is a single-master replication system, *not* a multimaster\nsystem. If someone is expecting to do updates at branch offices, and\nthat this will propagate everywhere, that is likely not to work out\neasily or well.\n\n- If it *is* fair to assess that there will only be one \"master\",\nSlony-I is intended to support a relatively limited number of nodes.\nThere are no absolute restrictions on numbers of subscribers, but\nthere are enough communications costs that grow in a polynomial\nfashion that I would be quite disinclined to have more than a dozen\nnodes in a cluster.\n-- \n(reverse (concatenate 'string \"ofni.sesabatadxunil\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/slony.html\nHELP! I'm being attacked by a tenured professor!\n", "msg_date": "Thu, 26 Jul 2007 14:28:12 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of postgresql in replication using slony" } ]
[ { "msg_contents": "In response to \"Brandon Shalton\" <[email protected]>:\n\n> Hello all,\n> \n> My hard disk is filling up in the /base directory to where it has consumed \n> all 200gig of that drive.\n> \n> All the posts that i see keep saying move to a bigger drive, but at some \n> point a bigger drive would just get consumed.\n> \n> How can i keep the disk from filling up other than get like a half TB setup \n> just to hold the ./base/* folder\n\nAre you vacuuming regularly? What is the output of vacuum verbose.\n\nIf table bloat (fixed by correctly vacuuming) is not your problem, then\nyou either need to implement a data expiration policy to get rid of\nold data, or increase the amount of storage to accommodate the data\nyou want to keep.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 26 Jul 2007 09:34:32 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: disk filling up" }, { "msg_contents": "On Thu, 2007-07-26 at 09:18 -0700, Brandon Shalton wrote:\n> Hello all,\n> \n> My hard disk is filling up in the /base directory to where it has consumed \n> all 200gig of that drive.\n> \n> All the posts that i see keep saying move to a bigger drive, but at some \n> point a bigger drive would just get consumed.\n> \n> How can i keep the disk from filling up other than get like a half TB setup \n> just to hold the ./base/* folder\n\n\nUmmm, don't put more 200G worth of data in there? :)\n\nYou didn't give us any information about what you're using the database\nfor, why you think that using 200G is excessive, what version of the\ndatabase you're running, stuff like that. So really there's nothing\nthat we can help you out with, except for the normal \"read the manual\nabout vacuuming and make sure you're doing it\" newbie answer.\n\n-- Mark\n", "msg_date": "Thu, 26 Jul 2007 07:37:23 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" }, { "msg_contents": "Brandon Shalton wrote:\n> Hello all,\n> \n> My hard disk is filling up in the /base directory to where it has \n> consumed all 200gig of that drive.\n> \n> All the posts that i see keep saying move to a bigger drive, but at some \n> point a bigger drive would just get consumed.\n> \n> How can i keep the disk from filling up other than get like a half TB \n> setup just to hold the ./base/* folder\n\n1. Don't have two hundred gig of data.\n2. Sounds more like you don't have 200G of data and you aren't vacuuming \nenough.\n\nJoshua D. Drake\n\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 26 Jul 2007 07:47:08 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" }, { "msg_contents": "Mark Kirkwood wrote:\n> Brandon Shalton wrote:\n>> Hello all,\n>>\n>> My hard disk is filling up in the /base directory to where it has \n>> consumed all 200gig of that drive.\n>>\n>> All the posts that i see keep saying move to a bigger drive, but at \n>> some point a bigger drive would just get consumed.\n>>\n>> How can i keep the disk from filling up other than get like a half TB \n>> setup just to hold the ./base/* folder\n>>\n>\n> Two things come to mind:\n>\n> - Vacuum as already mentioned by others.\n> - Temporary sort files from queries needing to sort massive amounts of \n> data.\n>\n> But you need to help us out by supplying more info...\n>\n> Cheers\n>\n> Mark\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\nTry fsck-ing and do a df before and after and tell us if that makes a \ndifference.\n\nYudhvir\n", "msg_date": "Thu, 26 Jul 2007 20:36:37 +0530", "msg_from": "Yudhvir Singh Sidhu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" }, { "msg_contents": "Hello all,\n\nMy hard disk is filling up in the /base directory to where it has consumed \nall 200gig of that drive.\n\nAll the posts that i see keep saying move to a bigger drive, but at some \npoint a bigger drive would just get consumed.\n\nHow can i keep the disk from filling up other than get like a half TB setup \njust to hold the ./base/* folder\n\n\n\n\n", "msg_date": "Thu, 26 Jul 2007 09:18:55 -0700", "msg_from": "\"Brandon Shalton\" <[email protected]>", "msg_from_op": false, "msg_subject": "disk filling up" }, { "msg_contents": "Brandon Shalton wrote:\n> Hello all,\n>\n> My hard disk is filling up in the /base directory to where it has \n> consumed all 200gig of that drive.\n>\n> All the posts that i see keep saying move to a bigger drive, but at \n> some point a bigger drive would just get consumed.\n>\n> How can i keep the disk from filling up other than get like a half TB \n> setup just to hold the ./base/* folder\n>\n\nTwo things come to mind:\n\n- Vacuum as already mentioned by others.\n- Temporary sort files from queries needing to sort massive amounts of data.\n\nBut you need to help us out by supplying more info...\n\nCheers\n\nMark\n", "msg_date": "Fri, 27 Jul 2007 12:48:36 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" }, { "msg_contents": "On 7/26/07, Joshua D. Drake <[email protected]> wrote:\n> Brandon Shalton wrote:\n> > Hello all,\n> >\n> > My hard disk is filling up in the /base directory to where it has\n> > consumed all 200gig of that drive.\n> >\n> > All the posts that i see keep saying move to a bigger drive, but at some\n> > point a bigger drive would just get consumed.\n> >\n> > How can i keep the disk from filling up other than get like a half TB\n> > setup just to hold the ./base/* folder\n>\n> 1. Don't have two hundred gig of data.\n> 2. Sounds more like you don't have 200G of data and you aren't vacuuming\n> enough.\n\nthird (but unlikely) possibility is there are various dropped tables,\netc which need to be deleted but there are stale postgresql processes\nholding on to the fd. This would only happen following a postmaster\ncrash or some other bizarre scenario, but i've seen it on production\nbox. symptoms are du and df reporting different numbers. 
solutions\nis easy: reboot or stop postmaster and kill all postgresql processes\n(or, if you are brave, do it with dbms running and nail all processes\nnot following postmaster, do a ps axf [on linux] to see them).\n\nmerlin\n", "msg_date": "Fri, 27 Jul 2007 09:44:31 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> third (but unlikely) possibility is there are various dropped tables,\n> etc which need to be deleted but there are stale postgresql processes\n> holding on to the fd. This would only happen following a postmaster\n> crash or some other bizarre scenario, but i've seen it on production\n> box.\n\nBrent Reid reported something similar in bug #3483 but I'm still quite\nunclear how it'd happen in any realistic scenario. Can you create a\ntest case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Jul 2007 01:19:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up " }, { "msg_contents": "On 7/31/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > third (but unlikely) possibility is there are various dropped tables,\n> > etc which need to be deleted but there are stale postgresql processes\n> > holding on to the fd. This would only happen following a postmaster\n> > crash or some other bizarre scenario, but i've seen it on production\n> > box.\n>\n> Brent Reid reported something similar in bug #3483 but I'm still quite\n> unclear how it'd happen in any realistic scenario. Can you create a\n> test case?\n\nNo, but I've seen it on a production 8.1 box (once). I didn't\nactually cause the problem, just cleaned it up. It was unnoticed for\nseveral weeks/months because the postmaster processes showed up\nwithout a controlling tty.\n\nMy best guess is the postmaster was killed improperly out of haste\nduring a maintenance window, or possibly an out of disk space related\nissue at an earlier point. I never really considered that it was a\npostgresql problem.\n\nmerlin\n", "msg_date": "Tue, 31 Jul 2007 12:16:38 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk filling up" } ]
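Before or alongside chasing stale file handles, it is worth confirming from inside the database what is actually occupying base/. A version-independent sketch; relpages is only as fresh as the last VACUUM or ANALYZE on each relation, and one page is 8 kB with a default build.

-- largest relations (tables, indexes, toast) by on-disk pages
SELECT relname, relkind, relpages, relpages * 8 / 1024 AS approx_mb
  FROM pg_class
 ORDER BY relpages DESC
 LIMIT 20;

If these totals roughly match what du reports for base/, the space is ordinary table and index volume, or bloat that VACUUM, VACUUM FULL or CLUSTER must reclaim; if du on base/ is far larger than the totals here, look for orphaned or temporary files; and if du and df disagree with each other, that points at the deleted-but-still-open files Merlin describes.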
[ { "msg_contents": "Hi,\nI have a couple questions about how update, truncate and vacuum would \nwork together.\n\n1) If I update a table foo (id int, value numeric (20, 6))\nwith\nupdate foo set value = 100 where id = 1\n\nWould a vacuum be necessary after this type of operation since the \nupdated value is a numeric? (as opposed to a sql type where its size \ncould potentially change i.e varchar)\n\n2) After several updates/deletes to a table, if I truncate it, would \nit be necessary to run vacuum in order to reclaim the space?\n\nthanks,\nScott\n", "msg_date": "Thu, 26 Jul 2007 15:36:50 -0700", "msg_from": "Scott Feldstein <[email protected]>", "msg_from_op": true, "msg_subject": "update, truncate and vacuum" }, { "msg_contents": "> From: Scott Feldstein\n> Subject: [PERFORM] update, truncate and vacuum\n> \n> Hi,\n> I have a couple questions about how update, truncate and \n> vacuum would work together.\n> \n> 1) If I update a table foo (id int, value numeric (20, 6)) \n> with update foo set value = 100 where id = 1\n> \n> Would a vacuum be necessary after this type of operation \n> since the updated value is a numeric? (as opposed to a sql \n> type where its size could potentially change i.e varchar)\n\nYes a vacuum is still necessary. The type doesn't really matter. Postgres\neffectively does a delete and insert on all updates.\n\n \n> 2) After several updates/deletes to a table, if I truncate \n> it, would it be necessary to run vacuum in order to reclaim the space?\n\nNo a vacuum is not necessary after a truncate because the whole data file is\ndeleted once a truncate commits. There aren't any dead rows because there\naren't any rows.\n\nDave\n\n", "msg_date": "Thu, 26 Jul 2007 17:59:58 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update, truncate and vacuum" } ]
[ { "msg_contents": "1) Yes\n\nAll rows are treated the same, there are no in place updates.\n\n2) No\n\nTruncate recreates the object as a new one, releasing the space held by the old one.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tScott Feldstein [mailto:[email protected]]\nSent:\tThursday, July 26, 2007 06:44 PM Eastern Standard Time\nTo:\[email protected]\nSubject:\t[PERFORM] update, truncate and vacuum\n\nHi,\nI have a couple questions about how update, truncate and vacuum would \nwork together.\n\n1) If I update a table foo (id int, value numeric (20, 6))\nwith\nupdate foo set value = 100 where id = 1\n\nWould a vacuum be necessary after this type of operation since the \nupdated value is a numeric? (as opposed to a sql type where its size \ncould potentially change i.e varchar)\n\n2) After several updates/deletes to a table, if I truncate it, would \nit be necessary to run vacuum in order to reclaim the space?\n\nthanks,\nScott\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly", "msg_date": "Thu, 26 Jul 2007 19:17:32 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update, truncate and vacuum" } ]
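A small experiment makes both answers concrete; the table, column types and row count below are made up for illustration.

CREATE TABLE foo (id int PRIMARY KEY, value numeric(20,6));
INSERT INTO foo SELECT g, 0 FROM generate_series(1, 100000) AS g;

-- MVCC: the UPDATE writes a new version of every row, even though the numeric
-- value would fit in the old one
UPDATE foo SET value = 100;
VACUUM VERBOSE foo;    -- reports on the order of 100000 removable dead row versions

-- TRUNCATE swaps in a fresh, empty relation file, so there is nothing left to vacuum
TRUNCATE foo;

Note that plain VACUUM makes dead space reusable but rarely shrinks the data file; TRUNCATE, or in extreme cases VACUUM FULL or CLUSTER, is what actually hands space back to the operating system.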
[ { "msg_contents": "Dear list,\n\n\nI am having problems selecting the 10 most recent rows from a large\ntable (4.5M rows), sorted by a date column of that table. The large\ntable has a column user_id which either should match a given user_id,\nor should match the column contact_id in a correlated table where the\nuser_id of that correlated table must match the given user_id.\n\nThe user_id values in the large table are distributed in such a way\nthat in the majority of cases for a given user_id 10 matching rows can\nbe found very early when looking at the table sorted by the date -\npropably within the first 1%. Sometimes the given user_id however\nwould match any rows only very far towards the end of the sorted large\ntable, or not at all.\n\nThe query works fine for the common cases when matching rows are found\nearly in the sorted large table, like this:\n\ntestdb=# EXPLAIN ANALYZE\nSELECT * FROM large_table lt\nLEFT JOIN relationships r ON lt.user_id=r.contact_id\nWHERE r.user_id = 55555 OR lt.user_id = 55555\nORDER BY lt.created_at DESC LIMIT 10;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..33809.31 rows=10 width=646) (actual time=0.088..69.448 rows=10 loops=1)\n -> Nested Loop Left Join (cost=0.00..156983372.66 rows=46432 width=646) (actual time=0.082..69.393 rows=10 loops=1)\n Filter: ((r.user_id = 55555) OR (lt.user_id = 55555))\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=0.00..914814.94 rows=4382838 width=622) (actual time=0.028..0.067 rows=13 loops=1)\n -> Index Scan using relationships_contact_id_index on relationships r (cost=0.00..35.33 rows=16 width=24) (actual time=0.017..2.722 rows=775 loops=13)\n Index Cond: (lt.user_id = r.contact_id)\n Total runtime: 69.640 ms\n\n\nbut for the following user_id there are 3M rows in the large table\nwhich are more recent then the 10th matching one. The query then does\nnot perform so well:\n\n\ntestdb=# EXPLAIN ANALYZE\nSELECT * FROM large_table lt\nLEFT JOIN relationships r ON lt.user_id=r.contact_id\nWHERE r.user_id = 12345 OR lt.user_id = 12345\nORDER BY lt.created_at DESC LIMIT 10;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..33809.31 rows=10 width=646) (actual time=203454.353..425978.718 rows=10 loops=1)\n -> Nested Loop Left Join (cost=0.00..156983372.66 rows=46432 width=646) (actual time=203454.347..425978.662 rows=10 loops=1)\n Filter: ((r.user_id = 12345) OR (lt.user_id = 12345))\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=0.00..914814.94 rows=4382838 width=622) (actual time=0.031..78386.769 rows=3017547 loops=1)\n -> Index Scan using relationships_contact_id_index on relationships r (cost=0.00..35.33 rows=16 width=24) (actual time=0.006..0.060 rows=18 loops=3017547)\n Index Cond: (lt.user_id = r.contact_id)\n Total runtime: 425978.903 ms\n\n\n\nWhen split it up into the two following queries it performs much\nbetter for that user_id. 
Since the results of the two could be\ncombined into the desired result, I'm assuming it could also be done\nefficiently within one query, if only a better plan would be used.\n\n\ntestdb=# EXPLAIN ANALYZE\nSELECT * FROM large_table lt\nWHERE lt.user_id = 12345\nORDER BY lt.created_at DESC LIMIT 10;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..33.57 rows=10 width=622) (actual time=64.030..71.720 rows=10 loops=1)\n -> Index Scan Backward using large_user_id_created_at_index on large_table lt (cost=0.00..7243.59 rows=2158 width=622) (actual time=64.023..71.662 rows=10 loops=1)\n Index Cond: (user_id = 12345)\n Total runtime: 71.826 ms\n\n\ntestdb=# EXPLAIN ANALYZE\nSELECT * FROM large_table lt\nWHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\nORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6902.52..6902.54 rows=10 width=622) (actual time=0.232..0.262 rows=4 loops=1)\n -> Sort (cost=6902.52..6905.57 rows=1220 width=622) (actual time=0.225..0.237 rows=4 loops=1)\n Sort Key: lt.created_at\n -> Nested Loop (cost=42.78..6839.98 rows=1220 width=622) (actual time=0.090..0.185 rows=4 loops=1)\n -> HashAggregate (cost=42.78..42.79 rows=1 width=4) (actual time=0.059..0.062 rows=1 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.34..42.75 rows=11 width=4) (actual time=0.038..0.041 rows=1 loops=1)\n Recheck Cond: (user_id = 12345)\n -> Bitmap Index Scan on relationships_user_id_index (cost=0.00..4.34 rows=11 width=0) (actual time=0.027..0.027 rows=1 loops=1)\n Index Cond: (user_id = 12345)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6768.48 rows=2297 width=622) (actual time=0.020..0.087 rows=4 loops=1)\n Index Cond: (lt.user_id = relationships.contact_id)\n Total runtime: 0.439 ms\n\n\n\n\nI'm not very experienced reading query plans and don't know how to go\nabout this from here - is it theoretically possible to have a query\nthat performs well with the given data in both cases or is there a\nconceptual problem?\n\nThe database was freshly imported and ANALYZEd before running the\nabove tests.\n\nI also tried the following for every involved column: increase\nstatistics target, analyze the table, explain analyze the slow query,\nbut the plan never changed.\n\nThe relevant schema and indices portions are:\n\ntestdb=# \\d large_table\n Table \"public.large_table\"\n Column | Type | Modifiers\n-------------+-----------------------------+--------------------------------------------------------\n id | integer | not null default nextval('large_id_seq'::regclass)\n user_id | integer | not null\n created_at | timestamp without time zone |\nIndexes:\n \"large_pkey\" PRIMARY KEY, btree (id)\n \"large_created_at_index\" btree (created_at)\n \"large_user_id_created_at_index\" btree (user_id, created_at)\n\n\ntestdb=# \\d relationships\n Table \"public.relationships\"\n Column | Type | Modifiers\n------------+-----------------------------+------------------------------------------------------------\n id | integer | not null default nextval('relationships_id_seq'::regclass)\n user_id | integer |\n contact_id | integer |\nIndexes:\n \"relationships_pkey\" PRIMARY KEY, btree (id)\n 
\"relationships_contact_id_index\" btree (contact_id)\n \"relationships_user_id_index\" btree (user_id, contact_id)\n\n\ntestdb=# select tablename, attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename in ('large_table', 'relationships');\n tablename | attname | null_frac | avg_width | n_distinct | correlation\n-----------------+-------------+------------+-----------+------------+-------------\n relationships | id | 0 | 4 | -1 | 0.252015\n relationships | user_id | 0 | 4 | 3872 | 0.12848\n relationships | contact_id | 0 | 4 | 3592 | 0.6099\n large_table | id | 0 | 4 | -1 | 0.980048\n large_table | user_id | 0 | 4 | 1908 | 0.527973\n large_table | created_at | 0 | 8 | 87636 | 0.973318\n\n\nselect version();\n version\n-----------------------------------------------------------------------------------------------\n PostgreSQL 8.2.4 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)\n(1 row)\n\n\n\n\ngrateful for any advice, Til\n", "msg_date": "Fri, 27 Jul 2007 19:27:03 +0200", "msg_from": "Tilmann Singer <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query with backwards index scan" }, { "msg_contents": "Tilmann Singer skrev:\n\n> The query works fine for the common cases when matching rows are found\n> early in the sorted large table, like this:\n> \n> testdb=# EXPLAIN ANALYZE\n> SELECT * FROM large_table lt\n> LEFT JOIN relationships r ON lt.user_id=r.contact_id\n> WHERE r.user_id = 55555 OR lt.user_id = 55555\n> ORDER BY lt.created_at DESC LIMIT 10;\n> QUERY PLAN \n> but for the following user_id there are 3M rows in the large table\n> which are more recent then the 10th matching one. The query then does\n> not perform so well:\n> \n> \n> testdb=# EXPLAIN ANALYZE\n> SELECT * FROM large_table lt\n> LEFT JOIN relationships r ON lt.user_id=r.contact_id\n> WHERE r.user_id = 12345 OR lt.user_id = 12345\n> ORDER BY lt.created_at DESC LIMIT 10;\n> QUERY PLAN \n> When split it up into the two following queries it performs much\n> better for that user_id. Since the results of the two could be\n> combined into the desired result, I'm assuming it could also be done\n> efficiently within one query, if only a better plan would be used.\n> \n> \n> testdb=# EXPLAIN ANALYZE\n> SELECT * FROM large_table lt\n> WHERE lt.user_id = 12345\n> ORDER BY lt.created_at DESC LIMIT 10;\n> QUERY PLAN \n> testdb=# EXPLAIN ANALYZE\n> SELECT * FROM large_table lt\n> WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n> ORDER BY created_at DESC LIMIT 10;\n> QUERY PLAN \n> I'm not very experienced reading query plans and don't know how to go\n> about this from here - is it theoretically possible to have a query\n> that performs well with the given data in both cases or is there a\n> conceptual problem?\n\nHow does the \"obvious\" UNION query do - ie:\n\nSELECT * FROM (\nSELECT * FROM large_table lt\nWHERE lt.user_id = 12345\n\nUNION\n\nSELECT * FROM large_table lt\nWHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n) q\n\nORDER BY created_at DESC LIMIT 10;\n\n?\n\nHow about\n\nSELECT * FROM large_table lt\nWHERE lt.user_id = 12345 OR user_id IN (SELECT contact_id FROM\nrelationships WHERE user_id=12345)\nORDER BY created_at DESC LIMIT 10;\n\n?\n\nI am missing a unique constraint on (user_id, contact_id) - otherwise\nthe subselect is not equivalent to the join.\n\nProbably you also should have foreign key constraints on\nrelationships.user_id and relationships.contact_id. 
These are unlikely\nto affect performance though, in my experience.\n\nIt might be good to know whether contact_id = user_id is possible -\nsince this would rule out the possibility of a row satisfying both\nbranches of the union.\n\nNis\n\n", "msg_date": "Fri, 27 Jul 2007 20:28:52 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "* Nis J�rgensen <[email protected]> [20070727 20:31]:\n> How does the \"obvious\" UNION query do - ie:\n> \n> SELECT * FROM (\n> SELECT * FROM large_table lt\n> WHERE lt.user_id = 12345\n> \n> UNION\n> \n> SELECT * FROM large_table lt\n> WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n> ) q\n> \n> ORDER BY created_at DESC LIMIT 10;\n\nGreat for the user with little data:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM (\nSELECT * FROM large_table lt\nWHERE lt.user_id = 12345\nUNION\nSELECT * FROM large_table lt\nWHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n) q\nORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=14673.77..14673.80 rows=10 width=3140) (actual time=133.877..133.946 rows=10 loops=1)\n -> Sort (cost=14673.77..14682.22 rows=3378 width=3140) (actual time=133.870..133.894 rows=10 loops=1)\n Sort Key: q.created_at\n -> Unique (cost=14315.34..14442.01 rows=3378 width=622) (actual time=133.344..133.705 rows=38 loops=1)\n -> Sort (cost=14315.34..14323.78 rows=3378 width=622) (actual time=133.337..133.432 rows=38 loops=1)\n Sort Key: id, user_id, plaze_id, device, started_at, updated_at, status, \"type\", duration, permission, created_at, mac_address, subnet, msc\n -> Append (cost=0.00..14117.35 rows=3378 width=622) (actual time=39.144..133.143 rows=38 loops=1)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..7243.59 rows=2158 width=622) (actual time=39.138..109.831 rows=34 loops=1)\n Index Cond: (user_id = 12345)\n -> Nested Loop (cost=42.78..6839.98 rows=1220 width=622) (actual time=14.859..23.112 rows=4 loops=1)\n -> HashAggregate (cost=42.78..42.79 rows=1 width=4) (actual time=8.092..8.095 rows=1 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.34..42.75 rows=11 width=4) (actual time=8.067..8.070 rows=1 loops=1)\n Recheck Cond: (user_id = 12345)\n -> Bitmap Index Scan on relationships_user_id_index (cost=0.00..4.34 rows=11 width=0) (actual time=8.057..8.057 rows=1 loops=1)\n Index Cond: (user_id = 12345)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6768.48 rows=2297 width=622) (actual time=6.751..14.970 rows=4 loops=1)\n Index Cond: (lt.user_id = relationships.contact_id)\n Total runtime: 134.220 ms\n\n\nNot so great for the user with many early matches:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM (\nSELECT * FROM large_table lt\nWHERE lt.user_id = 55555\nUNION\nSELECT * FROM large_table lt\nWHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=55555)\n) q\nORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=14673.77..14673.80 rows=10 width=3140) (actual 
time=3326.304..3326.367 rows=10 loops=1)\n -> Sort (cost=14673.77..14682.22 rows=3378 width=3140) (actual time=3326.297..3326.318 rows=10 loops=1)\n Sort Key: q.created_at\n -> Unique (cost=14315.34..14442.01 rows=3378 width=622) (actual time=2413.070..3019.385 rows=69757 loops=1)\n -> Sort (cost=14315.34..14323.78 rows=3378 width=622) (actual time=2413.062..2590.354 rows=69757 loops=1)\n Sort Key: id, user_id, plaze_id, device, started_at, updated_at, status, \"type\", duration, permission, created_at, mac_address, subnet, msc\n -> Append (cost=0.00..14117.35 rows=3378 width=622) (actual time=0.067..1911.626 rows=69757 loops=1)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..7243.59 rows=2158 width=622) (actual time=0.062..3.440 rows=739 loops=1)\n Index Cond: (user_id = 55555)\n -> Nested Loop (cost=42.78..6839.98 rows=1220 width=622) (actual time=0.451..1557.901 rows=69018 loops=1)\n -> HashAggregate (cost=42.78..42.79 rows=1 width=4) (actual time=0.404..0.580 rows=40 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.34..42.75 rows=11 width=4) (actual time=0.075..0.241 rows=40 loops=1)\n Recheck Cond: (user_id = 55555)\n -> Bitmap Index Scan on relationships_user_id_index (cost=0.00..4.34 rows=11 width=0) (actual time=0.053..0.053 rows=40 loops=1)\n Index Cond: (user_id = 55555)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6768.48 rows=2297 width=622) (actual time=0.048..28.033 rows=1725 loops=40)\n Index Cond: (lt.user_id = relationships.contact_id)\n Total runtime: 3327.744 ms\n\n\n> How about\n> \n> SELECT * FROM large_table lt\n> WHERE lt.user_id = 12345 OR user_id IN (SELECT contact_id FROM\n> relationships WHERE user_id=12345)\n> ORDER BY created_at DESC LIMIT 10;\n\nNot good for the one with few matches:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\nWHERE lt.user_id = 12345 OR user_id IN (SELECT contact_id FROM\nrelationships WHERE user_id=12345)\nORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=42.78..47.05 rows=10 width=622) (actual time=38360.090..62008.336 rows=10 loops=1)\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=42.78..935924.84 rows=2192498 width=622) (actual time=38360.084..62008.269 rows=10 loops=1)\n Filter: ((user_id = 12345) OR (hashed subplan))\n SubPlan\n -> Bitmap Heap Scan on relationships (cost=4.34..42.75 rows=11 width=4) (actual time=0.031..0.034 rows=1 loops=1)\n Recheck Cond: (user_id = 12345)\n -> Bitmap Index Scan on relationships_user_id_index (cost=0.00..4.34 rows=11 width=0) (actual time=0.020..0.020 rows=1 loops=1)\n Index Cond: (user_id = 12345)\n Total runtime: 62008.500 ms\n\n\nGood for the one with many early matches:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\nWHERE lt.user_id = 55555 OR user_id IN (SELECT contact_id FROM\nrelationships WHERE user_id=55555)\nORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=42.78..47.05 rows=10 width=622) (actual time=0.473..0.572 rows=10 loops=1)\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=42.78..935924.84 rows=2192498 width=622) (actual time=0.467..0.512 rows=10 
loops=1)\n Filter: ((user_id = 55555) OR (hashed subplan))\n SubPlan\n -> Bitmap Heap Scan on relationships (cost=4.34..42.75 rows=11 width=4) (actual time=0.070..0.264 rows=40 loops=1)\n Recheck Cond: (user_id = 55555)\n -> Bitmap Index Scan on relationships_user_id_index (cost=0.00..4.34 rows=11 width=0) (actual time=0.047..0.047 rows=40 loops=1)\n Index Cond: (user_id = 55555)\n Total runtime: 0.710 ms\n\n\n> I am missing a unique constraint on (user_id, contact_id) - otherwise\n> the subselect is not equivalent to the join.\n> \n> It might be good to know whether contact_id = user_id is possible -\n> since this would rule out the possibility of a row satisfying both\n> branches of the union.\n\nThanks for the hint - this is a rails project with the validations\nstarting out in the application only, so sometimes we forget to also\ncheck the data integrity in the database. There were in fact a couple\nof duplicate user_id/contact_id pairs and a couple of rows where\nuser_id=contact_id although those shouldn't be allowed.\n\nI added a unique index on (user_id, contact_id) and a check constraint\nto prevent (user_id=contact_id), vacuumed, and ran the 4 above query\nvariants again - but it did not result in changed plans or substantial\ndifferences in execution time.\n\n> Probably you also should have foreign key constraints on\n> relationships.user_id and relationships.contact_id. These are unlikely\n> to affect performance though, in my experience.\n\nThey are there, I just removed them from the schema output before\nposting, also assuming that they are not relevant for join\nperformance.\n\n\nTil\n", "msg_date": "Sat, 28 Jul 2007 14:52:36 +0200", "msg_from": "Tilmann Singer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with backwards index scan" } ]
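The plans above show the underlying tension: the 'own rows' half of the OR is cheap through the (user_id, created_at) index, while the combined query is only cheap when matching rows happen to sit near the end of created_at, so no single plan suits both kinds of user. One direction, which the follow-up discussion pursues, is to push ORDER BY and LIMIT into each branch and merge the two small result sets. A sketch follows; it relies on the check constraint added above (a user is never their own contact), which keeps the two branches disjoint so UNION ALL introduces no duplicates.

SELECT *
  FROM (
        (SELECT * FROM large_table
          WHERE user_id = 55555
          ORDER BY created_at DESC LIMIT 10)
        UNION ALL
        (SELECT lt.* FROM large_table lt
          WHERE lt.user_id IN (SELECT contact_id
                                 FROM relationships
                                WHERE user_id = 55555)
          ORDER BY lt.created_at DESC LIMIT 10)
       ) AS q
 ORDER BY created_at DESC
 LIMIT 10;

This only helps insofar as each branch is fast on its own: the first branch always is, but for a user with many busy contacts the second branch still reads every contact's rows before its LIMIT applies, which is exactly the remaining problem the follow-up thread wrestles with.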
[ { "msg_contents": "Postgres 8.2.4.\n\nWe have a large table, let's call it \"foo\", whereby an automated process\nperiodically inserts many (hundreds of thousands or millions) rows into it\nat a time. It's essentially INSERT INTO foo SELECT FROM <another table>\nWHERE <some conditions>. Recently, for whatever reason, the query started\nto run out of memory. This happened on the order of 50 times before it was\nnoticed and the process was stopped. (Admittedly, more investigation needs\nto go into the OOM problem... )\n\nNow autovacuum promptly kicked in trying to clean up this mess, however it\ncouldn't keep up at the rate that dead tuples were being generated. I'm not\nsure if it got into a weird state. After a few days, long after the\ninserting process was stopped, we decided to abort the vacuum (which we\nweren't convinced was doing anything), then start a manual vacuum with a\nhigher vacuum_cost_limit to get things cleaned up quicker.\n\nAfter 28 hours, here was the output of vacuum verbose:\n\n# VACUUM VERBOSE foo;\nINFO: vacuuming \"public.foo\"\nINFO: scanned index \"foo_pkey\" to remove 44739062 row versions\nDETAIL: CPU 5.74s/26.09u sec elapsed 529.57 sec.\nINFO: scanned index \"foo_1\" to remove 44739062 row versions\nDETAIL: CPU 760.09s/619.83u sec elapsed 56929.54 sec.\nINFO: scanned index \"foo_2\" to remove 44739062 row versions\nDETAIL: CPU 49.35s/99.57u sec elapsed 4410.74 sec.\nINFO: \"foo\": removed 44739062 row versions in 508399 pages\nDETAIL: CPU 47.35s/12.88u sec elapsed 3985.92 sec.\nINFO: scanned index \"foo_pkey\" to remove 32534234 row versions\nDETAIL: CPU 22.05s/32.51u sec elapsed 2259.05 sec.\n\nThe vacuum then just sat there. What I can't understand is why it went back\nfor a second pass of the pkey index? There was nothing writing to the table\nonce the vacuum began. Is this behaviour expected? Are these times\nreasonable for a vacuum (on a busy system, mind you)?\n\nWe have since aborted the vacuum and truncated the table. We're now working\non the root OOM problem, which is easier said than done...\n\nSteve\n\nPostgres 8.2.4.\n \nWe have a large table, let's call it \"foo\", whereby an automated process periodically inserts many (hundreds of thousands or millions) rows into it at a time.  It's essentially INSERT INTO foo SELECT FROM <another table> WHERE <some conditions>.  Recently, for whatever reason, the query started to run out of memory.  This happened on the order of 50 times before it was noticed and the process was stopped.  (Admittedly, more investigation needs to go into the OOM problem... )\n\n \nNow autovacuum promptly kicked in trying to clean up this mess, however it couldn't keep up at the rate that dead tuples were being generated.  I'm not sure if it got into a weird state.  
After a few days, long after the inserting process was stopped, we decided to abort the vacuum (which we weren't convinced was doing anything), then start a manual vacuum with a higher vacuum_cost_limit to get things cleaned up quicker.\n\n \nAfter 28 hours, here was the output of vacuum verbose:\n \n# VACUUM VERBOSE foo;INFO:  vacuuming \"public.foo\"INFO:  scanned index \"foo_pkey\" to remove 44739062 row versionsDETAIL:  CPU 5.74s/26.09u sec elapsed 529.57 sec.INFO:  scanned index \"foo_1\" to remove 44739062 row versions\nDETAIL:  CPU 760.09s/619.83u sec elapsed 56929.54 sec.INFO:  scanned index \"foo_2\" to remove 44739062 row versionsDETAIL:  CPU 49.35s/99.57u sec elapsed 4410.74 sec.INFO:  \"foo\": removed 44739062 row versions in 508399 pages\nDETAIL:  CPU 47.35s/12.88u sec elapsed 3985.92 sec.INFO:  scanned index \"foo_pkey\" to remove 32534234 row versionsDETAIL:  CPU 22.05s/32.51u sec elapsed 2259.05 sec. \nThe vacuum then just sat there.  What I can't understand is why it went back for a second pass of the pkey index?  There was nothing writing to the table once the vacuum began.  Is this behaviour expected?  Are these times reasonable for a vacuum (on a busy system, mind you)?\n\n \nWe have since aborted the vacuum and truncated the table.  We're now working on the root OOM problem, which is easier said than done...\n \nSteve", "msg_date": "Fri, 27 Jul 2007 17:32:11 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum looping?" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> The vacuum then just sat there. What I can't understand is why it went back\n> for a second pass of the pkey index? There was nothing writing to the table\n> once the vacuum began. Is this behaviour expected?\n\nYes (hint: the numbers tell me what your maintenance_work_mem setting is).\nYou should have left it alone, probably, though there seems to be\nsomething funny about your foo_1 index --- why was that so much slower\nthan the others for the first pass?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Jul 2007 10:37:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum looping? " }, { "msg_contents": "On Fri, Jul 27, 2007 at 05:32:11PM -0400, Steven Flatt wrote:\n> weren't convinced was doing anything), then start a manual vacuum with a\n> higher vacuum_cost_limit to get things cleaned up quicker.\n\nWhat are your vacuum_cost_* settings? If you set those too aggressively\nyou'll be in big trouble.\n\nThe second pass on the vacuum means that maintenance_work_memory isn't\nlarge enough.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Sat, 28 Jul 2007 11:36:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum looping?" }, { "msg_contents": "On 7/28/07, Jim C. Nasby <[email protected]> wrote:\n>\n> What are your vacuum_cost_* settings? 
If you set those too aggressively\n> you'll be in big trouble.\n\n\n autovacuum_vacuum_cost_delay = 100\nautovacuum_vacuum_cost_limit = 200\n\nThese are generally fine, autovacuum keeps up, and there is minimal impact\non the system.\n\nvacuum_cost_delay = 100\nvacuum_cost_limit = 1000\n\nWe set this cost_limit a little higher so that, in the few cases where we\nhave to intervene manually, vacuum runs faster.\n\n\nThe second pass on the vacuum means that maintenance_work_memory isn't\n> large enough.\n\n\nmaintenance_work_mem is set to 256MB and I don't think we want to make this\nany bigger by default. Like I say above, generally autovacuum runs fine.\nIf we do run into this situation again (lots of OOM queries and lots to\ncleanup), we'll probably increase maintenance_work_mem locally and run a\nvacuum in that session.\n\nGood to know that vacuum was doing the right thing.\n\nThanks,\nSteve\n\nOn 7/28/07, Jim C. Nasby <[email protected]> wrote:\nWhat are your vacuum_cost_* settings? If you set those too aggressivelyyou'll be in big trouble.\n \n\nautovacuum_vacuum_cost_delay = 100\nautovacuum_vacuum_cost_limit = 200\n \nThese are generally fine, autovacuum keeps up, and there is minimal impact on the system.\n \nvacuum_cost_delay = 100\nvacuum_cost_limit = 1000\n \nWe set this cost_limit a little higher so that, in the few cases where we have to intervene manually, vacuum runs faster.\n \nThe second pass on the vacuum means that maintenance_work_memory isn'tlarge enough.\n \nmaintenance_work_mem is set to 256MB and I don't think we want to make this any bigger by default.  Like I say above, generally autovacuum runs fine.  If we do run into this situation again (lots of OOM queries and lots to cleanup), we'll probably increase maintenance_work_mem locally and run a vacuum in that session.\n\n \nGood to know that vacuum was doing the right thing.\n \nThanks,\nSteve", "msg_date": "Mon, 30 Jul 2007 12:04:08 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum looping?" }, { "msg_contents": "On Jul 30, 2007, at 9:04 AM, Steven Flatt wrote:\n> On 7/28/07, Jim C. Nasby <[email protected]> wrote: What are your \n> vacuum_cost_* settings? If you set those too aggressively\n> you'll be in big trouble.\n>\n> autovacuum_vacuum_cost_delay = 100\n\nWow, that's *really* high. I don't think I've ever set it higher than \n25. I'd cut it way back.\n\n> autovacuum_vacuum_cost_limit = 200\n>\n> These are generally fine, autovacuum keeps up, and there is minimal \n> impact on the system.\n>\n> vacuum_cost_delay = 100\n> vacuum_cost_limit = 1000\n>\n> We set this cost_limit a little higher so that, in the few cases \n> where we have to intervene manually, vacuum runs faster.\n\nIIRC, when the cost delay was initially introduced (8.0), someone did \ntesting and decided that the cost limit of 200 was optimal, so I \nwouldn't go changing it like that without good reason.\n\nNormally, I'll use a delay of 10ms on good disk hardware, and 20ms on \nslower hardware.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Mon, 30 Jul 2007 18:46:37 -0700", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum looping?" } ]
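Tom's hint can be made explicit with a little arithmetic: vacuum remembers dead row pointers at 6 bytes apiece, so 256 MB of maintenance_work_mem holds about 268,435,456 / 6, roughly 44.7 million of them, matching the 44,739,062 row versions removed per index pass above; once that list fills, every index must be scanned again. For a one-off cleanup of this size it is usually enough to raise the limit only in the session doing the work; the values below are illustrative, and the unit suffixes are accepted from 8.2 onward.

-- per-session settings for a large manual cleanup; server defaults stay untouched
SET maintenance_work_mem = '1GB';   -- room for roughly 180 million dead-tuple pointers
SET vacuum_cost_delay = 10;         -- milliseconds; 0 disables cost-based throttling
VACUUM VERBOSE foo;

Here foo stands for whichever bloated table is being cleaned up; with enough memory to hold all the dead tuples at once, each index is scanned a single time instead of repeatedly.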
[ { "msg_contents": "Tilmann Singer <[email protected]> wrote ..\n> * Nis J�rgensen <[email protected]> [20070727 20:31]:\n> > How does the \"obvious\" UNION query do - ie:\n> > \n> > SELECT * FROM (\n> > SELECT * FROM large_table lt\n> > WHERE lt.user_id = 12345\n> > \n> > UNION\n> > \n> > SELECT * FROM large_table lt\n> > WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n> > ) q\n> > \n> > ORDER BY created_at DESC LIMIT 10;\n\nLet's try putting the sort/limit in each piece of the UNION to speed them up separately.\n\nSELECT * FROM (\n (SELECT * FROM large_table lt\n WHERE lt.user_id = 12345\n ORDER BY created_at DESC LIMIT 10) AS q1\n UNION\n (SELECT * FROM large_table lt\n WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n ORDER BY created_at DESC LIMIT 10) AS q2\nORDER BY created_at DESC LIMIT 10;\n", "msg_date": "Sat, 28 Jul 2007 12:03:52 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "* [email protected] <[email protected]> [20070728 21:05]:\n> Let's try putting the sort/limit in each piece of the UNION to speed them up separately.\n> \n> SELECT * FROM (\n> (SELECT * FROM large_table lt\n> WHERE lt.user_id = 12345\n> ORDER BY created_at DESC LIMIT 10) AS q1\n> UNION\n> (SELECT * FROM large_table lt\n> WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n> ORDER BY created_at DESC LIMIT 10) AS q2\n> ORDER BY created_at DESC LIMIT 10;\n\nIt's not possible to use ORDER BY or LIMIT within unioned queries.\n\nhttp://www.postgresql.org/docs/8.2/static/sql-select.html#SQL-UNION\n\nWould that make sense at all given the way the postgresql planner\nworks? Does that work in other DB's?\n\n\nTil\n", "msg_date": "Sat, 28 Jul 2007 21:27:13 +0200", "msg_from": "Tilmann Singer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "Tilmann Singer wrote:\n> * [email protected] <[email protected]> [20070728 21:05]:\n>> Let's try putting the sort/limit in each piece of the UNION to speed them up separately.\n>>\n>> SELECT * FROM (\n>> (SELECT * FROM large_table lt\n>> WHERE lt.user_id = 12345\n>> ORDER BY created_at DESC LIMIT 10) AS q1\n>> UNION\n>> (SELECT * FROM large_table lt\n>> WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n>> ORDER BY created_at DESC LIMIT 10) AS q2\n>> ORDER BY created_at DESC LIMIT 10;\n> \n> It's not possible to use ORDER BY or LIMIT within unioned queries.\n> \n> http://www.postgresql.org/docs/8.2/static/sql-select.html#SQL-UNION\n\n\"ORDER BY and LIMIT can be attached to a subexpression if it is enclosed in parentheses\"\n", "msg_date": "Sat, 28 Jul 2007 20:40:12 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "Tilmann Singer wrote:\n> * [email protected] <[email protected]> [20070728 21:05]:\n>> Let's try putting the sort/limit in each piece of the UNION to speed them up separately.\n>>\n>> SELECT * FROM (\n>> (SELECT * FROM large_table lt\n>> WHERE lt.user_id = 12345\n>> ORDER BY created_at DESC LIMIT 10) AS q1\n>> UNION\n>> (SELECT * FROM large_table lt\n>> WHERE user_id IN (SELECT contact_id FROM relationships WHERE user_id=12345)\n>> ORDER BY created_at DESC LIMIT 10) AS q2\n>> ORDER BY created_at DESC LIMIT 10;\n> \n> It's not possible to use ORDER BY or LIMIT within unioned queries.\n> \n> 
http://www.postgresql.org/docs/8.2/static/sql-select.html#SQL-UNION\n\nIf I'm reading this documentation correctly, it *is* possible, as long as they're inside of a sub-select, as in this case.\n\nCraig\n", "msg_date": "Sat, 28 Jul 2007 13:06:02 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "* Craig James <[email protected]> [20070728 22:00]:\n> >>SELECT * FROM (\n> >> (SELECT * FROM large_table lt\n> >> WHERE lt.user_id = 12345\n> >> ORDER BY created_at DESC LIMIT 10) AS q1\n> >> UNION\n> >> (SELECT * FROM large_table lt\n> >> WHERE user_id IN (SELECT contact_id FROM relationships WHERE \n> >> user_id=12345)\n> >> ORDER BY created_at DESC LIMIT 10) AS q2\n> >>ORDER BY created_at DESC LIMIT 10;\n> >\n> >It's not possible to use ORDER BY or LIMIT within unioned queries.\n> >\n> >http://www.postgresql.org/docs/8.2/static/sql-select.html#SQL-UNION\n> \n> If I'm reading this documentation correctly, it *is* possible, as long as \n> they're inside of a sub-select, as in this case.\n\nI completely overlooked that obvious note in the documentation,\nsorry. I tried it only with the aliases which fooled me into thinking\nthat doesn't work at all:\n\ntestdb=# (select 1 limit 1) as q1 union (select 2) as q2;\nERROR: syntax error at or near \"as\"\nLINE 1: (select 1 limit 1) as q1 union (select 2) as q2;\n ^\nbut this works:\n\ntestdb=# (select 1 limit 1) union (select 2);\n ?column?\n----------\n 1\n 2\n\n\nGreat - that works!\n\nWhat I didn't realize in the original post is that the problem\nactually seems to be how to retrieve the rows from large_table for the\ncorrelated relationships in an efficient way - the second of the two\nqueries that could be UNIONed.\n\nUsing a subselect is efficient for the user with few relationships and\nmatched rows at the end of the sorted large_table:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\n WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n user_id=12345)\n ORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6963.94..6963.96 rows=10 width=621) (actual time=94.598..94.629 rows=4 loops=1)\n -> Sort (cost=6963.94..6966.96 rows=1211 width=621) (actual time=94.592..94.602 rows=4 loops=1)\n Sort Key: lt.created_at\n -> Nested Loop (cost=39.52..6901.92 rows=1211 width=621) (actual time=85.670..94.547 rows=4 loops=1)\n -> HashAggregate (cost=39.52..39.53 rows=1 width=4) (actual time=23.549..23.552 rows=1 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.33..39.49 rows=10 width=4) (actual time=23.526..23.530 rows=1 loops=1)\n Recheck Cond: (user_id = 12345)\n -> Bitmap Index Scan on relationships_user_id_contact_id_index (cost=0.00..4.33 rows=10 width=0) (actual time=0.027..0.027 rows=1 loops=1)\n Index Cond: (user_id = 12345)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6834.04 rows=2268 width=621) (actual time=62.108..70.952 rows=4 loops=1)\n Index Cond: (lt.user_id = relationships.contact_id)\n Total runtime: 94.875 ms\n\n\nBut the subselect is not fast for the user with many relationships and\nmatched rows at the beginning of the sorted large_table:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\n WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n user_id=55555)\n ORDER BY 
created_at DESC LIMIT 10;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6963.94..6963.96 rows=10 width=621) (actual time=53187.349..53187.424 rows=10 loops=1)\n -> Sort (cost=6963.94..6966.96 rows=1211 width=621) (actual time=53187.341..53187.360 rows=10 loops=1)\n Sort Key: lt.created_at\n -> Nested Loop (cost=39.52..6901.92 rows=1211 width=621) (actual time=201.728..52673.800 rows=69018 loops=1)\n -> HashAggregate (cost=39.52..39.53 rows=1 width=4) (actual time=178.777..178.966 rows=40 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.33..39.49 rows=10 width=4) (actual time=47.049..178.560 rows=40 loops=1)\n Recheck Cond: (user_id = 55555)\n -> Bitmap Index Scan on relationships_user_id_contact_id_index (cost=0.00..4.33 rows=10 width=0) (actual time=28.721..28.721 rows=40 loops=1)\n Index Cond: (user_id = 55555)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6834.04 rows=2268 width=621) (actual time=21.994..1301.375 rows=1725 loops=40)\n Index Cond: (lt.user_id = relationships.contact_id)\n Total runtime: 53188.584 ms\n\n\n\nUsing a join now the query for matches for the user with little data is slow:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\n JOIN relationships r ON lt.user_id=r.contact_id WHERE\n r.user_id=12345\n ORDER BY lt.created_at DESC LIMIT 10;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..24116.65 rows=10 width=645) (actual time=100348.436..145552.633 rows=4 loops=1)\n -> Nested Loop (cost=0.00..28751864.52 rows=11922 width=645) (actual time=100348.429..145552.602 rows=4 loops=1)\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=0.00..824833.09 rows=4384343 width=621) (actual time=28.961..82448.167 rows=4384064 loops=1)\n -> Index Scan using relationships_user_id_contact_id_index on relationships r (cost=0.00..6.36 rows=1 width=24) (actual time=0.009..0.009 rows=0 loops=4384064)\n Index Cond: ((r.user_id = 12345) AND (lt.user_id = r.contact_id))\n Total runtime: 145552.809 ms\n\n\nAnd for the user with much data it's fast:\n\ntestdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\n JOIN relationships r ON lt.user_id=r.contact_id WHERE\n r.user_id=55555\n ORDER BY lt.created_at DESC LIMIT 10;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..24116.65 rows=10 width=645) (actual time=0.068..0.428 rows=10 loops=1)\n -> Nested Loop (cost=0.00..28751864.52 rows=11922 width=645) (actual time=0.063..0.376 rows=10 loops=1)\n -> Index Scan Backward using large_created_at_index on large_table lt (cost=0.00..824833.09 rows=4384343 width=621) (actual time=0.028..0.064 rows=13 loops=1)\n -> Index Scan using relationships_user_id_contact_id_index on relationships r (cost=0.00..6.36 rows=1 width=24) (actual time=0.010..0.013 rows=1 loops=13)\n Index Cond: ((r.user_id = 55555) AND (lt.user_id = r.contact_id))\n Total runtime: 0.609 ms\n\n\n\nAny ideas?\n\n\n\ntia, Til\n", "msg_date": "Sat, 28 Jul 2007 22:54:50 +0200", "msg_from": "Tilmann Singer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 
Slow query with backwards index scan" }, { "msg_contents": "Tilmann Singer skrev:\n\n> But the subselect is not fast for the user with many relationships and\n> matched rows at the beginning of the sorted large_table:\n> \n> testdb=# EXPLAIN ANALYZE SELECT * FROM large_table lt\n> WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n> user_id=55555)\n> ORDER BY created_at DESC LIMIT 10;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=6963.94..6963.96 rows=10 width=621) (actual time=53187.349..53187.424 rows=10 loops=1)\n> -> Sort (cost=6963.94..6966.96 rows=1211 width=621) (actual time=53187.341..53187.360 rows=10 loops=1)\n> Sort Key: lt.created_at\n> -> Nested Loop (cost=39.52..6901.92 rows=1211 width=621) (actual time=201.728..52673.800 rows=69018 loops=1)\n> -> HashAggregate (cost=39.52..39.53 rows=1 width=4) (actual time=178.777..178.966 rows=40 loops=1)\n> -> Bitmap Heap Scan on relationships (cost=4.33..39.49 rows=10 width=4) (actual time=47.049..178.560 rows=40 loops=1)\n> Recheck Cond: (user_id = 55555)\n> -> Bitmap Index Scan on relationships_user_id_contact_id_index (cost=0.00..4.33 rows=10 width=0) (actual time=28.721..28.721 rows=40 loops=1)\n> Index Cond: (user_id = 55555)\n> -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..6834.04 rows=2268 width=621) (actual time=21.994..1301.375 rows=1725 loops=40)\n> Index Cond: (lt.user_id = relationships.contact_id)\n> Total runtime: 53188.584 ms\n> \n> \n> \n> Using a join now the query for mat\n\n> Any ideas?\n\nIt seems to me the subselect plan would benefit quite a bit from not\nreturning all rows, but only the 10 latest for each user. I believe the\nproblem is similar to what is discussed for UNIONs here:\n\nhttp://groups.google.dk/group/pgsql.general/browse_thread/thread/4f74d7faa8a5a608/367f5052b1bbf1c8?lnk=st&q=postgresql+limit++union&rnum=1&hl=en#367f5052b1bbf1c8\n\nwhich got implemented here:\n\nhttp://groups.google.dk/group/pgsql.committers/browse_thread/thread/b1ac3c3026db096c/9b3e5bd2d612f565?lnk=st&q=postgresql+limit++union&rnum=26&hl=en#9b3e5bd2d612f565\n\nIt seems to me the planner in this case would actually need to push the\nlimit into the nested loop, since we want the plan to take advantage of\nthe index (using a backwards index scan). I am ready to be corrected though.\n\nYou could try this (quite hackish) attempt at forcing the query planner\nto use this plan:\n\nSELECT * FROM large_table lt\n WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n user_id=55555) AND created_at in (SELECT created_at FROM large_table\nlt2 WHERE lt2.user_id = lt.user_id ORDER BY created_at DESC limit 10)\n ORDER BY created_at DESC LIMIT 10;\n\nIf that doesn't work, you might have reached the point where you need to\nuse some kind of record-keeping system to keep track of which records to\nlook at.\n\n/Nis\n\n", "msg_date": "Mon, 30 Jul 2007 18:30:50 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "* Nis J�rgensen <[email protected]> [20070730 18:33]:\n> It seems to me the subselect plan would benefit quite a bit from not\n> returning all rows, but only the 10 latest for each user. 
I believe the\n> problem is similar to what is discussed for UNIONs here:\n> \n> http://groups.google.dk/group/pgsql.general/browse_thread/thread/4f74d7faa8a5a608/367f5052b1bbf1c8?lnk=st&q=postgresql+limit++union&rnum=1&hl=en#367f5052b1bbf1c8\n> \n> which got implemented here:\n> \n> http://groups.google.dk/group/pgsql.committers/browse_thread/thread/b1ac3c3026db096c/9b3e5bd2d612f565?lnk=st&q=postgresql+limit++union&rnum=26&hl=en#9b3e5bd2d612f565\n> \n> It seems to me the planner in this case would actually need to push the\n> limit into the nested loop, since we want the plan to take advantage of\n> the index (using a backwards index scan). I am ready to be corrected though.\n\nIf I understand that correctly than this means that it would benefit\nthe planning for something like\n\nSELECT FROM (q1 UNION ALL q2) ORDER BY ... LIMIT ...\n\nif any of q1 or q2 would satisfy the rows requested by limit early,\ninstead of planning q1 and q2 without having the limit of the outer\nquery an influence.\n\nUnfortunately I'm having problems making my q2 reasonably efficient in\nthe first place, even before UNIONing it.\n\n> You could try this (quite hackish) attempt at forcing the query planner\n> to use this plan:\n> \n> SELECT * FROM large_table lt\n> WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n> user_id=55555) AND created_at in (SELECT created_at FROM large_table\n> lt2 WHERE lt2.user_id = lt.user_id ORDER BY created_at DESC limit 10)\n> ORDER BY created_at DESC LIMIT 10;\n\nNo for the user with many matches at the beginning:\n\ntestdb=# EXPLAIN ANALYZE\nSELECT * FROM large_table lt\n WHERE user_id IN (SELECT contact_id FROM relationships WHERE\n user_id=55555) AND created_at IN (SELECT created_at FROM large_table lt2\n WHERE lt2.user_id = lt.user_id ORDER BY created_at DESC limit 10)\n ORDER BY created_at DESC LIMIT 10;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=45555.94..45555.97 rows=10 width=621) (actual time=70550.549..70550.616 rows=10 loops=1)\n -> Sort (cost=45555.94..45557.46 rows=605 width=621) (actual time=70550.542..70550.569 rows=10 loops=1)\n Sort Key: lt.created_at\n -> Nested Loop (cost=39.52..45527.99 rows=605 width=621) (actual time=2131.501..70548.313 rows=321 loops=1)\n -> HashAggregate (cost=39.52..39.53 rows=1 width=4) (actual time=0.406..0.615 rows=40 loops=1)\n -> Bitmap Heap Scan on relationships (cost=4.33..39.49 rows=10 width=4) (actual time=0.075..0.248 rows=40 loops=1)\n Recheck Cond: (user_id = 55555)\n -> Bitmap Index Scan on relationships_user_id_contact_id_index (cost=0.00..4.33 rows=10 width=0) (actual time=0.052..0.052 rows=40 loops=1)\n Index Cond: (user_id = 55555)\n -> Index Scan using large_user_id_started_at_index on large_table lt (cost=0.00..45474.29 rows=1134 width=621) (actual time=1762.067..1763.637 rows=8 loops=40)\n Index Cond: (lt.user_id = relationships.contact_id)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.00..34.04 rows=10 width=8) (actual time=0.048..0.147 rows=10 loops=69018)\n -> Index Scan Backward using large_user_id_created_at_index on large_table lt2 (cost=0.00..7721.24 rows=2268 width=8) (actual time=0.040..0.087 rows=10 loops=69018)\n Index Cond: (user_id = $0)\n Total runtime: 70550.847 ms\n\n\nThe same plan is generated for the user with few matches and executes\nvery fast.\n\n\n> If that doesn't work, you might have reached 
the point where you need to\n> use some kind of record-keeping system to keep track of which records to\n> look at.\n\nYes, I'm considering that unfortunately.\n\nSeeing however that there are 2 different queries which result in very\nefficient plans for one of the 2 different cases, but not the other,\nmakes me hope there is a way to tune the planner into always coming up\nwith the right plan. I'm not sure if I explained the problem well\nenough and will see if I can come up with a reproduction case with\ngenerated data.\n\n\nTil\n", "msg_date": "Mon, 30 Jul 2007 21:10:25 +0200", "msg_from": "Tilmann Singer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" }, { "msg_contents": "Tilmann Singer skrev:\n> * Nis Jørgensen <[email protected]> [20070730 18:33]:\n>> It seems to me the subselect plan would benefit quite a bit from not\n>> returning all rows, but only the 10 latest for each user. I believe the\n>> problem is similar to what is discussed for UNIONs here:\n>>\n>> http://groups.google.dk/group/pgsql.general/browse_thread/thread/4f74d7faa8a5a608/367f5052b1bbf1c8?lnk=st&q=postgresql+limit++union&rnum=1&hl=en#367f5052b1bbf1c8\n>>\n>> which got implemented here:\n>>\n>> http://groups.google.dk/group/pgsql.committers/browse_thread/thread/b1ac3c3026db096c/9b3e5bd2d612f565?lnk=st&q=postgresql+limit++union&rnum=26&hl=en#9b3e5bd2d612f565\n>>\n>> It seems to me the planner in this case would actually need to push the\n>> limit into the nested loop, since we want the plan to take advantage of\n>> the index (using a backwards index scan). I am ready to be corrected though.\n> \n> If I understand that correctly than this means that it would benefit\n> the planning for something like\n>\n> SELECT FROM (q1 UNION ALL q2) ORDER BY ... LIMIT ...\n> \n> if any of q1 or q2 would satisfy the rows requested by limit early,\n> instead of planning q1 and q2 without having the limit of the outer\n> query an influence.\n>\n> Unfortunately I'm having problems making my q2 reasonably efficient in\n> the first place, even before UNIONing it.\n\nThe \"second branch\" of your UNION is really equivalent to the following\npseudo_code:\n\ncontacts = SELECT contact_id FROM relations WHERE user_id = $id\n\nsql = SELECT * FROM (\n\tSELECT * FROM lt WHERE user_id = contacts[0]\n\tUNION ALL\n\tSELECT * FROM lt WHERE user_id = contacts[1]\n\t.\n\t.\n\t.\n) ORDER BY created_at LIMIT 10;\n\nCurrently, it seems the \"subqueries\" are fetching all rows.\n\nThus a plan which makes each of the subqueries aware of the LIMIT might\nbe able to improve performance. Unlike the UNION case, it seems this\nmeans making the subqueries aware that the plan is valid, not just\nchanging the cost estimate.\n\n\nHow does this \"imperative approach\" perform? I realize you probably\ndon't want to use this, but it would give us an idea whether we would be\nable to get the speed we need by forcing this plan on pg.\n\n>> If that doesn't work, you might have reached the point where you need to\n>> use some kind of record-keeping system to keep track of which records to\n>> look at.\n> \n> Yes, I'm considering that unfortunately.\n> \n> Seeing however that there are 2 different queries which result in very\n> efficient plans for one of the 2 different cases, but not the other,\n> makes me hope there is a way to tune the planner into always coming up\n> with the right plan. 
I'm not sure if I explained the problem well\n> enough and will see if I can come up with a reproduction case with\n> generated data.\n\nI think the problem is that Postgresql does not have the necessary\nstatistics to determine which of the two plans will perform well. There\nare basically two unknowns in the query:\n\n- How many uninteresting records do we have to scan through before get\nto the interesting ones (if using plan 1).\n- How many matching rows do we find in \"relations\"\n\nThe first one it is not surprising that pg cannot estimate.\n\nI am a little surprised that pg is not able to estimate the second one\nbetter. Increasing the statistics settings might help in getting a\ndifferent plan.\n\nI am also slightly surprised that the two equivalent formulations of the\nquery yield such vastly different query plans. In my experience, pg is\nquite good at coming up with similar query plans for equivalent queries.\n You might want to fiddle with DISTINCT or indexing to make sure that\nthey are indeed logically equivalent.\n\nAnyway, it seems likely that you will at some point run into the query\nfor many matches at the end of the table - which none of your plans will\nbe good at supplying. So perhaps you can just as well prepare for it now.\n\n\nNis\n\n", "msg_date": "Mon, 30 Jul 2007 22:05:14 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with backwards index scan" } ]
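A minimal sketch of the "push the LIMIT into each contact's scan" idea that the thread above converges on, written with LATERAL. This is an assumption about a newer server: LATERAL needs PostgreSQL 9.3 or later, while the posters were on 8.1/8.2, so none of them could have run it as written. Table, column and index names (large_table, relationships, user_id, contact_id, created_at) and the example id 55555 are taken from the thread; UNION ALL assumes a user never appears as their own contact, otherwise use UNION as in the original query.

SELECT *
FROM (
    (SELECT *
     FROM large_table
     WHERE user_id = 55555                      -- the user's own rows
     ORDER BY created_at DESC
     LIMIT 10)
    UNION ALL
    (SELECT recent.*
     FROM relationships r
     CROSS JOIN LATERAL (                       -- newest 10 rows per contact
        SELECT *
        FROM large_table lt
        WHERE lt.user_id = r.contact_id
        ORDER BY lt.created_at DESC
        LIMIT 10
     ) recent
     WHERE r.user_id = 55555)
) q
ORDER BY created_at DESC                        -- newest 10 overall
LIMIT 10;

Each LATERAL branch should be able to walk an index on (user_id, created_at) backwards and stop after ten rows per contact, which is roughly the behaviour Nis's pseudo-code asks for, instead of fetching every matching row before the final sort.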
[ { "msg_contents": "Friends, \n\n \n\n \n\n \n\n Who can help me? My SELECT in a base with 1 milion register,\nusing expression index = 6seconds.\n\n \n\n \n\nPlease, I don't know how to makes it better.\n\n \n\n \n\nThanks\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nFriends, \n \n \n \n            Who can help me? My SELECT\nin a base with 1 milion register, using  expression index = 6seconds…\n \n \nPlease, I don’t  know how to makes it better.\n \n \nThanks", "msg_date": "Sat, 28 Jul 2007 16:27:14 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "select on 1milion register = 6s" }, { "msg_contents": "Do you have analyzed your table before doing this ?\n\nLe samedi 28 juillet 2007, Bruno Rodrigues Siqueira a écrit :\n> Friends,\n>\n>\n>\n>\n>\n>\n>\n> Who can help me? My SELECT in a base with 1 milion register,\n> using expression index = 6seconds.\n>\n>\n>\n>\n>\n> Please, I don't know how to makes it better.\n>\n>\n>\n>\n>\n> Thanks\n\n\n\n-- \nHervé Piedvache\n", "msg_date": "Sat, 28 Jul 2007 21:56:58 +0200", "msg_from": "=?iso-8859-1?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 1milion register = 6s" }, { "msg_contents": "Bruno Rodrigues Siqueira wrote:\n> Who can help me? My SELECT in a base with 1 milion register, \n> using expression index = 6seconds�\n\nRun your query using \n\n EXPLAIN ANALYZE SELECT ... your query ...\n\nand then post the results to this newsgroup. Nobody can help until they see the results of EXPLAIN ANALYZE. Also, include all other relevant information, such as Postgres version, operating system, amount of memory, and any changes you have made to the Postgres configuration file.\n\nCraig\n\n\n", "msg_date": "Sat, 28 Jul 2007 12:59:26 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 1milion register = 6s" }, { "msg_contents": "\nOk.\n\t\n\n\tQuery\n\nEXPLAIN \nANALYZE\nselect \n to_char(data_encerramento,'mm/yyyy') as opcoes_mes, \n to_char(data_encerramento,'yyyy-mm') as ordem\nfrom detalhamento_bas\nwhere\n\nto_char( data_encerramento ,'yyyy-mm') \nbetween '2006-12' and '2007-01'\n\nGROUP BY opcoes_mes, ordem\nORDER BY ordem DESC\n\n\n\n\n\n\tQUERY RESULT\n\nQUERY PLAN\nSort (cost=11449.37..11449.40 rows=119 width=8) (actual\ntime=14431.537..14431.538 rows=2 loops=1)\n Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n -> HashAggregate (cost=11448.79..11448.96 rows=119 width=8) (actual\ntime=14431.521..14431.523 rows=2 loops=1)\n -> Index Scan using detalhamento_bas_idx3003 on detalhamento_bas\n(cost=0.00..11442.95 rows=11679 width=8) (actual time=0.135..12719.155\nrows=2335819 loops=1)\n Index Cond: ((to_char(data_encerramento, 'yyyy-mm'::text) >=\n'2006-12'::text) AND (to_char(data_encerramento, 'yyyy-mm'::text) <=\n'2007-01'::text))\nTotal runtime: 14431.605 ms\t\n\n\n\n\n\n\n\n\n\tSERVER\n\t \t DELL PowerEdge 2950\n\t\t XEON Quad-Core 3.0Ghz\n\t\t 4Gb RAM\n\t\t Linux CentOS 5.0 64-bits\n\n\n\n\n\n Postgres 8.1.4\n\n\n\n\n Postgresql.conf\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. 
The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have \n# to SIGHUP the postmaster for the changes to take effect, or use \n# \"pg_ctl reload\". Some settings, such as listen_addresses, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf'\t# IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n#external_pid_file = '(none)'\t\t# write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*'\t\t# what IP address(es) to listen on; \n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\n#port = 5432\nmax_connections = 10\n# note: increasing max_connections costs ~400 bytes of shared memory per \n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t\t# octal\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60\t\t# 1-600, in seconds\n#ssl = off\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\n#krb_server_hostname = ''\t\t# empty string matches any keytab\nentry\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 50000\t\t\t# min 16 or max_connections*2, 8KB\neach\ntemp_buffers = 1000\t\t\t# min 100, 8KB each\n#max_prepared_transactions = 5\t\t# can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 3145728\t\t\t# min 64, size in KB\nmaintenance_work_mem = 4194304\t\t# min 1024, size in KB\nmax_stack_depth = 2048\t\t\t# min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 208000\t\t\t# min max_fsm_relations*16, 6 bytes\neach\nmax_fsm_relations = 10000\t\t# min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 50\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 200\t\t\t# 10-10000 milliseconds between\nrounds\nbgwriter_lru_percent = 20.0\t\t# 0-100% of LRU buffers\nscanned/round\nbgwriter_lru_maxpages = 100\t\t# 0-1000 buffers max written/round\nbgwriter_all_percent = 3\t\t# 0-100% of all buffers\nscanned/round\nbgwriter_all_maxpages = 600\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = off\t\t\t\t# turns forced synchronization on or\noff\n#wal_sync_method = fsync\t\t# the default is the first option \n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\nfull_page_writes = off\t\t\t# recover from partial page writes\nwal_buffers = 2300\t\t\t# min 4, 8KB each\ncommit_delay = 10\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 256\t\t# in logfile segments, min 1, 16MB\neach\ncheckpoint_timeout = 300\t\t# range 30-3600, in seconds\ncheckpoint_warning = 99\t\t# in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = ''\t\t\t# command to use to archive a\nlogfile \n\t\t\t\t\t# 
segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 41943040\t\t# typically 8KB each\nrandom_page_cost = 1\t\t\t# units are one sequential page\nfetch \n\t\t\t\t\t# cost\ncpu_tuple_cost = 0.001\t\t\t# (same)\ncpu_index_tuple_cost = 0.0005\t\t# (same)\ncpu_operator_cost = 0.00025\t\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t\t# range 1-1000\nconstraint_exclusion = on\n#from_collapse_limit = 8\njoin_collapse_limit = 1\t\t# 1 disables collapsing of explicit \n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t\t# Valid values are combinations of \n\t\t\t\t\t# stderr, syslog and eventlog, \n\t\t\t\t\t# depending on platform.\n\n# This is used when logging to stderr:\nredirect_stderr = on\t\t\t# Enable capturing of stderr into\nlog \n\t\t\t\t\t# files\n\n# These are only used if redirect_stderr is on:\nlog_directory = 'pg_log'\t\t# Directory where log files are\nwritten\n\t\t\t\t\t# Can be absolute or relative to\nPGDATA\nlog_filename = 'postgresql-%a.log'\t# Log file name pattern.\n\t\t\t\t\t# Can include strftime() escapes\nlog_truncate_on_rotation = on\t# If on, any existing log file of the same \n\t\t\t\t\t# name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to.\nBut\n\t\t\t\t\t# such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on\nrestarts\n\t\t\t\t\t# or size-driven rotation. Default\nis\n\t\t\t\t\t# off, meaning append to existing\nfiles\n\t\t\t\t\t# in all cases.\nlog_rotation_age = 1440\t\t\t# Automatic rotation of logfiles\nwill \n\t\t\t\t\t# happen after so many minutes. 0\nto \n\t\t\t\t\t# disable.\nlog_rotation_size = 0\t\t\t# Automatic rotation of logfiles\nwill \n\t\t\t\t\t# happen after so many kilobytes of\nlog\n\t\t\t\t\t# output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t\t# Values, in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\n#log_min_messages = notice\t\t# Values, in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or verbose\nmessages\n\n#log_min_error_statement = panic\t# Values in order of increasing\nseverity:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# panic(off)\n\t\t\t\t \n#log_min_duration_statement = -1\t# -1 is disabled, 0 logs all\nstatements\n\t\t\t\t\t# and their durations, in\nmilliseconds.\n\n#silent_mode = off\t\t\t# DO NOT USE without syslog or \n\t\t\t\t\t# redirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_line_prefix = ''\t\t\t# Special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = PID\n\t\t\t\t\t# %t = timestamp (no milliseconds)\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session id\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %x = transaction id\n\t\t\t\t\t# %q = stop here in non-session \n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_statement = 'none'\t\t\t# none, mod, ddl, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = off\n#stats_command_string = off\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on\t\t\t# enable autovacuum subprocess?\n#autovacuum_naptime = 60\t\t# time between autovacuum runs, in\nsecs\n#autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 500\t# min # of tuple updates before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before \n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.2\t# fraction of rel size before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for \n\t\t\t\t\t# autovac, -1 means use \n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for \n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\ndefault_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, dmy'\n#timezone = unknown\t\t\t# actually, defaults to TZ \n\t\t\t\t\t# environment setting\n#australian_timezones = off\n#extra_float_digits = 0\t\t\t# min -15, max 2\nclient_encoding = LATIN1\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'pt_BR.ISO-8859-1'\t\t\t# locale for system\nerror message \n\t\t\t\t\t# strings\nlc_monetary = 'pt_BR.ISO-8859-1'\t\t\t# locale for\nmonetary formatting\nlc_numeric = 'pt_BR.ISO-8859-1'\t\t\t# locale for number\nformatting\nlc_time = 'pt_BR.ISO-8859-1'\t\t\t\t# locale for time\nformatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\ndeadlock_timeout = 1000\t\t# in milliseconds\n#max_locks_per_transaction = 64\t\t# min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = 
off\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom variable class\nnames\n\n\n\n\n\n\n\n\n\n\n\n-----Mensagem original-----\nDe: Craig James [mailto:[email protected]] \nEnviada em: sábado, 28 de julho de 2007 16:59\nPara: Bruno Rodrigues Siqueira; [email protected]\nAssunto: Re: [PERFORM] select on 1milion register = 6s\n\nBruno Rodrigues Siqueira wrote:\n> Who can help me? My SELECT in a base with 1 milion register, \n> using expression index = 6seconds…\n\nRun your query using \n\n EXPLAIN ANALYZE SELECT ... your query ...\n\nand then post the results to this newsgroup. Nobody can help until they see\nthe results of EXPLAIN ANALYZE. Also, include all other relevant\ninformation, such as Postgres version, operating system, amount of memory,\nand any changes you have made to the Postgres configuration file.\n\nCraig\n\n\n\n", "msg_date": "Sat, 28 Jul 2007 17:12:10 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: select on 1milion register = 6s" }, { "msg_contents": "On lau, 2007-07-28 at 17:12 -0300, Bruno Rodrigues Siqueira wrote:\n\n> where\n> \n> to_char( data_encerramento ,'yyyy-mm') \n> between '2006-12' and '2007-01'\n\nassuming data_encerramento is a date column, try:\nWHERE data_encerramento between '2006-12-01' and '2007-01-31'\n \ngnari\n\n\n", "msg_date": "Sat, 28 Jul 2007 22:36:16 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: select on 1milion register = 6s" }, { "msg_contents": "Data_encerramento is a timestamp column\n\n\nI will try your tip.\nThanks\n\n\n-----Mensagem original-----\nDe: [email protected]\n[mailto:[email protected]] Em nome de Ragnar\nEnviada em: sábado, 28 de julho de 2007 19:36\nPara: Bruno Rodrigues Siqueira\nCc: [email protected]\nAssunto: Re: RES: [PERFORM] select on 1milion register = 6s\n\nOn lau, 2007-07-28 at 17:12 -0300, Bruno Rodrigues Siqueira wrote:\n\n> where\n> \n> to_char( data_encerramento ,'yyyy-mm') \n> between '2006-12' and '2007-01'\n\nassuming data_encerramento is a date column, try:\nWHERE data_encerramento between '2006-12-01' and '2007-01-31'\n \ngnari\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n", "msg_date": "Sat, 28 Jul 2007 19:38:54 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: select on 1milion register = 6s" }, { "msg_contents": "Yes.\n\n\n\nLook this... 
and please, tell me if you can help me...\n\n\nThanks\n\nQuery\n\nEXPLAIN\nANALYZE\nselect \n to_char(data_encerramento,'mm/yyyy') as opcoes_mes, \n to_char(data_encerramento,'yyyy-mm') as ordem from detalhamento_bas\nwhere\n\nto_char( data_encerramento ,'yyyy-mm') \nbetween '2006-12' and '2007-01'\n\nGROUP BY opcoes_mes, ordem\nORDER BY ordem DESC\n\n\n\n\n\n\tQUERY RESULT\n\nQUERY PLAN\nSort (cost=11449.37..11449.40 rows=119 width=8) (actual\ntime=14431.537..14431.538 rows=2 loops=1)\n Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n -> HashAggregate (cost=11448.79..11448.96 rows=119 width=8) (actual\ntime=14431.521..14431.523 rows=2 loops=1)\n -> Index Scan using detalhamento_bas_idx3003 on detalhamento_bas\n(cost=0.00..11442.95 rows=11679 width=8) (actual time=0.135..12719.155\nrows=2335819 loops=1)\n Index Cond: ((to_char(data_encerramento, 'yyyy-mm'::text) >=\n'2006-12'::text) AND (to_char(data_encerramento, 'yyyy-mm'::text) <=\n'2007-01'::text))\nTotal runtime: 14431.605 ms\t\n\n\n\n\n\n\n\n\n\tSERVER\n\t \t DELL PowerEdge 2950\n\t\t XEON Quad-Core 3.0Ghz\n\t\t 4Gb RAM\n\t\t Linux CentOS 5.0 64-bits\n\n\n\n\n\n Postgres 8.1.4\n\n\n\n\n Postgresql.conf\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced #\nwith '#' anywhere on a line. The complete list of option names and # allowed\nvalues can be found in the PostgreSQL documentation. The # commented-out\nsettings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it #\nto the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the # postmaster,\ne.g. 'postmaster -c log_connections=on'. Some options # can be changed at\nrun-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster # receives\na SIGHUP. If you edit the file on a running system, you have # to SIGHUP the\npostmaster for the changes to take effect, or use # \"pg_ctl reload\". Some\nsettings, such as listen_addresses, require # a postmaster shutdown and\nrestart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf'\t# IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n#external_pid_file = '(none)'\t\t# write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*'\t\t# what IP address(es) to listen on; \n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\n#port = 5432 max_connections = 10 # note: increasing max_connections costs\n~400 bytes of shared memory per # connection slot, plus lock space (see\nmax_locks_per_transaction). 
You # might also need to raise shared_buffers\nto support more connections.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t\t# octal\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60\t\t# 1-600, in seconds\n#ssl = off\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\n#krb_server_hostname = ''\t\t# empty string matches any keytab\nentry\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 50000\t\t\t# min 16 or max_connections*2, 8KB\neach\ntemp_buffers = 1000\t\t\t# min 100, 8KB each\n#max_prepared_transactions = 5\t\t# can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory # per transaction slot, plus lock space (see\nmax_locks_per_transaction).\nwork_mem = 3145728\t\t\t# min 64, size in KB\nmaintenance_work_mem = 4194304\t\t# min 1024, size in KB\nmax_stack_depth = 2048\t\t\t# min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 208000\t\t\t# min max_fsm_relations*16, 6 bytes\neach\nmax_fsm_relations = 10000\t\t# min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 50\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 200\t\t\t# 10-10000 milliseconds between\nrounds\nbgwriter_lru_percent = 20.0\t\t# 0-100% of LRU buffers\nscanned/round\nbgwriter_lru_maxpages = 100\t\t# 0-1000 buffers max written/round\nbgwriter_all_percent = 3\t\t# 0-100% of all buffers\nscanned/round\nbgwriter_all_maxpages = 600\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = off\t\t\t\t# turns forced synchronization on or\noff\n#wal_sync_method = fsync\t\t# the default is the first option \n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\nfull_page_writes = off\t\t\t# recover from partial page writes\nwal_buffers = 2300\t\t\t# min 4, 8KB each\ncommit_delay = 10\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 256\t\t# in logfile segments, min 1, 16MB\neach\ncheckpoint_timeout = 300\t\t# range 30-3600, in seconds\ncheckpoint_warning = 99\t\t# in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = ''\t\t\t# command to use to archive a\nlogfile \n\t\t\t\t\t# 
segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 41943040\t\t# typically 8KB each\nrandom_page_cost = 1\t\t\t# units are one sequential page\nfetch \n\t\t\t\t\t# cost\ncpu_tuple_cost = 0.001\t\t\t# (same)\ncpu_index_tuple_cost = 0.0005\t\t# (same)\ncpu_operator_cost = 0.00025\t\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t\t# range 1-1000\nconstraint_exclusion = on\n#from_collapse_limit = 8\njoin_collapse_limit = 1\t\t# 1 disables collapsing of explicit \n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t\t# Valid values are combinations of \n\t\t\t\t\t# stderr, syslog and eventlog, \n\t\t\t\t\t# depending on platform.\n\n# This is used when logging to stderr:\nredirect_stderr = on\t\t\t# Enable capturing of stderr into\nlog \n\t\t\t\t\t# files\n\n# These are only used if redirect_stderr is on:\nlog_directory = 'pg_log'\t\t# Directory where log files are\nwritten\n\t\t\t\t\t# Can be absolute or relative to\nPGDATA\nlog_filename = 'postgresql-%a.log'\t# Log file name pattern.\n\t\t\t\t\t# Can include strftime() escapes\nlog_truncate_on_rotation = on\t# If on, any existing log file of the same \n\t\t\t\t\t# name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to.\nBut\n\t\t\t\t\t# such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on\nrestarts\n\t\t\t\t\t# or size-driven rotation. Default\nis\n\t\t\t\t\t# off, meaning append to existing\nfiles\n\t\t\t\t\t# in all cases.\nlog_rotation_age = 1440\t\t\t# Automatic rotation of logfiles\nwill \n\t\t\t\t\t# happen after so many minutes. 0\nto \n\t\t\t\t\t# disable.\nlog_rotation_size = 0\t\t\t# Automatic rotation of logfiles\nwill \n\t\t\t\t\t# happen after so many kilobytes of\nlog\n\t\t\t\t\t# output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t\t# Values, in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\n#log_min_messages = notice\t\t# Values, in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or verbose\nmessages\n\n#log_min_error_statement = panic\t# Values in order of increasing\nseverity:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# panic(off)\n\t\t\t\t \n#log_min_duration_statement = -1\t# -1 is disabled, 0 logs all\nstatements\n\t\t\t\t\t# and their durations, in\nmilliseconds.\n\n#silent_mode = off\t\t\t# DO NOT USE without syslog or \n\t\t\t\t\t# redirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_line_prefix = ''\t\t\t# Special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = PID\n\t\t\t\t\t# %t = timestamp (no milliseconds)\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session id\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %x = transaction id\n\t\t\t\t\t# %q = stop here in non-session \n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_statement = 'none'\t\t\t# none, mod, ddl, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = off\n#stats_command_string = off\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on\t\t\t# enable autovacuum subprocess?\n#autovacuum_naptime = 60\t\t# time between autovacuum runs, in\nsecs\n#autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 500\t# min # of tuple updates before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before \n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.2\t# fraction of rel size before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for \n\t\t\t\t\t# autovac, -1 means use \n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for \n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\ndefault_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, dmy'\n#timezone = unknown\t\t\t# actually, defaults to TZ \n\t\t\t\t\t# environment setting\n#australian_timezones = off\n#extra_float_digits = 0\t\t\t# min -15, max 2\nclient_encoding = LATIN1\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'pt_BR.ISO-8859-1'\t\t\t# locale for system\nerror message \n\t\t\t\t\t# strings\nlc_monetary = 'pt_BR.ISO-8859-1'\t\t\t# locale for\nmonetary formatting\nlc_numeric = 'pt_BR.ISO-8859-1'\t\t\t# locale for number\nformatting\nlc_time = 'pt_BR.ISO-8859-1'\t\t\t\t# locale for time\nformatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\ndeadlock_timeout = 1000\t\t# in milliseconds\n#max_locks_per_transaction = 64\t\t# min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = 
off\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom variable class\nnames\n\n\n\n\n\n\n\n-----Mensagem original-----\nDe: Hervé Piedvache [mailto:[email protected]] \nEnviada em: sábado, 28 de julho de 2007 16:57\nPara: [email protected]\nCc: Bruno Rodrigues Siqueira\nAssunto: Re: [PERFORM] select on 1milion register = 6s\n\nDo you have analyzed your table before doing this ?\n\nLe samedi 28 juillet 2007, Bruno Rodrigues Siqueira a écrit :\n> Friends,\n>\n>\n>\n>\n>\n>\n>\n> Who can help me? My SELECT in a base with 1 milion register,\n> using expression index = 6seconds.\n>\n>\n>\n>\n>\n> Please, I don't know how to makes it better.\n>\n>\n>\n>\n>\n> Thanks\n\n\n\n-- \nHervé Piedvache\n\n", "msg_date": "Sat, 28 Jul 2007 19:40:48 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: select on 1milion register = 6s" }, { "msg_contents": "Yes, i do.\n\n\n\n\n\n-----Mensagem original-----\nDe: [email protected]\n[mailto:[email protected]] Em nome de Hervé Piedvache\nEnviada em: sábado, 28 de julho de 2007 16:57\nPara: [email protected]\nCc: Bruno Rodrigues Siqueira\nAssunto: Re: [PERFORM] select on 1milion register = 6s\n\nDo you have analyzed your table before doing this ?\n\nLe samedi 28 juillet 2007, Bruno Rodrigues Siqueira a écrit :\n> Friends,\n>\n>\n>\n>\n>\n>\n>\n> Who can help me? My SELECT in a base with 1 milion register,\n> using expression index = 6seconds.\n>\n>\n>\n>\n>\n> Please, I don't know how to makes it better.\n>\n>\n>\n>\n>\n> Thanks\n\n\n\n-- \nHervé Piedvache\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Sat, 28 Jul 2007 20:07:18 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: select on 1milion register = 6s" }, { "msg_contents": "On 7/28/07, Bruno Rodrigues Siqueira <[email protected]> wrote:\n>\n> Ok.\n> QUERY PLAN\n> Sort (cost=11449.37..11449.40 rows=119 width=8) (actual\n> time=14431.537..14431.538 rows=2 loops=1)\n> Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n> -> HashAggregate (cost=11448.79..11448.96 rows=119 width=8) (actual\n> time=14431.521..14431.523 rows=2 loops=1)\n> -> Index Scan using detalhamento_bas_idx3003 on detalhamento_bas\n> (cost=0.00..11442.95 rows=11679 width=8) (actual time=0.135..12719.155\n> rows=2335819 loops=1)\n\nSee the row mismatch there? It expects about 11k rows, gets back 2.3\nmillion. That's a pretty big misestimate. Have you run analyze\nrecently on this table?\n\nIs there a reason you're doing this:\n\n\nto_char( data_encerramento ,'yyyy-mm')\nbetween '2006-12' and '2007-01'\n\nwhen you should be able to just do:\n\ndata_encerramento between '2006-12-01' and '2007-01-31'\n? that should be able to use good estimates from analyze. 
My guess\nis the planner is making a bad guess because of the way you're\nhandling the dates.\n\n> SERVER\n> DELL PowerEdge 2950\n> XEON Quad-Core 3.0Ghz\n> 4Gb RAM\n> Linux CentOS 5.0 64-bits\n> Postgres 8.1.4\n\n> Postgresql.conf\n> # - Memory -\n>\n> shared_buffers = 50000 # min 16 or max_connections*2, 8KB\n\n400 Meg is kind of low for a server with 4 G ram. 25% is more\nreasonable (i.e. 125000 buffers)\n\n> work_mem = 3145728 # min 64, size in KB\n> maintenance_work_mem = 4194304 # min 1024, size in KB\n\nWhoa nellie! thats ~ 3 Gig of work mem, and 4 gig of maintenance work\nmem. In a machine with 4 gig ram, that's a recipe for disaster.\n\nSomething more reasonable would be 128000 (~125Meg) for each since\nyou've limited your machine to 10 connections you should be ok.\nsetting work_mem too high can run your machine out of memory and into\na swap storm that will kill performance.\n\n> fsync = off # turns forced synchronization on or\n> off\n\nSo, the data in this database isn't important? Cause that's what\nfsync = off says to me. Better to buy yourself a nice battery backed\ncaching RAID controller than turn off fsync.\n\n> effective_cache_size = 41943040 # typically 8KB each\n\nAnd you're machine has 343,604,830,208 bytes of memory available for\ncaching? Seems a little high to me.\n\n> random_page_cost = 1 # units are one sequential page\n> fetch\n\nSeldom if ever is it a good idea to bonk the planner on the head with\nrandom_page_cost=1. setting it to 1.2 ot 1.4 is low enough, but 1.4\nto 2.0 is more realistic.\n\n> stats_start_collector = off\n> #stats_command_string = off\n> #stats_block_level = off\n> #stats_row_level = off\n> #stats_reset_on_server_start = off\n\nI think you need stats_row_level on for autovacuum, but I'm not 100% sure.\n\nLet us know what happens after fixing these settings and running\nanalyze and running explain analyze, with possible changes to the\nquery.\n", "msg_date": "Sat, 28 Jul 2007 23:51:41 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 1milion register = 6s" }, { "msg_contents": "On Sat, Jul 28, 2007 at 10:36:16PM +0000, Ragnar wrote:\n> On lau, 2007-07-28 at 17:12 -0300, Bruno Rodrigues Siqueira wrote:\n> \n> > where\n> > \n> > to_char( data_encerramento ,'yyyy-mm') \n> > between '2006-12' and '2007-01'\n> \n> assuming data_encerramento is a date column, try:\n> WHERE data_encerramento between '2006-12-01' and '2007-01-31'\n\nIMO, much better would be:\n\nWHERE data_encerramento >= '2006-12-01' AND data_encerramento <\n'2007-02-01'\n\nThis means you don't have to worry about last day of the month or\ntimestamp precision. In fact, since the field is actually a timestamp,\nthe between posted above won't work correctly.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Sun, 29 Jul 2007 11:35:37 -0500", "msg_from": "Decibel! 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: select on 1milion register = 6s" }, { "msg_contents": "Look it\n\n\n\n\nEXPLAIN\n ANALYZE\nselect \n to_char(data_encerramento,'mm/yyyy') as opcoes_mes, \n to_char(data_encerramento,'yyyy-mm') as ordem from detalhamento_bas \nwhere\n\ndata_encerramento = '01/12/2006' \n\nGROUP BY opcoes_mes, ordem\nORDER BY ordem DESC\n\n\n****************************************************************************\n\n\nQUERY PLAN\nSort (cost=60.72..60.72 rows=1 width=8) (actual time=4.586..4.586 rows=0\nloops=1)\n Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n -> HashAggregate (cost=60.72..60.72 rows=1 width=8) (actual\ntime=4.579..4.579 rows=0 loops=1)\n -> Index Scan using detalhamento_bas_idx3005 on detalhamento_bas\n(cost=0.00..60.67 rows=105 width=8) (actual time=4.576..4.576 rows=0\nloops=1)\n Index Cond: (data_encerramento = '2006-12-01\n00:00:00'::timestamp without time zone)\nTotal runtime: 4.629 ms\n\n\n////////////////////////////////////////////////////////////////////////////\n\nEXPLAIN\n ANALYZE\nselect \n to_char(data_encerramento,'mm/yyyy') as opcoes_mes, \n to_char(data_encerramento,'yyyy-mm') as ordem from detalhamento_bas \nwhere\n\ndata_encerramento >= '01/12/2006' and\ndata_encerramento < '01/02/2007' \n\nGROUP BY opcoes_mes, ordem\nORDER BY ordem DESC\n\n****************************************************************************\n\nQUERY PLAN\nSort (cost=219113.10..219113.10 rows=4 width=8) (actual\ntime=10079.212..10079.213 rows=2 loops=1)\n Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n -> HashAggregate (cost=219113.09..219113.09 rows=4 width=8) (actual\ntime=10079.193..10079.195 rows=2 loops=1)\n -> Seq Scan on detalhamento_bas (cost=0.00..217945.41 rows=2335358\nwidth=8) (actual time=0.041..8535.792 rows=2335819 loops=1)\n Filter: ((data_encerramento >= '2006-12-01\n00:00:00'::timestamp without time zone) AND (data_encerramento < '2007-02-01\n00:00:00'::timestamp without time zone))\nTotal runtime: 10079.256 ms\n\n\n\n\n\n\n\nStrange!!! Why does the index not works?\n\nAll my querys doesn't work with range dates.... I don't know what to do...\nPlease, help! \n\n\nBruno\n\n\n\n-----Mensagem original-----\nDe: Decibel! [mailto:[email protected]] \nEnviada em: domingo, 29 de julho de 2007 13:36\nPara: Ragnar\nCc: Bruno Rodrigues Siqueira; [email protected]\nAssunto: Re: RES: [PERFORM] select on 1milion register = 6s\n\nOn Sat, Jul 28, 2007 at 10:36:16PM +0000, Ragnar wrote:\n> On lau, 2007-07-28 at 17:12 -0300, Bruno Rodrigues Siqueira wrote:\n> \n> > where\n> > \n> > to_char( data_encerramento ,'yyyy-mm') \n> > between '2006-12' and '2007-01'\n> \n> assuming data_encerramento is a date column, try:\n> WHERE data_encerramento between '2006-12-01' and '2007-01-31'\n\nIMO, much better would be:\n\nWHERE data_encerramento >= '2006-12-01' AND data_encerramento <\n'2007-02-01'\n\nThis means you don't have to worry about last day of the month or\ntimestamp precision. 
In fact, since the field is actually a timestamp,\nthe between posted above won't work correctly.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n", "msg_date": "Sun, 29 Jul 2007 13:44:23 -0300", "msg_from": "\"Bruno Rodrigues Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: RES: select on 1milion register = 6s" }, { "msg_contents": "Scott Marlowe wrote:\n> On 7/28/07, Bruno Rodrigues Siqueira <[email protected]> wrote:\n\n> > stats_start_collector = off\n> > #stats_command_string = off\n> > #stats_block_level = off\n> > #stats_row_level = off\n> > #stats_reset_on_server_start = off\n> \n> I think you need stats_row_level on for autovacuum, but I'm not 100% sure.\n\nThat's correct (of course you need \"start_collector\" on as well). Most\nlikely, autovacuum is not even running.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sun, 29 Jul 2007 14:02:20 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 1milion register = 6s" }, { "msg_contents": "On Sun, Jul 29, 2007 at 01:44:23PM -0300, Bruno Rodrigues Siqueira wrote:\n> EXPLAIN\n> ANALYZE\n> select \n> to_char(data_encerramento,'mm/yyyy') as opcoes_mes, \n> to_char(data_encerramento,'yyyy-mm') as ordem from detalhamento_bas \n> where\n> \n> data_encerramento >= '01/12/2006' and\n> data_encerramento < '01/02/2007' \n> \n> GROUP BY opcoes_mes, ordem\n> ORDER BY ordem DESC\n> \n> ****************************************************************************\n> \n> QUERY PLAN\n> Sort (cost=219113.10..219113.10 rows=4 width=8) (actual\n> time=10079.212..10079.213 rows=2 loops=1)\n> Sort Key: to_char(data_encerramento, 'yyyy-mm'::text)\n> -> HashAggregate (cost=219113.09..219113.09 rows=4 width=8) (actual\n> time=10079.193..10079.195 rows=2 loops=1)\n> -> Seq Scan on detalhamento_bas (cost=0.00..217945.41 rows=2335358\n> width=8) (actual time=0.041..8535.792 rows=2335819 loops=1)\n> Filter: ((data_encerramento >= '2006-12-01\n> 00:00:00'::timestamp without time zone) AND (data_encerramento < '2007-02-01\n> 00:00:00'::timestamp without time zone))\n> Total runtime: 10079.256 ms\n> \n> Strange!!! Why does the index not works?\n\nIt's unlikely that it's going to be faster to index scan 2.3M rows than\nto sequential scan them. Try setting enable_seqscan=false and see if it\nis or not.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Sun, 29 Jul 2007 20:29:26 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: RES: select on 1milion register = 6s" }, { "msg_contents": "Please reply-all so others can learn and contribute.\n\nOn Sun, Jul 29, 2007 at 09:38:12PM -0700, Craig James wrote:\n> Decibel! wrote:\n> >It's unlikely that it's going to be faster to index scan 2.3M rows than\n> >to sequential scan them. Try setting enable_seqscan=false and see if it\n> >is or not.\n> \n> Out of curiosity ... Doesn't that depend on the table? Are all of the data \n> for one row stored contiguously, or are the data stored column-wise? 
If \n> it's the former, and the table has hundreds of columns, or a few columns \n> with large text strings, then wouldn't the time for a sequential scan \n> depend not on the number of rows, but rather the total amount of data?\n\nYes, the time for a seqscan is mostly dependent on table size and not\nthe number of rows. But the number of rows plays a very large role in\nthe cost of an indexscan.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Mon, 30 Jul 2007 01:52:41 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: RES: select on 1milion register = 6s" }, { "msg_contents": "Scott Marlowe wrote:\n>> random_page_cost = 1 # units are one sequential page\n>> fetch\n> \n> Seldom if ever is it a good idea to bonk the planner on the head with\n> random_page_cost=1. setting it to 1.2 ot 1.4 is low enough, but 1.4\n> to 2.0 is more realistic.\n\nWhich is probably the reason why the planner thinks a seq scan is\nfaster than an index scan...\n\nJan\n", "msg_date": "Wed, 01 Aug 2007 19:37:11 +0200", "msg_from": "Jan Dittmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 1milion register = 6s" } ]
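Putting this thread's advice together, here is a minimal test sketch against the detalhamento_bas table discussed above. The table and column names come from the posted queries, the range predicate follows Decibel!'s suggestion, and the enable_seqscan toggle is only for comparing plans in a test session, not a production setting (as noted above, a sequential scan may well be the right plan when roughly 2.3 million rows match):

ANALYZE detalhamento_bas;

EXPLAIN ANALYZE
SELECT to_char(data_encerramento, 'mm/yyyy') AS opcoes_mes,
       to_char(data_encerramento, 'yyyy-mm') AS ordem
  FROM detalhamento_bas
 WHERE data_encerramento >= '2006-12-01'
   AND data_encerramento <  '2007-02-01'
 GROUP BY opcoes_mes, ordem
 ORDER BY ordem DESC;

-- To check whether an index scan could actually beat the sequential scan,
-- re-run the same EXPLAIN ANALYZE between these two statements:
SET enable_seqscan = off;
RESET enable_seqscan;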
[ { "msg_contents": "As other posters have pointed out, you can overcome the ORDER BY/LIMIT restriction on UNIONs with parentheses. I think I misbalanced the parentheses in my original post, which would have caused an error if you just copied and pasted.\n\nI don't think the limitation has to do with planning--just parsing the query.\n", "msg_date": "Sat, 28 Jul 2007 13:43:58 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Slow query with backwards index scan" } ]
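For reference, the parenthesized form referred to above looks roughly like this; the table and column names are placeholders for illustration, not taken from the original thread:

(SELECT id, ts FROM table_a WHERE user_id = 42 ORDER BY ts DESC LIMIT 10)
UNION ALL
(SELECT id, ts FROM table_b WHERE user_id = 42 ORDER BY ts DESC LIMIT 10)
ORDER BY ts DESC
LIMIT 10;

Wrapping each branch in parentheses lets it carry its own ORDER BY and LIMIT, while the trailing ORDER BY and LIMIT apply to the combined result.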
[ { "msg_contents": "Hello,\n\nI'm currently trying to decide on a database design for tags in my web\n2.0application. The problem I'm facing is that I have 3 separate\ntables\ni.e. cars, planes, and schools. All three tables need to interact with the\ntags, so there will only be one universal set of tags for the three tables.\n\nI read a lot about tags and the best articles I found were:\n\nRoad to Web 2.0 ( http://wyome.com/docs/Road_to_Web_2.0:_The_Database_Design )\n\ntags: database schema (\nhttp://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html )\n\nand a forum discussion on tags with a very similar problem:\nhttp://www.webmasterworld.com/forum112/502.htm\nBut I don't like the solution, would like to stick with serial integer for\nCars, Planes and Schools tables.\n\nCurrently, this is my DB design:\n\nCars (carid, carname, text, etc.)\nPlanes (planeid, planename, text, etc.)\nSchools (schoolname, text, etc.) <------ School does not take int as primary\nkey but a varchar.\n\nTags (tagid, tagname, etc)\n\n--- Now here is where I have the question. I have to link up three separate\ntables to use Tags\n--- So when a new car is created in the Cars table, should I insert that\ncarID into the TagsItems table\n--- as itemID? So something like this?\n\nTagsItems\n(\n tagid INT NOT NULL REFERENCES Tags.TagID,\n itemid INT NULL, <---- really references Cars.carID and Planes.planeID\n schoolname varchar NULL <---- Saves the Schools.schoolname\n itemid + tagId as Unique\n)\n\nI also have a question on the schoolname field, because it accepts varchar\nnot integer. There seems to be some design that would better fit my needs.\nI'm asking you guys for a little assistance.\n\n-- \nRegards,\nJay Kang\n\nHello,\n\nI'm currently trying to decide on a database design for tags in my\nweb 2.0 application. The problem I'm facing is that I have 3 separate\ntables i.e. cars, planes, and schools. All three tables need to\ninteract with the tags, so there will only be one universal set of tags\nfor the three tables. \nI read a lot about tags and the best articles I found were:\n\nRoad to Web 2.0 ( http://wyome.com/docs/Road_to_Web_2.0:_The_Database_Design\n )\ntags: database schema ( http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html\n )\n\nand a forum discussion on tags with a very similar problem: \nhttp://www.webmasterworld.com/forum112/502.htm \nBut I don't like the solution, would like to stick with serial integer for Cars, Planes and Schools tables.\n\nCurrently, this is my DB design:\n\nCars (carid, carname, text, etc.)\nPlanes (planeid, planename, text, etc.)\nSchools (schoolname, text, etc.) <------ School does not take int as primary key but a varchar.\n\nTags (tagid, tagname, etc) --- Now here is where I have the question. I have to link up three separate tables to use Tags \n--- So when a new car is created in the Cars table, should I insert that carID into the TagsItems table\n--- as itemID? So something like this?\n\nTagsItems \n(\n  tagid INT NOT NULL REFERENCES Tags.TagID, \n  itemid INT NULL,  <---- really references Cars.carID and Planes.planeID\n  schoolname varchar NULL  <---- Saves the Schools.schoolname\n  itemid + tagId as Unique\n) \n\nI also have a question on the schoolname field, because it accepts\nvarchar not integer. There seems to be some design that would better\nfit my needs. 
I'm asking  you guys for a little assistance.-- Regards,Jay Kang", "msg_date": "Sun, 29 Jul 2007 16:22:30 +0200", "msg_from": "\"Jay Kang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Questions on Tags table schema" }, { "msg_contents": "Jay Kang wrote:\n> Hello,\n> \n> I'm currently trying to decide on a database design for tags in my web\n> 2.0application. The problem I'm facing is that I have 3 separate\n> tables\n> i.e. cars, planes, and schools. All three tables need to interact with the\n> tags, so there will only be one universal set of tags for the three tables.\n> \n> I read a lot about tags and the best articles I found were:\n> \n> Road to Web 2.0 ( http://wyome.com/docs/Road_to_Web_2.0:_The_Database_Design )\n\nAnd what in particular recommended this to you?\n\n> Currently, this is my DB design:\n> \n> Cars (carid, carname, text, etc.)\n> Planes (planeid, planename, text, etc.)\n> Schools (schoolname, text, etc.) <------ School does not take int as primary\n> key but a varchar.\n\nYou don't mention a primary-key here at all. You're not thinking of \nusing \"schoolname\" are you?\n\n> Tags (tagid, tagname, etc)\n> \n> --- Now here is where I have the question. I have to link up three separate\n> tables to use Tags\n> --- So when a new car is created in the Cars table, should I insert that\n> carID into the TagsItems table\n> --- as itemID? So something like this?\n> \n> TagsItems\n> (\n> tagid INT NOT NULL REFERENCES Tags.TagID,\n> itemid INT NULL, <---- really references Cars.carID and Planes.planeID\n> schoolname varchar NULL <---- Saves the Schools.schoolname\n> itemid + tagId as Unique\n> )\n\nWhat's wrong with the completely standard:\n car_tags (carid, tagid)\n plane_tags (planeid, tagid)\n school_tags (schoolid, tagid)\n\n> I also have a question on the schoolname field, because it accepts varchar\n> not integer. There seems to be some design that would better fit my needs.\n> I'm asking you guys for a little assistance.\n\nSorry, don't understand this question.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jul 2007 08:21:34 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Thanks for the reply Richard, but I guess I didn't explain myself well. I\nhave three tables that needs to be mapped to the Tags table. Most of the web\nreferences that I mentioned only maps one table to the Tags table. Here is\nmy Tags table:\n\nCREATE TABLE Tags\n(\n TagID serial NOT NULL,\n TagName varchar(64) NOT NULL,\n AddedBy varchar(256) NOT NULL,\n AddedDate timestamp NOT NULL,\n Status int NOT NULL,\n ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount DEFAULT (('0'))\n);\n\nIs it your opinion that the most standard solution for my problem would be\nto create three separate tables called car_tags, plane_tags and school_tags,\nwhich maps to each of the tables:\n\nCREATE TABLE car_tags\n(\n CarID integer NOT NULL,\n TagID integer NOT NULL\n);\n\nCREATE TABLE plane_tags\n(\n PlaneID integer NOT NULL,\n TagID integer NOT NULL\n);\n\nCREATE TABLE school_tags\n(\n SchoolID integer NOT NULL,\n TagID integer NOT NULL\n);\n\nWould TagID for each of these three tables be a foreign key for the Tags\ntable? Also would each CarID, PlaneID, and SchoolID be a foreign for each\ncorresponding tables? Also won't getting tags for three tables be more\ncomplicated? 
Isn't there a better solution or is this wishful thinking?\n\nOn 7/30/07, Richard Huxton <[email protected]> wrote:\n>\n> Jay Kang wrote:\n> > Hello,\n> >\n> > I'm currently trying to decide on a database design for tags in my web\n> > 2.0application. The problem I'm facing is that I have 3 separate\n> > tables\n> > i.e. cars, planes, and schools. All three tables need to interact with\n> the\n> > tags, so there will only be one universal set of tags for the three\n> tables.\n> >\n> > I read a lot about tags and the best articles I found were:\n> >\n> > Road to Web 2.0 (\n> http://wyome.com/docs/Road_to_Web_2.0:_The_Database_Design )\n>\n> And what in particular recommended this to you?\n\n\nThe Road to Web 2.0 is an example of tag implementation, just thought it\nwould be helpful to someone with the same problem that I have.\n\n> Currently, this is my DB design:\n> >\n> > Cars (carid, carname, text, etc.)\n> > Planes (planeid, planename, text, etc.)\n> > Schools (schoolname, text, etc.) <------ School does not take int as\n> primary\n> > key but a varchar.\n>\n> You don't mention a primary-key here at all. You're not thinking of\n> using \"schoolname\" are you?\n\n\nYes, I used school name varchar(64) as primary key for the school tables.\nYou can consider schoolName as Pagename for a wiki.\n\n> Tags (tagid, tagname, etc)\n> >\n> > --- Now here is where I have the question. I have to link up three\n> separate\n> > tables to use Tags\n> > --- So when a new car is created in the Cars table, should I insert that\n>\n> > carID into the TagsItems table\n> > --- as itemID? So something like this?\n> >\n> > TagsItems\n> > (\n> > tagid INT NOT NULL REFERENCES Tags.TagID,\n> > itemid INT NULL, <---- really references Cars.carID and\n> Planes.planeID\n> > schoolname varchar NULL <---- Saves the Schools.schoolname\n> > itemid + tagId as Unique\n> > )\n>\n> What's wrong with the completely standard:\n> car_tags (carid, tagid)\n> plane_tags (planeid, tagid)\n> school_tags (schoolid, tagid)\n\n> I also have a question on the schoolname field, because it accepts varchar\n> > not integer. There seems to be some design that would better fit my\n> needs.\n> > I'm asking you guys for a little assistance.\n>\n> Sorry, don't understand this question.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nRegards,\nJay Kang\n\nThanks for the reply Richard, but I guess I didn't explain myself well. I have three tables that needs to be mapped to the Tags table. Most of the web references that I mentioned only maps one table to the Tags table. Here is my Tags table:\nCREATE TABLE Tags(   TagID serial NOT NULL,   TagName varchar(64) NOT NULL,   AddedBy varchar(256) NOT NULL,   AddedDate timestamp NOT NULL,   Status int NOT NULL,   ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount  DEFAULT (('0'))\n);Is it your opinion that the most standard solution for my problem would be to create three separate tables called car_tags, plane_tags and school_tags, which maps to each of the tables:CREATE TABLE car_tags \n(   CarID integer NOT NULL,    TagID integer NOT NULL);CREATE TABLE plane_tags \n(\n   PlaneID integer NOT NULL, \n   TagID integer NOT NULL\n);CREATE TABLE school_tags \n(\n   SchoolID integer NOT NULL, \n   TagID integer NOT NULL\n);Would TagID for each of these three tables be a foreign key for the Tags table? Also would each CarID, PlaneID, and SchoolID be a foreign for each corresponding tables? Also won't getting tags for three tables be more complicated? 
Isn't there a better solution or is this wishful thinking? \nOn 7/30/07, Richard Huxton <[email protected]\n> wrote:\nJay Kang wrote:> Hello,>> I'm currently trying to decide on a database design for tags in my web> 2.0application. The problem I'm facing is that I have 3 separate> tables> \ni.e. cars, planes, and schools. All three tables need to interact with the> tags, so there will only be one universal set of tags for the three tables.>> I read a lot about tags and the best articles I found were:\n>> Road to Web 2.0 ( http://wyome.com/docs/Road_to_Web_2.0:_The_Database_Design\n )And what in particular recommended this to you?\nThe Road to Web 2.0 is an example of tag implementation, just thought it would be helpful to someone with the same problem that I have.\n> Currently, this is my DB design:>\n> Cars (carid, carname, text, etc.)> Planes (planeid, planename, text, etc.)> Schools (schoolname, text, etc.) <------ School does not take int as primary> key but a varchar.You don't mention a primary-key here at all. You're not thinking of\nusing \"schoolname\" are you?Yes, I used school name varchar(64) as primary key for the school tables. You can consider schoolName as Pagename for a wiki. \n> Tags (tagid, tagname, etc)>> --- Now here is where I have the question. I have to link up three separate> tables to use Tags> --- So when a new car is created in the Cars table, should I insert that\n> carID into the TagsItems table> --- as itemID? So something like this?>> TagsItems> (>   tagid INT NOT NULL REFERENCES Tags.TagID,>   itemid INT NULL,  <---- really references \nCars.carID and Planes.planeID>   schoolname varchar NULL  <---- Saves the Schools.schoolname>   itemid + tagId as Unique> )What's wrong with the completely standard:   car_tags (carid, tagid)\n   plane_tags (planeid, tagid)   school_tags (schoolid, tagid)> I also have a question on the schoolname field, because it accepts varchar\n> not integer. There seems to be some design that would better fit my needs.\n> I'm asking  you guys for a little assistance.Sorry, don't understand this question.--   Richard Huxton   Archonet Ltd-- Regards,\n\nJay Kang", "msg_date": "Mon, 30 Jul 2007 12:13:19 +0200", "msg_from": "\"Jay Kang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Jay Kang wrote:\n> Thanks for the reply Richard, but I guess I didn't explain myself well. I\n> have three tables that needs to be mapped to the Tags table. Most of the web\n> references that I mentioned only maps one table to the Tags table. Here is\n> my Tags table:\n\nOne quick point. SQL is case-insensitive unless you double-quote \nidentifiers. This means CamelCase tend not to be used. So instead of \nAddedBy you'd more commonly see added_by.\n\n> CREATE TABLE Tags\n> (\n> TagID serial NOT NULL,\n> TagName varchar(64) NOT NULL,\n> AddedBy varchar(256) NOT NULL,\n\nThis is supposed to be a user? But it's not a foreign-key, and you've \ndecided that 255 characters will be a good length, but 257 is impossible.\n\n> AddedDate timestamp NOT NULL,\n\nYou probably want \"timestamp with time zone\" (which represents an \nabsolute time) rather than without time-zone (which means 1pm in London \nis different from 1pm in New York).\n\nAlso, if it's \"AddedDate\" why isn't it a date?\n\n> Status int NOT NULL,\n> ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount DEFAULT (('0'))\n> );\n\nYou might not want to mix in details about number of views with details \nof the tag. 
Particularly if you might record more details later (when \nviewed, by whom etc).\n\n> Is it your opinion that the most standard solution for my problem would be\n> to create three separate tables called car_tags, plane_tags and school_tags,\n> which maps to each of the tables:\n\nWell, yes.\n\n> CREATE TABLE car_tags\n> (\n> CarID integer NOT NULL,\n> TagID integer NOT NULL\n> );\n[snip other table defs]\n\nDon't forget CarID isn't really an integer (I mean, you're not going to \nbe doing sums with car id's are you?) it's actually just a unique code. \nOf course, computers are particularly fast at dealing with 32-bit integers.\n\n> Would TagID for each of these three tables be a foreign key for the Tags\n> table? Also would each CarID, PlaneID, and SchoolID be a foreign for each\n> corresponding tables? Also won't getting tags for three tables be more\n> complicated? Isn't there a better solution or is this wishful thinking?\n\nYes, yes, and no.\n\nYou have cars which have tags and planes which have tags. Tagging a \nplane is not the same as tagging a car. Either you confuse that issue, \nor you want separate tables to track each relationship.\n\nFetching a list of everything with a specific tag is straightforward enough:\n\nSELECT 'car'::text AS item_type, car_id AS item_id, carname AS item_name\nFROM cars JOIN car_tags WHERE tag_id = <x>\nUNION ALL\nSELECT 'plane'::text AS item_type, plane_id AS item_id, planename AS \nitem_name\nFROM planes JOIN plane_tags WHERE tag_id = <x>\n...\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jul 2007 11:28:05 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Richard Huxton wrote:\n> \n>> CREATE TABLE car_tags\n>> (\n>> CarID integer NOT NULL,\n>> TagID integer NOT NULL\n>> );\n> [snip other table defs]\n> \n> Don't forget CarID isn't really an integer (I mean, you're not going to \n> be doing sums with car id's are you?) it's actually just a unique code. \n> Of course, computers are particularly fast at dealing with 32-bit integers.\n\nJust realised I haven't explained what I meant by that.\n\nCarID is a different type from PlaneID and TagID. As it happens, we are \nusing integers to represent them all, but a CarID = 1 is different from \na PlaneID = 1 and although you can numerically compare the two it is an \nerror to do so.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jul 2007 11:48:55 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Hey Richard,\n\nThanks again for the reply, its great to hear some feedback. So once again,\nhere we go:\n\nOn 7/30/07, Richard Huxton <[email protected]> wrote:\n>\n> Jay Kang wrote:\n> > Thanks for the reply Richard, but I guess I didn't explain myself well.\n> I\n> > have three tables that needs to be mapped to the Tags table. Most of the\n> web\n> > references that I mentioned only maps one table to the Tags table. Here\n> is\n> > my Tags table:\n>\n> One quick point. SQL is case-insensitive unless you double-quote\n> identifiers. This means CamelCase tend not to be used. 
So instead of\n> AddedBy you'd more commonly see added_by.\n\n\n\nYes, I am aware that postgre is case-insensitive, but I write all query with\ncase so its easier for me to read later on.\n\n> CREATE TABLE Tags\n> > (\n> > TagID serial NOT NULL,\n> > TagName varchar(64) NOT NULL,\n> > AddedBy varchar(256) NOT NULL,\n>\n> This is supposed to be a user? But it's not a foreign-key, and you've\n> decided that 255 characters will be a good length, but 257 is impossible.\n\n\nI'm developing in c# with asp.net 2.0 which as a membership provider. I'm\nusing ASP.NET 2.0 Website Programming / Problem - Design - Solution\" (Wrox\nPress) <http://www.amazon.com/gp/product/0764584642> as a reference, so not\nhaving AddedBy as a foreign key within each of the tables was taken directly\nfrom the text. I do not understand your comment about 255 character with 257\nbeing impossible? Could you elaborate, if you feel it warrants further\nelaboration.\n\n> AddedDate timestamp NOT NULL,\n>\n> You probably want \"timestamp with time zone\" (which represents an\n> absolute time) rather than without time-zone (which means 1pm in London\n> is different from 1pm in New York).\n\n\nOK, timestamp with time zone it is. To be honest, I've been using postgresql\nfor a while now, but never tried using timestamp with time zone.\n\n\nAlso, if it's \"AddedDate\" why isn't it a date?\n\n\nI had this first as a date, but asp.net 2.0 didn't like it, and changing it\nto a timestamp fixed the problem.\n\n> Status int NOT NULL,\n> > ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount DEFAULT\n> (('0'))\n> > );\n>\n> You might not want to mix in details about number of views with details\n> of the tag. Particularly if you might record more details later (when\n> viewed, by whom etc).\n\n\nAre you suggesting to separate the Tags table into Tags and TagDetails?\nBecause ViewCount within Tags table would represent how many times that tag\nwas clicked, I think others would call this field Popularity. I've been\nreading alot about tags and I am fascinated at all the information about\nuser tags can provide. Where would I put information such as ViewCount,\nAddedBy, Status, etc if not within the Tags table? Sorry, if I'm totally\nmissing your point.\n\n> Is it your opinion that the most standard solution for my problem would be\n> > to create three separate tables called car_tags, plane_tags and\n> school_tags,\n> > which maps to each of the tables:\n>\n> Well, yes.\n>\n> > CREATE TABLE car_tags\n> > (\n> > CarID integer NOT NULL,\n> > TagID integer NOT NULL\n> > );\n> [snip other table defs]\n>\n> Don't forget CarID isn't really an integer (I mean, you're not going to\n> be doing sums with car id's are you?) it's actually just a unique code.\n> Of course, computers are particularly fast at dealing with 32-bit\n> integers.\n\n\nYes, within the Cars table CarID would be a serial so it would auto\nincrement with each row. I understand your concern.\n\n> Would TagID for each of these three tables be a foreign key for the Tags\n> > table? Also would each CarID, PlaneID, and SchoolID be a foreign for\n> each\n> > corresponding tables? Also won't getting tags for three tables be more\n> > complicated? Isn't there a better solution or is this wishful thinking?\n>\n> Yes, yes, and no.\n>\n> You have cars which have tags and planes which have tags. Tagging a\n> plane is not the same as tagging a car. 
Either you confuse that issue,\n> or you want separate tables to track each relationship.\n\n\nHmm, so if I have a tag called \"Saab\" and a user clicks on Saab, then\ninformation from both Cars and Planes table would appear. If I'm inserting a\nnew row for a tag, wouldn't I need to check if that tagname already appears\nwithin the Tags table or would I just create a new row with that tag name.\nSorry, I'm not sure what \" 'car'::text \" this is doing, but I'm guessing its\nused to group the cars, planes, etc. so it knows which item_type it is.\nBrilliant!\n\nFetching a list of everything with a specific tag is straightforward enough:\n>\n> SELECT 'car'::text AS item_type, car_id AS item_id, carname AS item_name\n> FROM cars JOIN car_tags WHERE tag_id = <x>\n> UNION ALL\n> SELECT 'plane'::text AS item_type, plane_id AS item_id, planename AS\n> item_name\n> FROM planes JOIN plane_tags WHERE tag_id = <x>\n\n\n\nThanks for the query, I'm going to start programming so I can figure it out\nas I go along.\n\n...\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nRegards,\nJay Kang\n\nHey Richard,Thanks again for the reply, its great to hear some feedback. So once again, here we go:On 7/30/07, Richard Huxton <\[email protected]> wrote:Jay Kang wrote:> Thanks for the reply Richard, but I guess I didn't explain myself well. I\n> have three tables that needs to be mapped to the Tags table. Most of the web> references that I mentioned only maps one table to the Tags table. Here is> my Tags table:One quick point. SQL is case-insensitive unless you double-quote\nidentifiers. This means CamelCase tend not to be used. So instead ofAddedBy you'd more commonly see added_by.Yes, I am aware that postgre is case-insensitive, but I write all query with case so its easier for me to read later on. \n> CREATE TABLE Tags> (>    TagID serial NOT NULL,>    TagName varchar(64) NOT NULL,\n>    AddedBy varchar(256) NOT NULL,This is supposed to be a user? But it's not a foreign-key, and you'vedecided that 255 characters will be a good length, but 257 is impossible.\nI'm developing in c# with asp.net 2.0 which as a membership provider. I'm using \nASP.NET 2.0 Website Programming / Problem - Design - Solution\" (Wrox Press) as a reference, so not having AddedBy as a foreign key within each of the tables was taken directly from the text. I do not understand your comment about 255 character with 257 being impossible? Could you elaborate, if you feel it warrants further elaboration. \n>    AddedDate timestamp NOT NULL,You probably want \"timestamp with time zone\" (which represents an\nabsolute time) rather than without time-zone (which means 1pm in Londonis different from 1pm in New York).OK, timestamp with time zone it is. To be honest, I've been using postgresql for a while now, but never tried using timestamp with time zone. \n Also, if it's \"AddedDate\" why isn't it a date?\nI had this first as a date, but asp.net 2.0 didn't like it, and changing it to a timestamp fixed the problem.\n>    Status int NOT NULL,>    ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount  DEFAULT (('0'))> );You might not want to mix in details about number of views with detailsof the tag. Particularly if you might record more details later (when\nviewed, by whom etc).Are you suggesting to separate the Tags table into Tags and TagDetails? Because ViewCount within Tags table would represent how many times that tag was clicked, I think others would call this field Popularity. 
I've been reading alot about tags and I am fascinated at all the information about user tags can provide. Where would I put information such as ViewCount, AddedBy, Status, etc if not within the Tags table? Sorry, if I'm totally missing your point. \n> Is it your opinion that the most standard solution for my problem would be\n> to create three separate tables called car_tags, plane_tags and school_tags,> which maps to each of the tables:Well, yes.> CREATE TABLE car_tags> (>    CarID integer NOT NULL,\n>    TagID integer NOT NULL> );[snip other table defs]Don't forget CarID isn't really an integer (I mean, you're not going tobe doing sums with car id's are you?) it's actually just a unique code.\nOf course, computers are particularly fast at dealing with 32-bit integers.Yes, within the Cars table CarID would be a serial so it would auto increment with each row. I understand your concern.\n> Would TagID for each of these three tables be a foreign key for the Tags> table? Also would each CarID, PlaneID, and SchoolID be a foreign for each\n> corresponding tables? Also won't getting tags for three tables be more> complicated? Isn't there a better solution or is this wishful thinking?Yes, yes, and no.You have cars which have tags and planes which have tags. Tagging a\nplane is not the same as tagging a car. Either you confuse that issue,or you want separate tables to track each relationship.Hmm, so if I have a tag called \"Saab\" and a user clicks on Saab, then information from both Cars and Planes table would appear. If I'm inserting a new row for a tag, wouldn't I need to check if that tagname already appears within the Tags table or would I just create a new row with that tag name. Sorry, I'm not sure what \" 'car'::text \" this is doing, but I'm guessing its used to group the cars, planes, etc. so it knows which item_type it is. Brilliant!\nFetching a list of everything with a specific tag is straightforward enough:\nSELECT 'car'::text AS item_type, car_id AS item_id, carname AS item_nameFROM cars JOIN car_tags WHERE tag_id = <x>UNION ALLSELECT 'plane'::text AS item_type, plane_id AS item_id, planename AS\nitem_nameFROM planes JOIN plane_tags WHERE tag_id = <x>Thanks for the query, I'm going to start programming so I can figure it out as I go along. \n...--   Richard Huxton   Archonet Ltd-- Regards,Jay Kang", "msg_date": "Mon, 30 Jul 2007 13:26:10 +0200", "msg_from": "\"Jay Kang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Jay Kang wrote:\n>> One quick point. SQL is case-insensitive unless you double-quote\n>> identifiers. This means CamelCase tend not to be used. So instead of\n>> AddedBy you'd more commonly see added_by.\n> \n> Yes, I am aware that postgre is case-insensitive, but I write all query with\n> case so its easier for me to read later on.\n\nIt's SQL that's case insensitive. Pretty much any SQL-based database \nsystem you use will do case-folding in some way.\n\n>> CREATE TABLE Tags\n>>> (\n>>> TagID serial NOT NULL,\n>>> TagName varchar(64) NOT NULL,\n>>> AddedBy varchar(256) NOT NULL,\n>> This is supposed to be a user? But it's not a foreign-key, and you've\n>> decided that 255 characters will be a good length, but 257 is impossible.\n> \n> \n> I'm developing in c# with asp.net 2.0 which as a membership provider. 
I'm\n> using ASP.NET 2.0 Website Programming / Problem - Design - Solution\" (Wrox\n> Press) <http://www.amazon.com/gp/product/0764584642> as a reference, so not\n> having AddedBy as a foreign key within each of the tables was taken directly\n> from the text. I do not understand your comment about 255 character with 257\n> being impossible? Could you elaborate, if you feel it warrants further\n> elaboration.\n\nWhat is AddedBy - a name, a user-id?\nIf it's an ID, then it seems very long.\nIf it's a name, then 256 characters sounds a bit arbitrary as a length. \nWhy choose 256?\n\nThe advantage of *not* having AddedBy as a foreign-key is that you can \ndelete users and not have to update tags with their user-id. The \ndisadvantage is the same thing. You can end up with tags added by \nnon-existent users.\n\n>> AddedDate timestamp NOT NULL,\n>>\n>> You probably want \"timestamp with time zone\" (which represents an\n>> absolute time) rather than without time-zone (which means 1pm in London\n>> is different from 1pm in New York).\n> \n> OK, timestamp with time zone it is. To be honest, I've been using postgresql\n> for a while now, but never tried using timestamp with time zone.\n\nYou can get away with it as long as the time-zone setting on your client \n stays the same. Then it changes, and you're left wondering why all \nyour comparisons are hours out.\n\n> Also, if it's \"AddedDate\" why isn't it a date?\n> \n> I had this first as a date, but asp.net 2.0 didn't like it, and changing it\n> to a timestamp fixed the problem.\n\nSurely asp.net has a date type? If not, I'd suggest AddedTimestamp as a \nname (or AddedTS if you don't enjoy lots of typing :-). It won't matter \nto you now, but 12 months from now it'll save you looking up data types.\n\n>> Status int NOT NULL,\n>>> ViewCount int NOT NULL CONSTRAINT DF_tm_Tags_ViewCount DEFAULT\n>> (('0'))\n>>> );\n>> You might not want to mix in details about number of views with details\n>> of the tag. Particularly if you might record more details later (when\n>> viewed, by whom etc).\n> \n> Are you suggesting to separate the Tags table into Tags and TagDetails?\n> Because ViewCount within Tags table would represent how many times that tag\n> was clicked, I think others would call this field Popularity. I've been\n> reading alot about tags and I am fascinated at all the information about\n> user tags can provide. Where would I put information such as ViewCount,\n> AddedBy, Status, etc if not within the Tags table? Sorry, if I'm totally\n> missing your point.\n\nWell, it all depends on your use analysis. You could make a good \nargument that there are two sets of fact data:\n1. \"Identity\" data - id, name, added-by, added-ts, status\n2. \"Activity\" data - number of views, last time clicked etc\n\nDepending on how you intend to use the tags, it might make sense to \nseparate these. Particularly if you find yourself with no sensible \nvalues in activity data until a tag is used.\n\n From a separate performance-related point of view, you would expect \nactivity data to be updated much more often than identity data.\n\n>> You have cars which have tags and planes which have tags. Tagging a\n>> plane is not the same as tagging a car. Either you confuse that issue,\n>> or you want separate tables to track each relationship.\n> \n> Hmm, so if I have a tag called \"Saab\" and a user clicks on Saab, then\n> information from both Cars and Planes table would appear.\n\nWell, if you UNION them, yes. 
Of course you'll need to find columns that \nmake sense across all types. If planes have e.g. \"wingspan\" then you'll \nneed to add 'not applicable'::text in the car-related subquery.\n\n > If I'm inserting a\n> new row for a tag, wouldn't I need to check if that tagname already appears\n> within the Tags table or would I just create a new row with that tag name.\n> Sorry, I'm not sure what \" 'car'::text \" this is doing, but I'm guessing its\n> used to group the cars, planes, etc. so it knows which item_type it is.\n> Brilliant!\n\nYes, if you're giving a list of all \"Saab\"s, I'm assuming your users \nwill want to know if it's a plane or car. The ::text is just PostgreSQL \nshorthand for a cast - it's good practice to specify precise types for \nliteral values.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jul 2007 12:57:06 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Jay Kang wrote:\n> Hey Richard,\n> \n> Sorry for the late reply, I was just making my first test version of the DB\n> closely resembling you suggested design. Just wanted to write you back\n> answering your questions. So here we go:\n\nNo problem - it's email and what with different timezones it's common to \nhave gaps.\n\n>> What is AddedBy - a name, a user-id?\n>> If it's an ID, then it seems very long.\n>> If it's a name, then 256 characters sounds a bit arbitrary as a length.\n>> Why choose 256?\n> \n> No, AddedBy is the username of the individual. Why is 256 characters\n> arbitrary as a length? Would 255 be better or 32? I guess, your saying its\n> too long, shorten it, I'm just going with what the books says, but I really\n> welcome any comments you have^^\n\nThe book doesn't know what you're trying to do. You do. The important \nthing is not whether you choose 256 or 32 or 100, it's that you've \nthought about it first.\n\nObvious thoughts:\n1. Is this going to be a real name \"Richard Huxton\" or an identifier \n\"rhuxton123\"? You'll want more characters for the real name than an \nidentifier.\n2. Where will this be displayed and will it take up too much space? If I \npick a username of WWW...250 repeats...WWW does that mess up any formatting?\n3. Do we allow HTML-unsafe characters ('<', '&') and escape them when \nused, or just not allow them?\n\nNo wrong or right, the process of thinking about it is important.\n\n> The advantage of *not* having AddedBy as a foreign-key is that you can\n>> delete users and not have to update tags with their user-id. The\n>> disadvantage is the same thing. You can end up with tags added by\n>> non-existent users.\n> \n> Thanks, I would like anonymous users to be able to add tags, so I guess I'll\n> leave it the way it is^^\n\nYou would normally use NULL to indicate \"unknown\", which in the case of \nan anonymous user would be true. A NULL foreign-key is allowed (unless \nyou define the column not-null of course).\n\n[snip]\n>> Well, it all depends on your use analysis. You could make a good\n>> argument that there are two sets of fact data:\n>> 1. \"Identity\" data - id, name, added-by, added-ts, status\n>> 2. \"Activity\" data - number of views, last time clicked etc\n> \n> If I were to create Identity and Activity for the Tags table, would I be\n> creating two separate tables called TagActivities and TagIdentities?\n\nThat's what I'm talking about, and you'd have a foreign-key constraint \nto make sure activity refers to a real tag identity. 
Again, I'm not \nsaying you *do* want to do this, just that you'll need to think about it.\n\n> Currently, I'm not sure how I'll analysis the data for Tags. I know that I\n> want to do the bigger font if it is popular and smaller font if its not.\n> Hmm, would like to see examples of other websites that utilized Tags tables\n> to see how they implemented this function. I was thinking of adding tagcount\n> (popularity) for each user within the user definition table.\n\nFor this particular case, you'll almost certainly want to cache the \nresults anyway. The popularity isn't going to change that fast, and \npresumably you'll only want to categorise them as VERY BIG, Big, normal \netc. I assume asp.net allows you to cache this sort of information somehow.\n\n>>>> You have cars which have tags and planes which have tags. Tagging a\n>>>> plane is not the same as tagging a car. Either you confuse that issue,\n>>>> or you want separate tables to track each relationship.\n>>> Hmm, so if I have a tag called \"Saab\" and a user clicks on Saab, then\n>>> information from both Cars and Planes table would appear.\n>> Well, if you UNION them, yes. Of course you'll need to find columns that\n>> make sense across all types. If planes have e.g. \"wingspan\" then you'll\n>> need to add 'not applicable'::text in the car-related subquery.\n> \n> Hmm, currently I can't visualize the query, again, it would help if I can\n> see the data to see what you mean. If planes table had a tag called\n> wingspan, wouldn't the query just not show any value for the field so it\n> wouldn't need 'not applicable' in the car-related subquery? Not sure really.\n\nSorry - I'm trying to say that if you UNION together several queries \nthey all need to have the same columns. So - if one subquery doesn't \nhave that column you'll need to provide a \"not applicable\" value instead.\n\nBest of luck with the application, and don't forget to cache query \nresults when they don't change often. It'll boost performance quite a bit.\n\nP.S. - try the \"general\" mailing list if you want to discuss this sort \nof thing some more. This one is really supposed to be \nperformance-related questions only.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jul 2007 15:58:31 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" }, { "msg_contents": "Jay Kang wrote:\n> Hello,\n> \n> I'm currently trying to decide on a database design for tags in my web\n> 2.0 application. The problem I'm facing is that I have 3 separate tables\n> i.e. cars, planes, and schools. All three tables need to interact with\n> the tags, so there will only be one universal set of tags for the three\n> tables.\n\nIt strikes me that some tsearch2 ts_vector like datatype might\nwork well for this; depending on the types of queries you're doing.\n", "msg_date": "Mon, 30 Jul 2007 13:40:31 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on Tags table schema" } ]
[ { "msg_contents": "Hi Dimitri,\n\nCan you post some experimental evidence that these settings matter?\n\nAt this point we have several hundred terabytes of PG databases running on ZFS, all of them setting speed records for data warehouses.\n\nWe did testing on these settings last year on S10U2, perhaps things have changed since then.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tDimitri [mailto:[email protected]]\nSent:\tMonday, July 30, 2007 05:26 PM Eastern Standard Time\nTo:\tLuke Lonergan\nCc:\tJosh Berkus; [email protected]; Marc Mamin\nSubject:\tRe: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...\n\nLuke,\n\nZFS tuning is not coming from general suggestion ideas, but from real\npractice...\n\nSo,\n - limit ARC is the MUST for the moment to keep your database running\ncomfortable (specially DWH!)\n - 8K blocksize is chosen to read exactly one page when PG ask to\nread one page - don't mix it with prefetch! when prefetch is detected,\nZFS will read next blocks without any demand from PG; but otherwise\nwhy you need to read more pages each time PG asking only one?...\n - prefetch of course not needed for OLTP, but helps on OLAP/DWH, agree :)\n\nRgds,\n-Dimitri\n\n\nOn 7/22/07, Luke Lonergan <[email protected]> wrote:\n> Josh,\n>\n> On 7/20/07 4:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n>\n> > There are some specific tuning parameters you need for ZFS or performance\n> > is going to suck.\n> >\n> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n> > (scroll down to \"PostgreSQL\")\n> > http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> > http://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n> >\n> > You also don't say anything about what kind of workload you're running.\n>\n>\n> I think we're assuming that the workload is OLTP when putting these tuning\n> guidelines forward. Note that the ZFS tuning guidance referred to in this\n> bug article recommend \"turning vdev prefetching off\" for \"random I/O\n> (databases)\". This is exactly the opposite of what we should do for OLAP\n> workloads.\n>\n> Also, the lore that setting recordsize on ZFS is mandatory for good database\n> performance is similarly not appropriate for OLAP work.\n>\n> If the workload is OLAP / Data Warehousing, I'd suggest ignoring all of the\n> tuning information from Sun that refers generically to \"database\". The\n> untuned ZFS performance should be far better in those cases. 
Specifically,\n> these three should be ignored:\n> - (ignore this) limit ARC memory use\n> - (ignore this) set recordsize to 8K\n> - (ignore this) turn off vdev prefetch\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\nRe: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...\n\n\n\nHi Dimitri,\n\nCan you post some experimental evidence that these settings matter?\n\nAt this point we have several hundred terabytes of PG databases running on ZFS, all of them setting speed records for data warehouses.\n\nWe did testing on these settings last year on S10U2, perhaps things have changed since then.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Dimitri [mailto:[email protected]]\nSent:   Monday, July 30, 2007 05:26 PM Eastern Standard Time\nTo:     Luke Lonergan\nCc:     Josh Berkus; [email protected]; Marc Mamin\nSubject:        Re: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...\n\nLuke,\n\nZFS tuning is not coming from general suggestion ideas, but from real\npractice...\n\nSo,\n  - limit ARC is the MUST for the moment to keep your database running\ncomfortable (specially DWH!)\n  - 8K blocksize is chosen to read exactly one page when PG ask to\nread one page - don't mix it with prefetch! when prefetch is detected,\nZFS will read next blocks without any demand from PG; but otherwise\nwhy you need to read more  pages each time PG asking only one?...\n  - prefetch of course not needed for OLTP, but helps on OLAP/DWH, agree :)\n\nRgds,\n-Dimitri\n\n\nOn 7/22/07, Luke Lonergan <[email protected]> wrote:\n> Josh,\n>\n> On 7/20/07 4:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n>\n> > There are some specific tuning parameters you need for ZFS or performance\n> > is going to suck.\n> >\n> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n> > (scroll down to \"PostgreSQL\")\n> > http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> > http://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n> >\n> > You also don't say anything about what kind of workload you're running.\n>\n>\n> I think we're assuming that the workload is OLTP when putting these tuning\n> guidelines forward.  Note that the ZFS tuning guidance referred to in this\n> bug article recommend \"turning vdev prefetching off\" for \"random I/O\n> (databases)\".  This is exactly the opposite of what we should do for OLAP\n> workloads.\n>\n> Also, the lore that setting recordsize on ZFS is mandatory for good database\n> performance is similarly not appropriate for OLAP work.\n>\n> If the workload is OLAP / Data Warehousing, I'd suggest ignoring all of the\n> tuning information from Sun that refers generically to \"database\".  The\n> untuned ZFS performance should be far better in those cases.  
Specifically,\n> these three should be ignored:\n> - (ignore this) limit ARC memory use\n> - (ignore this) set recordsize to 8K\n> - (ignore this) turn off vdev prefetch\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>        choose an index scan if your joining column's datatypes do not\n>        match\n>", "msg_date": "Mon, 30 Jul 2007 17:41:38 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." }, { "msg_contents": "Hi Luke,\n\nOn the same page of Solaris internals wiki you may find links to the\nstudy with db_STRESS benchmark (done on UFS and ZFS with PostgreSQL,\nMySQL and Oracle (well, Oracle results are removed, but at least I may\nsay it entered into the same tuning as PgSQL). Tests were done on\nSol10u3 (as well you may find any other platform details in report\ndocument)...\n\nAlso, if block size adjustment is less or more transparent (don't\nread 32K if you need only 8K - with huge data volume you'll simply\nwaste your cache; in case you're doing full scan - leave prefetch\nalgorithm to work for you); probably ARC (cache) limitation need more\nlight. Well, I even cannot say there is any problem, etc. with it - it\njust has too much aggressive implementation :)) If all your running\nprograms fitting into 1GB of RAM - you may leave ARC size by default\n(leaves 1GB free of system RAM). Otherwise, you should limit ARC to\nkeep your workload execution comfortable: ARC allocating memory very\nquickly and every time your program need more RAM - it entering into\nconcurrency with ARC... In my tests I observed short workload freezes\nduring such periods and I did not like it too much :)) specially with\nhigh connection numbers :))\n\nwell, we may spend hours to discuss :) (sorry to be short, I have a\nvery limited mail access for the moment)...\n\nHowever, ZFS is improving all the time and works better and better\nwith every Solaris release, so probably all current tuning will be\ndifferent or obsolete at the end of this year :))\n\nBTW, forgot to mention, you'll need Solaris 10u4 or at least 10u3 but\nwith all recent patches applied to run M8000 on full power.\n\nBest regards!\n-Dimitri\n\n\nOn 7/30/07, Luke Lonergan <[email protected]> wrote:\n> Hi Dimitri,\n>\n> Can you post some experimental evidence that these settings matter?\n>\n> At this point we have several hundred terabytes of PG databases running on\n> ZFS, all of them setting speed records for data warehouses.\n>\n> We did testing on these settings last year on S10U2, perhaps things have\n> changed since then.\n>\n> - Luke\n>\n> Msg is shrt cuz m on ma treo\n>\n> -----Original Message-----\n> From: \tDimitri [mailto:[email protected]]\n> Sent:\tMonday, July 30, 2007 05:26 PM Eastern Standard Time\n> To:\tLuke Lonergan\n> Cc:\tJosh Berkus; [email protected]; Marc Mamin\n> Subject:\tRe: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...\n>\n> Luke,\n>\n> ZFS tuning is not coming from general suggestion ideas, but from real\n> practice...\n>\n> So,\n> - limit ARC is the MUST for the moment to keep your database running\n> comfortable (specially DWH!)\n> - 8K blocksize is chosen to read exactly one page when PG ask to\n> read one page - don't mix it with prefetch! 
when prefetch is detected,\n> ZFS will read next blocks without any demand from PG; but otherwise\n> why you need to read more pages each time PG asking only one?...\n> - prefetch of course not needed for OLTP, but helps on OLAP/DWH, agree :)\n>\n> Rgds,\n> -Dimitri\n>\n>\n> On 7/22/07, Luke Lonergan <[email protected]> wrote:\n> > Josh,\n> >\n> > On 7/20/07 4:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n> >\n> > > There are some specific tuning parameters you need for ZFS or\n> performance\n> > > is going to suck.\n> > >\n> > > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n> > > (scroll down to \"PostgreSQL\")\n> > > http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp\n> > > http://bugs.opensolaris.org/view_bug.do?bug_id=6437054\n> > >\n> > > You also don't say anything about what kind of workload you're running.\n> >\n> >\n> > I think we're assuming that the workload is OLTP when putting these tuning\n> > guidelines forward. Note that the ZFS tuning guidance referred to in this\n> > bug article recommend \"turning vdev prefetching off\" for \"random I/O\n> > (databases)\". This is exactly the opposite of what we should do for OLAP\n> > workloads.\n> >\n> > Also, the lore that setting recordsize on ZFS is mandatory for good\n> database\n> > performance is similarly not appropriate for OLAP work.\n> >\n> > If the workload is OLAP / Data Warehousing, I'd suggest ignoring all of\n> the\n> > tuning information from Sun that refers generically to \"database\". The\n> > untuned ZFS performance should be far better in those cases.\n> Specifically,\n> > these three should be ignored:\n> > - (ignore this) limit ARC memory use\n> > - (ignore this) set recordsize to 8K\n> > - (ignore this) turn off vdev prefetch\n> >\n> > - Luke\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n>\n", "msg_date": "Wed, 1 Aug 2007 00:14:53 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 64 CPUs, 128 GB RAM..." } ]
[ { "msg_contents": "In a followup to a question I put forward here on performance which I \ntraced to the \"stats\" bug (and fixed it). Now I'm trying to optimize \nthat query and....... I'm getting confused fast...\n\nI have the following (fairly complex) statement which is run with some \nfrequency:\n\nselect post.forum, post.subject, post.replied from post where toppost = \n1 and (replied > (select lastview from forumlog where login='someone' \nand forum=post.forum and number is null)) is not false AND (replied > \n(select lastview from forumlog where login='someone' and \nforum=post.forum and number=post.number)) is not false order by pinned \ndesc, replied desc;\n\nThis gives me exactly what I'm looking for BUT can be quite slow.\n\nThe \"forumlog\" table has one tuple for each post and user; it has the \nfields \"forum\", \"number\", \"login\" and \"lastview\". The \"post\" items have \na \"forum\", \"number\" and \"replied\" field (which is used to match the \n\"lastview\" one.) \n\nWhen you look at a \"post\" (which may have replies) the application \nupdates your existing entry in that table if there is one, or INSERTs a \nnew tuple if not.\n\nTherefore, for each post you have viewed, there is a tuple in the \n\"forumlog\" table which represents the last time you looked at that item.\n\nThe problem is that for a person who has NOT visited a specific thread \nof discussion, there is no \"forumlog\" entry for that person and post in \nthe table. Thus, to get all posts which (1) you've not seen at all, or \n(2) you've seen but someone has added to since you saw them, the above \ncomplex query is what I've come up with; there may be a \"null\" table \nentry which a \"wildcard\" match if its present - if there is no match \nthen the item also must treated as new. The above statement works - but \nits slow.\n\nThe following query is VERY fast but only returns those in which there \nIS an entry in the table (e.g. you've visited the item at least once)\n\nselect post.forum, post.subject, post.replied from post, forumlog where \npost.number = forumlog.number and post.toppost = 1 and post.replied > \nforumlog.lastview and forumlog.login='someone' order by pinned desc, \nreplied desc;\n\nWhat I haven't been able to figure out is how to structure a query that \nis both fast and will return the posts for which you DO NOT have a \nmatching entry in the \"forumlog\" table for the specific post but DO \neither (1) match the \"null\" number entry (that is, they're posted later \nthan that) OR (2) have no match at all. (The first statement matches \nthese other two cases)\n\nAny ideas? (Its ok if that query(s) are separate; in other words, its \ncool if I have to execute two or even three queries and get the results \nseparately - in fact, that might be preferrable in some circumstances)\n\nIdeas?\n\n-- \nKarl Denninger ([email protected])\nhttp://www.denninger.net\n\n\n\n\n%SPAMBLOCK-SYS: Matched [@postgresql.org+], message ok\n", "msg_date": "Mon, 30 Jul 2007 20:08:44 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Query optimization...." } ]
[ { "msg_contents": "Hello list,\n\nI have a problem with a simple count query on a pgsql 8.2.3 server.\n\nSELECT COUNT(pk_file_structure_id) FROM tbl_file_structure INNER JOIN \ntbl_file ON fk_file_id = pk_file_id WHERE lower(file_name) like lower \n('awstats%');\nUsing Explain analyze I've noticed that it makes a seq scan on \ntbl_file_structure but I have an index on fk_file_id and its \nstatistics is set to 200. I ran an analyze on both tbl_file and \ntbl_file_structure.\nThe count retrieved is 75 000 so its way lower than the total 3 834 \n059 rows.\n\nShould I raise the statistics more? Is there a rule of thumb how much \nthe statistics should be reagards to the number of rows in the table?\nCan I make my database adjust the statistics dynamically? I don't \nwant to go around to my customers changing statistics every time the \ntables starts to fill up.\n\nAnyway here is the explain analyze on the slow query.\n\nEXPLAIN ANALYZE SELECT COUNT(pk_file_structure_id) FROM \ntbl_file_structure INNER JOIN tbl_file ON fk_file_id = pk_file_id \nWHERE lower(file_name) like lower('awstats%');\n\n\"Aggregate (cost=172512.17..172512.18 rows=1 width=8) (actual \ntime=30316.316..30316.317 rows=1 loops=1)\"\n\" -> Hash Join (cost=12673.69..171634.39 rows=351110 width=8) \n(actual time=1927.730..30191.260 rows=75262 loops=1)\"\n\" Hash Cond: (tbl_file_structure.fk_file_id = \ntbl_file.pk_file_id)\"\n\" -> Seq Scan on tbl_file_structure (cost=0.00..80537.59 \nrows=3834059 width=16) (actual time=10.056..14419.662 rows=3834059 \nloops=1)\"\n\" -> Hash (cost=11999.34..11999.34 rows=39868 width=8) \n(actual time=1896.859..1896.859 rows=39959 loops=1)\"\n\" -> Bitmap Heap Scan on tbl_file \n(cost=1157.12..11999.34 rows=39868 width=8) (actual \ntime=457.867..1779.792 rows=39959 loops=1)\"\n\" Filter: (lower((file_name)::text) ~~ 'awstats \n%'::text)\"\n\" -> Bitmap Index Scan on tbl_file_idx \n(cost=0.00..1147.15 rows=35881 width=0) (actual time=450.469..450.469 \nrows=39959 loops=1)\"\n\" Index Cond: ((lower((file_name)::text) \n~>=~ 'awstats'::character varying) AND (lower((file_name)::text) ~<~ \n'awstatt'::character varying))\"\n\"Total runtime: 30316.739 ms\"\n\nCould this have something to do with low settings in postgresql.conf?\nI haven't tweaked any settings in postgresql.conf yet.\n\nPlease help,\nRegards, henke \n", "msg_date": "Tue, 31 Jul 2007 10:23:38 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Seq scan on join table despite index and high statistics" } ]
[ { "msg_contents": "Hi,\n\nI have found under\n\n http://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql.html#PLPGSQL-OVERVIEW\n\n Note: The PL/pgSQL EXECUTE statement is not related to the EXECUTE\n statement supported by the PostgreSQL server. The server's EXECUTE\n statement cannot be used within PL/pgSQL functions (and is not needed).\n\nI'm especially stumbling over the \"is not needed\" part. My plan\nis to write a server side function (either SQL or pgsql) that wraps\nthe output of a PREPAREd statement but I have no idea how to do this.\n\nThe final task is to obtain some XML for of my data via a simple shell script\nthat contains\n\n psql -t MyDatabase -c 'SELECT * FROM MyFunction ($1, $2);'\n\nThe task of MyFunction($1,$2) is to wrap up the main data into an XML\nheader (just some text like\n <?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n ...\n) around the real data that will be obtained via a PREPAREd statement that is\ndeclared like this\n\n PREPARE xml_data(int, int) AS ( SELECT ... WHERE id = $1 AND source = $2 );\n\nwhere \"...\" stands for wrapping the output into xml format.\n\nI don't know whether this is a reasonable way. I know how to solve this\nproblem when using a pgsql function and preparing the output as a text\nstring but I learned that PREPAREd statements might be much more clever\nperformance wise and thus I wonder whether I could do it this way.\n\nKind regards and thanks for any help\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n", "msg_date": "Tue, 31 Jul 2007 16:10:03 +0200 (CEST)", "msg_from": "Andreas Tille <[email protected]>", "msg_from_op": true, "msg_subject": "Using EXECUTE in a function" }, { "msg_contents": "\nHello,\nI am getting following error from my application. Can any body tell me\nhow to find the process name and transaction details when the deadlock\noccurred? \n\nThis problem did not occur consistently. \n\nError log\n\n2007-07-30 19:09:12,140 ERROR [se.em.asset.persistence.AssetUpdate]\nSQLException calling procedure{? 
= call update_asset_dependents(?,?,?)}\nfor asset id 36\n\norg.postgresql.util.PSQLException: ERROR: deadlock detected\n\n Detail: Process 21172 waits for ShareLock on transaction 5098759;\nblocked by process 21154.\n\nProcess 21154 waits for ShareLock on transaction 5098760; blocked by\nprocess 21172.\n\n at\norg.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecu\ntorImpl.java:1548)\n\n at\norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImp\nl.java:1316)\n\n at\norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:\n191)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Stateme\nnt.java:452)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdb\nc2Statement.java:351)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Stateme\nnt.java:344)\n\n at\norg.jboss.resource.adapter.jdbc.CachedPreparedStatement.execute(CachedPr\neparedStatement.java:216)\n\n at\norg.jboss.resource.adapter.jdbc.WrappedPreparedStatement.execute(Wrapped\nPreparedStatement.java:209)\n\n at\nse.em.asset.persistence.AssetUpdate.callProcedure(AssetUpdate.java:1751)\n\n at\nse.em.asset.persistence.AssetUpdate.updateAsset(AssetUpdate.java:1028)\n\n at\nse.em.asset.service.AssetService.updateAsset(AssetService.java:3843)\n\n at\nse.em.asset.service.AssetService.process(AssetService.java:1042)\n\n at sun.reflect.GeneratedMethodAccessor669.invoke(Unknown Source)\n\n at\nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor\nImpl.java:25)\n\n at java.lang.reflect.Method.invoke(Method.java:585)\n\n at\nse.em.framework.service.ServiceAbstract.process(ServiceAbstract.java:163\n)\n\n at\nse.em.framework.service.ServiceAbstract.process(ServiceAbstract.java:58)\n\n at\nse.em.commwebservice.webservice.AssetDataHandler.getandCallService(Asset\nDataHandler.java:1810)\n\n at\nse.em.commwebservice.webservice.AssetDataHandler.run(AssetDataHandler.ja\nva:487)\nThanks\nRegards\nSachchida N Ojha\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andreas\nTille\nSent: Tuesday, July 31, 2007 10:10 AM\nTo: [email protected]\nSubject: [PERFORM] Using EXECUTE in a function\n\nHi,\n\nI have found under\n\n \nhttp://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/pl\npgsql.html#PLPGSQL-OVERVIEW\n\n Note: The PL/pgSQL EXECUTE statement is not related to the\nEXECUTE\n statement supported by the PostgreSQL server. The server's\nEXECUTE\n statement cannot be used within PL/pgSQL functions (and is\nnot needed).\n\nI'm especially stumbling over the \"is not needed\" part. My plan\nis to write a server side function (either SQL or pgsql) that wraps\nthe output of a PREPAREd statement but I have no idea how to do this.\n\nThe final task is to obtain some XML for of my data via a simple shell\nscript\nthat contains\n\n psql -t MyDatabase -c 'SELECT * FROM MyFunction ($1, $2);'\n\nThe task of MyFunction($1,$2) is to wrap up the main data into an XML\nheader (just some text like\n <?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n ...\n) around the real data that will be obtained via a PREPAREd statement\nthat is\ndeclared like this\n\n PREPARE xml_data(int, int) AS ( SELECT ... WHERE id = $1 AND source\n= $2 );\n\nwhere \"...\" stands for wrapping the output into xml format.\n\nI don't know whether this is a reasonable way. 
I know how to solve this\nproblem when using a pgsql function and preparing the output as a text\nstring but I learned that PREPAREd statements might be much more clever\nperformance wise and thus I wonder whether I could do it this way.\n\nKind regards and thanks for any help\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n", "msg_date": "Tue, 31 Jul 2007 11:00:48 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deadlock detected when calling function (Call function_name)" }, { "msg_contents": "Sachchida Ojha wrote:\n> \n> Hello,\n> I am getting following error from my application. Can any body tell me\n> how to find the process name and transaction details when the deadlock\n> occurred? \n> \n> This problem did not occur consistently. \n> \n> Error log\n> \n> 2007-07-30 19:09:12,140 ERROR [se.em.asset.persistence.AssetUpdate]\n> SQLException calling procedure{? = call update_asset_dependents(?,?,?)}\n> for asset id 36\n> \n> org.postgresql.util.PSQLException: ERROR: deadlock detected\n> \n> Detail: Process 21172 waits for ShareLock on transaction 5098759;\n> blocked by process 21154.\n> Process 21154 waits for ShareLock on transaction 5098760; blocked by\n> process 21172.\n\nWhat Postgres version is this?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 31 Jul 2007 11:28:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deadlock detected when calling function (Call\n\tfunction_name)" }, { "msg_contents": "8.2.3\n\nThanks\nRegards\nSachchida N Ojha\[email protected]\nSecure Elements Incorporated\n198 Van Buren Street, Suite 110\nHerndon Virginia 20170-5338 USA\nhttp://www.secure-elements.com/\n800-709-5011 Main\n703-709-2168 Direct\n703-709-2180 Fax\nThis email message and any attachment to this email message is intended\nonly for the use of the addressee(s) named above. If the reader of this\nmessage is not the intended recipient or the employee or agent\nresponsible for delivering the message to the intended recipient(s),\nplease note that any distribution or copying of this communication is\nstrictly prohibited. If you have received this email in error, please\nnotify me immediately and delete this message. Please note that if this\nemail contains a forwarded message or is a reply to a prior message,\nsome or all of the contents of this message or any attachments may not\nhave been produced by the sender.\n\n-----Original Message-----\nFrom: Alvaro Herrera [mailto:[email protected]] \nSent: Tuesday, July 31, 2007 11:28 AM\nTo: Sachchida Ojha\nCc: Andreas Tille; [email protected]\nSubject: Re: [PERFORM] deadlock detected when calling function\n(Callfunction_name)\n\nSachchida Ojha wrote:\n> \n> Hello,\n> I am getting following error from my application. Can any body tell me\n> how to find the process name and transaction details when the deadlock\n> occurred? \n> \n> This problem did not occur consistently. \n> \n> Error log\n> \n> 2007-07-30 19:09:12,140 ERROR [se.em.asset.persistence.AssetUpdate]\n> SQLException calling procedure{? 
= call\nupdate_asset_dependents(?,?,?)}\n> for asset id 36\n> \n> org.postgresql.util.PSQLException: ERROR: deadlock detected\n> \n> Detail: Process 21172 waits for ShareLock on transaction 5098759;\n> blocked by process 21154.\n> Process 21154 waits for ShareLock on transaction 5098760; blocked by\n> process 21172.\n\nWhat Postgres version is this?\n\n-- \nAlvaro Herrera\nhttp://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 31 Jul 2007 12:38:51 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deadlock detected when calling function (Callfunction_name)" }, { "msg_contents": "On 7/31/07, Andreas Tille <[email protected]> wrote:\nhttp://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql.html#PLPGSQL-OVERVIEW\n>\n> Note: The PL/pgSQL EXECUTE statement is not related to the EXECUTE\n> statement supported by the PostgreSQL server. The server's EXECUTE\n> statement cannot be used within PL/pgSQL functions (and is not needed).\n\nIf I read the documentation correctly, EXECUTE is not needed because\nquery plans are generally cached within pl/pgsql after the first\nexecution of the function.\n\n> I'm especially stumbling over the \"is not needed\" part. My plan\n> is to write a server side function (either SQL or pgsql) that wraps\n> the output of a PREPAREd statement but I have no idea how to do this.\n>\n> The final task is to obtain some XML for of my data via a simple shell script\n> that contains\n>\n> psql -t MyDatabase -c 'SELECT * FROM MyFunction ($1, $2);'\n>\n> The task of MyFunction($1,$2) is to wrap up the main data into an XML\n> header (just some text like\n> <?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n> ...\n> ) around the real data that will be obtained via a PREPAREd statement that is\n> declared like this\n>\n> PREPARE xml_data(int, int) AS ( SELECT ... WHERE id = $1 AND source = $2 );\n>\n> where \"...\" stands for wrapping the output into xml format.\n>\n> I don't know whether this is a reasonable way. I know how to solve this\n> problem when using a pgsql function and preparing the output as a text\n> string but I learned that PREPAREd statements might be much more clever\n> performance wise and thus I wonder whether I could do it this way.\n\nprepared statements are the fastest possible way to execute queries\nbut generally that extra speed is not measurable or only useful under\nspecific conditions. also, prepared statements can't be folded into\nqueries the way functions can:\n\nselect xml_data(foo, bar) from baz;\n\nso, I'd stick with the function approach (I think that's what you are\nasking). If you are doing XML work you may be interested in the\nupcoming xml features of 8.3:\n\nhttp://developer.postgresql.org/pgdocs/postgres/functions-xml.html\n\nmerlin\n", "msg_date": "Wed, 1 Aug 2007 08:58:50 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using EXECUTE in a function" } ]
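On the original question in this thread, a rough sketch of the function approach Merlin describes, for servers new enough for dollar quoting (8.0 or later); the table and column names below (mytable, xml_fragment) are only placeholders for the query that was elided from the PREPARE statement:

    CREATE OR REPLACE FUNCTION myfunction(integer, integer)
    RETURNS text AS $$
    DECLARE
        body text;
    BEGIN
        -- stand-in for the real SELECT; plpgsql caches this plan after the
        -- first call in a session, which is why PREPARE/EXECUTE is "not needed"
        SELECT xml_fragment INTO body
          FROM mytable
         WHERE id = $1 AND source = $2;
        RETURN '<?xml version="1.0" encoding="ISO-8859-1"?>' || coalesce(body, '');
    END;
    $$ LANGUAGE plpgsql;

It can then be called exactly as in the shell script from the first message, e.g. psql -t MyDatabase -c 'SELECT myfunction(1, 2);'.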
[ { "msg_contents": "\nHello,\n\nI would like to better understand the semantics of the statistics shown in PostgreSQL Logs. For example, in the report:\n\nDETAIL: ! system usage stats:\n ! 0.000100 elapsed 0.000000 user 0.000000 system sec\n ! [0.016997 user 0.006998 sys total]\n ! 0/0 [0/0] filesystem blocks in/out\n ! 0/2 [0/1301] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 0/0 [6/20] voluntary/involuntary context switches\n ! buffer usage stats:\n ! Shared blocks: 1 read, 0 written, buffer hit rate = 0.00%\n ! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n ! Direct blocks: 0 read, 0 written\n\nWhat each element of this report mean?\n\nmany thanks. \n[Camilo Porto]\n\n_________________________________________________________________\nReceba GRÁTIS as mensagens do Messenger no seu celular quando você estiver offline. Conheça o MSN Mobile!\nhttp://mobile.live.com/signup/signup2.aspx?lc=pt-br", "msg_date": "Tue, 31 Jul 2007 14:35:56 +0000", "msg_from": "Camilo Porto <[email protected]>", "msg_from_op": true, "msg_subject": "Semantics of PostgreSQL Server Log Stats" }, { "msg_contents": "Camilo Porto wrote:\n> \n> Hello,\n> \n> I would like to better understand the semantics of the statistics shown in PostgreSQL Logs. For example, in the report:\n> \n> DETAIL: ! system usage stats:\n> ! 0.000100 elapsed 0.000000 user 0.000000 system sec\n> ! [0.016997 user 0.006998 sys total]\n> ! 0/0 [0/0] filesystem blocks in/out\n> ! 0/2 [0/1301] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [6/20] voluntary/involuntary context switches\n\nThe above are from getrusage(). See the getrusage manual page.\n\n> ! buffer usage stats:\n> ! Shared blocks: 1 read, 0 written, buffer hit rate = 0.00%\n> ! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n> ! Direct blocks: 0 read, 0 written\n> \n> What each element of this report mean?\n\nThis outlines the shared/local buffer I/O and file system I/O performed.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 31 Jul 2007 14:22:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Semantics of PostgreSQL Server Log Stats" } ]
[ { "msg_contents": "We have a complicated stored procedure that we run frequently. It\npegs one of our postmaster processes at 100% CPU utilization for a few\nhours. This has the unfortunate side effect of causing increased\nlatency for our other queries. We are currently planning a fix, but\nbecause of the complicated nature of this procedure it is going to\ntake some time to implement.\n\nI've noticed that if I renice the process that is running the query,\nthe other postmaster processes are able to respond to our other\nqueries in a timely fashion.\n\nMy question: Is there a way I can decrease the priority of a specific\nquery, or determine the PID of the process it is running in? I'd like\nto throw together a quick shell script if at all possible, as right\nnow I have to monitor the process manually and we'll have fixed the\nproblem long before we have the chance to implement proper database\nclustering.\n\nBryan\n", "msg_date": "Thu, 2 Aug 2007 11:02:07 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": true, "msg_subject": "cpu throttling" }, { "msg_contents": "On Thursday 02 August 2007 09:02, \"Bryan Murphy\" <[email protected]> \nwrote:\n> My question: Is there a way I can decrease the priority of a specific\n> query, or determine the PID of the process it is running in? I'd like\n> to throw together a quick shell script if at all possible, as right\n> now I have to monitor the process manually and we'll have fixed the\n> problem long before we have the chance to implement proper database\n> clustering.\n\nselect procpid from pg_stat_activity where current_query \n like '%stored_proc%' and current_query not like '%pg_stat_activity%';\n\nrequires stats_command_string to be enabled\n\nI'm surprised your operating system doesn't automatically lower the priority \nof the process, though ..\n\n-- \n\"Remember when computers were frustrating because they did exactly what\nyou told them to? That actually seems sort of quaint now.\" --J.D. Baldwin\n\n", "msg_date": "Thu, 2 Aug 2007 09:14:37 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu throttling" }, { "msg_contents": "It's a 4 processor Intel xeon machine with more than enough ram. The\nentire database can fit in memory, and while the CPU is pegged,\nnothing is chewing up I/O bandwidth, and nothing is getting swapped\nout of RAM.\n\nI'm running Debian stable with only a few tweaks to the kernel's\nmemory settings. As far as I'm aware, I have not changed anything\nthat would impact scheduling.\n\nOther queries do respond, but it's more like every couple of seconds\none query which normally takes 300ms might take 8000ms. Nothing\nterrible, but enough that our users will notice.\n\nBryam\n\nOn 8/2/07, Alan Hodgson <[email protected]> wrote:\n> On Thursday 02 August 2007 09:02, \"Bryan Murphy\" <[email protected]>\n> wrote:\n> > My question: Is there a way I can decrease the priority of a specific\n> > query, or determine the PID of the process it is running in? 
I'd like\n> > to throw together a quick shell script if at all possible, as right\n> > now I have to monitor the process manually and we'll have fixed the\n> > problem long before we have the chance to implement proper database\n> > clustering.\n>\n> select procpid from pg_stat_activity where current_query\n> like '%stored_proc%' and current_query not like '%pg_stat_activity%';\n>\n> requires stats_command_string to be enabled\n>\n> I'm surprised your operating system doesn't automatically lower the priority\n> of the process, though ..\n>\n> --\n> \"Remember when computers were frustrating because they did exactly what\n> you told them to? That actually seems sort of quaint now.\" --J.D. Baldwin\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Thu, 2 Aug 2007 13:09:55 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu throttling" }, { "msg_contents": "On Thu, Aug 02, 2007 at 09:14:37AM -0700, Alan Hodgson wrote:\n> On Thursday 02 August 2007 09:02, \"Bryan Murphy\" <[email protected]> \n> wrote:\n> > My question: Is there a way I can decrease the priority of a specific\n> > query, or determine the PID of the process it is running in? I'd like\n> > to throw together a quick shell script if at all possible, as right\n> > now I have to monitor the process manually and we'll have fixed the\n> > problem long before we have the chance to implement proper database\n> > clustering.\n> \n> select procpid from pg_stat_activity where current_query \n> like '%stored_proc%' and current_query not like '%pg_stat_activity%';\n> \n> requires stats_command_string to be enabled\n> \n> I'm surprised your operating system doesn't automatically lower the priority \n> of the process, though ..\n\nThe OS will only lower it to a certain extent.\n\nAlso, make sure you understand the concept of priority inversion before\ngoing into production with this solution.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Fri, 3 Aug 2007 17:17:42 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu throttling" } ]
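As a small extension of the pg_stat_activity lookup above, the same view also carries query_start, so the wrapper script can renice only backends that have been grinding for a while; column names are as they appear in the 8.x catalogs (procpid, current_query) and the five-minute threshold is arbitrary:

    SELECT procpid,
           now() - query_start AS runtime,
           current_query
      FROM pg_stat_activity
     WHERE current_query LIKE '%stored_proc%'
       AND current_query NOT LIKE '%pg_stat_activity%'
       AND now() - query_start > interval '5 minutes';

Each procpid returned is an ordinary operating system PID, so it can be fed straight to renice, with the priority-inversion caveat from the last message still applying.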
[ { "msg_contents": "I notice that I get different plans when I run the\nfollowing two queries that I thought would be\nidentical.\n\n select distinct test_col from mytable;\n select test_col from mytable group by test_col;\n\nAny reason why it favors one in one case but not the other?\n\n\n\nd=# explain analyze select distinct test_col from mytable;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..14927.69 rows=27731 width=4) (actual time=0.144..915.214 rows=208701 loops=1)\n -> Index Scan using \"mytable(test_col)\" on mytable (cost=0.00..14160.38 rows=306925 width=4) (actual time=0.140..575.580 rows=306925 loops=1)\n Total runtime: 1013.657 ms\n(3 rows)\n\nd=# explain analyze select test_col from mytable group by test_col;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=7241.56..7518.87 rows=27731 width=4) (actual time=609.058..745.295 rows=208701 loops=1)\n -> Seq Scan on mytable (cost=0.00..6474.25 rows=306925 width=4) (actual time=0.063..280.000 rows=306925 loops=1)\n Total runtime: 840.321 ms\n(3 rows)\n\n\nd=# select version();\n version\n----------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)\n(1 row)\n\n", "msg_date": "Thu, 02 Aug 2007 14:32:14 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Why are distinct and group by choosing different plans?" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n\n> I notice that I get different plans when I run the\n> following two queries that I thought would be\n> identical.\n>\n> select distinct test_col from mytable;\n> select test_col from mytable group by test_col;\n>\n> Any reason why it favors one in one case but not the other?\n\nI think \"distinct\" just doesn't know about hash aggregates yet. That's partly\nan oversight and partly of a \"feature\" in that it gives a convenient way to\nwrite a query which avoids them. I think it's also partly that \"distinct\" is\ntrickier to fix because it's the same codepath as \"distinct on\" which is\ndecidedly more complex than a simple \"distinct\".\n\n> d=# explain analyze select distinct test_col from mytable;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=0.00..14927.69 rows=27731 width=4) (actual time=0.144..915.214 rows=208701 loops=1)\n> -> Index Scan using \"mytable(test_col)\" on mytable (cost=0.00..14160.38 rows=306925 width=4) (actual time=0.140..575.580 rows=306925 loops=1)\n> Total runtime: 1013.657 ms\n> (3 rows)\n\nI assume you have random_page_cost dialled way down? The costs seem too low\nfor the default random_page_cost. This query would usually generate a sort\nrather than an index scan.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 02 Aug 2007 23:37:43 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why are distinct and group by choosing different plans?" 
}, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> I think \"distinct\" just doesn't know about hash aggregates yet. That's partly\n> an oversight and partly of a \"feature\" in that it gives a convenient way to\n> write a query which avoids them. I think it's also partly that \"distinct\" is\n> trickier to fix because it's the same codepath as \"distinct on\" which is\n> decidedly more complex than a simple \"distinct\".\n\nIt's not an oversight :-(. But the DISTINCT/DISTINCT ON code is old,\ncrufty, and tightly entwined with ORDER BY processing. It'd be nice to\nclean it all up someday, but the effort seems a bit out of proportion\nto the reward...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Aug 2007 20:48:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why are distinct and group by choosing different plans? " } ]
[ { "msg_contents": "Hello everybody,\n\nas I'm new to this list I hope that it is the right place to post this and\nalso the right format, so if I'm committing an error, I apologize in\nadvance.\n\nFirst the background of my request:\n\nI'm currently employed by an enterprise which has approx. 250 systems\ndistributed worldwide which are sending telemetric data to the main\nPostgreSQL.\nThe remote systems are generating about 10 events per second per system\nwhich accumulates to about 2500/tps.\nThe data is stored for about a month before it is exported and finally\ndeleted from the database.\nOn the PostgreSQL server are running to databases one with little traffic\n(about 750K per day) and the telemetric database with heavy write operations\nall around the day (over 20 million per day).\nWe already found that the VACUUM process takes excessively long and as\nconsequence the database is Vacuumed permanently.\n\nThe hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM and 2x\n250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated to database.\nOS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1) with the libpq\nfrontend library.\n\nNow the problem:\n\nThe problem we are experiencing is that our queries are slowing down\ncontinuously even if we are performing queries on the index which is the\ntimestamp of the event, a simple SELECT query with only a simple WHERE\nclause (< or >) takes very long to complete. So the database becomes\nunusable for production use as the data has to be retrieved very quickly if\nwe want to act based on the telemetric data.\n\nSo I'm asking me if it is useful to update to the actual 8.2 version and if\nwe could experience performance improvement only by updating.\n\nThank you for your answers,\nSven Clement\n\nHello everybody,as I'm new to this list I hope that it is the right place to post this and also the right format, so if I'm committing an error, I apologize in advance.First the background of my request:\nI'm currently employed by an enterprise which has approx. 250 systems distributed worldwide which are sending telemetric data to the main PostgreSQL.The remote systems are generating about 10 events per second per system which accumulates to about 2500/tps.\nThe data is stored for about a month before it is exported and finally deleted from the database.On the PostgreSQL server are running to databases one with little traffic (about 750K per day) and the telemetric database with heavy write operations all around the day (over 20 million per day).\nWe already found that the VACUUM process takes excessively long and as consequence the database is Vacuumed permanently.The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated to database.\nOS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1) with the libpq frontend library.Now the problem:The problem we are experiencing is that our queries are slowing down continuously even if we are performing queries on the index which is the timestamp of the event, a simple SELECT query with only a simple WHERE clause (< or >) takes very long to complete. 
So the database becomes unusable for production use as the data has to be retrieved very quickly if we want to act based on the telemetric data.\nSo I'm asking me if it is useful to update to the actual 8.2 version and if we could experience performance improvement only by updating.Thank you for your answers,Sven Clement", "msg_date": "Fri, 3 Aug 2007 06:52:38 -0700", "msg_from": "\"Sven Clement\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "On 3 Aug 2007 at 6:52, Sven Clement wrote:\n\n> Hello everybody,\n> \n> as I'm new to this list I hope that it is the right place to post this\n> and also the right format, so if I'm committing an error, I apologize\n> in advance.\n> \n> First the background of my request:\n> \n> I'm currently employed by an enterprise which has approx. 250 systems\n> distributed worldwide which are sending telemetric data to the main\n> PostgreSQL. The remote systems are generating about 10 events per\n> second per system which accumulates to about 2500/tps. The data is\n> stored for about a month before it is exported and finally deleted\n> from the database. On the PostgreSQL server are running to databases\n> one with little traffic (about 750K per day) and the telemetric\n> database with heavy write operations all around the day (over 20\n> million per day). We already found that the VACUUM process takes\n> excessively long and as consequence the database is Vacuumed\n> permanently.\n> \n> The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM\n> and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated\n> to database. OS is Debian 3.1 Sarge with PostgreSQL 7.4.7\n> (7.4.7-6sarge1) with the libpq frontend library.\n> \n> Now the problem:\n> \n> The problem we are experiencing is that our queries are slowing down\n> continuously even if we are performing queries on the index which is\n> the timestamp of the event, a simple SELECT query with only a simple\n> WHERE clause (< or >) takes very long to complete. So the database\n> becomes unusable for production use as the data has to be retrieved\n> very quickly if we want to act based on the telemetric data.\n\nHave you confirmed via explain (or explain analyse) that the index is \nbeing used?\n\n> So I'm asking me if it is useful to update to the actual 8.2 version\n> and if we could experience performance improvement only by updating.\n\nThere are other benefits from upgrading, but you may be able to solve \nthis problem without upgrading.\n\n-- \nDan Langille - http://www.langille.org/\nAvailable for hire: http://www.freebsddiary.org/dan_langille.php\n\n\n", "msg_date": "Fri, 03 Aug 2007 10:17:58 -0400", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "On Fri, 2007-08-03 at 06:52 -0700, Sven Clement wrote:\n> Hello everybody,\n> \n> as I'm new to this list I hope that it is the right place to post this\n> and also the right format, so if I'm committing an error, I apologize\n> in advance.\n> \n> First the background of my request: \n> \n> I'm currently employed by an enterprise which has approx. 250 systems\n> distributed worldwide which are sending telemetric data to the main\n> PostgreSQL.\n> The remote systems are generating about 10 events per second per\n> system which accumulates to about 2500/tps. 
\n> The data is stored for about a month before it is exported and finally\n> deleted from the database.\n> On the PostgreSQL server are running to databases one with little\n> traffic (about 750K per day) and the telemetric database with heavy\n> write operations all around the day (over 20 million per day). \n> We already found that the VACUUM process takes excessively long and as\n> consequence the database is Vacuumed permanently.\n> \n> The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM\n> and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated\n> to database. \n> OS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1) with the\n> libpq frontend library.\n> \n> Now the problem:\n> \n> The problem we are experiencing is that our queries are slowing down\n> continuously even if we are performing queries on the index which is\n> the timestamp of the event, a simple SELECT query with only a simple\n> WHERE clause (< or >) takes very long to complete. So the database\n> becomes unusable for production use as the data has to be retrieved\n> very quickly if we want to act based on the telemetric data. \n> \n> So I'm asking me if it is useful to update to the actual 8.2 version\n> and if we could experience performance improvement only by updating.\n> \n> Thank you for your answers,\n> Sven Clement\n\nUpgrading from 7.4.x to 8.2.x will probably give you a performance\nbenefit, yes. There have been numerous changes since the days of 7.4.\n\nBut you didn't really give any information about why the query is\nrunning slow. Specifically, could you provide the query itself, some\ninformation about the tables/indexes/foreign keys involved, and an\nEXPLAIN ANALYZE for one of the problematic queries?\n\nAlso, what kind of vacuuming regimen are you using? Just a daily cron\nmaybe? 
Are you regularly analyzing the tables?\n\n-- Mark Lewis\n", "msg_date": "Fri, 03 Aug 2007 07:20:36 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets\n\ton 7.4.2" }, { "msg_contents": "Hi,\n\nFirst thank you already for your answers, as we are working in an\nenvironment with NDA's I have first to check all the queries before I may\npublish them here, but the structure of the DB is publishable:\n\n2 Tables:\nTable: \"public.tmdata\"\n\n Column | Type | Modifiers\n------------+-----------------------------+-----------------------------\ntimestamp | timestamp without time zone |\nid | integer | default -2147483684::bigint\ndatapointid | integer | default 0\nvalue | integer | default 0\n\nIndexes:\n \"tmdata_idx1\" btree (\"timestamp\")\n\nLegend:\n-------\ntimestamp = Timeindex of the event\nid = Hostname of the system who sent the event\ndatapointid = ID of the Datapoint ( less than 100 )\nvalue = The value of the event\n\n========================================================================\n\nTable: \"public.tmdataintervalsec\"\n\n Column | Type | Modifiers\n------------+-----------------------------+-----------------------------\ntimestamp | timestamp without time zone |\nid | integer | default -2147483684::bigint\ndatapointid | integer | default 0\nmax | integer | default 0\nmin | integer | default 0\navg | integer | default 0\ncount | integer | default 0\n\nIndexes:\n \"tmdataintervalsec_idx1\" btree (\"timestamp\", id)\n\nLegend:\n-------\ntimestamp = Sets the period\nid = Hostname of the system who sent the event\ndatapointid = ID of the Datapoint ( less than 100 )\nmax = Max value for the period\nmin = Min value for the period\navg = Average of all values for the period\ncount = Number of rows used for generation of the statistic\n\nThe data for the second table is generated by the daemon who receives the\ndata and writes it to the database.\n\n\nAnd we also confirmed that the index is used by the queries.\n\nRegards,\nSven\n\nP.S: I hope the databse layout is tsill readable when you receive it... ;)\n\n2007/8/3, Mark Lewis <[email protected]>:\n>\n> On Fri, 2007-08-03 at 06:52 -0700, Sven Clement wrote:\n> > Hello everybody,\n> >\n> > as I'm new to this list I hope that it is the right place to post this\n> > and also the right format, so if I'm committing an error, I apologize\n> > in advance.\n> >\n> > First the background of my request:\n> >\n> > I'm currently employed by an enterprise which has approx. 
250 systems\n> > distributed worldwide which are sending telemetric data to the main\n> > PostgreSQL.\n> > The remote systems are generating about 10 events per second per\n> > system which accumulates to about 2500/tps.\n> > The data is stored for about a month before it is exported and finally\n> > deleted from the database.\n> > On the PostgreSQL server are running to databases one with little\n> > traffic (about 750K per day) and the telemetric database with heavy\n> > write operations all around the day (over 20 million per day).\n> > We already found that the VACUUM process takes excessively long and as\n> > consequence the database is Vacuumed permanently.\n> >\n> > The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM\n> > and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated\n> > to database.\n> > OS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1) with the\n> > libpq frontend library.\n> >\n> > Now the problem:\n> >\n> > The problem we are experiencing is that our queries are slowing down\n> > continuously even if we are performing queries on the index which is\n> > the timestamp of the event, a simple SELECT query with only a simple\n> > WHERE clause (< or >) takes very long to complete. So the database\n> > becomes unusable for production use as the data has to be retrieved\n> > very quickly if we want to act based on the telemetric data.\n> >\n> > So I'm asking me if it is useful to update to the actual 8.2 version\n> > and if we could experience performance improvement only by updating.\n> >\n> > Thank you for your answers,\n> > Sven Clement\n>\n> Upgrading from 7.4.x to 8.2.x will probably give you a performance\n> benefit, yes. There have been numerous changes since the days of 7.4.\n>\n> But you didn't really give any information about why the query is\n> running slow. Specifically, could you provide the query itself, some\n> information about the tables/indexes/foreign keys involved, and an\n> EXPLAIN ANALYZE for one of the problematic queries?\n>\n> Also, what kind of vacuuming regimen are you using? Just a daily cron\n> maybe? 
Are you regularly analyzing the tables?\n>\n> -- Mark Lewis\n>\n\n\n\n-- \nDSIGN.LU\nSven Clement\n+352 621 63 21 18\[email protected]\n\nwww.dsign.lu\n\nHi,First thank you already for your answers, as we are working in an environment with NDA's I have first to check all the queries before I may publish them here, but the structure of the DB is publishable:\n2 Tables:Table: \"public.tmdata\"  Column    |        Type          |    Modifiers------------+-----------------------------+-----------------------------timestamp   | timestamp without time zone |\nid        | integer              | default -2147483684::bigintdatapointid | integer              | default 0value       | integer              | default 0Indexes:    \"tmdata_idx1\" btree (\"timestamp\")\nLegend:-------timestamp    = Timeindex of the eventid        = Hostname of the system who sent the eventdatapointid    = ID of the Datapoint ( less than 100 )value        = The value of the event\n========================================================================Table: \"public.tmdataintervalsec\"  Column    |        Type          |    Modifiers------------+-----------------------------+-----------------------------\ntimestamp   | timestamp without time zone |id        | integer              | default -2147483684::bigintdatapointid | integer              | default 0max        | integer              | default 0min         | integer              | default 0\navg        | integer              | default 0count       | integer              | default 0Indexes:    \"tmdataintervalsec_idx1\" btree (\"timestamp\", id)Legend:-------\ntimestamp    = Sets the periodid        = Hostname of the system who sent the eventdatapointid    = ID of the Datapoint ( less than 100 )max        = Max value for the periodmin        = Min value for the period\navg        = Average of all values for the periodcount        = Number of rows used for generation of the statisticThe data for the second table is generated by the daemon who receives the data and writes it to the database.\nAnd we also confirmed that the index is used by the queries.Regards,SvenP.S: I hope the databse layout is tsill readable when you receive it... ;)2007/8/3, Mark Lewis <\[email protected]>:On Fri, 2007-08-03 at 06:52 -0700, Sven Clement wrote:\n> Hello everybody,>> as I'm new to this list I hope that it is the right place to post this> and also the right format, so if I'm committing an error, I apologize> in advance.\n>> First the background of my request:>> I'm currently employed by an enterprise which has approx. 
250 systems> distributed worldwide which are sending telemetric data to the main> PostgreSQL.\n> The remote systems are generating about 10 events per second per> system which accumulates to about 2500/tps.> The data is stored for about a month before it is exported and finally> deleted from the database.\n> On the PostgreSQL server are running to databases one with little> traffic (about 750K per day) and the telemetric database with heavy> write operations all around the day (over 20 million per day).\n> We already found that the VACUUM process takes excessively long and as> consequence the database is Vacuumed permanently.>> The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM\n> and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated> to database.> OS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1) with the> libpq frontend library.>\n> Now the problem:>> The problem we are experiencing is that our queries are slowing down> continuously even if we are performing queries on the index which is> the timestamp of the event, a simple SELECT query with only a simple\n> WHERE clause (< or >) takes very long to complete. So the database> becomes unusable for production use as the data has to be retrieved> very quickly if we want to act based on the telemetric data.\n>> So I'm asking me if it is useful to update to the actual 8.2 version> and if we could experience performance improvement only by updating.>> Thank you for your answers,> Sven Clement\nUpgrading from 7.4.x to 8.2.x will probably give you a performancebenefit, yes.  There have been numerous changes since the days of 7.4.But you didn't really give any information about why the query is\nrunning slow.  Specifically, could you provide the query itself, someinformation about the tables/indexes/foreign keys involved, and anEXPLAIN ANALYZE for one of the problematic queries?Also, what kind of vacuuming regimen are you using?  Just a daily cron\nmaybe?  Are you regularly analyzing the tables?-- Mark Lewis-- DSIGN.LUSven Clement+352 621 63 21 18\[email protected]", "msg_date": "Fri, 3 Aug 2007 08:57:11 -0700", "msg_from": "\"Sven Clement\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "Sven,\n\n> The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM\n> and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated to\n> database. OS is Debian 3.1 Sarge with PostgreSQL 7.4.7 (7.4.7-6sarge1)\n> with the libpq frontend library.\n\nNote that 7.4.7 is not the current bugfix version of 7.4.x. 
It is 5 or 6 \npatches behind.\n\n> So I'm asking me if it is useful to update to the actual 8.2 version and\n> if we could experience performance improvement only by updating.\n\n8.2 will give you a number of features which should greatly improve your \nperformance situation:\n\n1) partitioning: break up your main data table into smaller \neasier-to-maintain segments (this will require some application changes)\n\n2) VACUUM delay, which lowers the impact of vacuum on concurrent queries\n\n3) AUTOVACUUM, which helps keep your tables in reasonable maintenance.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 3 Aug 2007 10:56:23 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "Sven Clement wrote:\n> Table: \"public.tmdata\"\n...\n> id | integer | default -2147483684::bigint\n...\n> Table: \"public.tmdataintervalsec\"\n...\n> id | integer | default -2147483684::bigint\n\nNot that this directly addresses the performance issues you described,\nbut I have already seen 2 recommendations that you upgrade...\n\nWith the table definitions you posted, one of the first things I noticed\nwas that the default value for an integer column was a bigint value. I\ndid some quick 32-bit math and found that the smallest legal 32-bit\ninteger value is -2147483648, not -2147483684 (notice the last 2 numbers\nare transposed).\n\nI checked your previous post and saw that you are currently running PG\n7.4.2/7.4.7 (subject says 7.4.2, but you indicate 7.4.7 in the body of\nyour original post). I did a quick check on my 8.1.9 box using the same\nbigint default value for an integer column and received an \"integer out\nof range\" error when I attempted to use the default value.\n\nI don't know the exact workings of your system, but you'll need to watch\nout for any cases where the default value for the id columns was used.\nIf that default value was used (and was allowed by your PG version) you\nwill probably have values in the id column that are not what you'd\nexpect. I don't know how a bigint would be coerced into an integer, but\nit would probably truncate in some form which would give you positive\nvalues in the id column where you expected the smallest 32-bit integer\nvalue (i.e. -2147483648).\n\nI don't know if this was ever actually an issue (if you never rely on\nthe default value for the id column -- maybe version 7.4.7 would\ngenerate the same error if you did), but if it was, you need to look at\na couple of things before upgrading (whether to a more recent 7.4.X or\n8.2.4):\n\n1. If you do rely on the default clause for the id column, you may\nencounter the \"integer out of range\" errors with your existing codebase.\n\n2. You may have values in the id column that are supposed to represent\nthe smallest 32-bit integer that may in fact be positive integers.\n\nYou will probably want to investigate these potential issues and perform\nany necessary schema changes and data cleanup before attempting any upgrade.\n\nAgain, I'm not sure if this was ever an issue or if this issue has any\neffects on your database. I don't have any PG machines running anything\nprior to 8.1.X, so I can't really test these. 
I just saw the bigint\nvalue as a default for an integer column and it caught my eye.\n\nHope this might help you avoid some problems when upgrading.\n\nAndrew\n\n", "msg_date": "Fri, 03 Aug 2007 16:13:27 -0500", "msg_from": "Andrew Kroeger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets\n on 7.4.2" }, { "msg_contents": "Andrew Kroeger <[email protected]> writes:\n> With the table definitions you posted, one of the first things I noticed\n> was that the default value for an integer column was a bigint value. I\n> did some quick 32-bit math and found that the smallest legal 32-bit\n> integer value is -2147483648, not -2147483684 (notice the last 2 numbers\n> are transposed).\n\nOooh, good catch, but 7.4 seems to notice the overflow all right:\n\nregression=# create temp table foo(f1 int default -2147483684::bigint);\nCREATE TABLE\nregression=# insert into foo default values;\nERROR: integer out of range\nregression=# select version();\n version \n----------------------------------------------------------------\n PostgreSQL 7.4.17 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n(1 row)\n\nSo I think we can conclude that the OP never actually uses this default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Aug 2007 18:34:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2 " }, { "msg_contents": "Hi everybody,\n\nThe bigint problem was probably a typo because I had to type the entire\ndefinitions, as the server is on a vpn and I don't had access with my\nmachine where I wrote the mail, and the 7.4.2 was surely a typo... ;) I\napology...\n\nOK so beginning on Monday I will test the config on a 8.2.x to verify the\nperformance issues, but I also found some disturbing info's on the net, that\nthe index may be corrupted because of the big difference between an index\nentry which is deleted and the new value inserted afterwards, which should\nnot be an issue with a btree, but do you guys know something more about it,\nsorry I'm really good in SQL but in Postgre I'm still a beginner.\n\nWhat the version belongs, so I know that it's not the actual bug fix, but as\nit is used in a running prod system and as my employer began considering the\nmigration a year ago and so they froze the version waiting for an major\nupdate.\n\nThanks really to everybody here who already helped me a lot... Thanks!!!\nSven Clement\n\n\n2007/8/4, Tom Lane <[email protected]>:\n>\n> Andrew Kroeger <[email protected]> writes:\n> > With the table definitions you posted, one of the first things I noticed\n> > was that the default value for an integer column was a bigint value. 
I\n> > did some quick 32-bit math and found that the smallest legal 32-bit\n> > integer value is -2147483648, not -2147483684 (notice the last 2 numbers\n> > are transposed).\n>\n> Oooh, good catch, but 7.4 seems to notice the overflow all right:\n>\n> regression=# create temp table foo(f1 int default -2147483684::bigint);\n> CREATE TABLE\n> regression=# insert into foo default values;\n> ERROR: integer out of range\n> regression=# select version();\n> version\n> ----------------------------------------------------------------\n> PostgreSQL 7.4.17 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n> (1 row)\n>\n> So I think we can conclude that the OP never actually uses this default.\n>\n> regards, tom lane\n>\n\n\n\n-- \nDSIGN.LU\nSven Clement\n+352 621 63 21 18\[email protected]\n\nwww.dsign.lu\n\nHi everybody,\n \nThe bigint problem was probably a typo because I had to type the entire definitions, as the server is on a vpn and I don't had access with my machine where I wrote the mail, and the 7.4.2 was surely a typo... ;) I apology...\n\n \nOK so beginning on Monday I will test the config on a 8.2.x to verify the performance issues, but I also found some disturbing info's on the net, that the index may be corrupted because of the big difference between an index entry which is deleted and the new value inserted afterwards, which should not be an issue with a btree, but do you guys know something more about it, sorry I'm really good in SQL but in Postgre I'm still a beginner.\n\n \nWhat the version belongs, so I know that it's not the actual bug fix, but as it is used in a running prod system and as my employer began considering the migration a year ago and so they froze the version waiting for an major update.\n\n \nThanks really to everybody here who already helped me a lot... Thanks!!!\nSven Clement \n2007/8/4, Tom Lane <[email protected]>:\nAndrew Kroeger <[email protected]> writes:\n> With the table definitions you posted, one of the first things I noticed> was that the default value for an integer column was a bigint value.  I> did some quick 32-bit math and found that the smallest legal 32-bit\n> integer value is -2147483648, not -2147483684 (notice the last 2 numbers> are transposed).Oooh, good catch, but 7.4 seems to notice the overflow all right:regression=# create temp table foo(f1 int default -2147483684::bigint);\nCREATE TABLEregression=# insert into foo default values;ERROR:  integer out of rangeregression=# select version();                           version----------------------------------------------------------------\nPostgreSQL 7.4.17 on hppa-hp-hpux10.20, compiled by GCC 2.95.3(1 row)So I think we can conclude that the OP never actually uses this default.                       
regards, tom lane\n-- DSIGN.LUSven Clement+352 621 63 21 [email protected]", "msg_date": "Sat, 4 Aug 2007 22:03:02 +0200", "msg_from": "\"Sven Clement\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "Sven Clement wrote:\n> OK so beginning on Monday I will test the config on a 8.2.x to verify the\n> performance issues, but I also found some disturbing info's on the net, that\n> the index may be corrupted because of the big difference between an index\n> entry which is deleted and the new value inserted afterwards, which should\n> not be an issue with a btree, but do you guys know something more about it,\n> sorry I'm really good in SQL but in Postgre I'm still a beginner.\n\nI don't remember a bug like that. Where did you read that from?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 05 Aug 2007 17:07:35 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets\n on 7.4.2" }, { "msg_contents": "2007/8/5, Heikki Linnakangas <[email protected]>:\n>\n>\n> I don't remember a bug like that. Where did you read that from?\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nPartially I found that one in the PostgreSQL Documentation for the\n7.x.xversions under the command REINDEX where they claim that you\nshould run a\nreindex under certain circumstances and for my comprehension this says that\nwith some access pattern (as ours (major writes / one big delete per day))\nthe index may be corrupted or otherwise not really useful.\n\nSven Clement\n\n2007/8/5, Heikki Linnakangas <[email protected]>:\nI don't remember a bug like that. Where did you read that from?--  Heikki Linnakangas  EnterpriseDB   http://www.enterprisedb.com\nPartially I found that one in the PostgreSQL Documentation for the 7.x.x versions under the command REINDEX where they claim that you should run a reindex under certain circumstances and for my comprehension this says that with some access pattern (as ours (major writes / one big delete per day)) the index may be corrupted or otherwise not really useful.\nSven Clement", "msg_date": "Mon, 6 Aug 2007 00:10:01 -0700", "msg_from": "\"Sven Clement\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "On m�n, 2007-08-06 at 00:10 -0700, Sven Clement wrote:\n> \n> \n> 2007/8/5, Heikki Linnakangas <[email protected]>:\n> \n> I don't remember a bug like that. Where did you read that\n> from?\n> \n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n> \n> Partially I found that one in the PostgreSQL Documentation for the\n> 7.x.x versions under the command REINDEX where they claim that you\n> should run a reindex under certain circumstances and for my\n> comprehension this says that with some access pattern (as ours (major\n> writes / one big delete per day)) the index may be corrupted or\n> otherwise not really useful. 
\n\nyou are probably talking about index bloat, not corruption.\n\nwhen that happens, the index consumes more space that needed,\nand its effectivity is reduced, but it is not corrupted and does\nnot cause wrong results.\n\ni believe this is a lot less common now than in the 7.x days\n\ngnari\n\n\n", "msg_date": "Mon, 06 Aug 2007 09:50:25 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets\n\ton 7.4.2" }, { "msg_contents": "Sven Clement wrote:\n> Partially I found that one in the PostgreSQL Documentation for the\n> 7.x.xversions under the command REINDEX where they claim that you\n> should run a\n> reindex under certain circumstances and for my comprehension this says that\n> with some access pattern (as ours (major writes / one big delete per day))\n> the index may be corrupted or otherwise not really useful.\n\nUp to 7.3, periodical REINDEX was needed to trim down bloated indexes.\nSince 7.4, empty index pages are recycled so that's no longer necessary.\nYou can still end up with larger than necessary indexes in recent\nversions under unusual access patterns, like if you delete all but a few\nindex tuples from each index page, but it's rare in practice. And it's\nnot unbounded growth like in <= 7.3.\n\nIn any case, the indexes won't become corrupt.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 06 Aug 2007 10:51:09 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets\n on 7.4.2" }, { "msg_contents": "Ok thanks everybody for the calrification, after all now I allready learned\nsomething new... ;)\n\nMy employer is currently thinking about migration to 8.2.x because of your\nfeedback, so I think that the problem could be resolved... ;)\n\nThanks to everyone...\n\nSven Clement\n\n2007/8/6, Heikki Linnakangas <[email protected]>:\n>\n> Sven Clement wrote:\n> > Partially I found that one in the PostgreSQL Documentation for the\n> > 7.x.xversions under the command REINDEX where they claim that you\n> > should run a\n> > reindex under certain circumstances and for my comprehension this says\n> that\n> > with some access pattern (as ours (major writes / one big delete per\n> day))\n> > the index may be corrupted or otherwise not really useful.\n>\n> Up to 7.3, periodical REINDEX was needed to trim down bloated indexes.\n> Since 7.4, empty index pages are recycled so that's no longer necessary.\n> You can still end up with larger than necessary indexes in recent\n> versions under unusual access patterns, like if you delete all but a few\n> index tuples from each index page, but it's rare in practice. And it's\n> not unbounded growth like in <= 7.3.\n>\n> In any case, the indexes won't become corrupt.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nDSIGN.LU\nSven Clement\n+352 621 63 21 18\[email protected]\n\nwww.dsign.lu\n\nOk thanks everybody for the calrification, after all now I allready learned something new... ;)My employer is currently thinking about migration to 8.2.x because of your feedback, so I think that the problem could be resolved... 
;)\nThanks to everyone...Sven Clement2007/8/6, Heikki Linnakangas <[email protected]>:\nSven Clement wrote:> Partially I found that one in the PostgreSQL Documentation for the> 7.x.xversions under the command REINDEX where they claim that you> should run a> reindex under certain circumstances and for my comprehension this says that\n> with some access pattern (as ours (major writes / one big delete per day))> the index may be corrupted or otherwise not really useful.Up to 7.3, periodical REINDEX was needed to trim down bloated indexes.\nSince 7.4, empty index pages are recycled so that's no longer necessary.You can still end up with larger than necessary indexes in recentversions under unusual access patterns, like if you delete all but a few\nindex tuples from each index page, but it's rare in practice. And it'snot unbounded growth like in <= 7.3.In any case, the indexes won't become corrupt.--  Heikki Linnakangas\n  EnterpriseDB   http://www.enterprisedb.com-- DSIGN.LUSven Clement+352 621 63 21 18\[email protected]", "msg_date": "Mon, 6 Aug 2007 02:54:52 -0700", "msg_from": "\"Sven Clement\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" }, { "msg_contents": "On 8/6/07, Sven Clement <[email protected]> wrote:\n> Ok thanks everybody for the calrification, after all now I allready learned\n> something new... ;)\n>\n> My employer is currently thinking about migration to 8.2.x because of your\n> feedback, so I think that the problem could be resolved... ;)\n\nNote that whether the 8.2 migration is forthcoming, you should\ndefinitely update to the latest patch version of 7.4. The update is\npretty much painless, although I remember there being a change around\n7.4.13 that might have changed db behaviour for security reasons.\nRelease notes here:\n\nhttp://www.postgresql.org/docs/8.2/static/release-7-4-13.html\n\nNote that there are settings to work around the change in behaviour\nshould it create a problem.\n", "msg_date": "Tue, 7 Aug 2007 09:07:09 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with large telemetric datasets on 7.4.2" } ]
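
Where the thread above discusses the daily bulk DELETE and index bloat on 7.4, a short maintenance sketch may help; it assumes a hypothetical telemetry table and index (the names below are invented, not taken from the poster's schema):

    REINDEX INDEX telemetry_ts_idx;   -- rebuild the bloated index; blocks writes, so schedule off-hours
    VACUUM ANALYZE telemetry;         -- reclaim dead rows from the big delete and refresh planner statistics
    -- on 8.1 or later the effect can be measured by comparing
    --   SELECT pg_relation_size('telemetry_ts_idx');
    -- before and after the rebuild

As noted in the replies, from 7.4 onward empty index pages are recycled, so this is occasional maintenance for unusual delete patterns rather than a routine requirement.
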
[ { "msg_contents": "Hello,\n\nI have a Postgres instance (version 8.1) running on a Solaris 10\nmachine. When I run the following query\n\n \n\nSELECT * FROM PROR_ORG, ( ( ( ( (PRPT_PRT LEFT OUTER JOIN PRPT_PRTADR\nON \n\nPRPT_PRT.PRT_NRI = PRPT_PRTADR.PRT_NRI AND PRPT_PRTADR.ADR_F_DEF=true) \n\nLEFT OUTER JOIN PLGE_CTY ON PRPT_PRTADR.CTY_NRI = PLGE_CTY.CTY_NRI)\nLEFT \n\nOUTER JOIN PLGE_CTY1 PLGE_CTY_PLGE_CTY1 ON PLGE_CTY.CTY_NRI = \n\nPLGE_CTY_PLGE_CTY1.CTY_NRI AND PLGE_CTY_PLGE_CTY1.LNG_CD = 'fr') LEFT \n\nOUTER JOIN PLGE_CTRSD ON PRPT_PRTADR.CTRSD_CD = PLGE_CTRSD.CTRSD_CD \n\nAND PRPT_PRTADR.CTR_ISO_CD = PLGE_CTRSD.CTR_ISO_CD) LEFT OUTER JOIN \n\nPLGE_CTR ON PRPT_PRTADR.CTR_ISO_CD = PLGE_CTR.CTR_ISO_CD) , PROR_ORG1 \n\nPROR_ORG_PROR_ORG1, PROR_ORGT, PROR_ORGT1 PROR_ORGT_PROR_ORGT1 \n\nWHERE ( (PROR_ORG.ORGT_CD = PROR_ORGT.ORGT_CD) AND \n\n(PROR_ORGT.ORGT_CD = PROR_ORGT_PROR_ORGT1.ORGT_CD AND \n\nPROR_ORGT_PROR_ORGT1.LNG_CD = 'fr') AND (PROR_ORG.PRT_NRI = \n\nPROR_ORG_PROR_ORG1.PRT_NRI AND PROR_ORG_PROR_ORG1.LNG_CD = 'fr') AND \n\n(PROR_ORG.PRT_NRI = PRPT_PRT.PRT_NRI) ) AND ( ((PROR_ORG.ORGT_CD\n='CHAIN')) )\n\n \n\nit takes 45 seconds to run. In this case the optimizer does a sequential\nscan of the PRPT_PRT table (which is the largest one) despite the\nexistence of an index on PRT_NRI column of PRPT_PRT table.\n\n I've activated the GEQO but it still takes nearly the same time to run\n(between 40 and 45s).\n\nWhen I change the order of PRPT_PRT and PROR_ORG tables, it takes only\n30 milliseconds to run. In this case the optimizer uses the index on\nPRT_NRI column of PRPT_PRT table, what is normal and what I was\nexpecting.\n\nIs there a known problem with the Postgres optimizer?\n\nFor your information, the same query takes 20 milliseconds to run on\nInformix and 60 milliseconds to run on Oracle independently of the order\nof the tables in the query.\n\n \n\nPRPT_PRT has 1.3 millions rows\n\nPRPT_PRTADR has 300.000 rows\n\nPROR_ORG has 1500 rows\n\nThese are the largest tables, all the others are small tables. All\nstatistics are up to date.\n\n \n\nThanks\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHello,\nI have a Postgres instance (version 8.1) running on a\nSolaris 10 machine. When I run the following query\n \nSELECT *  FROM PROR_ORG,  ( ( ( ( (PRPT_PRT LEFT OUTER JOIN\nPRPT_PRTADR ON \nPRPT_PRT.PRT_NRI = PRPT_PRTADR.PRT_NRI AND\nPRPT_PRTADR.ADR_F_DEF=true)  \nLEFT OUTER JOIN PLGE_CTY\nON PRPT_PRTADR.CTY_NRI = PLGE_CTY.CTY_NRI)  LEFT \nOUTER JOIN PLGE_CTY1\nPLGE_CTY_PLGE_CTY1 ON PLGE_CTY.CTY_NRI = \nPLGE_CTY_PLGE_CTY1.CTY_NRI\nAND PLGE_CTY_PLGE_CTY1.LNG_CD = 'fr')  LEFT \nOUTER JOIN PLGE_CTRSD ON\nPRPT_PRTADR.CTRSD_CD = PLGE_CTRSD.CTRSD_CD \nAND\nPRPT_PRTADR.CTR_ISO_CD = PLGE_CTRSD.CTR_ISO_CD)  LEFT OUTER JOIN \nPLGE_CTR ON PRPT_PRTADR.CTR_ISO_CD\n= PLGE_CTR.CTR_ISO_CD) , PROR_ORG1 \nPROR_ORG_PROR_ORG1,\nPROR_ORGT, PROR_ORGT1 PROR_ORGT_PROR_ORGT1 \nWHERE (  (PROR_ORG.ORGT_CD\n= PROR_ORGT.ORGT_CD) AND \n(PROR_ORGT.ORGT_CD =\nPROR_ORGT_PROR_ORGT1.ORGT_CD AND \nPROR_ORGT_PROR_ORGT1.LNG_CD\n= 'fr') AND (PROR_ORG.PRT_NRI = \nPROR_ORG_PROR_ORG1.PRT_NRI\nAND PROR_ORG_PROR_ORG1.LNG_CD = 'fr') AND \n(PROR_ORG.PRT_NRI =\nPRPT_PRT.PRT_NRI) )  AND ( ((PROR_ORG.ORGT_CD ='CHAIN')) )\n \nit takes 45 seconds to run. 
In this case the\noptimizer does a sequential scan of the PRPT_PRT table (which is the largest\none) despite the existence of an index on PRT_NRI column of PRPT_PRT table.\n I’ve\nactivated the GEQO but it still takes nearly the same time to run (between 40\nand 45s).\nWhen I change the order of PRPT_PRT and PROR_ORG\ntables, it takes only 30 milliseconds to run. In this case the optimizer uses\nthe index on PRT_NRI column of PRPT_PRT table, what is normal and what I was\nexpecting.\nIs there a known problem with the Postgres optimizer?\nFor your information, the same query takes 20 milliseconds\nto run on Informix and 60 milliseconds to run on Oracle independently of the\norder of the tables in the query.\n \nPRPT_PRT has 1.3 millions rows\nPRPT_PRTADR has 300.000 rows\nPROR_ORG has 1500 rows\nThese are the largest tables, all the others are\nsmall tables. All statistics are up to date.\n \nThanks", "msg_date": "Fri, 3 Aug 2007 13:58:51 -0400", "msg_from": "\"Mouhamadou Dia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres optimizer" }, { "msg_contents": "On Fri, 2007-08-03 at 13:58 -0400, Mouhamadou Dia wrote:\n> Hello,\n> \n> I have a Postgres instance (version 8.1) running on a Solaris 10\n> machine. When I run the following query\n> \n> \n> \n> SELECT * FROM PROR_ORG, ( ( ( ( (PRPT_PRT LEFT OUTER JOIN\n> PRPT_PRTADR ON \n> \n> PRPT_PRT.PRT_NRI = PRPT_PRTADR.PRT_NRI AND\n> PRPT_PRTADR.ADR_F_DEF=true) \n> \n> LEFT OUTER JOIN PLGE_CTY ON PRPT_PRTADR.CTY_NRI = PLGE_CTY.CTY_NRI)\n> LEFT \n> \n> OUTER JOIN PLGE_CTY1 PLGE_CTY_PLGE_CTY1 ON PLGE_CTY.CTY_NRI = \n> \n> PLGE_CTY_PLGE_CTY1.CTY_NRI AND PLGE_CTY_PLGE_CTY1.LNG_CD = 'fr')\n> LEFT \n> \n> OUTER JOIN PLGE_CTRSD ON PRPT_PRTADR.CTRSD_CD = PLGE_CTRSD.CTRSD_CD \n> \n> AND PRPT_PRTADR.CTR_ISO_CD = PLGE_CTRSD.CTR_ISO_CD) LEFT OUTER JOIN \n> \n> PLGE_CTR ON PRPT_PRTADR.CTR_ISO_CD = PLGE_CTR.CTR_ISO_CD) , PROR_ORG1 \n> \n> PROR_ORG_PROR_ORG1, PROR_ORGT, PROR_ORGT1 PROR_ORGT_PROR_ORGT1 \n> \n> WHERE ( (PROR_ORG.ORGT_CD = PROR_ORGT.ORGT_CD) AND \n> \n> (PROR_ORGT.ORGT_CD = PROR_ORGT_PROR_ORGT1.ORGT_CD AND \n> \n> PROR_ORGT_PROR_ORGT1.LNG_CD = 'fr') AND (PROR_ORG.PRT_NRI = \n> \n> PROR_ORG_PROR_ORG1.PRT_NRI AND PROR_ORG_PROR_ORG1.LNG_CD = 'fr') AND \n> \n> (PROR_ORG.PRT_NRI = PRPT_PRT.PRT_NRI) ) AND ( ((PROR_ORG.ORGT_CD\n> ='CHAIN')) )\n> \n> \n> \n> it takes 45 seconds to run. In this case the optimizer does a\n> sequential scan of the PRPT_PRT table (which is the largest one)\n> despite the existence of an index on PRT_NRI column of PRPT_PRT table.\n> \n> I’ve activated the GEQO but it still takes nearly the same time to\n> run (between 40 and 45s).\n> \n> When I change the order of PRPT_PRT and PROR_ORG tables, it takes only\n> 30 milliseconds to run. In this case the optimizer uses the index on\n> PRT_NRI column of PRPT_PRT table, what is normal and what I was\n> expecting.\n> \n> Is there a known problem with the Postgres optimizer?\n> \n> For your information, the same query takes 20 milliseconds to run on\n> Informix and 60 milliseconds to run on Oracle independently of the\n> order of the tables in the query.\n> \n> \n> \n> PRPT_PRT has 1.3 millions rows\n> \n> PRPT_PRTADR has 300.000 rows\n> \n> PROR_ORG has 1500 rows\n> \n> These are the largest tables, all the others are small tables. All\n> statistics are up to date.\n\nIf I recall correctly, PG 8.2 was the first version where the planner\nsupported reordering outer joins. 
Prior releases would get poor\nperformance unless the joins were listed in the right order.\n\nSo it is quite possible that upgrading to 8.2 would solve your problem.\nDo you have the ability to try that?\n\n-- Mark Lewis\n", "msg_date": "Fri, 03 Aug 2007 11:28:33 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres optimizer" } ]
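
For readers who want to check the join-ordering behaviour described above on their own install, a minimal experiment might look like the following; the problem query itself is the poster's and is only referenced here, not repeated:

    SHOW server_version;            -- per the thread, 8.2 or later is needed for outer-join reordering
    SHOW join_collapse_limit;       -- defaults to 8; explicit JOINs beyond the limit keep their written order
    SET join_collapse_limit = 12;   -- session-level experiment only
    -- then re-run the original query under EXPLAIN ANALYZE and compare plans;
    -- on 8.1 the practical workaround remains writing the small, filtered table first
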
[ { "msg_contents": "Hi \n\n \n\nWe currently have some databases running under 8.0 and 8.1. I have noticed\na significant difference in the time taken to restore dumps between the two\nversions, with 8.0 being much faster. For example a dump of a few MB will\ntake just seconds to load under 8.0 but the same dump will take several\nminutes to load under 8.1. Currently we're just using the default parameter\nsettings when installed via apt on Ubuntu Linux.\n\n \n\nI would just like to know if this is a well known issue and if there is a\nquick fix, or should I be digging deeper and seeing if the default settings\nare mismatched to certain linux kernel settings etc.\n\n \n\nAny help would be most appreciated.\n\n \n\nThanks,\n\nTed\n\n\n\n\n\n\n\n\n\n\nHi \n \nWe currently have some databases running\nunder 8.0 and 8.1.  I have noticed a significant difference in the time\ntaken to restore dumps between the two versions, with 8.0 being much\nfaster.  For example a dump of a few MB will take just seconds to load\nunder 8.0 but the same dump will take several minutes to load under 8.1. \nCurrently we’re just using the default parameter settings when installed\nvia apt on Ubuntu Linux.\n \nI would just like to know if this is a well\nknown issue and if there is a quick fix, or should I be digging deeper and\nseeing if the default settings are mismatched to certain linux kernel settings\netc.\n \nAny help would be most appreciated.\n \nThanks,\nTed", "msg_date": "Sun, 5 Aug 2007 08:35:59 +1200", "msg_from": "\"Ted Jordan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Default Performance between 8.0 and 8.1" }, { "msg_contents": "On 8/5/07, Ted Jordan <[email protected]> wrote:\n> We currently have some databases running under 8.0 and 8.1. I have noticed\n> a significant difference in the time taken to restore dumps between the two\n> versions, with 8.0 being much faster. For example a dump of a few MB will\n> take just seconds to load under 8.0 but the same dump will take several\n> minutes to load under 8.1. Currently we're just using the default parameter\n> settings when installed via apt on Ubuntu Linux.\n\na few seconds to several minutes is not the kind of variance that is\nexplainable by architecture changes between the versions. something\nis amiss. 'maintenance_work_mem' can make a big difference with data\nloads due to faster index builds but even that would not account for\nthe difference.\n\nare you running locally or remotely? if running remotely, you might\nbe seeing some networking/tcp problems.\n\ncan you reduce your problem down to a single copy statement for analysis?\n\nmerlin\n", "msg_date": "Mon, 6 Aug 2007 07:26:35 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default Performance between 8.0 and 8.1" }, { "msg_contents": "Thanks Merlin. I was running both via ssh so it effectively these were\nlocal results. I will try to isolate the issue to a single statement. But\nI think you have answered the larger question i.e. 
there is no well known\nsituation where this happens so I should expect to see roughly the same\nperformance between 8.0 and 8.1 for data loads and will therefore focus on\nthe specifics of our environment.\n\nRegards,\n-Ted.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Merlin Moncure\nSent: 6 August 2007 1:57 p.m.\nTo: Ted Jordan\nCc: [email protected]\nSubject: Re: [PERFORM] Default Performance between 8.0 and 8.1\n\nOn 8/5/07, Ted Jordan <[email protected]> wrote:\n> We currently have some databases running under 8.0 and 8.1. I have\nnoticed\n> a significant difference in the time taken to restore dumps between the\ntwo\n> versions, with 8.0 being much faster. For example a dump of a few MB will\n> take just seconds to load under 8.0 but the same dump will take several\n> minutes to load under 8.1. Currently we're just using the default\nparameter\n> settings when installed via apt on Ubuntu Linux.\n\na few seconds to several minutes is not the kind of variance that is\nexplainable by architecture changes between the versions. something\nis amiss. 'maintenance_work_mem' can make a big difference with data\nloads due to faster index builds but even that would not account for\nthe difference.\n\nare you running locally or remotely? if running remotely, you might\nbe seeing some networking/tcp problems.\n\ncan you reduce your problem down to a single copy statement for analysis?\n\nmerlin\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Tue, 7 Aug 2007 05:58:19 +1200", "msg_from": "\"Ted Jordan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Default Performance between 8.0 and 8.1" } ]
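
To reduce the restore comparison above to a single statement, one possible timing harness in psql is sketched below; the table and file names are invented for illustration:

    \timing                              -- psql: print elapsed time for each statement
    SET maintenance_work_mem = 262144;   -- 256MB (value is in kB on 8.0/8.1); speeds up index builds during a load
    COPY test_table FROM '/tmp/test_table.copy';
    -- server-side COPY needs superuser; from an ordinary client use
    --   \copy test_table from 'test_table.copy'

Running the same COPY against the 8.0 and 8.1 servers should show whether the slowdown is in the load path itself or elsewhere (network, triggers, index builds).
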
[ { "msg_contents": "Hello list,\n\nWe have a database keeping track of old files on different computers.\nWe have now added some search functionality to this system.\nThe problem is that on some searches it is really really slow and the \nproblem lies in the planner are using seq scans on tables with over \n20 million rows.\n\nWe added appropriate indexes and raised the statistics on some \ncolumns (really having hard time tracking down which column I should \nraise it for and how much) but still the problems occurs.\n\nBelow is the query and the explain analyze. Any suggestions will be \ngreatly appreciated!\n\nThanks,\nHenrik\n\nexplain analyze (SELECT max(pk_file_structure_id) as pk_object_id, max \n(fk_archive_id) AS fk_archive_id, file_name AS\n object_name, \nstructure_path AS object_path, computer_name,\n file_ctime \nAS object_ctime, pk_computer_id, filetype_icon AS object_img, 'file' \nAS object_type, share_name, share_path FROM tbl_file_structure\n \n JOIN tbl_file ON pk_file_id = fk_file_id\n \n JOIN tbl_structure ON pk_structure_id = fk_structure_id\n \n JOIN tbl_archive ON pk_archive_id = fk_archive_id\n \n JOIN tbl_share ON pk_share_id = fk_share_id\n \n JOIN tbl_computer ON pk_computer_id = fk_computer_id\n \n JOIN tbl_filetype ON pk_filetype_id = fk_filetype_id\n \n JOIN tbl_acl ON fk_file_structure_id = pk_file_structure_id\n \n LEFT OUTER JOIN tbl_job ON tbl_archive.fk_job_id = pk_job_id\n \n LEFT OUTER JOIN tbl_job_group ON tbl_job.fk_job_group_id = \npk_job_group_id\n \n WHERE LOWER(file_name) LIKE LOWER('awstats%') AND \narchive_complete = true AND job_group_type != 'R' GROUP BY \nfile_name, file_ctime, structure_path,\n \n pk_computer_id, filetype_icon, computer_name, \nshare_name, share_path)UNION ALL(SELECT max(pk_file_structure_id) AS \npk_object_id, max(fk_archive_id) AS fk_archive_id, \nstructure_path_name AS\n object_name, \nstructure_path AS object_path, computer_name,\n structure_ctime AS \nobject_ctime, pk_computer_id, 'dir-open.gif' AS object_img, 'folder' \nAS object_type, share_name, share_path FROM tbl_file_structure\n JOIN tbl_acl ON \nfk_file_structure_id = pk_file_structure_id\n JOIN tbl_structure \nON pk_structure_id = fk_structure_id\n JOIN tbl_archive ON \npk_archive_id = fk_archive_id\n JOIN tbl_share ON \npk_share_id = fk_share_id\n JOIN tbl_computer ON \npk_computer_id = fk_computer_id\n LEFT OUTER JOIN \ntbl_job ON tbl_archive.fk_job_id = pk_job_id\n LEFT OUTER JOIN \ntbl_job_group ON tbl_job.fk_job_group_id = pk_job_group_id\n WHERE LOWER \n(structure_path_name) LIKE LOWER('awstats%') AND archive_complete = \ntrue AND fk_file_id IS NULL AND job_group_type != 'R' GROUP BY \nstructure_path_name, structure_ctime, structure_path,\n \npk_computer_id, computer_name, share_name, share_path) ORDER BY \nobject_name LIMIT 20 OFFSET 0\n\n\n\n\n\n\n\n\nLimit (cost=2266221.90..2266221.95 rows=20 width=140) (actual \ntime=368409.873..368409.966 rows=20 loops=1)\n -> Sort (cost=2266221.90..2270129.66 rows=1563107 width=140) \n(actual time=368409.850..368409.892 rows=20 loops=1)\n Sort Key: object_name\n -> Append (cost=1734540.55..1816905.56 rows=1563107 \nwidth=140) (actual time=349422.914..368072.151 rows=14536 loops=1)\n -> Subquery Scan *SELECT* 1 \n(cost=1734540.55..1816603.40 rows=1563102 width=140) (actual \ntime=349422.910..360586.872 rows=14532 loops=1)\n -> GroupAggregate (cost=1734540.55..1800972.38 \nrows=1563102 width=140) (actual time=349422.892..360524.575 \nrows=14532 loops=1)\n -> Sort (cost=1734540.55..1738448.30 \nrows=1563102 width=140) 
(actual time=349421.873..357658.685 \nrows=486179 loops=1)\n Sort Key: tbl_file.file_name, \ntbl_file.file_ctime, tbl_structure.structure_path, \ntbl_computer.pk_computer_id, tbl_filetype.filetype_icon, \ntbl_computer.computer_name, tbl_share.share_name, tbl_share.share_path\n -> Hash Join \n(cost=318514.39..1285224.77 rows=1563102 width=140) (actual \ntime=73276.810..318013.218 rows=486179 loops=1)\n Hash Cond: \n(tbl_archive.fk_job_id = tbl_job.pk_job_id)\n -> Hash Join \n(cost=318504.37..1263355.75 rows=1660796 width=148) (actual \ntime=73205.950..315847.893 rows=486179 loops=1)\n Hash Cond: \n(tbl_acl.fk_file_structure_id = tbl_file_structure.pk_file_structure_id)\n -> Seq Scan on tbl_acl \n(cost=0.00..563241.21 rows=28612321 width=8) (actual \ntime=9.164..128288.879 rows=26759522 loops=1)\n -> Hash \n(cost=308650.10..308650.10 rows=289942 width=148) (actual \ntime=68345.766..68345.766 rows=87777 loops=1)\n -> Hash Join \n(cost=63392.76..308650.10 rows=289942 width=148) (actual \ntime=32384.694..67853.749 rows=87777 loops=1)\n Hash Cond: \n(tbl_file.fk_filetype_id = tbl_filetype.pk_filetype_id)\n -> Hash \nJoin (cost=63391.26..304661.90 rows=289942 width=145) (actual \ntime=32378.444..67472.130 rows=87777 loops=1)\n Hash \nCond: (tbl_structure.fk_archive_id = tbl_archive.pk_archive_id)\n -> \nHash Join (cost=62832.48..300084.90 rows=298346 width=105) (actual \ntime=32106.191..66811.853 rows=87896 loops=1)\n \nHash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\n - \n > Hash Join (cost=26628.01..248903.31 rows=298346 width=47) \n(actual time=4149.775..56510.415 rows=87896 loops=1)\n \n Hash Cond: (tbl_file_structure.fk_file_id = tbl_file.pk_file_id)\n \n -> Seq Scan on tbl_file_structure (cost=0.00..105507.42 \nrows=4995142 width=24) (actual time=0.368..21066.207 rows=4648014 \nloops=1)\n \n -> Hash (cost=25583.99..25583.99 rows=50161 width=39) (actual \ntime=4148.337..4148.337 rows=48870 loops=1)\n \n -> Bitmap Heap Scan on tbl_file (cost=1935.72..25583.99 \nrows=50161 width=39) (actual time=1271.867..3905.037 rows=48870 loops=1)\n \n Filter: (lower((file_name)::text) ~~ 'awstats%'::text)\n \n -> Bitmap Index Scan on tbl_file_idx \n(cost=0.00..1923.18 rows=42565 width=0) (actual \ntime=1254.230..1254.230 rows=89837 loops=1)\n \n Index Cond: ((lower((file_name)::text) ~>=~ \n'awstats'::character varying) AND (lower((file_name)::text) ~<~ \n'awstatt'::character varying))\n - \n > Hash (cost=27090.10..27090.10 rows=361710 width=74) (actual \ntime=5599.792..5599.792 rows=318631 loops=1)\n \n -> Seq Scan on tbl_structure (cost=0.00..27090.10 rows=361710 \nwidth=74) (actual time=15.301..4125.041 rows=318631 loops=1)\n -> \nHash (cost=557.92..557.92 rows=69 width=48) (actual \ntime=272.208..272.208 rows=54 loops=1)\n - \n > Hash Join (cost=6.47..557.92 rows=69 width=48) (actual \ntime=52.081..271.961 rows=54 loops=1)\n \n Hash Cond: (tbl_share.fk_computer_id = tbl_computer.pk_computer_id)\n \n -> Hash Join (cost=1.18..551.68 rows=69 width=37) (actual \ntime=47.672..267.298 rows=54 loops=1)\n \n Hash Cond: (tbl_archive.fk_share_id = tbl_share.pk_share_id)\n \n -> Index Scan using tbl_archive_pkey on tbl_archive \n(cost=0.00..549.55 rows=69 width=24) (actual time=39.761..259.068 \nrows=54 loops=1)\n \n Filter: archive_complete\n \n -> Hash (cost=1.08..1.08 rows=8 width=29) (actual \ntime=7.838..7.838 rows=8 loops=1)\n \n -> Seq Scan on tbl_share (cost=0.00..1.08 rows=8 \nwidth=29) (actual time=7.772..7.790 rows=8 loops=1)\n \n -> Hash (cost=5.13..5.13 rows=13 
width=19) (actual \ntime=4.362..4.362 rows=8 loops=1)\n \n -> Seq Scan on tbl_computer (cost=0.00..5.13 rows=13 \nwidth=19) (actual time=0.329..4.319 rows=8 loops=1)\n -> Hash \n(cost=1.22..1.22 rows=22 width=19) (actual time=6.215..6.215 rows=22 \nloops=1)\n -> \nSeq Scan on tbl_filetype (cost=0.00..1.22 rows=22 width=19) (actual \ntime=6.096..6.141 rows=22 loops=1)\n -> Hash (cost=9.81..9.81 \nrows=16 width=8) (actual time=70.811..70.811 rows=16 loops=1)\n -> Hash Join \n(cost=4.38..9.81 rows=16 width=8) (actual time=66.016..70.759 rows=16 \nloops=1)\n Hash Cond: \n(tbl_job_group.pk_job_group_id = tbl_job.fk_job_group_id)\n -> Seq Scan on \ntbl_job_group (cost=0.00..5.21 rows=16 width=8) (actual \ntime=5.006..9.680 rows=16 loops=1)\n Filter: \n(job_group_type <> 'R'::bpchar)\n -> Hash \n(cost=4.17..4.17 rows=17 width=16) (actual time=60.943..60.943 \nrows=17 loops=1)\n -> Seq Scan \non tbl_job (cost=0.00..4.17 rows=17 width=16) (actual \ntime=1.056..60.867 rows=17 loops=1)\n -> Subquery Scan *SELECT* 2 (cost=302.03..302.16 \nrows=5 width=121) (actual time=7450.513..7450.544 rows=4 loops=1)\n -> HashAggregate (cost=302.03..302.11 rows=5 \nwidth=121) (actual time=7450.497..7450.516 rows=4 loops=1)\n -> Nested Loop (cost=6.27..301.90 rows=6 \nwidth=121) (actual time=319.925..7449.175 rows=94 loops=1)\n -> Nested Loop (cost=6.27..209.37 \nrows=1 width=121) (actual time=249.894..5430.726 rows=24 loops=1)\n -> Nested Loop \n(cost=6.27..204.59 rows=1 width=110) (actual time=236.381..5416.033 \nrows=24 loops=1)\n -> Nested Loop \n(cost=6.27..200.24 rows=1 width=118) (actual time=235.827..5414.547 \nrows=24 loops=1)\n -> Nested Loop \n(cost=6.27..196.18 rows=1 width=118) (actual time=204.173..5369.710 \nrows=24 loops=1)\n -> Nested \nLoop (cost=6.27..17.17 rows=1 width=118) (actual \ntime=128.586..1972.271 rows=24 loops=1)\n Join \nFilter: (tbl_share.pk_share_id = tbl_archive.fk_share_id)\n -> \nNested Loop (cost=6.27..15.99 rows=1 width=105) (actual \ntime=116.772..1958.954 rows=24 loops=1)\n - \n > Index Scan using tbl_structure_idx on tbl_structure \n(cost=0.01..6.71 rows=1 width=89) (actual time=86.271..1375.178 \nrows=24 loops=1)\n \n Index Cond: ((lower((structure_path_name)::text) >= \n'awstats'::text) AND (lower((structure_path_name)::text) < \n'awstatt'::text))\n \n Filter: (lower((structure_path_name)::text) ~~ 'awstats%'::text)\n - \n > Bitmap Heap Scan on tbl_archive (cost=6.26..9.27 rows=1 \nwidth=24) (actual time=24.286..24.292 rows=1 loops=24)\n \n Recheck Cond: (tbl_archive.pk_archive_id = \ntbl_structure.fk_archive_id)\n \n Filter: archive_complete\n \n -> Bitmap Index Scan on tbl_archive_pkey (cost=0.00..6.26 \nrows=1 width=0) (actual time=3.642..3.642 rows=1 loops=24)\n \n Index Cond: (tbl_archive.pk_archive_id = \ntbl_structure.fk_archive_id)\n -> \nSeq Scan on tbl_share (cost=0.00..1.08 rows=8 width=29) (actual \ntime=0.501..0.517 rows=8 loops=24)\n -> Index \nScan using tbl_file_structure_idx1 on tbl_file_structure \n(cost=0.00..178.96 rows=4 width=16) (actual time=131.843..141.509 \nrows=1 loops=24)\n Index \nCond: (tbl_structure.pk_structure_id = \ntbl_file_structure.fk_structure_id)\n \nFilter: (fk_file_id IS NULL)\n -> Index Scan \nusing tbl_job_pkey on tbl_job (cost=0.00..4.05 rows=1 width=16) \n(actual time=1.844..1.848 rows=1 loops=24)\n Index Cond: \n(tbl_archive.fk_job_id = tbl_job.pk_job_id)\n -> Index Scan using \ntbl_job_group_pkey on tbl_job_group (cost=0.00..4.33 rows=1 width=8) \n(actual time=0.044..0.049 rows=1 loops=24)\n Index Cond: 
\n(tbl_job.fk_job_group_id = tbl_job_group.pk_job_group_id)\n Filter: \n(job_group_type <> 'R'::bpchar)\n -> Index Scan using \ntbl_computer_pkey on tbl_computer (cost=0.00..4.77 rows=1 width=19) \n(actual time=0.580..0.584 rows=1 loops=24)\n Index Cond: \n(tbl_computer.pk_computer_id = tbl_share.fk_computer_id)\n -> Index Scan using tbl_acl_idx on \ntbl_acl (cost=0.00..91.93 rows=48 width=8) (actual \ntime=84.054..84.078 rows=4 loops=24)\n Index Cond: \n(tbl_acl.fk_file_structure_id = tbl_file_structure.pk_file_structure_id)\nTotal runtime: 368592.811 ms\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 6 Aug 2007 11:19:10 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Extreme slow select query 8.2.4" }, { "msg_contents": "Henrik Zagerholm <[email protected]> writes:\n> ... FROM tbl_file_structure\n> JOIN tbl_file ON pk_file_id = fk_file_id\n> JOIN tbl_structure ON pk_structure_id = fk_structure_id\n> JOIN tbl_archive ON pk_archive_id = fk_archive_id\n> JOIN tbl_share ON pk_share_id = fk_share_id\n> JOIN tbl_computer ON pk_computer_id = fk_computer_id\n> JOIN tbl_filetype ON pk_filetype_id = fk_filetype_id\n> JOIN tbl_acl ON fk_file_structure_id = pk_file_structure_id\n> LEFT OUTER JOIN tbl_job ON tbl_archive.fk_job_id = pk_job_id\n> LEFT OUTER JOIN tbl_job_group ON tbl_job.fk_job_group_id = \n> pk_job_group_id\n> WHERE LOWER(file_name) LIKE LOWER('awstats%') AND \n> archive_complete = true AND job_group_type != 'R' GROUP BY \n> file_name, file_ctime, structure_path, pk_computer_id, filetype_icon, computer_name, \n> share_name, share_path ...\n\nPerhaps raising join_collapse_limit and/or work_mem would help.\nAlthough I'm not really sure why you expect the above query to be fast\n--- with the file_name condition matching 50K rows, and no selectivity\nworth mentioning in any other WHERE-condition, it's gonna have to do a\nheck of a lot of joining in any case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2007 10:58:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extreme slow select query 8.2.4 " }, { "msg_contents": "\n6 aug 2007 kl. 16:58 skrev Tom Lane:\n\n> Henrik Zagerholm <[email protected]> writes:\n>> ... 
FROM tbl_file_structure\n>> JOIN tbl_file ON pk_file_id = fk_file_id\n>> JOIN tbl_structure ON pk_structure_id = fk_structure_id\n>> JOIN tbl_archive ON pk_archive_id = fk_archive_id\n>> JOIN tbl_share ON pk_share_id = fk_share_id\n>> JOIN tbl_computer ON pk_computer_id = fk_computer_id\n>> JOIN tbl_filetype ON pk_filetype_id = fk_filetype_id\n>> JOIN tbl_acl ON fk_file_structure_id = pk_file_structure_id\n>> LEFT OUTER JOIN tbl_job ON tbl_archive.fk_job_id = \n>> pk_job_id\n>> LEFT OUTER JOIN tbl_job_group ON tbl_job.fk_job_group_id =\n>> pk_job_group_id\n>> WHERE LOWER(file_name) LIKE LOWER('awstats%') AND\n>> archive_complete = true AND job_group_type != 'R' GROUP BY\n>> file_name, file_ctime, structure_path, pk_computer_id, \n>> filetype_icon, computer_name,\n>> share_name, share_path ...\n>\n> Perhaps raising join_collapse_limit and/or work_mem would help.\n> Although I'm not really sure why you expect the above query to be fast\n> --- with the file_name condition matching 50K rows, and no selectivity\n> worth mentioning in any other WHERE-condition, it's gonna have to do a\n> heck of a lot of joining in any case.\n>\n\nI did test to raise work_mem to 10MB and join_collapse_limit to 10,12 \nand 16 with no significant performance boost.\nI know the query retrieves way more which is really necessary to show \nto the user so I would gladly come up with a way to limit the query \nso the GUI doesn't hang for several minutes if a user does a bad search.\nThe problem is that I don't know a good way of limit the search \nefficiently as only going on tbl_file with limit 100 could make the \nquery only to return 10 rows if the user doesn't have access to 900 \nof the files (This is what the join with tbl_acl does). Using cursors \ndoesn't help because I really don't retrieve that much data\n\nWould sub selects work best in these kinds of scenarios? It mush be a \nquite common problem with users doing queries that is too wide.\n\nThanks for all your help.\n\n> \t\t\tregards, tom lane\n\n", "msg_date": "Mon, 6 Aug 2007 17:50:22 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extreme slow select query 8.2.4 " }, { "msg_contents": "Henrik Zagerholm wrote:\n> I know the query retrieves way more which is really necessary to show to\n> the user so I would gladly come up with a way to limit the query so the\n> GUI doesn't hang for several minutes if a user does a bad search.\n> The problem is that I don't know a good way of limit the search\n> efficiently as only going on tbl_file with limit 100 could make the\n> query only to return 10 rows if the user doesn't have access to 900 of\n> the files (This is what the join with tbl_acl does). Using cursors\n> doesn't help because I really don't retrieve that much data\n\nCould you just add a LIMIT 100 to the end of the query, if 100 rows is\nenough? That would cut the runtime of the query, if there's a quicker\nplan to retrieve just those 100 rows.\n\nAnother alternative is to use statement_timeout. If a query takes longer\nthan specified timeout, it's automatically aborted and an error is given.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 06 Aug 2007 20:47:22 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extreme slow select query 8.2.4" }, { "msg_contents": "\n6 aug 2007 kl. 
21:47 skrev Heikki Linnakangas:\n\n> Henrik Zagerholm wrote:\n>> I know the query retrieves way more which is really necessary to \n>> show to\n>> the user so I would gladly come up with a way to limit the query \n>> so the\n>> GUI doesn't hang for several minutes if a user does a bad search.\n>> The problem is that I don't know a good way of limit the search\n>> efficiently as only going on tbl_file with limit 100 could make the\n>> query only to return 10 rows if the user doesn't have access to \n>> 900 of\n>> the files (This is what the join with tbl_acl does). Using cursors\n>> doesn't help because I really don't retrieve that much data\n>\n> Could you just add a LIMIT 100 to the end of the query, if 100 rows is\n> enough? That would cut the runtime of the query, if there's a quicker\n> plan to retrieve just those 100 rows.\nAs you can see in the query I already have a limit 20 and it doesn't \nmake any difference as the query still does the big joins between \ntbl_file, tbl_file_structure and tbl_acl.\nThis is why I think I have to come up with a way of using sub select \nwith internal limits. Maybe have a cursor like procedure using these \nso I always get the correct number of lines back.\n>\n> Another alternative is to use statement_timeout. If a query takes \n> longer\n> than specified timeout, it's automatically aborted and an error is \n> given.\nInteresting! I'll take a look at that.\nThanks,\nhenrik\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 6 Aug 2007 22:16:55 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extreme slow select query 8.2.4" } ]
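
The statement_timeout suggestion at the end of the thread above, sketched for a single session; the 30-second value is only an example:

    SET statement_timeout = 30000;   -- in milliseconds; 0 means no limit
    -- a search running past the limit is aborted with
    --   ERROR:  canceling statement due to statement timeout
    -- so the GUI gets a prompt error instead of hanging for minutes
    RESET statement_timeout;
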
[ { "msg_contents": "Hi list,\n\nI'm having a weird acting query which simply retrieves some files \nstored in a db which are related to a specific archive and also has a \nsize lower than 1024 bytes.\nExplain analyze below. The first one is with seq-scan enabled and the \nother one with seq-scans disabled. The weird thing is the seq scan on \ntbl_file_structure and also the insane calculated cost of 100 000 000 \non some tables.\n\nExplain analyze below with both seq scan on and off.\n\nRegards,\nHenrik\n\n\nEXPLAIN ANALYZE SELECT pk_file_id, file_name_in_tar, tar_name, \nfile_suffix, fk_tar_id, tar_compressed FROM tbl_file\n INNER JOIN \ntbl_file_structure ON fk_file_id = pk_file_id\n INNER JOIN \ntbl_structure ON fk_structure_id = pk_structure_id\n\t\t\t\t\t\tLEFT OUTER JOIN tbl_tar ON fk_tar_id = pk_tar_id\n\t\t\t\t\t\tWHERE file_indexed IS FALSE\n AND file_copied IS TRUE\n AND file_size < (1024)\n AND LOWER \n(file_suffix) IN(\n SELECT LOWER \n(filetype_suffix) FROM tbl_filetype_suffix WHERE \nfiletype_suffix_index IS TRUE\n ) AND fk_archive_id \n= 115 ORDER BY fk_tar_id\n\n\"Sort (cost=238497.42..238499.49 rows=828 width=76) (actual \ntime=65377.316..65377.321 rows=5 loops=1)\"\n\" Sort Key: tbl_file.fk_tar_id\"\n\" -> Hash Left Join (cost=39935.64..238457.29 rows=828 width=76) \n(actual time=61135.732..65377.246 rows=5 loops=1)\"\n\" Hash Cond: (tbl_file.fk_tar_id = tbl_tar.pk_tar_id)\"\n\" -> Hash Join (cost=39828.67..238336.86 rows=828 width=50) \n(actual time=60776.587..65018.062 rows=5 loops=1)\"\n\" Hash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\"\n\" -> Hash Join (cost=30975.39..228750.72 rows=72458 \nwidth=58) (actual time=14256.555..64577.950 rows=4650 loops=1)\"\n\" Hash Cond: (tbl_file_structure.fk_file_id = \ntbl_file.pk_file_id)\"\n\" -> Seq Scan on tbl_file_structure \n(cost=0.00..167417.09 rows=7902309 width=16) (actual \ntime=9.581..33702.852 rows=7801334 loops=1)\"\n\" -> Hash (cost=30874.63..30874.63 rows=8061 \nwidth=50) (actual time=14058.396..14058.396 rows=486 loops=1)\"\n\" -> Hash Join (cost=3756.16..30874.63 \nrows=8061 width=50) (actual time=9373.992..14056.119 rows=486 loops=1)\"\n\" Hash Cond: (lower \n((tbl_file.file_suffix)::text) = lower \n((tbl_filetype_suffix.filetype_suffix)::text))\"\n\" -> Bitmap Heap Scan on tbl_file \n(cost=3754.47..29939.50 rows=136453 width=50) (actual \ntime=9068.525..13654.235 rows=154605 loops=1)\"\n\" Recheck Cond: (file_size < 1024)\"\n\" Filter: ((file_indexed IS \nFALSE) AND (file_copied IS TRUE))\"\n\" -> Bitmap Index Scan on \ntbl_file_idx4 (cost=0.00..3720.36 rows=195202 width=0) (actual \ntime=9002.683..9002.683 rows=205084 loops=1)\"\n\" Index Cond: (file_size < \n1024)\"\n\" -> Hash (cost=1.52..1.52 rows=14 \nwidth=8) (actual time=0.557..0.557 rows=14 loops=1)\"\n\" -> HashAggregate \n(cost=1.38..1.52 rows=14 width=8) (actual time=0.484..0.507 rows=14 \nloops=1)\"\n\" -> Seq Scan on \ntbl_filetype_suffix (cost=0.00..1.34 rows=14 width=8) (actual \ntime=0.383..0.423 rows=14 loops=1)\"\n\" Filter: \n(filetype_suffix_index IS TRUE)\"\n\" -> Hash (cost=8778.54..8778.54 rows=5979 width=8) \n(actual time=419.491..419.491 rows=11420 loops=1)\"\n\" -> Bitmap Heap Scan on tbl_structure \n(cost=617.08..8778.54 rows=5979 width=8) (actual \ntime=114.501..393.685 rows=11420 loops=1)\"\n\" Recheck Cond: (fk_archive_id = 115)\"\n\" -> Bitmap Index Scan on \ntbl_structure_idx3 (cost=0.00..615.59 rows=5979 width=0) (actual \ntime=100.939..100.939 rows=11420 loops=1)\"\n\" Index Cond: 
(fk_archive_id = 115)\"\n\" -> Hash (cost=64.21..64.21 rows=3421 width=34) (actual \ntime=359.043..359.043 rows=3485 loops=1)\"\n\" -> Seq Scan on tbl_tar (cost=0.00..64.21 rows=3421 \nwidth=34) (actual time=19.287..348.237 rows=3485 loops=1)\"\n\"Total runtime: 65378.552 ms\"\n\n\n\n\nNow I disabled seq scans.\nset enable_seqscan=false;\n\n\n\"Merge Left Join (cost=100331398.53..100331526.36 rows=828 width=76) \n(actual time=36206.575..36291.847 rows=5 loops=1)\"\n\" Merge Cond: (tbl_file.fk_tar_id = tbl_tar.pk_tar_id)\"\n\" -> Sort (cost=100331398.53..100331400.60 rows=828 width=50) \n(actual time=36030.473..36030.487 rows=5 loops=1)\"\n\" Sort Key: tbl_file.fk_tar_id\"\n\" -> Hash Join (cost=100012609.44..100331358.40 rows=828 \nwidth=50) (actual time=27279.046..36030.399 rows=5 loops=1)\"\n\" Hash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\"\n\" -> Nested Loop (cost=100003756.16..100321772.87 \nrows=72397 width=58) (actual time=13225.815..35533.414 rows=4650 \nloops=1)\"\n\" -> Hash Join (cost=100003756.16..100030874.63 \nrows=8061 width=50) (actual time=12888.880..19845.110 rows=486 loops=1)\"\n\" Hash Cond: (lower \n((tbl_file.file_suffix)::text) = lower \n((tbl_filetype_suffix.filetype_suffix)::text))\"\n\" -> Bitmap Heap Scan on tbl_file \n(cost=3754.47..29939.50 rows=136453 width=50) (actual \ntime=12747.478..19266.843 rows=154605 loops=1)\"\n\" Recheck Cond: (file_size < 1024)\"\n\" Filter: ((file_indexed IS FALSE) AND \n(file_copied IS TRUE))\"\n\" -> Bitmap Index Scan on \ntbl_file_idx4 (cost=0.00..3720.36 rows=195202 width=0) (actual \ntime=12689.593..12689.593 rows=205084 loops=1)\"\n\" Index Cond: (file_size < 1024)\"\n\" -> Hash (cost=100000001.52..100000001.52 \nrows=14 width=8) (actual time=0.313..0.313 rows=14 loops=1)\"\n\" -> HashAggregate \n(cost=100000001.38..100000001.52 rows=14 width=8) (actual \ntime=0.230..0.254 rows=14 loops=1)\"\n\" -> Seq Scan on \ntbl_filetype_suffix (cost=100000000.00..100000001.34 rows=14 \nwidth=8) (actual time=0.133..0.176 rows=14 loops=1)\"\n\" Filter: \n(filetype_suffix_index IS TRUE)\"\n\" -> Index Scan using tbl_file_structure_idx on \ntbl_file_structure (cost=0.00..35.82 rows=21 width=16) (actual \ntime=7.031..32.178 rows=10 loops=486)\"\n\" Index Cond: (tbl_file_structure.fk_file_id \n= tbl_file.pk_file_id)\"\n\" -> Hash (cost=8778.54..8778.54 rows=5979 width=8) \n(actual time=445.799..445.799 rows=11420 loops=1)\"\n\" -> Bitmap Heap Scan on tbl_structure \n(cost=617.08..8778.54 rows=5979 width=8) (actual \ntime=155.046..419.676 rows=11420 loops=1)\"\n\" Recheck Cond: (fk_archive_id = 115)\"\n\" -> Bitmap Index Scan on \ntbl_structure_idx3 (cost=0.00..615.59 rows=5979 width=0) (actual \ntime=126.623..126.623 rows=11420 loops=1)\"\n\" Index Cond: (fk_archive_id = 115)\"\n\" -> Index Scan using tbl_tar_pkey on tbl_tar (cost=0.00..106.86 \nrows=3421 width=34) (actual time=22.218..251.830 rows=2491 loops=1)\"\n\"Total runtime: 36292.481 ms\"\n\n", "msg_date": "Mon, 6 Aug 2007 14:21:10 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Planner making wrong decisions 8.2.4. Insane cost calculations." }, { "msg_contents": "\"Henrik Zagerholm\" <[email protected]> writes:\n\n> Hi list,\n>\n> I'm having a weird acting query which simply retrieves some files stored in a db\n> which are related to a specific archive and also has a size lower than 1024\n> bytes.\n> Explain analyze below. 
The first one is with seq-scan enabled and the other one\n> with seq-scans disabled. The weird thing is the seq scan on tbl_file_structure\n> and also the insane calculated cost of 100 000 000 on some tables.\n\nWell the way Postgres disables a plan node type is by giving it a cost of\n100,000,000. What other way did you expect it to be able to scan\ntbl_filetype_suffix anyways? What indexes do you have on tbl_filetype_suffix?\n\nAnd any chance you could resend this stuff without the word-wrapping?\nIt's pretty hard to read like this:\n\n\" -> Seq Scan on tbl_filetype_suffix\n(cost=100000000.00..100000001.34 rows=14 width=8) (actual time=0.133..0.176\nrows=14 loops=1)\"\n\" Filter: (filetype_suffix_index IS\nTRUE)\"\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 06 Aug 2007 14:07:17 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." }, { "msg_contents": "\n6 aug 2007 kl. 15:07 skrev Gregory Stark:\n\n> \"Henrik Zagerholm\" <[email protected]> writes:\n>\n>> Hi list,\n>>\n>> I'm having a weird acting query which simply retrieves some files \n>> stored in a db\n>> which are related to a specific archive and also has a size lower \n>> than 1024\n>> bytes.\n>> Explain analyze below. The first one is with seq-scan enabled and \n>> the other one\n>> with seq-scans disabled. The weird thing is the seq scan on \n>> tbl_file_structure\n>> and also the insane calculated cost of 100 000 000 on some tables.\n>\n> Well the way Postgres disables a plan node type is by giving it a \n> cost of\n> 100,000,000. What other way did you expect it to be able to scan\n> tbl_filetype_suffix anyways? What indexes do you have on \n> tbl_filetype_suffix?\n>\nAhh, my bad. It is a very small table but I have an unique index.\nCREATE UNIQUE INDEX tbl_filetype_suffix_idx ON tbl_filetype_suffix \nUSING btree (filetype_suffix);\n\n> And any chance you could resend this stuff without the word-wrapping?\n> It's pretty hard to read like this:\n>\nResending and hopefully the line breaks are gone. I couldn't find any \nin my sent mail. 
If this doesn't work I'll paste it on pastie.\nThe weird thing is that the seq scan on tbl_file_structure is soooo \nslow but when I force an index scan it is very fast.\n\n> \" -> Seq Scan on \n> tbl_filetype_suffix\n> (cost=100000000.00..100000001.34 rows=14 width=8) (actual \n> time=0.133..0.176\n> rows=14 loops=1)\"\n> \" Filter: \n> (filetype_suffix_index IS\n> TRUE)\"\n>\n\n\nCheers,\nHenrik\n\nEXPLAIN ANALYZE SELECT pk_file_id, file_name_in_tar, tar_name, \nfile_suffix, fk_tar_id, tar_compressed FROM tbl_file\n INNER JOIN \ntbl_file_structure ON fk_file_id = pk_file_id\n INNER JOIN \ntbl_structure ON fk_structure_id = pk_structure_id\n\t\t\t\t\t\tLEFT OUTER JOIN tbl_tar ON fk_tar_id = pk_tar_id\n\t\t\t\t\t\tWHERE file_indexed IS FALSE\n AND file_copied IS TRUE\n AND file_size < (1024)\n AND LOWER \n(file_suffix) IN(\n SELECT LOWER \n(filetype_suffix) FROM tbl_filetype_suffix WHERE \nfiletype_suffix_index IS TRUE\n ) AND fk_archive_id \n= 115 ORDER BY fk_tar_id\n\nSort (cost=238497.42..238499.49 rows=828 width=76) (actual \ntime=65377.316..65377.321 rows=5 loops=1)\n Sort Key: tbl_file.fk_tar_id\n -> Hash Left Join (cost=39935.64..238457.29 rows=828 width=76) \n(actual time=61135.732..65377.246 rows=5 loops=1)\n Hash Cond: (tbl_file.fk_tar_id = tbl_tar.pk_tar_id)\n -> Hash Join (cost=39828.67..238336.86 rows=828 width=50) \n(actual time=60776.587..65018.062 rows=5 loops=1)\n Hash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\n -> Hash Join (cost=30975.39..228750.72 rows=72458 \nwidth=58) (actual time=14256.555..64577.950 rows=4650 loops=1)\n Hash Cond: (tbl_file_structure.fk_file_id = \ntbl_file.pk_file_id)\n -> Seq Scan on tbl_file_structure \n(cost=0.00..167417.09 rows=7902309 width=16) (actual \ntime=9.581..33702.852 rows=7801334 loops=1)\n -> Hash (cost=30874.63..30874.63 rows=8061 \nwidth=50) (actual time=14058.396..14058.396 rows=486 loops=1)\n -> Hash Join (cost=3756.16..30874.63 \nrows=8061 width=50) (actual time=9373.992..14056.119 rows=486 loops=1)\n Hash Cond: (lower \n((tbl_file.file_suffix)::text) = lower \n((tbl_filetype_suffix.filetype_suffix)::text))\n -> Bitmap Heap Scan on tbl_file \n(cost=3754.47..29939.50 rows=136453 width=50) (actual \ntime=9068.525..13654.235 rows=154605 loops=1)\n Recheck Cond: (file_size < 1024)\n Filter: ((file_indexed IS \nFALSE) AND (file_copied IS TRUE))\n -> Bitmap Index Scan on \ntbl_file_idx4 (cost=0.00..3720.36 rows=195202 width=0) (actual \ntime=9002.683..9002.683 rows=205084 loops=1)\n Index Cond: (file_size < \n1024)\n -> Hash (cost=1.52..1.52 rows=14 \nwidth=8) (actual time=0.557..0.557 rows=14 loops=1)\n -> HashAggregate \n(cost=1.38..1.52 rows=14 width=8) (actual time=0.484..0.507 rows=14 \nloops=1)\n -> Seq Scan on \ntbl_filetype_suffix (cost=0.00..1.34 rows=14 width=8) (actual \ntime=0.383..0.423 rows=14 loops=1)\n Filter: \n(filetype_suffix_index IS TRUE)\n -> Hash (cost=8778.54..8778.54 rows=5979 width=8) \n(actual time=419.491..419.491 rows=11420 loops=1)\n -> Bitmap Heap Scan on tbl_structure \n(cost=617.08..8778.54 rows=5979 width=8) (actual \ntime=114.501..393.685 rows=11420 loops=1)\n Recheck Cond: (fk_archive_id = 115)\n -> Bitmap Index Scan on \ntbl_structure_idx3 (cost=0.00..615.59 rows=5979 width=0) (actual \ntime=100.939..100.939 rows=11420 loops=1)\n Index Cond: (fk_archive_id = 115)\n -> Hash (cost=64.21..64.21 rows=3421 width=34) (actual \ntime=359.043..359.043 rows=3485 loops=1)\n -> Seq Scan on tbl_tar (cost=0.00..64.21 rows=3421 \nwidth=34) (actual time=19.287..348.237 rows=3485 
loops=1)\nTotal runtime: 65378.552 ms\n\n\n\n\nNow I disabled seq scans.\nset enable_seqscan=false;\n\n\nMerge Left Join (cost=100331398.53..100331526.36 rows=828 width=76) \n(actual time=36206.575..36291.847 rows=5 loops=1)\n Merge Cond: (tbl_file.fk_tar_id = tbl_tar.pk_tar_id)\n -> Sort (cost=100331398.53..100331400.60 rows=828 width=50) \n(actual time=36030.473..36030.487 rows=5 loops=1)\n Sort Key: tbl_file.fk_tar_id\n -> Hash Join (cost=100012609.44..100331358.40 rows=828 \nwidth=50) (actual time=27279.046..36030.399 rows=5 loops=1)\n Hash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\n -> Nested Loop (cost=100003756.16..100321772.87 \nrows=72397 width=58) (actual time=13225.815..35533.414 rows=4650 \nloops=1)\n -> Hash Join (cost=100003756.16..100030874.63 \nrows=8061 width=50) (actual time=12888.880..19845.110 rows=486 loops=1)\n Hash Cond: (lower \n((tbl_file.file_suffix)::text) = lower \n((tbl_filetype_suffix.filetype_suffix)::text))\n -> Bitmap Heap Scan on tbl_file \n(cost=3754.47..29939.50 rows=136453 width=50) (actual \ntime=12747.478..19266.843 rows=154605 loops=1)\n Recheck Cond: (file_size < 1024)\n Filter: ((file_indexed IS FALSE) AND \n(file_copied IS TRUE))\n -> Bitmap Index Scan on \ntbl_file_idx4 (cost=0.00..3720.36 rows=195202 width=0) (actual \ntime=12689.593..12689.593 rows=205084 loops=1)\n Index Cond: (file_size < 1024)\n -> Hash (cost=100000001.52..100000001.52 \nrows=14 width=8) (actual time=0.313..0.313 rows=14 loops=1)\n -> HashAggregate \n(cost=100000001.38..100000001.52 rows=14 width=8) (actual \ntime=0.230..0.254 rows=14 loops=1)\n -> Seq Scan on \ntbl_filetype_suffix (cost=100000000.00..100000001.34 rows=14 \nwidth=8) (actual time=0.133..0.176 rows=14 loops=1)\n Filter: \n(filetype_suffix_index IS TRUE)\n -> Index Scan using tbl_file_structure_idx on \ntbl_file_structure (cost=0.00..35.82 rows=21 width=16) (actual \ntime=7.031..32.178 rows=10 loops=486)\n Index Cond: (tbl_file_structure.fk_file_id \n= tbl_file.pk_file_id)\n -> Hash (cost=8778.54..8778.54 rows=5979 width=8) \n(actual time=445.799..445.799 rows=11420 loops=1)\n -> Bitmap Heap Scan on tbl_structure \n(cost=617.08..8778.54 rows=5979 width=8) (actual \ntime=155.046..419.676 rows=11420 loops=1)\n Recheck Cond: (fk_archive_id = 115)\n -> Bitmap Index Scan on \ntbl_structure_idx3 (cost=0.00..615.59 rows=5979 width=0) (actual \ntime=126.623..126.623 rows=11420 loops=1)\n Index Cond: (fk_archive_id = 115)\n -> Index Scan using tbl_tar_pkey on tbl_tar (cost=0.00..106.86 \nrows=3421 width=34) (actual time=22.218..251.830 rows=2491 loops=1)\nTotal runtime: 36292.481 ms\n\n>\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Mon, 6 Aug 2007 16:46:43 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." }, { "msg_contents": "\"Henrik Zagerholm\" <[email protected]> writes:\n\n> Ahh, my bad. It is a very small table but I have an unique index.\n> CREATE UNIQUE INDEX tbl_filetype_suffix_idx ON tbl_filetype_suffix \n> USING btree (filetype_suffix);\n\nWell it can't use that to help with a join. 
If you had an index on\nlower(filetype_suffix) it might be able to use it. I'm not sure though,\nespecially if it's a small table.\n\n>> And any chance you could resend this stuff without the word-wrapping?\n>> It's pretty hard to read like this:\n>>\n> Resending and hopefully the line breaks are gone. I couldn't find any \n> in my sent mail. \n\nNo, the double-quotes are gone but the lines are still wrapped. It's become\nquite a hassle recently to get mailers to do anything reasonable with code.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 06 Aug 2007 16:31:14 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." }, { "msg_contents": "Henrik Zagerholm <[email protected]> writes:\n> \t\t\t\t\t\tWHERE file_indexed IS FALSE\n> AND file_copied IS TRUE\n> AND file_size < (1024)\n> AND LOWER \n> (file_suffix) IN(\n> SELECT LOWER \n> (filetype_suffix) FROM tbl_filetype_suffix WHERE \n> filetype_suffix_index IS TRUE\n> ) AND fk_archive_id \n> = 115 ORDER BY fk_tar_id\n\nDo you really need the lower() calls there? The planner is getting the\nwrong estimate for the selectivity of the IN-clause, which is likely\nbecause it has no statistics about lower(file_suffix) or\nlower(filetype_suffix).\n\nIf you don't want to constrain the data to be already lower'd, then\nsetting up functional indexes on the two lower() expressions should\nprompt ANALYZE to track stats for them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2007 11:31:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." }, { "msg_contents": "\n6 aug 2007 kl. 17:31 skrev Tom Lane:\n\n> Henrik Zagerholm <[email protected]> writes:\n>> \t\t\t\t\t\tWHERE file_indexed IS FALSE\n>> AND file_copied \n>> IS TRUE\n>> AND file_size < \n>> (1024)\n>> AND LOWER\n>> (file_suffix) IN(\n>> SELECT LOWER\n>> (filetype_suffix) FROM tbl_filetype_suffix WHERE\n>> filetype_suffix_index IS TRUE\n>> ) AND fk_archive_id\n>> = 115 ORDER BY fk_tar_id\n>\n> Do you really need the lower() calls there? The planner is getting \n> the\n> wrong estimate for the selectivity of the IN-clause, which is likely\n> because it has no statistics about lower(file_suffix) or\n> lower(filetype_suffix).\n>\n> If you don't want to constrain the data to be already lower'd, then\n> setting up functional indexes on the two lower() expressions should\n> prompt ANALYZE to track stats for them.\n>\n\nOK, thanx for the tip. I actually think that all the suffixes are \nlower case so the lower should go.\nBut would this really impact the sequential scan on tbl_file_structure?\n\n->Seq Scan on tbl_file_structure (cost=0.00..167417.09 rows=7902309 \nwidth=16) (actual time=9.581..33702.852 rows=7801334 loops=1)\n\nAt what point does the planner choose seq scans? 
I've seen the \nplanner use seq scan even though only 1% of the joining tables rows \nare selected.\nIf the filter gives me 70k rows from tbl_file and tbl_file_structure \nhas 8 million rows why does the planner choose seq scans?\n\nCheers,\nHenrik\n\n\n\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Mon, 6 Aug 2007 23:29:45 +0200", "msg_from": "Henrik Zagerholm <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." }, { "msg_contents": "Henrik Zagerholm <[email protected]> writes:\n> At what point does the planner choose seq scans?\n\nWhen it thinks it's cheaper than the other way. There's no hard and\nfast answer. The immediate problem you've got is that the estimated\nsize of the tbl_file/tbl_filetype_suffix join is off by a factor of\nalmost 20 (8061 vs 486). The plan that you think would be faster\ninvolves an inner indexscan on the larger table for each result row from\nthat join, and therefore this error translates directly to a 20x\noverestimate of its cost, and therefore the planner avoids that in favor\nof a hash join that indeed is more efficient when there are lots of rows\nto be joined.\n\nIt may well be that you also need to adjust random_page_cost and/or\neffective_cache_size so that the planner's estimates of indexscan vs\nseqscan costs are more in line with reality on your machine. But it's\na capital error to tinker with those numbers on the basis of an example\nwhere the rowcount estimates are so far off. (Actually I'd not advise\nchanging them on the basis of *any* single test case, you need to look\nat average behaviors. Get the count estimates fixed first and then\nsee where you are.)\n\nIt's also not impossible that the planner is right and the seqscan is\nbetter than a lot of index probes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2007 18:23:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Planner making wrong decisions 8.2.4. Insane cost\n\tcalculations." } ]
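
Tom Lane's functional-index suggestion above, written out; the index names are made up, while the tables and columns come from the thread:

    CREATE INDEX tbl_file_lower_suffix_idx
        ON tbl_file (lower(file_suffix));
    CREATE INDEX tbl_filetype_suffix_lower_idx
        ON tbl_filetype_suffix (lower(filetype_suffix));
    ANALYZE tbl_file;
    ANALYZE tbl_filetype_suffix;
    -- with the expression indexes in place, ANALYZE also gathers statistics for
    -- lower(file_suffix) and lower(filetype_suffix), which should tighten the
    -- IN-clause selectivity estimate and, with it, the seqscan-vs-indexscan choice

If the suffixes are in fact stored lowercase already, dropping the lower() calls from the query is the simpler fix, as discussed above.
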
[ { "msg_contents": "I have a table with 4,889,820 records in it. The\ntable also has 47 fields. I'm having problems with\nupdate performance. Just as a test, I issued the\nfollowing update:\n\nupdate valley set test='this is a test'\n\nThis took 905641 ms. Isn't that kind of slow? There\naren't any indexes, triggers, constraints or anything\non this table. The version of Postgres is \"PostgreSQL\n8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe\n(GCC) 3.4.2 (mingw-special)\". The operating\nenvironment is Windows 2003 Standard Edition w/service\npack 2. It is 2.20 Ghz with 1.0 GB of RAM. Here is\nthe results from Explain:\n\n\"Seq Scan on valley (cost=0.00..1034083.57\nrows=4897257 width=601)\"\n\nHere are the settings in the postgresql.conf. Any\nideas or is this the expected speed?\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*'\t\t# what IP address(es) to\nlisten on; \n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\n\t\t\t\t\t# (change requires restart)\nport = 5432\t\t\t\t# (change requires restart)\nmax_connections = 20\t\t\t# (change requires restart)\n# Note: increasing max_connections costs ~400 bytes of\nshared memory per \n# connection slot, plus lock space (see\nmax_locks_per_transaction). You\n# might also need to raise shared_buffers to support\nmore connections.\n#superuser_reserved_connections = 3\t# (change requires\nrestart)\n#unix_socket_directory = ''\t\t# (change requires\nrestart)\n#unix_socket_group = ''\t\t\t# (change requires restart)\n#unix_socket_permissions = 0777\t\t# octal\n\t\t\t\t\t# (change requires restart)\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\t\t\t\t\t# (change requires restart)\n\n# - Security & Authentication -\n\n#authentication_timeout = 1min\t\t# 1s-600s\n#ssl = off\t\t\t\t# (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\t\t# (change requires restart)\n#krb_srvname = 'postgres'\t\t# (change requires restart)\n#krb_server_hostname = ''\t\t# empty string matches any\nkeytab entry\n\t\t\t\t\t# (change requires restart)\n#krb_caseins_users = off\t\t# (change requires restart)\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in\nseconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 512MB\t\t\t# min 128kB or\nmax_connections*16kB\n\t\t\t\t\t# (change requires restart)\ntemp_buffers = 8MB\t\t\t# min 800kB\nmax_prepared_transactions = 5\t\t# can be 0 or more\n\t\t\t\t\t# (change requires restart)\n# Note: increasing max_prepared_transactions costs\n~600 bytes of shared memory\n# per transaction slot, plus lock space (see\nmax_locks_per_transaction).\nwork_mem = 8MB\t\t\t\t# min 64kB\nmaintenance_work_mem = 16MB\t\t# min 1MB\n#max_stack_depth = 4MB\t\t\t# min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 700000\t\t# min max_fsm_relations*16, 6\nbytes 
each\n\t\t\t\t\t# (change requires restart)\nmax_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\t\t\t\t\t# (change requires restart)\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 1000\t\t# min 25\n\t\t\t\t\t# (change requires restart)\n#shared_preload_libraries = ''\t\t# (change requires\nrestart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers\nscanned/round\n#bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max\nwritten/round\n#bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers\nscanned/round\n#bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max\nwritten/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = off\t\t\t\t# turns forced synchronization on or\noff\n#wal_sync_method = fsync\t\t# the default is the first\noption \n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\nfull_page_writes = off\t\t\t# recover from partial page\nwrites\n#wal_buffers = 64kB\t\t\t# min 32kB\n\t\t\t\t\t# (change requires restart)\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 60\t\t# in logfile segments, min\n1, 16MB each\ncheckpoint_timeout = 5min\t\t# range 30s-1h\ncheckpoint_warning = 0\t\t# 0 is off\n\n# - Archiving -\n\narchive_command = ''\t\t# command to use to archive a\nlogfile segment\narchive_timeout = 0\t\t# force a logfile segment switch\nafter this\n\t\t\t\t# many seconds; 0 is off\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0\t\t\t# measured on an arbitrary\nscale\n#random_page_cost = 4.0\t\t\t# same scale as above\n#cpu_tuple_cost = 0.01\t\t\t# same scale as above\n#cpu_index_tuple_cost = 0.005\t\t# same scale as above\n#cpu_operator_cost = 0.0025\t\t# same scale as above\neffective_cache_size = 32MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on\neffort\n#geqo_generations = 0\t\t\t# selects default based on\neffort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t\t# range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables collapsing of\nexplicit \n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\nlog_destination 
= 'stderr'\t\t# Valid values are\ncombinations of \n\t\t\t\t\t# stderr, syslog and eventlog, \n\t\t\t\t\t# depending on platform.\n\n# This is used when logging to stderr:\nredirect_stderr = on\t\t\t# Enable capturing of stderr\ninto log \n\t\t\t\t\t# files\n\t\t\t\t\t# (change requires restart)\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log'\t\t# Directory where log files\nare written\n\t\t\t\t\t# Can be absolute or relative to PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log\nfile name pattern.\n\t\t\t\t\t# Can include strftime() escapes\n#log_truncate_on_rotation = off # If on, any existing\nlog file of the same \n\t\t\t\t\t# name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to. But\n\t\t\t\t\t# such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on restarts\n\t\t\t\t\t# or size-driven rotation. Default is\n\t\t\t\t\t# off, meaning append to existing files\n\t\t\t\t\t# in all cases.\n#log_rotation_age = 1d\t\t\t# Automatic rotation of\nlogfiles will \n\t\t\t\t\t# happen after that time. 0 to \n\t\t\t\t\t# disable.\n#log_rotation_size = 10MB\t\t# Automatic rotation of\nlogfiles will \n\t\t\t\t\t# happen after that much log\n\t\t\t\t\t# output. 0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t\t# Values, in order of\ndecreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\n#log_min_messages = notice\t\t# Values, in order of\ndecreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or\nverbose messages\n\n#log_min_error_statement = error\t# Values in order of\nincreasing severity:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic (effectively off)\n\n#log_min_duration_statement = -1\t# -1 is disabled, 0\nlogs all statements\n\t\t\t\t\t# and their durations.\n\n#silent_mode = off\t\t\t# DO NOT USE without syslog or \n\t\t\t\t\t# redirect_stderr\n\t\t\t\t\t# (change requires restart)\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\nlog_line_prefix = '%t '\t\t\t# Special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = PID\n\t\t\t\t\t# %t = timestamp (no milliseconds)\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session id\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %x = transaction id\n\t\t\t\t\t# %q = stop here in non-session \n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_statement = 'none'\t\t\t# none, ddl, mod, all\n#log_hostname = off\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#stats_command_string = on\n#update_process_title = on\n\nstats_start_collector = on\t\t# needed for block or row\nstats\n\t\t\t\t\t# (change requires restart)\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off\t# (change requires\nrestart)\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on\t\t\t# enable autovacuum subprocess?\n\t\t\t\t\t# 'on' requires stats_start_collector\n\t\t\t\t\t# and stats_row_level to also be on\n#autovacuum_naptime = 1min\t\t# time between autovacuum\nruns\n#autovacuum_vacuum_threshold = 500\t# min # of tuple\nupdates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 250\t# min # of tuple\nupdates before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.2\t# fraction of\nrel size before \n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.1\t# fraction of\nrel size before \n\t\t\t\t\t# analyze\n#autovacuum_freeze_max_age = 200000000\t# maximum XID\nage before forced vacuum\n\t\t\t\t\t# (change requires restart)\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum\ncost delay for \n\t\t\t\t\t# autovacuum, -1 means use \n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum\ncost limit for \n\t\t\t\t\t# autovacuum, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled\n#vacuum_freeze_min_age = 100000000\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#timezone = unknown\t\t\t# actually, defaults to TZ \n\t\t\t\t\t# environment setting\n#timezone_abbreviations = 'Default' # select the\nset of available timezone\n\t\t\t\t\t# abbreviations. 
Currently, there are\n\t\t\t\t\t# Default\n\t\t\t\t\t# Australia\n\t\t\t\t\t# India\n\t\t\t\t\t# However you can also create your own\n\t\t\t\t\t# file in share/timezonesets/.\n#extra_float_digits = 0\t\t\t# min -15, max 2\n#client_encoding = sql_ascii\t\t# actually, defaults to\ndatabase\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they\nmight be changed\nlc_messages = 'C'\t\t\t# locale for system error message \n\t\t\t\t\t# strings\nlc_monetary = 'C'\t\t\t# locale for monetary formatting\nlc_numeric = 'C'\t\t\t# locale for number formatting\nlc_time = 'C'\t\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\nmax_locks_per_transaction = 384\t\t# min 10\n\t\t\t\t\t# (change requires restart)\n# Note: each lock table slot uses ~270 bytes of shared\nmemory, and there are\n# max_locks_per_transaction * (max_connections +\nmax_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding\t# on, off, or\nsafe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#standard_conforming_strings = off\n#regex_flavor = advanced\t\t# advanced, extended, or\nbasic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom\nvariable class names\n\n\n\n \n____________________________________________________________________________________\nBe a better Heartthrob. Get better relationship answers from someone who knows. Yahoo! Answers - Check it out. \nhttp://answers.yahoo.com/dir/?link=list&sid=396545433\n", "msg_date": "Tue, 7 Aug 2007 05:58:35 -0700 (PDT)", "msg_from": "Mark Makarowsky <[email protected]>", "msg_from_op": true, "msg_subject": "Update table performance" }, { "msg_contents": "Mark,\n\nYou are not alone in the fact that when you post your system\nspecifications, CPU and memory are always listed while the\ndisk I/O subsystem invariably is not. This is a very disk\nintensive operation and I suspect that your disk system is\nmaxed-out. If you want it faster, you will need more I/O\ncapacity.\n\nRegards,\nKen\n\nOn Tue, Aug 07, 2007 at 05:58:35AM -0700, Mark Makarowsky wrote:\n> I have a table with 4,889,820 records in it. The\n> table also has 47 fields. I'm having problems with\n> update performance. Just as a test, I issued the\n> following update:\n> \n> update valley set test='this is a test'\n> \n> This took 905641 ms. Isn't that kind of slow? There\n> aren't any indexes, triggers, constraints or anything\n> on this table. The version of Postgres is \"PostgreSQL\n> 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe\n> (GCC) 3.4.2 (mingw-special)\". The operating\n> environment is Windows 2003 Standard Edition w/service\n> pack 2. It is 2.20 Ghz with 1.0 GB of RAM. 
Here is\n> the results from Explain:\n> \n> \"Seq Scan on valley (cost=0.00..1034083.57\n> rows=4897257 width=601)\"\n> \n> Here are the settings in the postgresql.conf. Any\n", "msg_date": "Tue, 7 Aug 2007 08:15:37 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "Mark Makarowsky wrote:\n> I have a table with 4,889,820 records in it. The\n> table also has 47 fields. I'm having problems with\n> update performance. Just as a test, I issued the\n> following update:\n> \n> update valley set test='this is a test'\n> \n> This took 905641 ms. Isn't that kind of slow?\n\nThe limiting factor here will be how fast you can write to your disk. \nLet's see: 5 million rows in ~900 seconds, that's about 5500 \nrows/second. Now, you don't say how large your rows are, but assuming \neach row is say 1kB that'd be 5.5MB/sec - or not brilliant. Simplest way \nto find out total activity is check how much disk space PG is using \nbefore and after the update.\n\nWhat you'll need to do is monitor disk activity, in particular how many \nwrites and how much time the processor spends waiting on writes to complete.\n\nIf your standard usage pattern is to update a single field in all rows \nof a large table, then performance isn't going to be sub-second I'm afraid.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 07 Aug 2007 14:33:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "Hi,\n\nupdate valley set test='this is a test'\n\nSuch query updates ALL of your records in the table. \n5 million records * 47 fields - that can be several gigabytes of data.\nThe system has to scan that gigabytes to change every record. This is a huge \ntask. Try vacuuming and see if it helps. It can help a lot, if you perform \nsuch 'whole table updates' often.\n\nBest regards,\nPiotr Kolaczkowski\n", "msg_date": "Tue, 7 Aug 2007 15:44:17 +0200", "msg_from": "Piotr =?iso-8859-2?q?Ko=B3aczkowski?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On 8/7/07, Mark Makarowsky <[email protected]> wrote:\n> I have a table with 4,889,820 records in it. The\n> table also has 47 fields. I'm having problems with\n> update performance. Just as a test, I issued the\n> following update:\n>\n> update valley set test='this is a test'\n>\n> This took 905641 ms. Isn't that kind of slow? There\n> aren't any indexes, triggers, constraints or anything\n> on this table. The version of Postgres is \"PostgreSQL\n> 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe\n> (GCC) 3.4.2 (mingw-special)\". The operating\n> environment is Windows 2003 Standard Edition w/service\n> pack 2. It is 2.20 Ghz with 1.0 GB of RAM. Here is\n> the results from Explain:\n>\n> \"Seq Scan on valley (cost=0.00..1034083.57\n> rows=4897257 width=601)\"\n\nHave you done this a few times? You could easily have a very large\nand bloated table if you do this several times in a row. That would\nexplain the slow performance. If you're going to do a lot of updates\nwithout where clauses on large tables, you'll need to run a vacuum\nright afterwards to clean things up.\n\nI see that you included a lot about your machine, but you didn't\ninclude any specs on your disk subsystem. 
When it comes to update\nspeed, the disk subsystem is probably the most important part.\n\nNote also that Windows is still not the preferred platform for\npostgresql from a performance perspective (actually, the only database\nwhere that's true is MS-SQL really).\n\nHave you run any benchmarks on your disk subsystem to see how fast it is?\n", "msg_date": "Tue, 7 Aug 2007 09:53:31 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Tuesday 07 August 2007 05:58, Mark Makarowsky \n<[email protected]> wrote:\n> I have a table with 4,889,820 records in it. The\n> table also has 47 fields. I'm having problems with\n> update performance. Just as a test, I issued the\n> following update:\n>\n> update valley set test='this is a test'\n>\n> This took 905641 ms. Isn't that kind of slow? \n\nPostgreSQL has to write a full new version of every row that gets updated. \nUpdates are, therefore, relatively slow. \n\nI'm guessing you're doing this on a single SATA drive, too, which probably \ndoesn't help.\n\n-- \n\"If a nation values anything more than freedom, it will lose its freedom;\nand the irony of it is that if it is comfort or money that it values more,\nit will lose that too.\" -- Somerset Maugham, Author\n\n", "msg_date": "Tue, 7 Aug 2007 09:03:49 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:\n> Mark Makarowsky wrote:\n> >I have a table with 4,889,820 records in it. The\n> >table also has 47 fields. I'm having problems with\n> >update performance. Just as a test, I issued the\n> >following update:\n> >\n> >update valley set test='this is a test'\n> >\n> >This took 905641 ms. Isn't that kind of slow?\n> \n> The limiting factor here will be how fast you can write to your disk. \n\nWell, very possibly how fast you can read, too. Using your assumption of\n1k per row, 5M rows means 5G of data, which might well not fit in\nmemory. And if the entire table's been updated just once before, even\nwith vacuuming you're now at 10G of data.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 7 Aug 2007 12:59:14 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "[email protected] (Mark Makarowsky) writes:\n> I have a table with 4,889,820 records in it. The\n> table also has 47 fields. I'm having problems with\n> update performance. Just as a test, I issued the\n> following update:\n>\n> update valley set test='this is a test'\n>\n> This took 905641 ms. Isn't that kind of slow? There\n> aren't any indexes, triggers, constraints or anything\n> on this table. The version of Postgres is \"PostgreSQL\n> 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe\n> (GCC) 3.4.2 (mingw-special)\". The operating\n> environment is Windows 2003 Standard Edition w/service\n> pack 2. It is 2.20 Ghz with 1.0 GB of RAM. Here is\n> the results from Explain:\n>\n> \"Seq Scan on valley (cost=0.00..1034083.57\n> rows=4897257 width=601)\"\n>\n> Here are the settings in the postgresql.conf. Any\n> ideas or is this the expected speed?\n\nHmm. 
\n\n- You asked to update 4,889,820 records.\n\n- It's a table consisting of 8.5GB of data (based on the cost info)\n\nFor this to take 15 minutes doesn't seem particularly outrageous.\n-- \noutput = (\"cbbrowne\" \"@\" \"acm.org\")\nhttp://cbbrowne.com/info/oses.html\nRules of the Evil Overlord #65. \"If I must have computer systems with\npublically available terminals, the maps they display of my complex\nwill have a room clearly marked as the Main Control Room. That room\nwill be the Execution Chamber. The actual main control room will be\nmarked as Sewage Overflow Containment.\" <http://www.eviloverlord.com/>\n", "msg_date": "Tue, 07 Aug 2007 14:03:06 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On 8/7/07, Decibel! <[email protected]> wrote:\n> On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:\n> > Mark Makarowsky wrote:\n> > >I have a table with 4,889,820 records in it. The\n> > >table also has 47 fields. I'm having problems with\n> > >update performance. Just as a test, I issued the\n> > >following update:\n> > >\n> > >update valley set test='this is a test'\n> > >\n> > >This took 905641 ms. Isn't that kind of slow?\n> >\n> > The limiting factor here will be how fast you can write to your disk.\n>\n> Well, very possibly how fast you can read, too. Using your assumption of\n> 1k per row, 5M rows means 5G of data, which might well not fit in\n> memory. And if the entire table's been updated just once before, even\n> with vacuuming you're now at 10G of data.\n\nWhere one might have to update just one column of a wide table often,\nit's often a good idea to move that column into its own dependent\ntable.\n\nOr just don't update one column of every row in table...\n", "msg_date": "Tue, 7 Aug 2007 14:36:18 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Tue, Aug 07, 2007 at 02:36:18PM -0500, Scott Marlowe wrote:\n> On 8/7/07, Decibel! <[email protected]> wrote:\n> > On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:\n> > > Mark Makarowsky wrote:\n> > > >I have a table with 4,889,820 records in it. The\n> > > >table also has 47 fields. I'm having problems with\n> > > >update performance. Just as a test, I issued the\n> > > >following update:\n> > > >\n> > > >update valley set test='this is a test'\n> > > >\n> > > >This took 905641 ms. Isn't that kind of slow?\n> > >\n> > > The limiting factor here will be how fast you can write to your disk.\n> >\n> > Well, very possibly how fast you can read, too. Using your assumption of\n> > 1k per row, 5M rows means 5G of data, which might well not fit in\n> > memory. And if the entire table's been updated just once before, even\n> > with vacuuming you're now at 10G of data.\n> \n> Where one might have to update just one column of a wide table often,\n> it's often a good idea to move that column into its own dependent\n> table.\n\nYeah, I've used \"vertical partitioning\" very successfully in the past,\nthough I've never done it for just a single field. I'll typically leave\nthe few most common fields in the \"main\" table and pull everything else\ninto a second table.\n\n> Or just don't update one column of every row in table...\n\nYeah, that too. 
:) Though sometimes you can't avoid it.\n\nI should mention that if you can handle splitting the update into\nmultiple transactions, that will help a lot since it means you won't be\ndoubling the size of the table.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 7 Aug 2007 16:42:28 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "Can you provide more detail on what you mean by your\ntwo suggestions below:\n\nYeah, I've used \"vertical partitioning\" very\nsuccessfully in the past, though I've never done it\nfor just a single field. I'll typically leave the few\nmost common fields in the \"main\" table and pull\neverything else into a second table.\n\nI should mention that if you can handle splitting the\nupdate into multiple transactions, that will help a\nlot since it means you won't be doubling the size of\nthe table.\n\nI guess I was just surprised by the speed it takes to\nupdate the field in Postgres since on an almost\nidentical table in FoxPro (400,000 records less), it\nupdates the table with the same exact update table\nstatement in about 4 minutes.\n--- Decibel! <[email protected]> wrote:\n\n> On Tue, Aug 07, 2007 at 02:36:18PM -0500, Scott\n> Marlowe wrote:\n> > On 8/7/07, Decibel! <[email protected]> wrote:\n> > > On Tue, Aug 07, 2007 at 02:33:19PM +0100,\n> Richard Huxton wrote:\n> > > > Mark Makarowsky wrote:\n> > > > >I have a table with 4,889,820 records in it. \n> The\n> > > > >table also has 47 fields. I'm having\n> problems with\n> > > > >update performance. Just as a test, I issued\n> the\n> > > > >following update:\n> > > > >\n> > > > >update valley set test='this is a test'\n> > > > >\n> > > > >This took 905641 ms. Isn't that kind of\n> slow?\n> > > >\n> > > > The limiting factor here will be how fast you\n> can write to your disk.\n> > >\n> > > Well, very possibly how fast you can read, too.\n> Using your assumption of\n> > > 1k per row, 5M rows means 5G of data, which\n> might well not fit in\n> > > memory. And if the entire table's been updated\n> just once before, even\n> > > with vacuuming you're now at 10G of data.\n> > \n> > Where one might have to update just one column of\n> a wide table often,\n> > it's often a good idea to move that column into\n> its own dependent\n> > table.\n> \n> Yeah, I've used \"vertical partitioning\" very\n> successfully in the past,\n> though I've never done it for just a single field.\n> I'll typically leave\n> the few most common fields in the \"main\" table and\n> pull everything else\n> into a second table.\n> \n> > Or just don't update one column of every row in \n> table...\n> \n> Yeah, that too. :) Though sometimes you can't avoid\n> it.\n> \n> I should mention that if you can handle splitting\n> the update into\n> multiple transactions, that will help a lot since it\n> means you won't be\n> doubling the size of the table.\n> -- \n> Decibel!, aka Jim Nasby \n> [email protected]\n> EnterpriseDB http://enterprisedb.com \n> 512.569.9461 (cell)\n> \n\n\n\n ____________________________________________________________________________________\nPark yourself in front of a world of choices in alternative vehicles. Visit the Yahoo! 
Auto Green Center.\nhttp://autos.yahoo.com/green_center/ \n", "msg_date": "Tue, 7 Aug 2007 16:13:00 -0700 (PDT)", "msg_from": "Mark Makarowsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Aug 7, 2007, at 6:13 PM, Mark Makarowsky wrote:\n\n> Can you provide more detail on what you mean by your\n> two suggestions below:\n>\n> Yeah, I've used \"vertical partitioning\" very\n> successfully in the past, though I've never done it\n> for just a single field. I'll typically leave the few\n> most common fields in the \"main\" table and pull\n> everything else into a second table.\n\nVertical partitioning is where you split up your table on disk by \ncolumns, i.e on the vertical lines. He quoted it because Postgres \ndoesn't actually support it transparently but you can always fake it \nby splitting up your table. For example, given the following table \nwherein column bar gets updated a lot but the others don't:\n\ncreate table foo (\nid\tint \tnot null,\nbar\tint,\nbaz \tint,\n\nprimary key (id)\n);\n\nYou could split it up like so:\n\ncreate table foo_a (\nid \tint,\nbaz\tint,\n\nprimary key (id)\n);\n\ncreate table foo_b (\nfoo_id\tint,\nbar\t\tint,\n\nforeign key foo_a_id (foo_id) references foo_a (id)\n);\n\nThe reason you'd ever want to do this is that when Postgres goes to \nupdate a row what it actually does is inserts a new row with the new \nvalue(s) that you changed and marks the old one as deleted. So, if \nyou have a wide table and frequently update only certain columns, \nyou'll take a performance hit as you're having to re-write a lot of \nstatic values.\n\n>\n> I should mention that if you can handle splitting the\n> update into multiple transactions, that will help a\n> lot since it means you won't be doubling the size of\n> the table.\n\nAs I mentioned above, when you do an update you're actually inserting \na new row and deleting the old one. That deleted row is still \nconsidered part of the table (for reasons of concurrency, read up on \nthe concurrency chapter in the manual for the details) and once it is \nno longer visible by any live transactions can be re-used by future \ninserts. So, if you update one column on every row of a one million \nrow table all at once, you have to allocate and write out one million \nnew rows. But, if you do the update a quarter million at a time, the \nlast three updates would be able to re-use many of the rows deleted \nin earlier updates.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Tue, 7 Aug 2007 20:46:20 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "Erik Jones wrote:\n> Decibel! wrote:\n>> I should mention that if you can handle splitting the\n>> update into multiple transactions, that will help a\n>> lot since it means you won't be doubling the size of\n>> the table.\n> \n> As I mentioned above, when you do an update you're actually inserting a\n> new row and deleting the old one. That deleted row is still considered\n> part of the table (for reasons of concurrency, read up on the\n> concurrency chapter in the manual for the details) and once it is no\n> longer visible by any live transactions can be re-used by future\n> inserts. 
So, if you update one column on every row of a one million row\n> table all at once, you have to allocate and write out one million new\n> rows. But, if you do the update a quarter million at a time, the last\n> three updates would be able to re-use many of the rows deleted in\n> earlier updates.\n\nOnly if you vacuum between the updates.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 08 Aug 2007 09:00:48 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Aug 8, 2007, at 3:00 AM, Heikki Linnakangas wrote:\n\n> Erik Jones wrote:\n>> Decibel! wrote:\n>>> I should mention that if you can handle splitting the\n>>> update into multiple transactions, that will help a\n>>> lot since it means you won't be doubling the size of\n>>> the table.\n>>\n>> As I mentioned above, when you do an update you're actually \n>> inserting a\n>> new row and deleting the old one. That deleted row is still \n>> considered\n>> part of the table (for reasons of concurrency, read up on the\n>> concurrency chapter in the manual for the details) and once it is no\n>> longer visible by any live transactions can be re-used by future\n>> inserts. So, if you update one column on every row of a one \n>> million row\n>> table all at once, you have to allocate and write out one million new\n>> rows. But, if you do the update a quarter million at a time, the \n>> last\n>> three updates would be able to re-use many of the rows deleted in\n>> earlier updates.\n>\n> Only if you vacuum between the updates.\n\nThis is true. In fact, the chapter on Routine Database Maintenance \ntasks that discusses vacuuming explains all of this.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Wed, 8 Aug 2007 10:15:41 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Tue, Aug 07, 2007 at 08:46:20PM -0500, Erik Jones wrote:\n> Vertical partitioning is where you split up your table on disk by \n> columns, i.e on the vertical lines. He quoted it because Postgres \n> doesn't actually support it transparently but you can always fake it \n> by splitting up your table. For example, given the following table \n> wherein column bar gets updated a lot but the others don't:\n> \n> create table foo (\n> id\tint \tnot null,\n> bar\tint,\n> baz \tint,\n> \n> primary key (id)\n> );\n> \n> You could split it up like so:\n> \n> create table foo_a (\n> id \tint,\n> baz\tint,\n> \n> primary key (id)\n> );\n> \n> create table foo_b (\n> foo_id\tint,\n> bar\t\tint,\n> \n> foreign key foo_a_id (foo_id) references foo_a (id)\n> );\n\nFWIW, the cases where I've actually used this have been on much wider\ntables, and a number of the attributes are in-frequently accessed. An\nexample would be if you keep snail-mail address info for users; you\nprobably don't use those fields very often, so they would be good\ncandidates for going into a second table.\n\nWhen does it actually make sense to use this? When you do a *lot* with a\nsmall number of fields in the table. In this example, perhaps you very\nfrequently need to look up either user_name or user_id, probably via\njoins. 
Having a table with just name, id, perhaps password and a few\nother fields might add up to 50 bytes per row (with overhead), while\naddress information by itself could easily be 50 bytes. So by pushing\nthat out to another table, you cut the size of the main table in half.\nThat means more efficient use of cache, faster seqscans, etc.\n\nThe case Erik is describing is more unique to PostgreSQL and how it\nhandles MVCC. In some cases, splitting a frequently updated row out to a\nseparate table might not gain as much once we get HOT, but it's still a\ngood tool to consider. Depending on what you're doing another useful\ntechnique is to not update the field as often by logging updates to be\nperformed into a separate table and periodically processing that\ninformation into the main table.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Wed, 8 Aug 2007 12:28:14 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On 8/8/07, Mark Makarowsky <[email protected]> wrote:\n> Can you provide more detail on what you mean by your\n> two suggestions below:\n>\n> Yeah, I've used \"vertical partitioning\" very\n> successfully in the past, though I've never done it\n> for just a single field. I'll typically leave the few\n> most common fields in the \"main\" table and pull\n> everything else into a second table.\n>\n> I should mention that if you can handle splitting the\n> update into multiple transactions, that will help a\n> lot since it means you won't be doubling the size of\n> the table.\n>\n> I guess I was just surprised by the speed it takes to\n> update the field in Postgres since on an almost\n> identical table in FoxPro (400,000 records less), it\n> updates the table with the same exact update table\n> statement in about 4 minutes.\n\nFoxPro is a single process DBF based system with some sql access.\nWhen you update th records, it updates them in place since all the\nrecords are fixed size and padded. Be careful with this\ncomparison...while certain operations like the above may feel faster,\nthe locking in foxpro is extremely crude compared to PostgreSQL.\nThere are many other things about dbf systems in general which are\npretty lousy from performance perspective.\n\nThat said, 'update' is the slowest operation for postgresql relative\nto other databases that are not MVCC. This is balanced by extremely\nefficient locking and good performance under multi user loads.\nPostgreSQL likes to be used a certain way...you will find that when\nused properly it is extremely fast.\n\nkeep an eye for the HOT feature which will hopefully make 8.3 that\nwill highly reduce the penalty for (small) updates in many cases.\n\nmerlin\n", "msg_date": "Thu, 9 Aug 2007 18:04:09 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:\n>keep an eye for the HOT feature which will hopefully make 8.3 that\n>will highly reduce the penalty for (small) updates in many cases.\n\nIs there an overview somewhere about how this feature works and what it \nis expected to do? There have been a lot of references to it over time, \nand it's possible to understand it if you follow list traffic over time, \nbut starting cold it's hard to know what it is. The name was poorly \nchosen as far as google is concerned. 
:)\n\nMike Stone\n", "msg_date": "Thu, 09 Aug 2007 09:13:44 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On 8/9/07, Michael Stone <[email protected]> wrote:\n> On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:\n> >keep an eye for the HOT feature which will hopefully make 8.3 that\n> >will highly reduce the penalty for (small) updates in many cases.\n>\n> Is there an overview somewhere about how this feature works and what it\n> is expected to do? There have been a lot of references to it over time,\n> and it's possible to understand it if you follow list traffic over time,\n> but starting cold it's hard to know what it is. The name was poorly\n> chosen as far as google is concerned. :)\n\nThis is what I found when I went looking for info earlier:\nhttp://archives.postgresql.org/pgsql-patches/2007-07/msg00142.php\nhttp://archives.postgresql.org/pgsql-patches/2007-07/msg00360.php\n", "msg_date": "Thu, 9 Aug 2007 06:41:50 -0700", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" }, { "msg_contents": "On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:\n> That said, 'update' is the slowest operation for postgresql relative\n> to other databases that are not MVCC.\n\nActually, it depends on how you do MVCC. In Oracle, DELETE is actually\nthe most expensive operation, because they have to not only remove the\nrow from the heap, they have to copy it to the undo log. And they need\nto do something with indexes as well. Whereas we just update 4 bytes in\nthe heap and that's it.\n\nAn UPDATE in Oracle OTOH just needs to store whatever fields have\nchanged in the undo log. If you haven't messed with indexed fields, it\ndoesn't have to touch those either.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 9 Aug 2007 10:32:15 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance" } ]
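Pulling the batching advice from the thread above into one place: a rough sketch of splitting the full-table UPDATE into several transactions with a plain VACUUM between them, assuming valley has an indexed integer key column named id (the thread never shows the table definition):

BEGIN;
UPDATE valley SET test = 'this is a test' WHERE id BETWEEN 1 AND 1000000;
COMMIT;
VACUUM valley;

BEGIN;
UPDATE valley SET test = 'this is a test' WHERE id BETWEEN 1000001 AND 2000000;
COMMIT;
VACUUM valley;
-- ...and so on for the remaining ranges

Because each batch commits before the next starts, the VACUUM in between lets later batches reuse the space freed by earlier ones instead of doubling the table on disk, which is the effect Decibel! and Heikki describe.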
[ { "msg_contents": "\nHi folks,\n\n\nI've coded an auction crawler, which queries several auction\nplatforms for user-defined keywords. The results are then \nput into queues an sorted out from there. For example each\nauction query can be passed along with an maxprice value, \nwhich is put into the result records. Every few minutes an\nfilter moves those result records where the article reached\nthe maxprice away to another queue. \n\nThe blacklist filter (which makes me headaches) moves those \nrecords where article's title matches some regex'es to another \nqueue. One of the users has more than 630 blacklist entries,\nlinked to one regex about 24kb. I've tried differet approaches,\ncomparing to each single blacklist entry (join'ing the \nblacklist table) or comparing against one huge regex (put the\ncompiled regex'es per user to an separate table). Both run \nvery, very long (> 15mins) and make heavy load.\n\nMy current scheme (reduced to required stuff):\n\n* article table: article_id(oid) \n title(text)\n\t\t current_price(float)\n\t\t ... \n\n* user table: user_id(oid)\n username(text)\n ...\n\t\t \n* user_results table: article_id(oid)\n user_id(oid) \n\t\t username(oid) \n\t\t queue(oid) <-- only scanning 'incoming'\n seen(boolean) <-- seen results are skipped\n ...\n\t\t \n* user_db_list: username(text)\n dbname(text)\n\t\t data(text)\n\t\t ...\n\t\t\n* heap: name(text)\n data(text)\n\n\nThis is the explain analyze output of the compiled-regex approach:\n(the compiled regex is stored in the \"heap\" table)\n\nauctionwatch=> explain analyze update base.user_results set queue='FOO' \n WHERE queue = 'incoming' AND \n\t NOT seen AND \n base.user_results.article_id = base.articles.inode_id AND \n\t\t base.articles.end_time > current_timestamp AND \n\t\t base.articles.title ~ (\n\t\t SELECT data FROM base.heap WHERE name = \n\t\t\t 'blacklist.title::'||base.user_results.username);\n\nHash Join (cost=2131.38..7622.69 rows=22 width=56) (actual time=1040416.087..1128977.760 rows=1 loops=1)\n Hash Cond: (\"outer\".article_id = \"inner\".inode_id)\n Join Filter: (\"inner\".title ~ (subplan))\n -> Seq Scan on user_results (cost=0.00..593.08 rows=11724 width=56) (actual time=0.104..518.036 rows=11189 loops=1)\n\tFilter: ((queue = 'incoming'::text) AND (NOT seen))\n -> Hash (cost=2014.41..2014.41 rows=8787 width=57) (actual time=250.946..250.946 rows=0 loops=1)\n\t-> Seq Scan on articles (cost=0.00..2014.41 rows=8787 width=57) (actual time=0.702..232.754 rows=8663 loops=1)\n\t Filter: (end_time > ('now'::text)::timestamp(6) with time zone)\n SubPlan\n -> Seq Scan on heap (cost=0.00..1.01 rows=1 width=32) (actual time=0.070..0.072 rows=1 loops=5998)\n Filter: (name = ('blacklist.title::'::text || $0))\n\nTotal runtime: 1129938.362 ms\n\t\t\t \n\nAnd the approach via joining the regex table:\n\nauctionwatch=> explain analyze update base.user_results set queue = 'FOO' \n WHERE queue = 'incoming' AND \n\t NOT seen AND \n\t\t base.user_results.article_id = base.articles.inode_id AND \n\t\t base.articles.end_time > current_timestamp AND \n\t\t base.articles.title ~ base.user_db_list.data AND \n\t\t base.user_db_list.username = base.user_results.username AND \n\t\t base.user_db_list.dbname = 'BLACKLIST.TITLE' ;\n\nHash Join (cost=3457.12..11044097.45 rows=3619812 width=56) (actual time=90458.408..126119.167 rows=2 loops=1)\n Hash Cond: (\"outer\".username = \"inner\".username)\n Join Filter: (\"inner\".title ~ \"outer\".data)\n -> Seq Scan on user_db_list (cost=0.00..5268.16 rows=186333 width=51) 
(actual time=512.939..514.394 rows=634 loops=1)\n Filter: (dbname = 'BLACKLIST.TITLE'::text)\n -> Hash (cost=3373.49..3373.49 rows=4254 width=109) (actual time=466.177..466.177 rows=0 loops=1)\n -> Hash Join (cost=2221.01..3373.49 rows=4254 width=109) (actual time=225.006..439.334 rows=6023 loops=1)\n Hash Cond: (\"outer\".article_id = \"inner\".inode_id)\n -> Seq Scan on user_results (cost=0.00..593.08 rows=11724 width=56) (actual time=0.155..85.865 rows=11223 loops=1)\n Filter: ((queue = 'incoming'::text) AND (NOT seen))\n -> Hash (cost=2099.20..2099.20 rows=9127 width=57) (actual time=205.996..205.996 rows=0 loops=1)\n -> Seq Scan on articles (cost=0.00..2099.20 rows=9127 width=57) (actual time=0.373..187.468 rows=8662 loops=1)\n Filter: (end_time > ('now'::text)::timestamp(6) with time zone)\n Total runtime: 126921.854 ms\n \t\t\t \t\t \t \t\t \t \t \t \n \t \nI'm not sure what \"Total runtime\" means. Is it the time the analyze\ntook or the query will take to execute ?\n\nIf it's really the execution time, then the second query would be\nmuch faster (about 2mins vs. 18mins). But I really wonder, why \nis processing one huge regex so dramatically slow ?\n\n\nBTW: in some tables I'm using the username instead (or parallel\nto) the numerical id to skip joins against the user table. But\nI'm not sure if this wise for performance.\n\n\nAny hints for futher optimization appreciated :)\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service - http://www.metux.de/\n---------------------------------------------------------------------\n Please visit the OpenSource QM Taskforce:\n \thttp://wiki.metux.de/public/OpenSource_QM_Taskforce\n Patches / Fixes for a lot dozens of packages in dozens of versions:\n\thttp://patches.metux.de/\n---------------------------------------------------------------------\n", "msg_date": "Wed, 8 Aug 2007 14:16:11 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Implementing an regex filter" }, { "msg_contents": "Enrico Weigelt wrote:\n> Hi folks,\n> \n> \n> Any hints for futher optimization appreciated :)\n> \n> \n> thx\n\n\nIt doesn't look like you have any indexes - I'd add one to at least \narticles.title and blacklist.title to start with and probably also \nuser_results.article_id and articles.inode_id.\n\n-- \nPaul Lambert\nDatabase Administrator\nAutoLedgers\n\n", "msg_date": "Thu, 09 Aug 2007 06:13:35 +0800", "msg_from": "Paul Lambert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implementing an regex filter" } ]
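Paul's index suggestion for the blacklist query above, written out as a sketch; the index names are invented, and a btree cannot speed up the title ~ data comparison itself, so the aim is only to shrink the row sets that get joined before the regex is applied:

CREATE INDEX user_results_incoming_idx
    ON base.user_results (article_id) WHERE queue = 'incoming' AND NOT seen;
CREATE INDEX articles_end_time_idx ON base.articles (end_time);
CREATE INDEX user_db_list_lookup_idx ON base.user_db_list (dbname, username);
ANALYZE base.user_results;
ANALYZE base.articles;
ANALYZE base.user_db_list;

Whether the planner actually uses these depends on how selective the filters are; if most of user_results sits in the 'incoming' queue, the seq scans in the posted plans may still be the cheaper choice.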
[ { "msg_contents": "\nHi folks,\n\n\nI'm often using writable views as interfaces to clients, so \nthey only see \"virtual\" objects and never have to cope with\nthe actual storage, ie. to give some client an totally \ndenormalized view of certain things, containing only those \ninformation required for certain kind of operations. \n\nThis method is nice for creating easy and robust client \ninterfaces - internal schema changes are not visible to \nthe client. In situations when many, many clients - often\ncoded/maintained by different people - have to access an\ndatabase which is still under development (typical for \nmany inhouse applications), it helps to circument interface\ninstabilities.\n\nNow I've got the strange feeling that this makes updates\nslow, since it always has to run the whole view query to\nfetch an record to be updated (ie. to get OLD.*).\n\nCould anyone with some deep insight please give me some \ndetails about that issue ?\n\n\ncu \n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service - http://www.metux.de/\n---------------------------------------------------------------------\n Please visit the OpenSource QM Taskforce:\n \thttp://wiki.metux.de/public/OpenSource_QM_Taskforce\n Patches / Fixes for a lot dozens of packages in dozens of versions:\n\thttp://patches.metux.de/\n---------------------------------------------------------------------\n", "msg_date": "Wed, 8 Aug 2007 14:40:54 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on writable views" }, { "msg_contents": "Enrico Weigelt wrote:\n> I'm often using writable views as interfaces to clients, so \n> they only see \"virtual\" objects and never have to cope with\n> the actual storage, ie. to give some client an totally \n> denormalized view of certain things, containing only those \n> information required for certain kind of operations. \n> \n> This method is nice for creating easy and robust client \n> interfaces - internal schema changes are not visible to \n> the client. In situations when many, many clients - often\n> coded/maintained by different people - have to access an\n> database which is still under development (typical for \n> many inhouse applications), it helps to circument interface\n> instabilities.\n> \n> Now I've got the strange feeling that this makes updates\n> slow, since it always has to run the whole view query to\n> fetch an record to be updated (ie. to get OLD.*).\n\nThere is some overhead in rewriting the query, but it shouldn't be\nsignificantly slower than issuing the statements behind the view\ndirectly. I wouldn't worry about it, unless you have concrete evidence\nthat it's causing problems.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 11 Aug 2007 08:42:54 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on writable views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHeikki Linnakangas wrote:\n> Enrico Weigelt wrote:\n>> I'm often using writable views as interfaces to clients, so \n>> they only see \"virtual\" objects and never have to cope with\n>> the actual storage, ie. to give some client an totally \n>> denormalized view of certain things, containing only those \n>> information required for certain kind of operations. 
\n\n>> Now I've got the strange feeling that this makes updates\n>> slow, since it always has to run the whole view query to\n>> fetch an record to be updated (ie. to get OLD.*).\n> \n> There is some overhead in rewriting the query, but it shouldn't be\n> significantly slower than issuing the statements behind the view\n> directly. I wouldn't worry about it, unless you have concrete evidence\n> that it's causing problems.\n\nI don't know about that, at least when using rules for partitioning the\nimpact can be significant in comparison to triggers.\n\nIt may make sense for him to push this stuff to stored procs instead.\n\nJoshua D. Drake\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGvcCRATb/zqfZUUQRAqngAKCKZG1LkeBd6/Qyghv/GzPBp4qCGACfS1Ar\ntXJSi/ynIQlAkATIv2yKd7M=\n=lYbI\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 11 Aug 2007 06:58:41 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on writable views" }, { "msg_contents": "On Aug 11, 2007, at 8:58 AM, Joshua D. Drake wrote:\n> Heikki Linnakangas wrote:\n>> Enrico Weigelt wrote:\n>>> I'm often using writable views as interfaces to clients, so\n>>> they only see \"virtual\" objects and never have to cope with\n>>> the actual storage, ie. to give some client an totally\n>>> denormalized view of certain things, containing only those\n>>> information required for certain kind of operations.\n>\n>>> Now I've got the strange feeling that this makes updates\n>>> slow, since it always has to run the whole view query to\n>>> fetch an record to be updated (ie. to get OLD.*).\n>>\n>> There is some overhead in rewriting the query, but it shouldn't be\n>> significantly slower than issuing the statements behind the view\n>> directly. I wouldn't worry about it, unless you have concrete \n>> evidence\n>> that it's causing problems.\n>\n> I don't know about that, at least when using rules for partitioning \n> the\n> impact can be significant in comparison to triggers.\n\nThat's because you have to re-evaluate the input query for each rule \nthat's defined, so even if you only have rules for 2 partitions in a \ntable (which is really about the minimum you can have, at least for \nsome period of overlap surrounding the time when you switch to a new \npartition), you're looking at evaluating every input query twice.\n\nIn this case, the rules presumably are just simply re-directing DML, \nso there'd only be one rule in play at a time. That means the only \nreal overhead is in the rewrite engine.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Mon, 13 Aug 2007 17:17:16 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on writable views" } ]
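A bare-bones example of the kind of rule-based writable view the thread above is discussing; every object name here is invented for illustration:

CREATE TABLE person (
    person_id      integer PRIMARY KEY,
    full_name      text,
    internal_notes text
);

CREATE VIEW person_iface AS
    SELECT person_id, full_name FROM person;

CREATE RULE person_iface_upd AS ON UPDATE TO person_iface
    DO INSTEAD
    UPDATE person SET full_name = NEW.full_name
    WHERE person_id = OLD.person_id;

An UPDATE against person_iface is rewritten into the UPDATE on person; OLD and NEW are resolved against the view definition during query rewriting, and for a simple view like this the planner normally flattens the result down to much the same plan as updating person directly, which matches Heikki's point that the overhead is mostly in the rewriter.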
[ { "msg_contents": "Hello Group,\n\nI'm new in PostgreSQL Business, therefore please forgive me a \"newbie\"\nQuestion. I have a table with ca. 1.250.000 Records. When I execute\na \"select count (*) from table\" (with pgAdmin III) it takes about 40\nsecs.\nI think that takes much to long. Can you please give me hints, where\nI can search for Improvements?\n\nTIA, Det\n\n", "msg_date": "Wed, 08 Aug 2007 06:01:25 -0700", "msg_from": "runic <[email protected]>", "msg_from_op": true, "msg_subject": "select count(*) performance" }, { "msg_contents": "> I'm new in PostgreSQL Business, therefore please forgive me a \"newbie\"\n> Question. I have a table with ca. 1.250.000 Records. When I execute\n> a \"select count (*) from table\" (with pgAdmin III) it takes about 40\n> secs.\n> I think that takes much to long. Can you please give me hints, where\n> I can search for Improvements?\n> TIA, Det\n\nmaybe try change shared_buffers and test it,\n\nor you can create trigger on this table after insert and delete\nif insert then increse value in another table \"counts\"\nif delete then decrese\nand now you dont must execute select count(*) ...\nbut execute\nselect my_count from counts where tablename='your_table'\n\nsj\n\n\n", "msg_date": "Thu, 9 Aug 2007 16:44:41 +0200", "msg_from": "\"slawekj\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": "On 8/8/07, runic <[email protected]> wrote:\n> Hello Group,\n>\n> I'm new in PostgreSQL Business, therefore please forgive me a \"newbie\"\n> Question. I have a table with ca. 1.250.000 Records. When I execute\n> a \"select count (*) from table\" (with pgAdmin III) it takes about 40\n> secs.\n> I think that takes much to long. Can you please give me hints, where\n> I can search for Improvements?\n\nThis is a FAQ. This operation is optimized in some other database\nengines but not in PostgreSQL due to way the locking engine works.\nThere are many workarounds, maybe the easiest is to get an approximate\ncount using\nselect reltuples from pg_class where relname = 'your_table' and relkind = 'r';\n\nmerlin\n", "msg_date": "Fri, 10 Aug 2007 17:14:21 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": "runic wrote:\n\n>Hello Group,\n>\n>I'm new in PostgreSQL Business, therefore please forgive me a \"newbie\"\n>Question. I have a table with ca. 1.250.000 Records. When I execute\n>a \"select count (*) from table\" (with pgAdmin III) it takes about 40\n>secs.\n>I think that takes much to long. Can you please give me hints, where\n>I can search for Improvements?\n>\n>TIA, Det\n> \n>\n\n1) VACUUM FULL the table, maybe the whole database.\n2) Buy more/faster hard disks\n\nThe problem is that count(*) on a table has to scan the whole table, due \nto the fact that Postgres uses MVCC for it's concurrency control. This \nis normally a huge win- but one of the few places where it's a loss is \ndoing count(*) over a whole table. In this case, Postgres has no choice \nbut to inspect each and every row to see if it's live or not, and thus \nhas no choice but to read in the whole table.\n\nIf you've been doing a lot of inserts, updates, and/or deletes to the \ntable, and you either don't have autovacuum turned on or agressive \nenough, the table can be littered with a bunch of dead rows that haven't \nbeen deleted yet. 
Postgres still has to read in those rows to make sure \nthey're dead, so it's easy for it to have to read many multiples of the \nnumber of live rows in the table. What vacuum does is it goes through \nand deletes those dead rows.\n\nIf that isn't the problem, then it's just that you have to read the \nwhole table. If the rows are large enough, and the disk subsystem is \nslow enough, this can just take a while. My advice in this case to buy \neither more disks and/or faster disks, to speed up the reading of the table.\n\nBrian\n\n", "msg_date": "Fri, 10 Aug 2007 09:08:18 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": ">>> On Fri, Aug 10, 2007 at 8:08 AM, in message\n<[email protected]>, Brian Hurt <[email protected]>\nwrote: \n> runic wrote:\n> \n>>I have a table with ca. 1.250.000 Records. When I execute\n>>a \"select count (*) from table\" (with pgAdmin III) it takes about 40\n>>secs.\n>>I think that takes much to long. Can you please give me hints, where\n>>I can search for Improvements?\n>>\n>>TIA, Det\n> \n> 1) VACUUM FULL the table, maybe the whole database.\n> 2) Buy more/faster hard disks\n \nDet,\n \nForty seconds is a long time for only 1.25 million rows. I just ran a count\nagainst a production database and it took 2.2 seconds to get a count from a\ntable with over 6.8 million rows.\n \nIn addtion to the advice given by Brian, I would recommend:\n \n3) Make sure you are using a recent version of PostgreSQL. There have been\nsigniificant performance improvements lately. If you're not on 8.2.4, I'd\nrecommend you convert while your problem table is that small.\n \n4) Make sure you read up on PostgreSQL configuration. Like many products,\nPostgreSQL has a default configuration which is designed to start on just\nabout anything, but which will not perform well without tuning.\n \n5) Consider whether you need an exact count. I just selected the reltuples\nvalue from pg_class for the table with the 6.8 million rows, and the value I\ngot was only off from the exact count by 0.0003%. That's close enough for\nmany purposes, and the run time is negligible.\n \n6) If you're looking at adding hardware, RAM helps. It can help a lot.\n \nI'll finish by restating something Brian mentioned. VACUUM. Use autovacuum.\nYou should also do scheduled VACUUM ANALYZE, under the database superuser\nlogin, on a regular basis. We do it nightly on most of our databases.\nWithout proper maintenance, dead space will accumulate and destroy your\nperformance.\n \nAlso, I don't generally recommend VACUUM FULL. If a table needs agressive\nmaintenance, I recommend using CLUSTER, followed by an ANALYZE. It does a\nbetter job of cleaning things up, and is often much faster.\n \nI hope this helps.\n \n-Kevin\n \n\n", "msg_date": "Sat, 11 Aug 2007 10:32:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": "Hello Group,\n\nI've tried the VACUUM ANALYSE, that doesn't help\nmuch, but VACUUM FULL improves Performance down\nfrom about 40 secs to 8. 
I think in future I would\nuse the reltuples value from pg_class for the table.\n\nThanks a lot for your answers and a good Sunday,\nDet\n", "msg_date": "Sat, 11 Aug 2007 17:54:41 +0200", "msg_from": "Detlef Rudolph <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": "On Aug 11, 5:54 pm, Detlef Rudolph <[email protected]> wrote:\n> Hello Group,\n>\n> I've tried the VACUUM ANALYSE, that doesn't help\n> much, but VACUUM FULL improves Performance down\n> from about 40 secs to 8. I think in future I would\n> use the reltuples value from pg_class for the table.\n>\n> Thanks a lot for your answers and a good Sunday,\n> Det\n\njust do not forget, that reltuples is count and updated in pg_class\nonly during the vacuuming or analyzing of a table... so the value is\nonly an APPROXIMATE....\n\n-- Valentine\n\n", "msg_date": "Mon, 13 Aug 2007 11:38:09 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": ">>> valgog <[email protected]> 08/13/07 6:38 AM >>> \nOn Aug 11, 5:54 pm, Detlef Rudolph <[email protected]> wrote:\n>\n> I've tried the VACUUM ANALYSE, that doesn't help\n> much, but VACUUM FULL improves Performance down\n> from about 40 secs to 8.\n \nDet,\n \nI don't think anyone meant to suggest that VACUUM ANALYZE would improve the\ncount speed on a table which had become bloated, but its routine use would\nPREVENT a table from becoming bloated. Once bloat occurs, you need more\nagressive maintenance, like VACUUM FULL or CLUSTER.\n \nVACUUM FULL tends to cause index bloat, so you will probably see performance\nissues in other queries at the moment. You will probably need to REINDEX\nthe table or use CLUSTER to clean that up.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 13 Aug 2007 09:07:03 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" }, { "msg_contents": "On 8/11/07, Detlef Rudolph <[email protected]> wrote:\n> Hello Group,\n>\n> I've tried the VACUUM ANALYSE, that doesn't help\n> much, but VACUUM FULL improves Performance down\n> from about 40 secs to 8.\n\nIf vacuum full fixes a performance problem, then you have a regular\nvacuum problem of some sort.\n\nMake sure regular vacuum runs often enough and make sure your fsm\nsettings are high enough to allow it to reclaim all deleted tuples\nwhen it does run.\n", "msg_date": "Fri, 17 Aug 2007 20:14:01 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance" } ]
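
A minimal sketch of the trigger-maintained counter suggested in the thread above, assuming an 8.x server with plpgsql already installed; the names my_table, counts and my_count are placeholders rather than anything from the thread, and the counts row is seeded once before the trigger goes live:

    CREATE TABLE counts (tablename text PRIMARY KEY, my_count bigint NOT NULL);
    INSERT INTO counts VALUES ('my_table', (SELECT count(*) FROM my_table));

    CREATE OR REPLACE FUNCTION my_table_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE counts SET my_count = my_count + 1 WHERE tablename = 'my_table';
            RETURN NEW;
        ELSE  -- DELETE
            UPDATE counts SET my_count = my_count - 1 WHERE tablename = 'my_table';
            RETURN OLD;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER my_table_count_trig
        AFTER INSERT OR DELETE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE my_table_count();

    -- instead of SELECT count(*) FROM my_table:
    SELECT my_count FROM counts WHERE tablename = 'my_table';

The trade-off is that every insert and delete now updates the same counts row, so highly concurrent writers will serialize on it; the approach suits tables where a cheap, exact count matters more than raw write throughput.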
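
For the aggressive-maintenance advice in the same thread, the concrete commands look roughly like this; det_table and det_table_pkey are placeholder names, and the CLUSTER form shown is the pre-8.3 syntax current when this thread was written (8.3 and later use CLUSTER det_table USING det_table_pkey):

    -- routine upkeep, what autovacuum or a nightly job should already be doing:
    VACUUM ANALYZE det_table;

    -- cleanup of an already-bloated table, instead of VACUUM FULL:
    CLUSTER det_table_pkey ON det_table;
    ANALYZE det_table;

    -- if VACUUM FULL has already been run and left the indexes bloated:
    REINDEX TABLE det_table;

Note that CLUSTER and REINDEX both take exclusive locks and rewrite the underlying files, so they belong in a maintenance window rather than on a busy table during the day.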
[ { "msg_contents": "\nHello all,\n\nI am trying to enable capturing of the submitted code via an\napplication...how do I do this in Postgres? Performance is SLOW on my\nserver and I have autovacuum enabled as well as rebuilt indexes...whatelse\nshould be looked at?\n\nThanks...Michelle\n-- \nView this message in context: http://www.nabble.com/How-to-ENABLE-SQL-capturing----tf4238694.html#a12060736\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 8 Aug 2007 13:02:24 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "How to ENABLE SQL capturing???" }, { "msg_contents": "On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n> I am trying to enable capturing of the submitted code via an\n> application...how do I do this in Postgres? Performance is SLOW on my\n> server and I have autovacuum enabled as well as rebuilt indexes...whatelse\n> should be looked at?\n\nTry \"log_min_duration_statement = 100\" in postgresql.conf; it will show all\nstatements that take more than 100ms. Set to 0 to log _all_ statements, or\n-1 to turn the logging back off.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 8 Aug 2007 22:36:12 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "we currently have logging enabled for all queries over 100ms, and keep\nthe last 24 hours of logs before we rotate them. I've found this tool\nvery helpful in diagnosing new performance problems that crop up:\n\nhttp://pgfouine.projects.postgresql.org/\n\nBryan\n\nOn 8/8/07, Steinar H. Gunderson <[email protected]> wrote:\n> On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n> > I am trying to enable capturing of the submitted code via an\n> > application...how do I do this in Postgres? Performance is SLOW on my\n> > server and I have autovacuum enabled as well as rebuilt indexes...whatelse\n> > should be looked at?\n>\n> Try \"log_min_duration_statement = 100\" in postgresql.conf; it will show all\n> statements that take more than 100ms. Set to 0 to log _all_ statements, or\n> -1 to turn the logging back off.\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Wed, 8 Aug 2007 15:57:52 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "\nHello all,\n\nI have ENABLED this 'log_min_duration_statement = 100\" but I can't figure\nout WHERE it's writing the commands to ...I have it set to 'syslogs' but\nthis file is 0 bytes :confused:\n\nShould I set other parameters in my postgresql.conf file???\n\nThanks...Michelle\n\n\nBryan Murphy-3 wrote:\n> \n> we currently have logging enabled for all queries over 100ms, and keep\n> the last 24 hours of logs before we rotate them. I've found this tool\n> very helpful in diagnosing new performance problems that crop up:\n> \n> http://pgfouine.projects.postgresql.org/\n> \n> Bryan\n> \n> On 8/8/07, Steinar H. Gunderson <[email protected]> wrote:\n>> On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n>> > I am trying to enable capturing of the submitted code via an\n>> > application...how do I do this in Postgres? 
Performance is SLOW on my\n>> > server and I have autovacuum enabled as well as rebuilt\n>> indexes...whatelse\n>> > should be looked at?\n>>\n>> Try \"log_min_duration_statement = 100\" in postgresql.conf; it will show\n>> all\n>> statements that take more than 100ms. Set to 0 to log _all_ statements,\n>> or\n>> -1 to turn the logging back off.\n>>\n>> /* Steinar */\n>> --\n>> Homepage: http://www.sesse.net/\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/How-to-ENABLE-SQL-capturing----tf4238694.html#a12096180\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 10 Aug 2007 10:54:35 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "Michelle,\n\nWhat platform are you on? If you're on linux, than logging to syslog will \nlikely show up in the /var/log/messages file.\n\nOn Fri, 10 Aug 2007, smiley2211 wrote:\n\n>\n> Hello all,\n>\n> I have ENABLED this 'log_min_duration_statement = 100\" but I can't figure\n> out WHERE it's writing the commands to ...I have it set to 'syslogs' but\n> this file is 0 bytes :confused:\n>\n> Should I set other parameters in my postgresql.conf file???\n>\n> Thanks...Michelle\n>\n>\n> Bryan Murphy-3 wrote:\n>>\n>> we currently have logging enabled for all queries over 100ms, and keep\n>> the last 24 hours of logs before we rotate them. I've found this tool\n>> very helpful in diagnosing new performance problems that crop up:\n>>\n>> http://pgfouine.projects.postgresql.org/\n>>\n>> Bryan\n>>\n>> On 8/8/07, Steinar H. Gunderson <[email protected]> wrote:\n>>> On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n>>>> I am trying to enable capturing of the submitted code via an\n>>>> application...how do I do this in Postgres? Performance is SLOW on my\n>>>> server and I have autovacuum enabled as well as rebuilt\n>>> indexes...whatelse\n>>>> should be looked at?\n>>>\n>>> Try \"log_min_duration_statement = 100\" in postgresql.conf; it will show\n>>> all\n>>> statements that take more than 100ms. Set to 0 to log _all_ statements,\n>>> or\n>>> -1 to turn the logging back off.\n>>>\n>>> /* Steinar */\n>>> --\n>>> Homepage: http://www.sesse.net/\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>>\n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 10 Aug 2007 13:10:02 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "\nJeff,\n\nYou are CORRECT...my queries were going to /var/log/messages...had to get\nthe Linux Admin to grant me READ access to the file...\n\nThanks for your reply.\nMichelle.\n\n\nJeff Frost wrote:\n> \n> Michelle,\n> \n> What platform are you on? 
If you're on linux, than logging to syslog will \n> likely show up in the /var/log/messages file.\n> \n> On Fri, 10 Aug 2007, smiley2211 wrote:\n> \n>>\n>> Hello all,\n>>\n>> I have ENABLED this 'log_min_duration_statement = 100\" but I can't figure\n>> out WHERE it's writing the commands to ...I have it set to 'syslogs' but\n>> this file is 0 bytes :confused:\n>>\n>> Should I set other parameters in my postgresql.conf file???\n>>\n>> Thanks...Michelle\n>>\n>>\n>> Bryan Murphy-3 wrote:\n>>>\n>>> we currently have logging enabled for all queries over 100ms, and keep\n>>> the last 24 hours of logs before we rotate them. I've found this tool\n>>> very helpful in diagnosing new performance problems that crop up:\n>>>\n>>> http://pgfouine.projects.postgresql.org/\n>>>\n>>> Bryan\n>>>\n>>> On 8/8/07, Steinar H. Gunderson <[email protected]> wrote:\n>>>> On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n>>>>> I am trying to enable capturing of the submitted code via an\n>>>>> application...how do I do this in Postgres? Performance is SLOW on my\n>>>>> server and I have autovacuum enabled as well as rebuilt\n>>>> indexes...whatelse\n>>>>> should be looked at?\n>>>>\n>>>> Try \"log_min_duration_statement = 100\" in postgresql.conf; it will show\n>>>> all\n>>>> statements that take more than 100ms. Set to 0 to log _all_ statements,\n>>>> or\n>>>> -1 to turn the logging back off.\n>>>>\n>>>> /* Steinar */\n>>>> --\n>>>> Homepage: http://www.sesse.net/\n>>>>\n>>>> ---------------------------(end of\n>>>> broadcast)---------------------------\n>>>> TIP 6: explain analyze is your friend\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 3: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/docs/faq\n>>>\n>>>\n>>\n>>\n> \n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/How-to-ENABLE-SQL-capturing----tf4238694.html#a12099590\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 10 Aug 2007 14:43:13 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nsmiley2211 wrote:\n> Jeff,\n> \n> You are CORRECT...my queries were going to /var/log/messages...had to get\n> the Linux Admin to grant me READ access to the file...\n\nYou may want to actually get that to stop. Syslog is a notorious\nperformance bottleneck for postgresql.\n\n> \n> Thanks for your reply.\n> Michelle.\n> \n> \n> Jeff Frost wrote:\n>> Michelle,\n>>\n>> What platform are you on? 
If you're on linux, than logging to syslog will \n>> likely show up in the /var/log/messages file.\n>>\n>> On Fri, 10 Aug 2007, smiley2211 wrote:\n>>\n>>> Hello all,\n>>>\n>>> I have ENABLED this 'log_min_duration_statement = 100\" but I can't figure\n>>> out WHERE it's writing the commands to ...I have it set to 'syslogs' but\n>>> this file is 0 bytes :confused:\n>>>\n>>> Should I set other parameters in my postgresql.conf file???\n>>>\n>>> Thanks...Michelle\n>>>\n>>>\n>>> Bryan Murphy-3 wrote:\n>>>> we currently have logging enabled for all queries over 100ms, and keep\n>>>> the last 24 hours of logs before we rotate them. I've found this tool\n>>>> very helpful in diagnosing new performance problems that crop up:\n>>>>\n>>>> http://pgfouine.projects.postgresql.org/\n>>>>\n>>>> Bryan\n>>>>\n>>>> On 8/8/07, Steinar H. Gunderson <[email protected]> wrote:\n>>>>> On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:\n>>>>>> I am trying to enable capturing of the submitted code via an\n>>>>>> application...how do I do this in Postgres? Performance is SLOW on my\n>>>>>> server and I have autovacuum enabled as well as rebuilt\n>>>>> indexes...whatelse\n>>>>>> should be looked at?\n>>>>> Try \"log_min_duration_statement = 100\" in postgresql.conf; it will show\n>>>>> all\n>>>>> statements that take more than 100ms. Set to 0 to log _all_ statements,\n>>>>> or\n>>>>> -1 to turn the logging back off.\n>>>>>\n>>>>> /* Steinar */\n>>>>> --\n>>>>> Homepage: http://www.sesse.net/\n>>>>>\n>>>>> ---------------------------(end of\n>>>>> broadcast)---------------------------\n>>>>> TIP 6: explain analyze is your friend\n>>>>>\n>>>> ---------------------------(end of broadcast)---------------------------\n>>>> TIP 3: Have you checked our extensive FAQ?\n>>>>\n>>>> http://www.postgresql.org/docs/faq\n>>>>\n>>>>\n>>>\n>> -- \n>> Jeff Frost, Owner \t<[email protected]>\n>> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n>> Phone: 650-780-7908\tFAX: 650-649-1954\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>>\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGvN2qATb/zqfZUUQRAmxSAJ96tbd3n12W79mxtad4dtD0F/7w6wCeI1uj\nRpgRIKSMNrMHgm1wrCkqpjU=\n=gJD2\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 10 Aug 2007 14:50:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "On 8/10/07, Joshua D. Drake <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> smiley2211 wrote:\n> > Jeff,\n> >\n> > You are CORRECT...my queries were going to /var/log/messages...had to get\n> > the Linux Admin to grant me READ access to the file...\n>\n> You may want to actually get that to stop. Syslog is a notorious\n> performance bottleneck for postgresql.\n\nCan you elaborate? 
The only reference to this I could find was a\nthread from 2004 where someone wasn't rotating his logs.\n\n-Jonathan\n", "msg_date": "Mon, 13 Aug 2007 09:04:07 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJonathan Ellis wrote:\n> On 8/10/07, Joshua D. Drake <[email protected]> wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>>\n>> smiley2211 wrote:\n>>> Jeff,\n>>>\n>>> You are CORRECT...my queries were going to /var/log/messages...had to get\n>>> the Linux Admin to grant me READ access to the file...\n>> You may want to actually get that to stop. Syslog is a notorious\n>> performance bottleneck for postgresql.\n> \n> Can you elaborate? The only reference to this I could find was a\n> thread from 2004 where someone wasn't rotating his logs.\n\nI am not sure what to elaborate on :). Syslog is slow, logging to file\nisn't. Although both will certainly slow down your installation quite a\nbit, syslog will slow it down more.\n\nIf I recall correctly, it is because syslog is blocking.\n\nJoshua D. Drake\n\n\n\n> \n> -Jonathan\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGwHv6ATb/zqfZUUQRAgMlAKCcZpj+CCP50Deo/CsSCN21IyjrCACghXfN\nuJQ+qsu4FI4Kjf8fpNiWgnw=\n=BJ8E\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 13 Aug 2007 08:42:50 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n\n> If I recall correctly, it is because syslog is blocking.\n\nAre you sure it isn't just that syslog fsyncs its log files after every log\nmessage? I don't think the individual syslogs are synchronous but if syslog\nfalls behind the buffer will fill and throttle the sender.\n\nIf your Postgres data is on the same device as the syslogs those fsyncs will\nprobably cause a big slowdown directly on Postgres's I/O as well.\n\nYou can turn off the fsyncs in syslog by putting a - before the filename.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 13 Aug 2007 17:02:41 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGregory Stark wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n> \n>> If I recall correctly, it is because syslog is blocking.\n> \n> Are you sure it isn't just that syslog fsyncs its log files after every log\n> message?\n\nNope I am not sure at all ;). 
Darcy actually found the issue and can\nspeak better to it, I never use syslog and have always logged direct to\nfile.\n\n I don't think the individual syslogs are synchronous but if syslog\n> falls behind the buffer will fill and throttle the sender.\n> \n> If your Postgres data is on the same device as the syslogs those fsyncs will\n> probably cause a big slowdown directly on Postgres's I/O as well.\n> \n> You can turn off the fsyncs in syslog by putting a - before the filename.\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGwIOnATb/zqfZUUQRAqWqAKCEhoW/01Hc//cDEpREit8ipn2SZwCfUxPE\n1Ir6eyuD4EcShwsn4sMAeKA=\n=W2cJ\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 13 Aug 2007 09:15:35 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to ENABLE SQL capturing???" } ]
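
Putting the thread's advice together, the relevant postgresql.conf settings for the 8.2-era releases under discussion look roughly like this; the directory, filename pattern and log_line_prefix are illustrative defaults rather than requirements, and the exact prefix pgfouine expects should be checked against its documentation:

    # capture slow statements in PostgreSQL-managed log files rather than syslog
    log_destination = 'stderr'
    redirect_stderr = on                  # renamed logging_collector in later releases
    log_directory = 'pg_log'              # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
    log_min_duration_statement = 100      # in milliseconds; 0 logs everything, -1 disables
    log_line_prefix = '%t [%p]: [%l-1] '  # timestamp, pid and log line number for analyzers

    # if syslog is kept instead:
    #   log_destination = 'syslog'
    #   syslog_facility = 'LOCAL0'
    # and in /etc/syslog.conf, the leading '-' is the trick Gregory describes,
    # which stops syslog fsyncing after every message:
    #   local0.*    -/var/log/postgresql.log

With redirect_stderr the server writes and rotates its own log files under the data directory, which avoids the syslog overhead debated above and gives a log analyzer a single predictable file to work from.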
[ { "msg_contents": "I saw an interesting topic in the archives on best bang for the buck for \n$20k.. about a year old now.\n\nSo whats the thoughts on a current combined rack/disks/cpu combo around \nthe $10k-$15k point, currently?\n\nI can configure up a Dell poweredge 2900 for $9k, but am wondering if \nI'm missing out on something better.\n( 9k spent with mr dell that gets you 2x quad core xeon X535s, so thats \n8 cores total, with 24gb of memory\nand 8x 146gb 15k serial attached scsi connected, raid10, attached to a \nperc 5/i with redundant power supplies,\nall in what looks like a 3u chassis). But the system is pretty maxxed \nout like that, no room for future expansion.\n\nbetter options? or a better balance for a pure db box that is mostly \nsmaller reads and large indexes?\n\nThis would be to go into a rack with existing but older equipment that \ncan be warm standby\nso I don't have to split the cost here for getting redundancy.\n\nthanks for any 2007 advice!\n\n", "msg_date": "Wed, 08 Aug 2007 23:34:42 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": true, "msg_subject": "mid 2007 \"best bang for the buck\" hardware opinions" }, { "msg_contents": "On 8/8/07, justin <[email protected]> wrote:\n> I saw an interesting topic in the archives on best bang for the buck for\n> $20k.. about a year old now.\n>\n> So whats the thoughts on a current combined rack/disks/cpu combo around\n> the $10k-$15k point, currently?\n>\n> I can configure up a Dell poweredge 2900 for $9k, but am wondering if\n> I'm missing out on something better.\n> ( 9k spent with mr dell that gets you 2x quad core xeon X535s, so thats\n> 8 cores total, with 24gb of memory\n> and 8x 146gb 15k serial attached scsi connected, raid10, attached to a\n> perc 5/i with redundant power supplies,\n> all in what looks like a 3u chassis). But the system is pretty maxxed\n> out like that, no room for future expansion.\n>\n> better options? or a better balance for a pure db box that is mostly\n> smaller reads and large indexes?\n\nThat's not much kit for $20k.\n\nI went to www.aberdeeninc.com and speced out a box with 24 750G\nbarracudas, battery backed cache RAID and dual Quad core 2.66GHz\nxeons, and 16 Gigs of ram for $15k. Another 16gigs of ram would\nprobably put it around 18k or so, but they didn't have the config\noption enabled there.\n\nI consider Dells to be mediocre hardware with mediocre support.\n\nThere are lots of medium sized integration shops out there that make\nbetter servers with lots more storage for a lot less.\n", "msg_date": "Wed, 8 Aug 2007 23:06:30 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mid 2007 \"best bang for the buck\" hardware opinions" }, { "msg_contents": "No it wouldn't be much kit for $20k\n\nbut that example is currently coming in at $9k ... (the $20k referred to \nis last years topic).\n\nI think I can spend up to $15k but it would have to be clearly \nfaster/better/more expandable than this config.\nOr I can spend $9k with someone else if I can convince myself it is just \na better option.\n\nThe sense that there might be better options out there, I've no doubt in..\nit is why I posted hoping for some solid leads on what & why.\n\nScott Marlowe wrote:\n> On 8/8/07, justin <[email protected]> wrote:\n> \n>> I saw an interesting topic in the archives on best bang for the buck for\n>> $20k.. 
about a year old now.\n>>\n>> So whats the thoughts on a current combined rack/disks/cpu combo around\n>> the $10k-$15k point, currently?\n>>\n>> I can configure up a Dell poweredge 2900 for $9k, but am wondering if\n>> I'm missing out on something better.\n>> ( 9k spent with mr dell that gets you 2x quad core xeon X535s, so thats\n>> 8 cores total, with 24gb of memory\n>> and 8x 146gb 15k serial attached scsi connected, raid10, attached to a\n>> perc 5/i with redundant power supplies,\n>> all in what looks like a 3u chassis). But the system is pretty maxxed\n>> out like that, no room for future expansion.\n>>\n>> better options? or a better balance for a pure db box that is mostly\n>> smaller reads and large indexes?\n>> \n>\n> That's not much kit for $20k.\n>\n> I went to www.aberdeeninc.com and speced out a box with 24 750G\n> barracudas, battery backed cache RAID and dual Quad core 2.66GHz\n> xeons, and 16 Gigs of ram for $15k. Another 16gigs of ram would\n> probably put it around 18k or so, but they didn't have the config\n> option enabled there.\n>\n> I consider Dells to be mediocre hardware with mediocre support.\n>\n> There are lots of medium sized integration shops out there that make\n> better servers with lots more storage for a lot less.\n>\n> \n\n", "msg_date": "Thu, 09 Aug 2007 00:45:27 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: mid 2007 \"best bang for the buck\" hardware opinions" }, { "msg_contents": "\nOn Aug 8, 2007, at 11:34 PM, justin wrote:\n\n> So whats the thoughts on a current combined rack/disks/cpu combo \n> around the $10k-$15k point, currently?\n\nI just put into production testing this setup:\n\nSunFire X4100M2 (2x Opteron Dual core) with 20Gb RAM and an LSI PCI-e \ndual-channel 4Gb Fibre channel adapter, connected to Partners Data \nSystems' Triton 16FA4 RAID array. The Triton array consists of 16 \nSATA drives and connects out to 4Gb fibre channel. I run FreeBSD 6.2 \non it.\n\nIt is very fast. ;-)\n\nIt cost a bit over $20k.\n", "msg_date": "Thu, 9 Aug 2007 10:40:30 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mid 2007 \"best bang for the buck\" hardware opinions" } ]
[ { "msg_contents": "We have a 30 GB database (according to pg_database_size) running nicely\non a single Dell PowerEdge 2850 right now. This represents data\nspecific to 1 US state. We are in the process of planning a deployment\nthat will service all 50 US states.\n\nIf 30 GB is an accurate number per state that means the database size is\nabout to explode to 1.5 TB. About 1 TB of this amount would be OLAP\ndata that is heavy-read but only updated or inserted in batch. It is\nalso largely isolated to a single table partitioned on state. This\nportion of the data will grow very slowly after the initial loading. \n\nThe remaining 500 GB has frequent individual writes performed against\nit. 500 GB is a high estimate and it will probably start out closer to\n100 GB and grow steadily up to and past 500 GB.\n\nI am trying to figure out an appropriate hardware configuration for such\na database. Currently I am considering the following:\n\nPowerEdge 1950 paired with a PowerVault MD1000\n2 x Quad Core Xeon E5310\n16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\nPERC 5/E Raid Adapter\n2 x 146 GB SAS in Raid 1 for OS + logs.\nA bunch of disks in the MD1000 configured in Raid 10 for Postgres data.\n\nThe MD1000 holds 15 disks, so 14 disks + a hot spare is the max. With\n12 250GB SATA drives to cover the 1.5TB we would be able add another\n250GB of usable space for future growth before needing to get a bigger\nset of disks. 500GB drives would leave alot more room and could allow\nus to run the MD1000 in split mode and use its remaining disks for other\npurposes in the mean time. I would greatly appreciate any feedback with\nrespect to drive count vs. drive size and SATA vs. SCSI/SAS. The price\ndifference makes SATA awfully appealing.\n\nWe plan to involve outside help in getting this database tuned and\nconfigured, but want to get some hardware ballparks in order to get\nquotes and potentially request a trial unit.\n\nAny thoughts or recommendations? We are running openSUSE 10.2 with\nkernel 2.6.18.2-34.\n\nRegards,\n\nJoe Uhl\[email protected]\n\n", "msg_date": "Thu, 09 Aug 2007 15:47:09 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Dell Hardware Recommendations" }, { "msg_contents": "On Thu, Aug 09, 2007 at 03:47:09PM -0400, Joe Uhl wrote:\n> We have a 30 GB database (according to pg_database_size) running nicely\n> on a single Dell PowerEdge 2850 right now. This represents data\n> specific to 1 US state. We are in the process of planning a deployment\n> that will service all 50 US states.\n> \n> If 30 GB is an accurate number per state that means the database size is\n> about to explode to 1.5 TB. About 1 TB of this amount would be OLAP\n> data that is heavy-read but only updated or inserted in batch. It is\n> also largely isolated to a single table partitioned on state. This\n> portion of the data will grow very slowly after the initial loading. \n> \n> The remaining 500 GB has frequent individual writes performed against\n> it. 500 GB is a high estimate and it will probably start out closer to\n> 100 GB and grow steadily up to and past 500 GB.\n\nWhat kind of transaction rate are you looking at?\n\n> I am trying to figure out an appropriate hardware configuration for such\n> a database. 
Currently I am considering the following:\n> \n> PowerEdge 1950 paired with a PowerVault MD1000\n> 2 x Quad Core Xeon E5310\n> 16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\n\n16GB for 500GB of active data is probably a bit light.\n\n> PERC 5/E Raid Adapter\n> 2 x 146 GB SAS in Raid 1 for OS + logs.\n> A bunch of disks in the MD1000 configured in Raid 10 for Postgres data.\n> \n> The MD1000 holds 15 disks, so 14 disks + a hot spare is the max. With\n> 12 250GB SATA drives to cover the 1.5TB we would be able add another\n> 250GB of usable space for future growth before needing to get a bigger\n> set of disks. 500GB drives would leave alot more room and could allow\n> us to run the MD1000 in split mode and use its remaining disks for other\n> purposes in the mean time. I would greatly appreciate any feedback with\n> respect to drive count vs. drive size and SATA vs. SCSI/SAS. The price\n> difference makes SATA awfully appealing.\n\nWell, how does this compare with what you have right now? And do you\nexpect your query rate to be 50x what it is now, or higher?\n\n> We plan to involve outside help in getting this database tuned and\n> configured, but want to get some hardware ballparks in order to get\n> quotes and potentially request a trial unit.\n\nYou're doing a very wise thing by asking for information before\npurchasing (unfortunately, many people put that cart before the horse).\nThis list is a great resource for information, but there's no real\nsubstitute for working directly with someone and being able to discuss\nyour actual system in detail, so I'd suggest getting outside help\ninvolved before actually purchasing or even evaluating hardware. There's\na lot to think about beyond just drives and memory with the kind of\nexpansion you're looking at. For example, what ability do you have to\nscale past one machine? Do you have a way to control your growth rate?\nHow well will the existing design scale out? (Often times what is a good\ndesign for a smaller set of data is sub-optimal for a large set of\ndata.)\n\nSomething else that might be worth looking at is having your existing\nworkload modeled; that allows building a pretty accurate estimate of\nwhat kind of hardware would be required to hit a different workload.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 9 Aug 2007 16:15:53 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 8/9/07, Joe Uhl <[email protected]> wrote:\n> We have a 30 GB database (according to pg_database_size) running nicely\n> on a single Dell PowerEdge 2850 right now. This represents data\n> specific to 1 US state. We are in the process of planning a deployment\n> that will service all 50 US states.\n>\n> If 30 GB is an accurate number per state that means the database size is\n> about to explode to 1.5 TB. About 1 TB of this amount would be OLAP\n> data that is heavy-read but only updated or inserted in batch. It is\n> also largely isolated to a single table partitioned on state. This\n> portion of the data will grow very slowly after the initial loading.\n>\n> The remaining 500 GB has frequent individual writes performed against\n> it. 500 GB is a high estimate and it will probably start out closer to\n> 100 GB and grow steadily up to and past 500 GB.\n>\n> I am trying to figure out an appropriate hardware configuration for such\n> a database. 
Currently I am considering the following:\n>\n> PowerEdge 1950 paired with a PowerVault MD1000\n> 2 x Quad Core Xeon E5310\n> 16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\n> PERC 5/E Raid Adapter\n> 2 x 146 GB SAS in Raid 1 for OS + logs.\n> A bunch of disks in the MD1000 configured in Raid 10 for Postgres data.\n>\n> The MD1000 holds 15 disks, so 14 disks + a hot spare is the max. With\n> 12 250GB SATA drives to cover the 1.5TB we would be able add another\n> 250GB of usable space for future growth before needing to get a bigger\n> set of disks. 500GB drives would leave alot more room and could allow\n> us to run the MD1000 in split mode and use its remaining disks for other\n> purposes in the mean time. I would greatly appreciate any feedback with\n> respect to drive count vs. drive size and SATA vs. SCSI/SAS. The price\n> difference makes SATA awfully appealing.\n\nI'm getting a MD1000 tomorrow to play with for just this type of\nanalysis as it happens. First of all, move the o/s drives to the\nbackplane and get the cheapest available.\n\nI might consider pick up an extra perc 5/e, since the MD1000 is\nactive/active, and do either raid 10 or 05 with one of the raid levels\nin software. For example, two raid 5 volumes (hardware raid 5)\nstriped in software as raid 0. A 15k SAS drive is worth at least two\nSATA drives (unless they are raptors) for OLTP performance loads.\n\nWhere the extra controller especially pays off is if you have to\nexpand to a second tray. It's easy to add trays but installing\ncontrollers on a production server is scary.\n\nRaid 10 is usually better for databases but in my experience it's a\nroll of the dice. If you factor cost into the matrix a SAS raid 05\nmight outperform a SATA raid 10 because you are getting better storage\nutilization out of the drives (n - 2 vs. n / 2). Then again, you\nmight not.\n\nmerlin\n", "msg_date": "Thu, 9 Aug 2007 17:50:10 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On Thu, Aug 09, 2007 at 05:50:10PM -0400, Merlin Moncure wrote:\n> Raid 10 is usually better for databases but in my experience it's a\n> roll of the dice. If you factor cost into the matrix a SAS raid 05\n> might outperform a SATA raid 10 because you are getting better storage\n> utilization out of the drives (n - 2 vs. n / 2). Then again, you\n> might not.\n\nIt's going to depend heavily on the controller and the workload.\nTheoretically, if most of your writes are to stripes that the controller\nalready has cached then you could actually out-perform RAID10. But\nthat's a really, really big IF, because if the strip isn't in cache you\nhave to read the entire thing in before you can do the write... and that\ncosts *a lot*.\n\nAlso, a good RAID controller can spread reads out across both drives in\neach mirror on a RAID10. Though, there is an argument for not doing\nthat... it makes it much less likely that both drives in a mirror will\nfail close enough to each other that you'd lose that chunk of data.\n\nSpeaking of failures, keep in mind that a normal RAID5 puts you only 2\ndrive failures away from data loss, while with RAID10 you can\npotentially lose half the array without losing any data. 
If you do RAID5\nwith multiple parity copies that does change things; I'm not sure which\nis better at that point (I suspect it matters how many drives are\ninvolved).\n\nThe comment about the extra controller isn't a bad idea, although I\nwould hope that you'll have some kind of backup server available, which\nmakes an extra controller much less useful.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 9 Aug 2007 17:05:02 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 9-8-2007 23:50 Merlin Moncure wrote:\n> Where the extra controller especially pays off is if you have to\n> expand to a second tray. It's easy to add trays but installing\n> controllers on a production server is scary.\n\nFor connectivity-sake that's not a necessity. You can either connect \n(two?) extra MD1000's to your first MD1000 or you can use the second \nexternal SAS-port on your controller. Obviously it depends on the \ncontroller whether its good enough to just add the disks to it, rather \nthan adding another controller for the second tray. Whether the perc5/e \nis good enough for that, I don't know, we've only equipped ours with a \nsingle MD1000 holding 15x 15k rpm drives, but in our benchmarks it \nscaled pretty well going from a few to all 14 disks (+1 hotspare).\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 10 Aug 2007 00:21:01 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "Thanks for the input. Thus far we have used Dell but I would certainly\nbe willing to explore other options.\n\nI found a \"Reference Guide\" for the MD1000 from April, 2006 that\nincludes info on the PERC 5/E at:\n\nhttp://www.dell.com/downloads/global/products/pvaul/en/pvaul_md1000_solutions_guide.pdf\n\nTo answer the questions below:\n\n> How many users do you expect to hit the db at the same time?\nThere are 2 types of users. For roughly every 5000 active accounts, 10\nor fewer or those will have additional privileges. Only those more\nprivileged users interact substantially with the OLAP portion of the\ndatabase. For 1 state 10 concurrent connections was about the max, so\nif that holds for 50 states we are looking at 500 concurrent users as a\ntop end, with a very small fraction of those users interacting with the\nOLAP portion.\n\n> How big of a dataset will each one be grabbing at the same time?\nFor the OLTP data it is mostly single object reads and writes and\ngenerally touches only a few tables at a time.\n\n> Will your Perc RAID controller have a battery backed cache on board?\n> If so (and it better!) how big of a cache can it hold?\nAccording to the above link, it has a 256 MB cache that is battery\nbacked.\n\n> Can you split this out onto two different machines, one for the OLAP\n> load and the other for what I'm assuming is OLTP?\n> Can you physically partition this out by state if need be?\nRight now this system isn't in production so we can explore any option. \nWe are looking into splitting the OLAP and OLTP portions right now and I\nimagine physically splitting the partitions on the big OLAP table is an\noption as well.\n\nReally appreciate all of the advice. 
Before we pull the trigger on\nhardware we probably will get some external advice from someone but I\nknew this list would provide some excellent ideas and feedback to get us\nstarted.\n\nJoe Uhl\[email protected]\n\nOn Thu, 9 Aug 2007 16:02:49 -0500, \"Scott Marlowe\"\n<[email protected]> said:\n> On 8/9/07, Joe Uhl <[email protected]> wrote:\n> > We have a 30 GB database (according to pg_database_size) running nicely\n> > on a single Dell PowerEdge 2850 right now. This represents data\n> > specific to 1 US state. We are in the process of planning a deployment\n> > that will service all 50 US states.\n> >\n> > If 30 GB is an accurate number per state that means the database size is\n> > about to explode to 1.5 TB. About 1 TB of this amount would be OLAP\n> > data that is heavy-read but only updated or inserted in batch. It is\n> > also largely isolated to a single table partitioned on state. This\n> > portion of the data will grow very slowly after the initial loading.\n> >\n> > The remaining 500 GB has frequent individual writes performed against\n> > it. 500 GB is a high estimate and it will probably start out closer to\n> > 100 GB and grow steadily up to and past 500 GB.\n> >\n> > I am trying to figure out an appropriate hardware configuration for such\n> > a database. Currently I am considering the following:\n> >\n> > PowerEdge 1950 paired with a PowerVault MD1000\n> > 2 x Quad Core Xeon E5310\n> > 16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\n> > PERC 5/E Raid Adapter\n> > 2 x 146 GB SAS in Raid 1 for OS + logs.\n> > A bunch of disks in the MD1000 configured in Raid 10 for Postgres data.\n> >\n> > The MD1000 holds 15 disks, so 14 disks + a hot spare is the max. With\n> > 12 250GB SATA drives to cover the 1.5TB we would be able add another\n> > 250GB of usable space for future growth before needing to get a bigger\n> > set of disks. 500GB drives would leave alot more room and could allow\n> > us to run the MD1000 in split mode and use its remaining disks for other\n> > purposes in the mean time. I would greatly appreciate any feedback with\n> > respect to drive count vs. drive size and SATA vs. SCSI/SAS. The price\n> > difference makes SATA awfully appealing.\n> >\n> > We plan to involve outside help in getting this database tuned and\n> > configured, but want to get some hardware ballparks in order to get\n> > quotes and potentially request a trial unit.\n> >\n> > Any thoughts or recommendations? We are running openSUSE 10.2 with\n> > kernel 2.6.18.2-34.\n> \n> Some questions:\n> \n> How many users do you expect to hit the db at the same time?\n> How big of a dataset will each one be grabbing at the same time?\n> Will your Perc RAID controller have a battery backed cache on board?\n> If so (and it better!) how big of a cache can it hold?\n> Can you split this out onto two different machines, one for the OLAP\n> load and the other for what I'm assuming is OLTP?\n> Can you physically partition this out by state if need be?\n> \n> A few comments:\n> \n> I'd go with the bigger drives. Just as many, so you have spare\n> storage as you need it. 
you never know when you'll need to migrate\n> your whole data set from one pg db to another for testing etc...\n> extra space comes in REAL handy when things aren't quite going right.\n> With 10krpm 500 and 750 Gig drives you can use smaller partitions on\n> the bigger drives to short stroke them and often outrun supposedly\n> faster drives.\n> \n> The difference between SAS and SATA drives is MUCH less important than\n> the difference between one RAID controller and the next. It's not\n> likely the Dell is gonna come with the fastest RAID controllers\n> around, as they seem to still be selling Adaptec (buggy and\n> unreliable, avoid like the plague) and LSI (stable, moderately fast).\n> \n> I.e. I'd rather have 24 SATA disks plugged into a couple of big Areca\n> or 3ware (now escalade I think?) controllers than 8 SAS drives plugged\n> into any Adaptec controller.\n", "msg_date": "Thu, 09 Aug 2007 20:54:20 -0400", "msg_from": "\"Joe Uhl\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "oops\n\nOn 8/9/07, Decibel! <[email protected]> wrote:\n> You forgot the list. :)\n>\n> On Thu, Aug 09, 2007 at 05:29:18PM -0500, Scott Marlowe wrote:\n> > On 8/9/07, Decibel! <[email protected]> wrote:\n> >\n> > > Also, a good RAID controller can spread reads out across both drives in\n> > > each mirror on a RAID10. Though, there is an argument for not doing\n> > > that... it makes it much less likely that both drives in a mirror will\n> > > fail close enough to each other that you'd lose that chunk of data.\n> >\n> > I'd think that kind of failure mode is pretty uncommon, unless you're\n> > in an environment where physical shocks are common. which is not a\n> > typical database environment. (tell that to the guys writing a db for\n> > a modern tank fire control system though :) )\n> >\n> > > Speaking of failures, keep in mind that a normal RAID5 puts you only 2\n> > > drive failures away from data loss,\n> >\n> > Not only that, but the first drive failure puts you way down the list\n> > in terms of performance, where a single failed drive in a large\n> > RAID-10 only marginally affects performance.\n> >\n> > > while with RAID10 you can\n> > > potentially lose half the array without losing any data.\n> >\n> > Yes, but the RIGHT two drives can kill EITHER RAID 5 or RAID10.\n> >\n> > > If you do RAID5\n> > > with multiple parity copies that does change things; I'm not sure which\n> > > is better at that point (I suspect it matters how many drives are\n> > > involved).\n> >\n> > That's RAID6. The primary advantages of RAID6 over RAID10 or RAID5\n> > are two fold:\n> >\n> > 1: A single drive failure has no negative effect on performance, so\n> > the array is still pretty fast, especially for reads, which just suck\n> > under RAID 5 with a missing drive.\n> > 2: No two drive failures can cause loss of data. Admittedly, by the\n> > time the second drive fails, you're now running on the equivalent of a\n> > degraded RAID5, unless you've configured >2 drives for parity.\n> >\n> > On very large arrays (100s of drives), RAID6 with 2, 3, or 4 drives\n> > for parity makes some sense, since having that many extra drives means\n> > the RAID controller (SW or HW) can now have elections to decide which\n> > drive might be lying if you get data corruption.\n> >\n> > Note that you can also look into RAID10 with 3 or more drives per\n> > mirror. I.e. build 3 RAID-1 sets of 3 drives each, then you can lose\n> > any two drives and still stay up. 
Plus, on a mostly read database,\n> > where users might be reading the same drives but in different places,\n> > multi-disk RAID-1 makes sense under RAID-10.\n> >\n> > While I agree with Merlin that for OLTP a faster drive is a must, for\n> > OLAP, more drives is often the real key. The high aggregate bandwidth\n> > of a large array of SATA drives is an amazing thing to watch when\n> > running a reporting server with otherwise unimpressive specs.\n> >\n>\n> --\n> Decibel!, aka Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n>\n", "msg_date": "Thu, 9 Aug 2007 20:57:32 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Dell Hardware Recommendations" }, { "msg_contents": "oops, the the wrong list... now the right one.\n\nOn 8/9/07, Decibel! <[email protected]> wrote:\n> You forgot the list. :)\n>\n> On Thu, Aug 09, 2007 at 05:29:18PM -0500, Scott Marlowe wrote:\n> > On 8/9/07, Decibel! <[email protected]> wrote:\n> >\n> > > Also, a good RAID controller can spread reads out across both drives in\n> > > each mirror on a RAID10. Though, there is an argument for not doing\n> > > that... it makes it much less likely that both drives in a mirror will\n> > > fail close enough to each other that you'd lose that chunk of data.\n> >\n> > I'd think that kind of failure mode is pretty uncommon, unless you're\n> > in an environment where physical shocks are common. which is not a\n> > typical database environment. (tell that to the guys writing a db for\n> > a modern tank fire control system though :) )\n> >\n> > > Speaking of failures, keep in mind that a normal RAID5 puts you only 2\n> > > drive failures away from data loss,\n> >\n> > Not only that, but the first drive failure puts you way down the list\n> > in terms of performance, where a single failed drive in a large\n> > RAID-10 only marginally affects performance.\n> >\n> > > while with RAID10 you can\n> > > potentially lose half the array without losing any data.\n> >\n> > Yes, but the RIGHT two drives can kill EITHER RAID 5 or RAID10.\n> >\n> > > If you do RAID5\n> > > with multiple parity copies that does change things; I'm not sure which\n> > > is better at that point (I suspect it matters how many drives are\n> > > involved).\n> >\n> > That's RAID6. The primary advantages of RAID6 over RAID10 or RAID5\n> > are two fold:\n> >\n> > 1: A single drive failure has no negative effect on performance, so\n> > the array is still pretty fast, especially for reads, which just suck\n> > under RAID 5 with a missing drive.\n> > 2: No two drive failures can cause loss of data. Admittedly, by the\n> > time the second drive fails, you're now running on the equivalent of a\n> > degraded RAID5, unless you've configured >2 drives for parity.\n> >\n> > On very large arrays (100s of drives), RAID6 with 2, 3, or 4 drives\n> > for parity makes some sense, since having that many extra drives means\n> > the RAID controller (SW or HW) can now have elections to decide which\n> > drive might be lying if you get data corruption.\n> >\n> > Note that you can also look into RAID10 with 3 or more drives per\n> > mirror. I.e. build 3 RAID-1 sets of 3 drives each, then you can lose\n> > any two drives and still stay up. 
Plus, on a mostly read database,\n> > where users might be reading the same drives but in different places,\n> > multi-disk RAID-1 makes sense under RAID-10.\n> >\n> > While I agree with Merlin that for OLTP a faster drive is a must, for\n> > OLAP, more drives is often the real key. The high aggregate bandwidth\n> > of a large array of SATA drives is an amazing thing to watch when\n> > running a reporting server with otherwise unimpressive specs.\n> >\n>\n> --\n> Decibel!, aka Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n>\n", "msg_date": "Thu, 9 Aug 2007 20:58:19 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On Thu, Aug 09, 2007 at 08:58:19PM -0500, Scott Marlowe wrote:\n> > On Thu, Aug 09, 2007 at 05:29:18PM -0500, Scott Marlowe wrote:\n> > > On 8/9/07, Decibel! <[email protected]> wrote:\n> > >\n> > > > Also, a good RAID controller can spread reads out across both drives in\n> > > > each mirror on a RAID10. Though, there is an argument for not doing\n> > > > that... it makes it much less likely that both drives in a mirror will\n> > > > fail close enough to each other that you'd lose that chunk of data.\n> > >\n> > > I'd think that kind of failure mode is pretty uncommon, unless you're\n> > > in an environment where physical shocks are common. which is not a\n> > > typical database environment. (tell that to the guys writing a db for\n> > > a modern tank fire control system though :) )\n\nYou'd be surprised. I've seen more than one case of a bunch of drives\nfailing within a month, because they were all bought at the same time.\n\n> > > > while with RAID10 you can\n> > > > potentially lose half the array without losing any data.\n> > >\n> > > Yes, but the RIGHT two drives can kill EITHER RAID 5 or RAID10.\n\nSure, but the odds of that with RAID5 are 100%, while they're much less\nin a RAID10.\n\n> > > While I agree with Merlin that for OLTP a faster drive is a must, for\n> > > OLAP, more drives is often the real key. The high aggregate bandwidth\n> > > of a large array of SATA drives is an amazing thing to watch when\n> > > running a reporting server with otherwise unimpressive specs.\n\nTrue. In this case, the OP will probably want to have one array for the\nOLTP stuff and one for the OLAP stuff.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 9 Aug 2007 23:58:58 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On Thu, 9 Aug 2007, Joe Uhl wrote:\n\n> The MD1000 holds 15 disks, so 14 disks + a hot spare is the max. With \n> 12 250GB SATA drives to cover the 1.5TB we would be able add another \n> 250GB of usable space for future growth before needing to get a bigger \n> set of disks. 500GB drives would leave alot more room and could allow \n> us to run the MD1000 in split mode and use its remaining disks for other \n> purposes in the mean time. I would greatly appreciate any feedback with \n> respect to drive count vs. drive size and SATA vs. SCSI/SAS. The price \n> difference makes SATA awfully appealing.\n\nThe SATA II drives in the MD1000 all run at 7200 RPM, and are around \n0.8/GB (just grabbed a random quote from the configurator on their site \nfor all these) for each of the 250GB, 500GB, and 750GB capacities. 
If you \ncouldn't afford to fill the whole array with 500GB models, than it might \nmake sense to get the 250GB ones instead just to spread the load out over \nmore spindles; if you're filling it regardless, surely the reduction in \nstress over capacity issues of the 500GB models makes more sense. Also, \nusing the 500 GB models would make it much easier to only ever use 12 \nactive drives and have 3 hot spares, with less pressure to convert spares \ninto active storage; drives die in surprisingly correlated batches far too \noften to only have 1 spare IMHO.\n\nThe two SAS options that you could use are both 300GB, and you can have \n10K RPM for $2.3/GB or 15K RPM for $3.0/GB. So relative to the SATA \noptoins, you're paying about 3X as much to get a 40% faster spin rate, or \naround 4X as much to get over a 100% faster spin. There's certainly other \nthings that factor into performance than just that, but just staring at \nthe RPM gives you a gross idea how much higher of a raw transaction rate \nthe drives can support.\n\nThe question you have to ask yourself is how much actual I/O are you \ndealing with. The tiny 256MB cache on the PERC 5/E isn't going to help \nmuch with buffering writes in particular, so the raw disk performance may \nbe critical for your update intensive workload. If the combination of \ntransaction rate and total bandwidth are low enough that the 7200 RPM \ndrives can keep up with your load, by all means save yourself a lot of \ncash and get the SATA drives.\n\nIn your situation, I'd be spending a lot of my time measuring the \ntransaction and I/O bandwidth rates on the active system very carefully to \nfigure out which way to go here. You're in a better position than most \npeople buying new hardware to estimate what you need with the existing \nsystem in place, take advantage of that by drilling into the exact numbers \nfor what you're pushing through your disks now. Every dollar spent on \nwork to quantify that early will easily pay for itself in helping guide \nyour purchase and future plans; that's what I'd be bringing in people in \nright now to do if I were you, if that's not something you're already \nfamiliar with measuring.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 10 Aug 2007 01:23:13 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On Thu, 9 Aug 2007, Decibel! wrote:\n\n> On Thu, Aug 09, 2007 at 08:58:19PM -0500, Scott Marlowe wrote:\n>>> On Thu, Aug 09, 2007 at 05:29:18PM -0500, Scott Marlowe wrote:\n>>>> On 8/9/07, Decibel! <[email protected]> wrote:\n>>>>\n>>>>> Also, a good RAID controller can spread reads out across both drives in\n>>>>> each mirror on a RAID10. Though, there is an argument for not doing\n>>>>> that... it makes it much less likely that both drives in a mirror will\n>>>>> fail close enough to each other that you'd lose that chunk of data.\n>>>>\n>>>> I'd think that kind of failure mode is pretty uncommon, unless you're\n>>>> in an environment where physical shocks are common. which is not a\n>>>> typical database environment. (tell that to the guys writing a db for\n>>>> a modern tank fire control system though :) )\n>\n> You'd be surprised. 
I've seen more than one case of a bunch of drives\n> failing within a month, because they were all bought at the same time.\n>\n>>>>> while with RAID10 you can\n>>>>> potentially lose half the array without losing any data.\n>>>>\n>>>> Yes, but the RIGHT two drives can kill EITHER RAID 5 or RAID10.\n>\n> Sure, but the odds of that with RAID5 are 100%, while they're much less\n> in a RAID10.\n\nso you go with Raid6, not Raid5.\n\n>>>> While I agree with Merlin that for OLTP a faster drive is a must, for\n>>>> OLAP, more drives is often the real key. The high aggregate bandwidth\n>>>> of a large array of SATA drives is an amazing thing to watch when\n>>>> running a reporting server with otherwise unimpressive specs.\n>\n> True. In this case, the OP will probably want to have one array for the\n> OLTP stuff and one for the OLAP stuff.\n\none thing that's interesting is that the I/O throughlut on the large SATA \ndrives can actually be higher then the faster, but smaller SCSI drives. \nthe SCSI drives can win on seeking, but how much seeking you need to do \ndepends on how large the OLTP database ends up being\n\nDavid Lang\n", "msg_date": "Thu, 9 Aug 2007 22:27:43 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 8/10/07, Arjen van der Meijden <[email protected]> wrote:\n> On 9-8-2007 23:50 Merlin Moncure wrote:\n> > Where the extra controller especially pays off is if you have to\n> > expand to a second tray. It's easy to add trays but installing\n> > controllers on a production server is scary.\n>\n> For connectivity-sake that's not a necessity. You can either connect\n> (two?) extra MD1000's to your first MD1000 or you can use the second\n> external SAS-port on your controller. Obviously it depends on the\n> controller whether its good enough to just add the disks to it, rather\n> than adding another controller for the second tray. Whether the perc5/e\n> is good enough for that, I don't know, we've only equipped ours with a\n> single MD1000 holding 15x 15k rpm drives, but in our benchmarks it\n> scaled pretty well going from a few to all 14 disks (+1 hotspare).\n\ncompletely correct....I was suggesting this on performance\nterms...I've never done it with the Perc/5, but have done it with some\nactive/active SANs and it works really well.\n\nmerlin\n", "msg_date": "Fri, 10 Aug 2007 16:05:51 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 8/10/07, Decibel! <[email protected]> wrote:\n> On Thu, Aug 09, 2007 at 05:50:10PM -0400, Merlin Moncure wrote:\n> > Raid 10 is usually better for databases but in my experience it's a\n> > roll of the dice. If you factor cost into the matrix a SAS raid 05\n> > might outperform a SATA raid 10 because you are getting better storage\n> > utilization out of the drives (n - 2 vs. n / 2). Then again, you\n> > might not.\n>\n> It's going to depend heavily on the controller and the workload.\n> Theoretically, if most of your writes are to stripes that the controller\n> already has cached then you could actually out-perform RAID10. But\n> that's a really, really big IF, because if the strip isn't in cache you\n> have to read the entire thing in before you can do the write... and that\n> costs *a lot*.\n>\n> Also, a good RAID controller can spread reads out across both drives in\n> each mirror on a RAID10. Though, there is an argument for not doing\n> that... 
it makes it much less likely that both drives in a mirror will\n> fail close enough to each other that you'd lose that chunk of data.\n>\n> Speaking of failures, keep in mind that a normal RAID5 puts you only 2\n> drive failures away from data loss, while with RAID10 you can\n> potentially lose half the array without losing any data. If you do RAID5\n> with multiple parity copies that does change things; I'm not sure which\n> is better at that point (I suspect it matters how many drives are\n> involved).\n\nWhen making hardware recommendations I always suggest buying two\nservers and rigging PITR with a warm standby. This allows you to adjust the\nsystem a little bit for performance over fault tolerance.\n\nRegarding RAID controllers, I've found performance to be quite\nvariable as stated, especially with regard to RAID 5. I've also\nunfortunately found bonnie++ not to be very reflective of actual\nperformance in high-stress environments. We have an IBM DS4200 that\nbangs out some pretty impressive numbers with our app using SATA while\nthe bonnie++ numbers fairly suck.\n\nmerlin\n", "msg_date": "Fri, 10 Aug 2007 16:23:20 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 8/9/07, Arjen van der Meijden <[email protected]> wrote:\n> On 9-8-2007 23:50 Merlin Moncure wrote:\n> > Where the extra controller especially pays off is if you have to\n> > expand to a second tray. It's easy to add trays but installing\n> > controllers on a production server is scary.\n>\n> For connectivity-sake that's not a necessity. You can either connect\n> (two?) extra MD1000's to your first MD1000 or you can use the second\n> external SAS-port on your controller. Obviously it depends on the\n> controller whether its good enough to just add the disks to it, rather\n> than adding another controller for the second tray. Whether the perc5/e\n> is good enough for that, I don't know, we've only equipped ours with a\n> single MD1000 holding 15x 15k rpm drives, but in our benchmarks it\n> scaled pretty well going from a few to all 14 disks (+1 hotspare).\n\nAs it happens I will have an opportunity to test the dual controller\ntheory. In about a week we are picking up another md1000 and will\nattach it in an active/active configuration with various\nhardware/software RAID configurations, and run a battery of database\ncentric tests. Results will follow.\n\nBy the way, the recent Dell servers I have seen are well built in my\nopinion...better and cheaper than comparable IBM servers. I've also\ntested the IBM exp3000, and the MD1000 is cheaper and comes standard\nwith a second ESM. In my opinion, the Dell 1U 1950 is extremely well\norganized in terms of layout and cooling...dual power supplies, dual\nPCI-E (one low profile), plus a third custom slot for the optional\nperc 5/i which drives the backplane.\n\nmerlin\n", "msg_date": "Fri, 10 Aug 2007 13:30:38 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "\nOn Aug 9, 2007, at 3:47 PM, Joe Uhl wrote:\n\n> PowerEdge 1950 paired with a PowerVault MD1000\n> 2 x Quad Core Xeon E5310\n> 16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\n> PERC 5/E Raid Adapter\n> 2 x 146 GB SAS in Raid 1 for OS + logs.\n> A bunch of disks in the MD1000 configured in Raid 10 for Postgres \n> data.\n\nI'd avoid Dell disk systems if at all possible. I know, I've been \nthrough the pain. 
You really want someone else providing your RAID \ncard and disk array, especially if the 5/E card is based on the \nAdaptec devices.\n\n", "msg_date": "Fri, 10 Aug 2007 15:42:58 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "On 8/10/07, Vivek Khera <[email protected]> wrote:\n>\n> On Aug 9, 2007, at 3:47 PM, Joe Uhl wrote:\n>\n> > PowerEdge 1950 paired with a PowerVault MD1000\n> > 2 x Quad Core Xeon E5310\n> > 16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)\n> > PERC 5/E Raid Adapter\n> > 2 x 146 GB SAS in Raid 1 for OS + logs.\n> > A bunch of disks in the MD1000 configured in Raid 10 for Postgres\n> > data.\n>\n> I'd avoid Dell disk systems if at all possible. I know, I've been\n> through the pain. You really want someone else providing your RAID\n> card and disk array, especially if the 5/E card is based on the\n> Adaptec devices.\n\nI'm not so sure I agree. They are using LSI firmware now (and so is\neveryone else). The servers are well built (highly subjective, I\nadmit) and configurable. I have had some bad experiences with IBM\ngear (adaptec controller though), and white box parts 3ware, etc. I\ncan tell you that dell got us the storage and the server in record\ntime\n\ndo agree on adaptec however\n\nmerlin\n", "msg_date": "Fri, 10 Aug 2007 16:36:07 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "I know we bought the 4 proc Opteron unit with the SAS JBOD from Dell and it\nhas been excellent in terms of performance.\n\nIt was about 3 times faster than our old Dell 4 proc, which had Xeon processors.\n\nThe newer one has had a few issues (I am running RedHat AS4 since Dell\nsupports it). I have had one kernel failure, but it has been up for about a\nyear. Other than that, no issues; a reboot fixed whatever caused the failure,\nI have not seen it happen again, and it's been a few months.\n\nI am definitely going Dell for any other server needs; their pricing is so\ncompetitive now, and the machines I bought, both the 1U 2 proc and the larger\n4 proc, have been very good.\n\nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Merlin Moncure\nSent: Friday, August 10, 2007 1:31 PM\nTo: Arjen van der Meijden\nCc: Joe Uhl; [email protected]\nSubject: Re: [PERFORM] Dell Hardware Recommendations\n\nOn 8/9/07, Arjen van der Meijden <[email protected]> wrote:\n> On 9-8-2007 23:50 Merlin Moncure wrote:\n> > Where the extra controller especially pays off is if you have to\n> > expand to a second tray. It's easy to add trays but installing\n> > controllers on a production server is scary.\n>\n> For connectivity-sake that's not a necessity. You can either connect\n> (two?) 
extra MD1000's to your first MD1000 or you can use the second\n> external SAS-port on your controller. Obviously it depends on the\n> controller whether its good enough to just add the disks to it, rather\n> than adding another controller for the second tray. Whether the perc5/e\n> is good enough for that, I don't know, we've only equipped ours with a\n> single MD1000 holding 15x 15k rpm drives, but in our benchmarks it\n> scaled pretty well going from a few to all 14 disks (+1 hotspare).\n\nAs it happens I will have an opportunity to test the dual controller\ntheory. In about a week we are picking up another md1000 and will\nattach it in an active/active configuration with various\nhardware/software RAID configurations, and run a battery of database\ncentric tests. Results will follow.\n\nBy the way, the recent dell severs I have seen are well built in my\nopinion...better and cheaper than comparable IBM servers. I've also\ntested the IBM exp3000, and the MD1000 is cheaper and comes standard\nwith a second ESM. In my opinion, the Dell 1U 1950 is extremely well\norganized in terms of layout and cooling...dual power supplies, dual\nPCI-E (one low profile), plus a third custom slot for the optional\nperc 5/i which drives the backplane.\n\nmerlin\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Fri, 10 Aug 2007 16:36:46 -0400", "msg_from": "\"Joel Fradkin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "\nOn Aug 10, 2007, at 4:36 PM, Merlin Moncure wrote:\n\n> I'm not so sure I agree. They are using LSI firmware now (and so is\n> everyone else). The servers are well built (highly subjective, I\n> admit) and configurable. I have had some bad experiences with IBM\n> gear (adaptec controller though), and white box parts 3ware, etc. I\n> can tell you that dell got us the storage and the server in record\n> time\n>\n> do agree on adaptec however\n\nOk, perhaps you got luckier... I have two PowerVault 220 rack mounts \nwith U320 SCSI drives in them. With an LSI 320-2X controller, it \n*refuses* to recognize some of the drives (channel 1 on either \narray). Dell blames LSI, LSI blames dell's backplane. This is \nconsistent across multiple controllers we tried, and two different \nDell disk arrays. Dropping the SCSI speed to 160 is the only way to \nmake them work. I tend to believe LSI here.\n\nThe Adaptec 2230SLP controller recognizes the arrays fine, but tends \nto \"drop\" devices at inopportune moments. Re-seating dropped devices \nstarts a rebuild, but the speed is recognized as \"1\" and the rebuild \ntakes two lifetimes to complete unless you insert a reboot of the \nsystem in there. Totally unacceptable. Again, dropping the scsi \nrate to 160 seems to make it more stable.\n\n", "msg_date": "Mon, 13 Aug 2007 09:50:11 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" }, { "msg_contents": "\nOn 13-Aug-07, at 9:50 AM, Vivek Khera wrote:\n\n>\n> On Aug 10, 2007, at 4:36 PM, Merlin Moncure wrote:\n>\n>> I'm not so sure I agree. They are using LSI firmware now (and so is\n>> everyone else). The servers are well built (highly subjective, I\n>> admit) and configurable. I have had some bad experiences with IBM\n>> gear (adaptec controller though), and white box parts 3ware, etc. 
I\n>> can tell you that dell got us the storage and the server in record\n>> time\n>>\n>> do agree on adaptec however\n>\n> Ok, perhaps you got luckier... I have two PowerVault 220 rack \n> mounts with U320 SCSI drives in them. With an LSI 320-2X \n> controller, it *refuses* to recognize some of the drives (channel 1 \n> on either array). Dell blames LSI, LSI blames dell's backplane. \n> This is consistent across multiple controllers we tried, and two \n> different Dell disk arrays. Dropping the SCSI speed to 160 is the \n> only way to make them work. I tend to believe LSI here.\n>\nThis is the crux of the argument here. Perc/5 is a dell trademark. \nThey can ship any hardware they want and call it a Perc/5.\n\nDave\n> The Adaptec 2230SLP controller recognizes the arrays fine, but \n> tends to \"drop\" devices at inopportune moments. Re-seating dropped \n> devices starts a rebuild, but the speed is recognized as \"1\" and \n> the rebuild takes two lifetimes to complete unless you insert a \n> reboot of the system in there. Totally unacceptable. Again, \n> dropping the scsi rate to 160 seems to make it more stable.\n>\n>\n\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Mon, 13 Aug 2007 20:59:26 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Hardware Recommendations" } ]
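Greg's suggestion in this thread to quantify the transaction and I/O rates on the existing box before committing to SATA or SAS can be done with nothing more exotic than the built-in statistics views. This is only a minimal sketch, assuming the statistics collector is running and block-level stats are being gathered (stats_block_level = on in the 8.2-era releases discussed here); sample it twice, a few minutes apart, and subtract to get per-interval rates:

-- Snapshot commit and block I/O counters for the current database;
-- the deltas between two snapshots give transactions per second and
-- the number of blocks that had to be requested from the OS.
SELECT now() AS sampled_at,
       xact_commit + xact_rollback AS transactions,
       blks_read,   -- block requests that missed shared_buffers
       blks_hit     -- block requests satisfied from shared_buffers
  FROM pg_stat_database
 WHERE datname = current_database();

Pairing those deltas with iostat or vmstat output over the same window gives the raw MB/s and request-rate numbers that decide whether 7200 RPM SATA keeps up or whether the 10K/15K SAS premium actually buys anything for the workload.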
[ { "msg_contents": "I'm have the following view as part of a larger, aggregate query that is\nrunning slower than I'd like. There are 4 views total, each very\nsimilar to this one. Each of the views is then left joined with data\nfrom some other tables to give me the final result that I'm looking for.\n\nI'm hoping that if I can get some insight in to how to make this view\nexecute faster, I can apply that learning to the other 3 views and\nthereby decrease the run time for my aggregate query.\n\nI'm running 8.2.4 on Windows XP with a single 10K rpm disk dedicated to\nthe data directory.\nshared_buffers = 12288\nwork_mem = 262144\nmaintenance_work_mem = 131072\nmax_fsm_pages = 204800\nrandom_page_cost = 2.0\neffective_cache_size = 10000\nautovacuum = on\n\n\nSELECT \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", avg(\"ParameterValues\".\"ParameterValue\") AS\n\"Mottle_NMF\"\n\nFROM \"PrintSamples\", \"DigitalImages\", \"PrintSampleAnalyses\",\n\"Measurements\", \"ParameterValues\", \"tblTPNamesAndColors\", \"tblColors\",\n\"AnalysisModules\", \"ParameterNames\"\n\nWHERE \"DigitalImages\".\"ImageID\" = \"PrintSampleAnalyses\".\"ImageID\"\nAND \"PrintSamples\".\"PrintSampleID\" = \"DigitalImages\".\"PrintSampleID\"\nAND \"PrintSampleAnalyses\".\"psaID\" = \"Measurements\".\"psaID\" \nAND \"Measurements\".\"MeasurementID\" = \"ParameterValues\".\"MeasurementID\" \nAND \"AnalysisModules\".\"MetricID\" = \"Measurements\".\"MetricID\" \nAND \"ParameterNames\".\"ParameterID\" = \"ParameterValues\".\"ParameterID\"\nAND \"tblTPNamesAndColors\".\"TestPatternName\" =\n\"PrintSamples\".\"TestPatternName\" \nAND \"tblColors\".\"ColorID\" = \"tblTPNamesAndColors\".\"ColorID\"\n\nGROUP BY \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", \"AnalysisModules\".\"AnalysisModuleName\",\n\"ParameterNames\".\"ParameterName\", \"PrintSamples\".\"TestPatternName\"\n\nHAVING \"PrintSamples\".\"MachineID\" = 4741 OR \"PrintSamples\".\"MachineID\" =\n4745 AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF' AND\n\"ParameterNames\".\"ParameterName\" = 'NMF' AND \"tblColors\".\"ColorID\" <> 3\nAND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n\n\nEXPLAIN ANALYZE\nHashAggregate (cost=6069.71..6069.82 rows=9 width=70) (actual\ntime=3230.868..3230.923 rows=31 loops=1)\n -> Nested Loop (cost=1.77..6069.55 rows=9 width=70) (actual\ntime=367.959..3230.476 rows=31 loops=1)\n Join Filter: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1 width=17)\n(actual time=0.020..0.032 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Nested Loop (cost=1.77..6059.09 rows=682 width=61) (actual\ntime=367.905..3230.154 rows=124 loops=1)\n -> Hash Join (cost=1.77..2889.96 rows=151 width=57) (actual\ntime=119.748..1447.130 rows=31 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Nested Loop (cost=0.00..2880.22 rows=1722 width=48) (actual\ntime=55.278..1444.801 rows=1656 loops=1)\n -> Nested Loop (cost=0.00..226.25 rows=18 width=44) (actual\ntime=10.080..13.951 rows=31 loops=1)\n -> Nested Loop (cost=0.00..151.33 rows=18 width=44)\n(actual time=5.030..8.266 rows=31 loops=1)\n -> Nested Loop (cost=0.00..74.21 rows=18 width=44)\n(actual time=2.253..4.822 rows=31 loops=1)\n Join Filter: (\"tblColors\".\"ColorID\" =\n\"tblTPNamesAndColors\".\"ColorID\")\n -> Nested Loop (cost=0.00..48.11 
rows=24 width=44)\n(actual time=2.232..3.619 rows=43 loops=1)\n -> Index Scan using \"PSMachineID_idx\" on\n\"PrintSamples\" (cost=0.00..7.99 rows=29 width=40)\n(actual time=2.204..2.515 rows=43 loops=1)\n Index Cond: (\"MachineID\" = 4741)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Index Scan using \"TPNTestPatternName\" on\n\"tblTPNamesAndColors\" \n\t\n(cost=0.00..1.37 rows=1 width=30)\n\t\n(actual time=0.011..0.015 rows=1 loops=43)\n Index Cond:\n((\"tblTPNamesAndColors\".\"TestPatternName\")::text =\n(\"PrintSamples\".\"TestPatternName\")::text)\n -> Seq Scan on \"tblColors\" (cost=0.00..1.05 rows=3\nwidth=4) \n\t\t\t\t\t\t\t\t(actual\ntime=0.004..0.010 rows=3 loops=43)\n Filter: (\"ColorID\" <> 3)\n -> Index Scan using \"DIPrintSampleID_idx\" on\n\"DigitalImages\"\n\t\n(cost=0.00..4.27 rows=1 width=8)\n\t\n(actual time=0.100..0.102 rows=1 loops=31)\n Index Cond: (\"PrintSamples\".\"PrintSampleID\" =\n\"DigitalImages\".\"PrintSampleID\")\n -> Index Scan using \"PSAImageID_idx\" on\n\"PrintSampleAnalyses\" (cost=0.00..4.15 rows=1 width=8)\n\t\n(actual time=0.171..0.174 rows=1 loops=31)\n Index Cond: (\"DigitalImages\".\"ImageID\" =\n\"PrintSampleAnalyses\".\"ImageID\")\n -> Index Scan using \"MpsaID_idx\" on \"Measurements\"\n(cost=0.00..120.33 rows=2169 width=12)\n\t\n(actual time=19.381..46.016 rows=53 loops=31)\n Index Cond: (\"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\")\n -> Hash (cost=1.71..1.71 rows=5 width=17) (actual\ntime=0.073..0.073 rows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5\nwidth=17) (actual time=0.013..0.030 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\n -> Index Scan using \"PVMeasurementID_idx\" on \"ParameterValues\"\n(cost=0.00..16.56 rows=354 width=12)\n\t\n(actual time=56.359..57.495 rows=4 loops=31)\n Index Cond: (\"Measurements\".\"MeasurementID\" =\n\"ParameterValues\".\"MeasurementID\")\nTotal runtime: 3231.331 ms\n", "msg_date": "Fri, 10 Aug 2007 12:57:33 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help optimize view" }, { "msg_contents": ">>> On Fri, Aug 10, 2007 at 11:57 AM, in message\n<1806D1F73FCB7F439F2C842EE0627B18065BEC18@usa0300ms01.na.xerox.net>, \"Relyea,\nMike\" <[email protected]> wrote: \n> I'm have the following view as part of a larger, aggregate query that is\n> running slower than I'd like.\n> . . 
.\n> HAVING \"PrintSamples\".\"MachineID\" = 4741 OR \"PrintSamples\".\"MachineID\" =\n> 4745 AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF' AND\n> \"ParameterNames\".\"ParameterName\" = 'NMF' AND \"tblColors\".\"ColorID\" <> 3\n> AND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n \nFirst off, let's make sure we're optimizing the query you really want to run.\nAND binds tighter than OR, so as you have it written, it is the same as:\n \n HAVING \"PrintSamples\".\"MachineID\" = 4741\n OR ( \"PrintSamples\".\"MachineID\" = 4745\n AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF'\n AND \"ParameterNames\".\"ParameterName\" = 'NMF'\n AND \"tblColors\".\"ColorID\" <> 3\n AND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n )\n \nI fear you may really want it evaluate to:\n \n HAVING (\"PrintSamples\".\"MachineID\" = 4741 OR \"PrintSamples\".\"MachineID\" = 4745)\n AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF'\n AND \"ParameterNames\".\"ParameterName\" = 'NMF'\n AND \"tblColors\".\"ColorID\" <> 3\n AND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n \n-Kevin\n \n\n\n", "msg_date": "Sat, 18 Aug 2007 20:01:50 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" }, { "msg_contents": ">>> On Fri, Aug 10, 2007 at 11:57 AM, in message\n<1806D1F73FCB7F439F2C842EE0627B18065BEC18@usa0300ms01.na.xerox.net>, \"Relyea,\nMike\" <[email protected]> wrote: \n> HAVING \"PrintSamples\".\"MachineID\" = 4741 OR \"PrintSamples\".\"MachineID\" =\n> 4745 AND . . .\n \nOn top of the issue in my prior email, I don't see any test for 4745 in the\nEXPLAIN ANALYZE output, which makes me think it doesn't go with the posted\nquery.\n \n-Kevin\n \n\n\n", "msg_date": "Sat, 18 Aug 2007 20:11:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" }, { "msg_contents": "> From: Kevin Grittner [mailto:[email protected]] \n> \n> First off, let's make sure we're optimizing the query you \n> really want to run.\n> AND binds tighter than OR, so as you have it written, it is \n> the same as:\n> \n> HAVING \"PrintSamples\".\"MachineID\" = 4741\n> OR ( \"PrintSamples\".\"MachineID\" = 4745\n> AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF'\n> AND \"ParameterNames\".\"ParameterName\" = 'NMF'\n> AND \"tblColors\".\"ColorID\" <> 3\n> AND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n> )\n> \n> I fear you may really want it evaluate to:\n> \n> HAVING (\"PrintSamples\".\"MachineID\" = 4741 OR \n> \"PrintSamples\".\"MachineID\" = 4745)\n> AND \"AnalysisModules\".\"AnalysisModuleName\" = 'NMF'\n> AND \"ParameterNames\".\"ParameterName\" = 'NMF'\n> AND \"tblColors\".\"ColorID\" <> 3\n> AND \"PrintSamples\".\"TestPatternName\" LIKE 'IQAF-TP8%';\n\nThe query I really want to run is several times larger than this. I\ndidn't think people would want to wade through pages and pages worth of\nSQL and then explain analyze results - especially when I'm fairly\ncertain that optimizing this smaller part of the overall aggregate query\nwould provide me the help I was looking for.\n\nYou're right about what I really want the query to evaluate to. I'll\ngive your suggestion a try. Thanks.\n\nMike\n", "msg_date": "Mon, 20 Aug 2007 10:02:48 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help optimize view" } ]
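Kevin's precedence point is easy to verify in isolation before touching the real view; a throwaway illustration (nothing here references the actual schema):

-- AND binds tighter than OR, so the first expression is parsed as
-- true OR (false AND false) and yields true; explicit parentheses
-- change the grouping and the result.
SELECT true OR false AND false AS implicit_grouping,    -- true
       (true OR false) AND false AS explicit_grouping;  -- false

Applied to the HAVING clause in the thread above, the missing parentheses mean every condition written after the OR only constrains the MachineID = 4745 branch, while MachineID = 4741 rows pass through unfiltered.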
[ { "msg_contents": "Oops. Realized I posted the wrong SQL and EXPLAIN ANALYZE results.\nAlso forgot to mention that my \"server\" has 1.5 GB memory.\n\n\nSELECT \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", avg(\"ParameterValues\".\"ParameterValue\") AS\n\"Mottle_NMF\"\n FROM \"AnalysisModules\"\n JOIN (\"tblColors\"\n JOIN (\"tblTPNamesAndColors\"\n JOIN \"PrintSamples\" ON \"tblTPNamesAndColors\".\"TestPatternName\"::text\n= \"PrintSamples\".\"TestPatternName\"::text\n JOIN (\"DigitalImages\"\n JOIN \"PrintSampleAnalyses\" ON \"DigitalImages\".\"ImageID\" =\n\"PrintSampleAnalyses\".\"ImageID\"\n JOIN (\"ParameterNames\"\n JOIN (\"Measurements\"\n JOIN \"ParameterValues\" ON \"Measurements\".\"MeasurementID\" =\n\"ParameterValues\".\"MeasurementID\") ON \"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\") ON \"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\") ON \"PrintSamples\".\"PrintSampleID\" =\n\"DigitalImages\".\"PrintSampleID\") ON \"tblColors\".\"ColorID\" =\n\"tblTPNamesAndColors\".\"ColorID\") ON \"AnalysisModules\".\"MetricID\" =\n\"Measurements\".\"MetricID\"\n GROUP BY \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", \"AnalysisModules\".\"AnalysisModuleName\",\n\"ParameterNames\".\"ParameterName\", \"PrintSamples\".\"TestPatternName\"\n HAVING \"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text AND\n\"ParameterNames\".\"ParameterName\"::text = 'NMF'::text AND\n\"tblColors\".\"ColorID\" <> 3 AND \"PrintSamples\".\"TestPatternName\"::text ~~\n'IQAF-TP8%'::text;\n\n\nQUERY PLAN\nHashAggregate (cost=519801.96..519898.00 rows=7683 width=70) (actual\ntime=106219.710..106249.456 rows=14853 loops=1)\n -> Hash Join (cost=286101.76..519667.51 rows=7683 width=70) (actual\ntime=50466.513..106111.635 rows=15123 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Hash Join (cost=286099.98..519260.45 rows=87588 width=61) (actual\ntime=50466.417..106055.182 rows=15123 loops=1)\n Hash Cond: (\"ParameterValues\".\"MeasurementID\" =\n\"Measurements\".\"MeasurementID\")\n -> Nested Loop (cost=8054.81..238636.75 rows=454040 width=21)\n(actual time=143.017..55178.583 rows=289724 loops=1)\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1\nwidth=17) (actual time=0.012..0.027 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Bitmap Heap Scan on \"ParameterValues\"\n(cost=8054.81..231033.70 rows=608089 width=12)\n\t\n(actual time=142.986..54432.650 rows=289724 loops=1)\n Recheck Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Bitmap Index Scan on \"PVParameterID_idx\"\n(cost=0.00..7902.79 rows=608089 width=0)\n\t\n(actual time=109.178..109.178 rows=289724 loops=1)\n Index Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Hash (cost=259861.12..259861.12 rows=1454724 width=48) (actual\ntime=50306.950..50306.950 rows=961097 loops=1)\n -> Hash Join (cost=8139.75..259861.12 rows=1454724 width=48)\n\t\t\t\t(actual time=971.910..48649.190\nrows=961097 loops=1)\n Hash Cond: (\"Measurements\".\"psaID\" =\n\"PrintSampleAnalyses\".\"psaID\")\n -> Seq Scan on \"Measurements\" (cost=0.00..199469.09\nrows=7541009 width=12)\n\t\t\t\t\t\t\t(actual\ntime=0.047..35628.599 rows=7539838 loops=1)\n -> Hash (cost=7949.67..7949.67 rows=15206 width=44) (actual\ntime=971.734..971.734 rows=18901 loops=1)\n -> Hash Join 
(cost=5069.24..7949.67 rows=15206 width=44)\n(actual time=590.003..938.744 rows=18901 loops=1)\n Hash Cond: (\"PrintSampleAnalyses\".\"ImageID\" =\n\"DigitalImages\".\"ImageID\")\n -> Seq Scan on \"PrintSampleAnalyses\"\n(cost=0.00..2334.25 rows=78825 width=8)\n\t\n(actual time=0.021..130.335 rows=78859 loops=1)\n -> Hash (cost=4879.10..4879.10 rows=15211 width=44)\n\t\t\t\t\t(actual time=589.940..589.940\nrows=18901 loops=1)\n -> Hash Join (cost=2220.11..4879.10 rows=15211\nwidth=44)\n\t\t\t\t\t\t(actual\ntime=168.307..557.675 rows=18901 loops=1)\n Hash Cond: (\"DigitalImages\".\"PrintSampleID\" =\n\"PrintSamples\".\"PrintSampleID\")\n -> Seq Scan on \"DigitalImages\"\n(cost=0.00..1915.50 rows=78850 width=8)\n\t\n(actual time=16.126..194.911 rows=78859 loops=1)\n -> Hash (cost=2029.98..2029.98 rows=15211\nwidth=44)\n\t\t\t\t\t\t(actual\ntime=152.128..152.128 rows=18645 loops=1)\n -> Hash Join (cost=564.39..2029.98\nrows=15211 width=44)\n\t\t\t\t\t\t\t(actual\ntime=13.951..121.903 rows=18645 loops=1)\n Hash Cond:\n((\"PrintSamples\".\"TestPatternName\")::text =\n\t\n(\"tblTPNamesAndColors\".\"TestPatternName\")::text)\n -> Bitmap Heap Scan on \"PrintSamples\"\n(cost=561.39..1781.53 rows=24891 width=40)\n\t\n(actual time=13.680..59.919 rows=24914 loops=1)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Bitmap Index Scan on\n\"PSTestPatternName_idx\" (cost=0.00..555.17 rows=24891 width=0)\n\t\n(actual time=13.487..13.487 rows=24914 loops=1)\n Index Cond:\n(((\"TestPatternName\")::text >= 'IQAF-TP8'::character varying) AND\n((\"TestPatternName\")::text < 'IQAF-TP9'::character varying))\n -> Hash (cost=2.72..2.72 rows=22\nwidth=30) (actual time=0.242..0.242 rows=21 loops=1)\n -> Hash Join (cost=1.09..2.72 rows=22\nwidth=30)\n\t\t\t\t\t\t\t\t(actual\ntime=0.101..0.200 rows=21 loops=1)\n Hash Cond:\n(\"tblTPNamesAndColors\".\"ColorID\" = \"tblColors\".\"ColorID\")\n -> Seq Scan on\n\"tblTPNamesAndColors\" (cost=0.00..1.30 rows=30 width=30)\n\t\n(actual time=0.050..0.085 rows=30 loops=1)\n -> Hash (cost=1.05..1.05 rows=3\nwidth=4) (actual time=0.028..0.028 rows=3 loops=1)\n -> Seq Scan on \"tblColors\"\n(cost=0.00..1.05 rows=3 width=4)\n\t\n(actual time=0.009..0.016 rows=3 loops=1)\n Filter: (\"ColorID\" <> 3)\n -> Hash (cost=1.71..1.71 rows=5 width=17) (actual time=0.072..0.072\nrows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5\nwidth=17) (actual time=0.038..0.055 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\nTotal runtime: 106358.738 ms\n", "msg_date": "Fri, 10 Aug 2007 15:04:44 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help optimize view" }, { "msg_contents": "\"Relyea, Mike\" <[email protected]> writes:\n> SELECT \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n> \"tblColors\".\"ColorID\", avg(\"ParameterValues\".\"ParameterValue\") AS\n> \"Mottle_NMF\"\n> FROM \"AnalysisModules\"\n> JOIN (\"tblColors\"\n> JOIN (\"tblTPNamesAndColors\"\n> JOIN \"PrintSamples\" ON \"tblTPNamesAndColors\".\"TestPatternName\"::text\n> =3D \"PrintSamples\".\"TestPatternName\"::text\n> JOIN (\"DigitalImages\"\n> JOIN \"PrintSampleAnalyses\" ON \"DigitalImages\".\"ImageID\" =3D\n> \"PrintSampleAnalyses\".\"ImageID\"\n> JOIN (\"ParameterNames\"\n> JOIN (\"Measurements\"\n> JOIN \"ParameterValues\" ON \"Measurements\".\"MeasurementID\" =3D\n> \"ParameterValues\".\"MeasurementID\") ON \"ParameterNames\".\"ParameterID\" =3D\n> \"ParameterValues\".\"ParameterID\") ON 
\"PrintSampleAnalyses\".\"psaID\" =3D\n> \"Measurements\".\"psaID\") ON \"PrintSamples\".\"PrintSampleID\" =3D\n> \"DigitalImages\".\"PrintSampleID\") ON \"tblColors\".\"ColorID\" =3D\n> \"tblTPNamesAndColors\".\"ColorID\") ON \"AnalysisModules\".\"MetricID\" =3D\n> \"Measurements\".\"MetricID\"\n\nTry increasing join_collapse_limit --- you have just enough tables here\nthat the planner isn't going to consider all possible join orders.\nAnd it sorta looks like it's picking a bad one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Aug 2007 17:43:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Friday, August 10, 2007 5:44 PM\n> To: Relyea, Mike\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Help optimize view \n> \n> Try increasing join_collapse_limit --- you have just enough \n> tables here that the planner isn't going to consider all \n> possible join orders.\n> And it sorta looks like it's picking a bad one.\n> \n> \t\t\tregards, tom lane\n> \n\nI tried increasing join_collapse_limit with no significant change in run\ntime although a different plan was chosen.\n\nI've included a re-send of my original post, it looks like it didn't go\nthrough - it's not in the archives. I've also included an explain\nanalyze before and after the join_collapse_limit change.\n\nI'm have the following view as part of a larger, aggregate query that is\nrunning slower than I'd like. There are 4 views total, each very\nsimilar to this one. Each of the views is then left joined with data\nfrom some other tables to give me the final result that I'm looking for.\n\nI'm hoping that if I can get some insight in to how to make this view\nexecute faster, I can apply that learning to the other 3 views and\nthereby decrease the run time for my aggregate query.\n\nI'm running 8.2.4 on Windows XP with a single 10K rpm disk dedicated to\nthe data directory and 1.5 GB memory.\nshared_buffers = 12288\nwork_mem = 262144\nmaintenance_work_mem = 131072\nmax_fsm_pages = 204800\nrandom_page_cost = 2.0\neffective_cache_size = 10000\nautovacuum = on\n\n====================================\n\nEXPLAIN ANALYZE SELECT \"PrintSamples\".\"MachineID\",\n\"PrintSamples\".\"PrintCopyID\", \"tblColors\".\"ColorID\",\navg(\"ParameterValues\".\"ParameterValue\") AS \"Mottle_NMF\"\n FROM \"AnalysisModules\"\n JOIN (\"tblColors\"\n JOIN (\"tblTPNamesAndColors\"\n JOIN \"PrintSamples\" ON \"tblTPNamesAndColors\".\"TestPatternName\"::text\n= \"PrintSamples\".\"TestPatternName\"::text\n JOIN (\"DigitalImages\"\n JOIN \"PrintSampleAnalyses\" ON \"DigitalImages\".\"ImageID\" =\n\"PrintSampleAnalyses\".\"ImageID\"\n JOIN (\"ParameterNames\"\n JOIN (\"Measurements\"\n JOIN \"ParameterValues\" ON \"Measurements\".\"MeasurementID\" =\n\"ParameterValues\".\"MeasurementID\") ON \"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\") ON \"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\") ON \"PrintSamples\".\"PrintSampleID\" =\n\"DigitalImages\".\"PrintSampleID\") ON \"tblColors\".\"ColorID\" =\n\"tblTPNamesAndColors\".\"ColorID\") ON \"AnalysisModules\".\"MetricID\" =\n\"Measurements\".\"MetricID\"\n GROUP BY \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", \"AnalysisModules\".\"AnalysisModuleName\",\n\"ParameterNames\".\"ParameterName\", \"PrintSamples\".\"TestPatternName\"\n HAVING 
\"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text AND\n\"ParameterNames\".\"ParameterName\"::text = 'NMF'::text AND\n\"tblColors\".\"ColorID\" <> 3 AND \"PrintSamples\".\"TestPatternName\"::text ~~\n'IQAF-TP8%'::text;\n\nHashAggregate (cost=519801.96..519898.00 rows=7683 width=70) (actual\ntime=121101.027..121146.385 rows=14853 loops=1)\n -> Hash Join (cost=286101.76..519667.51 rows=7683 width=70) (actual\ntime=52752.600..120989.713 rows=15123 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Hash Join (cost=286099.98..519260.45 rows=87588 width=61) (actual\ntime=52752.502..120933.784 rows=15123 loops=1)\n Hash Cond: (\"ParameterValues\".\"MeasurementID\" =\n\"Measurements\".\"MeasurementID\")\n -> Nested Loop (cost=8054.81..238636.75 rows=454040 width=21)\n(actual time=165.510..67811.086 rows=289724 loops=1)\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1\nwidth=17) (actual time=0.012..0.026 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Bitmap Heap Scan on \"ParameterValues\"\n(cost=8054.81..231033.70 rows=608089 width=12) (actual\ntime=165.481..67094.656 rows=289724 loops=1)\n Recheck Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Bitmap Index Scan on \"PVParameterID_idx\"\n(cost=0.00..7902.79 rows=608089 width=0) (actual time=141.013..141.013\nrows=289724 loops=1)\n Index Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Hash (cost=259861.12..259861.12 rows=1454724 width=48) (actual\ntime=52573.270..52573.270 rows=961097 loops=1)\n -> Hash Join (cost=8139.75..259861.12 rows=1454724 width=48)\n(actual time=1399.575..50896.641 rows=961097 loops=1)\n Hash Cond: (\"Measurements\".\"psaID\" =\n\"PrintSampleAnalyses\".\"psaID\")\n -> Seq Scan on \"Measurements\" (cost=0.00..199469.09\nrows=7541009 width=12) (actual time=6.697..37199.702 rows=7539838\nloops=1)\n -> Hash (cost=7949.67..7949.67 rows=15206 width=44) (actual\ntime=1392.743..1392.743 rows=18901 loops=1)\n -> Hash Join (cost=5069.24..7949.67 rows=15206 width=44)\n(actual time=986.589..1358.908 rows=18901 loops=1)\n Hash Cond: (\"PrintSampleAnalyses\".\"ImageID\" =\n\"DigitalImages\".\"ImageID\")\n -> Seq Scan on \"PrintSampleAnalyses\"\n(cost=0.00..2334.25 rows=78825 width=8) (actual time=13.747..158.867\nrows=78859 loops=1)\n -> Hash (cost=4879.10..4879.10 rows=15211 width=44)\n(actual time=972.787..972.787 rows=18901 loops=1)\n -> Hash Join (cost=2220.11..4879.10 rows=15211\nwidth=44) (actual time=341.158..938.970 rows=18901 loops=1)\n Hash Cond: (\"DigitalImages\".\"PrintSampleID\" =\n\"PrintSamples\".\"PrintSampleID\")\n -> Seq Scan on \"DigitalImages\"\n(cost=0.00..1915.50 rows=78850 width=8) (actual time=34.028..418.113\nrows=78859 loops=1)\n -> Hash (cost=2029.98..2029.98 rows=15211\nwidth=44) (actual time=307.073..307.073 rows=18645 loops=1)\n -> Hash Join (cost=564.39..2029.98\nrows=15211 width=44) (actual time=92.565..275.879 rows=18645 loops=1)\n Hash Cond:\n((\"PrintSamples\".\"TestPatternName\")::text =\n(\"tblTPNamesAndColors\".\"TestPatternName\")::text)\n -> Bitmap Heap Scan on \"PrintSamples\"\n(cost=561.39..1781.53 rows=24891 width=40) (actual time=92.296..208.635\nrows=24914 loops=1)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Bitmap Index Scan on\n\"PSTestPatternName_idx\" (cost=0.00..555.17 rows=24891 width=0) (actual\ntime=76.711..76.711 rows=24914 loops=1)\n Index Cond:\n(((\"TestPatternName\")::text >= 
'IQAF-TP8'::character varying) AND\n((\"TestPatternName\")::text < 'IQAF-TP9'::character varying))\n -> Hash (cost=2.72..2.72 rows=22\nwidth=30) (actual time=0.238..0.238 rows=21 loops=1)\n -> Hash Join (cost=1.09..2.72 rows=22\nwidth=30) (actual time=0.097..0.196 rows=21 loops=1)\n Hash Cond:\n(\"tblTPNamesAndColors\".\"ColorID\" = \"tblColors\".\"ColorID\")\n -> Seq Scan on\n\"tblTPNamesAndColors\" (cost=0.00..1.30 rows=30 width=30) (actual\ntime=0.046..0.080 rows=30 loops=1)\n -> Hash (cost=1.05..1.05 rows=3\nwidth=4) (actual time=0.028..0.028 rows=3 loops=1)\n -> Seq Scan on \"tblColors\"\n(cost=0.00..1.05 rows=3 width=4) (actual time=0.009..0.016 rows=3\nloops=1)\n Filter: (\"ColorID\" <> 3)\n -> Hash (cost=1.71..1.71 rows=5 width=17) (actual time=0.072..0.072\nrows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5\nwidth=17) (actual time=0.036..0.054 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\nTotal runtime: 121178.595 ms\n\n============================================\n\nSELECT set_config('join_collapse_limit', '20', false);\n\nEXPLAIN ANALYZE SELECT \"PrintSamples\".\"MachineID\",\n\"PrintSamples\".\"PrintCopyID\", \"tblColors\".\"ColorID\",\navg(\"ParameterValues\".\"ParameterValue\") AS \"Mottle_NMF\"\n FROM \"AnalysisModules\"\n JOIN (\"tblColors\"\n JOIN (\"tblTPNamesAndColors\"\n JOIN \"PrintSamples\" ON \"tblTPNamesAndColors\".\"TestPatternName\"::text\n= \"PrintSamples\".\"TestPatternName\"::text\n JOIN (\"DigitalImages\"\n JOIN \"PrintSampleAnalyses\" ON \"DigitalImages\".\"ImageID\" =\n\"PrintSampleAnalyses\".\"ImageID\"\n JOIN (\"ParameterNames\"\n JOIN (\"Measurements\"\n JOIN \"ParameterValues\" ON \"Measurements\".\"MeasurementID\" =\n\"ParameterValues\".\"MeasurementID\") ON \"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\") ON \"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\") ON \"PrintSamples\".\"PrintSampleID\" =\n\"DigitalImages\".\"PrintSampleID\") ON \"tblColors\".\"ColorID\" =\n\"tblTPNamesAndColors\".\"ColorID\") ON \"AnalysisModules\".\"MetricID\" =\n\"Measurements\".\"MetricID\"\n GROUP BY \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", \"AnalysisModules\".\"AnalysisModuleName\",\n\"ParameterNames\".\"ParameterName\", \"PrintSamples\".\"TestPatternName\"\n HAVING \"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text AND\n\"ParameterNames\".\"ParameterName\"::text = 'NMF'::text AND\n\"tblColors\".\"ColorID\" <> 3 AND \"PrintSamples\".\"TestPatternName\"::text ~~\n'IQAF-TP8%'::text;\n\nHashAggregate (cost=489274.71..489372.94 rows=7858 width=70) (actual\ntime=120391.220..120420.367 rows=14853 loops=1)\n -> Hash Join (cost=256774.03..489137.20 rows=7858 width=70) (actual\ntime=51021.953..120276.494 rows=15123 loops=1)\n Hash Cond: (\"ParameterValues\".\"MeasurementID\" =\n\"Measurements\".\"MeasurementID\")\n -> Nested Loop (cost=8054.81..238636.75 rows=454040 width=21)\n(actual time=159.781..68959.258 rows=289724 loops=1)\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1 width=17)\n(actual time=0.021..0.039 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Bitmap Heap Scan on \"ParameterValues\" (cost=8054.81..231033.70\nrows=608089 width=12) (actual time=159.740..68235.713 rows=289724\nloops=1)\n Recheck Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Bitmap Index Scan on \"PVParameterID_idx\"\n(cost=0.00..7902.79 rows=608089 width=0) (actual 
time=135.166..135.166\nrows=289724 loops=1)\n Index Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Hash (cost=247087.84..247087.84 rows=130510 width=57) (actual\ntime=50844.324..50844.324 rows=15123 loops=1)\n -> Hash Join (cost=8141.52..247087.84 rows=130510 width=57)\n(actual time=11034.877..50791.185 rows=15123 loops=1)\n Hash Cond: (\"Measurements\".\"psaID\" =\n\"PrintSampleAnalyses\".\"psaID\")\n -> Hash Join (cost=1.77..234364.57 rows=661492 width=21)\n(actual time=31.302..48949.943 rows=289724 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Seq Scan on \"Measurements\" (cost=0.00..199469.09\nrows=7541009 width=12) (actual time=10.700..37931.726 rows=7539838\nloops=1)\n -> Hash (cost=1.71..1.71 rows=5 width=17) (actual\ntime=0.066..0.066 rows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5\nwidth=17) (actual time=0.033..0.049 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\n -> Hash (cost=7949.67..7949.67 rows=15206 width=44) (actual\ntime=1325.797..1325.797 rows=18901 loops=1)\n -> Hash Join (cost=5069.24..7949.67 rows=15206 width=44)\n(actual time=906.105..1290.289 rows=18901 loops=1)\n Hash Cond: (\"PrintSampleAnalyses\".\"ImageID\" =\n\"DigitalImages\".\"ImageID\")\n -> Seq Scan on \"PrintSampleAnalyses\" (cost=0.00..2334.25\nrows=78825 width=8) (actual time=4.456..153.999 rows=78859 loops=1)\n -> Hash (cost=4879.10..4879.10 rows=15211 width=44)\n(actual time=901.596..901.596 rows=18901 loops=1)\n -> Hash Join (cost=2220.11..4879.10 rows=15211\nwidth=44) (actual time=293.264..866.364 rows=18901 loops=1)\n Hash Cond: (\"DigitalImages\".\"PrintSampleID\" =\n\"PrintSamples\".\"PrintSampleID\")\n -> Seq Scan on \"DigitalImages\" (cost=0.00..1915.50\nrows=78850 width=8) (actual time=21.967..380.287 rows=78859 loops=1)\n -> Hash (cost=2029.98..2029.98 rows=15211\nwidth=44) (actual time=271.232..271.232 rows=18645 loops=1)\n -> Hash Join (cost=564.39..2029.98 rows=15211\nwidth=44) (actual time=60.780..237.748 rows=18645 loops=1)\n Hash Cond:\n((\"PrintSamples\".\"TestPatternName\")::text =\n(\"tblTPNamesAndColors\".\"TestPatternName\")::text)\n -> Bitmap Heap Scan on \"PrintSamples\"\n(cost=561.39..1781.53 rows=24891 width=40) (actual time=60.482..168.602\nrows=24914 loops=1)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Bitmap Index Scan on\n\"PSTestPatternName_idx\" (cost=0.00..555.17 rows=24891 width=0) (actual\ntime=52.269..52.269 rows=24914 loops=1)\n Index Cond:\n(((\"TestPatternName\")::text >= 'IQAF-TP8'::character varying) AND\n((\"TestPatternName\")::text < 'IQAF-TP9'::character varying))\n -> Hash (cost=2.72..2.72 rows=22 width=30)\n(actual time=0.266..0.266 rows=21 loops=1)\n -> Hash Join (cost=1.09..2.72 rows=22\nwidth=30) (actual time=0.120..0.223 rows=21 loops=1)\n Hash Cond:\n(\"tblTPNamesAndColors\".\"ColorID\" = \"tblColors\".\"ColorID\")\n -> Seq Scan on \"tblTPNamesAndColors\"\n(cost=0.00..1.30 rows=30 width=30) (actual time=0.025..0.059 rows=30\nloops=1)\n -> Hash (cost=1.05..1.05 rows=3\nwidth=4) (actual time=0.068..0.068 rows=3 loops=1)\n -> Seq Scan on \"tblColors\"\n(cost=0.00..1.05 rows=3 width=4) (actual time=0.048..0.054 rows=3\nloops=1)\n Filter: (\"ColorID\" <> 3)\nTotal runtime: 120443.640 ms\n", "msg_date": "Mon, 13 Aug 2007 11:35:44 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help optimize view " }, { "msg_contents": ">>> On Mon, Aug 13, 2007 
at 10:35 AM, in message\n<1806D1F73FCB7F439F2C842EE0627B18065BF2C0@USA0300MS01.na.xerox.net>, \"Relyea,\nMike\" <[email protected]> wrote: \n> I'm running 8.2.4 on Windows XP with 1.5 GB memory.\n> shared_buffers = 12288\n> effective_cache_size = 10000\n \nFor starters, you might want to adjust one or both of these. It looks to me\nlike you're telling it that it only has 78.125 MB cache space. That will\nmake it tend to want to scan entire tables, on the assumption that the cache\nhit ratio will be poor for random reads.\n \nSince you're on 8.2.4, you can use units of measure to help make this easier\nto read. You could, for example, say:\n \nshared_buffers = 96MB\neffective_cache_size = 1200MB\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 13 Aug 2007 11:25:01 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" }, { "msg_contents": "> >>> On Mon, Aug 13, 2007 at 10:35 AM, in message\n> <[email protected]\n> .net>, \"Relyea, Mike\" <[email protected]> wrote: \n> > I'm running 8.2.4 on Windows XP with 1.5 GB memory.\n> > shared_buffers = 12288\n> > effective_cache_size = 10000\n> \n> For starters, you might want to adjust one or both of these. \n> It looks to me like you're telling it that it only has 78.125 \n> MB cache space. That will make it tend to want to scan \n> entire tables, on the assumption that the cache hit ratio \n> will be poor for random reads.\n> \n> Since you're on 8.2.4, you can use units of measure to help \n> make this easier to read. You could, for example, say:\n> \n> shared_buffers = 96MB\n> effective_cache_size = 1200MB\n> \n> -Kevin\n\nI've increased shared_buffers to 128MB, and restarted the server. My\ntotal run time didn't really change.\n\nSELECT set_config('effective_cache_size', '1000MB', false); I have\nanother app that uses about 500MB.\nSELECT set_config('join_collapse_limit', '20', false);\n\nexplain analyze SELECT \"PrintSamples\".\"MachineID\",\n\"PrintSamples\".\"PrintCopyID\", \"tblColors\".\"ColorID\",\navg(\"ParameterValues\".\"ParameterValue\") AS \"Mottle_NMF\"\n FROM \"AnalysisModules\"\n JOIN (\"tblColors\"\n JOIN (\"tblTPNamesAndColors\"\n JOIN \"PrintSamples\" ON \"tblTPNamesAndColors\".\"TestPatternName\"::text\n= \"PrintSamples\".\"TestPatternName\"::text\n JOIN (\"DigitalImages\"\n JOIN \"PrintSampleAnalyses\" ON \"DigitalImages\".\"ImageID\" =\n\"PrintSampleAnalyses\".\"ImageID\"\n JOIN (\"ParameterNames\"\n JOIN (\"Measurements\"\n JOIN \"ParameterValues\" ON \"Measurements\".\"MeasurementID\" =\n\"ParameterValues\".\"MeasurementID\") ON \"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\") ON \"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\") ON \"PrintSamples\".\"PrintSampleID\" =\n\"DigitalImages\".\"PrintSampleID\") ON \"tblColors\".\"ColorID\" =\n\"tblTPNamesAndColors\".\"ColorID\") ON \"AnalysisModules\".\"MetricID\" =\n\"Measurements\".\"MetricID\"\n GROUP BY \"PrintSamples\".\"MachineID\", \"PrintSamples\".\"PrintCopyID\",\n\"tblColors\".\"ColorID\", \"AnalysisModules\".\"AnalysisModuleName\",\n\"ParameterNames\".\"ParameterName\", \"PrintSamples\".\"TestPatternName\"\n HAVING \"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text AND\n\"ParameterNames\".\"ParameterName\"::text = 'NMF'::text AND\n\"tblColors\".\"ColorID\" <> 3 AND \"PrintSamples\".\"TestPatternName\"::text ~~\n'IQAF-TP8%'::text;\n\nHashAggregate (cost=489274.71..489372.94 rows=7858 width=70) (actual\ntime=117632.844..117663.228 rows=14853 loops=1)\n -> Hash 
Join (cost=256774.03..489137.20 rows=7858 width=70) (actual\ntime=50297.022..117530.665 rows=15123 loops=1)\n Hash Cond: (\"ParameterValues\".\"MeasurementID\" =\n\"Measurements\".\"MeasurementID\")\n -> Nested Loop (cost=8054.81..238636.75 rows=454040 width=21)\n(actual time=172.341..66959.288 rows=289724 loops=1)\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1 width=17)\n(actual time=0.020..0.034 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Bitmap Heap Scan on \"ParameterValues\" (cost=8054.81..231033.70\nrows=608089 width=12) (actual time=172.297..66241.380 rows=289724\nloops=1)\n Recheck Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Bitmap Index Scan on \"PVParameterID_idx\"\n(cost=0.00..7902.79 rows=608089 width=0) (actual time=147.690..147.690\nrows=289724 loops=1)\n Index Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Hash (cost=247087.84..247087.84 rows=130510 width=57) (actual\ntime=50109.022..50109.022 rows=15123 loops=1)\n -> Hash Join (cost=8141.52..247087.84 rows=130510 width=57)\n(actual time=11095.022..50057.777 rows=15123 loops=1)\n Hash Cond: (\"Measurements\".\"psaID\" =\n\"PrintSampleAnalyses\".\"psaID\")\n -> Hash Join (cost=1.77..234364.57 rows=661492 width=21)\n(actual time=31.457..48123.380 rows=289724 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Seq Scan on \"Measurements\" (cost=0.00..199469.09\nrows=7541009 width=12) (actual time=10.920..37814.792 rows=7539838\nloops=1)\n -> Hash (cost=1.71..1.71 rows=5 width=17) (actual\ntime=0.066..0.066 rows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5\nwidth=17) (actual time=0.032..0.049 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\n -> Hash (cost=7949.67..7949.67 rows=15206 width=44) (actual\ntime=1424.025..1424.025 rows=18901 loops=1)\n -> Hash Join (cost=5069.24..7949.67 rows=15206 width=44)\n(actual time=1007.901..1387.787 rows=18901 loops=1)\n Hash Cond: (\"PrintSampleAnalyses\".\"ImageID\" =\n\"DigitalImages\".\"ImageID\")\n -> Seq Scan on \"PrintSampleAnalyses\" (cost=0.00..2334.25\nrows=78825 width=8) (actual time=4.432..153.090 rows=78859 loops=1)\n -> Hash (cost=4879.10..4879.10 rows=15211 width=44)\n(actual time=1003.424..1003.424 rows=18901 loops=1)\n -> Hash Join (cost=2220.11..4879.10 rows=15211\nwidth=44) (actual time=348.841..968.194 rows=18901 loops=1)\n Hash Cond: (\"DigitalImages\".\"PrintSampleID\" =\n\"PrintSamples\".\"PrintSampleID\")\n -> Seq Scan on \"DigitalImages\" (cost=0.00..1915.50\nrows=78850 width=8) (actual time=22.080..427.303 rows=78859 loops=1)\n -> Hash (cost=2029.98..2029.98 rows=15211\nwidth=44) (actual time=326.703..326.703 rows=18645 loops=1)\n -> Hash Join (cost=564.39..2029.98 rows=15211\nwidth=44) (actual time=90.425..293.223 rows=18645 loops=1)\n Hash Cond:\n((\"PrintSamples\".\"TestPatternName\")::text =\n(\"tblTPNamesAndColors\".\"TestPatternName\")::text)\n -> Bitmap Heap Scan on \"PrintSamples\"\n(cost=561.39..1781.53 rows=24891 width=40) (actual time=90.188..221.310\nrows=24914 loops=1)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Bitmap Index Scan on\n\"PSTestPatternName_idx\" (cost=0.00..555.17 rows=24891 width=0) (actual\ntime=72.897..72.897 rows=24914 loops=1)\n Index Cond:\n(((\"TestPatternName\")::text >= 'IQAF-TP8'::character varying) AND\n((\"TestPatternName\")::text < 'IQAF-TP9'::character varying))\n -> Hash (cost=2.72..2.72 
rows=22 width=30)\n(actual time=0.210..0.210 rows=21 loops=1)\n -> Hash Join (cost=1.09..2.72 rows=22\nwidth=30) (actual time=0.070..0.168 rows=21 loops=1)\n Hash Cond:\n(\"tblTPNamesAndColors\".\"ColorID\" = \"tblColors\".\"ColorID\")\n -> Seq Scan on \"tblTPNamesAndColors\"\n(cost=0.00..1.30 rows=30 width=30) (actual time=0.022..0.056 rows=30\nloops=1)\n -> Hash (cost=1.05..1.05 rows=3\nwidth=4) (actual time=0.026..0.026 rows=3 loops=1)\n -> Seq Scan on \"tblColors\"\n(cost=0.00..1.05 rows=3 width=4) (actual time=0.008..0.014 rows=3\nloops=1)\n Filter: (\"ColorID\" <> 3)\nTotal runtime: 117692.834 ms\n", "msg_date": "Mon, 13 Aug 2007 14:48:30 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help optimize view" }, { "msg_contents": ">>> On Mon, Aug 13, 2007 at 1:48 PM, in message\n<1806D1F73FCB7F439F2C842EE0627B18065F78DF@USA0300MS01.na.xerox.net>, \"Relyea,\nMike\" <[email protected]> wrote: \n> I've increased shared_buffers to 128MB, and restarted the server. My\n> total run time didn't really change.\n \nPlease forgive me if this guess doesn't help either, but could you try eliminating the GROUP BY options which don't echo values in the select value list, and move the HAVING conditions to a WHERE clause? Something like:\n \nexplain analyze\nSELECT\n \"PrintSamples\".\"MachineID\",\n \"PrintSamples\".\"PrintCopyID\",\n \"tblColors\".\"ColorID\",\n avg(\"ParameterValues\".\"ParameterValue\") AS \"Mottle_NMF\"\n FROM \"AnalysisModules\"\n JOIN\n (\n \"tblColors\"\n JOIN\n (\n \"tblTPNamesAndColors\"\n JOIN \"PrintSamples\"\n ON (\"tblTPNamesAndColors\".\"TestPatternName\"::text = \"PrintSamples\".\"TestPatternName\"::text)\n JOIN\n (\n \"DigitalImages\"\n JOIN \"PrintSampleAnalyses\"\n ON (\"DigitalImages\".\"ImageID\" = \"PrintSampleAnalyses\".\"ImageID\")\n JOIN\n (\n \"ParameterNames\"\n JOIN\n (\n \"Measurements\"\n JOIN \"ParameterValues\"\n ON \"Measurements\".\"MeasurementID\" = \"ParameterValues\".\"MeasurementID\"\n ) ON \"ParameterNames\".\"ParameterID\" = \"ParameterValues\".\"ParameterID\"\n ) ON \"PrintSampleAnalyses\".\"psaID\" = \"Measurements\".\"psaID\"\n ) ON \"PrintSamples\".\"PrintSampleID\" = \"DigitalImages\".\"PrintSampleID\"\n ) ON \"tblColors\".\"ColorID\" = \"tblTPNamesAndColors\".\"ColorID\"\n ) ON \"AnalysisModules\".\"MetricID\" = \"Measurements\".\"MetricID\"\n WHERE \"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text\n AND \"ParameterNames\".\"ParameterName\"::text = 'NMF'::text\n AND \"PrintSamples\".\"TestPatternName\"::text ~~ 'IQAF-TP8%'::text\n AND \"tblColors\".\"ColorID\" <> 3\n GROUP BY\n \"PrintSamples\".\"MachineID\",\n \"PrintSamples\".\"PrintCopyID\",\n \"tblColors\".\"ColorID\"\n;\n \nI'd also be inclined to simplify the FROM clause by eliminating the parentheses and putting the ON conditions closer to where they are used, but that would be more for readability than any expectation that it would affect the plan.\n \n-Kevin\n \n\n", "msg_date": "Mon, 13 Aug 2007 15:02:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" }, { "msg_contents": "\"Relyea, Mike\" <[email protected]> writes:\n> I've increased shared_buffers to 128MB, and restarted the server. My\n> total run time didn't really change.\n\nIt doesn't look like you can hope for much in terms of improving the\nplan. 
The bulk of the time is going into scanning ParameterValues and\nMeasurements, but AFAICS there is no way for the query to pull fewer\nrows from those tables than it is doing, and the size of the join means\nthat a nestloop indexscan is likely to suck. (You could try forcing one\nby setting enable_hashjoin and enable_mergejoin to OFF, but I don't have\nmuch hope for that.)\n\nIf you haven't played with work_mem yet, increasing that might make the\nhash joins go a bit faster --- but it looks like most of the time is\ngoing into the raw relation scans, so there's not going to be a lot of\nwin to be had there either.\n\nBasically, joining lots of rows like this takes awhile. If you have to\nhave a faster answer, I can only suggest rethinking your table design.\nSometimes denormalization of the schema is necessary for performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Aug 2007 16:17:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view " }, { "msg_contents": "> >>> On Mon, Aug 13, 2007 at 1:48 PM, in message\n> <[email protected]\n> .net>, \"Relyea, Mike\" <[email protected]> wrote: \n> > I've increased shared_buffers to 128MB, and restarted the \n> server. My \n> > total run time didn't really change.\n> \n> Please forgive me if this guess doesn't help either, but \n> could you try eliminating the GROUP BY options which don't \n> echo values in the select value list, and move the HAVING \n> conditions to a WHERE clause? Something like:\n> \n> explain analyze\n> SELECT\n> \"PrintSamples\".\"MachineID\",\n> \"PrintSamples\".\"PrintCopyID\",\n> \"tblColors\".\"ColorID\",\n> avg(\"ParameterValues\".\"ParameterValue\") AS \"Mottle_NMF\"\n> FROM \"AnalysisModules\"\n> JOIN\n> (\n> \"tblColors\"\n> JOIN\n> (\n> \"tblTPNamesAndColors\"\n> JOIN \"PrintSamples\"\n> ON (\"tblTPNamesAndColors\".\"TestPatternName\"::text = \n> \"PrintSamples\".\"TestPatternName\"::text)\n> JOIN\n> (\n> \"DigitalImages\"\n> JOIN \"PrintSampleAnalyses\"\n> ON (\"DigitalImages\".\"ImageID\" = \n> \"PrintSampleAnalyses\".\"ImageID\")\n> JOIN\n> (\n> \"ParameterNames\"\n> JOIN\n> (\n> \"Measurements\"\n> JOIN \"ParameterValues\"\n> ON \"Measurements\".\"MeasurementID\" = \n> \"ParameterValues\".\"MeasurementID\"\n> ) ON \"ParameterNames\".\"ParameterID\" = \n> \"ParameterValues\".\"ParameterID\"\n> ) ON \"PrintSampleAnalyses\".\"psaID\" = \"Measurements\".\"psaID\"\n> ) ON \"PrintSamples\".\"PrintSampleID\" = \n> \"DigitalImages\".\"PrintSampleID\"\n> ) ON \"tblColors\".\"ColorID\" = \"tblTPNamesAndColors\".\"ColorID\"\n> ) ON \"AnalysisModules\".\"MetricID\" = \"Measurements\".\"MetricID\"\n> WHERE \"AnalysisModules\".\"AnalysisModuleName\"::text = 'NMF'::text\n> AND \"ParameterNames\".\"ParameterName\"::text = 'NMF'::text\n> AND \"PrintSamples\".\"TestPatternName\"::text ~~ 'IQAF-TP8%'::text\n> AND \"tblColors\".\"ColorID\" <> 3\n> GROUP BY\n> \"PrintSamples\".\"MachineID\",\n> \"PrintSamples\".\"PrintCopyID\",\n> \"tblColors\".\"ColorID\"\n> ;\n> \n> I'd also be inclined to simplify the FROM clause by \n> eliminating the parentheses and putting the ON conditions \n> closer to where they are used, but that would be more for \n> readability than any expectation that it would affect the plan.\n> \n> -Kevin\n\nThanks for your help. Re-writing the view like this maybe bought me\nsomething. I've pasted the explain analyze results below. Tough to\ntell because I also increased some of the statistics. 
From what Tom\nsays, it sounds like if I want the data returned faster I'm likely to\nhave to get beefier hardware.\n\nALTER TABLE \"ParameterValues\" ALTER \"MeasurementID\" SET STATISTICS 500;\n\nALTER TABLE \"ParameterValues\" ALTER \"ParameterID\" SET STATISTICS 500;\n\nANALYZE \"ParameterValues\";\n\nALTER TABLE \"Measurements\" ALTER COLUMN \"MetricID\" SET STATISTICS 500;\n\nALTER TABLE \"Measurements\" ALTER COLUMN \"psaID\" SET STATISTICS 500;\n\nANALYZE \"Measurements\";\n\nRunning the above SQL:\n\nHashAggregate (cost=461541.53..461634.88 rows=7468 width=16) (actual\ntime=110002.041..110024.777 rows=14853 loops=1)\n -> Hash Join (cost=230789.57..461464.70 rows=7683 width=16) (actual\ntime=56847.814..109936.722 rows=15123 loops=1)\n Hash Cond: (\"Measurements\".\"MetricID\" =\n\"AnalysisModules\".\"MetricID\")\n -> Hash Join (cost=230787.80..461057.64 rows=87588 width=20) (actual\ntime=56847.697..109884.122 rows=15123 loops=1)\n Hash Cond: (\"ParameterValues\".\"MeasurementID\" =\n\"Measurements\".\"MeasurementID\")\n -> Nested Loop (cost=6353.15..234044.47 rows=454038 width=8)\n(actual time=179.154..52780.680 rows=289724 loops=1)\n -> Seq Scan on \"ParameterNames\" (cost=0.00..1.94 rows=1\nwidth=4) (actual time=0.012..0.027 rows=1 loops=1)\n Filter: ((\"ParameterName\")::text = 'NMF'::text)\n -> Bitmap Heap Scan on \"ParameterValues\"\n(cost=6353.15..228047.32 rows=479617 width=12) (actual\ntime=179.123..52102.572 rows=289724 loops=1)\n Recheck Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Bitmap Index Scan on \"PVParameterID_idx\"\n(cost=0.00..6233.25 rows=479617 width=0) (actual time=152.752..152.752\nrows=289724 loops=1)\n Index Cond: (\"ParameterNames\".\"ParameterID\" =\n\"ParameterValues\".\"ParameterID\")\n -> Hash (cost=206253.42..206253.42 rows=1454498 width=20) (actual\ntime=56657.022..56657.022 rows=961097 loops=1)\n -> Nested Loop (cost=5069.24..206253.42 rows=1454498 width=20)\n(actual time=932.249..55176.315 rows=961097 loops=1)\n -> Hash Join (cost=5069.24..7949.67 rows=15206 width=16)\n(actual time=908.275..1257.120 rows=18901 loops=1)\n Hash Cond: (\"PrintSampleAnalyses\".\"ImageID\" =\n\"DigitalImages\".\"ImageID\")\n -> Seq Scan on \"PrintSampleAnalyses\" (cost=0.00..2334.25\nrows=78825 width=8) (actual time=10.440..139.945 rows=78859 loops=1)\n -> Hash (cost=4879.10..4879.10 rows=15211 width=16)\n(actual time=897.776..897.776 rows=18901 loops=1)\n -> Hash Join (cost=2220.11..4879.10 rows=15211\nwidth=16) (actual time=297.330..868.632 rows=18901 loops=1)\n Hash Cond: (\"DigitalImages\".\"PrintSampleID\" =\n\"PrintSamples\".\"PrintSampleID\")\n -> Seq Scan on \"DigitalImages\" (cost=0.00..1915.50\nrows=78850 width=8) (actual time=15.859..408.784 rows=78859 loops=1)\n -> Hash (cost=2029.98..2029.98 rows=15211\nwidth=16) (actual time=281.413..281.413 rows=18645 loops=1)\n -> Hash Join (cost=564.39..2029.98 rows=15211\nwidth=16) (actual time=84.182..251.833 rows=18645 loops=1)\n Hash Cond:\n((\"PrintSamples\".\"TestPatternName\")::text =\n(\"tblTPNamesAndColors\".\"TestPatternName\")::text)\n -> Bitmap Heap Scan on \"PrintSamples\"\n(cost=561.39..1781.53 rows=24891 width=40) (actual time=83.925..184.775\nrows=24914 loops=1)\n Filter: ((\"TestPatternName\")::text ~~\n'IQAF-TP8%'::text)\n -> Bitmap Index Scan on\n\"PSTestPatternName_idx\" (cost=0.00..555.17 rows=24891 width=0) (actual\ntime=74.198..74.198 rows=24914 loops=1)\n Index Cond:\n(((\"TestPatternName\")::text >= 'IQAF-TP8'::character varying) 
AND\n((\"TestPatternName\")::text < 'IQAF-TP9'::character varying))\n -> Hash (cost=2.72..2.72 rows=22 width=30)\n(actual time=0.225..0.225 rows=21 loops=1)\n -> Hash Join (cost=1.09..2.72 rows=22\nwidth=30) (actual time=0.086..0.184 rows=21 loops=1)\n Hash Cond:\n(\"tblTPNamesAndColors\".\"ColorID\" = \"tblColors\".\"ColorID\")\n -> Seq Scan on \"tblTPNamesAndColors\"\n(cost=0.00..1.30 rows=30 width=30) (actual time=0.025..0.060 rows=30\nloops=1)\n -> Hash (cost=1.05..1.05 rows=3\nwidth=4) (actual time=0.040..0.040 rows=3 loops=1)\n -> Seq Scan on \"tblColors\"\n(cost=0.00..1.05 rows=3 width=4) (actual time=0.021..0.027 rows=3\nloops=1)\n Filter: (\"ColorID\" <> 3)\n -> Index Scan using \"MpsaID_idx\" on \"Measurements\"\n(cost=0.00..11.13 rows=153 width=12) (actual time=1.615..2.728 rows=51\nloops=18901)\n Index Cond: (\"PrintSampleAnalyses\".\"psaID\" =\n\"Measurements\".\"psaID\")\n -> Hash (cost=1.71..1.71 rows=5 width=4) (actual time=0.092..0.092\nrows=5 loops=1)\n -> Seq Scan on \"AnalysisModules\" (cost=0.00..1.71 rows=5 width=4)\n(actual time=0.060..0.077 rows=5 loops=1)\n Filter: ((\"AnalysisModuleName\")::text = 'NMF'::text)\nTotal runtime: 110047.601 ms\n\n", "msg_date": "Mon, 13 Aug 2007 17:00:28 -0400", "msg_from": "\"Relyea, Mike\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help optimize view" }, { "msg_contents": ">>> On Mon, Aug 13, 2007 at 4:00 PM, in message\n<1806D1F73FCB7F439F2C842EE0627B18065F7A86@USA0300MS01.na.xerox.net>, \"Relyea,\nMike\" <[email protected]> wrote: \n> \n> Re-writing the view like this maybe bought me something.\n> Tough to tell because I also increased some of the statistics.\n \nI don't know whether it was the finer-grained statistics or the simplification,\nbut it bought you a new plan. I don't know if the seven second improvement\nis real or within the run-to-run variation, though; it could be because you\nhappened to be better-cached at the time.\n \n> From what Tom\n> says, it sounds like if I want the data returned faster I'm likely to\n> have to get beefier hardware.\n \nThat's not what he suggested. If you introduce redundancy in a controlled\nfashion, you could have a single table with an index to more quickly get you\nto the desired set of data. That can be maintained on an ongoing basis\n(possibly using triggers) or could be materialized periodically or prior to\nrunning a series of reports or queries.\n \nSuch redundancies violate the normalization rules which are generally used\nin database design, but some denormalization is often needed for acceptable\nperformance.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 13 Aug 2007 16:25:49 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" }, { "msg_contents": ">>> On Mon, Aug 13, 2007 at 4:25 PM, in message\n<[email protected]>, \"Kevin Grittner\"\n<[email protected]> wrote: \n>>>> On Mon, Aug 13, 2007 at 4:00 PM, in message\n> <1806D1F73FCB7F439F2C842EE0627B18065F7A86@USA0300MS01.na.xerox.net>, \"Relyea,\n> Mike\" <[email protected]> wrote: \n> \n>> From what Tom\n>> says, it sounds like if I want the data returned faster I'm likely to\n>> have to get beefier hardware.\n> \n> That's not what he suggested. If you introduce redundancy in a controlled\n> fashion, you could have a single table with an index to more quickly get you\n> to the desired set of data. 
That can be maintained on an ongoing basis\n> (possibly using triggers) or could be materialized periodically or prior to\n> running a series of reports or queries.\n> \n> Such redundancies violate the normalization rules which are generally used\n> in database design, but some denormalization is often needed for acceptable\n> performance.\n \nOne last thought regarding your table structure -- I noticed you were often\njoining on column names ending in \"ID\" and selecting using column names\nending in \"Name\", where the values for the name columns were only a few\ncharacters long. It is not always a good idea to create a meaningless ID\nnumber for a primary key if you have a meaningful value (or combination of\nvalues) which would uniquely identify a row.\n \nIf you were able to use the columns in your search criteria as keys, you\nwould have them in the Measurements table without creating any troublesome\nredundancy. You could then add Measurements indexes on these columns, and\nyour query might run in under a second.\n \nThe down side of meaningful keys (oft cited by proponents of the technique)\nis that if you decide that everything with an AnalysisModuleName\" name of\n'NMF' should now be named 'NMX', you would have to update all rows which\ncontain the old value. To be able to do this safely and reliably, you would\nwant to use DOMAIN definitions rigorously. If you link through meaningless\nID numbers (and what would be the point of changing those?) you can change\n'NMF' to 'NMX' in one place, and everything would reflect the new value,\nsince it would always join to one place for those characters.\n \n-Kevin\n \n\n", "msg_date": "Mon, 13 Aug 2007 17:37:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help optimize view" } ]
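A minimal sketch of the controlled denormalization Kevin describes, assuming the slow aggregate above is wrapped in a view named "MottleView" and materialized into a summary table named "MottleSummary" (both names are hypothetical, not from the thread):

-- refresh the pre-joined, pre-aggregated copy periodically or just before reports
BEGIN;
DROP TABLE IF EXISTS "MottleSummary";
CREATE TABLE "MottleSummary" AS SELECT * FROM "MottleView";
CREATE INDEX "MottleSummary_idx"
    ON "MottleSummary" ("MachineID", "PrintCopyID", "ColorID");
COMMIT;
ANALYZE "MottleSummary";

-- reports then read the small summary table instead of re-joining millions of rows
SELECT * FROM "MottleSummary" WHERE "MachineID" = 42;   -- 42 is a placeholder value

Keeping the copy current with triggers on the underlying tables, as Kevin mentions, trades that periodic refresh for extra write overhead on every insert.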
[ { "msg_contents": "These query times are the \"fully cached\" times for both, from doing a previous run of the same query. (The first one took 193.772 ms on its first run; I don't have a good \"uncached\" timing for the second one at this point.)\n \nIt seems like the first query could move the searchName filter to the Bitmap Index Scan phase, and save 97.5% of the page retrievals in the Bitmap Heap Scan.\n \n-Kevin\n \n \ncc=> explain analyze select * from \"Warrant\" where \"soundex\" = 'S530' and \"searchName\" like '%,G%' and \"countyNo\" = 40;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on \"Warrant\" (cost=55.37..1202.35 rows=841 width=123) (actual time=2.625..8.602 rows=112 loops=1)\n Recheck Cond: (((soundex)::text = 'S530'::text) AND ((\"countyNo\")::smallint = 40))\n Filter: ((\"searchName\")::text ~~ '%,G%'::text)\n -> Bitmap Index Scan on \"Warrant_WarrantSoundex\" (cost=0.00..55.16 rows=4240 width=0) (actual time=1.911..1.911 rows=4492 loops=1)\n Index Cond: (((soundex)::text = 'S530'::text) AND ((\"countyNo\")::smallint = 40))\n Total runtime: 8.739 ms\n(6 rows)\n\ncc=> explain analyze select * from \"Warrant\" where \"soundex\" = 'S530' and \"searchName\" like 'SMITH,G%' and \"countyNo\" = 40;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using \"Warrant_WarrantName\" on \"Warrant\" (cost=0.00..1.28 rows=1 width=123) (actual time=0.099..0.397 rows=112 loops=1)\n Index Cond: (((\"searchName\")::text >= 'SMITH,G'::character varying) AND ((\"searchName\")::text < 'SMITH,H'::character varying) AND ((\"countyNo\")::smallint = 40))\n Filter: (((soundex)::text = 'S530'::text) AND ((\"searchName\")::text ~~ 'SMITH,G%'::text))\n Total runtime: 0.510 ms\n(4 rows)\n\ncc=> \\d \"Warrant\"\n Table \"public.Warrant\"\n Column | Type | Modifiers\n----------------+-----------------+-----------\n warrantSeqNo | \"WarrantSeqNoT\" | not null\n countyNo | \"CountyNoT\" | not null\n caseNo | \"CaseNoT\" | not null\n nameL | \"LastNameT\" | not null\n partyNo | \"PartyNoT\" | not null\n searchName | \"SearchNameT\" | not null\n soundex | \"SoundexT\" | not null\n authSeqNo | \"HistSeqNoT\" |\n dateAuthorized | \"DateT\" |\n dateDisposed | \"DateT\" |\n dateIssued | \"DateT\" |\n dispoMethod | \"EventTypeT\" |\n dispSeqNo | \"HistSeqNoT\" |\n histSeqNo | \"HistSeqNoT\" |\n nameF | \"FirstNameT\" |\n nameM | \"MiddleNameT\" |\n stayDate | \"DateT\" |\n stayTime | \"TimeT\" |\n suffix | \"NameSuffixT\" |\n warrantDob | \"DateT\" |\nIndexes:\n \"Warrant_pkey\" PRIMARY KEY, btree (\"warrantSeqNo\", \"countyNo\")\n \"Warrant_HistSeqNo\" UNIQUE, btree (\"caseNo\", \"histSeqNo\", \"countyNo\", \"warrantSeqNo\")\n \"Warrant_AuthSeqNo\" btree (\"caseNo\", \"authSeqNo\", \"countyNo\")\n \"Warrant_CaseNo\" btree (\"caseNo\", \"partyNo\", \"countyNo\")\n \"Warrant_DispSeqNo\" btree (\"caseNo\", \"dispSeqNo\", \"countyNo\")\n \"Warrant_WarrantName\" btree (\"searchName\", \"countyNo\")\n \"Warrant_WarrantSoundex\" btree (soundex, \"searchName\", \"countyNo\")\n\ncc=> select version();\n version\n-------------------------------------------------------------------------------------\n PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE Linux)\n(1 row)\n\n\n", "msg_date": "Fri, 10 Aug 2007 14:33:28 -0500", "msg_from": 
"\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap Index Scan optimization opportunity" }, { "msg_contents": "Kevin Grittner wrote:\n> These query times are the \"fully cached\" times for both, from doing a previous run of the same query. (The first one took 193.772 ms on its first run; I don't have a good \"uncached\" timing for the second one at this point.)\n> \n> It seems like the first query could move the searchName filter to the Bitmap Index Scan phase, and save 97.5% of the page retrievals in the Bitmap Heap Scan.\n\nYes it could in theory, but unfortunately the planner/executor doesn't\nhave the capability to do that. An indexed value is never handed back\nfrom the index; the indexed values are only used to satisfy index\nconditions, not filters. It's been discussed before (see\nhttp://archives.postgresql.org/pgsql-performance/2006-09/msg00080.php),\nbut it's not easy to implement so no one's done it yet.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 10 Aug 2007 21:05:13 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap Index Scan optimization opportunity" } ]
[ { "msg_contents": "Hello All.\n\nI have a table with ca. 100.000.000 records. The main idea is make\nPartitioning for this table (1000 or 10000 tables).\nLet's take for example.\n\nCREATE TABLE test\n(\n id integer,\n data date not null default now()\n)\nWITHOUT OIDS;\n\nCREATE TABLE test00 ( CHECK ( id%100 = 0 ) ) INHERITS (test);\nCREATE TABLE test01 ( CHECK ( id%100 = 1 ) ) INHERITS (test);\n...\nCREATE TABLE test09 ( CHECK ( id%100 = 9 ) ) INHERITS (test);\n\n-- RULES\n\nCREATE OR REPLACE RULE test00 AS\nON INSERT TO test WHERE (NEW.id%100) = 0\nDO INSTEAD INSERT INTO test00 (id) VALUES (NEW.id);\n\nCREATE OR REPLACE RULE test01 AS\nON INSERT TO test WHERE (NEW.id%100) = 1\nDO INSTEAD INSERT INTO test01 (id) VALUES (NEW.id);\n\n...\n\nCREATE OR REPLACE RULE test09 AS\nON INSERT TO test WHERE (NEW.id%100) = 9\nDO INSTEAD INSERT INTO test09 (id) VALUES (NEW.id);\n\nSo the main algorithm is to take last digits of ID and put to special\ntable. Yes, it is work correct. But when I make a selection query\ndatabase ask all table instead of one.\n\n\"Aggregate (cost=134.17..134.18 rows=1 width=0)\"\n\" -> Append (cost=4.33..133.94 rows=90 width=0)\"\n\" -> Bitmap Heap Scan on test01 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test01_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test02 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test02_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test03 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test03_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test04 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test04_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test05 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test05_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test06 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test06_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test07 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test07_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test08 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test08_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\" -> Bitmap Heap Scan on test09 (cost=4.33..14.88 rows=10 width=0)\"\n\" Recheck Cond: (id = 1)\"\n\" -> Bitmap Index Scan on test09_id (cost=0.00..4.33\nrows=10 width=0)\"\n\" Index Cond: (id = 1)\"\n\nIf change CHECK to\n\nCREATE TABLE test00 ( CHECK ( id = 0 ) ) INHERITS (test);\n\nCREATE TABLE test01 ( CHECK ( id = 1 ) ) INHERITS (test);\n\n... 
etc - everything work correct, only one table is asked for data.\n\nBut how to implement my idea if ID is always increment and have range\nfrom 1 to BIGINT?\nHow it is possible or is there any variants to store different IDs in\nseparated tables when CHECK condition will be used during SELECT or\nDELETE queries?\n", "msg_date": "Sat, 11 Aug 2007 02:58:29 +0600", "msg_from": "\"Nurlan Mukhanov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Table Partitioning" }, { "msg_contents": "\n10 aug 2007 kl. 22:58 skrev Nurlan Mukhanov:\n\n> Hello All.\n>\n> I have a table with ca. 100.000.000 records. The main idea is make\n> Partitioning for this table (1000 or 10000 tables).\n> Let's take for example.\n>\n> CREATE TABLE test\n> (\n> id integer,\n> data date not null default now()\n> )\n> WITHOUT OIDS;\n>\n> CREATE TABLE test00 ( CHECK ( id%100 = 0 ) ) INHERITS (test);\n> CREATE TABLE test01 ( CHECK ( id%100 = 1 ) ) INHERITS (test);\n> ...\n> CREATE TABLE test09 ( CHECK ( id%100 = 9 ) ) INHERITS (test);\n>\n> -- RULES\n>\n> CREATE OR REPLACE RULE test00 AS\n> ON INSERT TO test WHERE (NEW.id%100) = 0\n> DO INSTEAD INSERT INTO test00 (id) VALUES (NEW.id);\n>\n> CREATE OR REPLACE RULE test01 AS\n> ON INSERT TO test WHERE (NEW.id%100) = 1\n> DO INSTEAD INSERT INTO test01 (id) VALUES (NEW.id);\n>\n> ...\n>\n> CREATE OR REPLACE RULE test09 AS\n> ON INSERT TO test WHERE (NEW.id%100) = 9\n> DO INSTEAD INSERT INTO test09 (id) VALUES (NEW.id);\n>\n> So the main algorithm is to take last digits of ID and put to special\n> table. Yes, it is work correct. But when I make a selection query\n> database ask all table instead of one\n\nI'm not sure this will make any difference but are you using SET \nconstraint_exclusion = on; ?\n\n> .\n>\n> \"Aggregate (cost=134.17..134.18 rows=1 width=0)\"\n> \" -> Append (cost=4.33..133.94 rows=90 width=0)\"\n> \" -> Bitmap Heap Scan on test01 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test01_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test02 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test02_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test03 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test03_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test04 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test04_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test05 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test05_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test06 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test06_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test07 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test07_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test08 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test08_id (cost=0.00..4.33\n> 
rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n> \" -> Bitmap Heap Scan on test09 (cost=4.33..14.88 rows=10 \n> width=0)\"\n> \" Recheck Cond: (id = 1)\"\n> \" -> Bitmap Index Scan on test09_id (cost=0.00..4.33\n> rows=10 width=0)\"\n> \" Index Cond: (id = 1)\"\n>\n> If change CHECK to\n>\n> CREATE TABLE test00 ( CHECK ( id = 0 ) ) INHERITS (test);\n>\n> CREATE TABLE test01 ( CHECK ( id = 1 ) ) INHERITS (test);\n>\n> ... etc - everything work correct, only one table is asked for data.\n\nAre your first check algorithm causing overlaps?\n\nCheers,\nhenrik\n>\n> But how to implement my idea if ID is always increment and have range\n> from 1 to BIGINT?\n> How it is possible or is there any variants to store different IDs in\n> separated tables when CHECK condition will be used during SELECT or\n> DELETE queries?\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Sat, 18 Aug 2007 13:11:58 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Table Partitioning" } ]
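For what it's worth, a sketch of two ways to get the partition pruning the poster is after; the range boundaries below are illustrative assumptions, not values from the thread:

-- constraint exclusion must be enabled before any child tables can be skipped
SET constraint_exclusion = on;

-- 1. keep the modulo partitions, but repeat the partitioning expression in the
--    query so the planner can compare it against each CHECK ( id % 100 = n );
--    a bare "id = 1" by itself cannot be proven to contradict a modulo constraint
SELECT count(*) FROM test WHERE id = 1 AND id % 100 = 1;

-- 2. or partition on plain ranges, which the planner can prove directly from an
--    equality or range condition on id
CREATE TABLE test_r00 ( CHECK ( id >= 0        AND id < 10000000 ) ) INHERITS (test);
CREATE TABLE test_r01 ( CHECK ( id >= 10000000 AND id < 20000000 ) ) INHERITS (test);
-- ...one child per range, plus matching INSERT rules or an insert trigger

SELECT count(*) FROM test WHERE id = 1;   -- should scan only test_r00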
[ { "msg_contents": "Hi All,\n\nTomas Kovarik and I have presented at PGCon 2007 in Ottawa\nthe ideas about other possible optimizer algorithms to be used\nin PostgreSQL.\n\nWe are quite new to PostgreSQL project so it took us some\ntime to go through the sources end explore the possibilities\nhow things could be implemented.\n\nThere is a proposal attached to this mail about the interface\nwe would like to implement for switching between different\noptimizers. Please review it and provide a feedback to us.\nThank You.\n\nRegards\n\nJulius Stroffek", "msg_date": "Mon, 13 Aug 2007 21:49:56 +0200", "msg_from": "Julius Stroffek <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Pluggable Optimizer Interface" }, { "msg_contents": "Julius Stroffek wrote:\n> Hi All,\n> \n> Tomas Kovarik and I have presented at PGCon 2007 in Ottawa\n> the ideas about other possible optimizer algorithms to be used\n> in PostgreSQL.\n> \n> We are quite new to PostgreSQL project so it took us some\n> time to go through the sources end explore the possibilities\n> how things could be implemented.\n> \n> There is a proposal attached to this mail about the interface\n> we would like to implement for switching between different\n> optimizers. Please review it and provide a feedback to us.\n> Thank You.\n\nhmm - how does is that proposal different from what got implemented with:\n\nhttp://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php\n\n\nStefan\n", "msg_date": "Mon, 13 Aug 2007 22:20:35 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface" }, { "msg_contents": "Stefan Kaltenbrunner <[email protected]> writes:\n> Julius Stroffek wrote:\n>> There is a proposal attached to this mail about the interface\n>> we would like to implement for switching between different\n>> optimizers. Please review it and provide a feedback to us.\n\n> hmm - how does is that proposal different from what got implemented with:\n> http://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php\n\nWell, it's a very different level of abstraction. The planner_hook\nwould allow you to replace the *entire* planner, but if you only want to\nreplace GEQO (that is, only substitute some other heuristics for partial\nsearch of a large join-order space), doing it from planner_hook will\nprobably require duplicating a great deal of code. A hook right at the\nplace where we currently choose \"geqo or regular\" would be a lot easier\nto experiment with.\n\nReplacing GEQO sounds like a fine area for investigation to me; I've\nalways been dubious about whether it's doing a good job. But I'd prefer\na simple hook function pointer designed in the same style as\nplanner_hook (ie, intended to be overridden by a loadable module).\nThe proposed addition of a system catalog and SQL-level management\ncommands sounds like a great way to waste a lot of effort on mere\ndecoration, before ever getting to the point of being able to\ndemonstrate that there's any value in it. Also, while we might accept\na small hook-function patch for 8.3, there's zero chance of any of that\nother stuff making it into this release cycle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Aug 2007 17:23:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface " }, { "msg_contents": "Stefan,\n\nthanks for pointing this out. 
I missed this change.\n\nWe would like to place the hooks to a different place in the planner and \nwe would like to just replace the non-deterministic algorithm searching \nfor the best order of joins and keep the rest of the planner untouched.\n\nI am not quite sure about the usage from the user point of view of what \ngot implemented. I read just the code of the patch. Are there more \nexplanations somewhere else?\n\nI understood that if the user creates his own implementation of the \nplanner which can be stored in some external library, he have to provide \nsome C language function as a \"hook activator\" which will assign the \ndesired value to the planner_hook variable. Both, the activator function \nand the new planner implementation have to be located in the same \ndynamic library which will be loaded when CREATE FUNCTION statement \nwould be used on \"hook activator\" function.\n\nAm I correct? Have I missed something?\n\nIf the above is the case than it is exactly what we wanted except we \nwould like to have the hook also in the different place.\n\nThere are more things in the proposal as a new pg_optimizer catalog and \ndifferent way of configuring the hooks. However, this thinks are not \nmandatory for the functionality but are more user friendly.\n\nThanks\n\nJulo\n\nStefan Kaltenbrunner wrote:\n> Julius Stroffek wrote:\n> \n>> Hi All,\n>>\n>> Tomas Kovarik and I have presented at PGCon 2007 in Ottawa\n>> the ideas about other possible optimizer algorithms to be used\n>> in PostgreSQL.\n>>\n>> We are quite new to PostgreSQL project so it took us some\n>> time to go through the sources end explore the possibilities\n>> how things could be implemented.\n>>\n>> There is a proposal attached to this mail about the interface\n>> we would like to implement for switching between different\n>> optimizers. Please review it and provide a feedback to us.\n>> Thank You.\n>> \n>\n> hmm - how does is that proposal different from what got implemented with:\n>\n> http://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php\n>\n>\n> Stefan\n> \n\n\n\n\n\n\nStefan,\n\nthanks for pointing this out. I missed this change.\n\nWe would like to place the hooks to a different place in the planner\nand we would like to just replace the non-deterministic algorithm\nsearching for the best order of joins and keep the rest of the planner\nuntouched.\n\nI am not quite sure about the usage from the user point of view of what\ngot implemented. I read just the code of the patch. Are there more\nexplanations somewhere else?\n\nI understood that if the user creates his own implementation of the\nplanner which can be stored in some external library, he have to\nprovide some C language function as a \"hook activator\" which will\nassign the desired value to the planner_hook variable. Both, the\nactivator function and the new planner implementation have to be\nlocated in the same dynamic library which will be loaded when CREATE\nFUNCTION statement would be used on \"hook activator\" function.\n\nAm I correct? Have I missed something?\n\nIf the above is the case than it is exactly what we wanted except we\nwould like to have the hook also in the different place.\n\nThere are more things in the proposal as a new pg_optimizer catalog and\ndifferent way of configuring the hooks. 
However, this thinks are not\nmandatory for the functionality but are more user friendly.\n\nThanks\n\nJulo\n\nStefan Kaltenbrunner wrote:\n\nJulius Stroffek wrote:\n \n\nHi All,\n\nTomas Kovarik and I have presented at PGCon 2007 in Ottawa\nthe ideas about other possible optimizer algorithms to be used\nin PostgreSQL.\n\nWe are quite new to PostgreSQL project so it took us some\ntime to go through the sources end explore the possibilities\nhow things could be implemented.\n\nThere is a proposal attached to this mail about the interface\nwe would like to implement for switching between different\noptimizers. Please review it and provide a feedback to us.\nThank You.\n \n\n\nhmm - how does is that proposal different from what got implemented with:\n\nhttp://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php\n\n\nStefan", "msg_date": "Mon, 13 Aug 2007 23:27:56 +0200", "msg_from": "Julius Stroffek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface" }, { "msg_contents": "Julius Stroffek <[email protected]> writes:\n> I understood that if the user creates his own implementation of the \n> planner which can be stored in some external library, he have to provide \n> some C language function as a \"hook activator\" which will assign the \n> desired value to the planner_hook variable. Both, the activator function \n> and the new planner implementation have to be located in the same \n> dynamic library which will be loaded when CREATE FUNCTION statement \n> would be used on \"hook activator\" function.\n\nYou could do it that way if you wanted, but a minimalistic solution is\njust to install the hook from the _PG_init function of a loadable\nlibrary, and then LOAD is sufficient for a user to execute the thing.\nThere's a small example at\nhttp://archives.postgresql.org/pgsql-patches/2007-05/msg00421.php\n\nAlso, having the loadable module add a custom GUC variable would likely\nbe a preferable solution for control purposes than making specialized\nfunctions. I attach another small hack I made recently, which simply\nscales all the planner's relation size estimates by a scale_factor GUC;\nthis is handy for investigating how a plan will change with relation\nsize, without having to actually create gigabytes of test data.\n\n> There are more things in the proposal as a new pg_optimizer catalog and \n> different way of configuring the hooks. However, this thinks are not \n> mandatory for the functionality but are more user friendly.\n\nGranted, but at this point we are talking about infrastructure for\nplanner-hackers to play with, not something that's intended to be a\nlong-term API for end users. It may or may not happen that we ever\nneed a user API for this at all. 
I think a planner that just \"does the\nright thing\" is far preferable to one with a lot of knobs that users\nhave to know how to twiddle, so I see this more as scaffolding on which\nsomeone can build and test the replacement for GEQO; which ultimately\nwould go in without any user-visible API additions.\n\n\t\t\tregards, tom lane\n\n\n#include \"postgres.h\"\n\n#include \"fmgr.h\"\n#include \"commands/explain.h\"\n#include \"optimizer/plancat.h\"\n#include \"optimizer/planner.h\"\n#include \"utils/guc.h\"\n\n\nPG_MODULE_MAGIC;\n\nvoid\t\t_PG_init(void);\nvoid\t\t_PG_fini(void);\n\nstatic double scale_factor = 1.0;\n\nstatic void my_get_relation_info(PlannerInfo *root, Oid relationObjectId,\n\t\t\t\t\t\t\t\t bool inhparent, RelOptInfo *rel);\n\n\n/*\n * Get control during planner's get_relation_info() function, which sets up\n * a RelOptInfo struct based on the system catalog contents. We can modify\n * the struct contents to cause the planner to work with a hypothetical\n * situation rather than what's actually in the catalogs.\n *\n * This simplistic example just scales all relation size estimates by a\n * user-settable factor.\n */\nstatic void\nmy_get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent,\n\t\t\t\t\t RelOptInfo *rel)\n{\n\tListCell *ilist;\n\n\t/* Do nothing for an inheritance parent RelOptInfo */\n\tif (inhparent)\n\t\treturn;\n\n\trel->pages = (BlockNumber) ceil(rel->pages * scale_factor);\n\trel->tuples = ceil(rel->tuples * scale_factor);\n\n\tforeach(ilist, rel->indexlist)\n\t{\n\t\tIndexOptInfo *ind = (IndexOptInfo *) lfirst(ilist);\n\n\t\tind->pages = (BlockNumber) ceil(ind->pages * scale_factor);\n\t\tind->tuples = ceil(ind->tuples * scale_factor);\n\t}\n}\n\n\n/*\n * _pg_init()\t\t\t- library load-time initialization\n *\n * DO NOT make this static nor change its name!\n */\nvoid\n_PG_init(void)\n{\n\t/* Get into the hooks we need to be in all the time */\n\tget_relation_info_hook = my_get_relation_info;\n\t/* Make scale_factor accessible through GUC */\n\tDefineCustomRealVariable(\"scale_factor\",\n\t\t\t\t\t\t\t \"\",\n\t\t\t\t\t\t\t \"\",\n\t\t\t\t\t\t\t &scale_factor,\n\t\t\t\t\t\t\t 0.0001,\n\t\t\t\t\t\t\t 1e9,\n\t\t\t\t\t\t\t PGC_USERSET,\n\t\t\t\t\t\t\t NULL,\n\t\t\t\t\t\t\t NULL);\n}\n\n\n/*\n * _PG_fini()\t\t\t- library unload-time finalization\n *\n * DO NOT make this static nor change its name!\n */\nvoid\n_PG_fini(void)\n{\n\t/* Get out of all the hooks (just to be sure) */\n\tget_relation_info_hook = NULL;\n}", "msg_date": "Mon, 13 Aug 2007 18:24:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface " }, { "msg_contents": "Tom,\n\n> Also, while we might accept\n> a small hook-function patch for 8.3, there's zero chance of any of that\n> other stuff making it into this release cycle.\n\nI don't think anyone was thinking about 8.3. 
This is pretty much 8.4 \nstuff; Julius is just raising it now because they don't want to go down \nthe wrong path and waste everyone's time.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 13 Aug 2007 16:08:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom,\n>> Also, while we might accept\n>> a small hook-function patch for 8.3, there's zero chance of any of that\n>> other stuff making it into this release cycle.\n\n> I don't think anyone was thinking about 8.3.  This is pretty much 8.4 \n> stuff; Julius is just raising it now because they don't want to go down \n> the wrong path and waste everyone's time.\n\nWell, if they get the hook in now, then in six months or so when they\nhave something to play with, people would be able to play with it.\nIf not, there'll be zero uptake till after 8.4 is released...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Aug 2007 19:18:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal: Pluggable Optimizer Interface " } ]
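For what it's worth, a hypothetical usage sketch of a hook module packaged the way Tom's example above is written; the library name 'scale_factor' and the table in the EXPLAIN are placeholders, not an existing contrib module:

-- LOAD runs the library's _PG_init(), which installs the hook and defines the GUC
LOAD 'scale_factor';

-- pretend every relation is 100x its real size and see whether the plan changes
SET scale_factor = 100.0;
EXPLAIN SELECT * FROM some_table WHERE some_column = 42;   -- placeholder table/column

-- restore normal size estimates for this session
SET scale_factor = 1.0;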
[ { "msg_contents": "Hello!\n\nHere's my test database:\n\n# table\nCREATE TABLE public.t\n(\n id integer NOT NULL,\n a integer NOT NULL,\n CONSTRAINT pk_t PRIMARY KEY (id)\n)\nCREATE INDEX idx_t_a\n ON public.t\n USING btree\n (a);\n\n# function\nCREATE OR REPLACE FUNCTION public.f()\n RETURNS integer AS\n$BODY$BEGIN\n\tRETURN 1;\nEND$BODY$\n LANGUAGE 'plpgsql' STABLE;\n\n# view\nCREATE OR REPLACE VIEW public.v AS\n SELECT t.id, t.a\n FROM public.t\n WHERE public.f() = t.a;\n\n########\n\n# f() is stable\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1991.00 rows=51200 width=8) (actual \ntime=0.060..458.476 rows=50003 loops=1)\n Filter: (f() = a)\n Total runtime: 626.341 ms\n(3 rows)\n\n# changing f() to immutable\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1741.00 rows=51200 width=8) (actual \ntime=0.165..199.215 rows=50003 loops=1)\n Filter: (1 = a)\n Total runtime: 360.819 ms\n(3 rows)\n\n# changing f() to volatile\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1991.00 rows=50000 width=8) (actual \ntime=0.217..560.426 rows=50003 loops=1)\n Filter: (f() = a)\n Total runtime: 732.655 ms\n(3 rows)\n\n########\n\nThe biggest question here is: Why is the runtime of the query with \nthe stable function not near the runtime of the immutable function? \nIt's definitely one query and the manual states that a stable \nfunction does not change in one statement and therefore can be \noptimised.\n\nIs this a pg problem or did I do something wrong?\n\nThank you for your help!\n\nPhilipp\n", "msg_date": "Mon, 13 Aug 2007 22:12:33 +0200", "msg_from": "Philipp Specht <[email protected]>", "msg_from_op": true, "msg_subject": "Stable function optimisation" } ]
[ { "msg_contents": "Hello!\n\nHere's my test database:\n\n# table\nCREATE TABLE public.t\n(\n id integer NOT NULL,\n a integer NOT NULL,\n CONSTRAINT pk_t PRIMARY KEY (id)\n)\nCREATE INDEX idx_t_a\n ON public.t\n USING btree\n (a);\n\n# function\nCREATE OR REPLACE FUNCTION public.f()\n RETURNS integer AS\n$BODY$BEGIN\n\tRETURN 1;\nEND$BODY$\n LANGUAGE 'plpgsql' STABLE;\n\n# view\nCREATE OR REPLACE VIEW public.v AS\n SELECT t.id, t.a\n FROM public.t\n WHERE public.f() = t.a;\n\n########\n\n# f() is stable\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1991.00 rows=51200 width=8) (actual \ntime=0.060..458.476 rows=50003 loops=1)\n Filter: (f() = a)\n Total runtime: 626.341 ms\n(3 rows)\n\n# changing f() to immutable\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1741.00 rows=51200 width=8) (actual \ntime=0.165..199.215 rows=50003 loops=1)\n Filter: (1 = a)\n Total runtime: 360.819 ms\n(3 rows)\n\n# changing f() to volatile\n\ntest=# explain analyze select * from public.v;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------\n Seq Scan on t (cost=0.00..1991.00 rows=50000 width=8) (actual \ntime=0.217..560.426 rows=50003 loops=1)\n Filter: (f() = a)\n Total runtime: 732.655 ms\n(3 rows)\n\n########\n\nThe biggest question here is: Why is the runtime of the query with \nthe stable function not near the runtime of the immutable function? \nIt's definitely one query and the manual states that a stable \nfunction does not change in one statement and therefore can be \noptimised.\n\nIs this a pg problem or did I do something wrong?\n\nThank you for your help!\n\nPhilipp\n", "msg_date": "Mon, 13 Aug 2007 22:37:59 +0200", "msg_from": "Philipp Specht <[email protected]>", "msg_from_op": true, "msg_subject": "Stable function optimisation" }, { "msg_contents": "Philipp Specht <[email protected]> writes:\n> The biggest question here is: Why is the runtime of the query with \n> the stable function not near the runtime of the immutable function? \n\nStable functions don't get folded to constants.\n\n> It's definitely one query and the manual states that a stable \n> function does not change in one statement and therefore can be \n> optimised.\n\nThat's not the type of optimization that gets done with it. What\n\"STABLE\" is for is marking functions that are safe to use in index\nconditions. 
If you'd been using an indexable condition you'd have\nseen three different behaviors here.\n\n(I see that you do have an index on t.a, but apparently there are\ntoo many matching rows for the planner to think the index is worth\nusing.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Aug 2007 17:01:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stable function optimisation " }, { "msg_contents": "Hi Tom,\n\nThank you very much for your explanation.\n\nOn 13.08.2007, at 23:01, Tom Lane wrote:\n\n> Philipp Specht <[email protected]> writes:\n>> The biggest question here is: Why is the runtime of the query with\n>> the stable function not near the runtime of the immutable function?\n>\n> Stable functions don't get folded to constants.\n\nI tried to force this by using the following construct:\n\nSELECT t.id, t.a FROM public.t WHERE t.a=(VALUES(public.f()));\n\nIs this a bad practice and will destroy some other thing I can't \nthink of at the moment? What it means for me at the moment is about \nhalf the query time of a high usage query directly linked to a gui. \nThat's a big gain for a user interface and takes the query under the \nmagical 500ms response time...\n\n\n>> It's definitely one query and the manual states that a stable\n>> function does not change in one statement and therefore can be\n>> optimised.\n>\n> That's not the type of optimization that gets done with it. What\n> \"STABLE\" is for is marking functions that are safe to use in index\n> conditions. If you'd been using an indexable condition you'd have\n> seen three different behaviors here.\n>\n> (I see that you do have an index on t.a, but apparently there are\n> too many matching rows for the planner to think the index is worth\n> using.)\n\nYes, that's not the real problem here. It's only a test database and \nthe real data behaves a bit differently.\n\nHave a nice day,\nPhilipp\n\n", "msg_date": "Wed, 15 Aug 2007 22:48:34 +0200", "msg_from": "Philipp Specht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stable function optimisation " } ]
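Two commonly used alternatives to the (VALUES(...)) construct above, sketched against the same test schema; whether either one is a win depends on the real data, as Philipp points out:

-- 1. an uncorrelated scalar sub-select is planned as an InitPlan, so the stable
--    function is evaluated only once per query rather than once per row
SELECT id, a FROM public.t WHERE a = (SELECT public.f());

-- 2. if f() truly never changes, mark it IMMUTABLE so it is folded to a constant
--    at plan time, as in the second EXPLAIN ANALYZE above
ALTER FUNCTION public.f() IMMUTABLE;
SELECT id, a FROM public.t WHERE a = public.f();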
[ { "msg_contents": "Hello,\n\nI need to setup a web server with PostgreSQL. The heaviest application will\nbe product comparisons engine that will analyze 5-10 Million products.\n\nI have 6 SAS 15K disks, how should I set them up:\n\nRAID 10 on all 6 disks ( OS + Apache + PostgreSQL )\nRAID 1 on 2 disks ( OS + Apache ) + RAID 10 on 4 disks ( PostgreSQL )\n\nWhat is the recommended stripe size ( The computer is Dell PowerEdge 2950 )\n\nThanks,\nMiki\n\n-- \n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nHello,I need to setup a web server with PostgreSQL. The heaviest application will be product comparisons engine that will analyze 5-10 Million products.I have 6 SAS 15K disks, how should I set them up:\n\nRAID 10 on all 6 disks ( OS + Apache + PostgreSQL )RAID 1 on 2 disks ( OS + Apache ) + RAID 10 on 4 disks ( PostgreSQL )What is the recommended stripe size ( The computer is Dell PowerEdge 2950 )Thanks,\nMiki-- --------------------------------------------------Michael Ben-Nes - Internet Consultant and  Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113--------------------------------------------------", "msg_date": "Tue, 14 Aug 2007 13:14:39 +0300", "msg_from": "\"Michael Ben-Nes\" <[email protected]>", "msg_from_op": true, "msg_subject": "RAID 10 or RAID 10 + RAID 1" }, { "msg_contents": "On Tue, Aug 14, 2007 at 01:14:39PM +0300, Michael Ben-Nes wrote:\n> Hello,\n> \n> I need to setup a web server with PostgreSQL. The heaviest application will\n> be product comparisons engine that will analyze 5-10 Million products.\n> \n> I have 6 SAS 15K disks, how should I set them up:\n> \n> RAID 10 on all 6 disks ( OS + Apache + PostgreSQL )\n> RAID 1 on 2 disks ( OS + Apache ) + RAID 10 on 4 disks ( PostgreSQL )\n \nIf the controller is any good and has a battery-backed write cache,\nyou'll probably do better with a 6 disk RAID10. If not, I'd put all the\ndata on a 4 disk RAID10, and everything else (including pg_xlog!) on a\nmirror.\n\n> What is the recommended stripe size ( The computer is Dell PowerEdge 2950 )\n\nIf you search through the archives I think you'll find some stuff about\nstripe size and performance.\n\nAs always, your best bet is to benchmark both approaches with your\nactual application.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 14 Aug 2007 16:31:50 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 or RAID 10 + RAID 1" } ]
[ { "msg_contents": "Hello,\nWhe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.\nThe query only uses the index if we have a \"limit n\":\n\nWithout \"Limit n\"\nexplain\nselect esapcuit, esapcuil\nfrom esact00 t1\norder by esapcuit, esapcuil\n\nSort (cost=843833.82..853396.76 rows=3825177 width=30)\n Sort Key: esapcuit, esapcuil\n -> Seq Scan on esact00 t1 (cost=0.00..111813.77 rows=3825177 width=30)\n\nWith \"Limit n\"\nexplain\nselect esapcuit, esapcuil\nfrom esact00 t1\norder by esapcuit, esapcuil\nlimit 1\n\nLimit (cost=0.00..1.86 rows=1 width=30)\n -> Index Scan using uesact002 on esact00 t1 (cost=0.00..7129736.89 rows=3825177 width=30)\n\nOur postgresql.conf is:\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\n\nThank you.\nSebasti�n\n\n\nSebasti�n Baioni\n http://www.acomplejados.com.ar\n http://www.extremista.com.ar\n http://www.coolartists.com.ar\n\n \n---------------------------------\n\n�S� un mejor ambientalista!\nEncontr� consejos para cuidar el lugar donde vivimos..\n\nHello,Whe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.The query only uses the index if we have a \"limit n\":Without \"Limit n\"explainselect esapcuit, esapcuilfrom esact00 t1order by esapcuit, esapcuilSort  (cost=843833.82..853396.76 rows=3825177 width=30)  Sort Key: esapcuit, esapcuil  ->  Seq Scan on esact00 t1  (cost=0.00..111813.77 rows=3825177 width=30)With \"Limit n\"explainselect esapcuit, esapcuilfrom esact00 t1order by esapcuit, esapcuillimit 1Limit  (cost=0.00..1.86 rows=1 width=30)  ->  Index Scan using uesact002 on esact00 t1  (cost=0.00..7129736.89 rows=3825177 width=30)Our postgresql.conf is:enable_bitmapscan = onenable_hashagg = onenable_hashjoin = onenable_indexscan = onenable_mergejoin = onenable_nestloop = onenable_seqscan =\n onenable_sort = onenable_tidscan = onThank you.Sebasti�nSebasti�n Baioni http://www.acomplejados.com.ar http://www.extremista.com.ar http://www.coolartists.com.ar\n�S� un mejor ambientalista!Encontr� consejos para cuidar el lugar donde vivimos..", "msg_date": "Wed, 15 Aug 2007 16:36:45 -0300 (ART)", "msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>", "msg_from_op": true, "msg_subject": "Indexscan is only used if we use \"limit n\"" }, { "msg_contents": "Sebasti�n Baioni escribi�:\n> Hello,\n> Whe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.\n> The query only uses the index if we have a \"limit n\":\n\n> Without \"Limit n\"\n> explain\n> select esapcuit, esapcuil\n> from esact00 t1\n> order by esapcuit, esapcuil\n> \n> Sort (cost=843833.82..853396.76 rows=3825177 width=30)\n> Sort Key: esapcuit, esapcuil\n> -> Seq Scan on esact00 t1 (cost=0.00..111813.77 rows=3825177 width=30)\n\nThat's right. What else did you expect? 
It estimates it has to return\n3 million rows after all -- using an indexscan would be slow.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"Puedes vivir solo una vez, pero si lo haces bien, una vez es suficiente\"\n", "msg_date": "Wed, 15 Aug 2007 16:00:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexscan is only used if we use \"limit n\"" }, { "msg_contents": "\nwhich column does your indice cover?\n\nEm Qua, 2007-08-15 �s 16:36 -0300, Sebasti�n Baioni escreveu:\n> Hello,\n> Whe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled\n> by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.\n> The query only uses the index if we have a \"limit n\":\n> \n> Without \"Limit n\"\n> explain\n> select esapcuit, esapcuil\n> from esact00 t1\n> order by esapcuit, esapcuil\n> \n> Sort (cost=843833.82..853396.76 rows=3825177 width=30)\n> Sort Key: esapcuit, esapcuil\n> -> Seq Scan on esact00 t1 (cost=0.00..111813.77 rows=3825177\n> width=30)\n> \n> With \"Limit n\"\n> explain\n> select esapcuit, esapcuil\n> from esact00 t1\n> order by esapcuit, esapcuil\n> limit 1\n> \n> Limit (cost=0.00..1.86 rows=1 width=30)\n> -> Index Scan using uesact002 on esact00 t1 (cost=0.00..7129736.89\n> rows=3825177 width=30)\n> \n> Our postgresql.conf is:\n> enable_bitmapscan = on\n> enable_hashagg = on\n> enable_hashjoin = on\n> enable_indexscan = on\n> enable_mergejoin = on\n> enable_nestloop = on\n> enable_seqscan = on\n> enable_sort = on\n> enable_tidscan = on\n> \n> Thank you.\n> Sebasti�n\n> \n> \n> Sebasti�n Baioni\n> http://www.acomplejados.com.ar\n> http://www.extremista.com.ar\n> http://www.coolartists.com.ar\n> \n> \n> ______________________________________________________________________\n> \n> �S� un mejor ambientalista!\n> Encontr� consejos para cuidar el lugar donde vivimos..\n\n", "msg_date": "Wed, 15 Aug 2007 17:02:43 -0300", "msg_from": "joao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexscan is only used if we use \"limit n\"" }, { "msg_contents": "On Wed, 2007-08-15 at 16:36 -0300, Sebastián Baioni wrote:\n> Hello,\n> Whe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled\n> by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.\n> The query only uses the index if we have a \"limit n\":\n> \n> Without \"Limit n\"\n> explain\n> select esapcuit, esapcuil\n> from esact00 t1\n> order by esapcuit, esapcuil\n> \n> Sort (cost=843833.82..853396.76 rows=3825177 width=30)\n> Sort Key: esapcuit, esapcuil\n> -> Seq Scan on esact00 t1 (cost=0.00..111813.77 rows=3825177\n> width=30)\n> \n> With \"Limit n\"\n> explain\n> select esapcuit, esapcuil\n> from esact00 t1\n> order by esapcuit, esapcuil\n> limit 1\n\nThis isn't really unexpected-- it's faster to do a full sequential scan\nof a table than it is to do a full index traversal over the table. And\nusually it's still cheaper even after sorting the results of the full\ntable scan.\n\nSo as near as we can tell, PG is just doing what it's supposed to do and\npicking the best plan it can.\n\nYou didn't really ask a question-- is this causing problems somehow, or\nwere you just confused by the behavior?\n\n-- Mark\n", "msg_date": "Wed, 15 Aug 2007 13:03:10 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexscan is only used if we use \"limit n\"" }, { "msg_contents": "Sebastian,\n\n> Whe are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled\n> by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305. 
The query only uses the index\n> if we have a \"limit n\":\n\nUm, why are you running an unpatched version of 8.2?  You should be running \n8.2.4.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 16 Aug 2007 16:10:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexscan is only used if we use \"limit n\"" } ]
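If you want to see for yourself that the seq-scan-plus-sort really is the cheaper plan here, a quick session-local test along these lines (for experimentation only, not a production setting) shows what the full index traversal actually costs:

-- push the planner away from the sequential scan so it picks the index scan
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT esapcuit, esapcuil FROM esact00 ORDER BY esapcuit, esapcuil;
-- compare the runtime against EXPLAIN ANALYZE of the default plan, then reset
RESET enable_seqscan;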
[ { "msg_contents": "Hi\n\nI wanted to know if the integrated perc 5/i which come with Dell 2950 will\nyield maximum performance from RAID 10 ( 15K SAS ).\nOr should I ask for different card ?\n\nI read an old post that shows that RAID 10 does not work eficently under\nperc 5/i\nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#38837995887b6033\n\nDoes any one have any experience with RAID 10 & perc 5/i ?\n\nThanks,\nMiki\n\n-- \n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nHiI wanted to know if the integrated perc 5/i which come with Dell 2950 will yield maximum performance from RAID 10 ( 15K SAS ).Or should I ask for different card ?I read an old post that shows that RAID 10 does not work eficently under perc 5/i\nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#38837995887b6033\nDoes any one have any experience with RAID 10 & perc 5/i ?Thanks,Miki-- --------------------------------------------------Michael Ben-Nes - Internet Consultant and  Director.\nhttp://www.epoch.co.il - weaving the Net.Cellular: 054-4848113--------------------------------------------------", "msg_date": "Thu, 16 Aug 2007 11:26:52 +0300", "msg_from": "\"Michael Ben-Nes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Integrated perc 5/i" }, { "msg_contents": "On 8/16/07, Michael Ben-Nes <[email protected]> wrote:\n> Hi\n>\n> I wanted to know if the integrated perc 5/i which come with Dell 2950 will\n> yield maximum performance from RAID 10 ( 15K SAS ).\n> Or should I ask for different card ?\n>\n> I read an old post that shows that RAID 10 does not work eficently under\n> perc 5/i\n> http://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#38837995887b6033\n>\n> Does any one have any experience with RAID 10 & perc 5/i ?\n\nno, I've tested raid 10 on a few different perc 5/i and never had\nthose results. raid 10 often gives poor sequential read performance\nrelative to a raid 5 but better random performance generally, which is\nusually more important. That said, the perc 5/e (never done raid 5 on\nthe 5/i) has posted some of the best raid 5 numbers I've ever seen in\nterms of random performance on sata.\n\nmerlin\n", "msg_date": "Thu, 16 Aug 2007 08:59:51 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On Thu, Aug 16, 2007 at 11:26:52AM +0300, Michael Ben-Nes wrote:\n> Does any one have any experience with RAID 10 & perc 5/i ?\n\nWithout having done PostgreSQL benchmarking, we have a 2950 with four SATA\ndisks in RAID 10 (and two SAS disks in RAID 1), and have not seen any\nperformance issues.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 16 Aug 2007 18:43:47 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "Hi Miki,\n \nI am using a Dell 2950, and I recently switched from using RAID 5 of all\nsix disks to three RAID 1 pairs with the OS on the first pair, postgres\non the second except for pg_xlog, which I moved to the third pair. This\nconfiguration change increased the insert performance of my application\nby 40%. I have not tried RAID 10 so I cannot help you there. My\nsuggestion is test both RAID 5 and RAID 10, and report back to us what\nyou find.\n \nEd\n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael\nBen-Nes\nSent: Thursday, August 16, 2007 1:27 AM\nTo: PostgreSQL Performance\nSubject: [PERFORM] Integrated perc 5/i\n\n\nHi\n\nI wanted to know if the integrated perc 5/i which come with Dell 2950\nwill yield maximum performance from RAID 10 ( 15K SAS ).\nOr should I ask for different card ?\n\nI read an old post that shows that RAID 10 does not work eficently under\nperc 5/i \nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/b8\n5926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&h\nl=en#38837995887b6033 \n\nDoes any one have any experience with RAID 10 & perc 5/i ?\n\nThanks,\nMiki\n\n-- \n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director. \nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n-------------------------------------------------- \n\n\n\n\n\nHi Miki,\n \nI am using a Dell 2950, and I recently switched from using RAID 5 of all \nsix disks to three RAID 1 pairs with the OS on the first \npair, postgres on the second except for pg_xlog, which I moved to the third \npair.  This configuration change increased the insert performance \nof my application by 40%.  I have not tried RAID 10 so I cannot help you \nthere.  My suggestion is test both RAID 5 and RAID 10, and report back to \nus what you find.\n \nEd\n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of Michael \nBen-NesSent: Thursday, August 16, 2007 1:27 AMTo: \nPostgreSQL PerformanceSubject: [PERFORM] Integrated perc \n5/i\nHiI wanted to know if the integrated perc 5/i which come with \nDell 2950 will yield maximum performance from RAID 10 ( 15K SAS ).Or should \nI ask for different card ?I read an old post that shows that RAID 10 \ndoes not work eficently under perc 5/i http://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#38837995887b6033 \nDoes any one have any experience with RAID 10 & perc 5/i \n?Thanks,Miki-- \n--------------------------------------------------Michael Ben-Nes - \nInternet Consultant and  Director. http://www.epoch.co.il - weaving the \nNet.Cellular: \n054-4848113--------------------------------------------------", "msg_date": "Thu, 16 Aug 2007 13:30:11 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "Thanks for all the answers.\nIt seems its a capable card.\nDid any one changed the default stripe of 128kb ?\n\n\nOn 8/16/07, Steinar H. 
Gunderson <[email protected]> wrote:\n>\n> On Thu, Aug 16, 2007 at 11:26:52AM +0300, Michael Ben-Nes wrote:\n> > Does any one have any experience with RAID 10 & perc 5/i ?\n>\n> Without having done PostgreSQL benchmarking, we have a 2950 with four SATA\n> disks in RAID 10 (and two SAS disks in RAID 1), and have not seen any\n> performance issues.\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n\n\n\n-- \n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nThanks for all the answers.It seems its a capable card.Did any one changed the default stripe of 128kb ?On 8/16/07, Steinar H. Gunderson <\[email protected]> wrote:On Thu, Aug 16, 2007 at 11:26:52AM +0300, Michael Ben-Nes wrote:\n> Does any one have any experience with RAID 10 & perc 5/i ?Without having done PostgreSQL benchmarking, we have a 2950 with four SATAdisks in RAID 10 (and two SAS disks in RAID 1), and have not seen any\nperformance issues./* Steinar */--Homepage: http://www.sesse.net/-- --------------------------------------------------\nMichael Ben-Nes - Internet Consultant and  Director.http://www.epoch.co.il - weaving the Net.Cellular: 054-4848113--------------------------------------------------", "msg_date": "Thu, 16 Aug 2007 20:47:25 +0300", "msg_from": "\"Michael Ben-Nes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "Hi Michael,\n\nThere is a problem with some Dell ³perc 5² RAID cards, specifically we¹ve\nhad this problem with the 2950 as of 6 months ago ­ they do not support\nRAID10. They have a setting that sounds like RAID10, but it actually\nimplements spanning of mirrors. This means that you will not get more than\none disk worth of performance whether you are performing random seeks\n(within a one disk sized area) or sequential transfers.\n\nI recommend you read the section in the Dell configuration guide very\ncarefully and look for supplemental sources of technical information about\nit. We found the issue clearly explained in a Dell technical memo that I\ndon¹t have in front of me ­ we were shocked to find this out.\n\nAs suggested ­ the RAID5 numbers from these controllers are very strong.\n\n- Luke\n\n\nOn 8/16/07 1:26 AM, \"Michael Ben-Nes\" <[email protected]> wrote:\n\n> Hi\n> \n> I wanted to know if the integrated perc 5/i which come with Dell 2950 will\n> yield maximum performance from RAID 10 ( 15K SAS ).\n> Or should I ask for different card ?\n> \n> I read an old post that shows that RAID 10 does not work eficently under perc\n> 5/i \n> http://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe\n> 6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#3883799\n> 5887b6033 \n> <http://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926f\n> e6de1f6c2/38837995887b6033?lnk=st&amp;q=perc+5%2Fi+performance&amp;rnum=6&amp;\n> hl=en#38837995887b6033>\n> \n> Does any one have any experience with RAID 10 & perc 5/i ?\n> \n> Thanks,\n> Miki\n\n\n\n\n\nRe: [PERFORM] Integrated perc 5/i\n\n\nHi Michael,\n\nThere is a problem with some Dell “perc 5” RAID cards, specifically we’ve had this problem with the 2950 as of 6 months ago – they do not support RAID10.  They have a setting that sounds like RAID10, but it actually implements spanning of mirrors.  
This means that you will not get more than one disk worth of performance whether you are performing random seeks (within a one disk sized area) or sequential transfers.\n\nI recommend you read the section in the Dell configuration guide very carefully and look for supplemental sources of technical information about it.  We found the issue clearly explained in a Dell technical memo that I don’t have in front of me – we were shocked to find this out.\n\nAs suggested – the RAID5 numbers from these controllers are very strong.\n\n- Luke\n\n\nOn 8/16/07 1:26 AM, \"Michael Ben-Nes\" <[email protected]> wrote:\n\nHi\n\nI wanted to know if the integrated perc 5/i which come with Dell 2950 will yield maximum performance from RAID 10 ( 15K SAS ).\nOr should I ask for different card ?\n\nI read an old post that shows that RAID 10 does not work eficently under perc 5/i \nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&q=perc+5%2Fi+performance&rnum=6&hl=en#38837995887b6033  <http://groups.google.com/group/pgsql.performance/browse_thread/thread/b85926fe6de1f6c2/38837995887b6033?lnk=st&amp;q=perc+5%2Fi+performance&amp;rnum=6&amp;hl=en#38837995887b6033> \n\nDoes any one have any experience with RAID 10 & perc 5/i ?\n\nThanks,\nMiki", "msg_date": "Thu, 16 Aug 2007 10:53:00 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On Thu, Aug 16, 2007 at 10:53:00AM -0700, Luke Lonergan wrote:\n> They have a setting that sounds like RAID10, but it actually\n> implements spanning of mirrors.\n\nThat's interesting. I'm pretty sure it actually says \"RAID10\" in the BIOS,\nbut is this a lie?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 16 Aug 2007 19:59:15 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On Thu, Aug 16, 2007 at 07:59:15PM +0200, Steinar H. Gunderson wrote:\n> On Thu, Aug 16, 2007 at 10:53:00AM -0700, Luke Lonergan wrote:\n> > They have a setting that sounds like RAID10, but it actually\n> > implements spanning of mirrors.\n> \n> That's interesting. I'm pretty sure it actually says \"RAID10\" in the BIOS,\n> but is this a lie?\n\nUnless they use the \"plus notation\" (ie: RAID 1+0 or RAID 0+1), you\nnever truly know what you're getting.\n\nBTW, there's other reasons that RAID 0+1 stinks, beyond just\nperformance.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 16 Aug 2007 14:51:10 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On 8/16/07, Luke Lonergan <[email protected]> wrote:\n>\n> Hi Michael,\n>\n> There is a problem with some Dell \"perc 5\" RAID cards, specifically we've\n> had this problem with the 2950 as of 6 months ago – they do not support\n> RAID10. They have a setting that sounds like RAID10, but it actually\n> implements spanning of mirrors. This means that you will not get more than\n> one disk worth of performance whether you are performing random seeks\n> (within a one disk sized area) or sequential transfers.\n>\n> I recommend you read the section in the Dell configuration guide very\n> carefully and look for supplemental sources of technical information about\n> it. 
We found the issue clearly explained in a Dell technical memo that I\n> don't have in front of me – we were shocked to find this out.\n\ninteresting. this may also be true of the other 'rebrands' of the lsi\nlogic chipset. for example, the ibm 8480/exp3000 sets up the same\nway, namely you do the 'spanadd' function of the firmware which layers\nthe raids.\n\nfwiw, I will be testing perc 5 raid 10, 01, 00, and 05 in a dual\ncontroller controller configuration as well as dual controller\nconfiguration over the md1000 (which is active/active) in a few days.\nThis should give very good support to your claim if the arrays spanned\nin software singnificantly outperform a single controller.\n\nmerlin\n", "msg_date": "Fri, 17 Aug 2007 03:44:54 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On 8/16/07, [email protected] <[email protected]> wrote:\n>\n>\n> Hi Miki,\n>\n> I am using a Dell 2950, and I recently switched from using RAID 5 of all six\n> disks to three RAID 1 pairs with the OS on the first pair, postgres on the\n> second except for pg_xlog, which I moved to the third pair. This\n> configuration change increased the insert performance of my application by\n> 40%. I have not tried RAID 10 so I cannot help you there. My suggestion is\n> test both RAID 5 and RAID 10, and report back to us what you find.\n\nGood to know.\n\nAlso, be aware that one some RAID controllers, you'll get better\nperformance if you make the mirrors on the RAID controller, then RAID\n0 them in the OS / Kernel. RAID 0 is very low on overhead, so it\ndoesn't have much negative impact on the server anyway. We have a\nre-purposed Dell 4600 workstation with a single CPU and 2 Gigs ram\nwith a 4 disk linux kernel software RAID-10 that's quite a bit faster\nat most db work than the 2850 w/ dual CPUs, 4 gigs ram, and a perc 5\nseries controller w/ battery backed cache and a 4 disk RAID-5 it is\ncomplementing.\n", "msg_date": "Thu, 16 Aug 2007 17:25:07 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "On Thu, Aug 16, 2007 at 01:30:11PM -0400, [email protected] wrote:\n> Hi Miki,\n> by 40%. I have not tried RAID 10 so I cannot help you there. My\n> suggestion is test both RAID 5 and RAID 10, and report back to us what\n> you find.\n\nUnless you're running something like a data warehouse, I'd put a real\nlow priority on testing RAID5... it's rarely a good idea for a database.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 16 Aug 2007 17:41:00 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" }, { "msg_contents": "Yay - looking forward to your results!\n\n- Luke\n\n\nOn 8/16/07 3:14 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On 8/16/07, Luke Lonergan <[email protected]> wrote:\n>> \n>> Hi Michael,\n>> \n>> There is a problem with some Dell \"perc 5\" RAID cards, specifically we've\n>> had this problem with the 2950 as of 6 months ago ­ they do not support\n>> RAID10. They have a setting that sounds like RAID10, but it actually\n>> implements spanning of mirrors. 
This means that you will not get more than\n>> one disk worth of performance whether you are performing random seeks\n>> (within a one disk sized area) or sequential transfers.\n>> \n>> I recommend you read the section in the Dell configuration guide very\n>> carefully and look for supplemental sources of technical information about\n>> it. We found the issue clearly explained in a Dell technical memo that I\n>> don't have in front of me – we were shocked to find this out.\n> \n> interesting. this may also be true of the other 'rebrands' of the lsi\n> logic chipset. for example, the ibm 8480/exp3000 sets up the same\n> way, namely you do the 'spanadd' function of the firmware which layers\n> the raids.\n> \n> fwiw, I will be testing perc 5 raid 10, 01, 00, and 05 in a dual\n> controller controller configuration as well as dual controller\n> configuration over the md1000 (which is active/active) in a few days.\n> This should give very good support to your claim if the arrays spanned\n> in software singnificantly outperform a single controller.\n> \n> merlin\n\n\n", "msg_date": "Thu, 16 Aug 2007 18:12:49 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Integrated perc 5/i" } ]
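
For anyone repeating Merlin's controller comparison, here is a minimal in-psql sketch of the sequential-throughput side of such a test. The table name seq_io_test and the 20-million-row size are invented for illustration and should be scaled so the table is clearly larger than physical RAM; random-seek behaviour still needs a separate benchmark tool.

    -- Scratch data set; drop it when done. Size it well past RAM so the scan
    -- has to come off the array rather than the OS cache.
    CREATE TABLE seq_io_test AS
        SELECT g AS id, md5(g::text) AS filler
        FROM generate_series(1, 20000000) AS g;

    -- Repeat under each layout (firmware "RAID10" vs. controller mirrors striped
    -- by the OS), discard the first partially cached run, and compare wall-clock times.
    \timing
    SELECT count(*) FROM seq_io_test;

A result that barely beats a single drive's sequential rate would be consistent with the spanned-mirror behaviour Luke describes above.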
[ { "msg_contents": "Hello everyone,\n\nThis being my first e-mail to the mailing list, I hope my question is \nrelevant and on-topic. I'm seeing poor performance on a few queries \nwhere the planner decides to use a bitmap scan instead of using indices.\n\nI'm using a stock PostgreSQL 8.1.9 on Debian 4.0r0 (x86). The \ndatabase is vacuumed and analyzed daily (\"full\") and testing on a \ndifferent machine (stock 8.1.9 on Ubuntu 6.06 LTS (x86)) gave the \nsame results. I've made sure the data for each query was cached.\n\nThe (example) query:\nSELECT * FROM movies WHERE letter = 'T' ORDER BY name ASC LIMIT 100 \nOFFSET 1900;\nis run on a single table \"movies\" (~ 250.000 rows) with the following \nstructure and indices:\n\n Table \"public.movies\"\n Column | Type | \nModifiers\n----------+----------------------------- \n+-----------------------------------------------------------\nmovie_id | integer | not null default nextval \n('movies_movie_id_seq'::regclass)\nname | character varying(255) | not null\nyear | integer | not null\nviews | integer | not null default 0\nlastview | timestamp without time zone |\nletter | character(1) | not null default 'A'::bpchar\nIndexes:\n \"movies_pkey\" PRIMARY KEY, btree (movie_id)\n \"movies_lastview\" btree (lastview)\n \"movies_letter\" btree (letter)\n \"movies_letter_name\" btree (letter, name)\n \"movies_name\" btree (name)\n \"movies_year\" btree (\"year\")\n\nRunning the query using EXPLAIN ANALYZE results in the following \nquery plan and execution time:\n\nLimit (cost=4002.04..4002.29 rows=100 width=48) (actual \ntime=1469.565..1470.097 rows=100 loops=1)\n -> Sort (cost=3997.29..4031.18 rows=13556 width=48) (actual \ntime=1460.958..1467.993 rows=2000 loops=1)\n Sort Key: name\n -> Bitmap Heap Scan on movies (cost=86.45..3066.90 \nrows=13556 width=48) (actual time=20.522..77.889 rows=13640 loops=1)\n Recheck Cond: (letter = 'T'::bpchar)\n -> Bitmap Index Scan on movies_letter \n(cost=0.00..86.45 rows=13556 width=0) (actual time=18.452..18.452 \nrows=13658 loops=1)\n Index Cond: (letter = 'T'::bpchar)\nTotal runtime: 1474.821 ms\n\nSetting enable_bitmapscan to 0 results in the following plan and \nexecution time:\n\nLimit (cost=5041.06..5306.38 rows=100 width=48) (actual \ntime=15.385..16.305 rows=100 loops=1)\n -> Index Scan using movies_letter_name on movies \n(cost=0.00..35966.65 rows=13556 width=48) (actual time=0.121..14.067 \nrows=2000 loops=1)\n Index Cond: (letter = 'T'::bpchar)\nTotal runtime: 16.604 ms\n\nSeeing that disabling the bitmap scan speeds up the query about fifty \ntimes, it would be interesting to know what is causing the planner to \ndecide to not use the appropriate index.\n\nIf anyone could comment a bit on my example, that would be great. 
\nThere's a few things I'm considering regarding this:\n- I could disable bitmap scan altogether, per application or query, \nbut that does not seem elegant, I'd rather have the query planner \nmake better decisions\n- I could try and test what 8.2 does if someone expects the results \nto be different, but I can't yet upgrade my production servers to 8.2\n- am I just running into a corner case which falls outside of the \nplanner's logic?\n\nThanks in advance for your efforts and replies.\n\nWith kind regards,\n\nFrank Schoep\n\n", "msg_date": "Thu, 16 Aug 2007 18:14:02 +0200", "msg_from": "Frank Schoep <[email protected]>", "msg_from_op": true, "msg_subject": "Bad planner decision - bitmap scan instead of index" }, { "msg_contents": "Frank Schoep <[email protected]> writes:\n> Limit (cost=4002.04..4002.29 rows=100 width=48) (actual \n> time=1469.565..1470.097 rows=100 loops=1)\n> -> Sort (cost=3997.29..4031.18 rows=13556 width=48) (actual \n> time=1460.958..1467.993 rows=2000 loops=1)\n> Sort Key: name\n> -> Bitmap Heap Scan on movies (cost=86.45..3066.90 \n> rows=13556 width=48) (actual time=20.522..77.889 rows=13640 loops=1)\n> Recheck Cond: (letter = 'T'::bpchar)\n> -> Bitmap Index Scan on movies_letter \n> (cost=0.00..86.45 rows=13556 width=0) (actual time=18.452..18.452 \n> rows=13658 loops=1)\n> Index Cond: (letter = 'T'::bpchar)\n> Total runtime: 1474.821 ms\n\nWhy is the sort step so slow? Sorting a mere 13k rows shouldn't take\nvery long. Maybe you are overrunning work_mem and it's falling back\nto a disk sort ... what is work_mem set to?\n\nAnother theory is that you are using a locale in which strcoll() is\nhorridly expensive :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Aug 2007 13:01:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index " }, { "msg_contents": "On Aug 16, 2007, at 7:01 PM, Tom Lane wrote:\n> �\n> Why is the sort step so slow? Sorting a mere 13k rows shouldn't take\n> very long. Maybe you are overrunning work_mem and it's falling back\n> to a disk sort ... what is work_mem set to?\n\nBy default work_mem is set to \"1024\". Increasing the value to \"8192\" \nhalves the execution time, still leaving a factor twenty-five \nperformance decrease compared to using the index. The machine I'm \ntesting this on is a very modest Pentium 3 at 450 MHz.\n\n> Another theory is that you are using a locale in which strcoll() is\n> horridly expensive :-(\n\nRunning 'locale' indicates I'm using \"en_US.UTF-8\" with language \n\"en_NL:en\". My databases all use the UTF8 encoding.\n\nSincerely,\n\nFrank\n", "msg_date": "Thu, 16 Aug 2007 21:25:14 +0200", "msg_from": "Frank Schoep <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index " }, { "msg_contents": "On Thu, Aug 16, 2007 at 06:14:02PM +0200, Frank Schoep wrote:\n> The (example) query:\n> SELECT * FROM movies WHERE letter = 'T' ORDER BY name ASC LIMIT 100 \n> OFFSET 1900;\n\ntry to change the query to:\nSELECT * FROM movies WHERE letter = 'T' ORDER BY letter ASC, name ASC LIMIT 100 \nOFFSET 1900;\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. 
here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Fri, 17 Aug 2007 09:28:39 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index" }, { "msg_contents": "On Aug 17, 2007, at 9:28 AM, hubert depesz lubaczewski wrote:\n> �\n> try to change the query to:\n> SELECT * FROM movies WHERE letter = 'T' ORDER BY letter ASC, name \n> ASC LIMIT 100\n> OFFSET 1900;\n\nThanks for the suggestion, however executing this query takes even \nlonger regardless of work_mem. The query leads to this plan:\n\nLimit (cost=4320.68..4320.93 rows=100 width=48) (actual \ntime=2137.764..2138.294 rows=100 loops=1)\n -> Sort (cost=4315.93..4351.49 rows=14221 width=48) (actual \ntime=2129.755..2136.184 rows=2000 loops=1)\n Sort Key: letter, name\n -> Bitmap Heap Scan on movies (cost=90.77..3067.54 \nrows=14221 width=48) (actual time=20.277..89.913 rows=13640 loops=1)\n Recheck Cond: (letter = 'T'::bpchar)\n -> Bitmap Index Scan on movies_letter \n(cost=0.00..90.77 rows=14221 width=0) (actual time=18.139..18.139 \nrows=13644 loops=1)\n Index Cond: (letter = 'T'::bpchar)\nTotal runtime: 2143.111 ms\n\nTo compare, that same query (sorting by two columns) without bitmap \nscan runs like this:\n\nLimit (cost=5025.26..5289.75 rows=100 width=48) (actual \ntime=14.986..15.911 rows=100 loops=1)\n -> Index Scan using movies_letter_name on movies \n(cost=0.00..37612.76 rows=14221 width=48) (actual time=0.125..13.686 \nrows=2000 loops=1)\n Index Cond: (letter = 'T'::bpchar)\nTotal runtime: 16.214 ms\n\nI'm not an expert at how the planner decides which query plan to use, \nbut it seems that in my (corner?) case bitmap scan shouldn't be \npreferred over the index scan, as the index is pre-sorted and spans \nall columns involved in the 'WHERE' and 'ORDER BY' clauses.\n\nRegarding the sort performance and work_mem size I have tested the \nsame scenario on a second machine (dual P3-1.13 GHz) with work_mem \nset to 8192, using the same PostgreSQL version. The query plans are \nidentical and running times are ~300ms for the bitmap scan and ~5ms \nfor index scan.\n\nSincerely,\n\nFrank\n", "msg_date": "Fri, 17 Aug 2007 10:43:18 +0200", "msg_from": "Frank Schoep <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index" }, { "msg_contents": "On Fri, Aug 17, 2007 at 10:43:18AM +0200, Frank Schoep wrote:\n>On Aug 17, 2007, at 9:28 AM, hubert depesz lubaczewski wrote:\n>(cost=0.00..37612.76 rows=14221 width=48) (actual time=0.125..13.686 \n>rows=2000 loops=1)\n[snip]\n>I'm not an expert at how the planner decides which query plan to use, \n\nNeither am I. :) I do notice that the estimated number of rows is \nsignificantly larger than the real number; you may want to bump up your \nstatistics a bit to see if it can estimate better.\n\nMike Stone\n", "msg_date": "Fri, 17 Aug 2007 11:23:29 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index" }, { "msg_contents": "On Aug 17, 2007, at 5:23 PM, Michael Stone wrote:\n> On Fri, Aug 17, 2007 at 10:43:18AM +0200, Frank Schoep wrote:\n>> On Aug 17, 2007, at 9:28 AM, hubert depesz lubaczewski wrote:\n>> (cost=0.00..37612.76 rows=14221 width=48) (actual \n>> time=0.125..13.686 rows=2000 loops=1)\n> [snip]\n>> I'm not an expert at how the planner decides which query plan to use,\n>\n> Neither am I. 
:) I do notice that the estimated number of rows is \n> significantly larger than the real number; you may want to bump up \n> your statistics a bit to see if it can estimate better.\n\nI think the actual number of 2000 rows is based on the LIMIT (100) \nand OFFSET (1900) clauses. 14K rows will have to be sorted, but only \n2000 have to actually be returned for PostgreSQL to be able to \nsatisfy the request.\n\nA few weeks ago I set default_statistics_target to 50 to try and \nnudge the planner into making better judgments, but apparently this \ndoesn't influence the planner in the desired way.\n\nShould I try upping that value even more? I took 50 because the \n'letter' column only has uppercase letters or digits (36 different \nvalues). 50 seemed a good value for reasonable estimates.\n\nSincerely,\n\nFrank\n\n", "msg_date": "Fri, 17 Aug 2007 17:59:30 +0200", "msg_from": "Frank Schoep <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad planner decision - bitmap scan instead of index" } ]
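
Pulling the thread's suggestions together as one hedged, session-local experiment against the movies table: the numbers are only examples, and turning bitmap scans off is meant for testing a single session, not for postgresql.conf.

    SET work_mem = 8192;              -- kB; the value Frank tested with
    SET enable_bitmapscan = off;      -- testing only; RESET enable_bitmapscan when done
    EXPLAIN ANALYZE
        SELECT * FROM movies
        WHERE letter = 'T'
        ORDER BY letter, name
        LIMIT 100 OFFSET 1900;

    -- Michael's statistics suggestion, applied to just the skewed column
    -- (100 is an illustrative target, not a recommendation):
    ALTER TABLE movies ALTER COLUMN letter SET STATISTICS 100;
    ANALYZE movies;

Better row estimates narrow the cost gap the planner sees between the two plans; they do not guarantee it flips to the index scan.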
[ { "msg_contents": "After reading many articles which indicate the more disk spindles the \nbetter performance and separating indexes, WAL and data on different \nsets of spindles, I've come up with a couple of questions.\n\nWe am planning to buy an external raid sub-system utilizing raid 10. The \nsub-system will consist of 12 73GB SAS drives total.\nBased on our data requirements we can set this system up using two \ndifferent configurations.\n\nFirst, we could have two raid sets, one with two drives mirrored for \nindexes and the other with four drives mirrored for data. Second, we \ncould configure as one raid set with six drives mirrored housing both \nindexes and data.\n\nOur environment consists of up to 10-20 users doing a variety of \nqueries. We have data entry, batch processing, customer lookups and \nad-hoc queries happening concurrently through out the day.\n\nAlmost all queries would be using indexes, so we were concerned about \nperformance of index lookups with only two spindles dedicated to indexes \n(using the first configuration). We thought it may be better to put data \nand indexes on one raid where index lookups and data retrieval would be \nspread across all six spindles.\n\nAny comments would be appreciated!\n\nSecond Question:\n\nWould there be any problems/concerns with putting WAL files on the \nserver in a raid 10 configuration separate from external raid sub-system?\n\nBest regards,\n\nDoug\n\n-- \n\nRobert D Oden\nDatabase Marketing Technologies, Inc\n951 Locust Hill Circle\nBelton MO 64012-1786\n\nPh: 816-318-8840\nFax: 816-318-8841\n\[email protected]\n\n\nThis email has been processed by SmoothZap - www.smoothwall.net\n\n", "msg_date": "Thu, 16 Aug 2007 11:32:24 -0500", "msg_from": "Robert D Oden <[email protected]>", "msg_from_op": true, "msg_subject": "Raid Configurations " }, { "msg_contents": "On 8/16/07, Robert D Oden <[email protected]> wrote:\n> After reading many articles which indicate the more disk spindles the\n> better performance and separating indexes, WAL and data on different\n> sets of spindles, I've come up with a couple of questions.\n>\n> We am planning to buy an external raid sub-system utilizing raid 10. The\n> sub-system will consist of 12 73GB SAS drives total.\n> Based on our data requirements we can set this system up using two\n> different configurations.\n>\n> First, we could have two raid sets, one with two drives mirrored for\n> indexes and the other with four drives mirrored for data. Second, we\n> could configure as one raid set with six drives mirrored housing both\n> indexes and data.\n>\n> Our environment consists of up to 10-20 users doing a variety of\n> queries. We have data entry, batch processing, customer lookups and\n> ad-hoc queries happening concurrently through out the day.\n>\n> Almost all queries would be using indexes, so we were concerned about\n> performance of index lookups with only two spindles dedicated to indexes\n> (using the first configuration). We thought it may be better to put data\n> and indexes on one raid where index lookups and data retrieval would be\n> spread across all six spindles.\n>\n> Any comments would be appreciated!\n>\n> Second Question:\n>\n> Would there be any problems/concerns with putting WAL files on the\n> server in a raid 10 configuration separate from external raid sub-system?\n\nThis question comes up a lot, and the answer is always 'it depends'\n:-). 
Separate WAL volume pays off the more writing is going on in\nyour database...it's literally a rolling log of block level changes to\nthe database files. If your database was 100% read, it would not help\nvery much at all. WAL traffic is mostly sequential I/O, but heavy.\n\nAs for splitting data and indexes, I am skeptical this is a good idea\nexcept in very specific cases and here's my reasoning...splitting the\ndevices that way doesn't increase the number of random I/O of the data\nsubsystem. Mostly I would be doing this if I was adding drives to the\narray but couldn't resize the array for some reason...so I look at\nthis as more of a storage management feature.\n\nSo, I'd be looking at a large raid 10 and 1-2 drives for the WAL...on\na raid 1. If your system supports two controllers (in either\nactive/active or active/passive), you should look at second controller\nas well.\n\nmerlin\n", "msg_date": "Sat, 18 Aug 2007 05:25:59 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid Configurations" }, { "msg_contents": "On Aug 17, 2007, at 6:55 PM, Merlin Moncure wrote:\n> So, I'd be looking at a large raid 10 and 1-2 drives for the WAL...on\n> a raid 1. If your system supports two controllers (in either\n> active/active or active/passive), you should look at second controller\n> as well.\n\nIf you only have one controller, and it can cache writes (it has a \nBBU), I'd actually lean towards putting all 12 drives into one raid \n10. A good controller will be able to handle WAL fsyncs plenty fast \nenough, so having a separate WAL mirror would likely hurt more than \nhelp.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Thu, 23 Aug 2007 12:13:43 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid Configurations" } ]
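
If the split-array route is taken after all, tablespaces are the SQL-level mechanism for separating indexes from data as discussed above. The mount points and the orders table below are placeholders; the directories must already exist and be owned by the postgres user. Relocating pg_xlog itself is handled at the filesystem level (conventionally by symlinking the directory), not through SQL.

    CREATE TABLESPACE data_space  LOCATION '/mnt/array_data/pgdata';
    CREATE TABLESPACE index_space LOCATION '/mnt/array_index/pgdata';

    -- Example placement; the column list is invented for illustration.
    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer,
        placed_at   timestamp
    ) TABLESPACE data_space;

    CREATE INDEX orders_customer_idx
        ON orders (customer_id) TABLESPACE index_space;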
[ { "msg_contents": "Hi,\nwe are using Postgres on both Solaris servers and Linux servers, and\nPostgres are much slower on Solaris servers. We have tested with different\nversions of Solaris and Postgres, but the fact remains: Postgres seems to be\nmuch faster on Linux server. Does anybody else has the same experience?\n\nBest regards,\nFredrik B\n\nHi,we are using Postgres on both Solaris servers and Linux servers, and Postgres are much slower on Solaris servers. We have tested with different versions of Solaris and Postgres, but the fact remains: Postgres seems to be much faster on Linux server. Does anybody else has the same experience?\nBest regards,Fredrik B", "msg_date": "Fri, 17 Aug 2007 08:50:00 +0200", "msg_from": "\"Fredrik Bertilsson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Solaris vs Linux" }, { "msg_contents": "Fredrik Bertilsson escribi�:\n> Hi,\n> we are using Postgres on both Solaris servers and Linux servers, and\n> Postgres are much slower on Solaris servers. We have tested with different\n> versions of Solaris and Postgres, but the fact remains: Postgres seems to be\n> much faster on Linux server. Does anybody else has the same experience?\n\nYou haven't specified where the slowness is. Is it that connection\nestablishing is slower? Are queries slower? Is the hardware\ncomparable? Are the filesystems configured similarly?\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n", "msg_date": "Fri, 17 Aug 2007 23:08:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Solaris vs Linux" }, { "msg_contents": "Fredrik Bertilsson wrote:\n> Hi,\n> we are using Postgres on both Solaris servers and Linux servers, and \n> Postgres are much slower on Solaris servers. We have tested with \n> different versions of Solaris and Postgres, but the fact remains: \n> Postgres seems to be much faster on Linux server. Does anybody else has \n> the same experience?\n> \n> Best regards,\n> Fredrik B\n\nI had some performance problems on Solaris a while ago which let to\nthis interesting thread:\n\nhttp://archives.postgresql.org/pgsql-performance/2006-04/thrd4.php#00035\n\nexecutive summary:\n - write cache might be (unexpectedly) off by default on sun gear\n - set explicitly \"wal_sync_method = fsync\"\n - some other settings (see thread)\n\nBye,\nChris.\n\n\n", "msg_date": "Sat, 18 Aug 2007 12:13:45 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Solaris vs Linux" }, { "msg_contents": "Hi Frederick,\n\nThere is an article about tunning the performance of PostgreSQL on \nSolaris at\nhttp://tweakers.net/reviews/649/9\n\nwhich is not quite exactly of what you wanted but it might help you.\n\nI think that Solaris has disk cache turned off by default which linux \ndoes not have\ndue to possible data loss in case of a power failure.\n\nRegards,\n\nJulo\n\nFredrik Bertilsson wrote:\n> Hi,\n> we are using Postgres on both Solaris servers and Linux servers, and \n> Postgres are much slower on Solaris servers. We have tested with \n> different versions of Solaris and Postgres, but the fact remains: \n> Postgres seems to be much faster on Linux server. 
Does anybody else \n> has the same experience?\n>\n> Best regards,\n> Fredrik B\n\n\n\n\n\n\nHi Frederick,\n\nThere is an article about tunning the performance of PostgreSQL on\nSolaris at\nhttp://tweakers.net/reviews/649/9\n\nwhich is not quite exactly of what you wanted but it might help you.\n\nI think that Solaris has disk cache turned off by default which linux\ndoes not have\ndue to possible data loss in case of a power failure.\n\nRegards,\n\nJulo\n\nFredrik Bertilsson wrote:\nHi,\nwe are using Postgres on both Solaris servers and Linux servers, and\nPostgres are much slower on Solaris servers. We have tested with\ndifferent versions of Solaris and Postgres, but the fact remains:\nPostgres seems to be much faster on Linux server. Does anybody else has\nthe same experience?\n \n\nBest regards,\nFredrik B", "msg_date": "Tue, 21 Aug 2007 17:49:13 +0200", "msg_from": "Julius Stroffek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Solaris vs Linux" }, { "msg_contents": "Is there any way to stop the autovacuum if it is running longer than 10\nmin or so? \n\n \n\nIs it good idea to kill autovacuum if it is running longer than\nexpected?\n\n \n\n \n\nIn my OLTP system, we are inserting, updating and deleting the data\nevery second. \n\n \n\nAutovacuum started and never finished slowing down the whole system.\n\n \n\n \n\nAny help?\n\n \n\nThanks\n\nRegards\n\nsachi\n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\nIs there any way to stop the\nautovacuum if it is running longer than 10 min or so? \n \nIs it good idea to kill autovacuum\nif it is running longer than expected?\n \n \nIn my OLTP system, we are inserting,\nupdating and deleting the data every second. \n \nAutovacuum started and never\nfinished slowing down the whole system.\n \n \nAny help?\n \nThanks\nRegards\nsachi", "msg_date": "Tue, 21 Aug 2007 17:30:09 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n>\n>\n> Is there any way to stop the autovacuum if it is running longer than 10 min\n> or so?\n>\n> Is it good idea to kill autovacuum if it is running longer than expected?\n>\n> In my OLTP system, we are inserting, updating and deleting the data every\n> second.\n>\n> Autovacuum started and never finished slowing down the whole system.\n\nIt's probably better to adjust the sleep parameters in postgresql.conf\nand then pg_ctl reload\n\nYou can kill the autovacuum backend, but it will just start up again\nwhen the sleep time has passed.\n", "msg_date": "Tue, 21 Aug 2007 16:47:18 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running forever" } ]
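
Two follow-ups in SQL form, since both sub-threads end in configuration advice. The GUC names are standard for the 8.1-era servers being discussed, but the numeric values are only illustrative starting points.

    SHOW wal_sync_method;                  -- the Solaris thread suggests trying fsync here
    SHOW autovacuum_vacuum_cost_delay;
    SHOW autovacuum_naptime;

    -- Rather than killing a long-running autovacuum, throttle it in postgresql.conf
    -- and reload, for example:
    --   autovacuum_vacuum_cost_delay = 20      # ms pause per cost batch
    --   autovacuum_vacuum_cost_limit = 200
    --   autovacuum_naptime = 60                # seconds between autovacuum checks

A killed worker simply starts again once the naptime has passed, as Scott notes, so throttling is the only change that actually sticks.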
[ { "msg_contents": "Hi,\n\nMaybe not completely the wright place to ask but... I have this schema\ndesign question (db is postgres of course). I have a couple of classes with\nattributes. The only goal is to search the object that I want to find (which\nis stored on the harddrive).\n\nI have hundreds of classes that are similar but not the same. They all have\nattributes/properties (type is probably String), e.g. (in pseudo code):\n\nclass A_version_1 {\n attribute1, attribute2, attribute3, ..., attributeN\n}\n\nclass A_version_2 {\n attribute1, attribute3, ..., attributeN, attributeN+1, attributeN+2\n}\n\nclass B_version_1 {\n attribute3, attribute4, attribute7, attributeN+3, ..., attributeN+M\n}\n\n\nClass A will have attributes from class B, class B will have attributes from\nclass C and so on. My initial thought was to use the (sometimes dreaded) EAV\nmodel: class_id, object_id, attribute_id and attribute_value. In this way I\ncan make queries like:\n\nSELECT CLASS_ID,\n OBJECT_ID\nFROM EAV_TABLE EAV\nWHERE EAV.ATTRIBUTE_ID = X\n AND EAV.ATTRIBUTE_VALUE = 'searchstring'\n AND EXISTS (SELECT OBJECT_ID\n FROM EAV_TABLE EAV2\n WHERE EAV.OBJECT_ID = EAV2.OBJECT_ID\n AND EAV.CLASS_ID = EAV2.CLASS_ID\n AND EAV2.ATTRIBUTE_ID = Y\n AND EAV2.ATTRIBUTE_VALUE = 'searchstring2')\n\nResults from this query could be entities from multiple classes!\n\nThe alternative is, as many people say: make a proper table for each class\nwhich would lead to hundreds of unions. Is that good/performant? I thought\nit would not... To put all attributes of all classes (as columns) in one\ntable is impossible. The number of total attributes should be in the\nthousands.\n\nA third alternative I came up with is the entity/value schema design where\neach attribute would have its own table. A query would look like this:\n\nSELECT CLASS_ID,\n OBJECT_ID\nFROM EV_X EAV\nWHERE EAV.ATTRIBUTE_VALUE = 'searchstring'\n AND EXISTS (SELECT OBJECT_ID\n FROM EV_Y EAV2\n WHERE EAV.OBJECT_ID = EAV2.OBJECT_ID\n AND EAV.CLASS_ID = EAV2.CLASS_ID\n AND EAV2.ATTRIBUTE_VALUE = 'searchstring2')\n\nWhich would be a nice way to partition the otherwise large table (but there\nwould be thousands of smaller tables).\n\nThe app I'm writing has to scale to about 1 billion attributes/value-pairs\nin total. A normal search query would imply about 5 search terms (but there\ncould be 20). Any suggestions/remarks (I think the EXISTS should be replaced\nby an IN, something else)? Did anyone implement such a search method (or did\nthey decide to make a different design)? Did it work/scale?\n\nThanks in advance,\n\nMark O.\n\nHi,\n\nMaybe not completely the wright place to ask but... I have this schema\ndesign question (db is postgres of course). I have a couple of classes with attributes. The\nonly goal is to search the object that I want to find (which is stored\non the harddrive). \n\nI have hundreds of classes that are similar but not the same. They all\nhave attributes/properties (type is probably String), e.g. (in pseudo\ncode):\n\nclass A_version_1 {\n   attribute1, attribute2, attribute3, ..., attributeN\n}\n\nclass A_version_2 {\n   attribute1, attribute3, ..., attributeN, attributeN+1, attributeN+2\n}\n\nclass B_version_1 {\n   attribute3, attribute4, attribute7, attributeN+3, ..., attributeN+M\n}\n\n\nClass A will have attributes from class B, class B will have attributes\nfrom class C and so on. My initial thought was to use the (sometimes\ndreaded) EAV model: class_id, object_id, attribute_id and\nattribute_value. 
In this way I can make queries like:\n\nSELECT CLASS_ID, OBJECT_IDFROM EAV_TABLE EAVWHERE EAV.ATTRIBUTE_ID = X AND EAV.ATTRIBUTE_VALUE = 'searchstring' AND EXISTS (SELECT OBJECT_ID FROM EAV_TABLE EAV2\n WHERE EAV.OBJECT_ID = EAV2.OBJECT_ID AND EAV.CLASS_ID = EAV2.CLASS_ID AND EAV2.ATTRIBUTE_ID = Y AND EAV2.ATTRIBUTE_VALUE\n = 'searchstring2')\nResults from this query could be entities from multiple classes! \n\nThe alternative is, as many people say: make a proper table for each\nclass which would lead to hundreds of unions. Is that good/performant? I thought it would not... \nTo put all attributes of all classes (as columns) in one table is\nimpossible. The number of total attributes  should be in the thousands. \n\nA third alternative I came up with is the entity/value schema design where each\nattribute would have its own table. A query would look like this:\n\nSELECT CLASS_ID, OBJECT_IDFROM EV_X EAVWHERE EAV.ATTRIBUTE_VALUE = 'searchstring' AND EXISTS (SELECT OBJECT_ID FROM EV_Y EAV2 WHERE \nEAV.OBJECT_ID = EAV2.OBJECT_ID AND EAV.CLASS_ID = EAV2.CLASS_ID AND EAV2.ATTRIBUTE_VALUE = 'searchstring2')Which would be a nice way to partition the otherwise large table (but there would be thousands of smaller tables). \nThe app I'm writing has to scale to about 1 billion attributes/value-pairs in total. A normal search query would imply about 5 search terms (but there could be 20). Any suggestions/remarks (I think the EXISTS should be replaced by an IN, something else)? Did anyone implement such a search method (or did they decide to make a different design)? Did it work/scale?\nThanks in advance,Mark O.", "msg_date": "Sun, 19 Aug 2007 15:19:52 +0200", "msg_from": "\"mark overmeer\" <[email protected]>", "msg_from_op": true, "msg_subject": "schema design question" }, { "msg_contents": "\n> Maybe not completely the wright place to ask but... I have this schema\n> design question (db is postgres of course). I have a couple of classes\n> with attributes. The only goal is to search the object that I want to\n> find (which is stored on the harddrive). \n> I have hundreds of classes that are similar but not the same. They all\n> have attributes/properties (type is probably String), e.g. (in pseudo\n> code):\n\nUse table inheritance.\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Sun, 19 Aug 2007 10:24:14 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "Hi Adam,\n\nThanks for the fast reply. What should inherit from what? Class A (e.g.\n'todo item') is certainly not derived from property X (e.g. 'startdate').\nClass A version 2 has different properties (some are removed, others are\nadded). Can you elaborate / say I'm wrong / give an example ? Thanks,\n\nMark\n\n\n2007/8/19, Adam Tauno Williams <[email protected]>:\n>\n>\n> > Maybe not completely the wright place to ask but... I have this schema\n> > design question (db is postgres of course). I have a couple of classes\n> > with attributes. The only goal is to search the object that I want to\n> > find (which is stored on the harddrive).\n> > I have hundreds of classes that are similar but not the same. They all\n> > have attributes/properties (type is probably String), e.g. 
(in pseudo\n> > code):\n>\n> Use table inheritance.\n>\n> --\n> Adam Tauno Williams, Network & Systems Administrator\n> Consultant - http://www.whitemiceconsulting.com\n> Developer - http://www.opengroupware.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nHi Adam,\nThanks for the fast reply. What should inherit from what? Class A (e.g. 'todo item') is certainly\nnot derived from property X (e.g. 'startdate'). Class A version 2 has different properties (some are removed, others are added). Can you elaborate / say\nI'm wrong / give an example ? Thanks,\nMark\n2007/8/19, Adam Tauno Williams <[email protected]>:\n> Maybe not completely the wright place to ask but... I have this schema> design question (db is postgres of course). I have a couple of classes> with attributes. The only goal is to search the object that I want to\n> find (which is stored on the harddrive).> I have hundreds of classes that are similar but not the same. They all> have attributes/properties (type is probably String), e.g. (in pseudo> code):\nUse table inheritance.--Adam Tauno Williams, Network & Systems AdministratorConsultant - http://www.whitemiceconsulting.comDeveloper - \nhttp://www.opengroupware.org---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings", "msg_date": "Sun, 19 Aug 2007 17:23:14 +0200", "msg_from": "\"mark overmeer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: schema design question" }, { "msg_contents": "On Sun, Aug 19, 2007 at 03:19:52PM +0200, mark overmeer wrote:\n> Hi,\n> \n> Maybe not completely the wright place to ask but... I have this\n> schema design question (db is postgres of course). I have a couple\n> of classes with attributes.\n\nDanger, Will Robinson! Danger!\n\nThe DBMS way of looking at things is fundamentally different from OO\ncoding, and if you try to make them fit together na�vely as you do\nbelow, you only get grief.\n\n> The only goal is to search the object\n> that I want to find (which is stored on the harddrive).\n> \n> I have hundreds of classes that are similar but not the same. They all have\n> attributes/properties (type is probably String), e.g. (in pseudo code):\n> \n> class A_version_1 {\n> attribute1, attribute2, attribute3, ..., attributeN\n> }\n> \n> class A_version_2 {\n> attribute1, attribute3, ..., attributeN, attributeN+1, attributeN+2\n> }\n> \n> class B_version_1 {\n> attribute3, attribute4, attribute7, attributeN+3, ..., attributeN+M\n> }\n> \n> \n> Class A will have attributes from class B, class B will have\n> attributes from class C and so on. My initial thought was to use the\n> (sometimes dreaded) EAV model: class_id, object_id, attribute_id and\n> attribute_value. In this way I can make queries like:\n> \n> SELECT CLASS_ID,\n> OBJECT_ID\n> FROM EAV_TABLE EAV\n\nThere's your mistake. EAV is not performant, and won't become so.\n\nDecide what your database will and won't do, and design your schema\naround that. 
I know it takes a little extra helping of courage, but\nit's worth it in the long run.\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nphone: +1 415 235 3778 AIM: dfetter666\n Skype: davidfetter\n\nRemember to vote!\nConsider donating to PostgreSQL: http://www.postgresql.org/about/donate\n", "msg_date": "Sun, 19 Aug 2007 11:12:16 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "On Sun, Aug 19, 2007 at 11:12:16AM -0700, David Fetter wrote:\n> There's your mistake. EAV is not performant, and won't become so.\n\nIt sort of depends. I put all the EXIF information for my image gallery into\nan EAV table -- it was the most logical format at the time, although I'm not\nsure I need all the information. Anyhow, with clustering and indexes,\nPostgres zips through the five million records easily enough for my use -- at\nleast fast enough that I can live with it without feeling the need for a\nredesign.\n\nAs a general database design paradigm, though, I fully agree with you.\nDatabases are databases, not glorified OO data stores or hash tables.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 19 Aug 2007 20:26:58 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "On Sun, Aug 19, 2007 at 08:26:58PM +0200, Steinar H. Gunderson wrote:\n> On Sun, Aug 19, 2007 at 11:12:16AM -0700, David Fetter wrote:\n> > There's your mistake. EAV is not performant, and won't become so.\n> \n> It sort of depends. I put all the EXIF information for my image\n> gallery into an EAV table -- it was the most logical format at the\n> time, although I'm not sure I need all the information. Anyhow, with\n> clustering and indexes, Postgres zips through the five million\n> records easily enough for my use -- at least fast enough that I can\n> live with it without feeling the need for a redesign.\n\nUnless your records are huge, that's a tiny database, where tiny is\ndefined to mean that the whole thing fits in main memory with plenty\nof room to spare. I guarantee that performance will crash right\nthrough the floor as soon as any table no longer fits in main memory.\n\n> As a general database design paradigm, though, I fully agree with\n> you. Databases are databases, not glorified OO data stores or hash\n> tables.\n\nExactly :)\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nphone: +1 415 235 3778 AIM: dfetter666\n Skype: davidfetter\n\nRemember to vote!\nConsider donating to PostgreSQL: http://www.postgresql.org/about/donate\n", "msg_date": "Sun, 19 Aug 2007 11:41:15 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "Hi,\n\n2007/8/19, Steinar H. Gunderson <[email protected]>:\n>\n> As a general database design paradigm, though, I fully agree with you.\n> Databases are databases, not glorified OO data stores or hash tables.\n\nI don't want to use it as an OO data store, I use the filesystem for that.\nThe intended use is to search for the right object. 
Since it has separate\ndata structures for searching (indexes) I guess that is one of its\nfunctions.\n\nHowever, it still doesn't answer my question about the EV model (where each\nattribute is given its own table).\n\nMark\n\n/* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n\nHi,2007/8/19, Steinar H. Gunderson <[email protected]>:\nAs a general database design paradigm, though, I fully agree with you.Databases are databases, not glorified OO data stores or hash tables.I don't want to use it as an OO data store, I use the filesystem for that. The intended use is to search for the right object. Since it has separate data structures for searching (indexes) I guess that is one of its functions. \nHowever, it still doesn't answer my question about the EV model (where each attribute is given its own table). Mark\n/* Steinar */--Homepage: http://www.sesse.net/", "msg_date": "Sun, 19 Aug 2007 22:13:08 +0200", "msg_from": "\"mark overmeer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: schema design question" }, { "msg_contents": "On Sun, Aug 19, 2007 at 10:13:08PM +0200, mark overmeer wrote:\n> Hi,\n> \n> 2007/8/19, Steinar H. Gunderson <[email protected]>:\n> >\n> > As a general database design paradigm, though, I fully agree with\n> > you. Databases are databases, not glorified OO data stores or\n> > hash tables.\n> \n> I don't want to use it as an OO data store, I use the filesystem for\n> that. The intended use is to search for the right object. Since it\n> has separate data structures for searching (indexes) I guess that is\n> one of its functions.\n> \n> However, it still doesn't answer my question about the EV model\n> (where each attribute is given its own table).\n\nThe answer to EAV modeling, is, \"DON'T!\"\n\nCheers,\nDavid (who, if he were greedy, would be encouraging EAV modeling\nbecause it would cause guaranteed large consulting income later)\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nphone: +1 415 235 3778 AIM: dfetter666\n Skype: davidfetter\n\nRemember to vote!\nConsider donating to PostgreSQL: http://www.postgresql.org/about/donate\n", "msg_date": "Sun, 19 Aug 2007 13:23:34 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "On Sun, Aug 19, 2007 at 11:41:15AM -0700, David Fetter wrote:\n> Unless your records are huge, that's a tiny database, where tiny is\n> defined to mean that the whole thing fits in main memory with plenty\n> of room to spare. I guarantee that performance will crash right\n> through the floor as soon as any table no longer fits in main memory.\n\nSure, it fits into memory; however, it isn't used so often, though, so it's\nfrequently not in the cache when it's needed. You are completely right in\nthat it's much slower from disk than from RAM :-)\n\nThe question is, of course, how to best store something like the EXIF\ninformation _without_ using EAV. I could separate out the few fields I\nnormally use into a horizontal (ie. standard relational) table, but it seems\nsort of... lossy? Another possible approach is to keep the EAV table around\nfor completeness in addition to the few fields I need, but then you do of\ncourse get into normalization issues.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 19 Aug 2007 22:42:34 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" }, { "msg_contents": "\n\n> However, it still doesn't answer my question about the EV model (where\n> each attribute is given its own table).\n\nDo a TABLE(object_id INT, attribute STRING, value STRING) if you just\nwant to be able to search for objects by an attribute. But better yet\nlook at one of the thousand object persistence systems out there, not\nmuch to be gained from re-inventing the wheel. \n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Sun, 19 Aug 2007 20:22:07 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schema design question" } ]
[ { "msg_contents": "Hi,\nthe company I'm doing work for is expecting a 20 times increase in \ndata and seeks a 10 times increase in performance. Having pushed our \ndatabase server to the limit daily for the past few months we have \ndecided we'd prefer to be database users rather than database server \nadmins. :-)\n\nAre you or can you recommend a database hosting company that is good \nfor clients that require more power than what a single database \nserver can offer?\n\nCheers\n\n Nik\n", "msg_date": "Sun, 19 Aug 2007 16:01:55 +0200", "msg_from": "Niklas Saers <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for database hosting" }, { "msg_contents": "Nik, you may be underestimating just how much performance can be obtained\nfrom a single database server. For example, an IBM p595 server connected to\nan array of ds8300 storage devices could reasonably be expected to provide\nseveral orders of magnitude more performance when compared to commodity\nhardware. In commodity space (albeit, just barely), a 16 core opteron\nrunning (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\nprovisioned SAN should also enormously outperform a beige-box solution, and\nat a fraction of the cost. If it's performance you care about then the\npgsql-performance list (which I have cc'd) is the place to talk about it.\n\nI realize this doesn't address your desire to get out of database server\nadministration. I am not aware of any company which provides database\nhosting, further I'm not entirely convinced that's a viable business\nsolution. The technical issues (security, latency and reliability are the\nones that immediately come to mind) associated with a hosted database server\nsolution suggest to me that this would not be economically viable. The\nbusiness issues around out-sourcing a critical, if not central component of\nyour architecture seem, at least to me, to be insurmountable.\n\nAndrew\n\n\nOn 8/19/07, Niklas Saers <[email protected]> wrote:\n>\n> Hi,\n> the company I'm doing work for is expecting a 20 times increase in\n> data and seeks a 10 times increase in performance. Having pushed our\n> database server to the limit daily for the past few months we have\n> decided we'd prefer to be database users rather than database server\n> admins. :-)\n>\n> Are you or can you recommend a database hosting company that is good\n> for clients that require more power than what a single database\n> server can offer?\n>\n> Cheers\n>\n> Nik\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nNik, you may be underestimating just how much performance can be obtained from a single database server. For example, an IBM p595 server connected to an array of ds8300 storage devices could reasonably be expected to provide several orders of magnitude more performance when compared to commodity hardware. In commodity space (albeit, just barely), a 16 core opteron running (the admittedly yet-to-be-released) FreeBSD 7, and a suitably provisioned SAN should also enormously outperform a beige-box solution, and at a fraction of the cost. If it's performance you care about then the pgsql-performance list (which I have cc'd) is the place to talk about it.\nI realize this doesn't address your desire to get out of database server administration. 
I am not aware of any company which provides database hosting, further I'm not entirely convinced that's a viable business solution. The technical issues (security, latency and reliability are the ones that immediately come to mind) associated with a hosted database server solution suggest to me that this would not be economically viable. The business issues around out-sourcing a critical, if not central component of your architecture seem, at least to me, to be insurmountable.\nAndrewOn 8/19/07, Niklas Saers <[email protected]> wrote:\nHi,the company I'm doing work for is expecting a 20 times increase indata and seeks a 10 times increase in performance. Having pushed ourdatabase server to the limit daily for the past few months we have\ndecided we'd prefer to be database users rather than database serveradmins. :-)Are you or can you recommend a database hosting company that is goodfor clients that require more power than what a single database\nserver can offer?Cheers    Nik---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not\n       match", "msg_date": "Sun, 19 Aug 2007 12:48:10 -0700", "msg_from": "\"Andrew Hammond\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for database hosting" }, { "msg_contents": "Hello,\n\nJust to note something interesting on database scalability: i'm not sure\nwhether your database is used for processing or just data lookup, but if\nit's used for data lookup, look into memcached -- it's a really scalable\ncaching system which can reduce your database load a lot.\n\nI know a lot of large websites (slashdot, livejournal, etc) use this\nsolution -- they have dozens of gigabytes worth of memcached processes to\nreduce the cache hits (I'm told livejournal has around 200 of those servers\nrunning, making sure around 99.99% of the database queries are just cache\nhits). This probably has been discussed on this list before, but just in\ncase: look into it.\n\nRegards,\n\nLeon Mergen\n\n\nOn 8/19/07, Andrew Hammond <[email protected]> wrote:\n>\n> Nik, you may be underestimating just how much performance can be obtained\n> from a single database server. For example, an IBM p595 server connected to\n> an array of ds8300 storage devices could reasonably be expected to provide\n> several orders of magnitude more performance when compared to commodity\n> hardware. In commodity space (albeit, just barely), a 16 core opteron\n> running (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\n> provisioned SAN should also enormously outperform a beige-box solution, and\n> at a fraction of the cost. If it's performance you care about then the\n> pgsql-performance list (which I have cc'd) is the place to talk about it.\n>\n> I realize this doesn't address your desire to get out of database server\n> administration. I am not aware of any company which provides database\n> hosting, further I'm not entirely convinced that's a viable business\n> solution. The technical issues (security, latency and reliability are the\n> ones that immediately come to mind) associated with a hosted database server\n> solution suggest to me that this would not be economically viable. 
The\n> business issues around out-sourcing a critical, if not central component of\n> your architecture seem, at least to me, to be insurmountable.\n>\n> Andrew\n>\n>\n> On 8/19/07, Niklas Saers <[email protected]> wrote:\n> >\n> > Hi,\n> > the company I'm doing work for is expecting a 20 times increase in\n> > data and seeks a 10 times increase in performance. Having pushed our\n> > database server to the limit daily for the past few months we have\n> > decided we'd prefer to be database users rather than database server\n> > admins. :-)\n> >\n> > Are you or can you recommend a database hosting company that is good\n> > for clients that require more power than what a single database\n> > server can offer?\n> >\n> > Cheers\n> >\n> > Nik\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n>\n>\n\n\n-- \nLeon Mergen\nhttp://www.solatis.com\n\nHello,Just to note something interesting on database scalability: i'm not sure whether your database is used for processing or just data lookup, but if it's used for data lookup, look into memcached -- it's a really scalable caching system which can reduce your database load a lot. \nI know a lot of large websites (slashdot, livejournal, etc) use this solution -- they have dozens of gigabytes worth of memcached processes to reduce the cache hits (I'm told livejournal has around 200 of those servers running, making sure around \n99.99% of the database queries are just cache hits). This probably has been discussed on this list before, but just in case: look into it.Regards,Leon MergenOn 8/19/07, \nAndrew Hammond <[email protected]> wrote:\nNik, you may be underestimating just how much performance can be obtained from a single database server. For example, an IBM p595 server connected to an array of ds8300 storage devices could reasonably be expected to provide several orders of magnitude more performance when compared to commodity hardware. In commodity space (albeit, just barely), a 16 core opteron running (the admittedly yet-to-be-released) FreeBSD 7, and a suitably provisioned SAN should also enormously outperform a beige-box solution, and at a fraction of the cost. If it's performance you care about then the pgsql-performance list (which I have cc'd) is the place to talk about it. \nI realize this doesn't address your desire to get out of database server administration. I am not aware of any company which provides database hosting, further I'm not entirely convinced that's a viable business solution. The technical issues (security, latency and reliability are the ones that immediately come to mind) associated with a hosted database server solution suggest to me that this would not be economically viable. The business issues around out-sourcing a critical, if not central component of your architecture seem, at least to me, to be insurmountable. \nAndrewOn 8/19/07, Niklas Saers <\[email protected]> wrote:\n Hi,the company I'm doing work for is expecting a 20 times increase indata and seeks a 10 times increase in performance. Having pushed ourdatabase server to the limit daily for the past few months we have\n decided we'd prefer to be database users rather than database serveradmins. 
:-)Are you or can you recommend a database hosting company that is goodfor clients that require more power than what a single database \nserver can offer?Cheers    Nik---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not \n       match-- Leon Mergenhttp://www.solatis.com", "msg_date": "Sun, 19 Aug 2007 23:49:02 +0200", "msg_from": "\"Leon Mergen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Looking for database hosting" }, { "msg_contents": "Folks,\n\nPlease remove pgsql-jobs from your CC list with this thread. That list is \nONLY for employment ads. Thank you.\n\n> Nik, you may be underestimating just how much performance can be\n> obtained from a single database server ...\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Sun, 19 Aug 2007 14:51:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for database hosting: FIX CC LIST!!" } ]
[ { "msg_contents": "Andrew,\n\nI'd say that commodity systems are the fastest with postgres - many have seen big slowdowns with high end servers. 'Several orders of magnitude' is not possible by just changing the HW, you've got a SW problem to solve first. We have done 100+ times faster than both Postgres and popular (even gridded) commercial DBMS using an intrinsically parallel SW approach.\n\nIf the objective is OLAP / DSS there's no substitute for a parallel DB that does query and load / transform using all the CPUs and IO channels simultaneously. This role is best met from a value standpoint by clustering commodity systems.\n\nFor OLTP, we need better SMP and DML algorithmic optimizations for concurrency, at which point big SMP machines work. Right now you can buy a 32 CPU commodity (opteron) machine from SUN (X4600) for about $60K loaded.\n\nWRT hosting, we've done a bit of it on GPDB systems, but we're not making it a focus area. Instead, we do subscription pricing by the amount of data used and recommend / help get systems set up.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tAndrew Hammond [mailto:[email protected]]\nSent:\tSunday, August 19, 2007 03:49 PM Eastern Standard Time\nTo:\tNiklas Saers\nCc:\[email protected]; [email protected]\nSubject:\tRe: [PERFORM] [pgsql-jobs] Looking for database hosting\n\nNik, you may be underestimating just how much performance can be obtained\nfrom a single database server. For example, an IBM p595 server connected to\nan array of ds8300 storage devices could reasonably be expected to provide\nseveral orders of magnitude more performance when compared to commodity\nhardware. In commodity space (albeit, just barely), a 16 core opteron\nrunning (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\nprovisioned SAN should also enormously outperform a beige-box solution, and\nat a fraction of the cost. If it's performance you care about then the\npgsql-performance list (which I have cc'd) is the place to talk about it.\n\nI realize this doesn't address your desire to get out of database server\nadministration. I am not aware of any company which provides database\nhosting, further I'm not entirely convinced that's a viable business\nsolution. The technical issues (security, latency and reliability are the\nones that immediately come to mind) associated with a hosted database server\nsolution suggest to me that this would not be economically viable. The\nbusiness issues around out-sourcing a critical, if not central component of\nyour architecture seem, at least to me, to be insurmountable.\n\nAndrew\n\n\nOn 8/19/07, Niklas Saers <[email protected]> wrote:\n>\n> Hi,\n> the company I'm doing work for is expecting a 20 times increase in\n> data and seeks a 10 times increase in performance. Having pushed our\n> database server to the limit daily for the past few months we have\n> decided we'd prefer to be database users rather than database server\n> admins. 
:-)\n>\n> Are you or can you recommend a database hosting company that is good\n> for clients that require more power than what a single database\n> server can offer?\n>\n> Cheers\n>\n> Nik\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\nRe: [PERFORM] [pgsql-jobs] Looking for database hosting\n\n\n\nAndrew,\n\nI'd say that commodity systems are the fastest with postgres - many have seen big slowdowns with high end servers.  'Several orders of magnitude' is not possible by just changing the HW, you've got a SW problem to solve first.  We have done 100+ times faster than both Postgres and popular (even gridded) commercial DBMS using an intrinsically parallel SW approach.\n\nIf the objective is OLAP / DSS there's no substitute for a parallel DB that does query and load / transform using all the CPUs and IO channels simultaneously.  This role is best met from a value standpoint by clustering commodity systems.\n\nFor OLTP, we need better SMP and DML algorithmic optimizations for concurrency, at which point big SMP machines work.  Right now you can buy a 32 CPU commodity (opteron) machine from SUN (X4600) for about $60K loaded.\n\nWRT hosting, we've done a bit of it on GPDB systems, but we're not making it a focus area.  Instead, we do subscription pricing by the amount of data used and recommend / help get systems set up.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Andrew Hammond [mailto:[email protected]]\nSent:   Sunday, August 19, 2007 03:49 PM Eastern Standard Time\nTo:     Niklas Saers\nCc:     [email protected]; [email protected]\nSubject:        Re: [PERFORM] [pgsql-jobs] Looking for database hosting\n\nNik, you may be underestimating just how much performance can be obtained\nfrom a single database server. For example, an IBM p595 server connected to\nan array of ds8300 storage devices could reasonably be expected to provide\nseveral orders of magnitude more performance when compared to commodity\nhardware. In commodity space (albeit, just barely), a 16 core opteron\nrunning (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\nprovisioned SAN should also enormously outperform a beige-box solution, and\nat a fraction of the cost. If it's performance you care about then the\npgsql-performance list (which I have cc'd) is the place to talk about it.\n\nI realize this doesn't address your desire to get out of database server\nadministration. I am not aware of any company which provides database\nhosting, further I'm not entirely convinced that's a viable business\nsolution. The technical issues (security, latency and reliability are the\nones that immediately come to mind) associated with a hosted database server\nsolution suggest to me that this would not be economically viable. The\nbusiness issues around out-sourcing a critical, if not central component of\nyour architecture seem, at least to me, to be insurmountable.\n\nAndrew\n\n\nOn 8/19/07, Niklas Saers <[email protected]> wrote:\n>\n> Hi,\n> the company I'm doing work for is expecting a 20 times increase in\n> data and seeks a 10 times increase in performance. Having pushed our\n> database server to the limit daily for the past few months we have\n> decided we'd prefer to be database users rather than database server\n> admins. 
:-)\n>\n> Are you or can you recommend a database hosting company that is good\n> for clients that require more power than what a single database\n> server can offer?\n>\n> Cheers\n>\n>     Nik\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>        choose an index scan if your joining column's datatypes do not\n>        match\n>", "msg_date": "Sun, 19 Aug 2007 16:17:04 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Looking for database hosting" }, { "msg_contents": "On 8/19/07, Luke Lonergan <[email protected]> wrote:\n>\n> Andrew,\n>\n> I'd say that commodity systems are the fastest with postgres - many have\n> seen big slowdowns with high end servers. 'Several orders of magnitude' is\n> not possible by just changing the HW,\n>\n\nGoing from one or two SATA disks to a SAN farm ought to achieve orders of\nmagnitude in improvement. And cost. Going from 2GB of memory up to 16 or\n32GB can make significant changes as well. However I agree with you that\nintelligence at the application layer such that you can take advantage of a\nparallel approach is a superior solution both in terms of overall\neffectiveness and cost effectiveness.\n\nyou've got a SW problem to solve first. We have done 100+ times faster than\n> both Postgres and popular (even gridded) commercial DBMS using an\n> intrinsically parallel SW approach.\n>\n\nThat is both cool and unsurprising at the same time. One of the major\nchallenges I've seen in practice is that small companies don't generally\nstart off with a db design that's capable of a parallel approach. With\nsuccess and growth, there comes a point where a massive re-design is needed.\nCompanies that recognize this, make the investment and take the risk are\nrare.\n\nIf the objective is OLAP / DSS there's no substitute for a parallel DB that\n> does query and load / transform using all the CPUs and IO channels\n> simultaneously. This role is best met from a value standpoint by clustering\n> commodity systems.\n>\n> For OLTP, we need better SMP and DML algorithmic optimizations for\n> concurrency, at which point big SMP machines work. Right now you can buy a\n> 32 CPU commodity (opteron) machine from SUN (X4600) for about $60K loaded.\n>\n\n\nWRT hosting, we've done a bit of it on GPDB systems, but we're not making it\n> a focus area. Instead, we do subscription pricing by the amount of data\n> used and recommend / help get systems set up.\n>\n> - Luke\n>\n> Msg is shrt cuz m on ma treo\n>\n>\n> -----Original Message-----\n> From: Andrew Hammond [mailto:[email protected]<[email protected]>\n> ]\n> Sent: Sunday, August 19, 2007 03:49 PM Eastern Standard Time\n> To: Niklas Saers\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] [pgsql-jobs] Looking for database hosting\n>\n> Nik, you may be underestimating just how much performance can be obtained\n> from a single database server. For example, an IBM p595 server connected\n> to\n> an array of ds8300 storage devices could reasonably be expected to provide\n> several orders of magnitude more performance when compared to commodity\n> hardware. In commodity space (albeit, just barely), a 16 core opteron\n> running (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\n> provisioned SAN should also enormously outperform a beige-box solution,\n> and\n> at a fraction of the cost. 
If it's performance you care about then the\n> pgsql-performance list (which I have cc'd) is the place to talk about it.\n>\n> I realize this doesn't address your desire to get out of database server\n> administration. I am not aware of any company which provides database\n> hosting, further I'm not entirely convinced that's a viable business\n> solution. The technical issues (security, latency and reliability are the\n> ones that immediately come to mind) associated with a hosted database\n> server\n> solution suggest to me that this would not be economically viable. The\n> business issues around out-sourcing a critical, if not central component\n> of\n> your architecture seem, at least to me, to be insurmountable.\n>\n> Andrew\n>\n>\n> On 8/19/07, Niklas Saers <[email protected]> wrote:\n> >\n> > Hi,\n> > the company I'm doing work for is expecting a 20 times increase in\n> > data and seeks a 10 times increase in performance. Having pushed our\n> > database server to the limit daily for the past few months we have\n> > decided we'd prefer to be database users rather than database server\n> > admins. :-)\n> >\n> > Are you or can you recommend a database hosting company that is good\n> > for clients that require more power than what a single database\n> > server can offer?\n> >\n> > Cheers\n> >\n> > Nik\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n>\n>\n\nOn 8/19/07, Luke Lonergan <[email protected]> wrote:\n\nAndrew,\n\nI'd say that commodity systems are the fastest with postgres - many have seen big slowdowns with high end servers.  'Several orders of magnitude' is not possible by just changing the HW,\nGoing from one or two SATA disks to a SAN farm ought to achieve orders of magnitude in improvement. And cost. Going from 2GB of memory up to 16 or 32GB can make significant changes as well. However I agree with you that intelligence at the application layer such that you can take advantage of a parallel approach is a superior solution both in terms of overall effectiveness and cost effectiveness.\n you've got a SW problem to solve first.  We have done 100+ times faster than both Postgres and popular (even gridded) commercial DBMS using an intrinsically parallel SW approach.\nThat is both cool and unsurprising at the same time. One of the major challenges I've seen in practice is that small companies don't generally start off with a db design that's capable of a parallel approach. With success and growth, there comes a point where a massive re-design is needed. Companies that recognize this, make the investment and take the risk are rare.\n\nIf the objective is OLAP / DSS there's no substitute for a parallel DB that does query and load / transform using all the CPUs and IO channels simultaneously.  This role is best met from a value standpoint by clustering commodity systems.\n\n\nFor OLTP, we need better SMP and DML algorithmic optimizations for concurrency, at which point big SMP machines work.  Right now you can buy a 32 CPU commodity (opteron) machine from SUN (X4600) for about $60K loaded.\n\nWRT hosting, we've done a bit of it on GPDB systems, but we're not making it a focus area.  
Instead, we do subscription pricing by the amount of data used and recommend / help get systems set up.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Andrew Hammond [mailto:[email protected]]\nSent:   Sunday, August 19, 2007 03:49 PM Eastern Standard Time\nTo:     Niklas Saers\nCc:     [email protected]; \[email protected]\nSubject:        Re: [PERFORM] [pgsql-jobs] Looking for database hosting\n\nNik, you may be underestimating just how much performance can be obtained\nfrom a single database server. For example, an IBM p595 server connected to\nan array of ds8300 storage devices could reasonably be expected to provide\nseveral orders of magnitude more performance when compared to commodity\nhardware. In commodity space (albeit, just barely), a 16 core opteron\nrunning (the admittedly yet-to-be-released) FreeBSD 7, and a suitably\nprovisioned SAN should also enormously outperform a beige-box solution, and\nat a fraction of the cost. If it's performance you care about then the\npgsql-performance list (which I have cc'd) is the place to talk about it.\n\nI realize this doesn't address your desire to get out of database server\nadministration. I am not aware of any company which provides database\nhosting, further I'm not entirely convinced that's a viable business\nsolution. The technical issues (security, latency and reliability are the\nones that immediately come to mind) associated with a hosted database server\nsolution suggest to me that this would not be economically viable. The\nbusiness issues around out-sourcing a critical, if not central component of\nyour architecture seem, at least to me, to be insurmountable.\n\nAndrew\n\n\nOn 8/19/07, Niklas Saers <[email protected]> wrote:\n>\n> Hi,\n> the company I'm doing work for is expecting a 20 times increase in\n> data and seeks a 10 times increase in performance. Having pushed our\n> database server to the limit daily for the past few months we have\n> decided we'd prefer to be database users rather than database server\n> admins. :-)\n>\n> Are you or can you recommend a database hosting company that is good\n> for clients that require more power than what a single database\n> server can offer?\n>\n> Cheers\n>\n>     Nik\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>        choose an index scan if your joining column's datatypes do not\n>        match\n>", "msg_date": "Sun, 19 Aug 2007 17:37:07 -0700", "msg_from": "\"Andrew Hammond\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-jobs] Looking for database hosting" } ]
[ { "msg_contents": "Terminology Question:\n\nIf I use the following statement:\n\n \n\nI am backing up schema XYZ every 30 minutes.\n\n \n\nDoes this statement imply that I am only backing up the definition of\nthe data? Or does it mean that I am backing up the definition of the\ndata and the data within the schema object?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTerminology Question:\nIf I use the following statement:\n \nI am backing up schema XYZ every 30 minutes.\n \nDoes this statement imply that I am only backing up the definition\nof the data?  Or does it mean that I am backing up the definition of the data\nand the data within the schema object?\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Mon, 20 Aug 2007 09:36:00 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Terminology Question" }, { "msg_contents": "On 8/20/07, Campbell, Lance <[email protected]> wrote:\n>\n> Terminology Question:\n>\n> If I use the following statement:\n>\n> I am backing up schema XYZ every 30 minutes.\n>\n> Does this statement imply that I am only backing up the definition of the\n> data? Or does it mean that I am backing up the definition of the data and\n> the data within the schema object?\n\nIn db parlance, schema means two different things really.\n\nOne is that layout of your data (how tables are related etc...)\nThe other is the namespace that a set of objects can live in. i.e.\nThere is a certain amount of overlap here as well.\n\ndbname.schemaname.objectname.fieldname\n\nIn this instance, \"backing up\" a schema pretty much implies the\nnamespace version of schema.\n\nOTOH, a phrase like \"I think your schema has some design flaws pretty\nobviously points to the first definition relating to the layout of\nyour data.\n", "msg_date": "Mon, 20 Aug 2007 09:50:18 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terminology Question" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nCampbell, Lance wrote:\n> Terminology Question:\n> \n> If I use the following statement:\n> \n> \n> \n> I am backing up schema XYZ every 30 minutes.\n> \n> \n> \n> Does this statement imply that I am only backing up the definition of\n> the data? Or does it mean that I am backing up the definition of the\n> data and the data within the schema object?\n\nI read that as you are backing up the schema and all objects + data\nwithin the schema XYZ every 30 minutes.\n\nJoshua D. Drake\n\n\n> \n> \n> \n> Thanks,\n> \n> \n> \n> Lance Campbell\n> \n> Project Manager/Software Architect\n> \n> Web Services at Public Affairs\n> \n> University of Illinois\n> \n> 217.333.0382\n> \n> http://webservices.uiuc.edu\n> \n> \n> \n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGydWDATb/zqfZUUQRAjYBAJ0djuYjUDTSQXn2Crg5eEOgVZTDVACcC1M7\ngoKAhBqQWByziN5mVULUYo8=\n=PZ/+\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 20 Aug 2007 10:55:15 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terminology Question" } ]
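The two readings of "backing up schema XYZ" discussed in the thread above map onto different pg_dump switches; a minimal sketch, assuming a database named mydb and a schema named xyz (both placeholders, not names taken from the messages):

    # definitions *and* rows for every object in schema xyz
    pg_dump --schema=xyz --format=custom --file=xyz_full.dump mydb

    # definitions only (DDL), no rows
    pg_dump --schema=xyz --schema-only --file=xyz_ddl.sql mydb

As the replies note, the unqualified phrase is usually taken to mean the first form: the namespace and everything in it, data included.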
[ { "msg_contents": "Hi,\n\nI recently inherited a very old (PostgreSQL 7.0.3) database, and have \nmigrated it to 8.2.4 but have run into a performance issue.\n\nBasically, I did a dump and import into the new database, vacuumed and \ncreated fresh indexes and everything is work great except the following \ntype of query (and similar):\n\n SELECT tsr.stepId, tsr.testType, tsr.problemReportId, tsr.excpt, tcr.caseId\n FROM TestCaseRun tcr, TestStepRun tsr\n WHERE tcr.parentSN = 194813\n AND (tsr.testType <> ''\n OR tsr.problemReportId <> ''\n OR tsr.excpt <> '')\n AND tsr.parentSN = tcr.recordSN\n\nWhat used to take 250ms or so on the old database now takes between 55 and \n60 Seconds.\n\nOn the old database, the query plan looks like this:\n\nUnique (cost=13074.30..13078.36 rows=32 width=68)\n -> Sort (cost=13074.30..13074.30 rows=324 width=68)\n -> Nested Loop (cost=0.00..13060.77 rows=324 width=68)\n -> Index Scan using parentsn_tcr_indx on testcaserun tcr \n(cost=0.00..444.83 rows=111 width=16)\n -> Index Scan using parentsn_tsr_indx on teststeprun tsr \n(cost=0.00..113.42 rows=27 width=52)\n\nAnd on the new database it looks like this:\n\n Unique (cost=206559152.10..206559157.14 rows=336 width=137)\n -> Sort (cost=206559152.10..206559152.94 rows=336 width=137)\n Sort Key: tsr.stepid, tsr.testtype, tsr.problemreportid, \ntsr.excpt, tcr.caseid\n -> Nested Loop (cost=100000000.00..106559138.00 rows=336 \nwidth=137)\n -> Index Scan using parentsn_tcr_indx on testcaserun tcr \n(cost=0.00..17.00 rows=115 width=11)\n Index Cond: (parentsn = 186726)\n -> Index Scan using parentsn_tsr_indx on teststeprun tsr \n(cost=0.00..56089.00 rows=75747 width=134)\n Index Cond: (tsr.parentsn = tcr.recordsn)\n Filter: ((testtype <> ''::text) OR \n((problemreportid)::text <> ''::text) OR (excpt <> ''::text))\n(9 rows)\n\nI'm fairly familiar with PostgreSQL, but I have no idea where to start in \ntrying to trouble shoot this huge performance discrepancy. The hardware \nand OS are the same.\n\nAnd the data size is exactly the same between the two, and the total data \nsize is about 7.5GB, with the largest table (teststeprun mentioned above) \nbeing about 15 million rows.\n\nAny pointers to where to start troubleshooting this or how to change the \nquery to work better would be appreciated.\n\ncheers and thanks,\nBen Perrault\nSr. Systems Consultant\nAlcatel-Lucent Internetworking\n", "msg_date": "Mon, 20 Aug 2007 22:17:14 -0700 (PDT)", "msg_from": "Ben Perrault <[email protected]>", "msg_from_op": true, "msg_subject": "Poor Performance after Upgrade" }, { "msg_contents": "On Mon, Aug 20, 2007 at 10:17:14PM -0700, Ben Perrault wrote:\n> -> Nested Loop (cost=100000000.00..106559138.00 rows=336 \n> width=137)\n\nThis sounds very much like you're trying to force the planner. Did you set\nenable_nestloop=false or something? Are there any other non-default settings\nthat could negatively impact planner performance?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 22 Aug 2007 06:24:25 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance after Upgrade" }, { "msg_contents": "> Hi,\n>\n> I recently inherited a very old (PostgreSQL 7.0.3) database, and have\n> migrated it to 8.2.4 but have run into a performance issue.\n>\n\nDid you configure the 8.2.4 server to match the memory requirements etc of\nthe old server? 
PostgreSQL's default settings are usually not aimed at\noptimal performance.\n\n\n", "msg_date": "Wed, 22 Aug 2007 11:19:11 +0200 (CEST)", "msg_from": "\"vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance after Upgrade" }, { "msg_contents": "Ben Perrault wrote:\n> Hi,\n>\n> I recently inherited a very old (PostgreSQL 7.0.3) database, and have \n> migrated it to 8.2.4 but have run into a performance issue.\n>\n> Basically, I did a dump and import into the new database, vacuumed and \n> created fresh indexes and everything is work great except the \n> following type of query (and similar):\n>\n> SELECT tsr.stepId, tsr.testType, tsr.problemReportId, tsr.excpt, \n> tcr.caseId\n> FROM TestCaseRun tcr, TestStepRun tsr\n> WHERE tcr.parentSN = 194813\n> AND (tsr.testType <> ''\n> OR tsr.problemReportId <> ''\n> OR tsr.excpt <> '')\n> AND tsr.parentSN = tcr.recordSN\nThis query is not \"similar\" to the plans listed below. It will not \nresult in a sort/unique unless tcr or tsr are views.\n\nCan we also see explain analyze instead of just explain, it's much more \nhelpful to see what's actually going on. Especially since the row\nestimates are quite different in the two plans.\n\nYou also mentioned above that you vacuumed, did you analyze with that? \nvacuum doesn't do analyze in 8.2.4. You have to say \"vacuum analyze\", \nor just analyze.\n>\n> What used to take 250ms or so on the old database now takes between 55 \n> and 60 Seconds.\n>\n> On the old database, the query plan looks like this:\n>\n> Unique (cost=13074.30..13078.36 rows=32 width=68)\n> -> Sort (cost=13074.30..13074.30 rows=324 width=68)\n> -> Nested Loop (cost=0.00..13060.77 rows=324 width=68)\n> -> Index Scan using parentsn_tcr_indx on testcaserun \n> tcr (cost=0.00..444.83 rows=111 width=16)\n> -> Index Scan using parentsn_tsr_indx on teststeprun \n> tsr (cost=0.00..113.42 rows=27 width=52)\n>\n> And on the new database it looks like this:\n>\n> Unique (cost=206559152.10..206559157.14 rows=336 width=137)\n> -> Sort (cost=206559152.10..206559152.94 rows=336 width=137)\n> Sort Key: tsr.stepid, tsr.testtype, tsr.problemreportid, \n> tsr.excpt, tcr.caseid\n> -> Nested Loop (cost=100000000.00..106559138.00 rows=336 \n> width=137)\n> -> Index Scan using parentsn_tcr_indx on testcaserun \n> tcr (cost=0.00..17.00 rows=115 width=11)\n> Index Cond: (parentsn = 186726)\n> -> Index Scan using parentsn_tsr_indx on teststeprun \n> tsr (cost=0.00..56089.00 rows=75747 width=134)\n> Index Cond: (tsr.parentsn = tcr.recordsn)\n> Filter: ((testtype <> ''::text) OR \n> ((problemreportid)::text <> ''::text) OR (excpt <> ''::text))\n> (9 rows)\n>\n> I'm fairly familiar with PostgreSQL, but I have no idea where to start \n> in trying to trouble shoot this huge performance discrepancy. The \n> hardware and OS are the same.\n>\n> And the data size is exactly the same between the two, and the total \n> data size is about 7.5GB, with the largest table (teststeprun \n> mentioned above) being about 15 million rows.\n>\n> Any pointers to where to start troubleshooting this or how to change \n> the query to work better would be appreciated.\nLook at row estimates vs reality. They should be pretty close in the \nnew version.\nWhy are the costs so high in the new plan? 
100000000 happens to be a \nnice number that's used when you attempt to turn off a certain type of plan.\nEXPLAIN ANALZE (query) is your friend.\n\nRegards\n\nRussell\n\n", "msg_date": "Wed, 22 Aug 2007 20:52:30 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance after Upgrade" } ]
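A minimal sketch of the checks suggested in the replies above — spotting a disabled planner toggle (the 100000000 start-up cost), refreshing statistics after the dump/reload, and getting actual timings with EXPLAIN ANALYZE. The table and column names are the ones quoted in the thread, but the session itself is illustrative, not output from the poster's system:

    -- settings that differ from their defaults often explain odd plans
    SELECT name, setting FROM pg_settings WHERE source <> 'default';

    SHOW enable_nestloop;      -- 'off' here would account for the huge costs
    SET enable_nestloop = on;

    -- a dump/restore carries no statistics; rebuild them before judging plans
    ANALYZE testcaserun;
    ANALYZE teststeprun;

    EXPLAIN ANALYZE
    SELECT tsr.stepId, tsr.testType, tsr.problemReportId, tsr.excpt, tcr.caseId
    FROM TestCaseRun tcr, TestStepRun tsr
    WHERE tcr.parentSN = 194813
      AND (tsr.testType <> '' OR tsr.problemReportId <> '' OR tsr.excpt <> '')
      AND tsr.parentSN = tcr.recordSN;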
[ { "msg_contents": "Is there any way to stop the autovacuum if it is running longer than 10\nmin or so? \n\n \n\nIs it good idea to kill autovacuum if it is running longer than\nexpected?\n\n \n\n \n\nIn my OLTP system, we are inserting, updating and deleting the data\nevery second. \n\n \n\nAutovacuum started and never finished slowing down the whole system.\n\n \n\n \n\nAny help?\n\n \n\nThanks\n\nRegards\n\nsachi\n\n\n\n\n\n\n\n\n\n\nIs there any way to stop the autovacuum if it is running longer\nthan 10 min or so? \n \nIs it good idea to kill autovacuum if it is running longer\nthan expected?\n \n \nIn my OLTP system, we are inserting, updating and deleting\nthe data every second. \n \nAutovacuum started and never finished slowing down the whole\nsystem.\n \n \nAny help?\n \nThanks\nRegards\nsachi", "msg_date": "Tue, 21 Aug 2007 17:26:45 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Autovacuum is running forever" }, { "msg_contents": "\nOn Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:\n> In my OLTP system, we are inserting, updating and deleting the data \n> every second.\n>\n> Autovacuum started and never finished slowing down the whole system.\nThere's the possibility that your autovacuum settings aren't \naggressive enough for your system, so it's never able to catch up. \nWithout knowing details it's hard to say for certain. What are your \nautovacuum settings and other details about the load on your system?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 21 Aug 2007 16:35:57 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "Total RAM in the system is 2GB \n\n#-----------------------------------------------------------------------\n----\n# AUTOVACUUM PARAMETERS\n#-----------------------------------------------------------------------\n----\n\nvacuum_cost_delay = 200\n# 0-1000 milliseconds\nvacuum_cost_page_hit = 1\n# 0-10000 credits\nvacuum_cost_page_miss = 10 #\n0-10000 credits\nvacuum_cost_page_dirty = 20 #\n0-10000 credits\nvacuum_cost_limit = 200\n# 0-10000 credits\nautovacuum = on\nautovacuum_naptime = 3600\n#autovacuum_vacuum_threshold = 1000\n#autovacuum_analyze_threshold = 500\n#autovacuum_vacuum_scale_factor = 0.4\n#autovacuum_analyze_scale_factor = 0.2\n#autovacuum_vacuum_cost_delay = -1\n#autovacuum_vacuum_cost_limit = -1\n\nSome other parameter\n\nmax_connections = 350\nshared_buffers = 400MB # set to\n16 MB min 16 or max_connections*2, 8KB each\ntemp_buffers = 8MB # min\n100, 8KB each\nmax_prepared_transactions = 5 # can be\n0 or more\nwork_mem = 4MB\n# min 64, size in KB\nmaintenance_work_mem = 256MB # min 1024, size\nin KB\nmax_stack_depth = 2048 # min\n100, size in KB\nmax_fsm_pages = 2000000 # min\nmax_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min\n100, ~70 bytes each\nmax_files_per_process = 1000 # min 25\nbgwriter_delay = 200 #\n10-10000 milliseconds between rounds\nbgwriter_lru_percent = 1.0 # 0-100% of LRU\nbuffers scanned/round\nbgwriter_lru_maxpages = 5 # 0-1000\nbuffers max written/round\nbgwriter_all_percent = 0.333 # 0-100% of all\nbuffers scanned/round\nbgwriter_all_maxpages = 5 # 0-1000\nbuffers max written/round \n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Michael Glaesemann [mailto:[email protected]] \nSent: Tuesday, August 21, 2007 5:36 PM\nTo: Sachchida Ojha\nCc: [email protected]\nSubject: Re: [PERFORM] Autovacuum is running 
forever\n\n\nOn Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:\n> In my OLTP system, we are inserting, updating and deleting the data \n> every second.\n>\n> Autovacuum started and never finished slowing down the whole system.\nThere's the possibility that your autovacuum settings aren't aggressive\nenough for your system, so it's never able to catch up. \nWithout knowing details it's hard to say for certain. What are your\nautovacuum settings and other details about the load on your system?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 21 Aug 2007 17:37:59 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "Our model is to provode black box solutions to our customer.\n\nBlack box, I mean application system, web sever and database is running\non the same machine. \n\nWe are running our sensor on 10 assets (windows machine) and sending\nasset data to the server at every 15 minutes. There are some other user\noperations going on to those assets at the same time.\n\nOn server \n\nCpu util ranging from 10-75%\nMem util ranging from 15-50%\n\n\n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Michael Glaesemann [mailto:[email protected]] \nSent: Tuesday, August 21, 2007 5:36 PM\nTo: Sachchida Ojha\nCc: [email protected]\nSubject: Re: [PERFORM] Autovacuum is running forever\n\n\nOn Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:\n> In my OLTP system, we are inserting, updating and deleting the data \n> every second.\n>\n> Autovacuum started and never finished slowing down the whole system.\nThere's the possibility that your autovacuum settings aren't aggressive\nenough for your system, so it's never able to catch up. \nWithout knowing details it's hard to say for certain. What are your\nautovacuum settings and other details about the load on your system?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 21 Aug 2007 17:46:12 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "Sachchida Ojha wrote:\n\n> vacuum_cost_delay = 200\n\nThat is absurdly high. A setting of 10 is more likely to be useful.\n\n> autovacuum_naptime = 3600\n\nThat is too high probably as well; particularly so if you have \"updates\nand deletes every second\".\n\n> #autovacuum_vacuum_scale_factor = 0.4\n> #autovacuum_analyze_scale_factor = 0.2\n\nThese too. Try 0.1 for both and see how it goes.\n\nIn short, you need autovacuum to run _way more often_ than you are.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"Un poeta es un mundo encerrado en un hombre\" (Victor Hugo)\n", "msg_date": "Tue, 21 Aug 2007 17:54:41 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n\n\n\n> vacuum_cost_delay = 200\n> vacuum_cost_page_hit = 1\n> vacuum_cost_page_miss = 10\n> vacuum_cost_page_dirty = 20\n> vacuum_cost_limit = 200\n>\n> autovacuum = on\n> autovacuum_naptime = 3600\n>\n> maintenance_work_mem = 256MB # min 1024, size\n\nThat's a REALLY long naptime. Better to let autovacuum decide if you\nneed vacuum more often, and just increase the vacuum_cost_delay and\ndecrease vacuum_cost_limit so that vacuum doesn't slam your I/O.\n\nMaintenance work mem on the other hand is plenty big. 
and your fsm\nsettings seem large enough to handle your freed space.\n\nBut making vacuum wait so long between runs may be slowly bloating\nyour data store, and then vacuum becomes more and more expensive\nbecause it takes longer and longer to run.\n", "msg_date": "Tue, 21 Aug 2007 17:01:23 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" } ]
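A hedged translation of the advice above into postgresql.conf form. The scale factors and cost delay follow the numbers suggested in the replies; the one-minute naptime is simply the 8.2 default restored, and all of these are starting points to tune against the real workload rather than values taken from the poster's box:

    autovacuum = on
    autovacuum_naptime = 60              # seconds between checks, not 3600
    autovacuum_vacuum_scale_factor = 0.1
    autovacuum_analyze_scale_factor = 0.1
    vacuum_cost_delay = 10               # ms; 200 throttles vacuum far too hard
    vacuum_cost_limit = 200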
[ { "msg_contents": "Is there any data corruption/damage to the database if we forcefully\nkill autovacuum using cron job (if it is running longer than a\npredefined time frame).\n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Sachchida Ojha \nSent: Tuesday, August 21, 2007 5:46 PM\nTo: 'Michael Glaesemann'\nCc: '[email protected]'\nSubject: RE: [PERFORM] Autovacuum is running forever\n\nOur model is to provode black box solutions to our customer.\n\nBlack box, I mean application system, web sever and database is running\non the same machine. \n\nWe are running our sensor on 10 assets (windows machine) and sending\nasset data to the server at every 15 minutes. There are some other user\noperations going on to those assets at the same time.\n\nOn server \n\nCpu util ranging from 10-75%\nMem util ranging from 15-50%\n\n\n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Michael Glaesemann [mailto:[email protected]]\nSent: Tuesday, August 21, 2007 5:36 PM\nTo: Sachchida Ojha\nCc: [email protected]\nSubject: Re: [PERFORM] Autovacuum is running forever\n\n\nOn Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:\n> In my OLTP system, we are inserting, updating and deleting the data \n> every second.\n>\n> Autovacuum started and never finished slowing down the whole system.\nThere's the possibility that your autovacuum settings aren't aggressive\nenough for your system, so it's never able to catch up. \nWithout knowing details it's hard to say for certain. What are your\nautovacuum settings and other details about the load on your system?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 21 Aug 2007 17:52:32 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> Is there any data corruption/damage to the database if we forcefully\n> kill autovacuum using cron job (if it is running longer than a\n> predefined time frame).\n\nNot really. but vacuum will just have to run that long again plus\nsome the next time it starts up.\n\nAgain, it's better to run vacuum more often not less often, and keep\nthe cost_delay high enough that it doesn't interfere with your I/O.\nhowever, that will make it run even longer.\n", "msg_date": "Tue, 21 Aug 2007 17:03:25 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> Is there any data corruption/damage to the database if we forcefully\n> kill autovacuum using cron job (if it is running longer than a\n> predefined time frame).\n\nOh, and I'd look at your I/O subsystem. You might want to look at\nputting $300 hardware RAID cards with battery backed cache and 4 or so\ndisks in a RAID10 in them. It sounds to me like you could use more\nI/O for your vacuuming. Vacuuming isn't CPU intensive, but it can be\nI/O intensive.\n", "msg_date": "Tue, 21 Aug 2007 17:05:05 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "We are having only two disk (40GB each). One disk is used for OS, App\nServer, and application. 
Second disk is used for postgresql database.\nIt's a dual cpu machine having 2 GB of ram.\n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, August 21, 2007 6:05 PM\nTo: Sachchida Ojha\nCc: Michael Glaesemann; [email protected]\nSubject: Re: [PERFORM] Autovacuum is running forever\n\nOn 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> Is there any data corruption/damage to the database if we forcefully \n> kill autovacuum using cron job (if it is running longer than a \n> predefined time frame).\n\nOh, and I'd look at your I/O subsystem. You might want to look at\nputting $300 hardware RAID cards with battery backed cache and 4 or so\ndisks in a RAID10 in them. It sounds to me like you could use more I/O\nfor your vacuuming. Vacuuming isn't CPU intensive, but it can be I/O\nintensive.\n", "msg_date": "Tue, 21 Aug 2007 18:07:44 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> We are having only two disk (40GB each). One disk is used for OS, App\n> Server, and application. Second disk is used for postgresql database.\n> It's a dual cpu machine having 2 GB of ram.\n\nEven a single disk, with a battery backed caching controller will\ngenerally run things like updates and inserts much faster, and is\nusually a much better performance under load than a single disk.\n\nI'd at least look at mirroring them for redundancy and better read performance.\n", "msg_date": "Tue, 21 Aug 2007 17:19:59 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "Thanks to all of you. I have changed the settings and reloaded the\nconfig. Let me run this system overnight. I will update this forum if\nnew settings works for me. I am also asking management to upgrade the\nhardware.\n\nThanks a lot.\n\n\nRegards\nSachchida\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, August 21, 2007 6:05 PM\nTo: Sachchida Ojha\nCc: Michael Glaesemann; [email protected]\nSubject: Re: [PERFORM] Autovacuum is running forever\n\nOn 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> Is there any data corruption/damage to the database if we forcefully \n> kill autovacuum using cron job (if it is running longer than a \n> predefined time frame).\n\nOh, and I'd look at your I/O subsystem. You might want to look at\nputting $300 hardware RAID cards with battery backed cache and 4 or so\ndisks in a RAID10 in them. It sounds to me like you could use more I/O\nfor your vacuuming. Vacuuming isn't CPU intensive, but it can be I/O\nintensive.\n", "msg_date": "Tue, 21 Aug 2007 18:20:02 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum is running forever" }, { "msg_contents": "On 8/21/07, Sachchida Ojha <[email protected]> wrote:\n> Thanks to all of you. I have changed the settings and reloaded the\n> config. Let me run this system overnight. I will update this forum if\n> new settings works for me. I am also asking management to upgrade the\n> hardware.\n\nYou need to run vacuum verbose on the database (not an individual\ntable) and note the output at the end. It will tell you how bloated\nyour current db is. 
If vacuums have been delayed for too long, you\nmay need to vacuum full and / or reindex the bloated tables and\nindexes to reclaim the lost space.\n\nAssuming that there's not too much dead space, or that if there is\nyou've used vacuum full / reindexdb to reclaim it, then vacuum running\nregularly and in the background should fix this issue...\n\nThe output of vacuum verbose you're looking for is like this:\n\nDETAIL: A total of 2096 page slots are in use (including overhead).\n2096 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 182 KB.\n\nIf it comes back with some huge number for page slots (like in the\nmillions) needed to track all the dead tuples you'll need that vacuum\nfull / reindex. A certain amount of dead space is ok, even a good\nthing, since you don't have to extend your table / index files to\ninsert. 10-30% dead space is normal. anything around 100% or heading\nup from there is bad.\n\nYou'll also want to look through the rest of the vacuum verbose output\nfor things like this:\n\nINFO: vacuuming \"abc.zip_test\"\nINFO: index \"zip_test_pkey\" now contains 1000000 row versions in 3076 pages\nDETAIL: 8589 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.37s/0.23u sec elapsed 28.23 sec.\nINFO: \"zip_test\": removed 8589 row versions in 55 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"zip_test\": found 8589 removable, 1000000 nonremovable row\nversions in 6425 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 1.36s/0.34u sec elapsed 100.52 sec.\n\nIf the number of rows removed and the pages they held were a large\npercentage of the table, then you'll likely need to reindex them to\nget the space back. Or cluster on an index.\n", "msg_date": "Tue, 21 Aug 2007 18:16:00 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum is running forever" } ]
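A small sketch of the bloat check and recovery path described above. The table name bloated_table and its index are placeholders; which relations actually need this is something the VACUUM VERBOSE totals and per-table output have to tell you:

    -- run database-wide and read the page-slot totals at the end of the output
    VACUUM VERBOSE;

    -- if a table is mostly dead space, compact it and rebuild its indexes
    VACUUM FULL VERBOSE bloated_table;
    REINDEX TABLE bloated_table;

    -- or rewrite the table in index order (pre-8.3 syntax), which also drops the bloat
    CLUSTER bloated_table_pkey ON bloated_table;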
[ { "msg_contents": "I have a PostgreSQL 8.2.4 table with some seven million rows.\n\nThe psql query:\n\nselect count(rdate),rdate from reading where sensor_id in \n(1137,1138,1139) group by rdate order by rdate desc limit 1;\n\ntakes a few seconds but:\n\nselect count(rdate),rdate from reading where sensor_id in \n(1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n\n(anything with four or more values in the \"in\" list) takes several \nminutes.\n\nIs there any way to make the \"larger\" queries more efficient?\n\nBoth rdate and sensor_id are indexed and the database is vacuumed every \nnight.\n\nThe values in the \"in\" list are seldom as \"neat\" as in the above \nexamples. Actual values can range from 1 to about 2000. The number of \nvalues ranges from 2 to about 10.\n\nExplain outputs are:\n\nbenparts=# explain select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139,1140) group by rdate order by rdate desc \nlimit 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Limit (cost=0.00..39890.96 rows=1 width=8)\n -> GroupAggregate (cost=0.00..7938300.21 rows=199 width=8)\n -> Index Scan Backward using date on reading \n(cost=0.00..7937884.59 rows=82625 width=8)\n Filter: (sensor_id = ANY \n('{1137,1138,1139,1140}'::integer[]))\n(4 rows)\n\nbenparts=# explain select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139) group by rdate order by rdate desc limit \n1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Limit (cost=48364.32..48364.32 rows=1 width=8)\n -> Sort (cost=48364.32..48364.49 rows=69 width=8)\n Sort Key: rdate\n -> HashAggregate (cost=48361.35..48362.21 rows=69 width=8)\n -> Bitmap Heap Scan on reading (cost=535.53..48218.10 \nrows=28650 width=8)\n Recheck Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n -> Bitmap Index Scan on reading_sensor \n(cost=0.00..528.37 rows=28650 width=0)\n Index Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n(8 rows)\n\nTIA,\nStephen Davies\n-- \n========================================================================\nThis email is for the person(s) identified above, and is confidential to\nthe sender and the person(s). No one else is authorised to use or\ndisseminate this email or its contents.\n\nStephen Davies Consulting Voice: 08-8177 1595\nAdelaide, South Australia. Fax: 08-8177 0133\nComputing & Network solutions. Mobile:0403 0405 83\n", "msg_date": "Wed, 22 Aug 2007 12:10:36 +0930", "msg_from": "Stephen Davies <[email protected]>", "msg_from_op": true, "msg_subject": "Optimising \"in\" queries" }, { "msg_contents": "On 8/21/07, Stephen Davies <[email protected]> wrote:\n> I have a PostgreSQL 8.2.4 table with some seven million rows.\n>\n> The psql query:\n>\n> select count(rdate),rdate from reading where sensor_id in\n> (1137,1138,1139) group by rdate order by rdate desc limit 1;\n>\n> takes a few seconds but:\n>\n> select count(rdate),rdate from reading where sensor_id in\n> (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n>\n> (anything with four or more values in the \"in\" list) takes several\n> minutes.\n\nCan we see explain analyze output? (i.e. 
not just plain explain)\n", "msg_date": "Tue, 21 Aug 2007 23:20:25 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Stephen Davies wrote:\n> I have a PostgreSQL 8.2.4 table with some seven million rows.\n>\n> The psql query:\n>\n> select count(rdate),rdate from reading where sensor_id in \n> (1137,1138,1139) group by rdate order by rdate desc limit 1;\n>\n> takes a few seconds but:\n>\n> select count(rdate),rdate from reading where sensor_id in \n> (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n>\n> \nIt would have been helpful to see the table definition here. I can say \nup front that array processing in postgres is SLOW.\n\n> (anything with four or more values in the \"in\" list) takes several \n> minutes.\n>\n> Is there any way to make the \"larger\" queries more efficient?\n>\n> Both rdate and sensor_id are indexed and the database is vacuumed every \n> night.\n>\n> The values in the \"in\" list are seldom as \"neat\" as in the above \n> examples. Actual values can range from 1 to about 2000. The number of \n> values ranges from 2 to about 10.\n>\n> Explain outputs are:\n>\n> benparts=# explain select count(rdate),rdate from reading where \n> sensor_id in (1137,1138,1139,1140) group by rdate order by rdate desc \n> limit 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..39890.96 rows=1 width=8)\n> -> GroupAggregate (cost=0.00..7938300.21 rows=199 width=8)\n> -> Index Scan Backward using date on reading \n> (cost=0.00..7937884.59 rows=82625 width=8)\n> Filter: (sensor_id = ANY \n> ('{1137,1138,1139,1140}'::integer[]))\n> (4 rows)\n> \nI'm unsure of how you produced a plan like this without the benefit of \nseeing the table definition.\n> benparts=# explain select count(rdate),rdate from reading where \n> sensor_id in (1137,1138,1139) group by rdate order by rdate desc limit \n> 1;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n> Limit (cost=48364.32..48364.32 rows=1 width=8)\n> -> Sort (cost=48364.32..48364.49 rows=69 width=8)\n> Sort Key: rdate\n> -> HashAggregate (cost=48361.35..48362.21 rows=69 width=8)\n> -> Bitmap Heap Scan on reading (cost=535.53..48218.10 \n> rows=28650 width=8)\n> Recheck Cond: (sensor_id = ANY \n> ('{1137,1138,1139}'::integer[]))\n> -> Bitmap Index Scan on reading_sensor \n> (cost=0.00..528.37 rows=28650 width=0)\n> Index Cond: (sensor_id = ANY \n> ('{1137,1138,1139}'::integer[]))\n> (8 rows)\n>\n>\n> \nAs mentioned already, you need explain analyze.\n\nHowever I again will say that array processing is postgres is SLOW. It \nwould strongly recommend redesigning your schema to use a table with \nsensor_id's that correspond to the primary key in the reading table.\n\nRethinking the way you are going about this will probably be the most \neffective solution, but we will need more information if you are not \ncomfortable doing that yourself.\n\nRegards\n\nRussell Smith\n\n", "msg_date": "Wed, 22 Aug 2007 20:58:59 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "array processing???\n\nThere are no arrays. 
What made you think there might be?\n\nThe table definition is:\n\n\nbenparts=# \\d reading\n Table \"public.reading\"\n Column | Type | \nModifiers\n-----------+-----------------------------+-----------------------------------------------------------\n id | integer | not null default \nnextval(('reading_seq'::text)::regclass)\n sensor_id | integer |\n rdate | timestamp without time zone |\n rval | numeric(7,3) |\nIndexes:\n \"reading_pkey\" PRIMARY KEY, btree (id)\n \"unique_sensor_date\" UNIQUE, btree (sensor_id, rdate)\n \"date\" btree (rdate)\n \"reading_sensor\" btree (sensor_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (sensor_id) REFERENCES sensor(id)\n\nCheers,\nStephen\n\nOn Wednesday 22 August 2007 20:28, Russell Smith wrote:\n> Stephen Davies wrote:\n> > I have a PostgreSQL 8.2.4 table with some seven million rows.\n> >\n> > The psql query:\n> >\n> > select count(rdate),rdate from reading where sensor_id in\n> > (1137,1138,1139) group by rdate order by rdate desc limit 1;\n> >\n> > takes a few seconds but:\n> >\n> > select count(rdate),rdate from reading where sensor_id in\n> > (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n>\n> It would have been helpful to see the table definition here. I can\n> say up front that array processing in postgres is SLOW.\n>\n> > (anything with four or more values in the \"in\" list) takes several\n> > minutes.\n> >\n> > Is there any way to make the \"larger\" queries more efficient?\n> >\n> > Both rdate and sensor_id are indexed and the database is vacuumed\n> > every night.\n> >\n> > The values in the \"in\" list are seldom as \"neat\" as in the above\n> > examples. Actual values can range from 1 to about 2000. The number\n> > of values ranges from 2 to about 10.\n> >\n> > Explain outputs are:\n> >\n> > benparts=# explain select count(rdate),rdate from reading where\n> > sensor_id in (1137,1138,1139,1140) group by rdate order by rdate\n> > desc limit 1;\n> > QUERY PLAN\n> > -------------------------------------------------------------------\n> >-------------------------------- Limit (cost=0.00..39890.96 rows=1\n> > width=8)\n> > -> GroupAggregate (cost=0.00..7938300.21 rows=199 width=8)\n> > -> Index Scan Backward using date on reading\n> > (cost=0.00..7937884.59 rows=82625 width=8)\n> > Filter: (sensor_id = ANY\n> > ('{1137,1138,1139,1140}'::integer[]))\n> > (4 rows)\n>\n> I'm unsure of how you produced a plan like this without the benefit\n> of seeing the table definition.\n>\n> > benparts=# explain select count(rdate),rdate from reading where\n> > sensor_id in (1137,1138,1139) group by rdate order by rdate desc\n> > limit 1;\n> > QUERY PLAN\n> > -------------------------------------------------------------------\n> >---------------------------------- Limit (cost=48364.32..48364.32\n> > rows=1 width=8)\n> > -> Sort (cost=48364.32..48364.49 rows=69 width=8)\n> > Sort Key: rdate\n> > -> HashAggregate (cost=48361.35..48362.21 rows=69\n> > width=8) -> Bitmap Heap Scan on reading (cost=535.53..48218.10\n> > rows=28650 width=8)\n> > Recheck Cond: (sensor_id = ANY\n> > ('{1137,1138,1139}'::integer[]))\n> > -> Bitmap Index Scan on reading_sensor\n> > (cost=0.00..528.37 rows=28650 width=0)\n> > Index Cond: (sensor_id = ANY\n> > ('{1137,1138,1139}'::integer[]))\n> > (8 rows)\n>\n> As mentioned already, you need explain analyze.\n>\n> However I again will say that array processing is postgres is SLOW. 
\n> It would strongly recommend redesigning your schema to use a table\n> with sensor_id's that correspond to the primary key in the reading\n> table.\n>\n> Rethinking the way you are going about this will probably be the most\n> effective solution, but we will need more information if you are not\n> comfortable doing that yourself.\n>\n> Regards\n>\n> Russell Smith\n\n-- \n========================================================================\nThis email is for the person(s) identified above, and is confidential to\nthe sender and the person(s). No one else is authorised to use or\ndisseminate this email or its contents.\n\nStephen Davies Consulting Voice: 08-8177 1595\nAdelaide, South Australia. Fax: 08-8177 0133\nComputing & Network solutions. Mobile:0403 0405 83\n", "msg_date": "Wed, 22 Aug 2007 21:38:56 +0930", "msg_from": "Stephen Davies <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "\nOn Aug 22, 2007, at 5:58 , Russell Smith wrote:\n\n> Stephen Davies wrote:\n>> select count(rdate),rdate from reading where sensor_id in \n>> (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n>>\n>>\n> It would have been helpful to see the table definition here. I can \n> say up front that array processing in postgres is SLOW.\n\nUm, what array processing are you seeing here? IN (a, b, b) is not an \narray construct.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Wed, 22 Aug 2007 08:25:26 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": ">>> On Tue, Aug 21, 2007 at 9:40 PM, in message\n<[email protected]>, Stephen Davies <[email protected]>\nwrote: \n> Is there any way to make the \"larger\" queries more efficient?\n \nPeople would be in a better position to answer that if you posted the table\nstructure and the results of EXPLAIN ANALYZE (rather than just EXPLAIN).\n \n-Kevin\n\n\n", "msg_date": "Wed, 22 Aug 2007 13:50:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Interesting semantics. I have never seen the IN syntax referred to as \n\"array processing\" before.\n\nI have always thought of array processing as the thing that vector \nprocessors such as Cray and ETA do/did.\n\nWhile superficially equivalent, I have always believed that IN (a,b,c) \nexecuted faster than =a or =b or =c. Am I wrong for PostgreSQL?\n\nStephen\n\n On Wednesday 22 August 2007 22:55, Michael Glaesemann wrote:\n> On Aug 22, 2007, at 5:58 , Russell Smith wrote:\n> > Stephen Davies wrote:\n> >> select count(rdate),rdate from reading where sensor_id in\n> >> (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n> >\n> > It would have been helpful to see the table definition here. I can\n> > say up front that array processing in postgres is SLOW.\n>\n> Um, what array processing are you seeing here? IN (a, b, b) is not an\n> array construct.\n>\n> Michael Glaesemann\n> grzm seespotcode net\n\n-- \n========================================================================\nThis email is for the person(s) identified above, and is confidential to\nthe sender and the person(s). No one else is authorised to use or\ndisseminate this email or its contents.\n\nStephen Davies Consulting Voice: 08-8177 1595\nAdelaide, South Australia. Fax: 08-8177 0133\nComputing & Network solutions. 
Mobile:0403 0405 83\n", "msg_date": "Thu, 23 Aug 2007 09:00:13 +0930", "msg_from": "Stephen Davies <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "[Please don't top post as it makes the discussion more difficult to \nfollow.]\n\nOn Aug 22, 2007, at 18:30 , Stephen Davies wrote:\n\n> I have always thought of array processing as the thing that vector\n> processors such as Cray and ETA do/did.\n\n(I've always heard that referred to as vector processing.)\n\n> While superficially equivalent, I have always believed that IN (a,b,c)\n> executed faster than =a or =b or =c. Am I wrong for PostgreSQL?\n\nDepending on the numbers of the IN list and other statistcs, I \nbelieve PostgreSQL will rewrite z in IN (a, b, ...) into either (z = \na) OR (z = b) OR ... or somehow add it to the join list, so \nperformance will vary.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Wed, 22 Aug 2007 19:21:05 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Michael Glaesemann wrote:\n> \n> On Aug 22, 2007, at 5:58 , Russell Smith wrote:\n> \n>> Stephen Davies wrote:\n>>> select count(rdate),rdate from reading where sensor_id in \n>>> (1137,1138,1139,1140) group by rdate order by rdate desc limit 1;\n>>>\n>>>\n>> It would have been helpful to see the table definition here. I can \n>> say up front that array processing in postgres is SLOW.\n> \n> Um, what array processing are you seeing here? IN (a, b, b) is not an \n> array construct.\n\nFilter: (sensor_id = ANY ('{1137,1138,1139,1140}'::integer[]))\n\nI've never seen this plan item except for when array's are involved. I \ncould be wrong. I'd like to know how this is generated when you don't \nhave an array.\n\nRegards\n\nRussell Smith\n", "msg_date": "Thu, 23 Aug 2007 17:34:01 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Russell Smith wrote:\n> \n> Filter: (sensor_id = ANY ('{1137,1138,1139,1140}'::integer[]))\n> \n> I've never seen this plan item except for when array's are involved. I \n> could be wrong. I'd like to know how this is generated when you don't \n> have an array.\n> \n\nI have just discovered that PG 8.2 will turn an IN clause into an array \nsearch instead of an OR list. Sorry all.\n\nRegards\n\nRussell Smith\n", "msg_date": "Thu, 23 Aug 2007 17:42:28 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Stephen Davies wrote:\n> Interesting semantics. I have never seen the IN syntax referred to as \n> \"array processing\" before.\n> \n> I have always thought of array processing as the thing that vector \n> processors such as Cray and ETA do/did.\n> \n> While superficially equivalent, I have always believed that IN (a,b,c) \n> executed faster than =a or =b or =c. Am I wrong for PostgreSQL?\n\nOlder versions of Postgres translated IN (a, b, c) into an OR'ed list of\nequalities. 
Nowadays it is treated as an array; I think it's translated\nto = ANY ({a,b,c}), as you can see in the message you posted at the\nstart of this thread.\n\nI don't think you showed us the EXPLAIN ANALYZE results that Scott\nrequested.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 23 Aug 2007 15:46:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "I thought that I had but I screwed up the addresses.\nHere they are:\n\nbenparts=# explain select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139,1140) group by rdate order by rdate desc \nlimit 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Limit (cost=0.00..39890.96 rows=1 width=8)\n -> GroupAggregate (cost=0.00..7938300.21 rows=199 width=8)\n -> Index Scan Backward using date on reading \n(cost=0.00..7937884.59 rows=82625 width=8)\n Filter: (sensor_id = ANY \n('{1137,1138,1139,1140}'::integer[]))\n(4 rows)\n\nbenparts=# explain select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139) group by rdate order by rdate desc limit \n1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Limit (cost=48364.32..48364.32 rows=1 width=8)\n -> Sort (cost=48364.32..48364.49 rows=69 width=8)\n Sort Key: rdate\n -> HashAggregate (cost=48361.35..48362.21 rows=69 width=8)\n -> Bitmap Heap Scan on reading (cost=535.53..48218.10 \nrows=28650 width=8)\n Recheck Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n -> Bitmap Index Scan on reading_sensor \n(cost=0.00..528.37 rows=28650 width=0)\n Index Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n(8 rows)\n\n\nbenparts=# explain analyze select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139) group by rdate order by rdate desc limit \n1;\n \nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=49260.20..49260.20 rows=1 width=8) (actual \ntime=3263.219..3263.221 rows=1 loops=1)\n -> Sort (cost=49260.20..49260.38 rows=73 width=8) (actual \ntime=3263.213..3263.213 rows=1 loops=1)\n Sort Key: rdate\n -> HashAggregate (cost=49257.03..49257.94 rows=73 width=8) \n(actual time=3049.667..3093.345 rows=30445 loops=1)\n -> Bitmap Heap Scan on reading (cost=541.97..49109.62 \nrows=29481 width=8) (actual time=1727.021..2908.563 rows=91334 loops=1)\n Recheck Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n -> Bitmap Index Scan on reading_sensor \n(cost=0.00..534.60 rows=29481 width=0) (actual time=1714.980..1714.980 \nrows=91334 loops=1)\n Index Cond: (sensor_id = ANY \n('{1137,1138,1139}'::integer[]))\n Total runtime: 3264.121 ms\n(9 rows)\n\nbenparts=# explain analyze select count(rdate),rdate from reading where \nsensor_id in (1137,1138,1139,1140) group by rdate order by rdate desc \nlimit 1;\n QUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..41959.54 rows=1 width=8) (actual time=1.284..1.285 \nrows=1 loops=1)\n -> GroupAggregate (cost=0.00..8182110.32 rows=195 width=8) (actual \ntime=1.281..1.281 rows=1 loops=1)\n -> Index Scan Backward using date on 
reading \n(cost=0.00..8181711.41 rows=79294 width=8) (actual time=1.254..1.261 \nrows=2 loops=1)\n Filter: (sensor_id = ANY \n('{1137,1138,1139,1140}'::integer[]))\n Total runtime: 1.360 ms\n(5 rows)\n\nOn Friday 24 August 2007 05:16, Alvaro Herrera wrote:\n<snip>\n> I don't think you showed us the EXPLAIN ANALYZE results that Scott\n> requested.\n\n-- \n========================================================================\nThis email is for the person(s) identified above, and is confidential to\nthe sender and the person(s). No one else is authorised to use or\ndisseminate this email or its contents.\n\nStephen Davies Consulting Voice: 08-8177 1595\nAdelaide, South Australia. Fax: 08-8177 0133\nComputing & Network solutions. Mobile:0403 0405 83\n", "msg_date": "Fri, 24 Aug 2007 09:17:48 +0930", "msg_from": "Stephen Davies <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimising \"in\" queries" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Stephen Davies wrote:\n>> While superficially equivalent, I have always believed that IN (a,b,c) \n>> executed faster than =a or =b or =c. Am I wrong for PostgreSQL?\n\n> Older versions of Postgres translated IN (a, b, c) into an OR'ed list of\n> equalities. Nowadays it is treated as an array; I think it's translated\n> to = ANY ({a,b,c}), as you can see in the message you posted at the\n> start of this thread.\n\nIf you're dealing with tables large enough that the index search work is\nthe dominant cost, all these variants ought to be exactly the same.\nHowever, for smaller tables the planning time and executor startup time\nare interesting, and on those measures the = ANY(array) formulation\nshould win because there's less \"stuff\" for the system to look at.\nWith \"x=a OR x=b OR x=c\" the planner actually has to deduce three times\nthat an indexscan on x is possible; with \"x = ANY(ARRAY[a,b,c])\" it\ndoes that only once. That's why I changed IN to expand to an array\nconstruct instead of an OR tree. I have to confess not having tried to\nmeasure the consequences carefully, though. I suspect it's not all\nthat interesting at only three items ... it's lists of hundreds or\nthousands of items where this becomes a big deal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2007 22:19:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising \"in\" queries " } ]
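A possible rewrite, sketched only from the table definition and example queries quoted in this thread and not something the posters actually ran: the slow plan above walks the whole "date" index backwards and filters on sensor_id, whereas the unique (sensor_id, rdate) index can answer "latest rdate for one sensor" with a single backward index probe. Fetching the per-sensor maxima first and only counting rows at the winning timestamp should sidestep the big scan; treat it as something to verify with EXPLAIN ANALYZE rather than a known fix.

    -- intended to return the same single row as the original ORDER BY rdate DESC LIMIT 1 query
    SELECT count(*), rdate
    FROM reading
    WHERE sensor_id IN (1137, 1138, 1139, 1140)
      AND rdate = (
            SELECT max(mx) FROM (
                (SELECT rdate AS mx FROM reading WHERE sensor_id = 1137
                   ORDER BY sensor_id DESC, rdate DESC LIMIT 1)
                UNION ALL
                (SELECT rdate FROM reading WHERE sensor_id = 1138
                   ORDER BY sensor_id DESC, rdate DESC LIMIT 1)
                UNION ALL
                (SELECT rdate FROM reading WHERE sensor_id = 1139
                   ORDER BY sensor_id DESC, rdate DESC LIMIT 1)
                UNION ALL
                (SELECT rdate FROM reading WHERE sensor_id = 1140
                   ORDER BY sensor_id DESC, rdate DESC LIMIT 1)
            ) AS m
          )
    GROUP BY rdate;

Each parenthesised branch orders by (sensor_id DESC, rdate DESC), which matches the column order of unique_sensor_date, so the planner can satisfy the LIMIT 1 with a backward index scan instead of a sort.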
[ { "msg_contents": "Hello!\n\n We run a large (~66Gb) web-backend database on Postgresql 8.2.4 on\nLinux. The hardware is Dual Xeon 5130 with 16Gb ram, LSI Megaraid U320-2x\nscsi controller w/512Mb writeback cache and a BBU. Storage setup contains 3\nraid10 arrays (data, xlog, indexes, each on different array), 12 HDDs total.\nFrontend application uses jdbc driver, connection pooling and threads.\n\n We've run into an issue of IO storms on checkpoints. Once in 20min\n(which is checkpoint_interval) the database becomes unresponsive for about\n4-8 seconds. Query processing is suspended, server does nothing but writing\na large amount of data to disks. Because of the db server being stalled,\nsome of the web clients get timeout and disconnect, which is unacceptable.\nEven worse, as the new requests come at a pretty constant rate, by the time\nthis storm comes to an end there is a huge amount of sleeping app. threads\nwaiting for their queries to complete. After the db server comes back to\nlife again, these threads wake up and flood it with queries, so performance\nsuffer even more, for some minutes after the checkpoint.\n\n It seemed strange to me that our 70%-read db generates so much dirty\npages that writing them out takes 4-8 seconds and grabs the full bandwidth.\nFirst, I started to tune bgwriter to a more aggressive settings, but this\nwas of no help, nearly no performance changes at all. Digging into the issue\nfurther, I discovered that linux page cache was the reason. \"Dirty\"\nparameter in /proc/meminfo (which shows the amount of ready-to-write \"dirty\"\ndata currently sitting in page cache) grows between checkpoints from 0 to\nabout 100Mb. When checkpoint comes, all the 100mb got flushed out to disk,\neffectively causing a IO storm.\n\n I found this (http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n<http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm>) document and\npeeked into mm/page-writeback.c in linux kernel source tree. I'm not sure\nthat I understand pdflush writeout semantics correctly, but looks like when\nthe amount of \"dirty\" data is less than dirty_background_ratio*RAM/100,\npdflush only writes pages in background, waking up every\ndirty_writeback_centisecs and writing no more than 1024 pages\n(MAX_WRITEBACK_PAGES constant). When we hit dirty_background_ratio, pdflush\nstarts to write out more agressively.\n\n So, looks like the following scenario takes place: postgresql constantly\nwrites something to database and xlog files, dirty data gets to the page\ncache, and then slowly written out by pdflush. When postgres generates more\ndirty pages than pdflush writes out, the amount of dirty data in the\npagecache is growing. When we're at checkpoint, postgres does fsync() on the\ndatabase files, and sleeps until the whole page cache is written out.\n\n By default, dirty_background_ratio is 2%, which is about 328Mb of 16Gb\ntotal. Following the curring pdflush logic, nearly this amount of data we\nface to write out on checkpoint effective stalling everything else, so even\n1% of 16Gb is too much. My setup experience 4-8 sec pause in operation even\non ~100Mb dirty pagecache...\n\n I temporaly solved this problem by setting dirty_background_ratio to\n0%. This causes the dirty data to be written out immediately. It is ok for\nour setup (mostly because of large controller cache), but it doesn't looks\nto me as an elegant solution. 
Is there some other way to fix this issue\nwithout disabling pagecache and the IO smoothing it was designed to perform?\n\n-- \nRegards,\n Dmitry\n\n            Hello!    We run a large (~66Gb) web-backend\ndatabase on Postgresql 8.2.4 on Linux.\nThe hardware is  Dual Xeon 5130 with 16Gb ram, LSI Megaraid U320-2x\nscsi controller w/512Mb writeback cache and a BBU. Storage setup\ncontains 3 raid10 arrays (data, xlog, indexes, each on different\narray), 12 HDDs total. Frontend application uses jdbc driver,\nconnection pooling and threads.\n     We've run into an issue of IO storms on checkpoints. Once\nin 20min (which is checkpoint_interval) the database becomes\nunresponsive for about 4-8 seconds. Query processing is suspended,\nserver does nothing but writing a large amount of data to disks.\nBecause of the db server being stalled, some of the web clients get\ntimeout and disconnect, which is unacceptable. Even worse, as the new\nrequests come at a pretty constant rate, by the time this storm comes\nto an end there is a huge amount of sleeping app. threads waiting for\ntheir queries to complete. After the db server comes back to life\nagain, these threads wake up and flood it with queries, so performance\nsuffer even more, for some minutes after the checkpoint.\n    It seemed strange to me that our 70%-read db generates so\nmuch dirty pages that writing them out takes 4-8 seconds and grabs the\nfull bandwidth. First, I started to tune bgwriter to a more aggressive\nsettings, but this was of no help, nearly no performance changes at\nall. Digging into the issue further, I discovered that linux page cache\nwas the reason. \"Dirty\" parameter in /proc/meminfo (which shows the\namount of ready-to-write \"dirty\" data currently sitting in page cache)\ngrows between checkpoints from 0 to about 100Mb. When checkpoint comes,\nall the 100mb got flushed out to disk, effectively causing a IO storm.\n    I found this (http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n) document and peeked into mm/page-writeback.c in linux kernel\nsource tree. I'm not sure that I understand pdflush writeout semantics\ncorrectly, but looks like when the amount of \"dirty\" data is less than\ndirty_background_ratio*RAM/100, pdflush only writes pages in\nbackground, waking up every dirty_writeback_centisecs and writing no\nmore than 1024 pages (MAX_WRITEBACK_PAGES constant). When we hit\ndirty_background_ratio, pdflush starts to write out more agressively.\n    So, looks like the following scenario takes place:\npostgresql constantly writes something to database and xlog files,\ndirty data gets to the page cache, and then slowly written out by\npdflush. When postgres generates more dirty pages than pdflush writes\nout, the amount of dirty data in the pagecache is growing. When we're\nat checkpoint, postgres does fsync() on the database files, and sleeps\nuntil the whole page cache is written out.\n    By default, dirty_background_ratio is 2%, which is about\n328Mb of 16Gb total. Following the curring pdflush logic, nearly this\namount of data we face to write out on checkpoint effective stalling\neverything else, so even 1% of 16Gb is too much. My setup experience\n4-8 sec pause in operation even on ~100Mb dirty pagecache...\n     I temporaly solved this problem by setting\ndirty_background_ratio to 0%. This causes the dirty data to be written\nout immediately. 
It is ok for our setup (mostly because of large\ncontroller cache), but it doesn't looks to me as an elegant solution.\nIs there some other way to fix this issue without disabling pagecache\nand the IO smoothing it was designed to perform?-- Regards,             Dmitry", "msg_date": "Wed, 22 Aug 2007 19:33:35 +0400", "msg_from": "\"Dmitry Potapov\" <[email protected]>", "msg_from_op": true, "msg_subject": "io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "On Wed, Aug 22, 2007 at 07:33:35PM +0400, Dmitry Potapov wrote:\n> Hello!\n> \n> We run a large (~66Gb) web-backend database on Postgresql 8.2.4 on\n> Linux. The hardware is Dual Xeon 5130 with 16Gb ram, LSI Megaraid U320-2x\n> scsi controller w/512Mb writeback cache and a BBU. Storage setup contains 3\n> raid10 arrays (data, xlog, indexes, each on different array), 12 HDDs total.\n> Frontend application uses jdbc driver, connection pooling and threads.\n> \n> We've run into an issue of IO storms on checkpoints. Once in 20min\n> (which is checkpoint_interval) the database becomes unresponsive for about\n> 4-8 seconds. Query processing is suspended, server does nothing but writing\n> a large amount of data to disks. Because of the db server being stalled,\n> some of the web clients get timeout and disconnect, which is unacceptable.\n> Even worse, as the new requests come at a pretty constant rate, by the time\n> this storm comes to an end there is a huge amount of sleeping app. threads\n> waiting for their queries to complete. After the db server comes back to\n> life again, these threads wake up and flood it with queries, so performance\n> suffer even more, for some minutes after the checkpoint.\n> \n> It seemed strange to me that our 70%-read db generates so much dirty\n> pages that writing them out takes 4-8 seconds and grabs the full bandwidth.\n> First, I started to tune bgwriter to a more aggressive settings, but this\n> was of no help, nearly no performance changes at all. Digging into the issue\n> further, I discovered that linux page cache was the reason. \"Dirty\"\n> parameter in /proc/meminfo (which shows the amount of ready-to-write \"dirty\"\n> data currently sitting in page cache) grows between checkpoints from 0 to\n> about 100Mb. When checkpoint comes, all the 100mb got flushed out to disk,\n> effectively causing a IO storm.\n> \n> I found this (http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n> <http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm>) document and\n> peeked into mm/page-writeback.c in linux kernel source tree. I'm not sure\n> that I understand pdflush writeout semantics correctly, but looks like when\n> the amount of \"dirty\" data is less than dirty_background_ratio*RAM/100,\n> pdflush only writes pages in background, waking up every\n> dirty_writeback_centisecs and writing no more than 1024 pages\n> (MAX_WRITEBACK_PAGES constant). When we hit dirty_background_ratio, pdflush\n> starts to write out more agressively.\n> \n> So, looks like the following scenario takes place: postgresql constantly\n> writes something to database and xlog files, dirty data gets to the page\n> cache, and then slowly written out by pdflush. When postgres generates more\n> dirty pages than pdflush writes out, the amount of dirty data in the\n> pagecache is growing. When we're at checkpoint, postgres does fsync() on the\n> database files, and sleeps until the whole page cache is written out.\n> \n> By default, dirty_background_ratio is 2%, which is about 328Mb of 16Gb\n> total. 
Following the curring pdflush logic, nearly this amount of data we\n> face to write out on checkpoint effective stalling everything else, so even\n> 1% of 16Gb is too much. My setup experience 4-8 sec pause in operation even\n> on ~100Mb dirty pagecache...\n> \n> I temporaly solved this problem by setting dirty_background_ratio to\n> 0%. This causes the dirty data to be written out immediately. It is ok for\n> our setup (mostly because of large controller cache), but it doesn't looks\n> to me as an elegant solution. Is there some other way to fix this issue\n> without disabling pagecache and the IO smoothing it was designed to perform?\n> \n> -- \n> Regards,\n> Dmitry\n\nDmitry,\n\nYou are working at the correct level. The bgwriter performs the I/O smoothing\nfunction at the database level. Obviously, the OS level smoothing function\nneeded to be tuned and you have done that within the parameters of the OS.\nYou may want to bring this up on the Linux kernel lists and see if they have\nany ideas.\n\nGood luck,\n\nKen\n", "msg_date": "Wed, 22 Aug 2007 10:57:19 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "Are you able to show that the dirty pages are all coming from postgres?\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n", "msg_date": "Wed, 22 Aug 2007 11:57:33 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nDmitry Potapov wrote:\n> Hello!\n> \n> We run a large (~66Gb) web-backend database on Postgresql 8.2.4 on\n> Linux. The hardware is Dual Xeon 5130 with 16Gb ram, LSI Megaraid U320-2x\n> scsi controller w/512Mb writeback cache and a BBU. Storage setup contains 3\n> raid10 arrays (data, xlog, indexes, each on different array), 12 HDDs total.\n> Frontend application uses jdbc driver, connection pooling and threads.\n> \n> We've run into an issue of IO storms on checkpoints. Once in 20min\n> (which is checkpoint_interval) the database becomes unresponsive for about\n> 4-8 seconds. Query processing is suspended, server does nothing but writing\n\nWhat are your background writer settings?\n\nJoshua D. Drake\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGzF3nATb/zqfZUUQRAsV8AJ9Sg7yTUfTGKTB/vQdW5BucwgcRSgCeKqIE\njzR7X5+n0x1Y91etGOBvvpE=\n=4Dc9\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 22 Aug 2007 09:01:43 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "2007/8/22, Mark Mielke <[email protected]>:\n>\n> Are you able to show that the dirty pages are all coming from postgres?\n>\n>\nI don't know how to prove that, but I suspect that nothing else except\npostgres writes to disk on that system, because it runs nothing except\npostgresql and syslog (which I configured not to write to local storage,\nbut to send everytning to remote log server). No cron jobs, nothing else.\n\n-- \nRegards,\n Dmitry\n\n2007/8/22, Mark Mielke <[email protected]>:\nAre you able to show that the dirty pages are all coming from postgres?I don't know how to prove that, but I suspect that nothing else except postgres writes to disk on that system, because it runs nothing except postgresql and syslog  (which I configured not to write to local storage, but to send everytning to remote log server). No cron jobs, nothing else.\n-- Regards,             Dmitry", "msg_date": "Wed, 22 Aug 2007 20:16:15 +0400", "msg_from": "\"Dmitry Potapov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "2007/8/22, Joshua D. Drake <[email protected]>:\n\n> > We've run into an issue of IO storms on checkpoints. Once in 20min\n> > (which is checkpoint_interval) the database becomes unresponsive for\n> about\n> > 4-8 seconds. Query processing is suspended, server does nothing but\n> writing\n>\n> What are your background writer settings?\n>\n>\nbgwriter_delay=100ms\nbgwriter_lru_percent=20.0\nbgwriter_lru_maxpages=100\nbgwriter_all_percent=3\nbgwriter_all_maxpages=600\n\n\nIn fact, with dirty_background_ratio > 0 bgwriter even make things a tiny\nbit worse.\n-- \nRegards,\n Dmitry\n\n2007/8/22, Joshua D. Drake <[email protected]>:\n>     We've run into an issue of IO storms on checkpoints. Once in 20min> (which is checkpoint_interval) the database becomes unresponsive for about> 4-8 seconds. Query processing is suspended, server does nothing but writing\nWhat are your background writer settings?bgwriter_delay=100msbgwriter_lru_percent=20.0bgwriter_lru_maxpages=100bgwriter_all_percent=3bgwriter_all_maxpages=600\nIn fact, with dirty_background_ratio > 0 bgwriter even make things a tiny bit worse.-- Regards,             Dmitry", "msg_date": "Wed, 22 Aug 2007 20:22:59 +0400", "msg_from": "\"Dmitry Potapov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "2007/8/22, Kenneth Marshall <[email protected]>:\n>\n>\n> You are working at the correct level. The bgwriter performs the I/O\n> smoothing\n> function at the database level. Obviously, the OS level smoothing function\n> needed to be tuned and you have done that within the parameters of the OS.\n> You may want to bring this up on the Linux kernel lists and see if they\n> have\n> any ideas.\n\n\nWill do so, this seems to be a reasonable idea.\n\n\n-- \nRegards,\n Dmitry\n\n2007/8/22, Kenneth Marshall <[email protected]>:\nYou are working at the correct level. The bgwriter performs the I/O smoothingfunction at the database level. Obviously, the OS level smoothing functionneeded to be tuned and you have done that within the parameters of the OS.\nYou may want to bring this up on the Linux kernel lists and see if they haveany ideas.Will do so, this seems to be a reasonable idea. 
-- Regards,             Dmitry", "msg_date": "Wed, 22 Aug 2007 20:28:00 +0400", "msg_from": "\"Dmitry Potapov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "On Wed, 22 Aug 2007, Dmitry Potapov wrote:\n\n> I found this http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nIf you do end up following up with this via the Linux kernel mailing list, \nplease pass that link along. I've been meaning to submit it to them and \nwait for the flood of e-mail telling me what I screwed up, that will go \nbetter if you tell them about it instead of me.\n\n> I temporaly solved this problem by setting dirty_background_ratio to 0%. \n> This causes the dirty data to be written out immediately. It is ok for \n> our setup (mostly because of large controller cache), but it doesn't \n> looks to me as an elegant solution. Is there some other way to fix this \n> issue without disabling pagecache and the IO smoothing it was designed \n> to perform?\n\nI spent a couple of months trying and decided it was impossible. Your \nanalysis of the issue is completely accurate; lowering \ndirty_background_ratio to 0 makes the system much less efficient, but it's \nthe only way to make the stalls go completely away.\n\nI contributed some help toward fixing the issue in the upcoming 8.3 \ninstead; there's a new checkpoint writing process aimed to ease the exact \nproblem you're running into there, see the new \ncheckpoint_completion_target tunable at \nhttp://developer.postgresql.org/pgdocs/postgres/wal-configuration.html\n\nIf you could figure out how to run some tests to see if the problem clears \nup for you using the new technique, that would be valuable feedback for \nthe development team for the upcoming 8.3 beta. Probably more productive \nuse of your time than going crazy trying to fix the issue in 8.2.4.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 22 Aug 2007 16:30:35 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "2007/8/23, Greg Smith <[email protected]>:\n>\n> On Wed, 22 Aug 2007, Dmitry Potapov wrote:\n>\n> If you do end up following up with this via the Linux kernel mailing list,\n> please pass that link along. I've been meaning to submit it to them and\n> wait for the flood of e-mail telling me what I screwed up, that will go\n> better if you tell them about it instead of me.\n\n\nI'm planning to do so, but before I need to take a look at postgresql source\nand dev documentation to find how exactly IO is done, to be able to explain\nthe issue to linux kernel people. That will take some time, I'll post a\nlink here when I'm done.\n\n\n> > looks to me as an elegant solution. Is there some other way to fix this\n> > issue without disabling pagecache and the IO smoothing it was designed\n> > to perform?\n>\n> I spent a couple of months trying and decided it was impossible. Your\n> analysis of the issue is completely accurate; lowering\n> dirty_background_ratio to 0 makes the system much less efficient, but it's\n> the only way to make the stalls go completely away.\n\n\nBy the way, does postgresql has a similar stall problem on freebsd/other\nOS'es? 
It would be interesting to study their approach to io smoothing if it\ndoesn't.\n\nI contributed some help toward fixing the issue in the upcoming 8.3\n> instead; there's a new checkpoint writing process aimed to ease the exact\n> problem you're running into there, see the new\n> checkpoint_completion_target tunable at\n> http://developer.postgresql.org/pgdocs/postgres/wal-configuration.html\n>\n> If you could figure out how to run some tests to see if the problem clears\n> up for you using the new technique, that would be valuable feedback for\n> the development team for the upcoming 8.3 beta. Probably more productive\n> use of your time than going crazy trying to fix the issue in 8.2.4.\n\nWe have a tool here to record and replay the exact workload we have on a\nreal production system, the only problem is getting a spare 16Gb box. I can\nget a server with 8Gb ram and nearly same storage setup for testing\npurposes. I hope it will be able to carry the production load, so I can\ncompare 8.2.4 and 8.3devel on the same box, in the same situation. Is there\nany other changes in 8.3devel that can affect the results of such test? I\ndidn't really follow 8.3 development process :(\n\n-- \nRegards,\n Dmitry\n\n2007/8/23, Greg Smith <[email protected]>:\nOn Wed, 22 Aug 2007, Dmitry Potapov wrote:If you do end up following up with this via the Linux kernel mailing list,please pass that link along.  I've been meaning to submit it to them andwait for the flood of e-mail telling me what I screwed up, that will go\nbetter if you tell them about it instead of me.I'm planning to do so, but before I need to take a look at postgresql source and dev documentation to find how exactly IO is done, to be able to explain the issue to linux kernel people.  That will take some time, I'll post a link here when I'm done.\n > looks to me as an elegant solution. Is there some other way to fix this\n> issue without disabling pagecache and the IO smoothing it was designed> to perform?I spent a couple of months trying and decided it was impossible.  Youranalysis of the issue is completely accurate; lowering\ndirty_background_ratio to 0 makes the system much less efficient, but it'sthe only way to make the stalls go completely away.By the way, does postgresql has a similar stall problem on freebsd/other OS'es? It would be interesting to study their approach to io smoothing if it doesn't.\n\nI contributed some help toward fixing the issue in the upcoming 8.3instead; there's a new checkpoint writing process aimed to ease the exact\nproblem you're running into there, see the newcheckpoint_completion_target tunable athttp://developer.postgresql.org/pgdocs/postgres/wal-configuration.html\nIf you could figure out how to run some tests to see if the problem clearsup for you using the new technique, that would be valuable feedback forthe development team for the upcoming 8.3 beta.  Probably more productive\nuse of your time than going crazy trying to fix the issue in 8.2.4.We have a tool here to record and replay the exact workload we have on a real production system, the only problem is getting a spare 16Gb box. I can get a server with 8Gb ram and nearly same storage setup for testing purposes. I hope it will be able to carry the production load, so I can compare \n8.2.4 and 8.3devel on the same box, in the same situation. Is there any other changes in 8.3devel that can affect the results of such test? 
I didn't really follow 8.3 development process :(-- Regards, \n            Dmitry", "msg_date": "Thu, 23 Aug 2007 18:37:39 +0400", "msg_from": "\"Dmitry Potapov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "On Thu, 23 Aug 2007, Dmitry Potapov wrote:\n\n> I'm planning to do so, but before I need to take a look at postgresql source\n> and dev documentation to find how exactly IO is done, to be able to explain\n> the issue to linux kernel people.\n\nI can speed that up for you. \nhttp://developer.postgresql.org/index.php/Buffer_Cache%2C_Checkpoints%2C_and_the_BGW \noutlines all the source code involved. Easiest way to browse through the \ncode is it via http://doxygen.postgresql.org/ , eventually I want to \nupdate the page so it points right into the appropriate doxygen spots but \nhaven't gotten to that yet.\n\n> By the way, does postgresql has a similar stall problem on freebsd/other\n> OS'es? It would be interesting to study their approach to io smoothing if it\n> doesn't.\n\nThere's some evidence that something about Linux aggrevates the problem; \ncheck out \nhttp://archives.postgresql.org/pgsql-hackers/2007-07/msg00261.php and the \nrest of the messages in that thread. I haven't heard a report of this \nproblem from someone who isn't running Linux, but as it requires a certain \nlevel of hardware and a specific type of work load I'm not sure if this is \ncoincidence or a cause/effect relationship.\n\n> Is there any other changes in 8.3devel that can affect the results of \n> such test?\n\nThe \"all\" component of the background writer was removed as it proved not \nto be useful once checkpoint_completion_target was introduced. And the \nLRU background writer keeps going while checkpoints are being trickled \nout, in earlier versions that didn't happen.\n\nThe test I'd like to see more people run is to simulate their workloads \nwith checkpoint_completion_target set to 0.5, 0.7, and 0.9 and see how \neach of those settings works relative to the 8.2 behavior.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 23 Aug 2007 13:00:57 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "\nOn Aug 22, 2007, at 10:57 AM, Kenneth Marshall wrote:\n\n> On Wed, Aug 22, 2007 at 07:33:35PM +0400, Dmitry Potapov wrote:\n>> Hello!\n>>\n>> We run a large (~66Gb) web-backend database on Postgresql \n>> 8.2.4 on\n>> Linux. The hardware is Dual Xeon 5130 with 16Gb ram, LSI Megaraid \n>> U320-2x\n>> scsi controller w/512Mb writeback cache and a BBU. Storage setup \n>> contains 3\n>> raid10 arrays (data, xlog, indexes, each on different array), 12 \n>> HDDs total.\n>> Frontend application uses jdbc driver, connection pooling and \n>> threads.\n>>\n>> We've run into an issue of IO storms on checkpoints. Once in \n>> 20min\n>> (which is checkpoint_interval) the database becomes unresponsive \n>> for about\n>> 4-8 seconds. Query processing is suspended, server does nothing \n>> but writing\n>> a large amount of data to disks. Because of the db server being \n>> stalled,\n>> some of the web clients get timeout and disconnect, which is \n>> unacceptable.\n>> Even worse, as the new requests come at a pretty constant rate, by \n>> the time\n>> this storm comes to an end there is a huge amount of sleeping app. 
\n>> threads\n>> waiting for their queries to complete. After the db server comes \n>> back to\n>> life again, these threads wake up and flood it with queries, so \n>> performance\n>> suffer even more, for some minutes after the checkpoint.\n>>\n>> It seemed strange to me that our 70%-read db generates so much \n>> dirty\n>> pages that writing them out takes 4-8 seconds and grabs the full \n>> bandwidth.\n>> First, I started to tune bgwriter to a more aggressive settings, \n>> but this\n>> was of no help, nearly no performance changes at all. Digging into \n>> the issue\n>> further, I discovered that linux page cache was the reason. \"Dirty\"\n>> parameter in /proc/meminfo (which shows the amount of ready-to- \n>> write \"dirty\"\n>> data currently sitting in page cache) grows between checkpoints \n>> from 0 to\n>> about 100Mb. When checkpoint comes, all the 100mb got flushed out \n>> to disk,\n>> effectively causing a IO storm.\n>>\n>> I found this (http://www.westnet.com/~gsmith/content/linux- \n>> pdflush.htm\n>> <http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm>) \n>> document and\n>> peeked into mm/page-writeback.c in linux kernel source tree. I'm \n>> not sure\n>> that I understand pdflush writeout semantics correctly, but looks \n>> like when\n>> the amount of \"dirty\" data is less than dirty_background_ratio*RAM/ \n>> 100,\n>> pdflush only writes pages in background, waking up every\n>> dirty_writeback_centisecs and writing no more than 1024 pages\n>> (MAX_WRITEBACK_PAGES constant). When we hit \n>> dirty_background_ratio, pdflush\n>> starts to write out more agressively.\n>>\n>> So, looks like the following scenario takes place: postgresql \n>> constantly\n>> writes something to database and xlog files, dirty data gets to \n>> the page\n>> cache, and then slowly written out by pdflush. When postgres \n>> generates more\n>> dirty pages than pdflush writes out, the amount of dirty data in the\n>> pagecache is growing. When we're at checkpoint, postgres does fsync \n>> () on the\n>> database files, and sleeps until the whole page cache is written out.\n>>\n>> By default, dirty_background_ratio is 2%, which is about 328Mb \n>> of 16Gb\n>> total. Following the curring pdflush logic, nearly this amount of \n>> data we\n>> face to write out on checkpoint effective stalling everything \n>> else, so even\n>> 1% of 16Gb is too much. My setup experience 4-8 sec pause in \n>> operation even\n>> on ~100Mb dirty pagecache...\n>>\n>> I temporaly solved this problem by setting \n>> dirty_background_ratio to\n>> 0%. This causes the dirty data to be written out immediately. It \n>> is ok for\n>> our setup (mostly because of large controller cache), but it \n>> doesn't looks\n>> to me as an elegant solution. Is there some other way to fix this \n>> issue\n>> without disabling pagecache and the IO smoothing it was designed \n>> to perform?\n>>\n>> -- \n>> Regards,\n>> Dmitry\n>\n> Dmitry,\n>\n> You are working at the correct level. The bgwriter performs the I/O \n> smoothing\n> function at the database level. Obviously, the OS level smoothing \n> function\n> needed to be tuned and you have done that within the parameters of \n> the OS.\n> You may want to bring this up on the Linux kernel lists and see if \n> they have\n> any ideas.\n>\n> Good luck,\n>\n> Ken\n\nHave you tried decreasing you checkpoint interval? 
That would at \nleast help to reduce the amount of data that needs to be flushed when \nPostgres fsyncs.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Tue, 28 Aug 2007 10:00:57 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" }, { "msg_contents": "On Tue, Aug 28, 2007 at 10:00:57AM -0500, Erik Jones wrote:\n> >> It seemed strange to me that our 70%-read db generates so much \n> >>dirty\n> >>pages that writing them out takes 4-8 seconds and grabs the full \n> >>bandwidth.\n> >>First, I started to tune bgwriter to a more aggressive settings, \n> >>but this\n> >>was of no help, nearly no performance changes at all. Digging into \n> >>the issue\n> >>further, I discovered that linux page cache was the reason. \"Dirty\"\n> >>parameter in /proc/meminfo (which shows the amount of ready-to- \n> >>write \"dirty\"\n> >>data currently sitting in page cache) grows between checkpoints \n> >>from 0 to\n> >>about 100Mb. When checkpoint comes, all the 100mb got flushed out \n> >>to disk,\n> >>effectively causing a IO storm.\n> >>\n> >> I found this (http://www.westnet.com/~gsmith/content/linux- \n> >>pdflush.htm\n> >><http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm>) \n> >>document and\n> >>peeked into mm/page-writeback.c in linux kernel source tree. I'm \n> >>not sure\n> >>that I understand pdflush writeout semantics correctly, but looks \n> >>like when\n> >>the amount of \"dirty\" data is less than dirty_background_ratio*RAM/ \n> >>100,\n> >>pdflush only writes pages in background, waking up every\n> >>dirty_writeback_centisecs and writing no more than 1024 pages\n> >>(MAX_WRITEBACK_PAGES constant). When we hit \n> >>dirty_background_ratio, pdflush\n> >>starts to write out more agressively.\n> >>\n> >> So, looks like the following scenario takes place: postgresql \n> >>constantly\n> >>writes something to database and xlog files, dirty data gets to \n> >>the page\n> >>cache, and then slowly written out by pdflush. When postgres \n> >>generates more\n> >>dirty pages than pdflush writes out, the amount of dirty data in the\n> >>pagecache is growing. When we're at checkpoint, postgres does fsync \n> >>() on the\n> >>database files, and sleeps until the whole page cache is written out.\n> >>\n> >> By default, dirty_background_ratio is 2%, which is about 328Mb \n> >>of 16Gb\n> >>total. Following the curring pdflush logic, nearly this amount of \n> >>data we\n> >>face to write out on checkpoint effective stalling everything \n> >>else, so even\n> >>1% of 16Gb is too much. My setup experience 4-8 sec pause in \n> >>operation even\n> >>on ~100Mb dirty pagecache...\n> >>\n> >> I temporaly solved this problem by setting \n> >>dirty_background_ratio to\n> >>0%. This causes the dirty data to be written out immediately. It \n> >>is ok for\n> >>our setup (mostly because of large controller cache), but it \n> >>doesn't looks\n> >>to me as an elegant solution. Is there some other way to fix this \n> >>issue\n> >>without disabling pagecache and the IO smoothing it was designed \n> >>to perform?\n> >\n> >You are working at the correct level. The bgwriter performs the I/O \n> >smoothing\n> >function at the database level. 
Obviously, the OS level smoothing \n> >function\n> >needed to be tuned and you have done that within the parameters of \n> >the OS.\n> >You may want to bring this up on the Linux kernel lists and see if \n> >they have\n> >any ideas.\n> >\n> >Good luck,\n> >\n> >Ken\n> \n> Have you tried decreasing you checkpoint interval? That would at \n> least help to reduce the amount of data that needs to be flushed when \n> Postgres fsyncs.\n\nThe downside to that is it will result in writing a lot more data to WAL\nas long as full page writes are on.\n\nIsn't there some kind of a timeout parameter for how long dirty data\nwill sit in the cache? It seems pretty broken to me to allow stuff to\nsit in a dirty state indefinitely.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 28 Aug 2007 16:34:04 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: io storm on checkpoints, postgresql 8.2.4, linux" } ]
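Two practical footnotes on the knobs discussed in this thread. The age-based timeout asked about in the last message does exist: vm.dirty_expire_centisecs caps how long a dirty page may sit in the cache before pdflush writes it out regardless of the ratio thresholds. And the checkpoint spreading Greg Smith mentions is a single postgresql.conf setting in 8.3. The values below are illustrative only, in the spirit of the tuning described above, not tested recommendations.

    # /etc/sysctl.conf (or echo the values into /proc/sys/vm/...); illustrative values only
    vm.dirty_background_ratio = 1         # start background writeback at 1% of RAM
    vm.dirty_ratio = 10                   # point at which writers must flush for themselves
    vm.dirty_writeback_centisecs = 100    # wake pdflush every 1 s
    vm.dirty_expire_centisecs = 1000      # pages dirty for over 10 s are written at the next wakeup

    # postgresql.conf, 8.3 and later
    checkpoint_completion_target = 0.7    # spread each checkpoint over ~70% of the checkpoint interval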
[ { "msg_contents": "I have read that trigram matching (similarity()) performance degrades when \nthe matching is on longer strings such as phrases. I need to quickly match \nstrings and rate them by similiarity. The strings are typically one to seven \nwords in length - and will often include unconventional abbreviations and \nmisspellings.\n\nI have a stored function which does more thorough testing of the phrases, \nincluding spelling correction, abbreviation translation, etc... and scores \nthe results - I pick the winning score that passes a pass/fail constant. \nHowever, the function is slow. My solution was to reduce the number of rows \nthat are passed to the function by pruning obvious mismatches using \nsimilarity(). However, trigram matching on phrases is slow as well.\n\nI have experimented with tsearch2 but I have two problems:\n\n1) I need a \"score\" so I can decide if match passed or failed. trigram \nsimilarity() has a fixed result that you can test, but I don't know if \nrank() returns results that can be compared to a fixed value\n\n2) I need an efficient methodology to create vectors based on trigrams, and \na way to create an index to support it. My tsearch2 experiment with normal \nvectors used gist(text tsvector) and an on insert/update trigger to populate \nthe vector field.\n\nAny suggestions on where to go with this project to improve performance \nwould be greatly appreciated.\n\nCarlo\n\n\n", "msg_date": "Wed, 22 Aug 2007 12:02:54 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "On Wed, Aug 22, 2007 at 12:02:54PM -0400, Carlo Stonebanks wrote:\n> Any suggestions on where to go with this project to improve performance \n> would be greatly appreciated.\n\nI'm a bit unsure from reading your mail -- have you tried pg_trgm with a GiST\nindex?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 22 Aug 2007 18:29:50 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "On Wed, 22 Aug 2007, Carlo Stonebanks wrote:\n\n> I have read that trigram matching (similarity()) performance degrades when \n> the matching is on longer strings such as phrases. I need to quickly match \n> strings and rate them by similiarity. The strings are typically one to seven \n> words in length - and will often include unconventional abbreviations and \n> misspellings.\n>\n> I have a stored function which does more thorough testing of the phrases, \n> including spelling correction, abbreviation translation, etc... and scores \n> the results - I pick the winning score that passes a pass/fail constant. \n> However, the function is slow. My solution was to reduce the number of rows \n> that are passed to the function by pruning obvious mismatches using \n> similarity(). However, trigram matching on phrases is slow as well.\n\nyou didn't show us explain analyze of your select.\n\n>\n> I have experimented with tsearch2 but I have two problems:\n>\n> 1) I need a \"score\" so I can decide if match passed or failed. trigram \n> similarity() has a fixed result that you can test, but I don't know if rank() \n> returns results that can be compared to a fixed value\n>\n> 2) I need an efficient methodology to create vectors based on trigrams, and a \n> way to create an index to support it. 
My tsearch2 experiment with normal \n> vectors used gist(text tsvector) and an on insert/update trigger to populate \n> the vector field.\n>\n> Any suggestions on where to go with this project to improve performance would \n> be greatly appreciated.\n>\n> Carlo\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 22 Aug 2007 22:48:44 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "In December I had tried this; it caused a tremendous slowdown in our system. \nI have avoided it since then.\n\nDo you expect pg_trgm to work with phrases? OI had read a post earlier from \nan earlier support question that suggested that it I SHOULD expect \nperformance to degrade and that pg_trgrm was oriented towards word mathcing, \nnot phrase matching.\n\nCarlo\n\n\n\"\"Steinar H. Gunderson\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Wed, Aug 22, 2007 at 12:02:54PM -0400, Carlo Stonebanks wrote:\n>> Any suggestions on where to go with this project to improve performance\n>> would be greatly appreciated.\n>\n> I'm a bit unsure from reading your mail -- have you tried pg_trgm with a \n> GiST\n> index?\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n", "msg_date": "Wed, 22 Aug 2007 16:25:16 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "Hi Oleg,\n\n> you didn't show us explain analyze of your select.\n\nI didn't because I didn't expect any reaction to it - my understanding is \nthat trigram matching for phrases is not recommended because of the \nperformance. Do you believe that I SHOULD expect good performance from \ntrigram matching on phrases (a sopposed to words)?\n\nCarlo \n\n", "msg_date": "Wed, 22 Aug 2007 16:31:08 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "On Wed, 22 Aug 2007, Carlo Stonebanks wrote:\n\n> Hi Oleg,\n>\n>> you didn't show us explain analyze of your select.\n>\n> I didn't because I didn't expect any reaction to it - my understanding is \n> that trigram matching for phrases is not recommended because of the \n> performance. 
Do you believe that I SHOULD expect good performance from \n> trigram matching on phrases (a sopposed to words)?\n\nThe problem is in idea, not in performance.\n\n>\n> Carlo \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 23 Aug 2007 00:43:07 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" }, { "msg_contents": "\n>> The problem is in idea, not in performance.\n\nOh, I think we both agree on that! ;-D \n\nThis is why I didn't post any EXPLAINs or anything like that. I thought the\nproblem was in the entire method of how to best zero in on the set of\nrecords best suited for closer analysis by my phrase-matching function.\n\nSince you asked about an EXPLAIN ANALYZE, I put together some results for\nfor you. I added a pg_trgm index to the table to show the performance of\nGiST indexes, and did another based on exclusively on similarity(). But I\ndon't think that you will be surprised by what you see.\n\nAs you say, the problem is in the idea - but no matter what, I need to be\nable to match phrases that will have all sorts of erratic abbreviations and\nmisspellings - and I have to do it at very high speeds.\n\nI would appreciate any suggestions you might have.\n\nCarlo\n\n\n\nselect similarity('veterans''s affairs', name) as sim, name\nfrom institution \nwhere name % 'veterans''s affairs'\norder by sim desc\n\nSort (cost=4068.21..4071.83 rows=1446 width=23) (actual\ntime=4154.962..4155.006 rows=228 loops=1)\n Sort Key: similarity('veterans''s affairs'::text, (name)::text)\n -> Bitmap Heap Scan on institution (cost=75.07..3992.31 rows=1446\nwidth=23) (actual time=4152.825..4154.754 rows=228 loops=1)\n Recheck Cond: ((name)::text % 'veterans''s affairs'::text)\n -> Bitmap Index Scan on institution_name_trgm_idx\n(cost=0.00..74.71 rows=1446 width=0) (actual time=4152.761..4152.761\nrows=228 loops=1)\n Index Cond: ((name)::text % 'veterans''s affairs'::text)\nTotal runtime: 4155.127 ms\n\nselect name\nfrom institution \nwhere \n similarity('veterans''s affairs', name) > 0.5\norder by similarity('veterans''s affairs', name) > 0.5\n\nSort (cost=142850.08..144055.17 rows=482036 width=23) (actual\ntime=12603.745..12603.760 rows=77 loops=1)\n Sort Key: (similarity('veterans''s affairs'::text, (name)::text) >\n0.5::double precision)\n -> Seq Scan on institution (cost=0.00..97348.81 rows=482036 width=23)\n(actual time=2032.439..12603.370 rows=77 loops=1)\n Filter: (similarity('veterans''s affairs'::text, (name)::text) >\n0.5::double precision)\nTotal runtime: 12603.818 ms\n\n\n", "msg_date": "Wed, 22 Aug 2007 17:09:52 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast tsearch2, trigram matching on short phrases" } ]
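For readers reconstructing the setup this thread assumes: the GiST index in the plans above is a pg_trgm index, the match threshold behind the % operator comes from set_limit()/show_limit(), and filtering on similarity() directly is not indexable, which is why the second plan above degenerates into a 12-second sequential scan. A minimal sketch, reusing the column and index names from the EXPLAIN output (contrib/pg_trgm must already be installed):

    CREATE INDEX institution_name_trgm_idx
        ON institution USING gist (name gist_trgm_ops);

    SELECT show_limit();     -- similarity threshold used by %, 0.3 by default
    SELECT set_limit(0.5);   -- raise it so % prunes candidates harder

    SELECT similarity('veterans''s affairs', name) AS sim, name
    FROM institution
    WHERE name % 'veterans''s affairs'    -- index-assisted; similarity(...) > x alone is not
    ORDER BY sim DESC;

Raising the limit shrinks the candidate set handed to the slower scoring function, which is the pruning strategy described at the top of the thread, but as the 4-second bitmap plan shows, the trigram probe itself can still be the bottleneck on long, many-trigram phrases.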
[ { "msg_contents": "I see some long running transaction in my pg_activity_log table. My app\nbecomes almost unusable. My question is \nHow can I query the database to see what sql these transactions are\nrunning. \n\n\n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22\n20:37:33.937193+00\";\"127.0.0.1\";52466\n\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22\n19:39:11.670572+00\";\"127.0.0.1\";50961\n\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22\n19:39:11.704564+00\";\"127.0.0.1\";50964\n\nThanks\nRegards\nSachi\n\n\n\n\n\n\nLong running transaction in pg_activity_log\n\n\n\nI see some long running transaction in my pg_activity_log table. My app becomes almost unusable. My question is \nHow can I query the database to see what sql these transactions are running. \n\n\n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22 20:37:33.937193+00\";\"127.0.0.1\";52466\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22 19:39:11.670572+00\";\"127.0.0.1\";50961\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22 19:39:11.704564+00\";\"127.0.0.1\";50964\nThanks\nRegards\nSachi", "msg_date": "Wed, 22 Aug 2007 16:40:09 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Long running transaction in pg_activity_log" }, { "msg_contents": "I can see SQL in the current SQL column of pg_activity_log when I run\nsql from sql window or throgh odbc/jdbc connection but when sql is\nembedded in java beaans i can see those sql. Is there any way to\nextract those sql from the database.\n \n \n16385;\"em_db\";20220;16388;\"emsowner\";\"select * from\npg_stat_activity;\";f;\"2007-08-22 20:48:25.062663+00\";\"2007-08-22\n20:30:58.600489+00\";\"10.0.200.165\";4170\n \n\nRegards \nSachchida \n\n \n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Sachchida\nOjha\nSent: Wednesday, August 22, 2007 4:40 PM\nTo: [email protected]\nSubject: [PERFORM] Long running transaction in pg_activity_log\n\n\n\nI see some long running transaction in my pg_activity_log table. My app\nbecomes almost unusable. My question is \nHow can I query the database to see what sql these transactions are\nrunning. \n\n\n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22\n20:37:33.937193+00\";\"127.0.0.1\";52466\n\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22\n19:39:11.670572+00\";\"127.0.0.1\";50961\n\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in\ntransaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22\n19:39:11.704564+00\";\"127.0.0.1\";50964\n\nThanks \nRegards \nSachi \n\n\nLong running transaction in pg_activity_log\n\n\n\nI can see SQL in the current SQL column of \npg_activity_log when I run sql from sql window or throgh odbc/jdbc connection \nbut when sql is embedded in java beaans i can see those sql.  
Is there any \nway to extract those sql from the database.\n \n \n16385;\"em_db\";20220;16388;\"emsowner\";\"select * from \npg_stat_activity;\";f;\"2007-08-22 20:48:25.062663+00\";\"2007-08-22 \n20:30:58.600489+00\";\"10.0.200.165\";4170\n \nRegards Sachchida \n \n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of Sachchida \nOjhaSent: Wednesday, August 22, 2007 4:40 PMTo: \[email protected]: [PERFORM] Long running \ntransaction in pg_activity_log\n\nI see some long running transaction in my \npg_activity_log table. My app becomes almost unusable. My question is \nHow can I query the database to see what sql \nthese transactions are running. \n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in \ntransaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22 \n20:37:33.937193+00\";\"127.0.0.1\";52466\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in \ntransaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22 \n19:39:11.670572+00\";\"127.0.0.1\";50961\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in \ntransaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22 \n19:39:11.704564+00\";\"127.0.0.1\";50964\nThanks Regards Sachi", "msg_date": "Wed, 22 Aug 2007 16:54:29 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long running transaction in pg_activity_log" }, { "msg_contents": "[Sachchida Ojha - Wed at 04:40:09PM -0400]\n> I see some long running transaction in my pg_activity_log table. My app\n> becomes almost unusable. My question is \n> How can I query the database to see what sql these transactions are\n> running. \n\n\"<IDLE> in transaction\" means that no sql query is running at the\nmoment.\n\nMost probably it's a problem with the application, it starts a\ntransaction, but does not close it (through a commit or rollback). This\nis very harmful for the performance, as the hours, days and weeks pass\nby any database supporting transactions will get problems.\n\nRestarting the application and vacuuming is a one-time-shot which should\nsolve the problem for a short while. For a permanent fix, the\napplication needs to be fixed, or you'll have to ensure that the\nautocommit feature is used.\n", "msg_date": "Wed, 22 Aug 2007 23:08:58 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running transaction in pg_activity_log" } ]
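A rough query for spotting the sessions Tobias describes, assuming the 8.2-era pg_stat_activity column names (procpid, current_query, query_start); for a backend that is sitting idle, query_start refers to its last statement, so now() - query_start only approximates how long the transaction has been left open. The five-minute threshold is arbitrary.

    -- Sketch: backends idle in transaction for more than five minutes.
    SELECT procpid, usename, client_addr, query_start,
           now() - query_start AS roughly_idle_for
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
      AND now() - query_start > interval '5 minutes'
    ORDER BY query_start;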
[ { "msg_contents": "If I am not wrong, you must be asking about this:\n\nselect * from pg_stat_activity;\n\nIf you see <command string \n not enabled> in the query column. \n \n\nTurn on the stats_command_string option in postgresql.conf.\n\n\nRegards, \n\nFarhan \n\n\n \n\n\n\n----- Original Message ----\nFrom: Sachchida Ojha <[email protected]>\nTo: [email protected]\nSent: Thursday, 23 August, 2007 1:40:09 AM\nSubject: [PERFORM] Long running transaction in pg_activity_log\n\nLong running transaction in pg_activity_log\n\n\n \n \n\n\n\n\nI see some long running transaction in my pg_activity_log table. My app becomes almost unusable. My question is \n\n\nHow can I query the database to see what sql these transactions are running. \n\n\n\n\n\n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22 20:37:33.937193+00\";\"127.0.0.1\";52466\n\n\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22 19:39:11.670572+00\";\"127.0.0.1\";50961\n\n\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22 19:39:11.704564+00\";\"127.0.0.1\";50964\n\n\nThanks\n\n\nRegards\n\n\nSachi\n\n\n\n\n\n\n\n\n\n ___________________________________________________________ \nWant ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html\nIf I am not wrong, you must be asking about this:select * from pg_stat_activity;If you see <command string \n not enabled> in the query column. \n \nTurn on the stats_command_string option in postgresql.conf.Regards, Farhan \n----- Original Message ----From: Sachchida Ojha <[email protected]>To: [email protected]: Thursday, 23 August, 2007 1:40:09 AMSubject: [PERFORM] Long running transaction in pg_activity_logLong running transaction in pg_activity_log\nI see some long running transaction in my pg_activity_log table. My app becomes almost unusable. My question is \nHow can I query the database to see what sql these transactions are running. \n\n\n16385;\"em_db\";20893;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:38:06.527792+00\";\"2007-08-22 20:37:33.937193+00\";\"127.0.0.1\";52466\n16385;\"em_db\";15110;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:05:08.580643+00\";\"2007-08-22 19:39:11.670572+00\";\"127.0.0.1\";50961\n16385;\"em_db\";15113;16386;\"em_user\";\"<IDLE> in transaction\";f;\"2007-08-22 20:06:01.53394+00\";\"2007-08-22 19:39:11.704564+00\";\"127.0.0.1\";50964\nThanks\nRegards\nSachi\n\n\n \nYahoo! Answers - Get better answers from someone who knows. Try\nit now.", "msg_date": "Wed, 22 Aug 2007 21:58:40 +0000 (GMT)", "msg_from": "Farhan Mughal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long running transaction in pg_activity_log" } ]
[ { "msg_contents": "I'm testing the new asynch commit feature on various raid\nconfigurations and my early findings is that it reduces the impact of\nkeeping wal and data on the same volume. I have 10 disks to play\nwith, and am finding that it's faster to do a 10 drive raid 10 rather\nthan 8 drive raid 10 + two drive wal.\n\nanybody curious about the results, feel free to drop a line. I think\nthis will be a popular feature.\n\nmerlin\n", "msg_date": "Thu, 23 Aug 2007 09:09:00 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "asynchronous commit feature" }, { "msg_contents": "On Thu, Aug 23, 2007 at 09:09:00AM -0400, Merlin Moncure wrote:\n> I'm testing the new asynch commit feature on various raid\n> configurations and my early findings is that it reduces the impact of\n> keeping wal and data on the same volume. I have 10 disks to play\n> with, and am finding that it's faster to do a 10 drive raid 10 rather\n> than 8 drive raid 10 + two drive wal.\n> \n> anybody curious about the results, feel free to drop a line. I think\n> this will be a popular feature.\n\nWith or without a write cache on the RAID controller? I suspect that for\nmany systems, a write-caching controller will be very similar in\nperformance to async commit.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 27 Aug 2007 15:11:38 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: asynchronous commit feature" }, { "msg_contents": "On 8/27/07, Decibel! <[email protected]> wrote:\n> On Thu, Aug 23, 2007 at 09:09:00AM -0400, Merlin Moncure wrote:\n> > I'm testing the new asynch commit feature on various raid\n> > configurations and my early findings is that it reduces the impact of\n> > keeping wal and data on the same volume. I have 10 disks to play\n> > with, and am finding that it's faster to do a 10 drive raid 10 rather\n> > than 8 drive raid 10 + two drive wal.\n> >\n> > anybody curious about the results, feel free to drop a line. I think\n> > this will be a popular feature.\n>\n> With or without a write cache on the RAID controller? I suspect that for\n> many systems, a write-caching controller will be very similar in\n> performance to async commit.\n\nI usually only work with mid to high end hardware.\n\nThe platform I'm testing on is:\n10x146gb 15k rpm sas in Dell md1000\n2xperc 5/e in active/active (5 drives each controller)\n2x146gb 15krpm sas on the backplane perc 5/i (for o/s, wal)\n\nin my experience, even with a high end raid controller moving the wal\noffline is helpful in high activity systems, especially during\ncheckpoints.\n\nmerlin\n", "msg_date": "Mon, 27 Aug 2007 17:18:05 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: asynchronous commit feature" } ]
[ { "msg_contents": "Should installation questions be sent here or to the admin listserv?\n\n \n\nOS: redhat linux\n\nVersion of PostgreSQL: 8.2.4\n\n \n\nI had a group that now manages our server set up a directory/partition\nfor us to put postgreSQL into. The directory is called pgsql_data. The\ndirectory is more than a regular directory. It contains a subdirectory\ncalled \"lost+found\". I would assume this is a logical partition. I\ntried installing postgreSQL directly into this directory but it failed\nsince there is a file in this directory, \"lost+found\". Is there a way\naround this? Worst case scenario I will create a subdirectory called\ndata and put the install in there. I would have preferred to put it\ndirectly into the pgsql_data. There would be no other files that would\nhave gone into the directory/partition other than postgreSQL. Would it\nbe possible for me to install postgreSQL into a sub directory of\npgsql_data and then move the files up a directory into pgsql_data?\n\n \n\nThanks,\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nShould installation questions be sent here or to the admin\nlistserv?\n \nOS: redhat linux\nVersion of PostgreSQL: 8.2.4\n \nI had a group that now manages our server set up a\ndirectory/partition for us to put postgreSQL into.  The directory is\ncalled pgsql_data.  The directory is more than a regular directory. \nIt contains a subdirectory called “lost+found”.  I would\nassume this is a logical partition.  I tried installing postgreSQL\ndirectly into this directory but it failed since there is a file in this\ndirectory, “lost+found”.  Is there a way around this? \nWorst case scenario I will create a subdirectory called data and put the\ninstall in there.  I would have preferred to put it directly into the\npgsql_data.  There would be no other files that would have gone into the\ndirectory/partition other than postgreSQL.  Would it be possible for me to\ninstall postgreSQL into a sub directory of pgsql_data and then move the files\nup a directory into pgsql_data?\n \nThanks,\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Thu, 23 Aug 2007 11:47:15 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Installing PostgreSQL" }, { "msg_contents": "Campbell, Lance wrote:\n> Should installation questions be sent here or to the admin listserv?\n\nProbably the pgsql-general/admin/novice lists\n\n> OS: redhat linux\n\nRHES?\n\n> Version of PostgreSQL: 8.2.4\n\nOK\n\n> I had a group that now manages our server set up a directory/partition\n> for us to put postgreSQL into. The directory is called pgsql_data. The\n> directory is more than a regular directory. It contains a subdirectory\n> called \"lost+found\". I would assume this is a logical partition. \n\nNo - if you get filesystem corruption any recovered disk-blocks are put \ninto files here. All your disk partitions will have such a directory.\n\n > I\n> tried installing postgreSQL directly into this directory but it failed\n> since there is a file in this directory, \"lost+found\". Is there a way\n> around this? Worst case scenario I will create a subdirectory called\n> data and put the install in there.\n\nThat's what you want to do. 
Apart from anything else it lets you set \nownership & permission of the directory.\n\n > I would have preferred to put it\n> directly into the pgsql_data. There would be no other files that would\n> have gone into the directory/partition other than postgreSQL. Would it\n> be possible for me to install postgreSQL into a sub directory of\n> pgsql_data and then move the files up a directory into pgsql_data?\n\nJust symlink your directory to the correct place if that's what you want.\n\nPartition at: /mnt/pg_disk\nDirectory is: /mnt/pg_disk/data\nsymlink to: /var/db/data\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 23 Aug 2007 18:08:22 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installing PostgreSQL" }, { "msg_contents": "Richard,\nSo what you are saying is that if you install PostgeSQL into a data\ndirectory /abc/data you could then stop the database, move the files\ninto /def/data, and then start the database making sure to point to the\nnew data directory. PostgreSQL is therefore referencing its files\nrelative to the \"data\" directory the files are in.\n\nIs this a correct observation?\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Thursday, August 23, 2007 12:08 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Installing PostgreSQL\n\nCampbell, Lance wrote:\n> Should installation questions be sent here or to the admin listserv?\n\nProbably the pgsql-general/admin/novice lists\n\n> OS: redhat linux\n\nRHES?\n\n> Version of PostgreSQL: 8.2.4\n\nOK\n\n> I had a group that now manages our server set up a directory/partition\n> for us to put postgreSQL into. The directory is called pgsql_data.\nThe\n> directory is more than a regular directory. It contains a\nsubdirectory\n> called \"lost+found\". I would assume this is a logical partition. \n\nNo - if you get filesystem corruption any recovered disk-blocks are put \ninto files here. All your disk partitions will have such a directory.\n\n > I\n> tried installing postgreSQL directly into this directory but it failed\n> since there is a file in this directory, \"lost+found\". Is there a way\n> around this? Worst case scenario I will create a subdirectory called\n> data and put the install in there.\n\nThat's what you want to do. Apart from anything else it lets you set \nownership & permission of the directory.\n\n > I would have preferred to put it\n> directly into the pgsql_data. There would be no other files that\nwould\n> have gone into the directory/partition other than postgreSQL. 
Would\nit\n> be possible for me to install postgreSQL into a sub directory of\n> pgsql_data and then move the files up a directory into pgsql_data?\n\nJust symlink your directory to the correct place if that's what you\nwant.\n\nPartition at: /mnt/pg_disk\nDirectory is: /mnt/pg_disk/data\nsymlink to: /var/db/data\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 23 Aug 2007 12:19:25 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Installing PostgreSQL" }, { "msg_contents": "On Thu, 23 Aug 2007, Campbell, Lance wrote:\n\n> Should installation questions be sent here or to the admin listserv?\n\nadmin or general would be more appropriate for this type of question.\n\n> The directory is called pgsql_data. The directory is more than a \n> regular directory. It contains a subdirectory called \"lost+found\". I \n> would assume this is a logical partition.\n\nIt's a partition of some sort. lost+found shows up in the root directory \nof any partition you create, it's where damaged files found by fsck go. \nSee http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/lostfound.html for \nmore information.\n\n> I tried installing postgreSQL directly into this directory but it \n> failed since there is a file in this directory, \"lost+found\". Is there \n> a way around this? Worst case scenario I will create a subdirectory \n> called data and put the install in there.\n\nYou will have to create subdirectory in this new partition in order for \ninitdb to have a place it can work in. What you should probably do here \nis have your administrator rename the mount point to something more \ngeneric, like \"data\" or \"postgres\", to avoid confusion here; then you'd \nhave PGDATA pointing to data/pgsql_data or postgres/pgsql_data which won't \nbe as confusing.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 23 Aug 2007 13:25:42 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installing PostgreSQL" }, { "msg_contents": "Campbell, Lance wrote:\n> Richard,\n> So what you are saying is that if you install PostgeSQL into a data\n> directory /abc/data you could then stop the database, move the files\n> into /def/data, and then start the database making sure to point to the\n> new data directory. PostgreSQL is therefore referencing its files\n> relative to the \"data\" directory the files are in.\n> \n> Is this a correct observation?\n\nYes - provided:\n1. Ownership and permissions on the destination directory are correct\n2. You remember to stop the server when copying\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 23 Aug 2007 18:26:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installing PostgreSQL" }, { "msg_contents": "Richard,\nI was able to prove that it works. 
Thanks for your time.\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Thursday, August 23, 2007 12:26 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Installing PostgreSQL\n\nCampbell, Lance wrote:\n> Richard,\n> So what you are saying is that if you install PostgeSQL into a data\n> directory /abc/data you could then stop the database, move the files\n> into /def/data, and then start the database making sure to point to\nthe\n> new data directory. PostgreSQL is therefore referencing its files\n> relative to the \"data\" directory the files are in.\n> \n> Is this a correct observation?\n\nYes - provided:\n1. Ownership and permissions on the destination directory are correct\n2. You remember to stop the server when copying\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 23 Aug 2007 12:37:51 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Installing PostgreSQL" } ]
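A rough shell outline of the two approaches that came up in this thread — running initdb in a subdirectory of the mount point, or relocating an existing cluster and leaving a symlink behind. Paths, the postgres user name, and the symlink target are examples only, and the copy step assumes the server is stopped.

    # 1. Fresh cluster: give initdb a subdirectory to work in, with the right ownership.
    mkdir /pgsql_data/data
    chown postgres:postgres /pgsql_data/data
    chmod 700 /pgsql_data/data
    su - postgres -c 'initdb -D /pgsql_data/data'

    # 2. Or relocate an existing cluster (server stopped, ownership preserved).
    pg_ctl -D /abc/data stop
    cp -Rp /abc/data /pgsql_data/          # ends up as /pgsql_data/data
    mv /abc/data /abc/data.old             # keep the original until the copy is verified
    ln -s /pgsql_data/data /abc/data       # optional: the old path still resolves
    pg_ctl -D /pgsql_data/data start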
[ { "msg_contents": "Hi List;\n\nI've just started working with a new client and they have amoung other issues \nwith their databases a particular update that basically locks out users. \n\nThe below query was running for over 6 hours this morning and the CPU load had \nclimbed to a point where new connections simply hung waiting to connect. We \nhad to kill the query to allow business users to connect to the applications \nthat connect to the database, thus I could not post an explain analyze.\n\nIn any case the query looks like this:\n\nupdate dat_customer_mailbox_counts\nset total_contacts = contacts.ct,\ntotal_contact_users = contacts.dct\nfrom\n( select customer_id, count(*) as ct,\ncount( distinct con.user_id ) as dct\nfrom dat_user_contacts con\ngroup by customer_id )\ncontacts where contacts.customer_id = dat_customer_mailbox_counts.customer_id\n\nHere's the latest counts from the system catalogs:\n\ndat_customer_mailbox_counts: 423\ndat_user_contacts 59,469,476\n\n\n\nAnd here's an explain plan:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Merge Join (cost=17118858.51..17727442.30 rows=155 width=90)\n Merge Cond: (\"outer\".customer_id = \"inner\".customer_id)\n -> GroupAggregate (cost=17118772.93..17727347.34 rows=155 width=8)\n -> Sort (cost=17118772.93..17270915.95 rows=60857208 width=8)\n Sort Key: con.customer_id\n -> Seq Scan on dat_user_contacts con (cost=0.00..7332483.08 \nrows=60857208 width=8)\n -> Sort (cost=85.57..88.14 rows=1026 width=74)\n Sort Key: dat_customer_mailbox_counts.customer_id\n -> Seq Scan on dat_customer_mailbox_counts (cost=0.00..34.26 \nrows=1026 width=74)\n(9 rows)\n\n\nAny thoughts, comments, Ideas for debugging, etc would be way helpful...\n\nThanks in advance.\n\n/Kevin\n\n\n\n\n", "msg_date": "Thu, 23 Aug 2007 11:16:57 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "long-running query - needs tuning" }, { "msg_contents": "Kevin Kempter <[email protected]> writes:\n> Merge Join (cost=17118858.51..17727442.30 rows=155 width=90)\n> Merge Cond: (\"outer\".customer_id = \"inner\".customer_id)\n> -> GroupAggregate (cost=17118772.93..17727347.34 rows=155 width=8)\n> -> Sort (cost=17118772.93..17270915.95 rows=60857208 width=8)\n> Sort Key: con.customer_id\n> -> Seq Scan on dat_user_contacts con (cost=0.00..7332483.08 \n> rows=60857208 width=8)\n> -> Sort (cost=85.57..88.14 rows=1026 width=74)\n> Sort Key: dat_customer_mailbox_counts.customer_id\n> -> Seq Scan on dat_customer_mailbox_counts (cost=0.00..34.26 \n> rows=1026 width=74)\n\nThe planner, at least, thinks that all the time will go into the sort\nstep. Sorting 60M rows is gonna take awhile :-(. What PG version is\nthis? (8.2 has noticeably faster sort code than prior releases...)\nWhat have you got work_mem set to?\n\nBad as the sort is, I suspect that the real problem is the\ncount(distinct) operator, which is going to require *another*\nsort-and-uniq step for each customer_id group --- and judging by\nthe rowcount estimates, at least some of those groups must be\npretty large. (AFAIR this time is not counted in the planner\nestimates.) Again, work_mem would have an effect on how fast\nthat goes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2007 14:56:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long-running query - needs tuning " } ]
[ { "msg_contents": "I am having some dead locking problem with my app system. Our dev are\ndebugging the app to find out the cause of the problem. In the mean time\nI looked at postgresql.conf file. I found that there is a parameter in\npostgresql.conf file deadlock_timeout which was set 1000 (ms). Normally\nI see deadlock in the night or when auto vacuum is running for a long\ntime.\n\n \n\nMy question is \n\n \n\nWhat is the significance of this parameter and updating this parameter\nvalue will make any difference ? \n\n \n\nThanks\n\nRegards\n\nSachchida N Ojha\n\[email protected] <mailto:[email protected]> \n\nSecure Elements Incorporated\n\n198 Van Buren Street, Suite 110\n\nHerndon Virginia 20170-5338 USA\n\n \n\nhttp://www.secure-elements.com/ <http://www.secure-elements.com/> \n\n \n\n800-709-5011 Main\n\n703-709-2168 Direct\n\n703-709-2180 Fax\n\nThis email message and any attachment to this email message is intended\nonly for the use of the addressee(s) named above. If the reader of this\nmessage is not the intended recipient or the employee or agent\nresponsible for delivering the message to the intended recipient(s),\nplease note that any distribution or copying of this communication is\nstrictly prohibited. If you have received this email in error, please\nnotify me immediately and delete this message. Please note that if this\nemail contains a forwarded message or is a reply to a prior message,\nsome or all of the contents of this message or any attachments may not\nhave been produced by the sender.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI am having some dead locking problem with my app system.\nOur dev are debugging the app to find out the cause of the problem. In the mean\ntime I looked at postgresql.conf file. I found that there is a parameter in\npostgresql.conf file deadlock_timeout which was set 1000 (ms).  Normally I see\ndeadlock in the night or when auto vacuum is running for a long time.\n \nMy question is \n \nWhat is the significance of this parameter and updating this\nparameter value will make any difference ? \n \nThanks\nRegards\nSachchida N\nOjha\[email protected]\nSecure Elements\nIncorporated\n198 Van Buren Street, Suite 110\nHerndon Virginia 20170-5338 USA\n \nhttp://www.secure-elements.com/\n \n800-709-5011 Main\n703-709-2168\nDirect\n703-709-2180 Fax\nThis\nemail message and any attachment to this email message is intended only for the\nuse of the addressee(s) named above. If the reader of this message is not the\nintended recipient or the employee or agent responsible for delivering the\nmessage to the intended recipient(s), please note that any distribution or\ncopying of this communication is strictly prohibited.  If you have\nreceived this email in error, please notify me immediately and delete this\nmessage.  Please note that if this email contains a forwarded message\nor is a reply to a prior message, some or all of the contents of this message\nor any attachments may not have been produced by the sender.", "msg_date": "Thu, 23 Aug 2007 14:21:59 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "deadlock_timeout parameter in Postgresql.cof" } ]