[ { "msg_contents": "My situation is this. We have a semi-production server where we \npre-process data and then upload the finished data to our production \nservers. We need the fastest possible write performance. Having the DB \ngo corrupt due to power loss/OS crash is acceptable because we can \nalways restore from last night and re-run everything that was done since \nthen.\n\nI already have fsync off. Short of buying more hardware -- which I will \nprobably do anyways once I figure out whether I need more CPU, memory or \ndisk -- what else can I do to max out the speed? Operation mix is about \n50% select, 40% insert, 10% update.\n\n", "msg_date": "Sun, 23 Nov 2003 19:48:13 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu <[email protected]> writes:\n> [ we don't care about data integrity ]\n> I already have fsync off. Short of buying more hardware -- which I will \n> probably do anyways once I figure out whether I need more CPU, memory or \n> disk -- what else can I do to max out the speed? Operation mix is about \n> 50% select, 40% insert, 10% update.\n\nBatch operations so you commit more than one insert per transaction.\n(With fsync off, this isn't such a killer consideration as it would be\nwith fsync on, but the per-transaction overhead is still nontrivial.)\n\nGet rid of as many integrity constraints as you feel can reasonably be\npostponed to the final upload. FK checks are particularly painful.\n\nEliminate indexes where possible.\n\nAlso (I hate to say this, but...) you should consider using Some Other\nDatabase. \"I don't care about data integrity, only speed\" sounds like\na good fit to MySQL ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Nov 2003 23:21:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance? " }, { "msg_contents": "William,\n\n> I already have fsync off. Short of buying more hardware -- which I will\n> probably do anyways once I figure out whether I need more CPU, memory or\n> disk -- what else can I do to max out the speed? Operation mix is about\n> 50% select, 40% insert, 10% update.\n\nDisk. Multi-channel RAID is where it's at, and/or RAID with a great write \ncache enabled. For really fast updates, I'd suggest 6-disk or even 8-disk \nRAID 1+0.\n\nAs soon as you have gobs of extra disk space, jack your checkpoint_buffers way \nup, like a couple of gigs.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Nov 2003 20:29:04 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu wrote:\n> My situation is this. We have a semi-production server where we \n> pre-process data and then upload the finished data to our production \n> servers. We need the fastest possible write performance. Having the DB \n> go corrupt due to power loss/OS crash is acceptable because we can \n> always restore from last night and re-run everything that was done since \n> then.\n\nIf you can, use COPY -- it is far faster than INSERT.\n\nSee:\nhttp://www.postgresql.org/docs/current/static/sql-copy.html\n\nHTH,\n\nJoe\n\n\n", "msg_date": "Sun, 23 Nov 2003 20:40:15 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" 
}, { "msg_contents": "William Yu wrote:\n\n> My situation is this. We have a semi-production server where we \n> pre-process data and then upload the finished data to our production \n> servers. We need the fastest possible write performance. Having the DB \n> go corrupt due to power loss/OS crash is acceptable because we can \n> always restore from last night and re-run everything that was done since \n> then.\n> \n> I already have fsync off. Short of buying more hardware -- which I will \n> probably do anyways once I figure out whether I need more CPU, memory or \n> disk -- what else can I do to max out the speed? Operation mix is about \n> 50% select, 40% insert, 10% update.\n\nMount WAL on RAM disk. WAL is most often hit area for heavy updates/inserts. If \nyou spped that up, things should be pretty faster.\n\nA non-tried advice though. Given that you can afford a crash, I would say it is \nworth a try..\n\n Shridhar\n\n", "msg_date": "Mon, 24 Nov 2003 11:20:36 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu wrote:\n> My situation is this. We have a semi-production server where we \n> pre-process data and then upload the finished data to our production \n> servers. We need the fastest possible write performance. Having the DB \n> go corrupt due to power loss/OS crash is acceptable because we can \n> always restore from last night and re-run everything that was done since \n> then.\n> \n> I already have fsync off. Short of buying more hardware -- which I will \n> probably do anyways once I figure out whether I need more CPU, memory or \n> disk -- what else can I do to max out the speed? Operation mix is about \n> 50% select, 40% insert, 10% update.\n\nIn line with what Tom Lane said, you may want to look at the various\nmemory databases available (I'm not familiar with any one to recommend,\nthough) If you can fit the whole database in RAM, that would work\ngreat, if not, you may be able to split the DB up and put the most\nused tables just in the memory database.\n\nI have also seen a number tutorials on how to put a database on a\nRAM disk. This helps, but it's still not as fast as a database server\nthat's designed to keep all its data in RAM.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Mon, 24 Nov 2003 08:38:31 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "This is an intriguing thought which leads me to think about a similar \nsolution for even a production server and that's a solid state drive for \njust the WAL. What's the max disk space the WAL would ever take up? \nThere's quite a few 512MB/1GB/2GB solid state drives available now in \nthe ~$200-$500 range and if you never hit those limits...\n\nWhen my current job batch is done, I'll save a copy of the dir and give \nthe WAL on ramdrive a test. And perhaps even buy a Sandisk at the local \nstore and run that through the hooper.\n\n\nShridhar Daithankar wrote:\n> \n> Mount WAL on RAM disk. WAL is most often hit area for heavy \n> updates/inserts. If you spped that up, things should be pretty faster.\n\n", "msg_date": "Mon, 24 Nov 2003 09:19:20 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maximum Possible Insert Performance?" 
}, { "msg_contents": "William,\n\n> When my current job batch is done, I'll save a copy of the dir and give\n> the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local\n> store and run that through the hooper.\n\nWe'll be interested in the results. The Sandisk won't be much of a \nperformance test; last I checked, their access speed was about 1/2 that of a \nfast SCSI drive. But it could be a feasability test for the more expensive \nRAMdrive approach.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Nov 2003 09:25:58 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "Josh Berkus wrote:\n> William,\n> \n> \n>>When my current job batch is done, I'll save a copy of the dir and give\n>>the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local\n>>store and run that through the hooper.\n> \n> \n> We'll be interested in the results. The Sandisk won't be much of a \n> performance test; last I checked, their access speed was about 1/2 that of a \n> fast SCSI drive. But it could be a feasability test for the more expensive \n> RAMdrive approach.\n> \n\n\nThe SanDisks do seem a bit pokey at 16MBps. On the otherhand, you could \nget 4 of these suckers, put them in a mega-RAID-0 stripe for 64MBps. You \nshouldn't need to do mirroring with a solid state drive.\n\nTime to Google up some more solid state drive vendors.\n\n", "msg_date": "Mon, 24 Nov 2003 09:45:22 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William,\n\n> The SanDisks do seem a bit pokey at 16MBps. On the otherhand, you could\n> get 4 of these suckers, put them in a mega-RAID-0 stripe for 64MBps. You\n> shouldn't need to do mirroring with a solid state drive.\n\nI wouldn't count on RAID0 improving the speed of SANDisk's much. How are you \nconnecting to them? USB? USB doesn't support fast parallel data access.\n\nNow, if it turns out that 256MB ramdisks are less than 1/5 the cost of 1GB \nramdisks, then that's worth considering.\n\nYou're right, though, mirroring a solid state drive is pretty pointless; if \npower fails, both mirrors are dead. \n\nAs I said before, though, we're all very interested in this test. Using a \nramdisk for WAL has been discussed on this list numerous times but not \nattempted by anyone who published their results.\n\nAll that aside, though, I think you should also experiment with the Background \nWriter patch recently discussed on Hackers, as it may give you a performance \nboost as well.\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Nov 2003 10:04:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "Josh Berkus wrote:\n> William,\n> \n> \n>>The SanDisks do seem a bit pokey at 16MBps. On the otherhand, you could\n>>get 4 of these suckers, put them in a mega-RAID-0 stripe for 64MBps. You\n>>shouldn't need to do mirroring with a solid state drive.\n> \n> \n> I wouldn't count on RAID0 improving the speed of SANDisk's much. How are you \n> connecting to them? USB? USB doesn't support fast parallel data access.\n\nYou can get ATA SanDisks up to 2GB. Another vendor I checked out -- \nBitMicro -- has solid state drives for SATA, SCSI and FiberChannel. 
I'd \ndefinitely would not use USB SSDs -- USB performance would be so pokey \nto be useless.\n\n> Now, if it turns out that 256MB ramdisks are less than 1/5 the cost of 1GB \n> ramdisks, then that's worth considering.\n\nLooks like they're linear with size. SanDisk Flashdrive 1GB is about \n$1000 while 256MB is $250.\n\n> You're right, though, mirroring a solid state drive is pretty pointless; if \n> power fails, both mirrors are dead. \n\nActually no. Solid state memory is non-volatile. They retain data even \nwithout power.\n\n", "msg_date": "Mon, 24 Nov 2003 10:23:36 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu <[email protected]> writes:\n\n> > You're right, though, mirroring a solid state drive is pretty pointless; if\n> > power fails, both mirrors are dead.\n> \n> Actually no. Solid state memory is non-volatile. They retain data even without\n> power.\n\nNote that flash ram only has a finite number of write cycles before it fails.\n\nOn the other hand that might not be so bad for WAL which writes sequentially,\nyou can easily calculate how close you are to the maximum. For things like\nheap storage or swap it's awful as you can get hot spots that get written to\nthousands of times before the rest of the space is used.\n\n-- \ngreg\n\n", "msg_date": "24 Nov 2003 20:16:27 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu wrote:\n\n> This is an intriguing thought which leads me to think about a similar \n> solution for even a production server and that's a solid state drive for \n> just the WAL. What's the max disk space the WAL would ever take up? \n> There's quite a few 512MB/1GB/2GB solid state drives available now in \n> the ~$200-$500 range and if you never hit those limits...\n\nMaximum number of WAL segments at any time in 2*(number of checkpoint \nsegments)+1 IIRC.\n\nSo if you have 3 checkpoint segments, you can not have more than 7 WAL segments \nat any time. Give or take 1.\n\nCorrect me if I am wrong..\n\n Shridhar\n\n", "msg_date": "Tue, 25 Nov 2003 11:36:50 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> William Yu wrote:\n>> This is an intriguing thought which leads me to think about a similar \n>> solution for even a production server and that's a solid state drive for \n>> just the WAL. What's the max disk space the WAL would ever take up? \n\n> Maximum number of WAL segments at any time in 2*(number of checkpoint \n> segments)+1 IIRC.\n> So if you have 3 checkpoint segments, you can not have more than 7 WAL\n> segments at any time. Give or take 1.\n\nI don't believe that's a *hard* limit. The system tries to schedule\ncheckpoints often enough to prevent WAL from getting bigger than that,\nbut if you had a sufficiently big spike in update activity, it's at\nleast theoretically possible that more than checkpoint_segments segments\ncould be filled before the concurrently running checkpoint finishes and\nreleases some old segments.\n\nThe odds of this being a real problem are small, especially if you don't\ntry to fit on an undersized SSD by reducing checkpoint_segments. 
I'd\nthink that a 512Mb SSD would be plenty of space for ordinary update load\nlevels ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2003 10:47:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance? " }, { "msg_contents": "On Mon, 2003-11-24 at 19:16, Greg Stark wrote:\n> William Yu <[email protected]> writes:\n> \n> > > You're right, though, mirroring a solid state drive is pretty pointless; if\n> > > power fails, both mirrors are dead.\n> > \n> > Actually no. Solid state memory is non-volatile. They retain data even without\n> > power.\n> \n> Note that flash ram only has a finite number of write cycles before it fails.\n> \n> On the other hand that might not be so bad for WAL which writes sequentially,\n> you can easily calculate how close you are to the maximum. For things like\n> heap storage or swap it's awful as you can get hot spots that get written to\n> thousands of times before the rest of the space is used.\n\nI could be wrong, but I was under the impression that most of the newer\nflash disks tended to spread writes out over the drive so that hotspots\nare minimized. \n\n-- \nSuchandra Thapa <[email protected]>", "msg_date": "25 Nov 2003 12:59:54 -0600", "msg_from": "Suchandra Thapa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "Josh Berkus wrote:\n> William,\n> \n> \n>>When my current job batch is done, I'll save a copy of the dir and give\n>>the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local\n>>store and run that through the hooper.\n> \n> \n> We'll be interested in the results. The Sandisk won't be much of a \n> performance test; last I checked, their access speed was about 1/2 that of a \n> fast SCSI drive. But it could be a feasability test for the more expensive \n> RAMdrive approach.\n\nSome initial numbers. I simulated a CPU increase by underclocking the \nprocessors. Most of the time, performance does not scale linearly with \nclock speed but since I also underclocked the FSB and memory bandwidth \nwith the CPU, it's nearly an exact match.\n\n1.15GHz 6.14\n1.53GHz 6.97 +33% CPU = +13.5% performance\n\nI then simulated adding a heapload of extra memory by running my job a \nsecond time. Unfortunately, to keep my 25GB DB mostly cached in memory, \nthe word heapload is too accurate.\n\nRun 1 6.97\nRun 2 7.99 +14%\n\nI popped in an extra IDE hard drive to store the WAL files and that \nboosted the numbers by a little. From looking at iostat, the ratio \nlooked like 300K/s WAL for 1MB/s data.\n\nWAL+Data on same disk 6.97\nWAL+Data separated 7.26 +4%\n\nI then tried to put the WAL directory onto a ramdisk. I turned off \nswapping, created a tmpfs mount point and copied the pg_xlog directory \nover. Everything looked fine as far as I could tell but Postgres just \npanic'd with a \"file permissions\" error. Anybody have thoughts to why \ntmpfs would not work?\n\n", "msg_date": "Wed, 26 Nov 2003 08:44:59 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "William Yu <[email protected]> writes:\n> I then tried to put the WAL directory onto a ramdisk. I turned off \n> swapping, created a tmpfs mount point and copied the pg_xlog directory \n> over. Everything looked fine as far as I could tell but Postgres just \n> panic'd with a \"file permissions\" error. 
Anybody have thoughts to why \n> tmpfs would not work?\n\nI'd say you got the file or directory ownership or permissions wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Nov 2003 12:55:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance? " }, { "msg_contents": "Tom Lane wrote:\n> William Yu <[email protected]> writes:\n> \n>>I then tried to put the WAL directory onto a ramdisk. I turned off \n>>swapping, created a tmpfs mount point and copied the pg_xlog directory \n>>over. Everything looked fine as far as I could tell but Postgres just \n>>panic'd with a \"file permissions\" error. Anybody have thoughts to why \n>>tmpfs would not work?\n> \n> \n> I'd say you got the file or directory ownership or permissions wrong.\n\nI did a mv instead of a cp which duplicates ownership & permissions exactly.\n\n", "msg_date": "Wed, 26 Nov 2003 10:03:47 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maximum Possible Insert Performance?" }, { "msg_contents": "\nBut the permissions of the base ramdisk might be wrong. I'd su to the\nuser that you run postgres as (probably postgres), and make sure that\nyou can go to the directory where the log and the database files are and\nmake sure you can see the files.\n\nOn Wed, Nov 26, 2003 at 10:03:47AM -0800, William Yu wrote:\n> Tom Lane wrote:\n> >William Yu <[email protected]> writes:\n> >\n> >>I then tried to put the WAL directory onto a ramdisk. I turned off \n> >>swapping, created a tmpfs mount point and copied the pg_xlog directory \n> >>over. Everything looked fine as far as I could tell but Postgres just \n> >>panic'd with a \"file permissions\" error. Anybody have thoughts to why \n> >>tmpfs would not work?\n> >\n> >\n> >I'd say you got the file or directory ownership or permissions wrong.\n> \n> I did a mv instead of a cp which duplicates ownership & permissions exactly.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Wed, 26 Nov 2003 10:54:21 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maximum Possible Insert Performance?" } ]
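
A minimal SQL sketch of the two suggestions from the thread above that translate most directly into statements: committing many inserts per transaction, and using COPY for bulk loads. The table staging_data, its columns, and the file path are illustrative assumptions, not details taken from the posters' systems.

    -- Hypothetical staging table, used only for illustration.
    CREATE TABLE staging_data (id integer, payload text);

    -- Many inserts per transaction: one commit instead of one per row.
    BEGIN;
    INSERT INTO staging_data (id, payload) VALUES (1, 'row one');
    INSERT INTO staging_data (id, payload) VALUES (2, 'row two');
    -- ... thousands more rows ...
    COMMIT;

    -- COPY is faster still; the file must be readable by the server process,
    -- and the default format is tab-delimited text.
    COPY staging_data FROM '/tmp/staging_data.dat';

Dropping or deferring indexes and foreign-key constraints during the load, as Tom Lane suggests above, compounds the gain because each inserted row then skips the per-row index and FK work.
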
[ { "msg_contents": "\nI am sure there is no transaction open with the table banner_stats2.\nStill VACUUM FULL does not seems to effective in removing the\ndead rows.\n\nCan any one please help?\n\nRegds\nmallah\n\ntradein_clients=# VACUUM FULL verbose banner_stats2 ;\nINFO: vacuuming \"public.banner_stats2\"\nINFO: \"banner_stats2\": found 0 removable, 741912 nonremovable row versions in \n6710 pages\nDETAIL: 737900 dead row versions cannot be removed yet.\nNonremovable row versions range from 61 to 72 bytes long.\nThere were 120 unused item pointers.\nTotal free space (including removable row versions) is 246672 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n557 pages containing 61344 free bytes are potential move destinations.\nCPU 0.15s/1.23u sec elapsed 1.38 sec.\nINFO: index \"banner_stats_pkey\" now contains 741912 row versions in 2173 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.05u sec elapsed 0.09 sec.\nINFO: \"banner_stats2\": moved 0 row versions, truncated 6710 to 6710 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\ntradein_clients=#\n\n", "msg_date": "Mon, 24 Nov 2003 22:43:59 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM problems with 7.4" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> I am sure there is no transaction open with the table banner_stats2.\n> Still VACUUM FULL does not seems to effective in removing the\n> dead rows.\n\nThat is not the issue --- the limiting factor is what is your oldest\nopen transaction, period. Whether it has yet looked at this table is\nnot relevant, because the system has no way to know whether it might\ndecide to do so later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Nov 2003 12:44:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM problems with 7.4 " }, { "msg_contents": "> Rajesh Kumar Mallah <[email protected]> writes:\n>> I am sure there is no transaction open with the table banner_stats2. Still VACUUM FULL does\n>> not seems to effective in removing the\n>> dead rows.\n>\n> That is not the issue --- the limiting factor is what is your oldest open transaction, period.\n> Whether it has yet looked at this table is not relevant, because the system has no way to know\n> whether it might decide to do so later.\n\n\nOk , shutting down the database and vacumming immediatly after starting\nhelped .\n\nBut it was not this bad in 7.3 as far as i understand. Is it something that\nhas come up in 7.4 only , if so any solution to this issue?\n\nBTW can you please tell me if its safe to upgrade from RC2 to 7.4 final\nwithout initdb? 
[ i am still on RC2 :( ]\n\n\nRegds\nMallah.\n\n\n\n\n\nAFTER RESTARTING DATABASE:\n\ntradein_clients=# VACUUM FULL verbose banner_stats2 ;\nINFO: vacuuming \"public.banner_stats2\"\nINFO: \"banner_stats2\": found 737900 removable, 4012 nonremovable row versions in 6710 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 61 to 72 bytes long.\nThere were 120 unused item pointers.\nTotal free space (including removable row versions) is 51579272 bytes.\n6387 pages are or will become empty, including 0 at the end of the table.\n6686 pages containing 51578312 free bytes are potential move destinations.\nCPU 0.17s/0.09u sec elapsed 0.26 sec.\nINFO: index \"banner_stats_pkey\" now contains 4012 row versions in 2165 pages\nDETAIL: 737900 index row versions were removed.\n1813 index pages have been deleted, 1813 are currently reusable.\nCPU 0.16s/1.58u sec elapsed 1.97 sec.\nINFO: \"banner_stats2\": moved 785 row versions, truncated 6710 to 38 pages\nDETAIL: CPU 0.17s/0.54u sec elapsed 8.30 sec.\nINFO: index \"banner_stats_pkey\" now contains 4012 row versions in 2165 pages\nDETAIL: 785 index row versions were removed.\n1821 index pages have been deleted, 1821 are currently reusable.\nCPU 0.00s/0.02u sec elapsed 0.50 sec.\nVACUUM\ntradein_clients=#\n\n\n\ntradein_clients=# VACUUM FULL verbose banner_stats2 ;\nINFO: vacuuming \"public.banner_stats2\"\nINFO: \"banner_stats2\": found 0 removable, 4012 nonremovable row versions in 38 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 61 to 72 bytes long.\nThere were 100 unused item pointers.\nTotal free space (including removable row versions) is 7368 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n2 pages containing 5984 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"banner_stats_pkey\" now contains 4012 row versions in 2165 pages\nDETAIL: 0 index row versions were removed.\n1821 index pages have been deleted, 1821 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"banner_stats2\": moved 0 row versions, truncated 38 to 38 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\ntradein_clients=#\n\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)--------------------------- TIP 2: you can get off\n> all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Mon, 24 Nov 2003 23:34:06 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM problems with 7.4" }, { "msg_contents": "<[email protected]> writes:\n> But it was not this bad in 7.3 as far as i understand.\n\nNo, I believe this behavior is present in any recent release of\nPostgreSQL.\n\n-Neil\n\n", "msg_date": "Mon, 24 Nov 2003 21:27:40 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM problems with 7.4" } ]
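
Tom Lane's point in the thread above is that VACUUM can only reclaim rows older than the oldest transaction still open anywhere in the cluster. A quick way to look for such a transaction on recent PostgreSQL releases is sketched below; the column names (pid, state, xact_start, query) come from modern versions of pg_stat_activity and did not exist on the 7.4-era server discussed here.

    -- Sessions holding a transaction open; an old xact_start is the usual culprit.
    SELECT pid, usename, state, xact_start, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;

    -- Once the old transaction is gone, a plain VACUUM can reclaim the dead rows.
    VACUUM VERBOSE banner_stats2;
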
[ { "msg_contents": "Torsten Schulz wrote:\n\n> Yes, I know: very difficult question, but I don't know what to do now.\n> \n> Our Server:\n> Dual-CPU with 1.2 GHz\n> 1.5 GB RAM\n> \n> Our Problem: We are a Community. Between 19 and 21 o clock we have >350 \n> User in the Community. But then, the Database are very slow. And we have \n> per CPU ~20-30% idle-time.\n\nMay we know the postgres version that you are running and\nsee the query that run slow ?\nIs also usefull take a look at your postgresql configuration.\nYou can see doing select * from pg_stat_activity the\nqueries that are currently running on your server, and\ndo a explain analize on it to see which one is the\nbottleneck. If you are running the 7.4 you can see on\nthe log the total ammount for each query.\n\nLet us know.\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Mon, 24 Nov 2003 21:20:11 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize" }, { "msg_contents": "Yes, I know: very difficult question, but I don't know what to do now.\n\nOur Server:\nDual-CPU with 1.2 GHz\n1.5 GB RAM\n\nOur Problem: We are a Community. Between 19 and 21 o clock we have >350 \nUser in the Community. But then, the Database are very slow. And we have \nper CPU ~20-30% idle-time.\n\nHas anyone an idea what's the best configuration for thta server?\n\nMany Greetings\nT. Schulz (with very bad english, i know)\n\n", "msg_date": "Mon, 24 Nov 2003 21:48:22 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Optimize" }, { "msg_contents": "Torsten Schulz wrote:\n\n> Gaetano Mendola wrote:\n> \n>> Torsten Schulz wrote:\n>>\n>>> Yes, I know: very difficult question, but I don't know what to do now.\n>>>\n>>> Our Server:\n>>> Dual-CPU with 1.2 GHz\n>>> 1.5 GB RAM\n>>>\n>>> Our Problem: We are a Community. Between 19 and 21 o clock we have \n>>> >350 User in the Community. But then, the Database are very slow. 
And \n>>> we have per CPU ~20-30% idle-time.\n>>\n>>\n>>\n>> May we know the postgres version that you are running and\n>> see the query that run slow ?\n> \n> \n> Postgres: 7.3.2\n> Query: All queries\n> \n> Configuration:\n> max_connections = 1000 # Must be, if lower then 500 we become \n> connection-errors\n> shared_buffers = 5000 # 2*max_connections, min 16\n> max_fsm_relations = 1000 # min 10, fsm is free space map\n> max_fsm_pages = 2000000 # min 1000, fsm is free space map\n> max_locks_per_transaction = 64 # min 10\n> wal_buffers = 2000 # min 4\n> \n> sort_mem = 32768 # min 32\n> vacuum_mem = 32768 # min 1024\n> \n> fsync = false\n> \n> enable_seqscan = true\n> enable_indexscan = true\n> enable_tidscan = true\n> enable_sort = true\n> enable_nestloop = true\n> enable_mergejoin = true\n> enable_hashjoin = true\n> \n> effective_cache_size = 96000 # default in 8k pages\n\nWith 500 connection at the sime time 32MB for sort_mem can be too much.\nWhat say \"iostat 1\" and \"vmstat 1\" ?\n\nTry also to reduce this costs:\n\nrandom_page_cost = 2.5\ncpu_tuple_cost = 0.005\ncpu_index_tuple_cost = 0.0005\n\n\nBTW take a query and show us the result of explain analyze.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 24 Nov 2003 21:56:17 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize" }, { "msg_contents": "Gaetano Mendola wrote:\n\n> Torsten Schulz wrote:\n>\n>> Yes, I know: very difficult question, but I don't know what to do now.\n>>\n>> Our Server:\n>> Dual-CPU with 1.2 GHz\n>> 1.5 GB RAM\n>>\n>> Our Problem: We are a Community. Between 19 and 21 o clock we have \n>> >350 User in the Community. But then, the Database are very slow. And \n>> we have per CPU ~20-30% idle-time.\n>\n>\n> May we know the postgres version that you are running and\n> see the query that run slow ?\n\nPostgres: 7.3.2\nQuery: All queries\n\nConfiguration:\nmax_connections = 1000 # Must be, if lower then 500 we become \nconnection-errors\nshared_buffers = 5000 # 2*max_connections, min 16\nmax_fsm_relations = 1000 # min 10, fsm is free space map\nmax_fsm_pages = 2000000 # min 1000, fsm is free space map\nmax_locks_per_transaction = 64 # min 10\nwal_buffers = 2000 # min 4\n\nsort_mem = 32768 # min 32\nvacuum_mem = 32768 # min 1024\n\nfsync = false\n\nenable_seqscan = true\nenable_indexscan = true\nenable_tidscan = true\nenable_sort = true\nenable_nestloop = true\nenable_mergejoin = true\nenable_hashjoin = true\n\neffective_cache_size = 96000 # default in 8k pages\n\n\nThat are all uncommented lines. I've found the values in internet and \nhad tested it. But in performance are no difference between old \nconfiguration an this.\n\n> Is also usefull take a look at your postgresql configuration.\n> You can see doing select * from pg_stat_activity the\n> queries that are currently running on your server, and\n> do a explain analize on it to see which one is the\n> bottleneck. If you are running the 7.4 you can see on\n> the log the total ammount for each query.\n>\nI'll show tomorrow for this, today it is too late, the performance is \nnow perfect. It's only slow on this 2 hours with so many users on server.\n\nOh, and i can't update to 7.4. 
The Chat don't run with libraries of 7.4\n\n", "msg_date": "Mon, 24 Nov 2003 22:33:05 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize" }, { "msg_contents": "On Mon, 24 Nov 2003, Torsten Schulz wrote:\n\n> sort_mem = 32768 # min 32\n\n32 meg per sort can be a lot in total if you have many clients sorting \nthings. I assume you have checked so that the computer is not pushed into \nswapping when you have the peak with lots of users. A swapping computer is \nnever fast.\n\nUsing some swap space is not bad, but a lot of page in and page out to the\nswap is not good.\n\n-- \n/Dennis\n\n", "msg_date": "Mon, 24 Nov 2003 22:45:44 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize" }, { "msg_contents": "Torsten Schulz <[email protected]> writes:\n> Our Server:\n> Dual-CPU with 1.2 GHz\n> 1.5 GB RAM\n\nWhat kind of I/O subsystem is in this machine? This is an x86 machine,\nright?\n\n> Has anyone an idea what's the best configuration for thta server?\n\nIt is difficult to say until you provide some information on the\nsystem's state during periods of heavy traffic.\n\nBTW, in addition to the machine's hardware configuration, have you\nlooked at tuning the queries running on PostgreSQL? What about the OS\nkernel?\n\n-Neil\n\n", "msg_date": "Mon, 24 Nov 2003 21:24:45 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize" }, { "msg_contents": "Hi folks,\n\nDisclaimer: I am relatively new to RDBMSs, so please do not laugh at me \ntoo loudly, you can laugh, just not too loudly and please do not point. :)\n\nI am working on an Automated Installer Testing System for Adobe Systems \nand I am doing a DB redesign of the current postgres db:\n\n1. We are testing a matrix of over 900 Acrobat installer configurations \nand we are tracking every file and registry entry that is affected by an \ninstallation.\n\n2. a single file or registry entry that is affected by any test is \nstored in the db as a record.\n\n3. a typical record is about 12 columns of string data. the data is all \ninformation about a file (mac or windows) or windows registry entry [ \nfile or regkey name, file size, modification date, checksum, \npermissions, owner, group, and in the case of a mac, we are getting all \nthe hfs atts as well].\n\n4. A typical test produces anywhere from 2000 - 5000 records.\n\n\nOur db is getting to be a respectable size (about 10GB right now) and is \ngrowing slower and slower. I have been charged with making it faster and \nwith a smaller footprint while retaining all of the current \nfunctionality. here is one of my ideas. Please tell me if I am crazy:\n\nThe strings that we are storing (mentioned in 3 above) are extremely \nrepetitive. for example, there are a limited number of permissions for \nthe files in the acrobat installer and we are storing this information \nover and over again in the tables. The same goes for filenames, registry \nkey names and almost all of the data we are storing. So it seems to me \nthat to create a smaller and faster database we could assign an integer \nto each string and just store the integer representation of the string \nrather than the string itself. Then we would just store the strings in \na separate table one time and do join queries against the tables that \nare holding the strings and the main data tables. 
for example,\n\na table that would hold unique permissions strings would look like\n\ntable: perms_strs\n\nstring | id\n---------------------\n'drwxr-xr-x' | 1\n'-rw-------' | 2\n'drwxrwxr-x' | 3\n'-rw-r--r--' | 4\n\nthen in my data I would just store 1,2,3 or 4 instead of the whole \npermissions string.\n\nit seems to me that we would save lots of space and over time not see \nthe same performance degradation.\n\nanyways, please tell me if this makes sense and make any other \nsuggestions that you can think of. I am just now starting this analysis \nso I cannot give specifics as to where we are seeing poor performance \njust yet. just tell me if my concepts are correct.\n\nthanks for your time and for suffering this email.\n\nchao,\n\n-Shane\n\n", "msg_date": "Tue, 25 Nov 2003 10:42:47 -0800", "msg_from": "shane hill <[email protected]>", "msg_from_op": false, "msg_subject": "design question: general db performance" }, { "msg_contents": "[small chuckle]\n\nBy George, I think he's got it!\n\nYou are on the right track. Have a look at this link on database\nnormalization for more info:\n\nhttp://databases.about.com/library/weekly/aa080501a.htm \n\n\n\nOn Tue, 2003-11-25 at 10:42, shane hill wrote:\n> Hi folks,\n> \n> Disclaimer: I am relatively new to RDBMSs, so please do not laugh at me \n> too loudly, you can laugh, just not too loudly and please do not point. :)\n> \n\n[snip]\n\n-- \nJord Tanner <[email protected]>\n\n", "msg_date": "25 Nov 2003 10:58:28 -0800", "msg_from": "Jord Tanner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: design question: general db performance" }, { "msg_contents": "Shane,\n\n> Disclaimer: I am relatively new to RDBMSs, so please do not laugh at me \n> too loudly, you can laugh, just not too loudly and please do not point. :)\n\nHey, we all started somewhere. Nobody was born knowing databases. Except \nmaybe Neil Conway.\n\n> I am working on an Automated Installer Testing System for Adobe Systems \n> and I am doing a DB redesign of the current postgres db:\n\nCool! We're going to want to talk to you about a case study later, if you \ncan get your boss to authorize it ....\n\n> Our db is getting to be a respectable size (about 10GB right now) and is \n> growing slower and slower. \n\nSlower and slower? Hmmm ... what's your VACUUM. ANALYZE & REINDEX schedule? \nWhat PG version? What are your postgresql.conf settings? Progressive \nperformance loss may indicate a problem with one or more of these things ...\n\n> then in my data I would just store 1,2,3 or 4 instead of the whole \n> permissions string.\n> \n> it seems to me that we would save lots of space and over time not see \n> the same performance degradation.\n\nYes, this is a good idea. Abstracting other repetitive data is good too. \nAlso keep in mind that the permissions themselves can be represented as octal \nnumbers instead of strings, which takes less space.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 25 Nov 2003 11:12:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: design question: general db performance" }, { "msg_contents": "On Tue, 25 Nov 2003 10:42:47 -0800\nshane hill <[email protected]> wrote:\n\n> Our db is getting to be a respectable size (about 10GB right now) and\n> is growing slower and slower. I have been charged with making it\n> faster and with a smaller footprint while retaining all of the current\n> functionality. here is one of my ideas. 
Please tell me if I am\n> crazy:\n> \n\nWhat exactly is it getting slower doing?\n\nHave you run through the usual gamut of things to check - shared\nbuffers, vacuum analyzig, etc. etc.\n\nWhat ver of PG?\n\nWhat OS?\n\nCan you post any schema/queries?\n\nNormalizing can help. But I don't think it is going to be a magical\nbullet that will make the DB instantly fast. It will reduce the size of\nit though.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 25 Nov 2003 14:23:18 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: design question: general db performance" }, { "msg_contents": "On Tuesday 25 November 2003 18:42, shane hill wrote:\n>\n> Our db is getting to be a respectable size (about 10GB right now) and is\n> growing slower and slower. I have been charged with making it faster and\n> with a smaller footprint while retaining all of the current\n> functionality. here is one of my ideas. Please tell me if I am crazy:\n\nYour idea of using an integer makes sense - that's how it is stored on unix \nanyway.\n\nAre you familiar with VACUUM/VACUUM FULL/REINDEX and when you should use them? \nIf not, that's a good place to start. Try a VACUUM FULL on frequently updated \ntables and see if that reduces your disk size.\n\nYou'll probably want to check the performance notes too: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 25 Nov 2003 19:27:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: design question: general db performance" } ]
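
Shane's idea of replacing repeated strings with integer keys is ordinary normalization; a minimal sketch follows. The perms_strs table comes from his message, while file_entries and its columns are assumptions for illustration, not his real schema.

    -- Lookup table holding each distinct permissions string exactly once.
    CREATE TABLE perms_strs (
        id  serial PRIMARY KEY,
        str text NOT NULL UNIQUE
    );

    -- Hypothetical data table stores only the small integer key.
    CREATE TABLE file_entries (
        file_name text,
        perm_id   integer NOT NULL REFERENCES perms_strs (id)
    );

    -- Queries join back to recover the original string.
    SELECT f.file_name, p.str
    FROM file_entries f
    JOIN perms_strs p ON p.id = f.perm_id
    WHERE p.str = '-rw-r--r--';

The join costs little because the lookup table stays tiny, while the wide table shrinks and its index entries become small integers.
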
[ { "msg_contents": "I've scanned some of the archives and have learned a lot about different performance tuning practices. I will be looking into using many of these ideas but I'm not sure they address the issue I am currently experiencing.\n\nFirst, I'm a total newb with postgresql. Like many before me, I have inherited many responsibilities outside of my original job description due to layoffs. I am now the sole developer/support for a software product. *sigh* This product uses postgresql. I am familiar with the basics of sql and have worked on the database and code for the software but by no means am I proficient with postgresql.\n\nThe archives of this list provides many ideas for improving performance, but the problem we are having is gradually degrading performance ending in postgres shutting down. So it's not a matter of optimizing a complex query to take 5 seconds instead of 60 seconds. From what I can tell we are using the VACUUM command on a schedule but it doesn't seem to prevent the database from becoming \"congested\" as we refer to it. :] Anyway, the only way I know to \"fix\" the problem is to export (pg_dump) the db, drop the database, recreate the database and import the dump. This seems to return performance back to normal but obviously isn't a very good \"solution\". The slowdown and subsequent crash can take as little as 1 week for databases with a lot of data or go as long as a few weeks to a month for smaller data sets.\n\nI don't really know where to start looking for a solution. Any advice on where to start, understanding that I am a newb, would be greatly appreciated. Thank you.\n\nNid\n\n\n\n\n\n\nI've scanned some of the archives and have learned \na lot about different performance tuning practices.  I will be looking into \nusing many of these ideas but I'm not sure they address the issue I am currently \nexperiencing.\n \nFirst, I'm a total newb with postgresql.  Like \nmany before me, I have inherited many responsibilities outside of my original \njob description due to layoffs.  I am now the sole developer/support for a \nsoftware product.  *sigh*  This product uses postgresql.  I am \nfamiliar with the basics of sql and have worked on the database and code for the \nsoftware but by no means am I proficient with postgresql.\n \nThe archives of this list provides many ideas for \nimproving performance, but the problem we are having is gradually degrading \nperformance ending in postgres shutting down.  So it's not a matter of \noptimizing a complex query to take 5 seconds instead of 60 seconds.  \n>From what I can tell we are using the VACUUM command on a schedule but it \ndoesn't seem to prevent the database from becoming \"congested\" as we refer to \nit.  :]  Anyway, the only way I know to \"fix\" the problem is to export \n(pg_dump) the db, drop the database, recreate the database and import the \ndump.  This seems to return performance back to normal but obviously isn't \na very good \"solution\".  The slowdown and subsequent crash can take as \nlittle as 1 week for databases with a lot of data or go as long as a few weeks \nto a month for smaller data sets.\n \nI don't really know where to start looking for a \nsolution.  Any advice on where to start, understanding that I am a newb, \nwould be greatly appreciated.  Thank you.\n \nNid", "msg_date": "Mon, 24 Nov 2003 16:03:17 -0600", "msg_from": "\"MK Spam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Where to start for performance problem?" 
}, { "msg_contents": "My apologies for the \"From\" name of MK Spam. That references an email account I made for signing up for things on the net. :]\n\nNid\n ----- Original Message ----- \n From: MK Spam \n To: [email protected] \n Sent: Monday, November 24, 2003 4:03 PM\n Subject: [PERFORM] Where to start for performance problem?\n\n\n I've scanned some of the archives and have learned a lot about different performance tuning practices. I will be looking into using many of these ideas but I'm not sure they address the issue I am currently experiencing.\n\n First, I'm a total newb with postgresql. Like many before me, I have inherited many responsibilities outside of my original job description due to layoffs. I am now the sole developer/support for a software product. *sigh* This product uses postgresql. I am familiar with the basics of sql and have worked on the database and code for the software but by no means am I proficient with postgresql.\n\n The archives of this list provides many ideas for improving performance, but the problem we are having is gradually degrading performance ending in postgres shutting down. So it's not a matter of optimizing a complex query to take 5 seconds instead of 60 seconds. From what I can tell we are using the VACUUM command on a schedule but it doesn't seem to prevent the database from becoming \"congested\" as we refer to it. :] Anyway, the only way I know to \"fix\" the problem is to export (pg_dump) the db, drop the database, recreate the database and import the dump. This seems to return performance back to normal but obviously isn't a very good \"solution\". The slowdown and subsequent crash can take as little as 1 week for databases with a lot of data or go as long as a few weeks to a month for smaller data sets.\n\n I don't really know where to start looking for a solution. Any advice on where to start, understanding that I am a newb, would be greatly appreciated. Thank you.\n\n Nid\n\n\n\n\n\n\n\nMy apologies for the \"From\" name of MK Spam.  \nThat references an email account I made for signing up for things on the \nnet.  :]\n \nNid\n\n----- Original Message ----- \nFrom:\nMK Spam\n\nTo: [email protected]\n\nSent: Monday, November 24, 2003 4:03 \n PM\nSubject: [PERFORM] Where to start for \n performance problem?\n\nI've scanned some of the archives and have \n learned a lot about different performance tuning practices.  I will be \n looking into using many of these ideas but I'm not sure they address the issue \n I am currently experiencing.\n \nFirst, I'm a total newb with postgresql.  \n Like many before me, I have inherited many responsibilities outside of my \n original job description due to layoffs.  I am now the sole \n developer/support for a software product.  *sigh*  This product uses \n postgresql.  I am familiar with the basics of sql and have worked on the \n database and code for the software but by no means am I proficient with \n postgresql.\n \nThe archives of this list provides many ideas for \n improving performance, but the problem we are having is gradually degrading \n performance ending in postgres shutting down.  So it's not a matter of \n optimizing a complex query to take 5 seconds instead of 60 seconds.  \n From what I can tell we are using the VACUUM command on a schedule but it \n doesn't seem to prevent the database from becoming \"congested\" as we refer to \n it.  
:]  Anyway, the only way I know to \"fix\" the problem is to \n export (pg_dump) the db, drop the database, recreate the database and import \n the dump.  This seems to return performance back to normal but obviously \n isn't a very good \"solution\".  The slowdown and subsequent crash can take \n as little as 1 week for databases with a lot of data or go as long as a few \n weeks to a month for smaller data sets.\n \nI don't really know where to start looking for a \n solution.  Any advice on where to start, understanding that I am a newb, \n would be greatly appreciated.  Thank you.\n \nNid", "msg_date": "Mon, 24 Nov 2003 16:08:00 -0600", "msg_from": "\"Nid\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" }, { "msg_contents": "> The archives of this list provides many ideas for improving performance, \n> but the problem we are having is gradually degrading performance ending \n> in postgres shutting down. So it's not a matter of optimizing a complex \n> query to take 5 seconds instead of 60 seconds. >From what I can tell we \n> are using the VACUUM command on a schedule but it doesn't seem to \n> prevent the database from becoming \"congested\" as we refer to it. :] \n\nOur busy website has a cronjob that runs VACUUM ANALYZE once an hour \n(vacuumdb -a -q -z).\n\nHave you tried going 'VACUUM FULL ANALYZE' (vacuumdb -a -q -z -f) \ninstead of a dump and reload?\n\nChris\n\n\n", "msg_date": "Tue, 25 Nov 2003 09:13:51 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" }, { "msg_contents": "I've been digging around in the code and found where we are executing the\nVACUUM command. VACUUM ANALYZE is executed every 15 minutes. We haven't\ntried VACUUM FULL ANALYZE. I think I read that using FULL is a good idea\nonce a day or something. Just doing a VACUUM ANALYZE doesn't seem to be\npreventing our problem. Thank you for the responses.\n\nnid\n\n----- Original Message ----- \nFrom: \"Christopher Kings-Lynne\" <[email protected]>\nTo: \"MK Spam\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, November 24, 2003 7:13 PM\nSubject: Re: [PERFORM] Where to start for performance problem?\n\n\n> > The archives of this list provides many ideas for improving performance,\n> > but the problem we are having is gradually degrading performance ending\n> > in postgres shutting down. So it's not a matter of optimizing a complex\n> > query to take 5 seconds instead of 60 seconds. >From what I can tell we\n> > are using the VACUUM command on a schedule but it doesn't seem to\n> > prevent the database from becoming \"congested\" as we refer to it. :]\n>\n> Our busy website has a cronjob that runs VACUUM ANALYZE once an hour\n> (vacuumdb -a -q -z).\n>\n> Have you tried going 'VACUUM FULL ANALYZE' (vacuumdb -a -q -z -f)\n> instead of a dump and reload?\n>\n> Chris\n>\n>\n>\n\n\n", "msg_date": "Mon, 24 Nov 2003 21:05:56 -0600", "msg_from": "\"Nid\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" }, { "msg_contents": "Nid wrote:\n\n> I've been digging around in the code and found where we are executing the\n> VACUUM command. VACUUM ANALYZE is executed every 15 minutes. We haven't\n> tried VACUUM FULL ANALYZE. I think I read that using FULL is a good idea\n> once a day or something. Just doing a VACUUM ANALYZE doesn't seem to be\n> preventing our problem. 
Thank you for the responses.\n\nTry upgrading to PostgreSQL 7.4 and use the new pg_autovacuum daemon. \nThis daemon will monitor your tables and vacuum and analyze whenever \nnecessary.\n\nChris\n\n\n", "msg_date": "Tue, 25 Nov 2003 11:18:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" }, { "msg_contents": "The problems with giving suggestions about increasing performance is \nthat one persons increase is another persons decrease.\n\nhaving said that, there are a few general suggestions :\n\nSet-up some shared memory, about a tenth of your available RAM, and \nconfigure shared_memory and max_clients correctly. I've used the \nfollowing formula, ripped off the net from somewhere. It's not entirely \nacurate, as other settings steal a little shared memory, but it works \nfor the most part :\n\n((1024*RAM_SIZE) - (14.2 * max_connections) - 250) / 8.2\n\nas I say, it should get you a good value, otherwise lower it bit by bit \nif you have trouble starting your db.\n\nIncrease effective_cache (50%-70% avail ram) and sort_mem (about 1/20th \nram) and lower you random_page_cost to around 2 or less (as low as 0.3) \nif you have fast SCSI drives in a RAID10 set-up - this was a big speedup ;)\n\nBut this might not be the answer though. The values detailed above are \nwhen tuning an already stable setup.\n\nPerhaps you need to look at your system resource usage. If you're \ndegrading performance over time it sounds to me like you are slowly \nrunning out of memory and swap ?\n\nGenerall if I take something over, I'll try and get it onto my terms. \nHave you tried importing the DB to a fresh installation, one where you \nknow sensible defaults are set, so you aren't inheriting any cruft from \nthe previous sysadmin.\n\nTo be honest tho, I've never run pg so that it actually shutdown because \nit was running so badly - i just wouldn't think it would do that.\n\n\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n\n", "msg_date": "Tue, 25 Nov 2003 14:07:58 +0000", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" }, { "msg_contents": "On Mon, Nov 24, 2003 at 16:03:17 -0600,\n MK Spam <[email protected]> wrote:\n> \n> The archives of this list provides many ideas for improving performance, but the problem we are having is gradually degrading performance ending in postgres shutting down. So it's not a matter of optimizing a complex query to take 5 seconds instead of 60 seconds. From what I can tell we are using the VACUUM command on a schedule but it doesn't seem to prevent the database from becoming \"congested\" as we refer to it. :] Anyway, the only way I know to \"fix\" the problem is to export (pg_dump) the db, drop the database, recreate the database and import the dump. This seems to return performance back to normal but obviously isn't a very good \"solution\". The slowdown and subsequent crash can take as little as 1 week for databases with a lot of data or go as long as a few weeks to a month for smaller data sets.\n\nA couple of things you might look for are index bloat and having FSM set too\nsmall for your plain vacuums. Upgrading to 7.4 may help with index bloat\nif that is your problem.\n", "msg_date": "Tue, 25 Nov 2003 10:52:32 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem?" 
}, { "msg_contents": "MK Spam <[email protected]> wrote:\n> ... the problem we are having is gradually degrading\n> performance ending in postgres shutting down.\n\nAs someone else commented, that's not an ordinary sort of performance\nproblem. What exactly happens when the database \"shuts down\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2003 12:07:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where to start for performance problem? " } ]
[ { "msg_contents": "Torsten Schulz wrote:\n> Hi,\n> \n>> You can see doing select * from pg_stat_activity the\n>> queries that are currently running on your server, and\n>> do a explain analize on it to see which one is the\n>> bottleneck. If you are running the 7.4 you can see on\n>> the log the total ammount for each query.\n> \n> \n> \n> with this query I see how much queries running, but the field \n> current_query are free, so i can't see which queries are very slow.\n\nYou must perform that query with permission of super_user.\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Tue, 25 Nov 2003 20:19:25 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: Optimize]" }, { "msg_contents": "Hi,\n\n> You can see doing select * from pg_stat_activity the\n> queries that are currently running on your server, and\n> do a explain analize on it to see which one is the\n> bottleneck. If you are running the 7.4 you can see on\n> the log the total ammount for each query.\n\n\nwith this query I see how much queries running, but the field \ncurrent_query are free, so i can't see which queries are very slow.\n\nGreetings\nTorsten\n\n\n", "msg_date": "Tue, 25 Nov 2003 20:39:05 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "[Fwd: Re: Optimize]" }, { "msg_contents": "Gaetano Mendola wrote:\n\n> Torsten Schulz wrote:\n>\n>> Hi,\n>>\n>>> You can see doing select * from pg_stat_activity the\n>>> queries that are currently running on your server, and\n>>> do a explain analize on it to see which one is the\n>>> bottleneck. If you are running the 7.4 you can see on\n>>> the log the total ammount for each query.\n>>\n>>\n>>\n>>\n>> with this query I see how much queries running, but the field \n>> current_query are free, so i can't see which queries are very slow.\n>\n>\n> You must perform that query with permission of super_user.\n>\nI've made it in root-account with psql -U postgres - but i can't see the \nquery\n\nRegards\nTorsten Schulz\n\n", "msg_date": "Tue, 25 Nov 2003 23:06:10 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: Optimize]" }, { "msg_contents": ">>> with this query I see how much queries running, but the field\n>>> current_query are free, so i can't see which queries are very slow.\n>>\n>>\n>> You must perform that query with permission of super_user.\n>>\n> I've made it in root-account with psql -U postgres - but i can't see\n> the query\n\nYou must have these lines in your postgresql.conf for the query stats to be\ncollected:\n\nstats_start_collector = true\nstats_command_string = true\n\n--------------------------------------------------------------------\nRuss Garrett [email protected]\n http://last.fm\n\n", "msg_date": "Tue, 25 Nov 2003 22:27:34 -0000", "msg_from": "\"Russell Garrett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: Optimize]" } ]
[ { "msg_contents": "Hi all,\n\nI want to use index on the gene_symbol column in my\nquery and gene_symbol is indexed. but when I use\nlower (gene_symbol) like lower('%mif%'), the index\nis not used. While when I change to\nlower(gene_symbol) = lower('mif'), the index is used\nand index scan works, but this is not what I like. I\nwant all the gene_symbols containing substring\n'mif' are pulled out, and not necessarily exactly match.\n\ncould anybody give me some hints how to deal with \nthis. If I do not used index, it take too long for\nthe query.\n\n \nPGA> explain select distinct probeset_id, chip,\ngene_symbol, title, sequence_description, pfam from\naffy_array_annotation where lower(gene_symbol) like\nupper('%mif%');\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Unique (cost=29576.44..29591.44 rows=86 width=265)\n -> Sort (cost=29576.44..29578.59 rows=857\nwidth=265)\n Sort Key: probeset_id, chip, gene_symbol,\ntitle, sequence_description, pfam\n -> Seq Scan on affy_array_annotation \n(cost=0.00..29534.70 rows=857 width=265)\n Filter: (lower((gene_symbol)::text)\n~~ 'MIF%'::text)\n(5 rows)\n\n\nPGA=> explain select distinct probeset_id, chip,\ngene_symbol, title, sequence_description, pfam from\naffy_array_annotation where lower(gene_symbol) =\nupper('%mif%');\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Unique (cost=3433.44..3448.44 rows=86 width=265)\n -> Sort (cost=3433.44..3435.58 rows=857 width=265)\n Sort Key: probeset_id, chip, gene_symbol,\ntitle, sequence_description, pfam\n -> Index Scan using gene_symbol_idx_fun1\non affy_array_annotation (cost=0.00..3391.70\nrows=857 width=265)\n Index Cond:\n(lower((gene_symbol)::text) = '%MIF%'::text)\n(5 rows)\n\n\n\n\n\nRegards,\nWilliam\n\n", "msg_date": "Tue, 25 Nov 2003 19:48:49 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "why index scan not working when using 'like'?" }, { "msg_contents": "Lianhe,\n\n> I want to use index on the gene_symbol column in my\n> query and gene_symbol is indexed. but when I use\n> lower (gene_symbol) like lower('%mif%'), the index\n> is not used. While when I change to\n> lower(gene_symbol) = lower('mif'), the index is used\n> and index scan works, but this is not what I like. I\n> want all the gene_symbols containing substring\n> 'mif' are pulled out, and not necessarily exactly match.\n\nLIKE '%mif%' is what's called an \"unanchored text search\" and it cannot use an \nindex. The database *has* to scan the full text looking for the substring. \nThis is true of all database platforms I know of.\n\nIn regular text fields containing words, your problem is solvable with full \ntext indexing (FTI). Unfortunately, FTI is not designed for arbitrary \nnon-language strings. It could be adapted, but would require a lot of \nhacking.\n\nSo you will need to find a way to restructure you data to avoid needing \nunanchored text searches. One way would be to break down the gene_symbol \nfield into its smallest atomic components and store those in an indexed child \ntable. Or if you're searching on the same values all the time, you can \ncreate a partial index.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 25 Nov 2003 11:51:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index scan not working when using 'like'?" 
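As a rough sketch of the child-table idea suggested above - every name here other than the original table's columns is made up, and the code that splits gene_symbol into its components is not shown - an indexed equality lookup could then stand in for the unanchored LIKE:

-- Hypothetical side table holding the atomic components of each symbol.
CREATE TABLE gene_symbol_component (
    probeset_id varchar,
    component   varchar
);
CREATE INDEX gene_symbol_component_idx
    ON gene_symbol_component (lower(component));

SELECT DISTINCT a.probeset_id, a.chip, a.gene_symbol
FROM affy_array_annotation a, gene_symbol_component c
WHERE c.probeset_id = a.probeset_id
  AND lower(c.component) = lower('mif');   -- equality, so the index is usable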
}, { "msg_contents": "\nHi,\n\nSearches with like or regexes often can't use the index. Think of the index as\na sorted list of your items. It's easy to find an item when you know it\nstarts with mif so ('mif%' should use the index). But when you use a\n'like' that starts with '%' the index is useless and the search needs to\ndo a sequential scan.\n\nRegards,\n\nDror\n\nOn Tue, Nov 25, 2003 at 07:48:49PM +0000, LIANHE SHAO wrote:\n> Hi all,\n> \n> I want to use index on the gene_symbol column in my\n> query and gene_symbol is indexed. but when I use\n> lower (gene_symbol) like lower('%mif%'), the index\n> is not used. While when I change to\n> lower(gene_symbol) = lower('mif'), the index is used\n> and index scan works, but this is not what I like. I\n> want all the gene_symbols containing substring\n> 'mif' are pulled out, and not necessarily exactly match.\n> \n> could anybody give me some hints how to deal with \n> this. If I do not used index, it take too long for\n> the query.\n> \n> \n> PGA> explain select distinct probeset_id, chip,\n> gene_symbol, title, sequence_description, pfam from\n> affy_array_annotation where lower(gene_symbol) like\n> upper('%mif%');\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Unique (cost=29576.44..29591.44 rows=86 width=265)\n> -> Sort (cost=29576.44..29578.59 rows=857\n> width=265)\n> Sort Key: probeset_id, chip, gene_symbol,\n> title, sequence_description, pfam\n> -> Seq Scan on affy_array_annotation \n> (cost=0.00..29534.70 rows=857 width=265)\n> Filter: (lower((gene_symbol)::text)\n> ~~ 'MIF%'::text)\n> (5 rows)\n> \n> \n> PGA=> explain select distinct probeset_id, chip,\n> gene_symbol, title, sequence_description, pfam from\n> affy_array_annotation where lower(gene_symbol) =\n> upper('%mif%');\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Unique (cost=3433.44..3448.44 rows=86 width=265)\n> -> Sort (cost=3433.44..3435.58 rows=857 width=265)\n> Sort Key: probeset_id, chip, gene_symbol,\n> title, sequence_description, pfam\n> -> Index Scan using gene_symbol_idx_fun1\n> on affy_array_annotation (cost=0.00..3391.70\n> rows=857 width=265)\n> Index Cond:\n> (lower((gene_symbol)::text) = '%MIF%'::text)\n> (5 rows)\n> \n> \n> \n> \n> \n> Regards,\n> William\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Tue, 25 Nov 2003 11:56:13 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index scan not working when using 'like'?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> In regular text fields containing words, your problem is solvable with full \n> text indexing (FTI). Unfortunately, FTI is not designed for arbitrary \n> non-language strings. It could be adapted, but would require a lot of \n> hacking.\n\nI'm not sure why you say that FTI isn't a usable solution. As long as\nthe gene symbols are separated by whitespace or some other non-letters\n(eg, \"foo mif bar\" not \"foomifbar\"), I'd think FTI would work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2003 16:29:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index scan not working when using 'like'? 
" }, { "msg_contents": "Tom Lane kirjutas T, 25.11.2003 kell 23:29:\n> Josh Berkus <[email protected]> writes:\n> > In regular text fields containing words, your problem is solvable with full \n> > text indexing (FTI). Unfortunately, FTI is not designed for arbitrary \n> > non-language strings. It could be adapted, but would require a lot of \n> > hacking.\n> \n> I'm not sure why you say that FTI isn't a usable solution. As long as\n> the gene symbols are separated by whitespace or some other non-letters\n> (eg, \"foo mif bar\" not \"foomifbar\"), I'd think FTI would work.\n\nIf he wants to search on arbitrary substring, he could change tokeniser\nin FTI to produce trigrams, so that \"foomifbar\" would be indexed as if\nit were text \"foo oom omi mif ifb fba bar\" and search for things like\n%mifb% should first do a FTI search for \"mif\" AND \"ifb\" and then simple\nLIKE %mifb% to weed out something like \"mififb\".\n\nThere are ways to use trigrams for 1 and 2 letter matches as well.\n\n-------------\nHannu\n\n", "msg_date": "Wed, 26 Nov 2003 19:33:08 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index scan not working when using 'like'?" } ]
[ { "msg_contents": "Dear You all,\n\n(please tell me if this has already been discussed, I was unable to find any \nconvincing information)\n\nI'm developing a small application, tied to a PG 7.4 beta 5 (i didn't \nupgrade). The DB i use is roughly 20 tales each of them containing at most 30 \nrecords (I'm still in development). I can provide a whole dump if necessary.\nI access the DB throug IODBC (Suse Linux 8.1), through PHP. The machine \neverything runs on is 512M of Ram, 2.5GHz speed. So I assume it should be \nblazingly fast.\n\nSo here's my trouble : some DELETE statement take up to 1 minute to complete \n(but not always, sometimes it's fast, sometimes it's that slow). Here's a \ntypical one : DELETE FROM response_bool WHERE response_id = '125'\nThe response_bool table has no foreing key and no index on response_id column. \nNo foreign key reference the response_bool table. There are 6 rows in the \ntable (given that size, I assumed that an index was not necessary).\n\nSo 1 minute to complete look like I did something REALLY bad.\n\nIt is my feeling that doing the same query with psql works without problem, \nbut I can't be sure. The rest of my queries (inserts, updates) just work fine \nand pretty fast.\n\nCan someone help me or point me to a place where I can find help ? I didn't do \nany in deep debugging though.\n\nthx,\n\nstF\n\n\n", "msg_date": "Tue, 25 Nov 2003 22:56:56 +0100", "msg_from": "Stefan Champailler <[email protected]>", "msg_from_op": true, "msg_subject": "Impossibly slow DELETEs" }, { "msg_contents": "Stefan Champailler wrote:\n> Dear You all,\n> \n> (please tell me if this has already been discussed, I was unable to find any \n> convincing information)\n> \n> I'm developing a small application, tied to a PG 7.4 beta 5 (i didn't \n> upgrade). The DB i use is roughly 20 tales each of them containing at most 30 \n> records (I'm still in development). I can provide a whole dump if necessary.\n> I access the DB throug IODBC (Suse Linux 8.1), through PHP. The machine \n> everything runs on is 512M of Ram, 2.5GHz speed. So I assume it should be \n> blazingly fast.\n> \n> So here's my trouble : some DELETE statement take up to 1 minute to complete \n> (but not always, sometimes it's fast, sometimes it's that slow). Here's a \n> typical one : DELETE FROM response_bool WHERE response_id = '125'\n> The response_bool table has no foreing key and no index on response_id column. \n> No foreign key reference the response_bool table. There are 6 rows in the \n> table (given that size, I assumed that an index was not necessary).\n> \n> So 1 minute to complete look like I did something REALLY bad.\n> \n> It is my feeling that doing the same query with psql works without problem, \n> but I can't be sure.\n\nI think that last sentence is the crux of the problem. If you can establish\nfor sure that the unreasonable delay is _only_ there when the command is\nissued through IODBC, then it's not a Postgresql problem.\n\nOut of curiosity, why are you using ODBC for PHP anyway? PHP has Postgresql\nlibraries that work very well. 
I use them quite often without problems.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 25 Nov 2003 18:45:08 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impossibly slow DELETEs" }, { "msg_contents": "Stefan Champailler <[email protected]> writes:\n> So here's my trouble : some DELETE statement take up to 1 minute to\n> complete (but not always, sometimes it's fast, sometimes it's that\n> slow). Here's a typical one : DELETE FROM response_bool WHERE\n> response_id = '125' The response_bool table has no foreing key and\n> no index on response_id column. No foreign key reference the\n> response_bool table. \n\nI'm skeptical that PostgreSQL is causing the performance problem\nhere -- 1 minute for a DELETE on a single-page table is absurdly\nslow. If you enable the log_min_duration_statement configuration\nvariable, you should be able to get an idea of how long it actually\ntakes PostgreSQL to execute each query -- do you see some 60 second\nqueries in the log?\n\nWhat is the system load like when the query takes a long time? For\nexample, `vmstat 1` output around this point in time would be\nhelpful.\n\nDoes PostgreSQL consume a lot of CPU time or do a lot of disk I/O?\n\nCan you confirm this problem using psql?\n\n> There are 6 rows in the table (given that size, I assumed that an\n> index was not necessary).\n\nThat's a reasonable assumption.\n\n-Neil\n\n", "msg_date": "Tue, 25 Nov 2003 18:55:33 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impossibly slow DELETEs" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n>> There are 6 rows in the table (given that size, I assumed that an\n>> index was not necessary).\n\n> That's a reasonable assumption.\n\nBut if he's updated those rows a few hundred thousand times and never\nVACUUMed, he could be having some problems ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2003 19:37:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impossibly slow DELETEs " }, { "msg_contents": "\nIs it possible another connection has updated the record and not committed,\nand it takes a minute for the connection to time out and commit or roll back?\n\n-- \ngreg\n\n", "msg_date": "26 Nov 2003 12:11:34 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impossibly slow DELETEs" }, { "msg_contents": "I did not conduct much more test but from what I've seen, it looks like the \nODBC driver is in the doldrums, not PG. For example, when I run my software \non Windows rather than Linux, everything just works as expected. Sorry for \ndisturbing.\n\nAnd btw, I use ODBC because my target DB is Oracle and I've been requested to \naccess it throguh ODBC. So, because I don't have Oracle, I do most of my \ndevelopment with PG and then I'll port to Oracle. Since I'm doing \"simple\" \nstuff, PG is almost 100% compatible with Oracle. (and before you ask, no, \nthey don't give me the proper dev environment, bastards :))\n\nThanks for all the answers.\n\nStefan\n\n\n> Stefan Champailler wrote:\n> > Dear You all,\n> >\n> > (please tell me if this has already been discussed, I was unable to find\n> > any convincing information)\n> >\n> > I'm developing a small application, tied to a PG 7.4 beta 5 (i didn't\n> > upgrade). 
The DB i use is roughly 20 tales each of them containing at\n> > most 30 records (I'm still in development). I can provide a whole dump if\n> > necessary. I access the DB throug IODBC (Suse Linux 8.1), through PHP.\n> > The machine everything runs on is 512M of Ram, 2.5GHz speed. So I assume\n> > it should be blazingly fast.\n> >\n> > So here's my trouble : some DELETE statement take up to 1 minute to\n> > complete (but not always, sometimes it's fast, sometimes it's that slow).\n> > Here's a typical one : DELETE FROM response_bool WHERE response_id =\n> > '125' The response_bool table has no foreing key and no index on\n> > response_id column. No foreign key reference the response_bool table.\n> > There are 6 rows in the table (given that size, I assumed that an index\n> > was not necessary).\n> >\n> > So 1 minute to complete look like I did something REALLY bad.\n> >\n> > It is my feeling that doing the same query with psql works without\n> > problem, but I can't be sure.\n>\n> I think that last sentence is the crux of the problem. If you can\n> establish for sure that the unreasonable delay is _only_ there when the\n> command is issued through IODBC, then it's not a Postgresql problem.\n>\n> Out of curiosity, why are you using ODBC for PHP anyway? PHP has\n> Postgresql libraries that work very well. I use them quite often without\n> problems.\n\n", "msg_date": "Thu, 27 Nov 2003 20:57:06 +0100", "msg_from": "Stefan Champailler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impossibly slow DELETEs" } ]
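Following up on Neil's and Greg's suggestions, a small sketch of how the server side could be checked the next time the problem shows up; the one-second threshold is arbitrary, and the pg_locks column names are as of 7.4:

-- In postgresql.conf (7.4): log any statement that takes longer than a second.
--   log_min_duration_statement = 1000    # milliseconds
-- While a DELETE hangs, look for an ungranted lock waiting on another backend:
SELECT relation::regclass AS locked_relation, pid, mode, granted
FROM pg_locks
ORDER BY granted, relation;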
[ { "msg_contents": "Chester Kustarz wrote:\n\n> On Mon, 24 Nov 2003, Torsten Schulz wrote:\n> \n>\n>> shared_buffers = 5000     # 2*max_connections, min 16\n>> \n>\n>\n> that looks pretty small. that would only be 40MBytes (8k/page * \n> 5000pages).\n>\n> http://www.varlena.com/GeneralBits/Tidbits/perf.html\n>\n> \n>\nOk, that's it. I've set it to 51200, now it seems to be very fast.\n\nThank you!\n\n\n\n", "msg_date": "Tue, 25 Nov 2003 23:07:18 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize" }, { "msg_contents": "Torsten Schulz wrote:\n\n> Chester Kustarz wrote:\n\n>> On Mon, 24 Nov 2003, Torsten Schulz wrote:\n>>> shared_buffers = 5000     # 2*max_connections, min 16\n>> that looks pretty small. that would only be 40MBytes (8k/page * \n>> 5000pages).\n>> http://www.varlena.com/GeneralBits/Tidbits/perf.html\n> Ok, that's it. I've set it to 51200, now it seems to be very fast.\n\nWhoa... that is too much. You can still get better performance at something low \nlike 10,000 or even 5000.\n\nBumping up shared buffers stops being useful after a point, and beyond that it actually \ndegrades performance.\n\n Shridhar\n\n", "msg_date": "Wed, 26 Nov 2003 11:36:34 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize" } ]
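Since shared_buffers is counted in 8 kB buffer pages, the memory behind the two settings discussed above is easy to check; the second value is closer to the range Shridhar suggests:

SHOW shared_buffers;                    -- current setting, in 8 kB pages
SELECT 51200 * 8 / 1024 AS megabytes;   -- 400 MB of shared memory
SELECT 10000 * 8 / 1024 AS megabytes;   -- about 78 MB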
[ { "msg_contents": "Folks,\n\nWe're seeing some odd issues with hyperthreading-capable Xeons, whether or not \nhyperthreading is enabled. Basically, when a small number of really-heavy \nduty queries hit the system and push all of the CPUs to more than 70% used \n(about 1/2 user & 1/2 kernel), the system goes to 100,000+ context switcthes \nper second and performance degrades. \n\nI know that there's other Xeon users on this list ... has anyone else seen \nanything like that? The machines are Dells running Red Hat 7.3.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 25 Nov 2003 14:19:36 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Wierd context-switching issue on Xeon" }, { "msg_contents": "Tom,\n\n> Strictly a WAG ... but what this sounds like to me is disastrously bad\n> behavior of the spinlock code under heavy contention. We thought we'd\n> fixed the spinlock code for SMP machines awhile ago, but maybe\n> hyperthreading opens some new vistas for misbehavior ...\n\nYeah, I thought of that based on the discussion on -Hackers. But we tried \nturning off hyperthreading, with no change in behavior.\n\n> If you can't try 7.4, or want to gather more data first, it would be\n> good to try to confirm or disprove the theory that the context switches\n> are coming from spinlock delays. If they are, they'd be coming from the\n> select() calls in s_lock() in s_lock.c. Can you strace or something to\n> see what kernel calls the context switches occur on?\n\nMight be worth it ... will suggest that. Will also try 7.4.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 25 Nov 2003 15:37:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> We're seeing some odd issues with hyperthreading-capable Xeons, whether or not \n> hyperthreading is enabled. Basically, when a small number of really-heavy \n> duty queries hit the system and push all of the CPUs to more than 70% used \n> (about 1/2 user & 1/2 kernel), the system goes to 100,000+ context switcthes \n> per second and performance degrades. \n\nStrictly a WAG ... but what this sounds like to me is disastrously bad\nbehavior of the spinlock code under heavy contention. We thought we'd\nfixed the spinlock code for SMP machines awhile ago, but maybe\nhyperthreading opens some new vistas for misbehavior ...\n\n> I know that there's other Xeon users on this list ... has anyone else seen \n> anything like that? The machines are Dells running Red Hat 7.3.\n\nWhat Postgres version? Is it easy for you to try 7.4? If we were\nreally lucky, the random-backoff algorithm added late in 7.4 development\nwould cure this.\n\nIf you can't try 7.4, or want to gather more data first, it would be\ngood to try to confirm or disprove the theory that the context switches\nare coming from spinlock delays. If they are, they'd be coming from the\nselect() calls in s_lock() in s_lock.c. Can you strace or something to\nsee what kernel calls the context switches occur on?\n\nAnother line of thought is that RH 7.3 is a long ways back, and it\nwasn't so very long ago that Linux still had lots of SMP bugs. 
Maybe\nwhat you really need is a kernel update?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2003 18:40:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Tom, Josh,\n\nI think we have the problem resolved after I found the following note \nfrom Tom:\n\n > A large number of semops may mean that you have excessive contention \non some lockable\n > resource, but I don't have enough info to guess what resource.\n\nThis was the key to look at: we were missing all indices on table which \nis used heavily and does lots of locking. After recreating the missing \nindices the production system performed normal. No, more excessive \nsemop() calls, load way below 1.0, CS over 20.000 very rare, more in \nthousands realm and less.\n\nThis is quite a relief but I am sorry that the problem was so stupid and \nyou wasted some time although Tom said he had also seem excessive \nsemop() calls on another Dual XEON system.\n\nHyperthreading was turned off so far but will be turned on again the \nnext days. I don't expect any problems then.\n\nI'm not sure if this semop() problem is still an issue but the database \nbehaves a bit out of bounds in this situation, i.e. consuming system \nresources with semop() calls 95% while tables are locked very often and \nlonger.\n\nThanks for your help,\n\nDirk\n\nAt last here is the current vmstat 1 excerpt where the problem has been \nresolved:\n\n\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 1 0 2308 232508 201924 6976532 0 0 136 464 628 812 5 \n1 94 0\n 0 0 2308 232500 201928 6976628 0 0 96 296 495 484 4 \n0 95 0\n 0 1 2308 232492 201928 6976628 0 0 0 176 347 278 1 \n0 99 0\n 0 0 2308 233484 201928 6976596 0 0 40 580 443 351 8 \n2 90 0\n 1 0 2308 233484 201928 6976696 0 0 76 692 792 651 9 \n2 88 0\n 0 0 2308 233484 201928 6976696 0 0 0 20 132 34 0 \n0 100 0\n 0 0 2308 233484 201928 6976696 0 0 0 76 177 90 0 \n0 100 0\n 0 1 2308 233484 201928 6976696 0 0 0 216 321 250 4 \n0 96 0\n 0 0 2308 233484 201928 6976696 0 0 0 116 417 240 8 \n0 92 0\n 0 0 2308 233484 201928 6976784 0 0 48 600 403 270 8 \n0 92 0\n 0 0 2308 233464 201928 6976860 0 0 76 452 1064 2611 14 \n1 84 0\n 0 0 2308 233460 201932 6976900 0 0 32 256 587 587 12 \n1 87 0\n 0 0 2308 233460 201932 6976932 0 0 32 188 379 287 5 \n0 94 0\n 0 0 2308 233460 201932 6976932 0 0 0 0 103 8 0 \n0 100 0\n 0 0 2308 233460 201932 6976932 0 0 0 0 102 14 0 \n0 100 0\n 0 1 2308 233444 201948 6976932 0 0 0 348 300 180 1 \n0 99 0\n 1 0 2308 233424 201948 6976948 0 0 16 380 739 906 4 \n2 93 0\n 0 0 2308 233424 201948 6977032 0 0 68 260 724 987 7 \n0 92 0\n 0 0 2308 231924 201948 6977128 0 0 96 344 1130 753 11 \n1 88 0\n 1 0 2308 231924 201948 6977248 0 0 112 324 687 628 3 \n0 97 0\n 0 0 2308 231924 201948 6977248 0 0 0 192 575 430 5 \n0 95 0\n 1 0 2308 231924 201948 6977248 0 0 0 264 208 124 0 \n0 100 0\n 0 0 2308 231924 201948 6977264 0 0 16 272 380 230 3 \n2 95 0\n 0 0 2308 231924 201948 6977264 0 0 0 0 104 8 0 \n0 100 0\n 0 0 2308 231924 201948 6977264 0 0 0 48 258 92 1 \n0 99 0\n 0 0 2308 231816 201948 6977484 0 0 212 268 456 384 2 \n0 98 0\n 0 0 2308 231816 201948 6977484 0 0 0 88 453 770 0 \n0 99 0\n 0 0 2308 231452 201948 6977680 0 0 196 476 615 676 5 \n0 94 0\n 0 0 2308 231452 201948 6977680 0 0 0 228 431 400 2 \n0 98 0\n 0 0 2308 231452 201948 6977680 0 0 0 0 237 58 3 \n0 97 0\n 0 0 2308 231448 201952 6977680 
0 0 0 0 365 84 2 \n0 97 0\n 0 0 2308 231448 201952 6977680 0 0 0 40 246 108 1 \n0 99 0\n 0 0 2308 231448 201952 6977776 0 0 96 352 606 1026 4 \n2 94 0\n 0 0 2308 231448 201952 6977776 0 0 0 240 295 266 5 \n0 95 0\n\n\n", "msg_date": "Fri, 16 Apr 2004 15:03:28 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": false, "msg_subject": "RESOLVED: Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]> writes:\n> This was the key to look at: we were missing all indices on table which \n> is used heavily and does lots of locking. After recreating the missing \n> indices the production system performed normal. No, more excessive \n> semop() calls, load way below 1.0, CS over 20.000 very rare, more in \n> thousands realm and less.\n\nHmm ... that's darn interesting. AFAICT the test case I am looking at\nfor Josh's client has no such SQL-level problem ... but I will go back\nand double check ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 09:49:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RESOLVED: Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Dirk,\n\n> I'm not sure if this semop() problem is still an issue but the database \n> behaves a bit out of bounds in this situation, i.e. consuming system \n> resources with semop() calls 95% while tables are locked very often and \n> longer.\n\nIt would be helpful to us if you could test this with the indexes disabled on \nthe non-Bigmem system. I'd like to eliminate Bigmem as a factor, if \npossible.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Enterprise vertical business [email protected]\n and data analysis solutions (415) 752-2387\n and database optimization fax 651-9224\n utilizing Open Source technology San Francisco\n\n", "msg_date": "Fri, 16 Apr 2004 09:58:14 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RESOLVED: Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "After some further digging I think I'm starting to understand what's up\nhere, and the really fundamental answer is that a multi-CPU Xeon MP box\nsucks for running Postgres.\n\nI did a bunch of oprofile measurements on a machine belonging to one of\nJosh's clients, using a test case that involved heavy concurrent access\nto a relatively small amount of data (little enough to fit into Postgres\nshared buffers, so that no I/O or kernel calls were really needed once\nthe test got going). I found that by nearly any measure --- elapsed\ntime, bus transactions, or machine-clear events --- the spinlock\nacquisitions associated with grabbing and releasing the BufMgrLock took\nan unreasonable fraction of the time. I saw about 15% of elapsed time,\n40% of bus transactions, and nearly 100% of pipeline-clear cycles going\ninto what is essentially two instructions out of the entire backend.\n(Pipeline clears occur when the cache coherency logic detects a memory\nwrite ordering problem.)\n\nI am not completely clear on why this machine-level bottleneck manifests\nas a lot of context swaps at the OS level. I think what is happening is\nthat because SpinLockAcquire is so slow, a process is much more likely\nthan you'd normally expect to arrive at SpinLockAcquire while another\nprocess is also acquiring the spinlock. 
This puts the two processes\ninto a \"lockstep\" condition where the second process is nearly certain\nto observe the BufMgrLock as locked, and be forced to suspend itself,\neven though the time the first process holds the BufMgrLock is not\nreally very long at all.\n\nIf you google for Xeon and \"cache coherency\" you'll find quite a bit of\nsuggestive information about why this might be more true on the Xeon\nsetup than others. A couple of interesting hits:\n\nhttp://www.theinquirer.net/?article=10797\nsays that Xeon MP uses a *slower* FSB than Xeon DP. This would\ntranslate directly to more time needed to transfer a dirty cache line\nfrom one processor to the other, which is the basic operation that we're\ntalking about here.\n\nhttp://www.aceshardware.com/Spades/read.php?article_id=30000187\nsays that Opterons use a different cache coherency protocol that is\nfundamentally superior to the Xeon's, because dirty cache data can be\ntransferred directly between two processor caches without waiting for\nmain memory.\n\nSo in the short term I think we have to tell people that Xeon MP is not\nthe most desirable SMP platform to run Postgres on. (Josh thinks that\nthe specific motherboard chipset being used in these machines might\nshare some of the blame too. I don't have any evidence for or against\nthat idea, but it's certainly possible.)\n\nIn the long run, however, CPUs continue to get faster than main memory\nand the price of cache contention will continue to rise. So it seems\nthat we need to give up the assumption that SpinLockAcquire is a cheap\noperation. In the presence of heavy contention it won't be.\n\nOne thing we probably have got to do soon is break up the BufMgrLock\ninto multiple finer-grain locks so that there will be less contention.\nHowever I am wary of doing this incautiously, because if we do it in a\nway that makes for a significant rise in the number of locks that have\nto be acquired to access a buffer, we might end up with a net loss.\n\nI think Neil Conway was looking into how the bufmgr might be\nrestructured to reduce lock contention, but if he had come up with\nanything he didn't mention exactly what. Neil?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 17:47:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "So the the kernel/OS is irrelevant here ? this happens on any dual xeon?\n\nWhat about hypterthreading does it still happen if HTT is turned off ?\n\nDave\nOn Sun, 2004-04-18 at 17:47, Tom Lane wrote:\n> After some further digging I think I'm starting to understand what's up\n> here, and the really fundamental answer is that a multi-CPU Xeon MP box\n> sucks for running Postgres.\n> \n> I did a bunch of oprofile measurements on a machine belonging to one of\n> Josh's clients, using a test case that involved heavy concurrent access\n> to a relatively small amount of data (little enough to fit into Postgres\n> shared buffers, so that no I/O or kernel calls were really needed once\n> the test got going). I found that by nearly any measure --- elapsed\n> time, bus transactions, or machine-clear events --- the spinlock\n> acquisitions associated with grabbing and releasing the BufMgrLock took\n> an unreasonable fraction of the time. 
I saw about 15% of elapsed time,\n> 40% of bus transactions, and nearly 100% of pipeline-clear cycles going\n> into what is essentially two instructions out of the entire backend.\n> (Pipeline clears occur when the cache coherency logic detects a memory\n> write ordering problem.)\n> \n> I am not completely clear on why this machine-level bottleneck manifests\n> as a lot of context swaps at the OS level. I think what is happening is\n> that because SpinLockAcquire is so slow, a process is much more likely\n> than you'd normally expect to arrive at SpinLockAcquire while another\n> process is also acquiring the spinlock. This puts the two processes\n> into a \"lockstep\" condition where the second process is nearly certain\n> to observe the BufMgrLock as locked, and be forced to suspend itself,\n> even though the time the first process holds the BufMgrLock is not\n> really very long at all.\n> \n> If you google for Xeon and \"cache coherency\" you'll find quite a bit of\n> suggestive information about why this might be more true on the Xeon\n> setup than others. A couple of interesting hits:\n> \n> http://www.theinquirer.net/?article=10797\n> says that Xeon MP uses a *slower* FSB than Xeon DP. This would\n> translate directly to more time needed to transfer a dirty cache line\n> from one processor to the other, which is the basic operation that we're\n> talking about here.\n> \n> http://www.aceshardware.com/Spades/read.php?article_id=30000187\n> says that Opterons use a different cache coherency protocol that is\n> fundamentally superior to the Xeon's, because dirty cache data can be\n> transferred directly between two processor caches without waiting for\n> main memory.\n> \n> So in the short term I think we have to tell people that Xeon MP is not\n> the most desirable SMP platform to run Postgres on. (Josh thinks that\n> the specific motherboard chipset being used in these machines might\n> share some of the blame too. I don't have any evidence for or against\n> that idea, but it's certainly possible.)\n> \n> In the long run, however, CPUs continue to get faster than main memory\n> and the price of cache contention will continue to rise. So it seems\n> that we need to give up the assumption that SpinLockAcquire is a cheap\n> operation. In the presence of heavy contention it won't be.\n> \n> One thing we probably have got to do soon is break up the BufMgrLock\n> into multiple finer-grain locks so that there will be less contention.\n> However I am wary of doing this incautiously, because if we do it in a\n> way that makes for a significant rise in the number of locks that have\n> to be acquired to access a buffer, we might end up with a net loss.\n> \n> I think Neil Conway was looking into how the bufmgr might be\n> restructured to reduce lock contention, but if he had come up with\n> anything he didn't mention exactly what. 
Neil?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n> \n> !DSPAM:4082feb7326901956819835!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Sun, 18 Apr 2004 19:34:41 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> So in the short term I think we have to tell people that Xeon MP is not\n> the most desirable SMP platform to run Postgres on. (Josh thinks that\n> the specific motherboard chipset being used in these machines might\n> share some of the blame too. I don't have any evidence for or against\n> that idea, but it's certainly possible.)\n> \n> In the long run, however, CPUs continue to get faster than main memory\n> and the price of cache contention will continue to rise. So it seems\n> that we need to give up the assumption that SpinLockAcquire is a cheap\n> operation. In the presence of heavy contention it won't be.\n\nThere's nothing about the way Postgres spinlocks are coded that affects this?\n\nIs it something the kernel could help with? I've been wondering whether\nthere's any benefits postgres is missing out on by using its own hand-rolled\nlocking instead of using the pthreads infrastructure that the kernel is often\ninvolved in.\n\n-- \ngreg\n\n", "msg_date": "18 Apr 2004 20:40:35 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> So the the kernel/OS is irrelevant here ? this happens on any dual xeon?\n\nI believe so. The context-switch behavior might possibly be a little\nmore pleasant on other kernels, but the underlying spinlock problem is\nnot dependent on the kernel.\n\n> What about hypterthreading does it still happen if HTT is turned off ?\n\nThe problem comes from keeping the caches synchronized between multiple\nphysical CPUs. AFAICS enabling HTT wouldn't make it worse, because a\nhyperthreaded processor still only has one cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 22:20:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> There's nothing about the way Postgres spinlocks are coded that affects this?\n\nNo. AFAICS our spinlock sequences are pretty much equivalent to the way\nthe Linux kernel codes its spinlocks, so there's no deep dark knowledge\nto be mined there.\n\nWe could possibly use some more-efficient blocking mechanism than semop()\nonce we've decided we have to block (it's a shame Linux still doesn't\nhave cross-process POSIX semaphores). But the striking thing I learned\nfrom looking at the oprofile results is that most of the inefficiency\ncomes at the very first TAS() operation, before we've even \"spun\" let\nalone decided we have to block. The s_lock() subroutine does not\naccount for more than a few percent of the runtime in these tests,\ncompared to 15% at the inline TAS() operations in LWLockAcquire and\nLWLockRelease. 
I interpret this to mean that once it's acquired\nownership of the cache line, a Xeon can get through the \"spinning\"\nloop in s_lock() mighty quickly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 22:30:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": ">> What about hypterthreading does it still happen if HTT is turned off ?\n\n> The problem comes from keeping the caches synchronized between multiple\n> physical CPUs. AFAICS enabling HTT wouldn't make it worse, because a\n> hyperthreaded processor still only has one cache.\n\nAlso, I forgot to say that the numbers I'm quoting *are* with HTT off.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 23:19:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Josh, I cannot reproduce the excessive semop() on a Dual XEON DP on a \nnon-bigmem kernel, HT on. Interesting to know if the problem is related \nto XEON MP (as Tom wrote) or bigmem.\n\nJosh Berkus wrote:\n\n>Dirk,\n>\n> \n>\n>>I'm not sure if this semop() problem is still an issue but the database \n>>behaves a bit out of bounds in this situation, i.e. consuming system \n>>resources with semop() calls 95% while tables are locked very often and \n>>longer.\n>> \n>>\n>\n>It would be helpful to us if you could test this with the indexes disabled on \n>the non-Bigmem system. I'd like to eliminate Bigmem as a factor, if \n>possible.\n>\n> \n>\n\n\n", "msg_date": "Mon, 19 Apr 2004 09:27:57 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RESOLVED: Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Hi Tom,\n\nJust to explain our hardware situation releated to the FSB of the XEON's.\nWe have older XEON DP in operation with FSB 400 and 2.4 GHz.\nThe XEON MP box runs with 2.5 GHz.\nThe XEON MP box is a Fujitsu Siemens Primergy RX600 with ServerWorks GC LE\nas chipset.\n\nThe box, which Dirk were use to compare the behavior, is our newest XEON DP\nsystem.\nThis XEON DP box runs with 2.8 GHz and FSB 533 using the Intel 7501 chipset\n(Supermicro).\n\nI would agree to Jush. When PostgreSQL has an issue with the INTEL XEON MP\nhardware, this is more releated to the chipset.\n\nBack to the SQL-Level. We use SELECT FOR UPDATE as \"semaphore\".\nShould we try another implementation for this semahore on the client side to\nprevent this issue?\n\nRegards\nSven.\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: <[email protected]>\nCc: \"Josh Berkus\" <[email protected]>; <[email protected]>;\n\"Neil Conway\" <[email protected]>\nSent: Sunday, April 18, 2004 11:47 PM\nSubject: Re: [PERFORM] Wierd context-switching issue on Xeon\n\n\n> After some further digging I think I'm starting to understand what's up\n> here, and the really fundamental answer is that a multi-CPU Xeon MP box\n> sucks for running Postgres.\n>\n> I did a bunch of oprofile measurements on a machine belonging to one of\n> Josh's clients, using a test case that involved heavy concurrent access\n> to a relatively small amount of data (little enough to fit into Postgres\n> shared buffers, so that no I/O or kernel calls were really needed once\n> the test got going). 
I found that by nearly any measure --- elapsed\n> time, bus transactions, or machine-clear events --- the spinlock\n> acquisitions associated with grabbing and releasing the BufMgrLock took\n> an unreasonable fraction of the time. I saw about 15% of elapsed time,\n> 40% of bus transactions, and nearly 100% of pipeline-clear cycles going\n> into what is essentially two instructions out of the entire backend.\n> (Pipeline clears occur when the cache coherency logic detects a memory\n> write ordering problem.)\n>\n> I am not completely clear on why this machine-level bottleneck manifests\n> as a lot of context swaps at the OS level. I think what is happening is\n> that because SpinLockAcquire is so slow, a process is much more likely\n> than you'd normally expect to arrive at SpinLockAcquire while another\n> process is also acquiring the spinlock. This puts the two processes\n> into a \"lockstep\" condition where the second process is nearly certain\n> to observe the BufMgrLock as locked, and be forced to suspend itself,\n> even though the time the first process holds the BufMgrLock is not\n> really very long at all.\n>\n> If you google for Xeon and \"cache coherency\" you'll find quite a bit of\n> suggestive information about why this might be more true on the Xeon\n> setup than others. A couple of interesting hits:\n>\n> http://www.theinquirer.net/?article=10797\n> says that Xeon MP uses a *slower* FSB than Xeon DP. This would\n> translate directly to more time needed to transfer a dirty cache line\n> from one processor to the other, which is the basic operation that we're\n> talking about here.\n>\n> http://www.aceshardware.com/Spades/read.php?article_id=30000187\n> says that Opterons use a different cache coherency protocol that is\n> fundamentally superior to the Xeon's, because dirty cache data can be\n> transferred directly between two processor caches without waiting for\n> main memory.\n>\n> So in the short term I think we have to tell people that Xeon MP is not\n> the most desirable SMP platform to run Postgres on. (Josh thinks that\n> the specific motherboard chipset being used in these machines might\n> share some of the blame too. I don't have any evidence for or against\n> that idea, but it's certainly possible.)\n>\n> In the long run, however, CPUs continue to get faster than main memory\n> and the price of cache contention will continue to rise. So it seems\n> that we need to give up the assumption that SpinLockAcquire is a cheap\n> operation. In the presence of heavy contention it won't be.\n>\n> One thing we probably have got to do soon is break up the BufMgrLock\n> into multiple finer-grain locks so that there will be less contention.\n> However I am wary of doing this incautiously, because if we do it in a\n> way that makes for a significant rise in the number of locks that have\n> to be acquired to access a buffer, we might end up with a net loss.\n>\n> I think Neil Conway was looking into how the bufmgr might be\n> restructured to reduce lock contention, but if he had come up with\n> anything he didn't mention exactly what. 
Neil?\n>\n> regards, tom lane\n>\n>\n\n", "msg_date": "Mon, 19 Apr 2004 14:27:44 +0200", "msg_from": "\"Sven Geisler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Here's an interesting link that suggests that hyperthreading would be\nmuch worse.\n\nhttp://groups.google.com/groups?q=hyperthreading+dual+xeon+idle&start=10&hl=en&lr=&ie=UTF-8&c2coff=1&selm=aukkonen-FE5275.21093624062003%40shawnews.gv.shawcable.net&rnum=16\n\nanother which has some hints as to how it should be handled\n\nhttp://groups.google.com/groups?q=hyperthreading+dual+xeon+idle&start=10&hl=en&lr=&ie=UTF-8&c2coff=1&selm=u5tl1XD3BHA.2760%40tkmsftngp04&rnum=19\nFWIW, I have anecdotal evidence that suggests that this is the case, on\nof my clients was seeing very large context switches with HTT turned on,\nand without it was much better.\n\nDave\nOn Sun, 2004-04-18 at 23:19, Tom Lane wrote:\n> >> What about hypterthreading does it still happen if HTT is turned off ?\n> \n> > The problem comes from keeping the caches synchronized between multiple\n> > physical CPUs. AFAICS enabling HTT wouldn't make it worse, because a\n> > hyperthreaded processor still only has one cache.\n> \n> Also, I forgot to say that the numbers I'm quoting *are* with HTT off.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n> \n> \n> !DSPAM:40834781158911062514350!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Mon, 19 Apr 2004 08:32:33 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Tom,\n\n> So in the short term I think we have to tell people that Xeon MP is not\n> the most desirable SMP platform to run Postgres on. (Josh thinks that\n> the specific motherboard chipset being used in these machines might\n> share some of the blame too. I don't have any evidence for or against\n> that idea, but it's certainly possible.)\n\nI have 3 reasons for thinking this:\n1) the ServerWorks chipset is present in the fully documented cases that we \nhave of this problem so far. This is notable becuase the SW is notorious \nfor poor manufacturing quality, so much so that the company that made them is \ncurrently in receivership. These chips were so bad that Dell was forced to \nrecall several hundred of it's 2650's, where the motherboards caught fire!\n2) the main defect of the SW is the NorthBridge, which could conceivably \nadversely affect traffic between RAM and the processor cache.\n3) XeonMP is a very popular platform thanks to Dell, and we are not seeing \nmore problem reports than we are.\n\nThe other thing I'd like your comment on, Tom, is that Dirk appears to have \nreported that when he installed a non-bigmem kernel, the issue went away. \nDirk, is this correct?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 19 Apr 2004 10:50:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> The other thing I'd like your comment on, Tom, is that Dirk appears to have \n> reported that when he installed a non-bigmem kernel, the issue went away. \n> Dirk, is this correct?\n\nI'd be really surprised if that had anything to do with it. 
AFAIR\nDirk's test changed more than one variable and so didn't prove a\nconnection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Apr 2004 14:00:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Josh Berkus wrote:\n> Tom,\n> \n> > So in the short term I think we have to tell people that Xeon MP is not\n> > the most desirable SMP platform to run Postgres on. (Josh thinks that\n> > the specific motherboard chipset being used in these machines might\n> > share some of the blame too. I don't have any evidence for or against\n> > that idea, but it's certainly possible.)\n> \n> I have 3 reasons for thinking this:\n> 1) the ServerWorks chipset is present in the fully documented cases that we \n> have of this problem so far. This is notable becuase the SW is notorious \n> for poor manufacturing quality, so much so that the company that made them is \n> currently in receivership. These chips were so bad that Dell was forced to \n> recall several hundred of it's 2650's, where the motherboards caught fire!\n> 2) the main defect of the SW is the NorthBridge, which could conceivably \n> adversely affect traffic between RAM and the processor cache.\n> 3) XeonMP is a very popular platform thanks to Dell, and we are not seeing \n> more problem reports than we are.\n> \n> The other thing I'd like your comment on, Tom, is that Dirk appears to have \n> reported that when he installed a non-bigmem kernel, the issue went away. \n\nI have BSD on a SuperMicro dual Xeon, so if folks want another\nhardware/OS combination to test, I can give out logins to my machine.\n\n\thttp://candle.pha.pa.us/main/hardware.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 19 Apr 2004 14:32:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "On Mon, 19 Apr 2004, Bruce Momjian wrote:\n\n> Josh Berkus wrote:\n> > Tom,\n> > \n> > > So in the short term I think we have to tell people that Xeon MP is not\n> > > the most desirable SMP platform to run Postgres on. (Josh thinks that\n> > > the specific motherboard chipset being used in these machines might\n> > > share some of the blame too. I don't have any evidence for or against\n> > > that idea, but it's certainly possible.)\n> > \n> > I have 3 reasons for thinking this:\n> > 1) the ServerWorks chipset is present in the fully documented cases that we \n> > have of this problem so far. This is notable becuase the SW is notorious \n> > for poor manufacturing quality, so much so that the company that made them is \n> > currently in receivership. These chips were so bad that Dell was forced to \n> > recall several hundred of it's 2650's, where the motherboards caught fire!\n> > 2) the main defect of the SW is the NorthBridge, which could conceivably \n> > adversely affect traffic between RAM and the processor cache.\n> > 3) XeonMP is a very popular platform thanks to Dell, and we are not seeing \n> > more problem reports than we are.\n> > \n> > The other thing I'd like your comment on, Tom, is that Dirk appears to have \n> > reported that when he installed a non-bigmem kernel, the issue went away. 
\n> \n> I have BSD on a SuperMicro dual Xeon, so if folks want another\n> hardware/OS combination to test, I can give out logins to my machine.\n\nI can probably do some nighttime testing on a dual 2800MHz non-MP Xeon \nmachine as well. It's a Dell 2600 series machine and very fast. It has \nthe moderately fast 533MHz FSB so may not have as many problems as the MP \ntype CPUs seem to be having.\n\n", "msg_date": "Mon, 19 Apr 2004 14:12:32 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "There are a few things that you can do to help force yourself to be I/O\nbound. These include:\n\n- RAID 5 for write intensive applications, since multiple writes per synch\nwrite is good. (There is a special case for logging or other streaming\nsequential writes on RAID 5)\n\n- Data journaling file systems are helpful in stress testing your\ncheckpoints\n\n- Using midsized battery backed up write through buffering controllers. In\ngeneral, if you have a small cache, you see the problem directly, and a huge\ncache will balance out load and defer writes to quieter times. That is why a\nmidsized cache is so useful in showing stress in your system only when it is\nbeing stressed.\n\nOnly partly in jest,\n/Aaron\n\nBTW - I am truly curious about what happens to your system if you use\nseparate RAID 0+1 for your logs, disk sorts, and at least the most active\ntables. This should reduce I/O load by an order of magnitude.\n\n\"Vivek Khera\" <[email protected]> wrote in message\nnews:[email protected]...\n> >>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n>\n> JB> Aaron,\n> >> I do consulting, so they're all over the place and tend to be complex.\nVery\n> >> few fit in RAM, but still are very buffered. These are almost all\nbacked\n> >> with very high end I/O subsystems, with dozens of spindles with battery\n> >> backed up writethrough cache and gigs of buffers, which may be why I\nworry\n> >> so much about CPU. I have had this issue with multiple servers.\n>\n> JB> Aha, I think this is the difference. I never seem to be able to\n> JB> get my clients to fork out for adequate disk support. They are\n> JB> always running off single or double SCSI RAID in the host server;\n> JB> not the sort of setup you have.\n>\n> Even when I upgraded my system to a 14-spindle RAID5 with 128M cache\n> and 4GB RAM on a dual Xeon system, I still wind up being I/O bound\n> quite often.\n>\n> I think it depends on what your \"working set\" turns out to be. My\n> workload really spans a lot more of the DB than I can end up caching.\n>\n> -- \n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-301-869-4449 x806\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n", "msg_date": "Mon, 19 Apr 2004 16:41:22 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": "scott.marlowe wrote:\n> On Mon, 19 Apr 2004, Bruce Momjian wrote:\n>>I have BSD on a SuperMicro dual Xeon, so if folks want another\n>>hardware/OS combination to test, I can give out logins to my machine.\n> \n> I can probably do some nighttime testing on a dual 2800MHz non-MP Xeon \n> machine as well. 
It's a Dell 2600 series machine and very fast. It has \n> the moderately fast 533MHz FSB so may not have as many problems as the MP \n> type CPUs seem to be having.\n\nI've got a quad 2.8Ghz MP Xeon (IBM x445) that I could test on. Does \nanyone have a test set that can reliably reproduce the problem?\n\nJoe\n", "msg_date": "Mon, 19 Apr 2004 14:02:27 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "I would agree to Tom, that too much parameters are involved to blame \nbigmem. I have access to the following machines where the same \napplication operates:\n\na) Dual (4way) XEON MP, bigmem, HT off, ServerWorks chipset (a \nFujitsu-Siemens Primergy)\n\nperforms ok now because missing indexes were added but this is no proof \nthat this behaviour occurs again under high load, context switches are \nmoderate but have peaks to 40.000\n\nb) Dual XEON DP, non-bigmem, HT on, ServerWorks chipset (a Dell machine \nI think)\n\nperforms moderate because I see too much context switches here although \nthe mentioned indexes are created, context switches go up to 30.000 \noften, I can see 50% semop calls\n\nc) Dual XEON DP, non-bigmem, HT on, E7500 Intel chipset (Supermicro)\n\nperforms well and I could not observe context switch peaks here (one \nuser active), almost no extra semop calls\n\nd) Dual XEON DP, bigmem, HT off, ServerWorks chipset (a Fujitsu-Siemens \nPrimergy)\n\nperformance unknown at the moment (is offline) but looks like a) in the past\n\nI can offer to do tests on those machines if somebody would provide me \nsome test instructions to nail this problem down.\n\nDirk\n\n\n\nTom Lane wrote:\n\n>Josh Berkus <[email protected]> writes:\n> \n>\n>>The other thing I'd like your comment on, Tom, is that Dirk appears to have \n>>reported that when he installed a non-bigmem kernel, the issue went away. \n>>Dirk, is this correct?\n>> \n>>\n>\n>I'd be really surprised if that had anything to do with it. AFAIR\n>Dirk's test changed more than one variable and so didn't prove a\n>connection.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Mon, 19 Apr 2004 23:18:27 +0200", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Joe,\n\n> I've got a quad 2.8Ghz MP Xeon (IBM x445) that I could test on. Does \n> anyone have a test set that can reliably reproduce the problem?\n\nUnfortunately we can't seem to come up with one. So far we have 2 machines \nthat exhibit the issue, and their databases are highly confidential (State of \nWA education data). \n\nIt does seem to require a database which is in the many GB (> 10GB), and a \nsituation where a small subset of the data is getting hit repeatedly by \nmultiple processes. So you could try your own data warehouse, making sure \nthat you have at least 4 connections hitting one query after another.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 19 Apr 2004 14:55:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> I've got a quad 2.8Ghz MP Xeon (IBM x445) that I could test on. 
Does \n>> anyone have a test set that can reliably reproduce the problem?\n\n> Unfortunately we can't seem to come up with one.\n\n> It does seem to require a database which is in the many GB (> 10GB), and a \n> situation where a small subset of the data is getting hit repeatedly by \n> multiple processes.\n\nI do not think a large database is actually necessary; the test case\nJosh's client has is only hitting a relatively small amount of data.\nThe trick seems to be to cause lots and lots of ReadBuffer/ReleaseBuffer\nactivity without much else happening, and to do this from multiple\nbackends concurrently.\n\nI believe the best way to make this happen is a lot of relatively simple\n(but not short) indexscan queries that in aggregate touch just a bit\nless than shared_buffers worth of data. I have not tried to make a\nself-contained test case, but based on what I know now I think it should\nbe possible.\n\nI'll give this a shot later tonight --- it does seem that trying to\nreproduce the problem on different kinds of hardware is the next useful\nstep we can take.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Apr 2004 18:55:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Here is a test case. To set up, run the \"test_setup.sql\" script once;\nthen launch two copies of the \"test_run.sql\" script. (For those of\nyou with more than two CPUs, see whether you need one per CPU to make\ntrouble, or whether two test_runs are enough.) Check that you get a\nnestloops-with-index-scans plan shown by the EXPLAIN in test_run.\n\nIn isolation, test_run.sql should do essentially no syscalls at all once\nit's past the initial ramp-up. On a machine that's functioning per\nexpectations, multiple copies of test_run show a relatively low rate of\nsemop() calls --- a few per second, at most --- and maybe a delaying\nselect() here and there.\n\nWhat I actually see on Josh's client's machine is a context swap storm:\n\"vmstat 1\" shows CS rates around 170K/sec. strace'ing the backends\nshows a corresponding rate of semop() syscalls, with a few delaying\nselect()s sprinkled in. 
top(1) shows system CPU percent of 25-30\nand idle CPU percent of 16-20.\n\nI haven't bothered to check how long the test_run query takes, but if it\nends while you're still examining the behavior, just start it again.\n\nNote the test case assumes you've got shared_buffers set to at least\n1000; with smaller values, you may get some I/O syscalls, which will\nprobably skew the results.\n\n\t\t\tregards, tom lane\n\n\ndrop table test_data;\n\ncreate table test_data(f1 int);\n\ninsert into test_data values (random() * 100);\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\ninsert into test_data select random() * 100 from test_data;\n\ncreate index test_index on test_data(f1);\n\nvacuum verbose analyze test_data;\ncheckpoint;\n\n-- force nestloop indexscan plan\nset enable_seqscan to 0;\nset enable_mergejoin to 0;\nset enable_hashjoin to 0;\n\nexplain\nselect count(*) from test_data a, test_data b, test_data c\nwhere a.f1 = b.f1 and b.f1 = c.f1;\n\nselect count(*) from test_data a, test_data b, test_data c\nwhere a.f1 = b.f1 and b.f1 = c.f1;", "msg_date": "Mon, 19 Apr 2004 20:01:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "I wrote:\n> Here is a test case.\n\nHmmm ... I've been able to reproduce the CS storm on a dual Athlon,\nwhich seems to pretty much let the Xeon per se off the hook. Anybody\ngot a multiple Opteron to try? Totally non-Intel CPUs?\n\nIt would be interesting to see results with non-Linux kernels, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Apr 2004 20:53:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Tom Lane wrote:\n> Here is a test case. To set up, run the \"test_setup.sql\" script once;\n> then launch two copies of the \"test_run.sql\" script. (For those of\n> you with more than two CPUs, see whether you need one per CPU to make\n> trouble, or whether two test_runs are enough.) Check that you get a\n> nestloops-with-index-scans plan shown by the EXPLAIN in test_run.\n\nCheck.\n\n> In isolation, test_run.sql should do essentially no syscalls at all once\n> it's past the initial ramp-up. On a machine that's functioning per\n> expectations, multiple copies of test_run show a relatively low rate of\n> semop() calls --- a few per second, at most --- and maybe a delaying\n> select() here and there.\n> \n> What I actually see on Josh's client's machine is a context swap storm:\n> \"vmstat 1\" shows CS rates around 170K/sec. 
strace'ing the backends\n> shows a corresponding rate of semop() syscalls, with a few delaying\n> select()s sprinkled in. top(1) shows system CPU percent of 25-30\n> and idle CPU percent of 16-20.\n\nYour test case works perfectly. I ran 4 concurrent psql sessions, on a \nquad Xeon (IBM x445, 2.8GHz, 4GB RAM), hyperthreaded. Heres what 'top' \nlooks like:\n\n177 processes: 173 sleeping, 3 running, 1 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 35.9% 0.0% 7.2% 0.0% 0.0% 0.0% 56.8%\n cpu00 19.6% 0.0% 4.9% 0.0% 0.0% 0.0% 75.4%\n cpu01 44.1% 0.0% 7.8% 0.0% 0.0% 0.0% 48.0%\n cpu02 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0%\n cpu03 32.3% 0.0% 13.7% 0.0% 0.0% 0.0% 53.9%\n cpu04 21.5% 0.0% 10.7% 0.0% 0.0% 0.0% 67.6%\n cpu05 42.1% 0.0% 9.8% 0.0% 0.0% 0.0% 48.0%\n cpu06 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n cpu07 27.4% 0.0% 10.7% 0.0% 0.0% 0.0% 61.7%\nMem: 4123700k av, 3933896k used, 189804k free, 0k shrd, 221948k buff\n 2492124k actv, 760612k in_d, 41416k in_c\nSwap: 2040244k av, 5632k used, 2034612k free 3113272k cached\n\nNote that cpu06 is not a postgres process. The output of vmstat looks \nlike this:\n\n# vmstat 1\nprocs memory swap io system \n cpu\nr b swpd free buff cache si so bi bo in cs us sy id wa\n4 0 5632 184264 221948 3113308 0 0 0 0 0 0 0 0 0 0\n3 0 5632 184264 221948 3113308 0 0 0 0 112 211894 36 9 55 0\n5 0 5632 184264 221948 3113308 0 0 0 0 125 222071 39 8 53 0\n4 0 5632 184264 221948 3113308 0 0 0 0 110 215097 39 10 52 0\n1 0 5632 184588 221948 3113308 0 0 0 96 139 187561 35 10 55 0\n3 0 5632 184588 221948 3113308 0 0 0 0 114 241731 38 10 52 0\n3 0 5632 184920 221948 3113308 0 0 0 0 132 257168 40 9 51 0\n1 0 5632 184912 221948 3113308 0 0 0 0 114 251802 38 9 54 0\n\n> Note the test case assumes you've got shared_buffers set to at least\n> 1000; with smaller values, you may get some I/O syscalls, which will\n> probably skew the results.\n\n shared_buffers\n----------------\n 16384\n(1 row)\n\nI found that killing three of the four concurrent queries dropped \ncontext switches to about 70,000 to 100,000. Two or more sessions brings \nit up to 200K+.\n\nJoe\n", "msg_date": "Mon, 19 Apr 2004 20:00:05 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "When grilled further on (Mon, 19 Apr 2004 20:53:09 -0400),\nTom Lane <[email protected]> confessed:\n\n> I wrote:\n> > Here is a test case.\n> \n> Hmmm ... I've been able to reproduce the CS storm on a dual Athlon,\n> which seems to pretty much let the Xeon per se off the hook. Anybody\n> got a multiple Opteron to try? Totally non-Intel CPUs?\n> \n> It would be interesting to see results with non-Linux kernels, too.\n> \n\nSame problem on my dual AMD MP with 2.6.5 kernel using two sessions of your\ntest, but maybe not quite as severe. The highest CS values I saw was 102k, with\nsome non-db number crunching going on in parallel with the test. 'Average'\nabout 80k with two instances. Using the anticipatory scheduler.\n\nA single instance pulls in around 200-300 CS, and no tests running around\n200-300 CS (i.e. 
no CS difference).\n\nA snipet:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n 3 0 284 90624 93452 1453740 0 0 0 0 1075 76548 83 17 0 0\n 6 0 284 125312 93452 1470196 0 0 0 0 1073 87702 78 22 0 0\n 3 0 284 178392 93460 1420208 0 0 76 298 1083 67721 77 24 0 0\n 4 0 284 177120 93460 1421500 0 0 1104 0 1054 89593 80 21 0 0\n 5 0 284 173504 93460 1425172 0 0 3584 0 1110 65536 81 19 0 0\n 4 0 284 169984 93460 1428708 0 0 3456 0 1098 66937 81 20 0 0\n 6 0 284 170944 93460 1428708 0 0 8 0 1045 66065 81 19 0 0\n 6 0 284 167288 93460 1428776 0 0 0 8 1097 75560 81 19 0 0\n 6 0 284 136296 93460 1458356 0 0 0 0 1036 80808 75 26 0 0\n 5 0 284 132864 93460 1461688 0 0 0 0 1007 76071 84 17 0 0\n 4 0 284 132880 93460 1461688 0 0 0 0 1079 86903 82 18 0 0\n 5 0 284 132880 93460 1461688 0 0 0 0 1078 79885 83 17 0 0\n 6 0 284 132648 93460 1461688 0 0 0 760 1228 66564 86 14 0 0\n 6 0 284 132648 93460 1461688 0 0 0 0 1047 69741 86 15 0 0\n 6 0 284 132672 93460 1461688 0 0 0 0 1057 79052 84 16 0 0\n 5 0 284 132672 93460 1461688 0 0 0 0 1054 81109 82 18 0 0\n 5 0 284 132736 93460 1461688 0 0 0 0 1043 91725 80 20 0 0\n\n\nCheers,\nRob\n\n-- \n 21:33:03 up 3 days, 1:10, 3 users, load average: 5.05, 4.67, 4.22\nLinux 2.6.5-01 #5 SMP Tue Apr 6 21:32:39 MDT 2004", "msg_date": "Mon, 19 Apr 2004 21:47:02 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "\nSame problem with dual 1Ghz P3's running Postgres 7.4.2, linux 2.4.x, and \n2GB ram, under load, with long transactions (i.e. 1 \"cannot serialize\" \nrollback per minute). 200K was the worst observed with vmstat.\n\nFinally moved DB to a single xeon box.\n\n", "msg_date": "Mon, 19 Apr 2004 22:18:21 -0700 (PDT)", "msg_from": "jelle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Hi Tom,\n\nYou still have an account on my Unixware Bi-Xeon hyperthreded machine.\nFeel free to use it for your tests.\nOn Mon, 19 Apr 2004, Tom Lane wrote:\n\n> Date: Mon, 19 Apr 2004 20:53:09 -0400\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: Joe Conway <[email protected]>, scott.marlowe <[email protected]>,\n> Bruce Momjian <[email protected]>, [email protected],\n> [email protected], Neil Conway <[email protected]>\n> Subject: Re: [PERFORM] Wierd context-switching issue on Xeon\n>\n> I wrote:\n> > Here is a test case.\n>\n> Hmmm ... I've been able to reproduce the CS storm on a dual Athlon,\n> which seems to pretty much let the Xeon per se off the hook. Anybody\n> got a multiple Opteron to try? Totally non-Intel CPUs?\n>\n> It would be interesting to see results with non-Linux kernels, too.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n", "msg_date": "Tue, 20 Apr 2004 12:35:50 +0200 (MET DST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "\nOn Apr 19, 2004, at 8:01 PM, Tom Lane wrote:\n[test case]\n\nQuad P3-700Mhz, ServerWorks, pg 7.4.2 - 1 process: 10-30 cs / second\n \t\t\t\t\t\t\t 2 process: 100k cs / sec\n\t\t\t\t\t\t\t\t 3 process: 140k cs / sec\n\t\t\t\t\t\t\t\t 8 process: 115k cs / sec\n\nDual P2-450Mhz, non-serverworks (piix) - 1 process 15-20 / sec\n \t\t\t\t\t\t 2 process 30k / sec\n \t\t\t\t\t\t\t 3 (up to 7) process: 15k /sec\n\n(Yes, I verified with more processes the cs's drop)\n\nAnd finally,\n\n6 cpu sun e4500, solaris 2.6, pg 7.4.2: 1 - 10 processes: hovered \nbetween 2-3k cs/second (there was other stuff running on the machine as \nwell)\n\n\nVerrry interesting.\nI've got a dual G4 at home, but for convenience Apple doesn't ship a \nvmstat that tells context switches\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Tue, 20 Apr 2004 08:46:12 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Dual Athlon\n\nWith one process running 30 cs/second\nwith two process running 15000 cs/second\n\nDave\nOn Tue, 2004-04-20 at 08:46, Jeff wrote:\n> On Apr 19, 2004, at 8:01 PM, Tom Lane wrote:\n> [test case]\n> \n> Quad P3-700Mhz, ServerWorks, pg 7.4.2 - 1 process: 10-30 cs / second\n> \t\t\t\t\t\t\t 2 process: 100k cs / sec\n> \t\t\t\t\t\t\t\t 3 process: 140k cs / sec\n> \t\t\t\t\t\t\t\t 8 process: 115k cs / sec\n> \n> Dual P2-450Mhz, non-serverworks (piix) - 1 process 15-20 / sec\n> \t\t\t\t\t\t 2 process 30k / sec\n> \t\t\t\t\t\t\t 3 (up to 7) process: 15k /sec\n> \n> (Yes, I verified with more processes the cs's drop)\n> \n> And finally,\n> \n> 6 cpu sun e4500, solaris 2.6, pg 7.4.2: 1 - 10 processes: hovered \n> between 2-3k cs/second (there was other stuff running on the machine as \n> well)\n> \n> \n> Verrry interesting.\n> I've got a dual G4 at home, but for convenience Apple doesn't ship a \n> vmstat that tells context switches\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n> \n> !DSPAM:40851da1199651145780980!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 20 Apr 2004 09:06:59 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "As a cross-ref to all the 7.4.x tests people have sent in, here's 7.2.3 (Redhat 7.3), Quad Xeon 700MHz/1MB L2 cache, 3GB RAM.\n\nIdle-ish (it's a production server) cs/sec ~5000\n\n3 test queries running:\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 3 0 0 23380 577680 105912 2145140 0 0 0 0 107 116890 50 14 35\n 2 0 0 23380 577680 105912 2145140 0 0 0 0 114 118583 50 15 34\n 2 0 0 23380 577680 105912 2145140 0 0 0 0 107 115842 54 14 32\n 2 1 0 23380 577680 105920 2145140 0 0 0 32 156 117549 50 16 35\n\nHTH\n\nMatt\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: 20 April 2004 01:02\n> To: [email protected]\n> Cc: Joe Conway; 
scott.marlowe; Bruce Momjian; [email protected];\n> [email protected]; Neil Conway\n> Subject: Re: [PERFORM] Wierd context-switching issue on Xeon \n> \n> \n> Here is a test case. To set up, run the \"test_setup.sql\" script once;\n> then launch two copies of the \"test_run.sql\" script. (For those of\n> you with more than two CPUs, see whether you need one per CPU to make\n> trouble, or whether two test_runs are enough.) Check that you get a\n> nestloops-with-index-scans plan shown by the EXPLAIN in test_run.\n> \n> In isolation, test_run.sql should do essentially no syscalls at all once\n> it's past the initial ramp-up. On a machine that's functioning per\n> expectations, multiple copies of test_run show a relatively low rate of\n> semop() calls --- a few per second, at most --- and maybe a delaying\n> select() here and there.\n> \n> What I actually see on Josh's client's machine is a context swap storm:\n> \"vmstat 1\" shows CS rates around 170K/sec. strace'ing the backends\n> shows a corresponding rate of semop() syscalls, with a few delaying\n> select()s sprinkled in. top(1) shows system CPU percent of 25-30\n> and idle CPU percent of 16-20.\n> \n> I haven't bothered to check how long the test_run query takes, but if it\n> ends while you're still examining the behavior, just start it again.\n> \n> Note the test case assumes you've got shared_buffers set to at least\n> 1000; with smaller values, you may get some I/O syscalls, which will\n> probably skew the results.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n", "msg_date": "Tue, 20 Apr 2004 14:44:40 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Dirk Lutzebaeck wrote:\n\n> c) Dual XEON DP, non-bigmem, HT on, E7500 Intel chipset (Supermicro)\n>\n> performs well and I could not observe context switch peaks here (one \n> user active), almost no extra semop calls\n\nDid Tom's test here: with 2 processes I'll reach 200k+ CS with peaks to \n300k CS. Bummer.. Josh, I don't think you can bash the ServerWorks \nchipset here nor bigmem.\n\nDirk\n\n\n", "msg_date": "Tue, 20 Apr 2004 16:29:01 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Dirk Lutzeb�ck wrote:\n> Dirk Lutzebaeck wrote:\n> \n> > c) Dual XEON DP, non-bigmem, HT on, E7500 Intel chipset (Supermicro)\n> >\n> > performs well and I could not observe context switch peaks here (one \n> > user active), almost no extra semop calls\n> \n> Did Tom's test here: with 2 processes I'll reach 200k+ CS with peaks to \n> 300k CS. Bummer.. Josh, I don't think you can bash the ServerWorks \n> chipset here nor bigmem.\n\nDave Cramer reproduced the problem on my SuperMicro dual Xeon on BSD/OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 20 Apr 2004 12:48:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "I verified problem on a Dual Opteron server. 
I temporarily killed the\nnormal load, so the server was largely idle when the test was run.\n\nHardware:\n2x Opteron 242\nRioworks HDAMA server board\n4Gb RAM\n\nOS Kernel:\nRedHat9 + XFS\n\n\n1 proc: 10-15 cs/sec\n2 proc: 400,000-420,000 cs/sec\n\n\n\nj. andrew rogers\n\n\n\n", "msg_date": "20 Apr 2004 10:17:22 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Dirk, Tom,\n\nOK, off IRC, I have the following reports:\n\nLinux 2.4.21 or 2.4.20 on dual Pentium III : problem verified\nLinux 2.4.21 or 2.4.20 on dual Penitum II : problem cannot be reproduced\nSolaris 2.6 on 6 cpu e4500 (using 8 processes) : problem not reproduced\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 20 Apr 2004 10:58:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "> It would be interesting to see results with non-Linux kernels, too.\n\nDual Celeron 500Mhz (Abit BP6 mobo) - client & server on same machine\n\n2 processes FreeBSD (5.2.1): 1800cs\n3 processes FreeBSD: 14000cs\n4 processes FreeBSD: 14500cs\n\n2 processes Linux (2.4.18 kernel): 52000cs\n3 processes Linux: 10000cs\n4 processes Linux: 20000cs\n\n\n", "msg_date": "Tue, 20 Apr 2004 15:48:50 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "I tried to test how this is related to cache coherency, by forcing \naffinity of the two test_run.sql processes to the two cores (pipelines? \nthreads) of a single hyperthreaded xeon processor in an smp xeon box.\n\nWhen the processes are allowed to run on distinct chips in the smp box, \nthe CS storm happens. When they are \"bound\" to the two cores of a \nsingle hyperthreaded Xeon in the smp box, the CS storm *does* happen.\n\n\n\nI used the taskset command:\ntaskset 01 -p <pid for backend of test_run.sql 1>\ntaskset 01 -p <pid for backend of test_run.sql 1>\n\nI guess that 0 and 1 are the two cores (pipelines? hyper-threads?) on \nthe first Xeon processor in the box.\n\nI did this on RedHat Fedora core1 on an intel motherboard (I'll get the \npart no if it matters)\n\nduring storms : 300k CS/sec, 75% idle (on a dual xeon (four core)) \nmachine (suggesting serializing/sleeping processes)\nno storm: 50k CS/sec, 50% idle (suggesting 2 cpu bound processes)\n\n\nMaybe there's a \"hot block\" that is bouncing back and forth between \ncaches? or maybe the page holding semaphores?\n\nOn Apr 19, 2004, at 5:53 PM, Tom Lane wrote:\n\n> I wrote:\n>> Here is a test case.\n>\n> Hmmm ... I've been able to reproduce the CS storm on a dual Athlon,\n> which seems to pretty much let the Xeon per se off the hook. Anybody\n> got a multiple Opteron to try? Totally non-Intel CPUs?\n>\n> It would be interesting to see results with non-Linux kernels, too.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 20 Apr 2004 13:02:43 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hmmm ... I've been able to reproduce the CS storm on a dual Athlon,\n> which seems to pretty much let the Xeon per se off the hook. 
Anybody\n> got a multiple Opteron to try? Totally non-Intel CPUs?\n> \n> It would be interesting to see results with non-Linux kernels, too.\n> \n> \t\t\tregards, tom lane\n\nI also tested on an dual Athlon MP Tyan Thunder motherboard (2xMP2800+, \n2.5GB memory), and got the same high numbers.\nI then ran with kernel 2.6.5, it lowered them a little, but it's still \nsome ping pong effect here. I wonder if this is some effect of the \nscheduler, maybe the shed frequency alone (100HZ vs 1000HZ).\n\nIt would be interesting to see what a locking implementation ala FUTEX \nstyle would give on an 2.6 kernel, as i understood it that would work \ncross process with some work.\n\nThe first file attached is kernel 2.4 running one process then starting \nup the other one.\nSame with second file, but with kernel 2.6...\n\nRegards\nMagnus", "msg_date": "Wed, 21 Apr 2004 00:47:49 +0200", "msg_from": "\"Magnus Naeslund(t)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Ooops, what I meant to say was that 2 threads bound to one \n(hyperthreaded) cpu does *NOT* cause the storm, even on an smp xeon.\n\nTherefore, the context switches may be a result of cache coherency \nrelated delays. (2 threads on one hyperthreaded cpu presumably have \ntightly coupled 1,l2 cache.)\n\nOn Apr 20, 2004, at 1:02 PM, Paul Tuckfield wrote:\n\n> I tried to test how this is related to cache coherency, by forcing \n> affinity of the two test_run.sql processes to the two cores \n> (pipelines? threads) of a single hyperthreaded xeon processor in an \n> smp xeon box.\n>\n> When the processes are allowed to run on distinct chips in the smp \n> box, the CS storm happens. When they are \"bound\" to the two cores of \n> a single hyperthreaded Xeon in the smp box, the CS storm *does* \n> happen.\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ er, meant *NOT HAPPEN*\n>\n>\n>\n> I used the taskset command:\n> taskset 01 -p <pid for backend of test_run.sql 1>\n> taskset 01 -p <pid for backend of test_run.sql 1>\n>\n> I guess that 0 and 1 are the two cores (pipelines? hyper-threads?) on \n> the first Xeon processor in the box.\n>\n> I did this on RedHat Fedora core1 on an intel motherboard (I'll get \n> the part no if it matters)\n>\n> during storms : 300k CS/sec, 75% idle (on a dual xeon (four core)) \n> machine (suggesting serializing/sleeping processes)\n> no storm: 50k CS/sec, 50% idle (suggesting 2 cpu bound processes)\n>\n>\n> Maybe there's a \"hot block\" that is bouncing back and forth between \n> caches? or maybe the page holding semaphores?\n>\n> On Apr 19, 2004, at 5:53 PM, Tom Lane wrote:\n>\n>> I wrote:\n>>> Here is a test case.\n>>\n>> Hmmm ... I've been able to reproduce the CS storm on a dual Athlon,\n>> which seems to pretty much let the Xeon per se off the hook. Anybody\n>> got a multiple Opteron to try? 
Totally non-Intel CPUs?\n>>\n>> It would be interesting to see results with non-Linux kernels, too.\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n", "msg_date": "Tue, 20 Apr 2004 17:34:13 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Joe Conway wrote:\n>> In isolation, test_run.sql should do essentially no syscalls at all once\n>> it's past the initial ramp-up. On a machine that's functioning per\n>> expectations, multiple copies of test_run show a relatively low rate of\n>> semop() calls --- a few per second, at most --- and maybe a delaying\n>> select() here and there.\n\nHere's results for 7.4 on a dual Athlon server running fedora core:\n\nCPU states: cpu user nice system irq softirq iowait idle\n total 86.0% 0.0% 52.4% 0.0% 0.0% 0.0% 61.2%\n cpu00 37.6% 0.0% 29.7% 0.0% 0.0% 0.0% 32.6%\n cpu01 48.5% 0.0% 22.7% 0.0% 0.0% 0.0% 28.7%\n\nprocs memory swap io system \n cpu\n r b swpd free buff cache si so bi bo in cs\n 1 0 120448 25764 48300 1094576 0 0 0 124 170 187\n 1 0 120448 25780 48300 1094576 0 0 0 0 152 89\n 2 0 120448 25744 48300 1094580 0 0 0 60 141 78290\n 2 0 120448 25752 48300 1094580 0 0 0 0 131 140326\n 2 0 120448 25756 48300 1094576 0 0 0 40 122 140100\n 2 0 120448 25764 48300 1094584 0 0 0 60 133 136595\n 2 0 120448 24284 48300 1094584 0 0 0 200 138 135151\n\nThe jump in cs corresponds to starting the query in the second session.\n\nJoe\n\n", "msg_date": "Tue, 20 Apr 2004 20:46:58 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "How long is this test supposed to run?\n\nI've launched just 1 for testing, the plan seems horrible; the test is cpu\nbound and hasn't finished yet after 17:02 min of CPU time, dual XEON 2.6G\nUnixware 713\n\nThe machine is a Fujitsu-Siemens TX 200 server\n On Mon, 19 Apr 2004, Tom Lane wrote:\n\n> Date: Mon, 19 Apr 2004 20:01:56 -0400\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: Joe Conway <[email protected]>, scott.marlowe <[email protected]>,\n> Bruce Momjian <[email protected]>, [email protected],\n> [email protected], Neil Conway <[email protected]>\n> Subject: Re: [PERFORM] Wierd context-switching issue on Xeon\n>\n> Here is a test case. To set up, run the \"test_setup.sql\" script once;\n> then launch two copies of the \"test_run.sql\" script. (For those of\n> you with more than two CPUs, see whether you need one per CPU to make\n> trouble, or whether two test_runs are enough.) Check that you get a\n> nestloops-with-index-scans plan shown by the EXPLAIN in test_run.\n>\n> In isolation, test_run.sql should do essentially no syscalls at all once\n> it's past the initial ramp-up. On a machine that's functioning per\n> expectations, multiple copies of test_run show a relatively low rate of\n> semop() calls --- a few per second, at most --- and maybe a delaying\n> select() here and there.\n>\n> What I actually see on Josh's client's machine is a context swap storm:\n> \"vmstat 1\" shows CS rates around 170K/sec. 
strace'ing the backends\n> shows a corresponding rate of semop() syscalls, with a few delaying\n> select()s sprinkled in. top(1) shows system CPU percent of 25-30\n> and idle CPU percent of 16-20.\n>\n> I haven't bothered to check how long the test_run query takes, but if it\n> ends while you're still examining the behavior, just start it again.\n>\n> Note the test case assumes you've got shared_buffers set to at least\n> 1000; with smaller values, you may get some I/O syscalls, which will\n> probably skew the results.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n", "msg_date": "Wed, 21 Apr 2004 13:18:53 +0200 (MET DST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "It is intended to run indefinately.\n\nDirk\n\[email protected] wrote:\n\n>How long is this test supposed to run?\n>\n>I've launched just 1 for testing, the plan seems horrible; the test is cpu\n>bound and hasn't finished yet after 17:02 min of CPU time, dual XEON 2.6G\n>Unixware 713\n>\n>The machine is a Fujitsu-Siemens TX 200 server\n> \n>\n\n\n", "msg_date": "Wed, 21 Apr 2004 14:10:55 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "After some testing if you use the current head code for s_lock.c which\nhas some mods in it to alleviate this situation, and change\nSPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.\nI am seeing a slight degradation in throughput using pgbench -c 10 -t\n1000 but it might be liveable, considering the alternative is unbearable\nin some situations.\n\nCan anyone else replicate my results?\n\nDave\nOn Wed, 2004-04-21 at 08:10, Dirk_Lutzeb�ck wrote:\n> It is intended to run indefinately.\n> \n> Dirk\n> \n> [email protected] wrote:\n> \n> >How long is this test supposed to run?\n> >\n> >I've launched just 1 for testing, the plan seems horrible; the test is cpu\n> >bound and hasn't finished yet after 17:02 min of CPU time, dual XEON 2.6G\n> >Unixware 713\n> >\n> >The machine is a Fujitsu-Siemens TX 200 server\n> > \n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n> \n> !DSPAM:40866735106778584283649!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Wed, 21 Apr 2004 11:05:31 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Dave,\n\n> After some testing if you use the current head code for s_lock.c which\n> has some mods in it to alleviate this situation, and change\n> SPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.\n> I am seeing a slight degradation in throughput using pgbench -c 10 -t\n> 1000 but it might be liveable, considering the alternative is unbearable\n> in some situations.\n>\n> Can anyone else replicate my results?\n\nCan you produce a patch against 7.4.1? 
I'd like to test your fix against a \nreal-world database.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 21 Apr 2004 10:29:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Paul Tuckfield <[email protected]> writes:\n>> I used the taskset command:\n>> taskset 01 -p <pid for backend of test_run.sql 1>\n>> taskset 01 -p <pid for backend of test_run.sql 1>\n>> \n>> I guess that 0 and 1 are the two cores (pipelines? hyper-threads?) on \n>> the first Xeon processor in the box.\n\nAFAICT, what you've actually done here is to bind both backends to the\nfirst logical processor of the first Xeon. If you'd used 01 and 02\nas the affinity masks then you'd have bound them to the two cores of\nthat Xeon, but what you actually did simply reduces the system to a\nuniprocessor. In that situation the context swap rate will be normally\none swap per scheduler timeslice, and at worst two swaps per timeslice\n(if a process is swapped away from while it holds a lock the other one\nwants). It doesn't prove a lot about our SMP problem though.\n\nI don't have access to a Xeon with both taskset and hyperthreading\nenabled, so I can't check what happens when you do the taskset correctly\n... could you retry?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 23:10:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Magus,\n\n> It would be interesting to see what a locking implementation ala FUTEX \n> style would give on an 2.6 kernel, as i understood it that would work \n> cross process with some work.\n\nI'mm working on testing a FUTEX patch, but am having some trouble with it. \nWill let you know the results ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 26 Apr 2004 12:20:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "When grilled further on (Wed, 21 Apr 2004 10:29:43 -0700),\nJosh Berkus <[email protected]> confessed:\n\n> Dave,\n> \n> > After some testing if you use the current head code for s_lock.c which\n> > has some mods in it to alleviate this situation, and change\n> > SPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.\n> > I am seeing a slight degradation in throughput using pgbench -c 10 -t\n> > 1000 but it might be liveable, considering the alternative is unbearable\n> > in some situations.\n> >\n> > Can anyone else replicate my results?\n> \n> Can you produce a patch against 7.4.1? 
I'd like to test your fix against a \n> real-world database.\n\nI would like to see the same, as I have a system that exhibits the same behavior\non a production db that's running 7.4.1.\n\nCheers,\nRob\n\n\n-- \n 18:55:22 up 1:40, 4 users, load average: 2.00, 2.04, 2.00\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Wed, 28 Apr 2004 18:57:53 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Hi\n\nI'd LOVE to contribute on this but I don't have vmstat and I'm not running\nlinux.\n\nHow can I help?\nRegards\nOn Wed, 28 Apr 2004, Robert Creager wrote:\n\n> Date: Wed, 28 Apr 2004 18:57:53 -0600\n> From: Robert Creager <[email protected]>\n> To: Josh Berkus <[email protected]>\n> Cc: [email protected], Dirk_Lutzeb�ck <[email protected]>, [email protected],\n> Tom Lane <[email protected]>, Joe Conway <[email protected]>,\n> scott.marlowe <[email protected]>,\n> Bruce Momjian <[email protected]>, [email protected],\n> Neil Conway <[email protected]>\n> Subject: Re: [PERFORM] Wierd context-switching issue on Xeon\n>\n> When grilled further on (Wed, 21 Apr 2004 10:29:43 -0700),\n> Josh Berkus <[email protected]> confessed:\n>\n> > Dave,\n> >\n> > > After some testing if you use the current head code for s_lock.c which\n> > > has some mods in it to alleviate this situation, and change\n> > > SPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.\n> > > I am seeing a slight degradation in throughput using pgbench -c 10 -t\n> > > 1000 but it might be liveable, considering the alternative is unbearable\n> > > in some situations.\n> > >\n> > > Can anyone else replicate my results?\n> >\n> > Can you produce a patch against 7.4.1? I'd like to test your fix against a\n> > real-world database.\n>\n> I would like to see the same, as I have a system that exhibits the same behavior\n> on a production db that's running 7.4.1.\n>\n> Cheers,\n> Rob\n>\n>\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n", "msg_date": "Thu, 29 Apr 2004 15:20:18 +0200 (MET DST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Rob,\n\n> I would like to see the same, as I have a system that exhibits the same \nbehavior\n> on a production db that's running 7.4.1.\n\nIf you checked the thread follow-ups, you'd see that *decreasing* \nspins_per_delay was not beneficial. Instead, try increasing them, one step \nat a time:\n\n(take baseline measurement at 100)\n250\n500\n1000\n1500\n2000\n3000\n5000\n\n... until you find an optimal level. Then report the results to us!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 29 Apr 2004 11:21:51 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "When grilled further on (Thu, 29 Apr 2004 11:21:51 -0700),\nJosh Berkus <[email protected]> confessed:\n\n> spins_per_delay was not beneficial. 
Instead, try increasing them, one step \n> at a time:\n> \n> (take baseline measurement at 100)\n> 250\n> 500\n> 1000\n> 1500\n> 2000\n> 3000\n> 5000\n> \n> ... until you find an optimal level. Then report the results to us!\n> \n\nSome results. The patch mentioned is what Dave Cramer posted to the Performance\nlist on 4/21.\n\nA Perl script monitored <vmstat 1> for 120 seconds and generated max and average\nvalues. Unfortunately, I am not present on site, so I cannot physically change\nthe device under test to increase the db load to where it hit about 10 days ago.\n That will have to wait till the 'real' work week on Monday.\n\nContext switches - avg max\n\nDefault 7.4.1 code : 10665 69470\nDefault patch - 10 : 17297 21929\npatch at 100 : 26825 87073\npatch at 1000 : 37580 110849\n\nNow granted, the db isn't showing the CS swap problem in a bad way (at all), but\nshould the numbers be trending the way they are with the patched code? Or will\nthese numbers potentially change dramatically when I can load up the db?\n\nAnd, presuming I can re-produce what I was seeing previously (200K CS/s), you\nfolks want me to carry on with more testing of the patch and report the results?\n Or just go away and be quiet...\n\nThe information is provided from a HP Proliant DL380 G3 with 2x 2.4 GHZ Xenon's\n(with HT enabled) 2 GB ram, running 2.4.22-26mdkenterprise kernel, RAID\ncontroller w/128 Mb battery backed cache RAID 1 on 2x 15K RPM drives for WAL\ndrive, RAID 0+1 on 4x 10K RPM drives for data. The only job this box has is\nrunning this db.\n\nCheers,\nRob\n\n-- \n 21:54:48 up 2 days, 4:39, 4 users, load average: 2.00, 2.03, 2.00\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Fri, 30 Apr 2004 22:03:06 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "No, don't go away and be quiet. Keep testing, it may be that under\nnormal operation the context switching goes up but under the conditions\nthat you were seeing the high CS it may not be as bad.\n\nAs others have mentioned the real solution to this is to rewrite the\nbuffer management so that the lock isn't quite as coarse grained.\n\nDave\nOn Sat, 2004-05-01 at 00:03, Robert Creager wrote:\n> When grilled further on (Thu, 29 Apr 2004 11:21:51 -0700),\n> Josh Berkus <[email protected]> confessed:\n> \n> > spins_per_delay was not beneficial. Instead, try increasing them, one step \n> > at a time:\n> > \n> > (take baseline measurement at 100)\n> > 250\n> > 500\n> > 1000\n> > 1500\n> > 2000\n> > 3000\n> > 5000\n> > \n> > ... until you find an optimal level. Then report the results to us!\n> > \n> \n> Some results. The patch mentioned is what Dave Cramer posted to the Performance\n> list on 4/21.\n> \n> A Perl script monitored <vmstat 1> for 120 seconds and generated max and average\n> values. Unfortunately, I am not present on site, so I cannot physically change\n> the device under test to increase the db load to where it hit about 10 days ago.\n> That will have to wait till the 'real' work week on Monday.\n> \n> Context switches - avg max\n> \n> Default 7.4.1 code : 10665 69470\n> Default patch - 10 : 17297 21929\n> patch at 100 : 26825 87073\n> patch at 1000 : 37580 110849\n> \n> Now granted, the db isn't showing the CS swap problem in a bad way (at all), but\n> should the numbers be trending the way they are with the patched code? 
Or will\n> these numbers potentially change dramatically when I can load up the db?\n> \n> And, presuming I can re-produce what I was seeing previously (200K CS/s), you\n> folks want me to carry on with more testing of the patch and report the results?\n> Or just go away and be quiet...\n> \n> The information is provided from a HP Proliant DL380 G3 with 2x 2.4 GHZ Xenon's\n> (with HT enabled) 2 GB ram, running 2.4.22-26mdkenterprise kernel, RAID\n> controller w/128 Mb battery backed cache RAID 1 on 2x 15K RPM drives for WAL\n> drive, RAID 0+1 on 4x 10K RPM drives for data. The only job this box has is\n> running this db.\n> \n> Cheers,\n> Rob\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Sat, 01 May 2004 14:50:47 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Found some co-workers at work yesterday to load up my library...\n\nThe sample period is 5 minutes long (vs 2 minutes previously):\n\nContext switches - avg max\n\nDefault 7.4.1 code : 48784 107354\nDefault patch - 10 : 20400 28160\npatch at 100 : 38574 85372\npatch at 1000 : 41188 106569\n\nThe reading at 1000 was not produced under the same circumstances as the prior\nreadings as I had to replace my device under test with a simulated one. The\nreal one died.\n\nThe previous run with smaller database and 120 second averages:\n\nContext switches - avg max\n\nDefault 7.4.1 code : 10665 69470\nDefault patch - 10 : 17297 21929\npatch at 100 : 26825 87073\npatch at 1000 : 37580 110849\n\n-- \n 20:13:50 up 3 days, 2:58, 4 users, load average: 2.12, 2.14, 2.10\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Sun, 2 May 2004 09:20:47 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Robert,\n\nThe real question is does it help under real life circumstances ? \n\nDid you do the tests with Tom's sql code that is designed to create high\ncontext switchs ?\n\nDave\nOn Sun, 2004-05-02 at 11:20, Robert Creager wrote:\n> Found some co-workers at work yesterday to load up my library...\n> \n> The sample period is 5 minutes long (vs 2 minutes previously):\n> \n> Context switches - avg max\n> \n> Default 7.4.1 code : 48784 107354\n> Default patch - 10 : 20400 28160\n> patch at 100 : 38574 85372\n> patch at 1000 : 41188 106569\n> \n> The reading at 1000 was not produced under the same circumstances as the prior\n> readings as I had to replace my device under test with a simulated one. The\n> real one died.\n> \n> The previous run with smaller database and 120 second averages:\n> \n> Context switches - avg max\n> \n> Default 7.4.1 code : 10665 69470\n> Default patch - 10 : 17297 21929\n> patch at 100 : 26825 87073\n> patch at 1000 : 37580 110849\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Sun, 02 May 2004 11:39:22 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "When grilled further on (Sun, 02 May 2004 11:39:22 -0400),\nDave Cramer <[email protected]> confessed:\n\n> Robert,\n> \n> The real question is does it help under real life circumstances ? \n\nI'm not yet at the point where the CS's are causing appreciable delays. 
I\nshould get there early this week and will be able to measure the relief your\npatch may provide.\n\n> \n> Did you do the tests with Tom's sql code that is designed to create high\n> context switchs ?\n\nNo, I'm using my queries/data.\n\nCheers,\nRob\n\n-- \n 10:44:58 up 3 days, 17:30, 4 users, load average: 2.00, 2.04, 2.01\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Sun, 2 May 2004 15:46:49 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "\nDid we ever come to a conclusion about excessive SMP context switching\nunder load?\n\n---------------------------------------------------------------------------\n\nDave Cramer wrote:\n> Robert,\n> \n> The real question is does it help under real life circumstances ? \n> \n> Did you do the tests with Tom's sql code that is designed to create high\n> context switchs ?\n> \n> Dave\n> On Sun, 2004-05-02 at 11:20, Robert Creager wrote:\n> > Found some co-workers at work yesterday to load up my library...\n> > \n> > The sample period is 5 minutes long (vs 2 minutes previously):\n> > \n> > Context switches - avg max\n> > \n> > Default 7.4.1 code : 48784 107354\n> > Default patch - 10 : 20400 28160\n> > patch at 100 : 38574 85372\n> > patch at 1000 : 41188 106569\n> > \n> > The reading at 1000 was not produced under the same circumstances as the prior\n> > readings as I had to replace my device under test with a simulated one. The\n> > real one died.\n> > \n> > The previous run with smaller database and 120 second averages:\n> > \n> > Context switches - avg max\n> > \n> > Default 7.4.1 code : 10665 69470\n> > Default patch - 10 : 17297 21929\n> > patch at 100 : 26825 87073\n> > patch at 1000 : 37580 110849\n> -- \n> Dave Cramer\n> 519 939 0336\n> ICQ # 14675561\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 19 May 2004 21:20:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "When grilled further on (Wed, 19 May 2004 21:20:20 -0400 (EDT)),\nBruce Momjian <[email protected]> confessed:\n\n> \n> Did we ever come to a conclusion about excessive SMP context switching\n> under load?\n> \n\nI just figured out what was causing the problem on my system Monday. I'm using\nthe pg_autovacuum daemon, and it was not vacuuming my db. I've no idea why and\ndidn't get a chance to investigate.\n\nThis lack of vacuuming was causing a huge number of context switches and query\ndelays. 
the queries that normally take .1 seconds were taking 11 seconds, and\nthe context switches were averaging 160k/s, peaking at 190k/s\n\nUnfortunately, I was under pressure to fix the db at the time so I didn't get a\nchance to play with the patch.\n\nI restarted the vacuum daemon, and will keep an eye on it to see if it behaves.\n\nIf the problem re-occurs, is it worth while to attempt the different patch\ndelay settings?\n\nCheers,\nRob\n\n-- \n 19:45:40 up 21 days, 2:30, 4 users, load average: 2.03, 2.09, 2.06\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Wed, 19 May 2004 19:59:26 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Did we ever come to a conclusion about excessive SMP context switching\n> under load?\n\nYeah: it's bad.\n\nOh, you wanted a fix? That seems harder :-(. AFAICS we need a redesign\nthat causes less load on the BufMgrLock. However, the traditional\nsolution to too-much-contention-for-a-lock is to break up the locked\ndata structure into finer-grained units, which means *more* lock\noperations in total. Normally you expect that the finer-grained lock\nunits will mean less contention. But given that the issue here seems to\nbe trading physical ownership of the lock's cache line back and forth,\nI'm afraid that the traditional approach would actually make things\nworse. The SMP issue seems to be not with whether there is\ninstantaneous contention for the locked datastructure, but with the cost\nof making it possible for processor B to acquire a lock recently held by\nprocessor A.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 May 2004 22:41:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> I just figured out what was causing the problem on my system Monday.\n> I'm using the pg_autovacuum daemon, and it was not vacuuming my db.\n\nDo you have the post-7.4.2 datatype fixes for pg_autovacuum?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 May 2004 22:42:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "When grilled further on (Wed, 19 May 2004 22:42:26 -0400),\nTom Lane <[email protected]> confessed:\n\n> Robert Creager <[email protected]> writes:\n> > I just figured out what was causing the problem on my system Monday.\n> > I'm using the pg_autovacuum daemon, and it was not vacuuming my db.\n> \n> Do you have the post-7.4.2 datatype fixes for pg_autovacuum?\n\nNo. I'm still running 7.4.1 w/associated contrib. I guess an upgrade is in\norder then. I'm currently downloading 7.4.2 to see what the change is that I\nneed. Is it just the 7.4.2 pg_autovacuum that is needed here?\n\nI've caught a whiff that 7.4.3 is nearing release? 
Any idea when?\n\nThanks,\nRob\n\n-- \n 20:45:52 up 21 days, 3:30, 4 users, load average: 2.02, 2.05, 2.05\nLinux 2.6.5-01 #7 SMP Fri Apr 16 22:45:31 MDT 2004", "msg_date": "Wed, 19 May 2004 20:59:21 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Did we ever come to a conclusion about excessive SMP context switching\n> > under load?\n> \n> Yeah: it's bad.\n> \n> Oh, you wanted a fix? That seems harder :-(. AFAICS we need a redesign\n> that causes less load on the BufMgrLock. However, the traditional\n> solution to too-much-contention-for-a-lock is to break up the locked\n> data structure into finer-grained units, which means *more* lock\n> operations in total. Normally you expect that the finer-grained lock\n> units will mean less contention. But given that the issue here seems to\n> be trading physical ownership of the lock's cache line back and forth,\n> I'm afraid that the traditional approach would actually make things\n> worse. The SMP issue seems to be not with whether there is\n> instantaneous contention for the locked datastructure, but with the cost\n> of making it possible for processor B to acquire a lock recently held by\n> processor A.\n\nI see. I don't even see a TODO in there. :-(\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 19 May 2004 23:02:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> ... The SMP issue seems to be not with whether there is\n>> instantaneous contention for the locked datastructure, but with the cost\n>> of making it possible for processor B to acquire a lock recently held by\n>> processor A.\n\n> I see. I don't even see a TODO in there. :-(\n\nNothing more specific than \"investigate SMP context switching issues\",\nanyway. We are definitely in a research mode here, rather than an\nengineering mode.\n\nObQuote: \"Research is what I am doing when I don't know what I am\ndoing.\" - attributed to Werner von Braun, but has anyone got a\ndefinitive reference?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 May 2004 23:58:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Tom Lane <[email protected]> confessed:\n>> Do you have the post-7.4.2 datatype fixes for pg_autovacuum?\n\n> No. I'm still running 7.4.1 w/associated contrib. I guess an upgrade is in\n> order then. I'm currently downloading 7.4.2 to see what the change is that I\n> need. Is it just the 7.4.2 pg_autovacuum that is needed here?\n\nNope, the fixes I was thinking about just missed the 7.4.2 release.\nI think you can only get them from CVS. (Maybe we should offer a\nnightly build of the latest stable release branch, not only development\ntip...)\n\n> I've caught a whiff that 7.4.3 is nearing release? 
Any idea when?\n\nNot scheduled yet, but there was talk of pushing one out before 7.5 goes\ninto feature freeze.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2004 00:02:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "\nOK, added to TODO:\n\n\t* Investigate SMP context switching issues\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> ... The SMP issue seems to be not with whether there is\n> >> instantaneous contention for the locked datastructure, but with the cost\n> >> of making it possible for processor B to acquire a lock recently held by\n> >> processor A.\n> \n> > I see. I don't even see a TODO in there. :-(\n> \n> Nothing more specific than \"investigate SMP context switching issues\",\n> anyway. We are definitely in a research mode here, rather than an\n> engineering mode.\n> \n> ObQuote: \"Research is what I am doing when I don't know what I am\n> doing.\" - attributed to Werner von Braun, but has anyone got a\n> definitive reference?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 20 May 2004 00:11:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Tom Lane wrote:\n> Robert Creager <[email protected]> writes:\n> > Tom Lane <[email protected]> confessed:\n> >> Do you have the post-7.4.2 datatype fixes for pg_autovacuum?\n> \n> > No. I'm still running 7.4.1 w/associated contrib. I guess an upgrade is in\n> > order then. I'm currently downloading 7.4.2 to see what the change is that I\n> > need. Is it just the 7.4.2 pg_autovacuum that is needed here?\n> \n> Nope, the fixes I was thinking about just missed the 7.4.2 release.\n> I think you can only get them from CVS. (Maybe we should offer a\n> nightly build of the latest stable release branch, not only development\n> tip...)\n> \n> > I've caught a whiff that 7.4.3 is nearing release? Any idea when?\n> \n> Not scheduled yet, but there was talk of pushing one out before 7.5 goes\n> into feature freeze.\n\nWe need the temp table autovacuum fix before we do 7.4.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 20 May 2004 00:11:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Tom Lane) transmitted:\n> ObQuote: \"Research is what I am doing when I don't know what I am\n> doing.\" - attributed to Werner von Braun, but has anyone got a\n> definitive reference?\n\n<http://www.quotationspage.com/search.php3?Author=Wernher+von+Braun&file=other>\n\nThat points to a bunch of seemingly authoritative sources...\n-- \n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/lsf.html\n\"Terrrrrific.\" -- Ford Prefect\n", "msg_date": "Thu, 20 May 2004 00:48:48 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "On Wed, 2004-05-19 at 21:59, Robert Creager wrote:\n> When grilled further on (Wed, 19 May 2004 21:20:20 -0400 (EDT)),\n> Bruce Momjian <[email protected]> confessed:\n> \n> > \n> > Did we ever come to a conclusion about excessive SMP context switching\n> > under load?\n> > \n> \n> I just figured out what was causing the problem on my system Monday. I'm using\n> the pg_autovacuum daemon, and it was not vacuuming my db. I've no idea why and\n> didn't get a chance to investigate.\n\nStrange. There is a known bug in the 7.4.2 version of pg_autovacuum\nrelated to data type mismatches which is fixed in CVS. But that bug\ndoesn't cause pg_autovacuum to stop vacuuming but rather to vacuum to\noften. So perhaps this is a different issue? Please let me know what\nyou find.\n\nThanks,\n\nMatthew O'Connor\n\n\n", "msg_date": "Thu, 20 May 2004 01:10:14 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Guys,\n \n> Oh, you wanted a fix? That seems harder :-(. AFAICS we need a redesign\n> that causes less load on the BufMgrLock.\n\nFWIW, we've been pursuing two routes of quick patch fixes.\n\n1) Dave Cramer and I have been testing setting varying rates of spin_delay in \nan effort to find a \"sweet spot\" that the individual system seems to like. \nThis has been somewhat delayed by my illness.\n\n2) The OSDL folks have been trying various patches to use Linux 2.6 Futexes in \nplace of semops (if I have that right) which, if successful, would produce a \nlinux-specific fix. However, they haven't yet come up wiith a version of \nthe patch which is stable.\n\nI'm really curious, BTW, about how all of Jan's changes to buffer usage in 7.5 \naffect this issue. Has anyone tested it on a recent snapshot?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 20 May 2004 14:52:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> I'm really curious, BTW, about how all of Jan's changes to buffer\n> usage in 7.5 affect this issue. Has anyone tested it on a recent\n> snapshot?\n\nWon't help.\n\n(1) Theoretical argument: the problem case is select-only and touches\nfew enough buffers that it need never visit the kernel. The buffer\nmanagement algorithm is thus irrelevant since there are never any\ndecisions for it to make. 
If anything CVS tip will have a worse problem\nbecause its more complicated management algorithm needs to spend longer\nholding the BufMgrLock.\n\n(2) Experimental argument: I believe that I did check the self-contained\ntest case we eventually developed against CVS tip on one of Red Hat's\nSMP machines, and indeed it was unhappy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2004 18:14:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " } ]
[ { "msg_contents": "I just installed v7.4 and restored a database from v7.3.4. I have an \nindex based on a function that the planner is using on the old version, \nbut doing seq scans on left joins in the new version. I have run \nanalyze on the table post restore. the query returns in less than 1 \nsecond on version 7.3.4 and takes over 10 seconds on version 7.4. Any \nhelp will be appreciated.\n\nRoger Ging\n\nQuery:\n\nSELECT L.row_id FROM music.logfile L LEFT JOIN music.program P ON\nmusic.fn_mri_id_no_program(P.mri_id_no) = L.program_id\nWHERE L.station = UPPER('kabc')::VARCHAR\nAND L.air_date = '04/12/2002'::TIMESTAMP\nAND P.cutoff_date IS NULL\nORDER BY L.chron_start,L.chron_end;\n\nplanner results on 7.4:\n\n Sort (cost=17595.99..17608.23 rows=4894 width=12)\n Sort Key: l.chron_start, l.chron_end\n -> Merge Left Join (cost=17135.92..17296.07 rows=4894 width=12)\n Merge Cond: (\"outer\".\"?column5?\" = \"inner\".\"?column3?\")\n Filter: (\"inner\".cutoff_date IS NULL)\n -> Sort (cost=1681.69..1682.73 rows=414 width=21)\n Sort Key: (l.program_id)::text\n -> Index Scan using idx_logfile_station_air_date on \nlogfile l (cost=0.00..1663.70 rows=414 width=21)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2002-04-12 00:00:00'::timestamp without time zone))\n -> Sort (cost=15454.22..15465.06 rows=4335 width=20)\n Sort Key: (music.fn_mri_id_no_program(p.mri_id_no))::text\n -> Seq Scan on program p (cost=0.00..15192.35 \nrows=4335 width=20)\n\nplanner results on 7.3.4:\n\n Sort (cost=55765.51..55768.33 rows=1127 width=41)\n Sort Key: l.chron_start, l.chron_end\n -> Nested Loop (cost=0.00..55708.36 rows=1127 width=41)\n Filter: (\"inner\".cutoff_date IS NULL)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n (cost=0.00..71.34 rows=17 width=21)\n Index Cond: ((station = 'KABC'::character varying) AND \n(air_date = '2002-04-12 00:00:00'::timestamp without time zone))\n -> Index Scan using idx_program_mri_id_no_program on program \np (cost=0.00..3209.16 rows=870 width=20)\n Index Cond: (music.fn_mri_id_no_program(p.mri_id_no) = \n\"outer\".program_id)\n\ntable \"Program\" details:\n\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n record_id | integer |\n title | character varying(40) |\n mri_id_no | character varying(8) |\n ascap_cat | character varying(1) |\n ascap_mult | numeric(5,3) |\n ascap_prod | character varying(10) |\n npa_ind | character varying(3) |\n non_inc_in | character varying(1) |\n as_pr_su | character varying(1) |\n as_1st_run | character varying(1) |\n as_cue_st | character varying(1) |\n bmi_cat | character varying(2) |\n bmi_mult | numeric(6,2) |\n bmi_prod | character varying(7) |\n year | integer |\n prog_type | character varying(1) |\n total_ep | integer |\n last_epis | character varying(3) |\n syndicator | character varying(6) |\n station | character varying(4) |\n syn_loc | character varying(1) |\n spdb_ver | character varying(4) |\n as_filed | character varying(4) |\n bmidb_ver | character varying(4) |\n cutoff_date | timestamp without time zone |\n effective_date | timestamp without time zone |\n program_id | character varying(5) |\nIndexes:\n \"idx_program_mri_id_no\" btree (mri_id_no)\n \"idx_program_mri_id_no_program\" btree \n(music.fn_mri_id_no_program(mri_id_no))\n \"idx_program_program_id\" btree (program_id)\n \"program_mri_id_no\" btree (mri_id_no)\n \"program_oid\" btree (oid)\n\n", "msg_date": "Wed, 26 Nov 2003 08:38:27 -0800", "msg_from": "Roger Ging <[email protected]>", 
"msg_from_op": true, "msg_subject": "expression (functional) index use in joins" }, { "msg_contents": "On Wednesday 26 November 2003 16:38, Roger Ging wrote:\n> I just installed v7.4 and restored a database from v7.3.4.\n[snip]\n\nHmm - you seem to be getting different row estimates in the plan. Can you \nre-analyse both versions and post EXPLAIN ANALYSE rather than just EXPLAIN?\n\n> -> Seq Scan on program p (cost=0.00..15192.35\n> rows=4335 width=20)\n>\n> planner results on 7.3.4:\n>\n> -> Index Scan using idx_program_mri_id_no_program on program\n> p (cost=0.00..3209.16 rows=870 width=20)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2003 17:52:30 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: expression (functional) index use in joins" } ]
[ { "msg_contents": "\nversion 7.4 results:\n\nexplain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN \nmusic.program P ON\nmusic.fn_mri_id_no_program(P.mri_id_no) = L.program_id\nWHERE L.station = UPPER('kabc')::VARCHAR\nAND L.air_date = '04/12/2002'::TIMESTAMP\nAND P.cutoff_date IS NULL\nORDER BY L.chron_start,L.chron_end;\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=17595.99..17608.23 rows=4894 width=12) (actual \ntime=8083.719..8083.738 rows=30 loops=1)\n Sort Key: l.chron_start, l.chron_end\n -> Merge Left Join (cost=17135.92..17296.07 rows=4894 width=12) \n(actual time=7727.590..8083.349 rows=30 loops=1)\n Merge Cond: (\"outer\".\"?column5?\" = \"inner\".\"?column3?\")\n Filter: (\"inner\".cutoff_date IS NULL)\n -> Sort (cost=1681.69..1682.73 rows=414 width=21) (actual \ntime=1.414..1.437 rows=30 loops=1)\n Sort Key: (l.program_id)::text\n -> Index Scan using idx_logfile_station_air_date on \nlogfile l (cost=0.00..1663.70 rows=414 width=21) (actual \ntime=0.509..1.228 rows=30 loops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2002-04-12 00:00:00'::timestamp without time zone))\n -> Sort (cost=15454.22..15465.06 rows=4335 width=20) (actual \ntime=7718.612..7869.874 rows=152779 loops=1)\n Sort Key: (music.fn_mri_id_no_program(p.mri_id_no))::text\n -> Seq Scan on program p (cost=0.00..15192.35 \nrows=4335 width=20) (actual time=109.045..1955.882 rows=173998 loops=1)\n Total runtime: 8194.290 ms\n(13 rows)\n\n\nversion 7.3 results:\n\nexplain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN \nmusic.program P ON\nmusic.fn_mri_id_no_program(P.mri_id_no) = L.program_id\nWHERE L.station = UPPER('kabc')::VARCHAR\nAND L.air_date = '04/12/2002'::TIMESTAMP\nAND P.cutoff_date IS NULL\nORDER BY L.chron_start,L.chron_end;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=55765.51..55768.33 rows=1127 width=41) (actual \ntime=7.74..7.75 rows=30 loops=1)\n Sort Key: l.chron_start, l.chron_end\n -> Nested Loop (cost=0.00..55708.36 rows=1127 width=41) (actual \ntime=0.21..7.62 rows=30 loops=1)\n Filter: (\"inner\".cutoff_date IS NULL)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n (cost=0.00..71.34 rows=17 width=21) (actual time=0.14..0.74 rows=30 \nloops=1)\n Index Cond: ((station = 'KABC'::character varying) AND \n(air_date = '2002-04-12 00:00:00'::timestamp without time zone))\n -> Index Scan using idx_program_mri_id_no_program on program \np (cost=0.00..3209.16 rows=870 width=20) (actual time=0.05..0.22 rows=9 \nloops=30)\n Index Cond: (music.fn_mri_id_no_program(p.mri_id_no) = \n\"outer\".program_id)\n Total runtime: 7.86 msec\n\n", "msg_date": "Wed, 26 Nov 2003 10:39:28 -0800", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": true, "msg_subject": "Followup - expression (functional) index use in joins" }, { "msg_contents": "On Wednesday 26 November 2003 18:39, Roger Ging wrote:\n> version 7.4 results:\n>\n> explain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN\n> music.program P ON\n> music.fn_mri_id_no_program(P.mri_id_no) = L.program_id\n> WHERE L.station = UPPER('kabc')::VARCHAR\n> AND L.air_date = '04/12/2002'::TIMESTAMP\n> AND P.cutoff_date IS NULL\n> ORDER BY L.chron_start,L.chron_end;\n\n> -> Seq Scan on program p 
(cost=0.00..15192.35\n> rows=4335 width=20) (actual time=109.045..1955.882 rows=173998 loops=1)\n\nThe estimated number of rows here (4335) is *way* off (173998 actually). If \nyou only had 4335 rows, then this might be a more sensible plan.\n\nFirst step is to run:\n VACUUM ANALYSE program;\nThen, check the definition of your function fn_mri_id_no_program() and make \nsure it is marked immutable/stable (depending on what it does) and that it's \nreturning a varchar.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2003 19:12:01 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup - expression (functional) index use in joins" }, { "msg_contents": "Ran vacuum analyse on both program and logfile tables. Estimates are \nmore in line with reality now, but query still takes 10 seconds on v7.4 \nand 10 ms on v7.3. Function is marked as immutable and returns \nvarchar(5). I am wondering why the planner would choose a merge join \n(v7.4) as opposed to a nested loop (v7.3) given the small number of rows \nin the top level table (logfile) based upon the where clause (\n\nL.air_date = '04/12/2002'::TIMESTAMP\n\n)\nthere are typically only 30 rows per station/air_date. What am I \nmissing here?\n\nRichard Huxton wrote:\n\n>On Wednesday 26 November 2003 18:39, Roger Ging wrote:\n> \n>\n>>version 7.4 results:\n>>\n>>explain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN\n>>music.program P ON\n>>music.fn_mri_id_no_program(P.mri_id_no) = L.program_id\n>>WHERE L.station = UPPER('kabc')::VARCHAR\n>>AND L.air_date = '04/12/2002'::TIMESTAMP\n>>AND P.cutoff_date IS NULL\n>>ORDER BY L.chron_start,L.chron_end;\n>> \n>>\n>\n> \n>\n>> -> Seq Scan on program p (cost=0.00..15192.35\n>>rows=4335 width=20) (actual time=109.045..1955.882 rows=173998 loops=1)\n>> \n>>\n>\n>The estimated number of rows here (4335) is *way* off (173998 actually). If \n>you only had 4335 rows, then this might be a more sensible plan.\n>\n>First step is to run:\n> VACUUM ANALYSE program;\n>Then, check the definition of your function fn_mri_id_no_program() and make \n>sure it is marked immutable/stable (depending on what it does) and that it's \n>returning a varchar.\n>\n>\n> \n>\n\n\n\n\n\n\n\nRan vacuum analyse on both program and logfile tables.  Estimates are\nmore in line with reality now, but query still takes 10 seconds on v7.4\nand 10 ms on v7.3.  Function is marked as immutable and returns\nvarchar(5).  I am wondering why the planner would choose a merge join\n(v7.4) as opposed to a nested loop (v7.3) given the small number of\nrows in the top level table (logfile) based upon the where clause (\nL.air_date = '04/12/2002'::TIMESTAMP\n)\nthere are typically only 30 rows per station/air_date.  What am I\nmissing here?\n\nRichard Huxton wrote:\n\nOn Wednesday 26 November 2003 18:39, Roger Ging wrote:\n \n\nversion 7.4 results:\n\nexplain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN\nmusic.program P ON\nmusic.fn_mri_id_no_program(P.mri_id_no) = L.program_id\nWHERE L.station = UPPER('kabc')::VARCHAR\nAND L.air_date = '04/12/2002'::TIMESTAMP\nAND P.cutoff_date IS NULL\nORDER BY L.chron_start,L.chron_end;\n \n\n\n \n\n -> Seq Scan on program p (cost=0.00..15192.35\nrows=4335 width=20) (actual time=109.045..1955.882 rows=173998 loops=1)\n \n\n\nThe estimated number of rows here (4335) is *way* off (173998 actually). 
If \nyou only had 4335 rows, then this might be a more sensible plan.\n\nFirst step is to run:\n VACUUM ANALYSE program;\nThen, check the definition of your function fn_mri_id_no_program() and make \nsure it is marked immutable/stable (depending on what it does) and that it's \nreturning a varchar.", "msg_date": "Wed, 26 Nov 2003 13:29:16 -0800", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Followup - expression (functional) index use in joins" }, { "msg_contents": "Roger Ging <[email protected]> writes:\n> Ran vacuum analyse on both program and logfile tables. Estimates are \n> more in line with reality now,\n\nAnd they are what now? You really can't expect to get useful help here\nwhen you're being so miserly with the details ...\n\nFWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\nenable_mergejoin to off (might have to also set enable_hashjoin to off,\nif it then tries for a hash join). 7.3 could not even consider those\njoin types in this example, while 7.4 can. The interesting question\nfrom my perspective is why the planner is guessing wrong about the\nrelative costs of the plans. EXPLAIN ANALYZE results with each type of\njoin forced would be useful to look at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Nov 2003 19:09:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup - expression (functional) index use in joins " }, { "msg_contents": "Tom,\n\nTurning enable_hashjoin off made the query run as it had on v7.3. We \nhave worked around this by changing the index from a function call to a \ndirect index on a new column with the results of the function maintained \nby a trigger. Would there be performance issues from leaving \nenable_hashjoin off, or do you recomend enabling it, and working around \nfunction calls in indices?\n\nSee results below. I was not sure if I was supposed to reply-all, or \njust to the list. 
Sorry if the protocol is incorrect.\n\n\n\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=69.89..19157.06 rows=2322 width=28) (actual \ntime=500.905..1473.748 rows=242 loops=1)\n Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text = \n(\"inner\".program_id)::text)\n -> Seq Scan on program p (cost=0.00..16888.98 rows=173998 width=40) \n(actual time=98.371..532.184 rows=173998 loops=1)\n -> Hash (cost=69.84..69.84 rows=17 width=9) (actual \ntime=65.817..65.817 rows=0 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=24.499..65.730 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1474.067 ms\n(7 rows)\n\nppl=# set enable_mergejoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=69.89..19157.06 rows=2322 width=28) (actual \ntime=444.834..1428.815 rows=242 loops=1)\n Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text = \n(\"inner\".program_id)::text)\n -> Seq Scan on program p (cost=0.00..16888.98 rows=173998 width=40) \n(actual time=105.977..542.870 rows=173998 loops=1)\n -> Hash (cost=69.84..69.84 rows=17 width=9) (actual \ntime=1.197..1.197 rows=0 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.574..1.151 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1429.111 ms\n(7 rows)\n\nppl=# set enable_hashjoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..58104.34 rows=2322 width=28) (actual \ntime=0.480..5.357 rows=242 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.176..0.754 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND (air_date = \n'2001-01-30 00:00:00'::timestamp without time zone))\n -> Index Scan using idx_program_mri_id_no_program on program p \n(cost=0.00..3400.74 rows=870 width=40) (actual time=0.041..0.127 rows=8 \nloops=32)\n Index Cond: ((\"outer\".program_id)::text = \n(music.fn_mri_id_no_program(p.mri_id_no))::text)\n Total runtime: 5.637 ms\n(6 rows)\n\n\nTom Lane wrote:\n\n>Roger Ging <[email protected]> writes:\n> \n>\n>>Ran vacuum analyse on both program and logfile tables. 
Estimates are \n>>more in line with reality now,\n>> \n>>\n>\n>And they are what now? You really can't expect to get useful help here\n>when you're being so miserly with the details ...\n>\n>FWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\n>enable_mergejoin to off (might have to also set enable_hashjoin to off,\n>if it then tries for a hash join). 7.3 could not even consider those\n>join types in this example, while 7.4 can. The interesting question\n>from my perspective is why the planner is guessing wrong about the\n>relative costs of the plans. EXPLAIN ANALYZE results with each type of\n>join forced would be useful to look at.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n\n\n\n\n\nTom,\n\nTurning enable_hashjoin off made the query run as it had on v7.3.  We\nhave worked around this by changing the index from a function call to a\ndirect index on a new column with the results of the function\nmaintained by a trigger.  Would there be performance issues from\nleaving enable_hashjoin off, or do you recomend enabling it, and\nworking around function calls in indices?\n\nSee results below.  I was not sure if I was supposed to reply-all, or\njust to the list.  Sorry if the protocol is incorrect.\n\n\n\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                      \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=69.89..19157.06 rows=2322 width=28) (actual\ntime=500.905..1473.748 rows=242 loops=1)\n   Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text =\n(\"inner\".program_id)::text)\n   ->  Seq Scan on program p  (cost=0.00..16888.98 rows=173998\nwidth=40) (actual time=98.371..532.184 rows=173998 loops=1)\n   ->  Hash  (cost=69.84..69.84 rows=17 width=9) (actual\ntime=65.817..65.817 rows=0 loops=1)\n         ->  Index Scan using idx_logfile_station_air_date on\nlogfile l  (cost=0.00..69.84 rows=17 width=9) (actual\ntime=24.499..65.730 rows=32 loops=1)\n               Index Cond: (((station)::text = 'KABC'::text) AND\n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1474.067 ms\n(7 rows)\n\nppl=# set enable_mergejoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                     \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=69.89..19157.06 rows=2322 width=28) (actual\ntime=444.834..1428.815 rows=242 loops=1)\n   Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text =\n(\"inner\".program_id)::text)\n   ->  Seq Scan on program p  (cost=0.00..16888.98 rows=173998\nwidth=40) (actual time=105.977..542.870 rows=173998 loops=1)\n   ->  Hash  (cost=69.84..69.84 rows=17 width=9) (actual\ntime=1.197..1.197 rows=0 
loops=1)\n         ->  Index Scan using idx_logfile_station_air_date on\nlogfile l  (cost=0.00..69.84 rows=17 width=9) (actual time=0.574..1.151\nrows=32 loops=1)\n               Index Cond: (((station)::text = 'KABC'::text) AND\n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1429.111 ms\n(7 rows)\n\nppl=# set enable_hashjoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                     \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..58104.34 rows=2322 width=28) (actual\ntime=0.480..5.357 rows=242 loops=1)\n   ->  Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.176..0.754 rows=32\nloops=1)\n         Index Cond: (((station)::text = 'KABC'::text) AND (air_date =\n'2001-01-30 00:00:00'::timestamp without time zone))\n   ->  Index Scan using idx_program_mri_id_no_program on program p \n(cost=0.00..3400.74 rows=870 width=40) (actual time=0.041..0.127 rows=8\nloops=32)\n         Index Cond: ((\"outer\".program_id)::text =\n(music.fn_mri_id_no_program(p.mri_id_no))::text)\n Total runtime: 5.637 ms\n(6 rows)\n\n\nTom Lane wrote:\n\nRoger Ging <[email protected]> writes:\n \n\nRan vacuum analyse on both program and logfile tables. Estimates are \nmore in line with reality now,\n \n\n\nAnd they are what now? You really can't expect to get useful help here\nwhen you're being so miserly with the details ...\n\nFWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\nenable_mergejoin to off (might have to also set enable_hashjoin to off,\nif it then tries for a hash join). 7.3 could not even consider those\njoin types in this example, while 7.4 can. The interesting question\nfrom my perspective is why the planner is guessing wrong about the\nrelative costs of the plans. EXPLAIN ANALYZE results with each type of\njoin forced would be useful to look at.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html", "msg_date": "Mon, 01 Dec 2003 09:14:48 -0800", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup - expression (functional) index use in joins" }, { "msg_contents": "Turning enable_hashjoin off made the query run as it had on v7.3. We \nhave worked around this by changing the index from a function call to a \ndirect index on a new column with the results of the function maintained \nby a trigger. 
Would there be performance issues from leaving \nenable_hashjoin off, or do you recomend enabling it, and working around \nfunction calls in indices?\n\nSee results below.\n\n\n\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=69.89..19157.06 rows=2322 width=28) (actual \ntime=500.905..1473.748 rows=242 loops=1)\n Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text = \n(\"inner\".program_id)::text)\n -> Seq Scan on program p (cost=0.00..16888.98 rows=173998 width=40) \n(actual time=98.371..532.184 rows=173998 loops=1)\n -> Hash (cost=69.84..69.84 rows=17 width=9) (actual \ntime=65.817..65.817 rows=0 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=24.499..65.730 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1474.067 ms\n(7 rows)\n\nppl=# set enable_mergejoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=69.89..19157.06 rows=2322 width=28) (actual \ntime=444.834..1428.815 rows=242 loops=1)\n Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text = \n(\"inner\".program_id)::text)\n -> Seq Scan on program p (cost=0.00..16888.98 rows=173998 width=40) \n(actual time=105.977..542.870 rows=173998 loops=1)\n -> Hash (cost=69.84..69.84 rows=17 width=9) (actual \ntime=1.197..1.197 rows=0 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.574..1.151 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND \n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1429.111 ms\n(7 rows)\n\nppl=# set enable_hashjoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..58104.34 rows=2322 width=28) (actual \ntime=0.480..5.357 rows=242 loops=1)\n -> Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.176..0.754 rows=32 \nloops=1)\n Index Cond: (((station)::text = 'KABC'::text) AND (air_date = \n'2001-01-30 00:00:00'::timestamp without time zone))\n -> Index Scan using idx_program_mri_id_no_program on program p \n(cost=0.00..3400.74 rows=870 width=40) (actual time=0.041..0.127 rows=8 \nloops=32)\n Index Cond: ((\"outer\".program_id)::text = \n(music.fn_mri_id_no_program(p.mri_id_no))::text)\n Total runtime: 5.637 ms\n(6 rows)\n\n\nTom Lane 
wrote:\n\n>Roger Ging <[email protected]> writes:\n> \n>\n>>Ran vacuum analyse on both program and logfile tables. Estimates are \n>>more in line with reality now,\n>> \n>>\n>\n>And they are what now? You really can't expect to get useful help here\n>when you're being so miserly with the details ...\n>\n>FWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\n>enable_mergejoin to off (might have to also set enable_hashjoin to off,\n>if it then tries for a hash join). 7.3 could not even consider those\n>join types in this example, while 7.4 can. The interesting question\n>from my perspective is why the planner is guessing wrong about the\n>relative costs of the plans. EXPLAIN ANALYZE results with each type of\n>join forced would be useful to look at.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\nTom Lane wrote:\n\n>Roger Ging <[email protected]> writes:\n> \n>\n>>Ran vacuum analyse on both program and logfile tables. Estimates are \n>>more in line with reality now,\n>> \n>>\n>\n>And they are what now? You really can't expect to get useful help here\n>when you're being so miserly with the details ...\n>\n>FWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\n>enable_mergejoin to off (might have to also set enable_hashjoin to off,\n>if it then tries for a hash join). 7.3 could not even consider those\n>join types in this example, while 7.4 can. The interesting question\n>from my perspective is why the planner is guessing wrong about the\n>relative costs of the plans. EXPLAIN ANALYZE results with each type of\n>join forced would be useful to look at.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n\n\n\n\n\n\nTurning enable_hashjoin off made the query run as it had on v7.3.  We\nhave worked around this by changing the index from a function call to a\ndirect index on a new column with the results of the function maintained\nby a trigger.  
Would there be performance issues from leaving\nenable_hashjoin off, or do you recomend enabling it, and working around\nfunction calls in indices?\n\nSee results below.\n\n\n\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                      \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=69.89..19157.06 rows=2322 width=28) (actual\ntime=500.905..1473.748 rows=242 loops=1)\n   Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text =\n(\"inner\".program_id)::text)\n   ->  Seq Scan on program p  (cost=0.00..16888.98 rows=173998\nwidth=40) (actual time=98.371..532.184 rows=173998 loops=1)\n   ->  Hash  (cost=69.84..69.84 rows=17 width=9) (actual\ntime=65.817..65.817 rows=0 loops=1)\n         ->  Index Scan using idx_logfile_station_air_date on\nlogfile l  (cost=0.00..69.84 rows=17 width=9) (actual\ntime=24.499..65.730 rows=32 loops=1)\n               Index Cond: (((station)::text = 'KABC'::text) AND\n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1474.067 ms\n(7 rows)\n\nppl=# set enable_mergejoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                     \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=69.89..19157.06 rows=2322 width=28) (actual\ntime=444.834..1428.815 rows=242 loops=1)\n   Hash Cond: ((music.fn_mri_id_no_program(\"outer\".mri_id_no))::text =\n(\"inner\".program_id)::text)\n   ->  Seq Scan on program p  (cost=0.00..16888.98 rows=173998\nwidth=40) (actual time=105.977..542.870 rows=173998 loops=1)\n   ->  Hash  (cost=69.84..69.84 rows=17 width=9) (actual\ntime=1.197..1.197 rows=0 loops=1)\n         ->  Index Scan using idx_logfile_station_air_date on\nlogfile l  (cost=0.00..69.84 rows=17 width=9) (actual time=0.574..1.151\nrows=32 loops=1)\n               Index Cond: (((station)::text = 'KABC'::text) AND\n(air_date = '2001-01-30 00:00:00'::timestamp without time zone))\n Total runtime: 1429.111 ms\n(7 rows)\n\nppl=# set enable_hashjoin = false;\nSET\nppl=# explain analyse select title from music.program p\nppl-# join music.logfile l on\nppl-# l.program_id = music.fn_mri_id_no_program(p.mri_id_no)\nppl-# where l.air_date = '01/30/2001'\nppl-# and l.station = 'KABC';\n                                                                     \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..58104.34 rows=2322 width=28) (actual\ntime=0.480..5.357 rows=242 loops=1)\n   ->  Index Scan using idx_logfile_station_air_date on logfile l \n(cost=0.00..69.84 rows=17 width=9) (actual time=0.176..0.754 rows=32\nloops=1)\n         Index Cond: (((station)::text = 'KABC'::text) AND (air_date =\n'2001-01-30 00:00:00'::timestamp without time zone))\n   ->  Index Scan using 
idx_program_mri_id_no_program on program p \n(cost=0.00..3400.74 rows=870 width=40) (actual time=0.041..0.127 rows=8\nloops=32)\n         Index Cond: ((\"outer\".program_id)::text =\n(music.fn_mri_id_no_program(p.mri_id_no))::text)\n Total runtime: 5.637 ms\n(6 rows)\n\n\nTom Lane wrote:\n\nRoger Ging <[email protected]> writes:\n \n\nRan vacuum analyse on both program and logfile tables. Estimates are \nmore in line with reality now,\n \n\n\nAnd they are what now? You really can't expect to get useful help here\nwhen you're being so miserly with the details ...\n\nFWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\nenable_mergejoin to off (might have to also set enable_hashjoin to off,\nif it then tries for a hash join). 7.3 could not even consider those\njoin types in this example, while 7.4 can. The interesting question\nfrom my perspective is why the planner is guessing wrong about the\nrelative costs of the plans. EXPLAIN ANALYZE results with each type of\njoin forced would be useful to look at.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n \n\n\n\nTom Lane wrote:\n\nRoger Ging <[email protected]> writes:\n \n\nRan vacuum analyse on both program and logfile tables. Estimates are \nmore in line with reality now,\n \n\n\nAnd they are what now? You really can't expect to get useful help here\nwhen you're being so miserly with the details ...\n\nFWIW, I suspect you could force 7.4 to generate 7.3's plan by setting\nenable_mergejoin to off (might have to also set enable_hashjoin to off,\nif it then tries for a hash join). 7.3 could not even consider those\njoin types in this example, while 7.4 can. The interesting question\nfrom my perspective is why the planner is guessing wrong about the\nrelative costs of the plans. EXPLAIN ANALYZE results with each type of\njoin forced would be useful to look at.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html", "msg_date": "Mon, 01 Dec 2003 09:31:06 -0800", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Followup - expression (functional) index use in joins" }, { "msg_contents": "Roger Ging <[email protected]> writes:\n> See results below.\n\nThanks for the report. It seems the issue is that the estimate for the\nnumber of matching rows is way off (870 vs 8):\n\n> -> Index Scan using idx_program_mri_id_no_program on program p \n> (cost=0.00..3400.74 rows=870 width=40) (actual time=0.041..0.127 rows=8 \n> loops=32)\n\nwhich discourages the planner from using a nestloop. I'm not sure we\ncan do much about this in the short term. There's been some discussion\nof keeping statistics about the values of functional indexes, which\nwould allow a better estimate to be made in this situation; but that\nwon't happen before 7.5 at the earliest.\n\n> Turning enable_hashjoin off made the query run as it had on v7.3. We \n> have worked around this by changing the index from a function call to a \n> direct index on a new column with the results of the function maintained \n> by a trigger. 
Would there be performance issues from leaving \n> enable_hashjoin off, or do you recomend enabling it, and working around \n> function calls in indices?\n\nTurning enable_hashjoin off globally would be a *really bad* idea IMHO.\nThe workaround with a derived column seems okay, though certainly a pain\nin the neck. Can you manage to turn off enable_hashjoin just for this\none query? That might be the best short-term workaround.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Dec 2003 13:05:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup - expression (functional) index use in joins " } ]
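For the narrow form of the workaround Tom suggests at the end (disabling hash joins for just this one query rather than globally), a minimal sketch: wrap the statement in a transaction and use SET LOCAL, so the planner setting reverts automatically at commit; a plain SET followed by RESET enable_hashjoin afterwards achieves the same for a session. The query is abbreviated from the one shown earlier in the thread.

    BEGIN;
    SET LOCAL enable_hashjoin = off;
    SELECT title
      FROM music.program p
      JOIN music.logfile l
        ON l.program_id = music.fn_mri_id_no_program(p.mri_id_no)
     WHERE l.air_date = '01/30/2001'
       AND l.station = 'KABC';
    COMMIT;
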
[ { "msg_contents": "Hi all, \nWhich one is better (performance/easier to use),\ntsearch2 or fulltextindex? \nthere is an example how to use fulltextindex in the\ntechdocs, but I checked the contrib/fulltextindex\npackage, there is a WARNING that fulltextindex is\nmuch slower than tsearch2. but tsearch2 seems\ncomplex to use, and I can not find a good example.\nWhich one I should use? Any suggestions? \n\nthanks and Regards,\nWilliam\n\n----- Original Message -----\nFrom: Hannu Krosing <[email protected]>\nDate: Wednesday, November 26, 2003 5:33 pm\nSubject: Re: [PERFORM] why index scan not working\nwhen using 'like'?\n\n> Tom Lane kirjutas T, 25.11.2003 kell 23:29:\n> > Josh Berkus <[email protected]> writes:\n> > > In regular text fields containing words, your\nproblem is \n> solvable with full \n> > > text indexing (FTI). Unfortunately, FTI is\nnot designed for \n> arbitrary \n> > > non-language strings. It could be adapted,\nbut would require a \n> lot of \n> > > hacking.\n> > \n> > I'm not sure why you say that FTI isn't a usable\nsolution. As \n> long as\n> > the gene symbols are separated by whitespace or\nsome other non-\n> letters> (eg, \"foo mif bar\" not \"foomifbar\"), I'd\nthink FTI would \n> work.\n> If he wants to search on arbitrary substring, he\ncould change \n> tokeniserin FTI to produce trigrams, so that\n\"foomifbar\" would be \n> indexed as if\n> it were text \"foo oom omi mif ifb fba bar\" and\nsearch for things like\n> %mifb% should first do a FTI search for \"mif\" AND\n\"ifb\" and then \n> simpleLIKE %mifb% to weed out something like \"mififb\".\n> \n> There are ways to use trigrams for 1 and 2 letter\nmatches as well.\n> \n> -------------\n> Hannu\n> \n> \n> ---------------------------(end of\nbroadcast)-----------------------\n> ----\n> TIP 3: if posting/reading through Usenet, please\nsend an appropriate\n> subscribe-nomail command to\[email protected] so that \n> your message can get through to the mailing\nlist cleanly\n> \n\n", "msg_date": "Wed, 26 Nov 2003 20:06:02 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "For full text indexing, which is better, tsearch2 or fulltextindex" }, { "msg_contents": "> Which one is better (performance/easier to use),\n> tsearch2 or fulltextindex? \n> there is an example how to use fulltextindex in the\n> techdocs, but I checked the contrib/fulltextindex\n> package, there is a WARNING that fulltextindex is\n> much slower than tsearch2. but tsearch2 seems\n> complex to use, and I can not find a good example.\n> Which one I should use? Any suggestions? \n\nI believe I wrote that warning :)\n\nTsearch2 is what you should use. Yes, it's more complicated but it's \nHEAPS faster and seriously powerful.\n\nJust read the README file.\n\nYou could also try out the original tsearch (V1), but that will probably \nbe superceded soon, now that tsearch2 is around.\n\nChris\n\n\n", "msg_date": "Thu, 27 Nov 2003 08:51:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "\nOn Thu, Nov 27, 2003 at 08:51:14AM +0800, Christopher Kings-Lynne wrote:\n> >Which one is better (performance/easier to use),\n> >tsearch2 or fulltextindex? \n> >there is an example how to use fulltextindex in the\n> >techdocs, but I checked the contrib/fulltextindex\n> >package, there is a WARNING that fulltextindex is\n> >much slower than tsearch2. 
but tsearch2 seems\n> >complex to use, and I can not find a good example.\n> >Which one I should use? Any suggestions? \n> \n> I believe I wrote that warning :)\n> \n> Tsearch2 is what you should use. Yes, it's more complicated but it's \n> HEAPS faster and seriously powerful.\n> \n\nCan you provide some numbers please, both for creating full text indexes\nas well as for searching them? I tried to use tsearch and it seemed like\njust creating a full text index on million+ records took forever.\n\n> Just read the README file.\n> \n> You could also try out the original tsearch (V1), but that will probably \n> be superceded soon, now that tsearch2 is around.\n> \n> Chris\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Wed, 26 Nov 2003 17:03:52 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "On Thu, Nov 27, 2003 at 08:51:14AM +0800, Christopher Kings-Lynne wrote:\n> >Which one is better (performance/easier to use),\n> >tsearch2 or fulltextindex? \n> >there is an example how to use fulltextindex in the\n> >techdocs, but I checked the contrib/fulltextindex\n> >package, there is a WARNING that fulltextindex is\n> >much slower than tsearch2. but tsearch2 seems\n> >complex to use, and I can not find a good example.\n> >Which one I should use? Any suggestions? \n> \n> I believe I wrote that warning :)\n> \n> Tsearch2 is what you should use. Yes, it's more complicated but it's \n> HEAPS faster and seriously powerful.\n\nDoes anyone have any metrics on how fast tsearch2 actually is?\n\nI tried it on a synthetic dataset of a million documents of a hundred\nwords each and while insertions were impressively fast I gave up on\nthe search after 10 minutes.\n\nBroken? Unusable slow? This was on the last 7.4 release candidate.\n\nCheers,\n Steve\n", "msg_date": "Wed, 26 Nov 2003 19:28:31 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "> Does anyone have any metrics on how fast tsearch2 actually is?\n> \n> I tried it on a synthetic dataset of a million documents of a hundred\n> words each and while insertions were impressively fast I gave up on\n> the search after 10 minutes.\n> \n> Broken? Unusable slow? This was on the last 7.4 release candidate.\n\nI just created a 1.1million row dataset by copying one of our 30000 row \nproduction tables and just taking out the txtidx column. Then I \ninserted it into itself until it had 1.1 million rows.\n\nThen I created the GiST index - THAT took forever - seriously like 20 \nmins or half an hour or something.\n\nNow, to find a word:\n\nselect * from tsearchtest where ftiidx ## 'curry';\nTime: 9760.75 ms\n\nThe AND of two words:\nTime: 103.61 ms\n\nThe AND of three words:\nselect * from tsearchtest where ftiidx ## 'curry&green&thai';\nTime: 61.86 ms\n\nAnd now a one word query now that buffers are cached:\nselect * from tsearchtest where ftiidx ## 'curry';\nTime: 444.89 ms\n\nSo, I have no idea why you think it's slow? Perhaps you forgot the \n'create index using gist' step?\n\nAlso, if you use the NOT (!) 
operand, you can get yourself into a really \nslow situation.\n\nChris\n\n\n\n", "msg_date": "Thu, 27 Nov 2003 12:41:59 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "On Thu, Nov 27, 2003 at 12:41:59PM +0800, Christopher Kings-Lynne wrote:\n> >Does anyone have any metrics on how fast tsearch2 actually is?\n> >\n> >I tried it on a synthetic dataset of a million documents of a hundred\n> >words each and while insertions were impressively fast I gave up on\n> >the search after 10 minutes.\n> >\n> >Broken? Unusable slow? This was on the last 7.4 release candidate.\n> \n> I just created a 1.1million row dataset by copying one of our 30000 row \n> production tables and just taking out the txtidx column. Then I \n> inserted it into itself until it had 1.1 million rows.\n> \n> Then I created the GiST index - THAT took forever - seriously like 20 \n> mins or half an hour or something.\n> \n> Now, to find a word:\n> \n> select * from tsearchtest where ftiidx ## 'curry';\n> Time: 9760.75 ms\n\n> So, I have no idea why you think it's slow? Perhaps you forgot the \n> 'create index using gist' step?\n\nNo, it was indexed.\n\nThanks, that was the datapoint I was looking for. It _can_ run fast, so\nI just need to work out what's going on. (It's hard to diagnose a slow\nquery when you've no idea whether it's really 'slow').\n\nCheers,\n Steve\n", "msg_date": "Wed, 26 Nov 2003 21:12:30 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "On Wed, Nov 26, 2003 at 09:12:30PM -0800, Steve Atkins wrote:\n> On Thu, Nov 27, 2003 at 12:41:59PM +0800, Christopher Kings-Lynne wrote:\n> > >Does anyone have any metrics on how fast tsearch2 actually is?\n> > >\n> > >I tried it on a synthetic dataset of a million documents of a hundred\n> > >words each and while insertions were impressively fast I gave up on\n> > >the search after 10 minutes.\n> > >\n> > >Broken? Unusable slow? This was on the last 7.4 release candidate.\n> > \n> > I just created a 1.1million row dataset by copying one of our 30000 row \n> > production tables and just taking out the txtidx column. Then I \n> > inserted it into itself until it had 1.1 million rows.\n> > \n> > Then I created the GiST index - THAT took forever - seriously like 20 \n> > mins or half an hour or something.\n> > \n> > Now, to find a word:\n> > \n> > select * from tsearchtest where ftiidx ## 'curry';\n> > Time: 9760.75 ms\n> \n> > So, I have no idea why you think it's slow? Perhaps you forgot the \n> > 'create index using gist' step?\n> \n> No, it was indexed.\n> \n> Thanks, that was the datapoint I was looking for. It _can_ run fast, so\n> I just need to work out what's going on. (It's hard to diagnose a slow\n> query when you've no idea whether it's really 'slow').\n\nLooking at it further, something is very broken, possibly with GIST\nindices, possibly with tsearch2s use of 'em.\n\nThis is on a newly built 7.4 installation, built with 64 bit\ndatetimes, but completely stock other than that. Stock gcc 3.3.2,\nLinux, somewhat elderly 2.4.18 kernel. Running on a 1.5GHz single\nprocessor Athlon with a half gig of RAM. 
Configuration set to use 20%\nof RAM as shared buffers (amongst other settings, this was the last of\na range I tried looking for variation).\n\nSoftware RAID0 across two 7200RPM SCSI drives, reiserfs (it's a\ndevelopment box, not a production system). System completely idle\napart from postgresql.\n\n269000 rows, each row having 400 words. Analyzed.\n\nRunning the select query given below appears to pause a process trying\nto insert into the table completely (locking issue? I/O bandwidth?).\n\ntop shows the select below consuming <2% of CPU and iostat shows it reading\n~2800 blocks/second from each of the two RAID drives.\n\nPhysical size of the database is under 3 gigs, including toast and index\ntables.\n\nThe select query takes around 6 minutes (consistently, even if the same\nidentical query is repeated).\n\nFor entertainment, I turned off indexscan and the query takes 1\nminute with a simple seqscan.\n\nAny thoughts?\n\nCheers,\n Steve\n\n=> select count(*) from ftstest;\n count \n--------\n 269000\n(1 row)\n\n=> \\d ftstest\n Table \"public.ftstest\"\n Column | Type | Modifiers \n--------+----------+----------------------------------------------------------\n idx | integer | not null default nextval('public.ftstest_idx_seq'::text)\n words | text | not null\n idxfti | tsvector | not null\nIndexes:\n \"ftstest_idx\" gist (idxfti)\n\n=> explain analyze select idx from ftstest where idxfti @@ 'dominican'::tsquery;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ftstest_idx on ftstest (cost=0.00..515.90 rows=271 width=4) (actual time=219.694..376042.428 rows=4796 loops=1)\n Index Cond: (idxfti @@ '\\'dominican\\''::tsquery)\n Filter: (idxfti @@ '\\'dominican\\''::tsquery)\n Total runtime: 376061.541 ms\n(4 rows)\n\n\n((Set enable_indexscan=false))\n\n\n=> explain analyze select idx from ftstest where idxfti @@ 'dominican'::tsquery;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on ftstest (cost=0.00..5765.88 rows=271 width=4) (actual time=42.589..62158.285 rows=4796 loops=1)\n Filter: (idxfti @@ '\\'dominican\\''::tsquery)\n Total runtime: 62182.277 ms\n(3 rows)\n", "msg_date": "Thu, 27 Nov 2003 21:04:17 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "\n> Any thoughts?\n\nActually, I ran my tests using tsearch V1. I wonder if there has been \nsome weird regression between tsearch 1 and 2?\n\nhris\n\n\n", "msg_date": "Fri, 28 Nov 2003 13:18:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": ">> Any thoughts?\n> \n> \n> Actually, I ran my tests using tsearch V1. I wonder if there has been \n> some weird regression between tsearch 1 and 2?\n\nI also ran my tests on 7.3.4 :(\n\nChris\n\n\n", "msg_date": "Fri, 28 Nov 2003 13:26:51 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "On Fri, Nov 28, 2003 at 01:18:48PM +0800, Christopher Kings-Lynne wrote:\n> \n> >Any thoughts?\n> \n> Actually, I ran my tests using tsearch V1. 
I wonder if there has been \n> some weird regression between tsearch 1 and 2?\n\nMaybe. tsearch2 doesn't seem production ready in other respects\n(untsearch2.sql barfs with 'aggregate stat(tsvector) does not exist'\nand the openfts mailing list, where this would be more appropriate,\ndoesn't appear to exist according to sourceforge).\n\nSo, using the same data, modulo a few alter tables, I try tsearch, V1.\nIt's a little slower than V2, and again runs far faster without an\nindex than with it. Broken in the same way.\n\nI have 7.2.4 running on a Sun box, so I tried that too, with similar\nresults. tsearch just doesn't seem to work very well on this dataset\n(or any other large dataset I've tried).\n\nCheers,\n Steve\n", "msg_date": "Fri, 28 Nov 2003 12:37:00 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "> I have 7.2.4 running on a Sun box, so I tried that too, with similar\n> results. tsearch just doesn't seem to work very well on this dataset\n> (or any other large dataset I've tried).\n\nWell, as I've shown - works fine for me...\n\nI strongly suggest you repost your problem report to -hackers, since the \nfact that the tsearch developers haven't chimed in implies to me that \nthey don't watch the performance list.\n\nBTW, read this about Gist indexes:\n\nhttp://www.postgresql.org/docs/current/static/limitations.html\n\n(Note lack of concurrency)\n\nChris\n\n", "msg_date": "Sun, 30 Nov 2003 17:46:30 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For full text indexing, which is better, tsearch2 or" }, { "msg_contents": "Hi,\n I'am taking dump of a huge database and do not want the restoration of\nthat dump to take a lot of time as is the case when you take the dump in\ntext files. I want to take the dump as an archive file and get it restored\nin very less time. I'am not able to figure out what is the command for\ntaking dump of a database in a archive file. Kindly help it's urgent.\n\nthanks and regards\nKamalraj Singh\n\n", "msg_date": "Mon, 1 Dec 2003 15:47:47 +0530", "msg_from": "\"Kamalraj Singh Madhan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Dump restoration via archive files" }, { "msg_contents": "On Mon, 1 Dec 2003 15:47:47 +0530\n\"Kamalraj Singh Madhan\" <[email protected]> wrote:\n\n> Hi,\n> I'am taking dump of a huge database and do not want the\n> restoration of\n> that dump to take a lot of time as is the case when you take the dump\n> in text files. I want to take the dump as an archive file and get it\n> restored in very less time. I'am not able to figure out what is the\n> command for taking dump of a database in a archive file. Kindly help\n> it's urgent.\n> \n\nFast backups are an area PG needs work in. Currently, PG has no 'archive\nfile backup'. You do have the following options to get around this:\n\n1. Take big db offline, copy $PGDATA. Has a restore time of how long it\ntakes to copy $PGDATA (And optionally untar/gzip), bring db back online\n\n2. If you are using an LVM, take a snapshot and copy the data. Like #1,\nit also has a \"0\" restore time. \n\n3. If you are using a pg_dump generated dump, be sure to really jack up\nyour sort_mem - this will be a HUGE benefit when creating indexes & if\nyou are using 7.4, adding the foriegn keys. Also turning off fsync\n(Don't forget to turn it back on after your restore!) 
can give you some\nnice speed increases.\n\n4. If you are not using 7.4 and using pg_dump, there isn't much you can\ndo about adding foreign keys going stupidly slow :(\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Mon, 1 Dec 2003 09:16:34 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dump restoration via archive files" }, { "msg_contents": "\nOn Mon, 1 Dec 2003, Jeff wrote:\n\n> On Mon, 1 Dec 2003 15:47:47 +0530\n> \"Kamalraj Singh Madhan\" <[email protected]> wrote:\n>\n> 4. If you are not using 7.4 and using pg_dump, there isn't much you can\n> do about adding foreign keys going stupidly slow :(\n\nYou can take a schema dump and a separate data only dump where the latter\nspecifies --disable-triggers which should disable the checks when the data\nis being added.\n", "msg_date": "Mon, 1 Dec 2003 07:23:53 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dump restoration via archive files" } ]
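A small addition to Jeff's point 3, not spelled out in the thread: sort_mem can be raised for just the session that runs the restore script rather than server-wide. A minimal sketch; the value 131072 (in KB, roughly 128 MB per sort) and the script name are illustrative assumptions, not recommendations from the thread:

    SET sort_mem = 131072;      -- speeds up the sorts behind CREATE INDEX during the restore
    -- \i restore_script.sql    -- run the restore in this same session (placeholder file name)
    RESET sort_mem;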
[ { "msg_contents": "I was wondering if there is something I can do that would act similar to\na index over more than one table. \n\nI have about 3 million people in my DB at the moment, they all have\nroles, and many of them have more than one name. \n\nfor example, a Judge will only have one name, but a Litigant could have\nmultiple aliases. Things go far to slow when I do a query on a judge\nnamed smith. Does any one know a possible way to speed this up? \n\nI would think that In a perfect world there would be a way to create an\nindex on commonly used joins, or something of that nature. I've tried\npartial indexes, but the optimizer feels that it would be quicker to do\nan index scan for smith% then join using the pkey of the person to get\ntheir role. For litigants, this makes since, for non-litigants, this\ndoesn't. \n\nthanx for any insight,\n-jj-\n\nthe basic schema\n\nactor\n\tactor_id PK\n\trole_class_code\n\nidentity\n\tactor_id FK\n\tidentity_id PK\n\tfull_name\n\nevent\n\tevent_date_time\n\tevent_id PK\n\nevent_actor\n\tevent_id FK\n\tactor_id FK\n\n\nexplain select distinct actor.actor_id,court.id,court.name,role_class_code,full_name from actor,identity,court,event,event_actor where role_class_code = 'Judge' and full_name like 'SMITH%' and identity.actor_id = actor.actor_id and identity.court_ori = actor.court_ori and actor.court_ori = court.id and actor.actor_id = event_actor.actor_id and event_actor.event_id = event.event_id and event_date_time > '20021126' order by full_name;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=726.57..726.58 rows=1 width=92)\n -> Sort (cost=726.57..726.57 rows=1 width=92)\n Sort Key: identity.full_name, actor.actor_id, court.id, court.name, actor.role_class_code\n -> Nested Loop (cost=3.02..726.56 rows=1 width=92)\n -> Nested Loop (cost=3.02..720.72 rows=1 width=144)\n -> Nested Loop (cost=3.02..9.62 rows=1 width=117)\n Join Filter: ((\"outer\".court_ori)::text = (\"inner\".court_ori)::text)\n -> Hash Join (cost=3.02..4.18 rows=1 width=93)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".court_ori)::text)\n -> Seq Scan on court (cost=0.00..1.10 rows=10 width=34)\n -> Hash (cost=3.01..3.01 rows=1 width=59)\n -> Index Scan using name_speed on identity (cost=0.00..3.01 rows=1 width=59)\n Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n Filter: ((full_name)::text ~~ 'SMITH%'::text)\n -> Index Scan using actor_speed on actor (cost=0.00..5.43 rows=1 width=50)\n Index Cond: ((\"outer\".actor_id)::text = (actor.actor_id)::text)\n Filter: ((role_class_code)::text = 'Judge'::text)\n -> Index Scan using event_actor_speed on event_actor (cost=0.00..695.15 rows=1275 width=73)\n Index Cond: ((event_actor.actor_id)::text = (\"outer\".actor_id)::text)\n -> Index Scan using event_pkey on event (cost=0.00..5.83 rows=1 width=52)\n Index Cond: ((\"outer\".event_id)::text = (event.event_id)::text)\n Filter: (event_date_time > '20021126'::bpchar)\n\n\n-- \n\"You can't make a program without broken egos.\"\n-- \nJeremiah Jahn <[email protected]>\n\n", "msg_date": "Wed, 26 Nov 2003 14:14:11 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "cross table indexes or something?" }, { "msg_contents": "Sybase IQ lets you build \"joined indexsets\". 
This is amazing but pricey\nand really intended more for Data Warehousing than OLTP, although they did \nrelease a version which permitted writes on-the-fly. (This was implemented \nusing a multi-concurrency solution much like PostreSQL uses.)\n\nIt essentially pre-joined the data.\n\nMarc A. Leith\nredboxdata inc.\nE-mail:[email protected]\n\nQuoting Jeremiah Jahn <[email protected]>:\n\n> I was wondering if there is something I can do that would act similar to\n> a index over more than one table. \n> \n> I have about 3 million people in my DB at the moment, they all have\n> roles, and many of them have more than one name. \n> \n> for example, a Judge will only have one name, but a Litigant could have\n> multiple aliases. Things go far to slow when I do a query on a judge\n> named smith. Does any one know a possible way to speed this up? \n> \n> I would think that In a perfect world there would be a way to create an\n> index on commonly used joins, or something of that nature. I've tried\n> partial indexes, but the optimizer feels that it would be quicker to do\n> an index scan for smith% then join using the pkey of the person to get\n> their role. For litigants, this makes since, for non-litigants, this\n> doesn't. \n> \n> thanx for any insight,\n> -jj-\n> \n> the basic schema\n> \n> actor\n> \tactor_id PK\n> \trole_class_code\n> \n> identity\n> \tactor_id FK\n> \tidentity_id PK\n> \tfull_name\n> \n> event\n> \tevent_date_time\n> \tevent_id PK\n> \n> event_actor\n> \tevent_id FK\n> \tactor_id FK\n> \n> \n> explain select distinct\n> actor.actor_id,court.id,court.name,role_class_code,full_name from\n> actor,identity,court,event,event_actor where role_class_code = 'Judge' and\n> full_name like 'SMITH%' and identity.actor_id = actor.actor_id and\n> identity.court_ori = actor.court_ori and actor.court_ori = court.id and\n> actor.actor_id = event_actor.actor_id and event_actor.event_id =\n> event.event_id and event_date_time > '20021126' order by full_name;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n----\n> Unique (cost=726.57..726.58 rows=1 width=92)\n> -> Sort (cost=726.57..726.57 rows=1 width=92)\n> Sort Key: identity.full_name, actor.actor_id, court.id, court.name,\n> actor.role_class_code\n> -> Nested Loop (cost=3.02..726.56 rows=1 width=92)\n> -> Nested Loop (cost=3.02..720.72 rows=1 width=144)\n> -> Nested Loop (cost=3.02..9.62 rows=1 width=117)\n> Join Filter: ((\"outer\".court_ori)::text =\n> (\"inner\".court_ori)::text)\n> -> Hash Join (cost=3.02..4.18 rows=1 width=93)\n> Hash Cond: ((\"outer\".id)::text =\n> (\"inner\".court_ori)::text)\n> -> Seq Scan on court (cost=0.00..1.10\n> rows=10 width=34)\n> -> Hash (cost=3.01..3.01 rows=1 width=59)\n> -> Index Scan using name_speed on\n> identity (cost=0.00..3.01 rows=1 width=59)\n> Index Cond: (((full_name)::text\n> >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character\n> varying))\n> Filter: ((full_name)::text ~~\n> 'SMITH%'::text)\n> -> Index Scan using actor_speed on actor \n> (cost=0.00..5.43 rows=1 width=50)\n> Index Cond: ((\"outer\".actor_id)::text =\n> (actor.actor_id)::text)\n> Filter: ((role_class_code)::text =\n> 'Judge'::text)\n> -> Index Scan using event_actor_speed on event_actor \n> (cost=0.00..695.15 rows=1275 width=73)\n> Index Cond: ((event_actor.actor_id)::text =\n> (\"outer\".actor_id)::text)\n> -> Index Scan using event_pkey on event (cost=0.00..5.83\n> 
rows=1 width=52)\n> Index Cond: ((\"outer\".event_id)::text =\n> (event.event_id)::text)\n> Filter: (event_date_time > '20021126'::bpchar)\n> \n> \n> -- \n> \"You can't make a program without broken egos.\"\n> -- \n> Jeremiah Jahn <[email protected]>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Wed, 26 Nov 2003 17:23:14 -0500", "msg_from": "\"Marc A. Leith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Jeremiah Jahn kirjutas K, 26.11.2003 kell 22:14:\n> I was wondering if there is something I can do that would act similar to\n> a index over more than one table. \n> \n> I have about 3 million people in my DB at the moment, they all have\n> roles, and many of them have more than one name. \n> \n> for example, a Judge will only have one name, but a Litigant could have\n> multiple aliases. Things go far to slow when I do a query on a judge\n> named smith.\n\nIf you dont need all the judges named smith you could try to use LIMIT.\n\nHave you run ANALYZE ? Why does DB think that there is only one judge\nwith name like SMITH% ?\n\n-------------\nHannu\n\nP.S. \nAlways send EXPLAIN ANALYZE output if asking for advice on [PERFORM]\n\n-------------\nHannu\n", "msg_date": "Thu, 27 Nov 2003 00:32:30 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "On Wed, 2003-11-26 at 16:32, Hannu Krosing wrote:\n> Jeremiah Jahn kirjutas K, 26.11.2003 kell 22:14:\n> > I was wondering if there is something I can do that would act similar to\n> > a index over more than one table. \n> > \n> > I have about 3 million people in my DB at the moment, they all have\n> > roles, and many of them have more than one name. \n> > \n> > for example, a Judge will only have one name, but a Litigant could have\n> > multiple aliases. Things go far to slow when I do a query on a judge\n> > named smith.\n> \n> If you dont need all the judges named smith you could try to use LIMIT.\nUnfortunately I do need all of the judges named smith.\n\n\n> \n> Have you run ANALYZE ? Why does DB think that there is only one judge\n> with name like SMITH% ?\nI've attached the Analyze below. I have no idea why the db thinks there\nis only 1 judge named simth. Is there some what I can inform the DB\nabout this. In actuality, there aren't any judges named smith at the\nmoment, but there are 22K people named smith.\n\n\n> \n> -------------\n> Hannu\n> \n> P.S. 
\n> Always send EXPLAIN ANALYZE output if asking for advice on [PERFORM]\n EXPLAIN ANALYZE select distinct actor.actor_id,court.id,court.name,role_class_code,full_name from actor,identity,court,event,event_actor where role_class_code = 'Judge' and full_name like 'SMITH%' and identity.actor_id = actor.actor_id and identity.court_ori = actor.court_ori and actor.court_ori = court.id and actor.actor_id = event_actor.actor_id and event_actor.event_id = event.event_id and event_date_time > '20021126' order by full_name;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=686.42..686.44 rows=1 width=92) (actual time=111923.877..111923.877 rows=0 loops=1)\n -> Sort (cost=686.42..686.43 rows=1 width=92) (actual time=111923.873..111923.873 rows=0 loops=1)\n Sort Key: identity.full_name, actor.actor_id, court.id, court.name, actor.role_class_code\n -> Nested Loop (cost=8.45..686.41 rows=1 width=92) (actual time=111923.836..111923.836 rows=0 loops=1)\n -> Nested Loop (cost=8.45..680.57 rows=1 width=144) (actual time=109958.426..111157.822 rows=2449 loops=1)\n -> Hash Join (cost=8.45..9.62 rows=1 width=117) (actual time=109945.754..109945.896 rows=6 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".court_ori)::text)\n -> Seq Scan on court (cost=0.00..1.10 rows=10 width=34) (actual time=0.015..0.048 rows=10 loops=1)\n -> Hash (cost=8.45..8.45 rows=1 width=109) (actual time=109940.161..109940.161 rows=0 loops=1)\n -> Nested Loop (cost=0.00..8.45 rows=1 width=109) (actual time=10.367..109940.079 rows=7 loops=1)\n Join Filter: ((\"outer\".court_ori)::text = (\"inner\".court_ori)::text)\n -> Index Scan using name_speed on identity (cost=0.00..3.01 rows=1 width=59) (actual time=10.202..238.497 rows=22436 loops=1)\n Index Cond: (((full_name)::text >= 'SMITH'::character varying) AND ((full_name)::text < 'SMITI'::character varying))\n Filter: ((full_name)::text ~~ 'SMITH%'::text)\n -> Index Scan using actor_speed on actor (cost=0.00..5.42 rows=1 width=50) (actual time=4.883..4.883 rows=0 loops=22436)\n Index Cond: ((\"outer\".actor_id)::text = (actor.actor_id)::text)\n Filter: ((role_class_code)::text = 'Judge'::text)\n -> Index Scan using event_actor_speed on event_actor (cost=0.00..655.59 rows=1229 width=73) (actual time=11.815..198.759 rows=408 loops=6)\n Index Cond: ((event_actor.actor_id)::text = (\"outer\".actor_id)::text)\n -> Index Scan using event_pkey on event (cost=0.00..5.83 rows=1 width=52) (actual time=0.308..0.308 rows=0 loops=2449)\n Index Cond: ((\"outer\".event_id)::text = (event.event_id)::text)\n Filter: (event_date_time > '20021126'::bpchar)\n Total runtime: 111924.833 ms\n(23 rows)\n\n\n\n> \n> -------------\n> Hannu\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \nJeremiah Jahn <[email protected]>\n\n", "msg_date": "Mon, 01 Dec 2003 08:29:03 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cross table indexes or something?" 
}, { "msg_contents": "On Monday 01 December 2003 14:29, Jeremiah Jahn wrote:\n> On Wed, 2003-11-26 at 16:32, Hannu Krosing wrote:\n> > Jeremiah Jahn kirjutas K, 26.11.2003 kell 22:14:\n> > > I was wondering if there is something I can do that would act similar\n> > > to a index over more than one table.\n> > >\n> > > I have about 3 million people in my DB at the moment, they all have\n> > > roles, and many of them have more than one name.\n> > >\n> > > for example, a Judge will only have one name, but a Litigant could have\n> > > multiple aliases. Things go far to slow when I do a query on a judge\n> > > named smith.\n> >\n> > If you dont need all the judges named smith you could try to use LIMIT.\n>\n> Unfortunately I do need all of the judges named smith.\n>\n> > Have you run ANALYZE ? Why does DB think that there is only one judge\n> > with name like SMITH% ?\n>\n> I've attached the Analyze below. I have no idea why the db thinks there\n> is only 1 judge named simth. Is there some what I can inform the DB\n> about this. In actuality, there aren't any judges named smith at the\n> moment, but there are 22K people named smith.\n\nIt's guessing there's approximately 1. I don't think PG measures \ncross-correlation of various columns cross-table.\n\nIf role_class_code on table actor? If so, try:\n\nCREATE INDEX test_judge_idx ON actor (actor_id) WHERE role_class_code = \n'Judge';\n\nAnd then similar for the other class-codes (assuming you've not got too many \nof them). Or even just an index on (actor_id,role_class_code).\n\nIf role_class_code is on a different table, can you say which one? The problem \nis clearly this step:\n\n> -> Index Scan using actor_speed on\n> actor (cost=0.00..5.42 rows=1 width=50) (actual time=4.883..4.883 rows=0\n> loops=22436)\n> Index Cond: ((\"outer\".actor_id)::text =\n> (actor.actor_id)::text) Filter: ((role_class_code)::text = 'Judge'::text)\n\nThats 4.883 * 22436 loops = 109555 milliseconds.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 1 Dec 2003 15:59:52 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "> Jeremiah Jahn wrote:\n>\n> > Have you run ANALYZE ? Why does DB think that there is only \n> one judge \n> > with name like SMITH% ?\n> I've attached the Analyze below. I have no idea why the db \n> thinks there is only 1 judge named simth. Is there some what \n> I can inform the DB about this. In actuality, there aren't \n> any judges named smith at the moment, but there are 22K \n> people named smith.\n> \n\nI think you're mistaking the command EXPLAIN ANALYZE for the command\nANALYZE.\nHave you actually run the command ANALYZE or perhaps even better if you\nhaven't vacuumed before: VACUUM FULL ANALYZE\n\nIf you have no idea what vacuum is, check the manual. If you've already\nrun such a VACUUM/ANALYZE-command, then ignore this message :)\n\nBest regards,\n\nArjen van der Meijden\n\n\n\n", "msg_date": "Mon, 1 Dec 2003 17:24:43 +0100", "msg_from": "\"Arjen van der Meijden\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Jeremiah,\n\n> I've attached the Analyze below. I have no idea why the db thinks there\n> is only 1 judge named simth. Is there some what I can inform the DB\n> about this. 
In actuality, there aren't any judges named smith at the\n> moment, but there are 22K people named smith.\n\nNo, Hannu meant that you may need to run the following command:\n\nANALYZE actor;\n\n... to update the database statistics on the actors table. That is a \nmaintainence task that needs to be run periodically.\n\nIf that doesn't fix the bad plan, then the granularity of statistics on the \nfull_name column needs updating; I suggest:\n\nALTER TABLE actor ALTER COLUMN full_name SET STATISTICS 100;\nANALYZE actor;\n\nAnd if it's still choosing a slow nested loop, up the stats to 250.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 1 Dec 2003 11:47:51 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Thanks to all, I had already run analyze. But the STATISTICS setting\nseems to have worked. I'm just not sure what it did..? Would anyone care\nto explain. \n\n\nOn Mon, 2003-12-01 at 13:47, Josh Berkus wrote:\n> Jeremiah,\n> \n> > I've attached the Analyze below. I have no idea why the db thinks there\n> > is only 1 judge named simth. Is there some what I can inform the DB\n> > about this. In actuality, there aren't any judges named smith at the\n> > moment, but there are 22K people named smith.\n> \n> No, Hannu meant that you may need to run the following command:\n> \n> ANALYZE actor;\n> \n> ... to update the database statistics on the actors table. That is a \n> maintainence task that needs to be run periodically.\n> \n> If that doesn't fix the bad plan, then the granularity of statistics on the \n> full_name column needs updating; I suggest:\n> \n> ALTER TABLE actor ALTER COLUMN full_name SET STATISTICS 100;\n> ANALYZE actor;\n> \n> And if it's still choosing a slow nested loop, up the stats to 250.\n-- \nJeremiah Jahn <[email protected]>\n\n", "msg_date": "Tue, 02 Dec 2003 13:08:13 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Jeremiah,\n\n> Thanks to all, I had already run analyze. But the STATISTICS setting\n> seems to have worked. I'm just not sure what it did..? Would anyone care\n> to explain.\n\nThe STATISTICS setting improves the granularity of statistics kept by the \nquery planner on that column; increasing the granularity (i.e. more random \nsamples) can significantly improve things in cases where you have data whose \ndistribution is significantly skewed. Certainly whenever you see the query \nplanner using a slow nestloop becuase of a bad row-return estimate, it is one \nof the first things to try.\n\nIts drawbacks are 4-fold:\n1) to keep it working, you will probably need to run ANALZYE more often than \nyou have been;\n2) these ANALYZEs will take longer, and have the annoying side effect of \nflooring your CPU while they do;\n3) You will have to be sure that your vacuum plan includes vacuuming the \npg_statistic table as the database superuser, as that table will be getting \nupdated more often.\n4) Currently, pg_dump does *not* back up statistics settings. 
So you will \nneed to save a script which does this in preparation for having to restore \nyour database.\n\nWhich is why the stats are set low by default.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 2 Dec 2003 11:27:52 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> 1) to keep it working, you will probably need to run ANALZYE more\n> often than you have been;\n\nI'm not sure why this would be the case -- can you elaborate?\n\n> 4) Currently, pg_dump does *not* back up statistics settings.\n\nYes, it does.\n\n-Neil\n\n", "msg_date": "Tue, 02 Dec 2003 16:57:27 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Neil,\n\n> > 1) to keep it working, you will probably need to run ANALZYE more\n> > often than you have been;\n> \n> I'm not sure why this would be the case -- can you elaborate?\n\nFor the more granular stats to be useful, they have to be accurate; otherwise \nyou'll go back to a nestloop as soon as the query planner encounters a value \nthat it doens't think is in the table at all. \n\n> \n> > 4) Currently, pg_dump does *not* back up statistics settings.\n> \n> Yes, it does.\n\nOh, good. Was this a 7.4 improvement? I missed that in the changelogs ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 2 Dec 2003 15:04:28 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Oh, good. Was this a 7.4 improvement?\n\nNo, it was in 7.3\n\n-Neil\n\n", "msg_date": "Tue, 02 Dec 2003 18:37:17 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": "> 4) Currently, pg_dump does *not* back up statistics settings.\n\nIs this a TODO?\n\nChris\n\n\n", "msg_date": "Wed, 03 Dec 2003 09:50:40 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" }, { "msg_contents": ">> 4) Currently, pg_dump does *not* back up statistics settings.\n> \n> \n> Is this a TODO?\n\nOops - sorry I thought you meant 'pg_dump does not back up statistics'. \n Probably still should be a TODO :)\n\nChris\n\n\n", "msg_date": "Wed, 03 Dec 2003 10:04:39 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cross table indexes or something?" } ]
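Pulling together the two fixes suggested in this thread, a sketch against the schema Jeremiah posted. Note that full_name lives on the identity table in that schema, so the SET STATISTICS is aimed at identity rather than actor, and 100 is just the starting value Josh suggested; the index name is invented:

    -- partial index so judges can be found without filtering every matching actor row
    CREATE INDEX actor_judge_idx ON actor (actor_id) WHERE role_class_code = 'Judge';

    -- finer-grained statistics on the skewed name column, then refresh the stats
    ALTER TABLE identity ALTER COLUMN full_name SET STATISTICS 100;
    ANALYZE identity;
    ANALYZE actor;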
[ { "msg_contents": "Hello All,\nWe will have a very large database to store\nmicroarray data (may exceed 80-100G some day). now\nwe have 1G RAM, 2G Hz Pentium 4, 1 CPU. and enough\nhard disk. \n\nI never touched such large database before. I ask\nseveral dbas if the hardware is ok, some said it is\nok for the query, but I am not so convinced. Because\nI check the mailing list and learned that it is not\nunreasonable to take several minutes to do the\nquery. But I want to query to be as fast as possible. \n\nCould anybody tell me that our hardware is an issue\nor not? do we really need better hardware to make\nreal difference?\n\nRegards,\nWilliam\n\n", "msg_date": "Wed, 26 Nov 2003 20:22:01 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "very large db performance question" }, { "msg_contents": "LIANHE SHAO <[email protected]> writes:\n> We will have a very large database to store microarray data (may\n> exceed 80-100G some day). now we have 1G RAM, 2G Hz Pentium 4, 1\n> CPU. and enough hard disk.\n\n> Could anybody tell me that our hardware is an issue or not?\n\nIMHO the size of the DB is less relevant than the query workload. For\nexample, if you're storying 100GB of data but only doing a single\nindex scan on it every 10 seconds, any modern machine with enough HD\nspace should be fine.\n\nIf you give us an idea of the # of queries you expect per second, the\napproximate mix of reads and writes, and some idea of how complex the\nqueries are, we might be able to give you some better advice.\n\n-Neil\n\n\n", "msg_date": "Wed, 26 Nov 2003 17:03:31 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very large db performance question" }, { "msg_contents": "Hi,\n\nI have done some performance tests on 1Gb and 4 Gb Databases on a mono\nPentium 4 , 1 Gb RAM, IDE disk, SCSI disks and RAID0 LUN on DAS 5300 on\nLinux RedHat 7.3.\n\nIn each cases my tests make select, update and insert.\nOne of them is pgbench. You can find it in Postgres/contrib/pgbench.\nThe other one is DBT1 form OSDL. I have port it on Postgres and you can find\nit in Source Forge. Jenny Wang is writting a better Postgres DBT1 based on C\ntransactions instead of PL/pgSQL Transactions.\n\nWith this size of database, even after a fine tuning of Postgres the problem\nis I/O Wait. So in you case with 100 Gb, you will have I/O Wait.\nTo resume my observations regarding diskss performances :\n1) IDE disk are the slower.\n2) SCSI disks are a little more faster but you can decrease I/O Wait by 25%\nby creating a stripped volume group on 3 disks.\n3) A RAID0 on 5 DAS5300 disks improve again performances by 20% as the DAS\nStorage Processeur use internal caches\n\nOne thing, very important in my case was the time of (hot) backup / restore.\n\nIn that case the pgbench database schema is to simple to have an idea but\nDBT1 schema is enough complex and on the RAID0 LUN the backup takes 12 min\nbut the restore takes 16 min + 10 min to recreate the indexes + 255 min to\nrecreate the Foreign Keys. So 4h41 for a 4Gb database.\n\nThat means for a 100 Gb database, if your schema as Foreign keys and indexes\n: about 5 hours to backup and 117 hours to restore (~5 days).\nSo, if your database in a critical database, it is better to use cold backup\nwith Snapshot tools.\n\nRegards,\nThierry Missimilly\n\nLIANHE SHAO wrote:\n\n> Hello All,\n> We will have a very large database to store\n> microarray data (may exceed 80-100G some day). 
now\n> we have 1G RAM, 2G Hz Pentium 4, 1 CPU. and enough\n> hard disk.\n>\n> I never touched such large database before. I ask\n> several dbas if the hardware is ok, some said it is\n> ok for the query, but I am not so convinced. Because\n> I check the mailing list and learned that it is not\n> unreasonable to take several minutes to do the\n> query. But I want to query to be as fast as possible.\n>\n> Could anybody tell me that our hardware is an issue\n> or not? do we really need better hardware to make\n> real difference?\n>\n> Regards,\n> William\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend", "msg_date": "Thu, 27 Nov 2003 09:42:30 +0100", "msg_from": "Thierry Missimilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very large db performance question" }, { "msg_contents": "> IMHO the size of the DB is less relevant than the query workload. For\n> example, if you're storying 100GB of data but only doing a single\n> index scan on it every 10 seconds, any modern machine with enough HD\n> space should be fine.\n\nI agree that the workload is likely to be the main issue in most\nsituations. However, if your queries involve lots of counting and\naggregating, your databases contains several gigabytes of data, and you\nare using common hardware, be prepared to wait anywhere from minutes to\nhours, even if you are the only user.\n\n", "msg_date": "Tue, 17 Feb 2004 11:36:35 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very large db performance question" } ]
[ { "msg_contents": "Thanks for reply. Actually our database only supply\nsome scientists to use (we predict that). so there\nis no workload problem. there is only very\ninfrequent updates. the query is not complex. the\nproblem is, we have one table that store most of the\ndata ( with 200 million rows). In this table, there\nis a text column which we need to do full text\nsearch for each row. The result will then join the\ndata from another table which has 30,000 rows. Now\nthe query runs almost forever. \n\nI tried a small table with 2 million rows using the\nfollowing simple command, it takes me about 6\nseconds to get the result back. So, I get confused.\nThat is why I ask: Is it the hardware problem or\nsomething else. (I just vacuumed the whole database\nyesterday). \n \nPGA=> select count (*) from expressiondata ;\n count\n---------\n 2197497\n(1 row)\n\n\nPGA=> explain select count (*) from expressiondata ;\n QUERY PLAN\n------------------------------------------------------------------------------\n Aggregate (cost=46731.71..46731.71 rows=1 width=0)\n -> Seq Scan on expressiondata \n(cost=0.00..41237.97 rows=2197497 width=0)\n(2 rows)\n\n \n \nRegards, \nWilliam\n\n----- Original Message -----\nFrom: Neil Conway <[email protected]>\nDate: Wednesday, November 26, 2003 10:03 pm\nSubject: Re: [PERFORM] very large db performance\nquestion\n\n> LIANHE SHAO <[email protected]> writes:\n> > We will have a very large database to store\nmicroarray data (may\n> > exceed 80-100G some day). now we have 1G RAM, 2G\nHz Pentium 4, 1\n> > CPU. and enough hard disk.\n> \n> > Could anybody tell me that our hardware is an\nissue or not?\n> \n> IMHO the size of the DB is less relevant than the\nquery workload. For\n> example, if you're storying 100GB of data but only\ndoing a single\n> index scan on it every 10 seconds, any modern\nmachine with enough HD\n> space should be fine.\n> \n> If you give us an idea of the # of queries you\nexpect per second, the\n> approximate mix of reads and writes, and some idea\nof how complex the\n> queries are, we might be able to give you some\nbetter advice.\n> \n> -Neil\n> \n> \n> \n\n", "msg_date": "Wed, 26 Nov 2003 22:46:24 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very large db performance question" }, { "msg_contents": "> Thanks for reply. Actually our database only supply\n> some scientists to use (we predict that). so there\n> is no workload problem. there is only very\n> infrequent updates. the query is not complex. the\n> problem is, we have one table that store most of the\n> data ( with 200 million rows). In this table, there\n> is a text column which we need to do full text\n> search for each row. The result will then join the\n> data from another table which has 30,000 rows. Now\n> the query runs almost forever. \n\nUse TSearch2.\n\n> I tried a small table with 2 million rows using the\n> following simple command, it takes me about 6\n> seconds to get the result back. So, I get confused.\n> That is why I ask: Is it the hardware problem or\n> something else. (I just vacuumed the whole database\n> yesterday). \n> \n> PGA=> select count (*) from expressiondata ;\n> count\n> ---------\n> 2197497\n> (1 row)\n\nselect count(*) on a postgres table ALWAYS does a sequential scan. Just \ndon't do it. There are technical reasons (MVCC) why this is so. 
It's a \nbad \"test\".\n\nChris\n\n\n", "msg_date": "Thu, 27 Nov 2003 09:03:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very large db performance question" } ]
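For the 200-million-row table described above, the tsearch2 route Chris points at would look roughly like the ftstest example from the earlier full-text thread. Table and column names here (bigtable, words, idx) are placeholders, and keeping idxfti current normally needs the tsearch2 trigger, which this sketch skips:

    ALTER TABLE bigtable ADD COLUMN idxfti tsvector;
    UPDATE bigtable SET idxfti = to_tsvector(words);        -- one slow initial pass on a table this size
    CREATE INDEX bigtable_fti_idx ON bigtable USING gist (idxfti);
    VACUUM ANALYZE bigtable;

    SELECT idx FROM bigtable WHERE idxfti @@ 'someword'::tsquery;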
[ { "msg_contents": "I'v spent a couple days playing with this problem and searching the mailing lists and\ndocs etc but come up with nothing. Any help would be much appreciated.\n\nSetup is postgres 7.3.2 on redhat 7.1 on a 1.3GHz Athlon machine with 1G pc133 ram and\nSCSI.\n\nHere is the same query with the addition of a left join onto a list of contacts to\ngrab the last name of each connected contact. I'd think this should be real quick since\nit jsut has to grab around 100 names from the list, and if its smart enough to grab just\ndistinct IDs, then it's just like 10 rows it has to grab using the primary field. But as\nfar as i can tell (and i may VERY well be reading the explain syntax wrong), it is\ngrabbing them all and joining them first, rather than doing the operation that limits\nthe result rows to a mere 100 and THEN doing the join to contacts. It would be faster\nif i did a separate query using a big IN(id1,id2,...) condition, which makes no sense to\nme. Plus i REALLY want to avoid this as the selected fields and the joins and conditions\nare all variable and controlled (indirectly and transparently) by the user.\n\nPoint is, why does a simple left join slow things down so much? in my experience\n(primarily with mysql but also over a year with postgre) simple left joins are usually\nquite quick. I can only guess that a bad plan is being chosen. PLEASE don't tell me i\nneed to store a copy of the names in the events table to get acceptable speed, cause\nthis would be plain sacrilegious in terms of DB design. Or is this simply as fast as\nthese queries can go? Just seems too long for the work that's being done IME.\n\nevents table has 12355 rows\ncontacts has 20064\nevent_managers has 8502\n\nAll fields with conditions (object_ids, contact, event_id, user_id, deleted_on) are indexed with btree.\n\nHere is the query with the left join.\n\nsauce=# explain analyze SELECT top.object_id , top.who, top.datetime, top.priority, top.subject, top.action, top_contact_.last_name, top.object_id, top_contact_.object_id\n\t\t\tFROM event_managers AS managers\n\t\t\t\tJOIN ONLY events AS top ON(managers.event_id=top.object_id)\n\t\t\t\tLEFT JOIN contacts AS top_contact_ ON(top.contact=top_contact_.object_id and top_contact_.deleted_on IS NULL)\n\t\t\tWHERE true AND managers.user_id=238;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\nMerge Join (cost=5569.24..5671.22 rows=100 width=91) (actual time=485.95..526.25 rows=208 loops=1)\n Merge Cond: (\"outer\".contact = \"inner\".object_id)\n Join Filter: (\"inner\".deleted_on IS NULL)\n -> Sort (cost=2467.17..2467.42 rows=100 width=60) (actual time=143.67..143.75 rows=208 loops=1)\n Sort Key: top.contact\n -> Hash Join (cost=143.63..2463.83 rows=100 width=60) (actual time=0.89..142.64 rows=208 loops=1)\n Hash Cond: (\"outer\".object_id = \"inner\".event_id)\n -> Seq Scan on events top (cost=0.00..1830.19 rows=12219 width=56) (actual time=0.05..131.33 rows=12219 loops=1)\n -> Hash (cost=143.45..143.45 rows=69 width=4) (actual time=0.65..0.65 rows=0 loops=1)\n -> Index Scan using event_managers_user_id on event_managers managers (cost=0.00..143.45 rows=69 width=4) (actual time=0.14..0.50 rows=139 loops=1)\n Index Cond: (user_id = 238)\n -> Sort (cost=3102.07..3152.23 rows=20064 width=31) (actual time=342.23..360.29 rows=19964 loops=1)\n Sort Key: top_contact_.object_id\n -> Append (cost=0.00..1389.64 rows=20064 width=31) (actual 
time=0.06..115.63 rows=20064 loops=1)\n -> Seq Scan on contacts top_contact_ (cost=0.00..1383.43 rows=20043 width=31) (actual time=0.06..101.04 rows=20043 loops=1)\n -> Seq Scan on users top_contact_ (cost=0.00..6.21 rows=21 width=31) (actual time=0.05..0.29 rows=21 loops=1)\n Total runtime: 527.47 msec\n(17 rows)\n\n\nThe same thing but without the left join. Much faster. Anything slower than\nthis would be unacceptable, especailly given how small the tables are at this\npoint. They are expected to grow ALOT bigger within a year.\n\nsauce=# explain analyze SELECT top.object_id , top.who, top.datetime, top.priority, top.subject, top.action, top.object_id\n\t\t\tFROM event_managers AS managers\n\t\t\t\tJOIN ONLY events AS top ON(managers.event_id=top.object_id)\n\t\t\tWHERE true AND managers.user_id=238;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=143.63..2463.83 rows=100 width=56) (actual time=1.48..137.74 rows=208 loops=1)\n Hash Cond: (\"outer\".object_id = \"inner\".event_id)\n -> Seq Scan on events top (cost=0.00..1830.19 rows=12219 width=52) (actual time=0.06..125.80 rows=12219 loops=1)\n -> Hash (cost=143.45..143.45 rows=69 width=4) (actual time=1.20..1.20 rows=0 loops=1)\n -> Index Scan using event_managers_user_id on event_managers managers (cost=0.00..143.45 rows=69 width=4) (actual time=0.21..1.03 rows=139 loops=1)\n Index Cond: (user_id = 238)\n Total runtime: 137.96 msec\n(7 rows)\n\nagain, many thanks for any feedback!\n\n", "msg_date": "Thu, 27 Nov 2003 21:58:29 -0800", "msg_from": "Jonathan Knopp <[email protected]>", "msg_from_op": true, "msg_subject": "simple left join slows query more than expected" }, { "msg_contents": "Jonathan Knopp <[email protected]> writes:\n> I'v spent a couple days playing with this problem and searching the mailing lists and\n> docs etc but come up with nothing. Any help would be much appreciated.\n\nDon't use inheritance to define the contacts table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Nov 2003 12:55:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple left join slows query more than expected " } ]
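One way to see Tom's point in the plan above: contacts has a child table (users appears under it in the Append node), so the plain LEFT JOIN has to append and sort both tables before the merge. If only rows physically stored in contacts matter for this join (an assumption), restricting the join to the parent table avoids that; a sketch with the select list abbreviated:

    SELECT top.object_id, top.who, top.datetime, top_contact_.last_name
      FROM event_managers AS managers
      JOIN ONLY events AS top ON (managers.event_id = top.object_id)
      LEFT JOIN ONLY contacts AS top_contact_
             ON (top.contact = top_contact_.object_id AND top_contact_.deleted_on IS NULL)
     WHERE managers.user_id = 238;

The longer-term fix is the one Tom gives: define contacts without inheritance.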
[ { "msg_contents": "Hi!\n\n\nI have 4 question which probably someone can answer.\n\n1) I have a transaction during which no data was modified, does it\nmake a difference whether i send COMMIT or ROLLBACK? The effect is the\nsame, but what�s about the speed?\n\n2) Is there any general rule when the GEQO will start using an index?\nDoes he consider the number of tuples in the table or the number of\ndata pages? Or is it even more complex even if you don�t tweak the\ncost setting for the GEQO?\n\n3) Makes it sense to add a index to a table used for logging? I mean\nthe table can grow rather large due to many INSERTs, but is also\nseldom queried. Does the index slowdown noticable INSERTs?\n\n4) Temporary tables will always be rather slow as they can�t gain from\nANALYZE runs, correct?\n\nThanx in advance for any answer\n\nChristoph Nelles\n\n \n\n-- \nMit freundlichen Gr�ssen\nEvil Azrael mailto:[email protected]\n\n", "msg_date": "Mon, 1 Dec 2003 14:01:42 +0100", "msg_from": "Evil Azrael <[email protected]>", "msg_from_op": true, "msg_subject": "Various Questions" } ]
[ { "msg_contents": "Hi!\n\n\nI have 4 question which probably someone can answer.\n\n1) I have a transaction during which no data was modified, does it\nmake a difference whether i send COMMIT or ROLLBACK? The effect is the\nsame, but what�s about the speed?\n\n2) Is there any general rule when the GEQO will start using an index?\nDoes he consider the number of tuples in the table or the number of\ndata pages? Or is it even more complex even if you don�t tweak the\ncost setting for the GEQO?\n\n3) Makes it sense to add a index to a table used for logging? I mean\nthe table can grow rather large due to many INSERTs, but is also\nseldom queried. Does the index slowdown noticable INSERTs?\n\n4) Temporary tables will always be rather slow as they can�t gain from\nANALYZE runs, correct?\n\nThanx in advance for any answer\n\nChristoph Nelles\n\n-- \nMit freundlichen Gr�ssen\nEvil Azrael mailto:[email protected]\n\n", "msg_date": "Mon, 1 Dec 2003 14:07:50 +0100", "msg_from": "Evil Azrael <[email protected]>", "msg_from_op": true, "msg_subject": "Various Questions" }, { "msg_contents": "On Monday 01 December 2003 18:37, Evil Azrael wrote:\n> 1) I have a transaction during which no data was modified, does it\n> make a difference whether i send COMMIT or ROLLBACK? The effect is the\n> same, but what´s about the speed?\n\nIt should not matter. Both commit and rollback should take same amount of \ntime..\n\n> 2) Is there any general rule when the GEQO will start using an index?\n> Does he consider the number of tuples in the table or the number of\n> data pages? Or is it even more complex even if you don´t tweak the\n> cost setting for the GEQO?\n\nI thought GEQO was triggered by numebr of join clauses. That is what GEQO cost \nindicates. It is not triggered by number of tuples in any table etc.\n\nBut correct me if I am wrong.\n\n> 3) Makes it sense to add a index to a table used for logging? I mean\n> the table can grow rather large due to many INSERTs, but is also\n> seldom queried. Does the index slowdown noticable INSERTs?\n\nYes. It does make a lot of difference. If the table is very seldom queried, \nyou can probably create the index before querying and drop it later. However \neven this will cost a seq. scan of table and can be heavy on performance..\n Take your pick\n\nShridhar\n\n", "msg_date": "Mon, 1 Dec 2003 18:56:03 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various Questions" }, { "msg_contents": "On Mon, Dec 01, 2003 at 02:07:50PM +0100, Evil Azrael wrote:\n> 1) I have a transaction during which no data was modified, does it\n> make a difference whether i send COMMIT or ROLLBACK? The effect is the\n> same, but what�s about the speed?\n\nIt makes no difference.\n\n> 2) Is there any general rule when the GEQO will start using an index?\n> Does he consider the number of tuples in the table or the number of\n> data pages? Or is it even more complex even if you don�t tweak the\n> cost setting for the GEQO?\n\nGEQO is not what causes indexscans. You're thinking of the\nplanner/optimiser. Generally, the optimiser decides what the optimum\nplan is to deliver a query. This involves a complicated set of\nrules. The real important question is, \"Am I really getting the\nfastest plan?\" You can find out that with EXPLAIN ANALYSE. If you\nwant to know more about what makes a good plan, I'd start by reading\nthe docs, and then by reading the comments in the source code.\n\n> 3) Makes it sense to add a index to a table used for logging? 
I mean\n> the table can grow rather large due to many INSERTs, but is also\n> seldom queried. Does the index noticeably slow down INSERTs?\n\nIt does, but you might find that it's worth it. If it is seldom\nqueried, but you really need the results and the result set is a\nsmall % of the table, then you're probably wise to pay the cost of\nthe index at insert, update, and VACUUM because doing a seqscan on a\nlarge table to get one or two rows will destroy all your buffers.\n\n> 4) Temporary tables will always be rather slow as they can't gain from\n> ANALYZE runs, correct?\n\nNo, you can ANALYSE them yourself. Of course, you'll need an index\nunless you plan to read the whole table. Note that, if you use temp\ntables a lot, you need to be sure to vacuum at least pg_class and\npg_attribute more frequently than you might have thought.\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 1 Dec 2003 09:00:58 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various Questions" } ]
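A minimal sketch of Andrew's last two answers; the table, column, and index names are invented for the example:

    CREATE TEMP TABLE scratch (id integer, payload text);
    -- ... load the temp table here ...
    CREATE INDEX scratch_id_idx ON scratch (id);
    ANALYZE scratch;               -- temp tables only get statistics when you ask

    -- heavy temp-table churn bloats the catalogs, so vacuum them now and then
    VACUUM ANALYZE pg_class;
    VACUUM ANALYZE pg_attribute;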
[ { "msg_contents": "Greetings:\n\nApologies if this question has already been answered, but I was unable \nto locate a prior answer in the archives...\n\nI have a table with approximately 10 million records, called \n\"indethom\", and with an INTEGER column called \"clavis\" which is set up \nas a primary key. When I try to perform a select on the table, \nrestricting the result to only the first 100 records, PostgreSQL \nperforms a sequence scan, rather than an index scan (as shown by using \nEXPLAIN). Needless to say the sequence scan takes forever. Is there \nsome way to get PostgreSQL to use my wonderful indexes? Have I somehow \nbuilt the indexes incorrectly or something?\n\nHere's the description of the table:\n\n====================== PSQL Output Snip =========================\n\nit=> \\d indethom\n Table \"public.indethom\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n numeoper | smallint | not null\n nomeoper | character(3) | not null\n... (numerous columns skipped) ...\n verbum | character varying(22) | not null\n poslinop | integer | not null\n posverli | smallint | not null\n posverop | integer | not null\n clavis | integer | not null\n articref | integer |\n sectref | integer |\n query_counter | integer |\nIndexes: indethom_pkey primary key btree (clavis),\n indethom_articulus_ndx btree (nomeoper, refere1a, refere1b, \nrefere2a, refere2b, refere3a, refere3b),\n indethom_sectio_ndx btree (nomeoper, refere1a, refere1b, \nrefere2a, refere2b, refere3a, refere3b, refere4a, refere4b),\n it_clavis_ndx btree (clavis),\n verbum_ndx btree (verbum)\n\nit=> explain select * from indethom where clavis < 25;\n QUERY PLAN\n----------------------------------------------------------------------\n Seq Scan on indethom (cost=0.00..1336932.65 rows=3543991 width=236)\n Filter: (clavis < 25)\n(2 rows)\n\n================== End Snip =====================\n\nFeel free to point me to any FAQ or previous message that already \nanswers this question. Thanks in advance!\n\n-Erik Norvelle", "msg_date": "Mon, 1 Dec 2003 14:40:30 +0100", "msg_from": "Erik Norvelle <[email protected]>", "msg_from_op": true, "msg_subject": "My indexes aren't being used (according to EXPLAIN)" }, { "msg_contents": "On Mon, Dec 01, 2003 at 02:40:30PM +0100, Erik Norvelle wrote:\n> \n> it=> explain select * from indethom where clavis < 25; \n\nWhat's the percentage of the table where clavis < 25? Have you\nANALYSEd recently? What does the pg_stats view tell you about this\ntable? \n\n> Feel free to point me to any FAQ or previous message that already \n> answers this question. Thanks in advance! \n\nThis is a pretty common sort of problem. See the archives of this\nlist for several fairly recent discussions of these sorts of\nproblems.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 1 Dec 2003 09:04:10 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My indexes aren't being used (according to EXPLAIN)" }, { "msg_contents": "The ANALYSE did the trick... Thanks! Will also read through the \narchives...\n\n-Erik\n\nOn lunes, dici 1, 2003, at 15:04 Europe/Madrid, Andrew Sullivan wrote:\n\n> On Mon, Dec 01, 2003 at 02:40:30PM +0100, Erik Norvelle wrote:\n>>\n>> it=> explain select * from indethom where clavis < 25;\n>\n> What's the percentage of the table where clavis < 25? Have you\n> ANALYSEd recently? 
What does the pg_stats view tell you about this\n> table?\n>\n>> Feel free to point me to any FAQ or previous message that already\n>> answers this question. Thanks in advance!\n>\n> This is a pretty common sort of problem. See the archives of this\n> list for several fairly recent discussions of these sorts of\n> problems.\n>\n> A\n>\n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Afilias Canada Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Mon, 1 Dec 2003 16:11:11 +0100", "msg_from": "Erik Norvelle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: My indexes aren't being used (according to EXPLAIN)" } ]
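For reference, the sequence that mattered in this thread, plus the pg_stats check Andrew mentioned; nothing here goes beyond what the thread itself reports:

    ANALYZE indethom;

    -- the planner's view of the column after analyzing
    SELECT n_distinct, correlation
      FROM pg_stats
     WHERE tablename = 'indethom' AND attname = 'clavis';

    EXPLAIN SELECT * FROM indethom WHERE clavis < 25;   -- expected to switch to an index scan, as the follow-up reports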
[ { "msg_contents": "I am currently working on optimizing some fairly time consuming queries \non a decently large\ndataset.\n\nThe Following is the query in question.\n\nSELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year\n FROM quotes AS q, zips AS z, cars AS c\n WHERE\n z.zip = q.zip AND\n c.car_id = q.car_id AND\n z.state != 'AA' AND\n z.state != 'AE' AND\n z.state != 'AP' AND\n z.state = 'WA'\n ORDER BY date_time;\n\nThe tables are as follows.\n\n Table \"public.cars\"\n Column | Type | Modifiers\n---------------+-----------------------+----------------------------------------\n car_id | character varying(10) | not null default ''::character \nvarying\n nags_glass_id | character varying(7) | not null default ''::character \nvarying\n make | character varying(30) | not null default ''::character \nvarying\n model | character varying(30) | not null default ''::character \nvarying\n year | character varying(4) | not null default ''::character \nvarying\n style | character varying(30) | not null default ''::character \nvarying\n price | double precision | not null default (0)::double \nprecision\nIndexes:\n \"cars_pkey\" primary key, btree (car_id)\n \"cars_car_id_btree_index\" btree (car_id)\n \"make_cars_index\" btree (make)\n \"model_cars_index\" btree (model)\n \"year_cars_index\" btree (\"year\")\n\n Table \"public.quotes\"\n Column | Type \n| Modifiers\n-------------------+-----------------------------+---------------------------------------------------------------------\n quote_id | bigint | not null default \nnextval('quotes_quote_id_seq'::text)\n visitor_id | bigint | not null default \n(0)::bigint\n date_time | timestamp without time zone | not null default \n'0001-01-01 00:00:00'::timestamp without time zone\n car_id | character varying(10) | not null default \n''::character varying\n email | text | not null default ''::text\n zip | character varying(5) | not null default \n''::character varying\n current_referrer | text | not null default ''::text\n original_referrer | text | not null default ''::text\nIndexes:\n \"quotes_pkey\" primary key, btree (quote_id)\n \"car_id_quotes_index\" btree (car_id)\n \"visitor_id_quotes_index\" btree (visitor_id)\n \"zip_quotes_index\" btree (zip)\n\n Table \"public.zips\"\n Column | Type | Modifiers\n--------+-----------------------+---------------------------------------------------\n zip_id | bigint | not null default \nnextval('zips_zip_id_seq'::text)\n zip | character varying(5) | not null default ''::character varying\n city | character varying(28) | not null default ''::character varying\n state | character varying(2) | not null default ''::character varying\n lat | character varying(10) | not null default ''::character varying\n lon | character varying(10) | not null default ''::character varying\nIndexes:\n \"zips_pkey\" primary key, btree (zip_id)\n \"zip_zips_index\" btree (zip)\n \"zips_state_btree_index\" btree (state)\n\nThe above query with the default setting of 10 for \ndefault_statistics_target runs as follows\n\n(From Explain Analyze)\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=58064.16..58074.20 rows=4015 width=80) (actual \ntime=2415.060..2421.421 rows=4539 loops=1)\n Sort Key: q.date_time\n -> Merge Join (cost=57728.02..57823.84 rows=4015 width=80) (actual \ntime=2254.056..2345.013 rows=4539 loops=1)\n Merge Cond: (\"outer\".\"?column7?\" = 
\"inner\".\"?column5?\")\n -> Sort (cost=56880.61..56890.65 rows=4015 width=62) (actual \ntime=2054.353..2062.189 rows=4693 loops=1)\n Sort Key: (q.car_id)::text\n -> Hash Join (cost=1403.91..56640.29 rows=4015 \nwidth=62) (actual time=8.479..1757.126 rows=10151 loops=1)\n Hash Cond: ((\"outer\".zip)::text = (\"inner\".zip)::text)\n -> Seq Scan on quotes q (cost=0.00..10657.42 \nrows=336142 width=27) (actual time=0.062..657.015 rows=336166 loops=1)\n -> Hash (cost=1402.63..1402.63 rows=511 width=52) \n(actual time=8.273..8.273 rows=0 loops=1)\n -> Index Scan using zips_state_btree_index \non zips z (cost=0.00..1402.63 rows=511 width=52) (actual \ntime=0.215..6.877 rows=718 loops=1)\n Index Cond: ((state)::text = 'WA'::text)\n Filter: (((state)::text <> 'AA'::text) \nAND ((state)::text <> 'AE'::text) AND ((state)::text <> 'AP'::text))\n -> Sort (cost=847.41..870.91 rows=9401 width=37) (actual \ntime=199.172..216.354 rows=11922 loops=1)\n Sort Key: (c.car_id)::text\n -> Seq Scan on cars c (cost=0.00..227.01 rows=9401 \nwidth=37) (actual time=0.104..43.523 rows=9401 loops=1)\n Total runtime: 2427.937 ms\n\nIf I set enable_seqscan=off I get the following\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=122108.52..122118.62 rows=4039 width=80) (actual \ntime=701.002..707.442 rows=4541 loops=1)\n Sort Key: q.date_time\n -> Nested Loop (cost=0.00..121866.59 rows=4039 width=80) (actual \ntime=0.648..624.134 rows=4541 loops=1)\n -> Nested Loop (cost=0.00..102256.36 rows=4039 width=62) \n(actual time=0.374..381.440 rows=10153 loops=1)\n -> Index Scan using zips_state_btree_index on zips z \n(cost=0.00..1413.31 rows=514 width=52) (actual time=0.042..9.043 \nrows=718 loops=1)\n Index Cond: ((state)::text = 'WA'::text)\n Filter: (((state)::text <> 'AA'::text) AND \n((state)::text <> 'AE'::text) AND ((state)::text <> 'AP'::text))\n -> Index Scan using zip_quotes_index on quotes q \n(cost=0.00..195.59 rows=48 width=27) (actual time=0.039..0.426 rows=14 \nloops=718)\n Index Cond: ((\"outer\".zip)::text = (q.zip)::text)\n -> Index Scan using cars_car_id_btree_index on cars c \n(cost=0.00..4.84 rows=1 width=37) (actual time=0.015..0.017 rows=0 \nloops=10153)\n Index Cond: ((c.car_id)::text = (\"outer\".car_id)::text)\n Total runtime: 711.375 ms\n\nI can also get a similar plan if I disable both Hash Joins and Merge Joins.\n\nFurthermore I can get some additional speedup without turning off \nsequence scans if I\nset the value of default_statistics_target = 1000 then the runtime will \nbe around 1200\notoh if I set default_statistics_target = 100 then the runtime will be \naround 12000.\n\nSo, my question is is there any way to get the query planner to \nrecognize the potential\nperformance increase available by using the indexes that are set up \nwithout specifically\nturning off sequential scans before I run this query every time?\n\nThanks for the help.\n\nJared\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 01 Dec 2003 13:44:25 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "A question on the query planner" }, { "msg_contents": "Jared Carr <[email protected]> writes:\n> I am currently working on optimizing some fairly time consuming queries \n> on a decently large dataset.\n\nIt doesn't look that large from here ;-). 
I'd suggest experimenting\nwith reducing random_page_cost, since at least for your test query\nit sure looks like everything is in RAM. In theory random_page_cost = 1.0\nis the correct setting for all-in-RAM cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Dec 2003 19:45:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner " }, { "msg_contents": "On Mon, 2003-12-01 at 16:44, Jared Carr wrote:\n> I am currently working on optimizing some fairly time consuming queries \n> on a decently large\n> dataset.\n> \n> The Following is the query in question.\n> \n> SELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year\n> FROM quotes AS q, zips AS z, cars AS c\n> WHERE\n> z.zip = q.zip AND\n> c.car_id = q.car_id AND\n> z.state != 'AA' AND\n> z.state != 'AE' AND\n> z.state != 'AP' AND\n> z.state = 'WA'\n> ORDER BY date_time;\n> \n\nThis wont completely solve your problem, but z.state = 'WA' would seem\nto be mutually exclusive of the != AA|AE|AP. While it's not much, it is\nextra overhead there doesn't seem to be any need for...\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "02 Dec 2003 12:16:21 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Robert Treat wrote:\n\n>On Mon, 2003-12-01 at 16:44, Jared Carr wrote:\n> \n>\n>>I am currently working on optimizing some fairly time consuming queries \n>>on a decently large\n>>dataset.\n>>\n>>The Following is the query in question.\n>>\n>>SELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year\n>> FROM quotes AS q, zips AS z, cars AS c\n>> WHERE\n>> z.zip = q.zip AND\n>> c.car_id = q.car_id AND\n>> z.state != 'AA' AND\n>> z.state != 'AE' AND\n>> z.state != 'AP' AND\n>> z.state = 'WA'\n>> ORDER BY date_time;\n>>\n>> \n>>\n>\n>This wont completely solve your problem, but z.state = 'WA' would seem\n>to be mutually exclusive of the != AA|AE|AP. While it's not much, it is\n>extra overhead there doesn't seem to be any need for...\n>\n>Robert Treat\n> \n>\nThat is an excellent point, unfortunately it doesn't change the query \nplan at all.\n\nFurthermore noticed that in the following query plan it is doing the \nsequential scan on quotes first, and\nthen doing the sequential on zips. IMHO this should be the other way \naround, since the result set for\nzips is considerably smaller especially give that we are using a where \nclause to limit the number of items\nreturned from zips, so it would seem that it would be faster to scan \nzips then join onto quotes, but perhaps\nit needs to do the sequential scan on both regardless.\n\nOf course still there is the holy grail of getting it to actually use \nthe indexes. 
:P\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=57812.71..57822.86 rows=4058 width=80) (actual \ntime=2522.826..2529.237 rows=4581 loops=1)\n Sort Key: q.date_time\n -> Merge Join (cost=57473.20..57569.50 rows=4058 width=80) (actual \ntime=2360.656..2451.987 rows=4581 loops=1)\n Merge Cond: (\"outer\".\"?column7?\" = \"inner\".\"?column5?\")\n -> Sort (cost=56625.79..56635.93 rows=4058 width=62) (actual \ntime=2077.209..2085.095 rows=4735 loops=1)\n Sort Key: (q.car_id)::text\n -> Hash Join (cost=1088.19..56382.58 rows=4058 \nwidth=62) (actual time=86.111..1834.682 rows=10193 loops=1)\n Hash Cond: ((\"outer\".zip)::text = (\"inner\".zip)::text)\n -> Seq Scan on quotes q (cost=0.00..10664.25 \nrows=336525 width=27) (actual time=0.098..658.905 rows=336963 loops=1)\n -> Hash (cost=1086.90..1086.90 rows=516 width=52) \n(actual time=85.798..85.798 rows=0 loops=1)\n -> Seq Scan on zips z (cost=0.00..1086.90 \nrows=516 width=52) (actual time=79.532..84.151 rows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n -> Sort (cost=847.41..870.91 rows=9401 width=37) (actual \ntime=282.896..300.082 rows=11950 loops=1)\n Sort Key: (c.car_id)::text\n -> Seq Scan on cars c (cost=0.00..227.01 rows=9401 \nwidth=37) (actual time=0.102..43.516 rows=9401 loops=1)\n\n\n", "msg_date": "Tue, 02 Dec 2003 09:55:49 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Jared Carr <[email protected]> writes:\n\n> Furthermore noticed that in the following query plan it is doing the\n> sequential scan on quotes first, and then doing the sequential on zips. IMHO\n> this should be the other way around, since the result set for zips is\n> considerably smaller especially give that we are using a where clause to\n> limit the number of items returned from zips, so it would seem that it would\n> be faster to scan zips then join onto quotes, but perhaps it needs to do the\n> sequential scan on both regardless.\n\n>-> Hash Join (cost=1088.19..56382.58 rows=4058 width=62) (actual time=86.111..1834.682 rows=10193 loops=1)\n> Hash Cond: ((\"outer\".zip)::text = (\"inner\".zip)::text)\n> -> Seq Scan on quotes q (cost=0.00..10664.25 rows=336525 width=27) (actual time=0.098..658.905 rows=336963 loops=1)\n> -> Hash (cost=1086.90..1086.90 rows=516 width=52) (actual time=85.798..85.798 rows=0 loops=1)\n> -> Seq Scan on zips z (cost=0.00..1086.90 rows=516 width=52) (actual time=79.532..84.151 rows=718 loops=1)\n> Filter: ((state)::text = 'WA'::text)\n\nYou're misreading it. Hash join is done by reading in one table into a hash\ntable, then reading the other table looking up entries in the hash table. The\nzips are being read into the hash table which is appropriate if it's the\nsmaller table.\n\n\n> Of course still there is the holy grail of getting it to actually use \n> the indexes. :P\n\n> Merge Cond: (\"outer\".\"?column7?\" = \"inner\".\"?column5?\")\n\nWell it looks like you have something strange going on. What data type is\ncar_id in each table? \n\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 13:59:56 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Greg Stark wrote:\n\n>\n>> Merge Cond: (\"outer\".\"?column7?\" = \"inner\".\"?column5?\")\n>> \n>>\n>\n>Well it looks like you have something strange going on. 
What data type is\n>car_id in each table? \n>\n> \n>\ncar_id is a varchar(10) in both tables.\n\n", "msg_date": "Tue, 02 Dec 2003 11:14:19 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Jared Carr <[email protected]> writes:\n\n> Greg Stark wrote:\n> \n> >\n> >> Merge Cond: (\"outer\".\"?column7?\" = \"inner\".\"?column5?\")\n> >>\n> >\n> >Well it looks like you have something strange going on. What data type is\n> > car_id in each table?\n> car_id is a varchar(10) in both tables.\n\nWell for some reason it's being cast to a text to do the merge.\n\nWhat version of postgres is this btw? The analyzes look like 7.4?\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 15:11:02 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Greg Stark wrote:\n\n>Jared Carr <[email protected]> writes:\n>\n> \n>\n>>Greg Stark wrote:\n>>\n>> \n>>\n>>>> Merge Cond: (\"outer\".\"?column7?\" = \"inner\".\"?column5?\")\n>>>>\n>>>> \n>>>>\n>>>Well it looks like you have something strange going on. What data type is\n>>>car_id in each table?\n>>> \n>>>\n>>car_id is a varchar(10) in both tables.\n>> \n>>\n>\n>Well for some reason it's being cast to a text to do the merge.\n>\n>What version of postgres is this btw? The analyzes look like 7.4?\n>\n> \n>\nYes, this is 7.4.\n\n", "msg_date": "Tue, 02 Dec 2003 12:29:18 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "\nJared Carr <[email protected]> writes:\n\n> Greg Stark wrote:\n> \n> > Well it looks like you have something strange going on. What data type is\n> > car_id in each table?\n> >\n> car_id is a varchar(10) in both tables.\n\nHuh. The following shows something strange. It seems joining on two varchars\nno longer works well. Instead the optimizer has to convert both columns to\ntext.\n\nI know some inter-type comparisons were removed a while ago, but I would not\nhave thought that would effect varchar-varchar comparisons. 
I think this is\npretty bad.\n\n\ntest=# create table a (x varchar primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"a_pkey\" for table \"a\"\nCREATE TABLE\ntest=# create table b (x varchar primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"b_pkey\" for table \"b\"\nCREATE TABLE\ntest=# select * from a,b where a.x=b.x;\n x | x \n---+---\n(0 rows)\n\ntest=# explain select * from a,b where a.x=b.x;\n QUERY PLAN \n------------------------------------------------------------------\n Merge Join (cost=139.66..159.67 rows=1001 width=64)\n Merge Cond: (\"outer\".\"?column2?\" = \"inner\".\"?column2?\")\n -> Sort (cost=69.83..72.33 rows=1000 width=32)\n Sort Key: (a.x)::text\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=32)\n -> Sort (cost=69.83..72.33 rows=1000 width=32)\n Sort Key: (b.x)::text\n -> Seq Scan on b (cost=0.00..20.00 rows=1000 width=32)\n(8 rows)\n\ntest=# create table a2 (x text primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"a2_pkey\" for table \"a2\"\nCREATE TABLE\ntest=# create table b2 (x text primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"b2_pkey\" for table \"b2\"\nCREATE TABLE\ntest=# explain select * from a2,b2 where a2.x=b2.x;\n QUERY PLAN \n-------------------------------------------------------------------\n Hash Join (cost=22.50..57.51 rows=1001 width=64)\n Hash Cond: (\"outer\".x = \"inner\".x)\n -> Seq Scan on a2 (cost=0.00..20.00 rows=1000 width=32)\n -> Hash (cost=20.00..20.00 rows=1000 width=32)\n -> Seq Scan on b2 (cost=0.00..20.00 rows=1000 width=32)\n(5 rows)\n\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 17:32:11 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "\nGreg Stark <[email protected]> writes:\n\n> Huh. The following shows something strange. \n\nWorse, with enable_hashjoin off it's even more obvious something's broken:\n\n\ntest=# set enable_hashjoin = off;\nSET\ntest=# explain select * from a,b where a.x=b.x;\n QUERY PLAN \n------------------------------------------------------------------\n Merge Join (cost=139.66..159.67 rows=1001 width=64)\n Merge Cond: (\"outer\".\"?column2?\" = \"inner\".\"?column2?\")\n -> Sort (cost=69.83..72.33 rows=1000 width=32)\n Sort Key: (a.x)::text\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=32)\n -> Sort (cost=69.83..72.33 rows=1000 width=32)\n Sort Key: (b.x)::text\n -> Seq Scan on b (cost=0.00..20.00 rows=1000 width=32)\n(8 rows)\n\ntest=# explain select * from a2,b2 where a2.x=b2.x;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Merge Join (cost=0.00..63.04 rows=1001 width=64)\n Merge Cond: (\"outer\".x = \"inner\".x)\n -> Index Scan using a2_pkey on a2 (cost=0.00..24.00 rows=1000 width=32)\n -> Index Scan using b2_pkey on b2 (cost=0.00..24.00 rows=1000 width=32)\n(4 rows)\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 17:43:34 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Huh. The following shows something strange. It seems joining on two varchars\n> no longer works well. Instead the optimizer has to convert both columns to\n> text.\n\nDefine \"no longer works well\". 
varchar doesn't have its own comparison\noperators anymore, but AFAIK that makes no difference.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Dec 2003 18:50:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Define \"no longer works well\". varchar doesn't have its own comparison\n> operators anymore, but AFAIK that makes no difference.\n\n\nWell it seems to completely bar the use of a straight merge join between two\nindex scans:\n\n\ntest=# set enable_seqscan = off;\nSET\n\ntest=# explain select * from a,b where a.x=b.x;\n QUERY PLAN \n---------------------------------------------------------------------------\n Nested Loop (cost=100000000.00..100002188.86 rows=1001 width=64)\n -> Seq Scan on a (cost=100000000.00..100000020.00 rows=1000 width=32)\n -> Index Scan using b_pkey on b (cost=0.00..2.16 rows=1 width=32)\n Index Cond: ((\"outer\".x)::text = (b.x)::text)\n(4 rows)\n\ntest=# explain select * from a2,b2 where a2.x=b2.x;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Merge Join (cost=0.00..63.04 rows=1001 width=64)\n Merge Cond: (\"outer\".x = \"inner\".x)\n -> Index Scan using a2_pkey on a2 (cost=0.00..24.00 rows=1000 width=32)\n -> Index Scan using b2_pkey on b2 (cost=0.00..24.00 rows=1000 width=32)\n(4 rows)\n\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 20:33:15 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Define \"no longer works well\".\n\n> Well it seems to completely bar the use of a straight merge join between two\n> index scans:\n\nHmmm ... [squints] ... it's not supposed to do that ... [digs] ... yeah,\nthere's something busted here. Will get back to you ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Dec 2003 22:53:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner " }, { "msg_contents": "> Hmmm ... [squints] ... 
it's not supposed to do that ...\n\nThe attached patch seems to make it better.\n\n\t\t\tregards, tom lane\n\n\nIndex: src/backend/optimizer/path/costsize.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/optimizer/path/costsize.c,v\nretrieving revision 1.115\ndiff -c -r1.115 costsize.c\n*** src/backend/optimizer/path/costsize.c\t5 Oct 2003 22:44:25 -0000\t1.115\n--- src/backend/optimizer/path/costsize.c\t3 Dec 2003 17:40:58 -0000\n***************\n*** 1322,1327 ****\n--- 1322,1331 ----\n \tfloat4\t *numbers;\n \tint\t\t\tnnumbers;\n \n+ \t/* Ignore any binary-compatible relabeling */\n+ \tif (var && IsA(var, RelabelType))\n+ \t\tvar = (Var *) ((RelabelType *) var)->arg;\n+ \n \t/*\n \t * Lookup info about var's relation and attribute; if none available,\n \t * return default estimate.\nIndex: src/backend/optimizer/path/pathkeys.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/optimizer/path/pathkeys.c,v\nretrieving revision 1.53\ndiff -c -r1.53 pathkeys.c\n*** src/backend/optimizer/path/pathkeys.c\t4 Aug 2003 02:40:00 -0000\t1.53\n--- src/backend/optimizer/path/pathkeys.c\t3 Dec 2003 17:40:58 -0000\n***************\n*** 25,36 ****\n #include \"optimizer/tlist.h\"\n #include \"optimizer/var.h\"\n #include \"parser/parsetree.h\"\n #include \"parser/parse_func.h\"\n #include \"utils/lsyscache.h\"\n #include \"utils/memutils.h\"\n \n \n! static PathKeyItem *makePathKeyItem(Node *key, Oid sortop);\n static List *make_canonical_pathkey(Query *root, PathKeyItem *item);\n static Var *find_indexkey_var(Query *root, RelOptInfo *rel,\n \t\t\t\t AttrNumber varattno);\n--- 25,37 ----\n #include \"optimizer/tlist.h\"\n #include \"optimizer/var.h\"\n #include \"parser/parsetree.h\"\n+ #include \"parser/parse_expr.h\"\n #include \"parser/parse_func.h\"\n #include \"utils/lsyscache.h\"\n #include \"utils/memutils.h\"\n \n \n! static PathKeyItem *makePathKeyItem(Node *key, Oid sortop, bool checkType);\n static List *make_canonical_pathkey(Query *root, PathKeyItem *item);\n static Var *find_indexkey_var(Query *root, RelOptInfo *rel,\n \t\t\t\t AttrNumber varattno);\n***************\n*** 41,50 ****\n *\t\tcreate a PathKeyItem node\n */\n static PathKeyItem *\n! makePathKeyItem(Node *key, Oid sortop)\n {\n \tPathKeyItem *item = makeNode(PathKeyItem);\n \n \titem->key = key;\n \titem->sortop = sortop;\n \treturn item;\n--- 42,70 ----\n *\t\tcreate a PathKeyItem node\n */\n static PathKeyItem *\n! makePathKeyItem(Node *key, Oid sortop, bool checkType)\n {\n \tPathKeyItem *item = makeNode(PathKeyItem);\n \n+ \t/*\n+ \t * Some callers pass expressions that are not necessarily of the same\n+ \t * type as the sort operator expects as input (for example when dealing\n+ \t * with an index that uses binary-compatible operators). 
We must relabel\n+ \t * these with the correct type so that the key expressions will be seen\n+ \t * as equal() to expressions that have been correctly labeled.\n+ \t */\n+ \tif (checkType)\n+ \t{\n+ \t\tOid\t\t\tlefttype,\n+ \t\t\t\t\trighttype;\n+ \n+ \t\top_input_types(sortop, &lefttype, &righttype);\n+ \t\tif (exprType(key) != lefttype)\n+ \t\t\tkey = (Node *) makeRelabelType((Expr *) key,\n+ \t\t\t\t\t\t\t\t\t\t lefttype, -1,\n+ \t\t\t\t\t\t\t\t\t\t COERCE_DONTCARE);\n+ \t}\n+ \n \titem->key = key;\n \titem->sortop = sortop;\n \treturn item;\n***************\n*** 70,78 ****\n {\n \tExpr\t *clause = restrictinfo->clause;\n \tPathKeyItem *item1 = makePathKeyItem(get_leftop(clause),\n! \t\t\t\t\t\t\t\t\t\t restrictinfo->left_sortop);\n \tPathKeyItem *item2 = makePathKeyItem(get_rightop(clause),\n! \t\t\t\t\t\t\t\t\t\t restrictinfo->right_sortop);\n \tList\t *newset,\n \t\t\t *cursetlink;\n \n--- 90,100 ----\n {\n \tExpr\t *clause = restrictinfo->clause;\n \tPathKeyItem *item1 = makePathKeyItem(get_leftop(clause),\n! \t\t\t\t\t\t\t\t\t\t restrictinfo->left_sortop,\n! \t\t\t\t\t\t\t\t\t\t false);\n \tPathKeyItem *item2 = makePathKeyItem(get_rightop(clause),\n! \t\t\t\t\t\t\t\t\t\t restrictinfo->right_sortop,\n! \t\t\t\t\t\t\t\t\t\t false);\n \tList\t *newset,\n \t\t\t *cursetlink;\n \n***************\n*** 668,674 ****\n \t\t}\n \n \t\t/* OK, make a sublist for this sort key */\n! \t\titem = makePathKeyItem(indexkey, sortop);\n \t\tcpathkey = make_canonical_pathkey(root, item);\n \n \t\t/*\n--- 690,696 ----\n \t\t}\n \n \t\t/* OK, make a sublist for this sort key */\n! \t\titem = makePathKeyItem(indexkey, sortop, true);\n \t\tcpathkey = make_canonical_pathkey(root, item);\n \n \t\t/*\n***************\n*** 785,791 ****\n \t\t\t\t\t\t\t\t\t\ttle->resdom->restypmod,\n \t\t\t\t\t\t\t\t\t\t0);\n \t\t\t\t\touter_item = makePathKeyItem((Node *) outer_var,\n! \t\t\t\t\t\t\t\t\t\t\t\t sub_item->sortop);\n \t\t\t\t\t/* score = # of mergejoin peers */\n \t\t\t\t\tscore = count_canonical_peers(root, outer_item);\n \t\t\t\t\t/* +1 if it matches the proper query_pathkeys item */\n--- 807,814 ----\n \t\t\t\t\t\t\t\t\t\ttle->resdom->restypmod,\n \t\t\t\t\t\t\t\t\t\t0);\n \t\t\t\t\touter_item = makePathKeyItem((Node *) outer_var,\n! \t\t\t\t\t\t\t\t\t\t\t\t sub_item->sortop,\n! \t\t\t\t\t\t\t\t\t\t\t\t true);\n \t\t\t\t\t/* score = # of mergejoin peers */\n \t\t\t\t\tscore = count_canonical_peers(root, outer_item);\n \t\t\t\t\t/* +1 if it matches the proper query_pathkeys item */\n***************\n*** 893,899 ****\n \t\tPathKeyItem *pathkey;\n \n \t\tsortkey = get_sortgroupclause_expr(sortcl, tlist);\n! \t\tpathkey = makePathKeyItem(sortkey, sortcl->sortop);\n \n \t\t/*\n \t\t * The pathkey becomes a one-element sublist, for now;\n--- 916,922 ----\n \t\tPathKeyItem *pathkey;\n \n \t\tsortkey = get_sortgroupclause_expr(sortcl, tlist);\n! \t\tpathkey = makePathKeyItem(sortkey, sortcl->sortop, true);\n \n \t\t/*\n \t\t * The pathkey becomes a one-element sublist, for now;\n***************\n*** 937,943 ****\n \t{\n \t\toldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(restrictinfo));\n \t\tkey = get_leftop(restrictinfo->clause);\n! \t\titem = makePathKeyItem(key, restrictinfo->left_sortop);\n \t\trestrictinfo->left_pathkey = make_canonical_pathkey(root, item);\n \t\tMemoryContextSwitchTo(oldcontext);\n \t}\n--- 960,966 ----\n \t{\n \t\toldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(restrictinfo));\n \t\tkey = get_leftop(restrictinfo->clause);\n! 
\t\titem = makePathKeyItem(key, restrictinfo->left_sortop, false);\n \t\trestrictinfo->left_pathkey = make_canonical_pathkey(root, item);\n \t\tMemoryContextSwitchTo(oldcontext);\n \t}\n***************\n*** 945,951 ****\n \t{\n \t\toldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(restrictinfo));\n \t\tkey = get_rightop(restrictinfo->clause);\n! \t\titem = makePathKeyItem(key, restrictinfo->right_sortop);\n \t\trestrictinfo->right_pathkey = make_canonical_pathkey(root, item);\n \t\tMemoryContextSwitchTo(oldcontext);\n \t}\n--- 968,974 ----\n \t{\n \t\toldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(restrictinfo));\n \t\tkey = get_rightop(restrictinfo->clause);\n! \t\titem = makePathKeyItem(key, restrictinfo->right_sortop, false);\n \t\trestrictinfo->right_pathkey = make_canonical_pathkey(root, item);\n \t\tMemoryContextSwitchTo(oldcontext);\n \t}\nIndex: src/backend/utils/cache/lsyscache.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/cache/lsyscache.c,v\nretrieving revision 1.108\ndiff -c -r1.108 lsyscache.c\n*** src/backend/utils/cache/lsyscache.c\t4 Oct 2003 18:22:59 -0000\t1.108\n--- src/backend/utils/cache/lsyscache.c\t3 Dec 2003 17:40:58 -0000\n***************\n*** 465,470 ****\n--- 465,493 ----\n }\n \n /*\n+ * op_input_types\n+ *\n+ *\t\tReturns the left and right input datatypes for an operator\n+ *\t\t(InvalidOid if not relevant).\n+ */\n+ void\n+ op_input_types(Oid opno, Oid *lefttype, Oid *righttype)\n+ {\n+ \tHeapTuple\ttp;\n+ \tForm_pg_operator optup;\n+ \n+ \ttp = SearchSysCache(OPEROID,\n+ \t\t\t\t\t\tObjectIdGetDatum(opno),\n+ \t\t\t\t\t\t0, 0, 0);\n+ \tif (!HeapTupleIsValid(tp))\t/* shouldn't happen */\n+ \t\telog(ERROR, \"cache lookup failed for operator %u\", opno);\n+ \toptup = (Form_pg_operator) GETSTRUCT(tp);\n+ \t*lefttype = optup->oprleft;\n+ \t*righttype = optup->oprright;\n+ \tReleaseSysCache(tp);\n+ }\n+ \n+ /*\n * op_mergejoinable\n *\n *\t\tReturns the left and right sort operators corresponding to a\nIndex: src/include/utils/lsyscache.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/utils/lsyscache.h,v\nretrieving revision 1.82\ndiff -c -r1.82 lsyscache.h\n*** src/include/utils/lsyscache.h\t4 Oct 2003 18:22:59 -0000\t1.82\n--- src/include/utils/lsyscache.h\t3 Dec 2003 17:41:00 -0000\n***************\n*** 40,45 ****\n--- 40,46 ----\n extern bool opclass_is_hash(Oid opclass);\n extern RegProcedure get_opcode(Oid opno);\n extern char *get_opname(Oid opno);\n+ extern void op_input_types(Oid opno, Oid *lefttype, Oid *righttype);\n extern bool op_mergejoinable(Oid opno, Oid *leftOp, Oid *rightOp);\n extern void op_mergejoin_crossops(Oid opno, Oid *ltop, Oid *gtop,\n \t\t\t\t\t RegProcedure *ltproc, RegProcedure *gtproc);\n", "msg_date": "Wed, 03 Dec 2003 12:49:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner " }, { "msg_contents": "Tom Lane wrote:\n\n>>Hmmm ... [squints] ... it's not supposed to do that ...\n>> \n>>\n>\n>The attached patch seems to make it better.\n>\n> \n>\nThe patch definitely makes things more consistent...unfortunately it is \nmore\nconsistent toward the slower execution times. Of course I am looking at \nthis\nsimply from a straight performance standpoint and not a viewpoint of \nwhat *should*\nbe happening. 
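(In each case below I just toggle the relevant planner setting for my session and re-run the same query under EXPLAIN ANALYZE; the pattern is simply:\n\nSET enable_seqscan = false;\nEXPLAIN ANALYZE SELECT ... ;  -- the same join query as before\nRESET enable_seqscan;\n\nand likewise for enable_mergejoin and enable_hashjoin.)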
At any rate here are the query plans with the various \nsettings.\n\nDefault Settings:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=15290.20..15300.34 rows=4058 width=80) (actual \ntime=2944.650..2951.292 rows=4672 loops=1)\n Sort Key: q.date_time\n -> Hash Join (cost=13529.79..15046.99 rows=4058 width=80) (actual \ntime=2678.033..2873.475 rows=4672 loops=1)\n Hash Cond: ((\"outer\".car_id)::text = (\"inner\".car_id)::text)\n -> Seq Scan on cars c (cost=0.00..227.01 rows=9401 width=37) \n(actual time=19.887..50.971 rows=9401 loops=1)\n -> Hash (cost=13475.65..13475.65 rows=4058 width=62) (actual \ntime=2643.377..2643.377 rows=0 loops=1)\n -> Hash Join (cost=1088.19..13475.65 rows=4058 \nwidth=62) (actual time=86.739..2497.558 rows=10284 loops=1)\n Hash Cond: ((\"outer\".zip)::text = (\"inner\".zip)::text)\n -> Seq Scan on quotes q (cost=0.00..10664.25 \nrows=336525 width=27) (actual time=0.223..1308.561 rows=340694 loops=1)\n -> Hash (cost=1086.90..1086.90 rows=516 width=52) \n(actual time=84.329..84.329 rows=0 loops=1)\n -> Seq Scan on zips z (cost=0.00..1086.90 \nrows=516 width=52) (actual time=78.363..82.901 rows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n Total runtime: 2955.366 ms\n\nSET enable_seqscan=false;\n\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=103557.82..103567.97 rows=4058 width=80) (actual \ntime=1015.122..1021.750 rows=4673 loops=1)\n Sort Key: q.date_time\n -> Merge Join (cost=102734.94..103314.61 rows=4058 width=80) \n(actual time=802.908..941.520 rows=4673 loops=1)\n Merge Cond: (\"outer\".\"?column7?\" = (\"inner\".car_id)::text)\n -> Sort (cost=102734.94..102745.08 rows=4058 width=62) \n(actual time=802.112..812.755 rows=4827 loops=1)\n Sort Key: (q.car_id)::text\n -> Nested Loop (cost=0.00..102491.73 rows=4058 \nwidth=62) (actual time=148.535..555.653 rows=10285 loops=1)\n -> Index Scan using zip_zips_index on zips z \n(cost=0.00..1272.69 rows=516 width=52) (actual time=148.243..155.577 \nrows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n -> Index Scan using zip_quotes_index on quotes q \n(cost=0.00..195.55 rows=48 width=27) (actual time=0.042..0.454 rows=14 \nloops=718)\n Index Cond: ((\"outer\".zip)::text = (q.zip)::text)\n -> Index Scan using cars_car_id_btree_index on cars c \n(cost=0.00..506.87 rows=9401 width=37) (actual time=0.220..46.910 \nrows=12019 loops=1)\n Total runtime: 1027.339 ms\n\nThere is still a 3x decrease in execution time here, but it is overall \nslower than before the\npatch was applied.\n\nSET enable_mergejoin = false; AND SET enable_seqscan = false;\n\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=104586.15..104596.29 rows=4058 width=80) (actual \ntime=887.719..894.358 rows=4673 loops=1)\n Sort Key: q.date_time\n -> Hash Join (cost=102545.88..104342.94 rows=4058 width=80) (actual \ntime=593.710..815.541 rows=4673 loops=1)\n Hash Cond: ((\"outer\".car_id)::text = (\"inner\".car_id)::text)\n -> Index Scan using cars_car_id_btree_index on cars c \n(cost=0.00..506.87 rows=9401 width=37) (actual time=0.182..37.306 \nrows=9401 loops=1)\n -> Hash (cost=102491.73..102491.73 rows=4058 width=62) \n(actual time=593.040..593.040 rows=0 
loops=1)\n -> Nested Loop (cost=0.00..102491.73 rows=4058 \nwidth=62) (actual time=146.647..551.975 rows=10285 loops=1)\n -> Index Scan using zip_zips_index on zips z \n(cost=0.00..1272.69 rows=516 width=52) (actual time=146.378..153.767 \nrows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n -> Index Scan using zip_quotes_index on quotes q \n(cost=0.00..195.55 rows=48 width=27) (actual time=0.044..0.464 rows=14 \nloops=718)\n Index Cond: ((\"outer\".zip)::text = (q.zip)::text)\n Total runtime: 898.438 ms\n\nAgain a decrease in execution time.\n\nOn the other hand:\nSET enable_hasdjoin=false;\n\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=62829.86..62840.00 rows=4058 width=80) (actual \ntime=11368.025..11374.629 rows=4673 loops=1)\n Sort Key: q.date_time\n -> Merge Join (cost=62006.97..62586.65 rows=4058 width=80) (actual \ntime=11188.371..11295.156 rows=4673 loops=1)\n Merge Cond: ((\"outer\".car_id)::text = \"inner\".\"?column7?\")\n -> Index Scan using cars_car_id_btree_index on cars c \n(cost=0.00..506.87 rows=9401 width=37) (actual time=0.167..37.728 \nrows=9401 loops=1)\n -> Sort (cost=62006.97..62017.12 rows=4058 width=62) (actual \ntime=11187.581..11196.343 rows=4827 loops=1)\n Sort Key: (q.car_id)::text\n -> Merge Join (cost=60037.99..61763.76 rows=4058 \nwidth=62) (actual time=10893.572..10975.658 rows=10285 loops=1)\n Merge Cond: (\"outer\".\"?column6?\" = \"inner\".\"?column4?\")\n -> Sort (cost=1110.15..1111.44 rows=516 width=52) \n(actual time=86.679..87.166 rows=718 loops=1)\n Sort Key: (z.zip)::text\n -> Seq Scan on zips z (cost=0.00..1086.90 \nrows=516 width=52) (actual time=79.023..83.921 rows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n -> Sort (cost=58927.84..59769.15 rows=336525 \nwidth=27) (actual time=9848.479..10319.275 rows=340426 loops=1)\n Sort Key: (q.zip)::text\n -> Seq Scan on quotes q \n(cost=0.00..10664.25 rows=336525 width=27) (actual time=0.227..2171.917 \nrows=340740 loops=1)\n Total runtime: 11408.120 ms\n\nWhich really is not that surprising.\n\nAnd Finally:\nset enable_hashjoin=false; enable_seqscan=false;\n\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=103557.82..103567.97 rows=4058 width=80) (actual \ntime=1206.168..1212.880 rows=4673 loops=1)\n Sort Key: q.date_time\n -> Merge Join (cost=102734.94..103314.61 rows=4058 width=80) \n(actual time=809.448..949.110 rows=4673 loops=1)\n Merge Cond: (\"outer\".\"?column7?\" = (\"inner\".car_id)::text)\n -> Sort (cost=102734.94..102745.08 rows=4058 width=62) \n(actual time=808.660..819.317 rows=4827 loops=1)\n Sort Key: (q.car_id)::text\n -> Nested Loop (cost=0.00..102491.73 rows=4058 \nwidth=62) (actual time=151.457..559.886 rows=10285 loops=1)\n -> Index Scan using zip_zips_index on zips z \n(cost=0.00..1272.69 rows=516 width=52) (actual time=151.179..158.375 \nrows=718 loops=1)\n Filter: ((state)::text = 'WA'::text)\n -> Index Scan using zip_quotes_index on quotes q \n(cost=0.00..195.55 rows=48 width=27) (actual time=0.042..0.455 rows=14 \nloops=718)\n Index Cond: ((\"outer\".zip)::text = (q.zip)::text)\n -> Index Scan using cars_car_id_btree_index on cars c \n(cost=0.00..506.87 rows=9401 width=37) (actual time=0.213..47.307 \nrows=12019 loops=1)\n Total runtime: 1218.459 ms\n\n\nAnyway, thanks for the attention to this 
issue. And I hope that this \nhelps some.\n\nJared\n\n\n\n", "msg_date": "Wed, 03 Dec 2003 10:27:20 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Jared Carr <[email protected]> writes:\n\n> The patch definitely makes things more consistent...unfortunately it is more\n> consistent toward the slower execution times. Of course I am looking at this\n> simply from a straight performance standpoint and not a viewpoint of what\n> *should* be happening. At any rate here are the query plans with the various\n> settings.\n\nThe optimizer seems to be at least considering reasonable plans now. It seems\nfrom the estimates that you need to rerun analyze. You might try \"vacuum full\nanalyze\" to be sure.\n\nAlso, you might try raising effective_cache_size and/or lowering\nrandom_page_size (it looks like something around 2 might help).\n\n-- \ngreg\n\n", "msg_date": "03 Dec 2003 14:32:50 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > Tom Lane <[email protected]> writes:\n> >> Define \"no longer works well\".\n> \n> > Well it seems to completely bar the use of a straight merge join between two\n> > index scans:\n> \n> Hmmm ... [squints] ... it's not supposed to do that ... [digs] ... yeah,\n> there's something busted here. Will get back to you ...\n\nLOL, but I am not sure why. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 3 Dec 2003 18:17:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Greg Stark wrote:\n\n>Jared Carr <[email protected]> writes:\n>\n> \n>\n>>The patch definitely makes things more consistent...unfortunately it is more\n>>consistent toward the slower execution times. Of course I am looking at this\n>>simply from a straight performance standpoint and not a viewpoint of what\n>>*should* be happening. At any rate here are the query plans with the various\n>>settings.\n>> \n>>\n>\n>The optimizer seems to be at least considering reasonable plans now. It seems\n>from the estimates that you need to rerun analyze. You might try \"vacuum full\n>analyze\" to be sure.\n>\n>Also, you might try raising effective_cache_size and/or lowering\n>random_page_size (it looks like something around 2 might help).\n>\n> \n>\nYep, I had forgotten to run vacuum since I had patched it :P. The \noverall performance is definitely better,\nI will go ahead and tweak the server settings and see what I can get. \nThanks again for all the help.\n\n", "msg_date": "Wed, 03 Dec 2003 15:21:50 -0800", "msg_from": "Jared Carr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A question on the query planner" }, { "msg_contents": "Tom Lane wrote:\n\n>>Hmmm ... [squints] ... it's not supposed to do that ...\n> \n> \n> The attached patch seems to make it better.\n\nI guess is too late for 7.3.5.\n\n:-(\n\nAny chance for 7.4.1 ?\n\n\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Thu, 04 Dec 2003 00:54:59 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A question on the query planner" } ]
[ { "msg_contents": "Hello,\nI am wondering if it is possible to use several\nmachines as a cluster to boost the slow queries. Is\nthat possible? Has anybody tried that before? \n\nInitially, I was thinking of using dual CPUs instead\nof one, but that approach will not work because pgsql is not\nmulti-threaded.\n\nAny suggestions are welcome and appreciated.\n\nRegards,\nWilliam\n\n", "msg_date": "Mon, 01 Dec 2003 22:27:51 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "Is clustering possible to enhance the performance?" }, { "msg_contents": "LIANHE SHAO wrote:\n> Hello,\n> I am wondering if it is possible to use several\n> machines as a cluster to boost the slow queries. Is\n> that possible? Has anybody tried that before? \n> \n> Initially, I was thinking of using dual CPUs instead\n> of one, but that approach will not work because pgsql is not\n> multi-threaded.\n\nDual CPUs allow multiple backends to use different CPUs, but a single\nsession can't use more than one CPU.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 1 Dec 2003 17:37:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is clustering possible to enhance the performance?" } ]
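The answer above has a practical corollary: because each backend is a separate process, extra CPUs only help when the work arrives over several connections. A purely illustrative sketch (the table name and the modulo-on-id split are invented for the example) is to open two sessions and give each one half of the rows:

    -- session 1
    SELECT count(*) FROM big_table WHERE id % 2 = 0;
    -- session 2, run concurrently from a second connection
    SELECT count(*) FROM big_table WHERE id % 2 = 1;

Each connection gets its own backend, so the two halves can be scheduled on different CPUs.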
[ { "msg_contents": "Folks:\n\nI´m running a query which is designed to generate a foreign key for a \ntable of approx. 10 million records (I've mentioned this in an earlier \nposting). The table is called \"indethom\", and each row contains a \nsingle word from the works of St. Thomas Aquinas, along with \ngrammatical data about the word form, and (most importantly for my \ncurrent problem) a set of columns identifying the particular \nwork/section/paragraph that the word appears in.\n\nThis database is completely non-normalized, and I'm working on \nperforming some basic normalization, beginning with creating a table \ncalled \"s2.sectiones\" which (naturally) contains a complete listing of \nall of the sections of all the works of St. Thomas. I will then \neliminate this information from the original \"indethom\" table, \nreplacing it with the foreign key I am currently generating.\n\n** My question has to do with whether or not I am getting maximal speed \nout of PostgreSQL, or whether I need to perform further optimizations. \nI am currently getting about 200,000 updates per hour, and updating the \nentire 10 million rows thus requires 50 hours, which seems a bit much.\n\nHere's the query I am running:\nupdate indethom\n\tset query_counter = nextval('s2.query_counter_seq'), -- Just \nfor keeping track of how fast the query is running\n\tsectref = (select clavis from s2.sectiones where\n\t\ts2.sectiones.nomeoper = indethom.nomeoper\n\t\tand s2.sectiones.refere1a = indethom.refere1a and \ns2.sectiones.refere1b = indethom.refere1b\n\t\tand s2.sectiones.refere2a = indethom.refere2a and \ns2.sectiones.refere2b = indethom.refere2b\n\t\tand s2.sectiones.refere3a = indethom.refere3a and \ns2.sectiones.refere3b = indethom.refere3b\n\t\tand s2.sectiones.refere4a = indethom.refere4a and \ns2.sectiones.refere4b = indethom.refere4b);\n\nHere´s the query plan:\n QUERY PLAN\n------------------------------------------------------------------------ \n-------------\n Seq Scan on indethom (cost=0.00..1310352.72 rows=10631972 width=212)\n SubPlan\n -> Index Scan using sectiones_ndx on sectiones (cost=0.00..6.03 \nrows=1 width=4)\n Index Cond: ((nomeoper = $0) AND (refere1a = $1) AND \n(refere1b = $2) AND (refere2a = $3) AND (refere2b = $4) AND (refere3a = \n$5) AND (refere3b = $6) AND (refere4a = $7) AND (refere4b = $8))\n(4 rows)\n\nNote: I have just performed a VACUUM ANALYZE on the indethom table, as \nsuggested by this listserve.\n\nHere's the structure of the s2.sectiones table:\nit=> \\d s2.sectiones\n Table \"s2.sectiones\"\n Column | Type | Modifiers\n----------+--------------+-----------\n nomeoper | character(3) |\n refere1a | character(2) |\n refere1b | character(2) |\n refere2a | character(2) |\n refere2b | character(2) |\n refere3a | character(2) |\n refere3b | character(2) |\n refere4a | character(2) |\n refere4b | character(2) |\n clavis | integer |\nIndexes: sectiones_ndx btree (nomeoper, refere1a, refere1b, refere2a, \nrefere2b, refere3a, refere3b, refere4a, refere4b)\n\nFinally, here is the structure of indethom (some non-relevant columns \nnot shown):\nit=> \\d indethom\n Table \"public.indethom\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n numeoper | smallint | not null\n nomeoper | character(3) | not null\n editcrit | character(1) |\n refere1a | character(2) |\n refere1b | character(2) |\n refere2a | character(2) |\n refere2b | character(2) |\n refere3a | character(2) |\n refere3b | character(2) |\n refere4a | character(2) |\n refere4b | 
character(2) |\n refere5a | character(2) | not null\n refere5b | smallint | not null\n referen6 | smallint | not null\n ... several columns skipped ...\n verbum | character varying(22) | not null\n ... other columns skipped ...\n poslinop | integer | not null\n posverli | smallint | not null\n posverop | integer | not null\n clavis | integer | not null\n articref | integer |\n sectref | integer |\n query_counter | integer |\nIndexes: indethom_pkey primary key btree (clavis),\n indethom_articulus_ndx btree (nomeoper, refere1a, refere1b, \nrefere2a, refere2b, refere3a, refere3b),\n indethom_sectio_ndx btree (nomeoper, refere1a, refere1b, \nrefere2a, refere2b, refere3a, refere3b, refere4a, refere4b),\n verbum_ndx btree (verbum)\n\nThanks for your assistance!\n-Erik Norvelle", "msg_date": "Tue, 2 Dec 2003 16:53:16 +0100", "msg_from": "Erik Norvelle <[email protected]>", "msg_from_op": true, "msg_subject": "Update performance ... is 200,\n\t000 updates per hour what I should expect?" }, { "msg_contents": "\nOn Tue, 2 Dec 2003, Erik Norvelle wrote:\n\n> ** My question has to do with whether or not I am getting maximal speed\n> out of PostgreSQL, or whether I need to perform further optimizations.\n> I am currently getting about 200,000 updates per hour, and updating the\n> entire 10 million rows thus requires 50 hours, which seems a bit much.\n\nWell, it doesn't entirely surprise me much given the presumably 10 million\niterations of the index scan that it's doing. Explain analyze output (even\nover a subset of the indethom table by adding a where clause) would\nprobably help to get better info.\n\nI'd suggest seeing if something like:\nupdate indethom set query_counter=...,sectref=s.clavis\n FROM s2.sectiones s where\n s2.sectiones.nomeoper = indethom.nomeoper and ...;\ntries a join that might give a better plan.\n\n\n> Here's the query I am running:\n> update indethom\n> \tset query_counter = nextval('s2.query_counter_seq'), -- Just\n> for keeping track of how fast the query is running\n> \tsectref = (select clavis from s2.sectiones where\n> \t\ts2.sectiones.nomeoper = indethom.nomeoper\n> \t\tand s2.sectiones.refere1a = indethom.refere1a and\n> s2.sectiones.refere1b = indethom.refere1b\n> \t\tand s2.sectiones.refere2a = indethom.refere2a and\n> s2.sectiones.refere2b = indethom.refere2b\n> \t\tand s2.sectiones.refere3a = indethom.refere3a and\n> s2.sectiones.refere3b = indethom.refere3b\n> \t\tand s2.sectiones.refere4a = indethom.refere4a and\n> s2.sectiones.refere4b = indethom.refere4b);\n>\n> Here�s the query plan:\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> -------------\n> Seq Scan on indethom (cost=0.00..1310352.72 rows=10631972 width=212)\n> SubPlan\n> -> Index Scan using sectiones_ndx on sectiones (cost=0.00..6.03\n> rows=1 width=4)\n> Index Cond: ((nomeoper = $0) AND (refere1a = $1) AND\n> (refere1b = $2) AND (refere2a = $3) AND (refere2b = $4) AND (refere3a =\n> $5) AND (refere3b = $6) AND (refere4a = $7) AND (refere4b = $8))\n> (4 rows)\n", "msg_date": "Tue, 2 Dec 2003 08:29:15 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update performance ... 
is 200,000 updates per hour" }, { "msg_contents": "Erik Norvelle <[email protected]> writes:\n> update indethom\n> \tset query_counter = nextval('s2.query_counter_seq'), -- Just\n> for keeping track of how fast the query is running\n> \tsectref = (select clavis from s2.sectiones where\n> \t\ts2.sectiones.nomeoper = indethom.nomeoper\n> \t\tand s2.sectiones.refere1a = indethom.refere1a and\n> s2.sectiones.refere1b = indethom.refere1b\n> \t\tand s2.sectiones.refere2a = indethom.refere2a and\n> s2.sectiones.refere2b = indethom.refere2b\n> \t\tand s2.sectiones.refere3a = indethom.refere3a and\n> s2.sectiones.refere3b = indethom.refere3b\n> \t\tand s2.sectiones.refere4a = indethom.refere4a and\n> s2.sectiones.refere4b = indethom.refere4b);\n\nThis is effectively forcing a nestloop-with-inner-indexscan join. You\nmight be better off with\n\nupdate indethom\n\tset query_counter = nextval('s2.query_counter_seq'),\n\tsectref = sectiones.clavis\nfrom s2.sectiones\nwhere\n\t\ts2.sectiones.nomeoper = indethom.nomeoper\n\t\tand s2.sectiones.refere1a = indethom.refere1a and \ns2.sectiones.refere1b = indethom.refere1b\n\t\tand s2.sectiones.refere2a = indethom.refere2a and \ns2.sectiones.refere2b = indethom.refere2b\n\t\tand s2.sectiones.refere3a = indethom.refere3a and \ns2.sectiones.refere3b = indethom.refere3b\n\t\tand s2.sectiones.refere4a = indethom.refere4a and \ns2.sectiones.refere4b = indethom.refere4b;\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Dec 2003 11:32:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update performance ... is 200,\n\t000 updates per hour what I should expect?" }, { "msg_contents": "Erik Norvelle <[email protected]> writes:\n\n> Here's the query I am running:\n> update indethom\n> \tset query_counter = nextval('s2.query_counter_seq'), -- Just for keeping track of how fast the query is running\n> \tsectref = (select clavis from s2.sectiones where\n> \t\ts2.sectiones.nomeoper = indethom.nomeoper\n> \t\tand s2.sectiones.refere1a = indethom.refere1a and s2.sectiones.refere1b = indethom.refere1b\n> \t\tand s2.sectiones.refere2a = indethom.refere2a and s2.sectiones.refere2b = indethom.refere2b\n> \t\tand s2.sectiones.refere3a = indethom.refere3a and s2.sectiones.refere3b = indethom.refere3b\n> \t\tand s2.sectiones.refere4a = indethom.refere4a and s2.sectiones.refere4b = indethom.refere4b);\n> \n> Here's the query plan:\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Seq Scan on indethom (cost=0.00..1310352.72 rows=10631972 width=212)\n> SubPlan\n> -> Index Scan using sectiones_ndx on sectiones (cost=0.00..6.03 rows=1 width=4)\n> Index Cond: ((nomeoper = $0) AND (refere1a = $1) AND (refere1b = $2) AND (refere2a = $3) AND (refere2b = $4) AND (refere3a = $5) AND (refere3b = $6) AND (refere4a = $7) AND (refere4b = $8))\n> (4 rows)\n\nFirstly, you might try running \"vacuum full\" on both tables.
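For instance, something along these lines (just a sketch of the two commands):\n\n  VACUUM FULL ANALYZE indethom;\n  VACUUM FULL ANALYZE s2.sectiones;\n\n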
If there are tons\nof extra dead records that are left-over they could be slowing down the\nupdate.\n\nThis isn't the fastest possible plan but it's pretty good.\n\nYou might be able to get it somewhat faster using the non-standard \"from\"\nclause on the update statement.\n\nupdate indethom\n set sectref = clavis\n from sectiones\n where sectiones.nomeoper = indethom.nomeoper\n and sectiones.refere1a = indethom.refere1a\n and sectiones.refere1b = indethom.refere1b\n and sectiones.refere2a = indethom.refere2a\n and sectiones.refere2b = indethom.refere2b\n and sectiones.refere3a = indethom.refere3a\n and sectiones.refere3b = indethom.refere3b\n and sectiones.refere4a = indethom.refere4a\n and sectiones.refere4b = indethom.refere4b\n\nThis might be able to use a merge join which will take longer to get started\nbecause it has to sort both tables, but might finish faster.\n\nYou might also try just paring the index down to just the two or three most\nuseful columns. Is it common that something matches refere1a and refere1b but\ndoesn't match the remaining? A 8-column index is a lot of overhead. I'm not\nsure how much that effects lookup times but it might be substantial.\n\n\n-- \ngreg\n\n", "msg_date": "02 Dec 2003 11:40:51 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update performance ... is 200,\n\t000 updates per hour what I should expect?" } ]
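One way to act on the index-narrowing suggestion above is to rebuild the nine-column index on s2.sectiones keeping only its leading, most selective columns. Which columns are actually selective enough has to be checked against the data, so the statements below are only a sketch:

    DROP INDEX s2.sectiones_ndx;
    CREATE INDEX sectiones_ndx ON s2.sectiones (nomeoper, refere1a, refere1b);
    ANALYZE s2.sectiones;

A narrower index is smaller, which can make each of the millions of inner index probes somewhat cheaper.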
[ { "msg_contents": "xfs_freeze is a userspace program included in the xfsprogs rpm. It does run\non Redhat 7.3 (the SGI supplied kernels and userspace for RedHat 7.3 are\nsomewhat dated; I'd suggest patching the 2.4.21 kernel with XFS 1.3.1\npatches and upgrading the userspace programs from the SRPMS). Post to the\nlinux-xfs mailing list if you need further guidance (lots of people seem to\nstill run XFS on Redhat 7.3).\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Thursday, October 30, 2003 12:28 PM\n> To: [email protected]; [email protected]\n> Cc: [email protected]; [email protected]; [email protected];\n> [email protected]; [email protected]\n> Subject: Re: [linux-lvm] RE: [ADMIN] [PERFORM] backup/restore \n> - another\n> \n> \n> Does xfs_freeze work on red hat 7.3?\n> \n> Cynthia Leon\n> \n> -----Original Message-----\n> From: Murthy Kambhampaty [mailto:[email protected]]\n> Sent: Friday, October 17, 2003 11:34 AM\n> To: 'Tom Lane'; Murthy Kambhampaty\n> Cc: 'Jeff'; Josh Berkus; [email protected];\n> [email protected]; [email protected];\n> [email protected]\n> Subject: [linux-lvm] RE: [ADMIN] [PERFORM] backup/restore - another\n> area.\n> \n> \n> Friday, October 17, 2003 12:05, Tom Lane \n> [mailto:[email protected]] wrote:\n> \n> >Murthy Kambhampaty <[email protected]> writes:\n> >> ... The script handles situations\n> >> where (i) the XFS filesystem containing $PGDATA has an \n> >external log and (ii)\n> >> the postmaster log ($PGDATA/pg_xlog) is written to a \n> >filesystem different\n> >> than the one containing the $PGDATA folder.\n> >\n> >It does? How exactly can you ensure snapshot consistency between\n> >data files and XLOG if they are on different filesystem\n> \n> Say, you're setup looks something like this:\n> \n> mount -t xfs /dev/VG1/LV_data /home/pgdata\n> mount -t xfs /dev/VG1/LV_xlog /home/pgdata/pg_xlog\n> \n> When you want to take the filesystem backup, you do:\n> \n> Step 1:\n> xfs_freeze -f /dev/VG1/LV_xlog\n> xfs_freeze -f /dev/VG1/LV_data\n> \tThis should finish any checkpoints that were in \n> progress, and not\n> start any new ones\n> \ttill you unfreeze. (writes to an xfs_frozen filesystem \n> wait for the\n> xfs_freeze -u, \n> \tbut reads proceed; see text from xfs_freeze manpage in postcript\n> below.)\n> \n> \n> Step2: \n> create snapshots of /dev/VG1/LV_xlog and /dev/VG1/LV_xlog\n> \n> Step 3: \n> xfs_freeze -u /dev/VG1/LV_data\n> xfs_freeze -u /dev/VG1/LV_xlog\n> \tUnfreezing in this order should assure that checkpoints \n> resume where\n> they left off, then log writes commence.\n> \n> \n> Step4:\n> mount the snapshots taken in Step2 somewhere; e.g. /mnt/snap_data and\n> /mnt/snap_xlog. Copy (or rsync or whatever) /mnt/snap_data to \n> /mnt/pgbackup/\n> and /mnt/snap_xlog to /mnt/pgbackup/pg_xlog. Upon completion, \n> /mnt/pgbackup/\n> looks to the postmaster like /home/pgdata would if the server \n> had crashed at\n> the moment that Step1 was initiated. 
As I understand it, \n> during recovery\n> (startup) the postmaster will roll the database forward to this point,\n> \"checkpoint-ing\" all the transactions that made it into the \n> log before the\n> crash.\n> \n> Step5:\n> remove the snapshots created in Step2.\n> \n> The key is \n> (i) xfs_freeze allows you to \"quiesce\" any filesystem at any \n> point in time\n> and, if I'm not mistaken, the order (LIFO) in which you \n> freeze and unfreeze\n> the two filesystems: freeze $PGDATA/pg_xlog then $PGDATA; \n> unfreeze $PGDATA\n> then $PGDATA/pg_xlog.\n> (ii) WAL recovery assures consistency after a (file)sytem crash.\n> \n> Presently, the test server for my backup scripts is set-up \n> this way, and the\n> backup works flawlessly, AFAICT. (Note that the backup script starts a\n> postmaster on the filesystem copy each time, so you get early \n> warning of\n> problems. Moreover the data in the \"production\" and \"backup\" \n> copies are\n> tested and found to be identical.\n> \n> Comments? Any suggestions for additional tests?\n> \n> Thanks,\n> \tMurthy\n> \n> PS: From the xfs_freeze manpage:\n> \"xfs_freeze suspends and resumes access to an XFS filesystem (see\n> xfs(5)). \n> \n> xfs_freeze halts new access to the filesystem and creates a \n> stable image\n> on disk. xfs_freeze is intended to be used with volume managers and\n> hardware RAID devices that support the creation of snapshots. \n> \n> The mount-point argument is the pathname of the directory where the\n> filesystem is mounted. The filesystem must be mounted to be \n> frozen (see\n> mount(8)). \n> \n> The -f flag requests the specified XFS filesystem to be \n> frozen from new\n> modifications. When this is selected, all ongoing transactions in the\n> filesystem are allowed to complete, new write system calls are halted,\n> other calls which modify the filesystem are halted, and all \n> dirty data,\n> metadata, and log information are written to disk. Any process\n> attempting to write to the frozen filesystem will block \n> waiting for the\n> filesystem to be unfrozen. \n> \n> Note that even after freezing, the on-disk filesystem can contain\n> information on files that are still in the process of unlinking. These\n> files will not be unlinked until the filesystem is unfrozen or a clean\n> mount of the snapshot is complete. \n> \n> The -u option is used to un-freeze the filesystem and allow operations\n> to continue. Any filesystem modifications that were blocked by the\n> freeze are unblocked and allowed to complete.\"\n> \n> _______________________________________________\n> linux-lvm mailing list\n> [email protected]\n> http://lists.sistina.com/mailman/listinfo/linux-lvm\n> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/\n> \n> ==============================================================\n> ================\n> --- PRESBYTERIAN HEALTHCARE SERVICES DISCLAIMER ---\n> \n> This message originates from Presbyterian Healthcare Services \n> or one of its\n> affiliated organizations. It contains information, which may \n> be confidential\n> or privileged, and is intended only for the individual or \n> entity named above.\n> It is prohibited for anyone else to disclose, copy, \n> distribute or use the\n> contents of this message. All personal messages express views \n> solely of the\n> sender, which are not to be attributed to Presbyterian \n> Healthcare Services or\n> any of its affiliated organizations, and may not be \n> distributed without this\n> disclaimer. 
If you received this message in error, please notify us\n> immediately at [email protected]. \n> ==============================================================\n> ================\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index \n> scan if your\n> joining column's datatypes do not match\n> \n", "msg_date": "Tue, 2 Dec 2003 13:11:45 -0500 ", "msg_from": "Murthy Kambhampaty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [linux-lvm] RE: [PERFORM] backup/restore - another" } ]
[ { "msg_contents": "I took advantage of last weekend to upgrade from 7.2.4 to 7.4.0 on a\nnew faster box.\n\nNow I'm trying to implement pg_autovacuum. It seems to work ok, but\nafter about an hour or so, it does nothing. The process still is\nrunning, but nothing is sent to the log file.\n\nI'm running the daemon as distributed with PG 7.4 release as follows:\n\npg_autovacuum -d4 -V 0.15 -A 1 -U postgres -L /var/tmp/autovacuum.log -D\n\nthe last few lines of the log are:\n\n[2003-12-02 11:43:58 AM] VACUUM ANALYZE \"public\".\"msg_recipients\"\n[2003-12-02 12:24:33 PM] select relfilenode,reltuples,relpages from pg_class where relfilenode=18588239\n[2003-12-02 12:24:33 PM] table name: vkmlm.\"public\".\"msg_recipients\"\n[2003-12-02 12:24:33 PM] relfilenode: 18588239; relisshared: 0\n[2003-12-02 12:24:33 PM] reltuples: 9; relpages: 529132\n[2003-12-02 12:24:33 PM] curr_analyze_count: 1961488; cur_delete_count: 1005040\n[2003-12-02 12:24:33 PM] ins_at_last_analyze: 1961488; del_at_last_vacuum: 1005040\n[2003-12-02 12:24:33 PM] insert_threshold: 509; delete_threshold 1001\n[2003-12-02 12:24:33 PM] Performing: VACUUM ANALYZE \"public\".\"user_list\"\n[2003-12-02 12:24:33 PM] VACUUM ANALYZE \"public\".\"user_list\"\n[2003-12-02 12:43:19 PM] select relfilenode,reltuples,relpages from pg_class where relfilenode=18588202\n[2003-12-02 12:43:19 PM] table name: vkmlm.\"public\".\"user_list\"\n[2003-12-02 12:43:19 PM] relfilenode: 18588202; relisshared: 0\n[2003-12-02 12:43:19 PM] reltuples: 9; relpages: 391988\n[2003-12-02 12:43:19 PM] curr_analyze_count: 1159843; cur_delete_count: 1118540\n[2003-12-02 12:43:19 PM] ins_at_last_analyze: 1159843; del_at_last_vacuum: 1118540\n[2003-12-02 12:43:19 PM] insert_threshold: 509; delete_threshold 1001\n\nThen it just sits there. I started it at 11:35am, and it is now\n3:30pm.\n\nI did the same last night at about 10:58pm, and it ran and did work until\n11:57pm, then sat there until I killed/restarted pg_autovacuum this\nmorning at 11:35. The process is not using any CPU time.\n\nI just killed/restarted it and it found work to do on my busy tables\nwhich I'd expect.\n\nI'm running Postgres 7.4 release on FreeBSD 4.9-RELEASE.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 2 Dec 2003 15:37:40 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum daemon stops doing work after about an hour" }, { "msg_contents": "On Tue, 2003-12-02 at 15:37, Vivek Khera wrote:\n> Now I'm trying to implement pg_autovacuum. It seems to work ok, but\n> after about an hour or so, it does nothing. 
The process still is\n> running, but nothing is sent to the log file.\n> \n> I'm running the daemon as distributed with PG 7.4 release as follows:\n> \n> pg_autovacuum -d4 -V 0.15 -A 1 -U postgres -L /var/tmp/autovacuum.log -D\n> \n> the last few lines of the log are:\n> \n> [2003-12-02 11:43:58 AM] VACUUM ANALYZE \"public\".\"msg_recipients\"\n> [2003-12-02 12:24:33 PM] select relfilenode,reltuples,relpages from pg_class where relfilenode=18588239\n> [2003-12-02 12:24:33 PM] table name: vkmlm.\"public\".\"msg_recipients\"\n> [2003-12-02 12:24:33 PM] relfilenode: 18588239; relisshared: 0\n> [2003-12-02 12:24:33 PM] reltuples: 9; relpages: 529132\n> [2003-12-02 12:24:33 PM] curr_analyze_count: 1961488; cur_delete_count: 1005040\n> [2003-12-02 12:24:33 PM] ins_at_last_analyze: 1961488; del_at_last_vacuum: 1005040\n> [2003-12-02 12:24:33 PM] insert_threshold: 509; delete_threshold 1001\n> [2003-12-02 12:24:33 PM] Performing: VACUUM ANALYZE \"public\".\"user_list\"\n> [2003-12-02 12:24:33 PM] VACUUM ANALYZE \"public\".\"user_list\"\n> [2003-12-02 12:43:19 PM] select relfilenode,reltuples,relpages from pg_class where relfilenode=18588202\n> [2003-12-02 12:43:19 PM] table name: vkmlm.\"public\".\"user_list\"\n> [2003-12-02 12:43:19 PM] relfilenode: 18588202; relisshared: 0\n> [2003-12-02 12:43:19 PM] reltuples: 9; relpages: 391988\n> [2003-12-02 12:43:19 PM] curr_analyze_count: 1159843; cur_delete_count: 1118540\n> [2003-12-02 12:43:19 PM] ins_at_last_analyze: 1159843; del_at_last_vacuum: 1118540\n> [2003-12-02 12:43:19 PM] insert_threshold: 509; delete_threshold 1001\n> \n> Then it just sits there. I started it at 11:35am, and it is now\n> 3:30pm.\n\nWeird.... Alphabetically speaking, is vkmlm.\"public\".\"user_list\" be the\nlast table in the last schema in the last database? You are running\nwith -d4, so you would get a message about going to sleep shortly after\ndealing with the last table, but you didn't get the sleep message, so I\ndon't think the problem is that pg_autovacuum is sleeping for an\ninordinate amount time.\n\n> I did the same last night at about 10:58pm, and it ran and did work until\n> 11:57pm, then sat there until I killed/restarted pg_autovacuum this\n> morning at 11:35. The process is not using any CPU time.\n> \n> I just killed/restarted it and it found work to do on my busy tables\n> which I'd expect.\n\nwhen you kill it, do you get a core file? Could you do a backtrace and\nsee where pg_autovacuum is hung up?\n\n> I'm running Postgres 7.4 release on FreeBSD 4.9-RELEASE.\n\nI don't run FreeBSD, so I haven't tested with FreeBSD. Recently Craig\nBoston reported and submitted a patch for a crash on FreeBSD, but that\ndoesn't sound like your problem. Could be some other type of platform\ndependent problem. \n\n\n", "msg_date": "Thu, 04 Dec 2003 00:38:46 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": ">>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n\n>> Then it just sits there. I started it at 11:35am, and it is now\n>> 3:30pm.\n\nMTO> Weird.... Alphabetically speaking, is vkmlm.\"public\".\"user_list\" be the\nMTO> last table in the last schema in the last database? 
You are running\n\nconveniently, yes it is...\n\nMTO> with -d4, so you would get a message about going to sleep shortly after\nMTO> dealing with the last table, but you didn't get the sleep message, so I\nMTO> don't think the problem is that pg_autovacuum is sleeping for an\nMTO> inordinate amount time.\n\nThe only sleep logged was\n\n[2003-12-03 04:47:13 PM] 1 All DBs checked in: 84996853 usec, will sleep for 469 secs.\n\n\nHere's all it did on yesterday afternoon's \"hour of work\":\n\n[2003-12-03 04:45:48 PM] Performing: ANALYZE \"public\".\"url_track\"\n[2003-12-03 04:46:27 PM] Performing: ANALYZE \"public\".\"msg_recipients\"\n[2003-12-03 04:46:55 PM] Performing: ANALYZE \"public\".\"deliveries\"\n[2003-12-03 04:46:55 PM] Performing: ANALYZE \"public\".\"user_list\"\n[2003-12-03 04:47:12 PM] Performing: ANALYZE \"public\".\"sessions\"\n[2003-12-03 04:55:02 PM] Performing: ANALYZE \"public\".\"url_track\"\n[2003-12-03 04:55:22 PM] Performing: VACUUM ANALYZE \"public\".\"msg_recipients\"\n[2003-12-03 05:40:11 PM] Performing: VACUUM ANALYZE \"public\".\"user_list\"\n\nthen 18 minutes later, it reported:\n\n[2003-12-03 05:58:25 PM] select relfilenode,reltuples,relpages from pg_class where relfilenode=18588202\n[2003-12-03 05:58:25 PM] table name: vkmlm.\"public\".\"user_list\"\n[2003-12-03 05:58:25 PM] relfilenode: 18588202; relisshared: 0\n[2003-12-03 05:58:25 PM] reltuples: 9; relpages: 427920\n[2003-12-03 05:58:25 PM] curr_analyze_count: 2559236; cur_delete_count: 2475824\n[2003-12-03 05:58:25 PM] ins_at_last_analyze: 2559236; del_at_last_vacuum: 2475824\n[2003-12-03 05:58:25 PM] insert_threshold: 509; delete_threshold 1001\n\nand stopped doing anything.\n\n\nMTO> when you kill it, do you get a core file? Could you do a backtrace and\nMTO> see where pg_autovacuum is hung up?\n\nnope. unfortunately my PG libs are without debugging, too. I'll\nrebuild pg_autovacuum with debugging and run it under gdb so I can see\nwhere it gets stuck.\n\nI'll report back when I find something. I just wanted to check first\nif anyone else ran into this.\n", "msg_date": "Thu, 4 Dec 2003 11:33:59 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": "Vivek Khera wrote:\n\n>>>>>>\"MTO\" == Matthew T O'Connor <[email protected]> writes:\n> \n> \n>>>Then it just sits there. I started it at 11:35am, and it is now\n>>>3:30pm.\n> \n> \n> MTO> Weird.... Alphabetically speaking, is vkmlm.\"public\".\"user_list\" be the\n> MTO> last table in the last schema in the last database? 
You are running\n> \n> conveniently, yes it is...\n> \n> MTO> with -d4, so you would get a message about going to sleep shortly after\n> MTO> dealing with the last table, but you didn't get the sleep message, so I\n> MTO> don't think the problem is that pg_autovacuum is sleeping for an\n> MTO> inordinate amount time.\n> \n> The only sleep logged was\n> \n> [2003-12-03 04:47:13 PM] 1 All DBs checked in: 84996853 usec, will sleep for 469 secs.\n\nWhat I seen is:\n\n\n# tail -f auto.log\n[2003-12-04 07:10:18 PM] reltuples: 72; relpages: 1\n[2003-12-04 07:10:18 PM] curr_analyze_count: 72; cur_delete_count: 0\n[2003-12-04 07:10:18 PM] ins_at_last_analyze: 72; del_at_last_vacuum: 0\n[2003-12-04 07:10:18 PM] insert_threshold: 572; delete_threshold 536\n[2003-12-04 07:10:18 PM] table name: empdb.\"public\".\"contracts\"\n[2003-12-04 07:10:18 PM] relfilenode: 17784; relisshared: 0\n[2003-12-04 07:10:18 PM] reltuples: 347; relpages: 5\n[2003-12-04 07:10:18 PM] curr_analyze_count: 347; cur_delete_count: 0\n[2003-12-04 07:10:18 PM] ins_at_last_analyze: 347; del_at_last_vacuum: 0\n[2003-12-04 07:10:18 PM] insert_threshold: 847; delete_threshold 673\n\n\n[ 5 minutes of delay ] <----- LOOK THIS\n\n\n[2003-12-04 07:10:18 PM] 503 All DBs checked in: 179396 usec, will sleep \nfor 300 secs.\n[2003-12-04 07:15:19 PM] 504 All DBs checked in: 98814 usec, will sleep \nfor 300 secs.\n\nI think is a good Idea put a fflush after:\n\nfprintf(LOGOUTPUT, \"[%s] %s\\n\", timebuffer, logentry);\n\n\nRegards\nGaetano Mendola\n\n\n\n", "msg_date": "Thu, 04 Dec 2003 19:35:32 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an hour" }, { "msg_contents": ">>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n\n>> I'm running Postgres 7.4 release on FreeBSD 4.9-RELEASE.\n\nMTO> I don't run FreeBSD, so I haven't tested with FreeBSD. Recently Craig\nMTO> Boston reported and submitted a patch for a crash on FreeBSD, but that\nMTO> doesn't sound like your problem. Could be some other type of platform\nMTO> dependent problem. \n\nOh lucky me.\n\nI think I found it. I compiled with -g -O and ran it under gdb, so\nthe output is line buffered. The last thing it prints out now is\nthis:\n\n[2003-12-04 02:11:17 PM] 3 All DBs checked in: -786419782 usec, will sleep for -1272 secs.\n\nsince sleep() takes an unsigned int as its parameter, we are actually\nsleeping for 4294966024 seconds == 136 years.\n\nI recall reading about the negative time to test the dbs\nsomewhere... I guess I'll get on debugging that. The time keeper in\nthis box is pretty darned accurate otherwise (using ntpd).\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 04 Dec 2003 14:29:58 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": ">>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n\nMTO> I don't run FreeBSD, so I haven't tested with FreeBSD. 
Recently Craig\nMTO> Boston reported and submitted a patch for a crash on FreeBSD, but that\n\nsome more debugging data:\n\n(gdb) print now\n$2 = {tv_sec = 1070565077, tv_usec = 216477}\n(gdb) print then\n$3 = {tv_sec = 1070561568, tv_usec = 668963}\n(gdb) print diff\n$4 = -5459981371352\n(gdb) print sleep_secs\n$5 = -1272\n\nso for some reason, instead of calculating 3508547514 as the diff, it\ngot a hugely negative number.\n\nI'll bet it has something to do with the compiler... more debugging\nto follow (without -O compilation...)\n\n\n\nMTO> ---------------------------(end of broadcast)---------------------------\nMTO> TIP 7: don't forget to increase your free space map settings\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 04 Dec 2003 14:44:41 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": ">>>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n>\n> MTO> I don't run FreeBSD, so I haven't tested with FreeBSD. Recently\n> Craig MTO> Boston reported and submitted a patch for a crash on FreeBSD,\n> but that\n>\n> some more debugging data:\n>\n> (gdb) print now\n> $2 = {tv_sec = 1070565077, tv_usec = 216477}\n> (gdb) print then\n> $3 = {tv_sec = 1070561568, tv_usec = 668963}\n> (gdb) print diff\n> $4 = -5459981371352\n> (gdb) print sleep_secs\n> $5 = -1272\n>\n> so for some reason, instead of calculating 3508547514 as the diff, it\n> got a hugely negative number.\n>\n> I'll bet it has something to do with the compiler... more debugging to\n> follow (without -O compilation...)\n\nCould this be the recently reported bug where time goes backwards on\nFreeBSD? Can anyone who knows more about this problem chime in, I know it\nwas recently discussed on Hackers.\n\nThe simple fix is to just make sure it's a positive number. If not, then\njust sleep for some small positive amount. I can make a patch for this,\nprobably sometime this weekend.\n\nThanks for tracking this down.\n\n\n", "msg_date": "Thu, 4 Dec 2003 15:52:51 -0500 (EST)", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": ">>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n\nMTO> Could this be the recently reported bug where time goes backwards on\nMTO> FreeBSD? Can anyone who knows more about this problem chime in, I know it\nMTO> was recently discussed on Hackers.\n\n\nTime does not go backwards -- the now and then variables are properly\nincrementing in time as you see from the debugging output.\n\nThe error appears to be with the computation of the \"diff\". It is\neither a C programming error, or a compiler error. I'm not a C \"cop\"\nso I can't tell you which it is.\n\nWitness this program, below, compiled as \"cc -g -o t t.c\" and the\noutput here:\n\n% ./t\nseconds = 3509\nseconds1 = 3509000000\nuseconds = -452486\nstepped diff = 3508547514\nseconds2 = -785967296\nseconds3 = 3509000000\ndiff = -786419782\nlong long diff = 3508547514\n%\n\napperantly, if you compute (now.tv_sec - then.tv_sec) * 1000000 all at\nonce, it overflows since the RHS is all computed using longs rather\nthan long longs. 
Fix is to cast at least one of the values to long\nlong on the RHS, as in the computation of seconds3 below. compare\nthat to the computation of seconds2 and you'll see that this is the\ncause.\n\nI'd be curious to see the output of this program on other platforms\nand other compilers. I'm using gcc 2.95.4 as shipped with FreeBSD\n4.8+.\n\nThat all being said, you should never sleep less than the base time,\nand never for more than a max amount, perhaps 1 hour?\n\n\n--cut here--\n#include <sys/time.h>\n#include <stdio.h>\n\nint\nmain() \n{\n struct timeval now, then;\n long long diff = 0;\n long long seconds, seconds1, seconds2, seconds3, useconds;\n\n now.tv_sec = 1070565077L;\n now.tv_usec = 216477L;\n\n then.tv_sec = 1070561568L;\n then.tv_usec = 668963L;\n\n seconds = now.tv_sec - then.tv_sec;\n printf(\"seconds = %lld\\n\",seconds);\n seconds1 = seconds * 1000000;\n printf(\"seconds1 = %lld\\n\",seconds1);\n useconds = now.tv_usec - then.tv_usec;\n printf(\"useconds = %lld\\n\",useconds);\n\n diff = seconds1 + useconds;\n printf(\"stepped diff = %lld\\n\",diff);\n\n /* this appears to be the culprit... it should be same as seconds1 */\n seconds2 = (now.tv_sec - then.tv_sec) * 1000000;\n printf(\"seconds2 = %lld\\n\",seconds2);\n\n /* seems we need to cast long's to long long's for this computation */\n seconds3 = ((long long)now.tv_sec - (long long)then.tv_sec) * 1000000;\n printf(\"seconds3 = %lld\\n\",seconds3);\n \n\n diff = (now.tv_sec - then.tv_sec) * 1000000 + (now.tv_usec - then.tv_usec);\n printf (\"diff = %lld\\n\",diff);\n\n diff = ((long long)now.tv_sec - (long long)then.tv_sec) * 1000000 + (now.tv_usec - then.tv_usec);\n printf (\"long long diff = %lld\\n\",diff);\n\n exit(0);\n}\n\n\n--cut here--\n", "msg_date": "Thu, 4 Dec 2003 16:20:22 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": "Actually, you can simplify the fix thusly:\n\n diff = (long long)(now.tv_sec - then.tv_sec) * 1000000 + (now.tv_usec - then.tv_usec);\n\n", "msg_date": "Thu, 4 Dec 2003 16:22:09 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an\thour" }, { "msg_contents": "--On Thursday, December 04, 2003 16:20:22 -0500 Vivek Khera \n<[email protected]> wrote:\n\n>>>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n>\n> MTO> Could this be the recently reported bug where time goes backwards on\n> MTO> FreeBSD? Can anyone who knows more about this problem chime in, I\n> know it MTO> was recently discussed on Hackers.\n>\n>\n> Time does not go backwards -- the now and then variables are properly\n> incrementing in time as you see from the debugging output.\n>\n> The error appears to be with the computation of the \"diff\". It is\n> either a C programming error, or a compiler error. I'm not a C \"cop\"\n> so I can't tell you which it is.\n>\n> Witness this program, below, compiled as \"cc -g -o t t.c\" and the\n> output here:\n>\n> % ./t\n> seconds = 3509\n> seconds1 = 3509000000\n> useconds = -452486\n> stepped diff = 3508547514\n> seconds2 = -785967296\n> seconds3 = 3509000000\n> diff = -786419782\n> long long diff = 3508547514\n> %\n>\n> apperantly, if you compute (now.tv_sec - then.tv_sec) * 1000000 all at\n> once, it overflows since the RHS is all computed using longs rather\n> than long longs. 
Fix is to cast at least one of the values to long\n> long on the RHS, as in the computation of seconds3 below. compare\n> that to the computation of seconds2 and you'll see that this is the\n> cause.\n>\n> I'd be curious to see the output of this program on other platforms\n> and other compilers. I'm using gcc 2.95.4 as shipped with FreeBSD\n> 4.8+.\nthis is with the UnixWare compiler:\n$ cc -O -o testvk testvk.c\n$ ./testvk\nseconds = 3509\nseconds1 = 3509000000\nuseconds = -452486\nstepped diff = 3508547514\nseconds2 = -785967296\nseconds3 = 3509000000\ndiff = -786419782\nlong long diff = 3508547514\n$\n\n\nI think this is a C bug.\n\n\n\n>\n> That all being said, you should never sleep less than the base time,\n> and never for more than a max amount, perhaps 1 hour?\n>\n>\n> --cut here--\n># include <sys/time.h>\n># include <stdio.h>\n>\n> int\n> main()\n> {\n> struct timeval now, then;\n> long long diff = 0;\n> long long seconds, seconds1, seconds2, seconds3, useconds;\n>\n> now.tv_sec = 1070565077L;\n> now.tv_usec = 216477L;\n>\n> then.tv_sec = 1070561568L;\n> then.tv_usec = 668963L;\n>\n> seconds = now.tv_sec - then.tv_sec;\n> printf(\"seconds = %lld\\n\",seconds);\n> seconds1 = seconds * 1000000;\n> printf(\"seconds1 = %lld\\n\",seconds1);\n> useconds = now.tv_usec - then.tv_usec;\n> printf(\"useconds = %lld\\n\",useconds);\n>\n> diff = seconds1 + useconds;\n> printf(\"stepped diff = %lld\\n\",diff);\n>\n> /* this appears to be the culprit... it should be same as seconds1 */\n> seconds2 = (now.tv_sec - then.tv_sec) * 1000000;\n> printf(\"seconds2 = %lld\\n\",seconds2);\n>\n> /* seems we need to cast long's to long long's for this computation */\n> seconds3 = ((long long)now.tv_sec - (long long)then.tv_sec) * 1000000;\n> printf(\"seconds3 = %lld\\n\",seconds3);\n>\n>\n> diff = (now.tv_sec - then.tv_sec) * 1000000 + (now.tv_usec -\n> then.tv_usec); printf (\"diff = %lld\\n\",diff);\n>\n> diff = ((long long)now.tv_sec - (long long)then.tv_sec) * 1000000 +\n> (now.tv_usec - then.tv_usec); printf (\"long long diff = %lld\\n\",diff);\n>\n> exit(0);\n> }\n>\n>\n> --cut here--\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749", "msg_date": "Thu, 04 Dec 2003 15:25:36 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an" }, { "msg_contents": ">>>>> \"LR\" == Larry Rosenman <[email protected]> writes:\n\n>> I'd be curious to see the output of this program on other platforms\n>> and other compilers. I'm using gcc 2.95.4 as shipped with FreeBSD\n>> 4.8+.\nLR> this is with the UnixWare compiler:\nLR> $ cc -O -o testvk testvk.c\nLR> $ ./testvk\nLR> seconds = 3509\nLR> seconds1 = 3509000000\nLR> useconds = -452486\nLR> stepped diff = 3508547514\nLR> seconds2 = -785967296\nLR> seconds3 = 3509000000\nLR> diff = -786419782\nLR> long long diff = 3508547514\nLR> $\n\nLR> I think this is a C bug.\n\nUpon further reflection, I think so to. The entire RHS is long's so\nthe arithmetic is done in longs, then assigned to a long long when\ndone (after things have overflowed). 
Forcing any one of the RHS\nvalues to be long long causes the arithmetic to all be done using long\nlongs, and then you get the numbers you expect.\n\nI think you only notice this in autovacuum when it takes a long time\nto complete the work, like my example of about 3500 seconds.\n", "msg_date": "Thu, 4 Dec 2003 16:37:12 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an" }, { "msg_contents": "\nThis has been fixed and will be in 7.4.1.\n\n---------------------------------------------------------------------------\n\nVivek Khera wrote:\n> >>>>> \"LR\" == Larry Rosenman <[email protected]> writes:\n> \n> >> I'd be curious to see the output of this program on other platforms\n> >> and other compilers. I'm using gcc 2.95.4 as shipped with FreeBSD\n> >> 4.8+.\n> LR> this is with the UnixWare compiler:\n> LR> $ cc -O -o testvk testvk.c\n> LR> $ ./testvk\n> LR> seconds = 3509\n> LR> seconds1 = 3509000000\n> LR> useconds = -452486\n> LR> stepped diff = 3508547514\n> LR> seconds2 = -785967296\n> LR> seconds3 = 3509000000\n> LR> diff = -786419782\n> LR> long long diff = 3508547514\n> LR> $\n> \n> LR> I think this is a C bug.\n> \n> Upon further reflection, I think so to. The entire RHS is long's so\n> the arithmetic is done in longs, then assigned to a long long when\n> done (after things have overflowed). Forcing any one of the RHS\n> values to be long long causes the arithmetic to all be done using long\n> longs, and then you get the numbers you expect.\n> \n> I think you only notice this in autovacuum when it takes a long time\n> to complete the work, like my example of about 3500 seconds.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 7 Dec 2003 16:16:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an" }, { "msg_contents": "Gaetano Mendola wrote:\n> Vivek Khera wrote:\n> \n>>>>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n>>\n>>\n>>\n>>>> Then it just sits there. I started it at 11:35am, and it is now\n>>>> 3:30pm.\n>>\n>>\n>>\n>> MTO> Weird.... Alphabetically speaking, is vkmlm.\"public\".\"user_list\" \n>> be the\n>> MTO> last table in the last schema in the last database? 
You are running\n>>\n>> conveniently, yes it is...\n>>\n>> MTO> with -d4, so you would get a message about going to sleep shortly \n>> after\n>> MTO> dealing with the last table, but you didn't get the sleep \n>> message, so I\n>> MTO> don't think the problem is that pg_autovacuum is sleeping for an\n>> MTO> inordinate amount time.\n>>\n>> The only sleep logged was\n>>\n>> [2003-12-03 04:47:13 PM] 1 All DBs checked in: 84996853 usec, will \n>> sleep for 469 secs.\n> \n> \n> What I seen is:\n> \n> \n> # tail -f auto.log\n> [2003-12-04 07:10:18 PM] reltuples: 72; relpages: 1\n> [2003-12-04 07:10:18 PM] curr_analyze_count: 72; cur_delete_count: 0\n> [2003-12-04 07:10:18 PM] ins_at_last_analyze: 72; del_at_last_vacuum: 0\n> [2003-12-04 07:10:18 PM] insert_threshold: 572; delete_threshold 536\n> [2003-12-04 07:10:18 PM] table name: empdb.\"public\".\"contracts\"\n> [2003-12-04 07:10:18 PM] relfilenode: 17784; relisshared: 0\n> [2003-12-04 07:10:18 PM] reltuples: 347; relpages: 5\n> [2003-12-04 07:10:18 PM] curr_analyze_count: 347; cur_delete_count: 0\n> [2003-12-04 07:10:18 PM] ins_at_last_analyze: 347; del_at_last_vacuum: 0\n> [2003-12-04 07:10:18 PM] insert_threshold: 847; delete_threshold 673\n> \n> \n> [ 5 minutes of delay ] <----- LOOK THIS\n> \n> \n> [2003-12-04 07:10:18 PM] 503 All DBs checked in: 179396 usec, will sleep \n> for 300 secs.\n> [2003-12-04 07:15:19 PM] 504 All DBs checked in: 98814 usec, will sleep \n> for 300 secs.\n> \n> I think is a good Idea put a fflush after:\n> \n> fprintf(LOGOUTPUT, \"[%s] %s\\n\", timebuffer, logentry);\n\nWas I wrong ? If you are watching in tail the log believeme is really\nannoying.\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 08 Dec 2003 01:46:18 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an hour" }, { "msg_contents": "The world rejoiced as [email protected] (Gaetano Mendola) wrote:\n> I think is a good Idea put a fflush after:\n>\n> fprintf(LOGOUTPUT, \"[%s] %s\\n\", timebuffer, logentry);\n\nI thought I had put fflush()es at all the interesting locations...\n\nApparently it was an error to not go to the effort of making sure it\nworked well on FreeBSD. (It was on my list, but I never got the Round\nTuits...) There's an AMD-64 box coming in soon, targeted at FreeBSD,\nso that should change...\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://www3.sympatico.ca/cbbrowne/linux.html\nWhat would a chair look like, if your knees bent the other way? \n", "msg_date": "Sun, 07 Dec 2003 20:17:33 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an hour" }, { "msg_contents": "Christopher Browne wrote:\n\n>The world rejoiced as [email protected] (Gaetano Mendola) wrote:\n> \n>\n>>I think is a good Idea put a fflush after:\n>>\n>>fprintf(LOGOUTPUT, \"[%s] %s\\n\", timebuffer, logentry);\n>> \n>>\n>\n>I thought I had put fflush()es at all the interesting locations...\n> \n>\n\nI just looked through the code, I think there are fflush()es at all but \none interesting locations. The last log_entry call before sleeping \ndoesn't have an fflush call after it. I'll submit a patch that adds it.\n\n>Apparently it was an error to not go to the effort of making sure it\n>worked well on FreeBSD. (It was on my list, but I never got the Round\n>Tuits...) 
There's an AMD-64 box coming in soon, targeted at FreeBSD,\n>so that should change...\n> \n>\nYeah, FreeBSD testing would have been nice, but I don't have access to \nany FreeBSD boxes so.....\n\n", "msg_date": "Mon, 08 Dec 2003 00:27:29 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum daemon stops doing work after about an hour" }, { "msg_contents": ">>>>> \"MTO\" == Matthew T O'Connor <[email protected]> writes:\n\nMTO> Yeah, FreeBSD testing would have been nice, but I don't have access to\nMTO> any FreeBSD boxes so.....\n\nFWIW, with the fflush() added after that sleep, and the fix to the\nlong long computation of sleep time to keep it from overflowing,\npg_autovacuum has been working flawlessly on my FreeBSD 4.9 + PG 7.4.0\nproduction server. I'm just still playing with tuning pg_autovacuum\nto keep it from vacuuming my busy tables *too* often.\n\nJust a question: will my test program show negative sleep 'diff' on\nyour linux box? I can't imagine that it would give different results\nthan on freebsd.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 08 Dec 2003 13:41:12 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum daemon stops doing work after about an hour" } ]
[ { "msg_contents": "Dear all\n\nWe would be recommending to our ct. on the use of Postgresql db as compared\nto MS SQL Server. We are targetting to use Redhat Linux ES v2.1, Postgresql\nv7.3.4 and Postgresql ODBC 07.03.0100.\n\nWe would like to know the minimum specs required for our below target. The\nminimum specs is referring to no. of CPU, memory, harddisk capacity, RAID\ntechnology etc. And also the Postgresql parameters and configuration to run\nsuch a system.\n\n1) We will be running 2 x Postgresql db in the machine.\n\n2) Total number of connections to be around 100. The connections from the\nclients machines will be in ODBC and socket connections.\n\n3) Estimated number of transactions to be written into the Postgresql db is\naround 15000 records per day.\n\n\nThe growth rate in terms of number of connections is around 10% per year\nand the data retention is kept on average at least for 18 months for the 2\ndatabases.\n\nAre there any reference books or sites that I can tap on for the above\nrequirement?\n\n\nThank you,\nREgards.\n\n\n\n\n", "msg_date": "Wed, 3 Dec 2003 10:22:51 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Minimum hardware requirements for Postgresql db" }, { "msg_contents": "After takin a swig o' Arrakan spice grog, [email protected] belched out:\n> We would be recommending to our ct. on the use of Postgresql db as\n> compared to MS SQL Server. We are targetting to use Redhat Linux ES\n> v2.1, Postgresql v7.3.4 and Postgresql ODBC 07.03.0100.\n>\n> We would like to know the minimum specs required for our below\n> target. The minimum specs is referring to no. of CPU, memory,\n> harddisk capacity, RAID technology etc. And also the Postgresql\n> parameters and configuration to run such a system.\n>\n> 1) We will be running 2 x Postgresql db in the machine.\n>\n> 2) Total number of connections to be around 100. The connections\n> from the clients machines will be in ODBC and socket connections.\n>\n> 3) Estimated number of transactions to be written into the\n> Postgresql db is around 15000 records per day.\n>\n> The growth rate in terms of number of connections is around 10% per\n> year and the data retention is kept on average at least for 18\n> months for the 2 databases.\n>\n> Are there any reference books or sites that I can tap on for the\n> above requirement?\n\nPerhaps the best reference on detailed performance information is the\n\"General Bits\" documents.\n\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html>\n\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html>\n\nThese don't point particularly at minimal hardware requirements, but\nrather at how to configure the DBMS to best reflect what hardware you\nhave. But there's some degree to which you can work backwards...\n\nIf you'll need to support 100 concurrent connections, then minimum\nshared_buffers is 200, which implies 1600K of RAM required for shared\nbuffers.\n\n100 connections probably implies around 100MB of memory for the\nbackend processes to support the connections.\n\nThat all points to the notion that you'd more than probably get\nhalf-decent performance if you had a mere 256MB of RAM, which is about\n$50 worth these days.\n\nNone of it sounds terribly challenging; 15K records per day is 625\nrecords per hour which represents an INSERT every 6 seconds. Even if\nthat has to fit into an 8 hour day, that's still not a high number of\ntransactions per second. 
That _sounds like_ an application that could\nwork on old, obsolete hardware. I would imagine that my old Intel\nPentium Pro 200 might cope with the load, in much the way that that\nserver is more than capable of supporting a web server that would\nserve a local workgroup. (I only have 64MB of RAM on that box, which\nwould be a mite low, but it's an _ancient_ server...)\n\nThe only thing that makes me a little suspicious that there's\nsomething funny about the prescription is your indication of having\n100 concurrent users, which is really rather heavyweight in comparison\nwith the comparatively tiny number of transactions. Is this for some\nsort of \"departmental application\"? Where there's a lot of manual\ndata entry, so that each user would generate a transaction every 3-4\nminutes? That actually sounds about right...\n\nLet me suggest that the \"cost driver\" in this will _not_ be the cost\nof the hardware to support the database itself; it will instead be in\nhaving redundant hardware and backup hardware to ensure reliability.\n\nIt would seem likely that just about any sort of modern hardware would\nbe pretty adequate to the task. You can hardly _buy_ a system with\nless than Gigahertz-speed CPUs, 40GB of disk, and 256MB of RAM.\nUpgrade to have 2 SCSI disks, 512MB (or more, which is better) of RAM,\nand the cost of a suitable system still won't be outrageous.\n\nDouble it, buying a standby server, and the cost still oughtn't be\nreal scary. And if the application is important, you _should_ have a\nstandby server, irrespective of what software you might be running.\n-- \n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/x.html\nRules of the Evil Overlord #199. \"I will not make alliances with those\nmore powerful than myself. Such a person would only double-cross me in\nmy moment of glory. I will make alliances with those less powerful\nthan myself. I will then double-cross them in their moment of glory.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 02 Dec 2003 23:44:21 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" }, { "msg_contents": "On Wed, 3 Dec 2003 [email protected] wrote:\n\n> Dear all\n> \n> We would be recommending to our ct. on the use of Postgresql db as compared\n> to MS SQL Server. We are targetting to use Redhat Linux ES v2.1, Postgresql\n> v7.3.4 and Postgresql ODBC 07.03.0100.\n> \n> We would like to know the minimum specs required for our below target. The\n> minimum specs is referring to no. of CPU, memory, harddisk capacity, RAID\n> technology etc. And also the Postgresql parameters and configuration to run\n> such a system.\n> \n> 1) We will be running 2 x Postgresql db in the machine.\n> \n> 2) Total number of connections to be around 100. The connections from the\n> clients machines will be in ODBC and socket connections.\n> \n> 3) Estimated number of transactions to be written into the Postgresql db is\n> around 15000 records per day.\n> \n> \n> The growth rate in terms of number of connections is around 10% per year\n> and the data retention is kept on average at least for 18 months for the 2\n> databases.\n> \n> Are there any reference books or sites that I can tap on for the above\n> requirement?\n\nLike another poster pointed out, this is a walk in the park for \npostgresql. 
My workstation (1.1GHz celeron, 40 gig IDE drive, 512 Meg \nmemory) could handle this load while still being my workstation.\n:-)\n\n", "msg_date": "Wed, 3 Dec 2003 11:22:47 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" }, { "msg_contents": "\n\"scott.marlowe\" <[email protected]> writes:\n\n> > 3) Estimated number of transactions to be written into the Postgresql db is\n> > around 15000 records per day.\n> > \n> > The growth rate in terms of number of connections is around 10% per year\n> > and the data retention is kept on average at least for 18 months for the 2\n> > databases.\n\n> Like another poster pointed out, this is a walk in the park for \n> postgresql. My workstation (1.1GHz celeron, 40 gig IDE drive, 512 Meg \n> memory) could handle this load while still being my workstation.\n\nWell there's some info missing. Like what would you actually be _doing_ with\nthese data?\n\n15,000 inserts per day is nothing. But after 18 months that's over 5M records\nnot including the 10% growth rate. 5M records isn't really all that much but\nit's enough that it's possible to write slow queries against it.\n\nIf you're doing big batch updates or complex reports against the data that\nwill be more interesting than the inserts.\n\n-- \ngreg\n\n", "msg_date": "03 Dec 2003 14:23:32 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" }, { "msg_contents": "> 1) We will be running 2 x Postgresql db in the machine.\n> \n> 2) Total number of connections to be around 100. The connections from the\n> clients machines will be in ODBC and socket connections.\n> \n> 3) Estimated number of transactions to be written into the Postgresql db is\n> around 15000 records per day.\n\nAssuming this server will be dedicated to PostgreSQL only, the needs \noutlined above are modest.\n\nAs was pointed out in other posts, a simple sub-ghz machine with 512mb \nof ram is more than enough, but I'd slap on a gig only because RAM is \ncheaper now. If the database on this server is crucial, I'd look at \nsetting up a UPS, RAID (at this level, even software-based RAID will do \nfine, RAID 5 preferably) and investing in a backup/replicator solution.\n\n-- \nBest,\nAl Hulaton | Sr. Account Engineer | Command Prompt, Inc.\n503.667.4564 | [email protected]\nHome of Mammoth Replicator for PostgreSQL\nManaged PostgreSQL, Linux services and consulting\nRead and Search O'Reilly's 'Practical PostgreSQL' at\nhttp://www.commandprompt.com\n\n", "msg_date": "Wed, 03 Dec 2003 11:31:26 -0800", "msg_from": "Al Hulaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" } ]
[ { "msg_contents": "Hello!\n\nI am relative newcomer to SQL and PostgreSQL world, so please forgive me\nif this question is stupid.\n\nI am experiencing strange behaviour, where simple UPDATE of one field is\nvery slow, compared to INSERT into table with multiple indexes. I have\ntwo tables - one with raw data records (about 24000), where one field\ncontains status information (varchar(10)). First table has no indexes,\nonly primary key (recid). Second table contains processed records - some\nfields are same as first table, others are calculated during processing.\nRecords are processed by Python script, which uses PyPgSQL for PostgreSQL\naccess.\n\nProcessing is done by selecting all records from table1 where status\nmatches certain criteria (import). Each record is processed and results\nare inserted into table2, after inserting status field on same record in\ntable1 is updated with new value (done). Update statement itself is\nextremely simple: \"update table1 set status = 'done' where recid = ...\"\n\nMost interesting is, that insert takes 0.004 seconds in average, but\nupdate takes 0.255 seconds in average. Processing of 24000 records took\naround 1 hour 20 minutes.\n\nThen i changed processing logic not to update every record in table1\nafter processing. Instead i did insert recid value into temporary table\nand updated records in table1 after all records were processed and\ninserted into table2:\nUPDATE table1 SET Status = 'done' WHERE recid IN (SELECT recid FROM temptable)\n\nThis way i got processing time of 24000 records down to about 16 minutes.\nAbout 13 minutes from this took last UPDATE statement.\n\nWhy is UPDATE so slow compared to INSERT? I would expect more or less\nsimilar performance, or slower on insert since table2 has four indexes\nin addition to primary key, table1 has only primary key, which is used\non update. Am i doing something wrong or is this normal?\n\nI am using PostgreSQL 7.3.4, Debian/GNU Linux 3.0 (Woody),\nkernel 2.4.21, Python 2.3.2, PyPgSQL 2.4\n\n-- \nIvar Zarans\n\n", "msg_date": "Wed, 3 Dec 2003 20:29:52 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Slow UPDATE, INSERT OK" } ]
[ { "msg_contents": "\nThanks to Greg Stark, Tom Lane and Stephan Szabo for their advice on \nrewriting my query... the revised query plan claims it should only take \nabout half the time my original query did.\n\nNow for a somewhat different question: How might I improve my DB \nperformance by adjusting the various parameters in postgresql.conf and \nkernel config? Again, TKA.\n\nHere's what I've currently got (hardware, kernel config. and \npostgresql.conf)\n\nHardware: Mac iBook, G3 900Mhz, 640MB memory (This is my research machine :p \n)\nOS: OS X 10.2.6\nPostgresql version: 7.3.2\nKernel Config:\n sysctl -w kern.sysv.shmmax=4194304\n sysctl -w kern.sysv.shmmin=1\n sysctl -w kern.sysv.shmmni=32\n sysctl -w kern.sysv.shmseg=8\n sysctl -w kern.sysv.shmall=1024\n\n========================= Snip of postgresql.conf =================\n\n#\n# Shared Memory Size\n#\nshared_buffers = 128 # min max_connections*2 or 16, 8KB each\nmax_fsm_relations = 2000 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 20000 # min 1000, fsm is free space map, ~6 bytes\nmax_locks_per_transaction = 128 # min 10\nwal_buffers = 16 # min 4, typically 8KB each\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 65535 # min 64, size in KB\nvacuum_mem = 8192 # min 1024, size in KB\n\n#\n# Write-ahead log (WAL)\n#\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n#\nfsync = false\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0 # range 0-16\n\n========================== End Snip =======================\n\nSaludos,\nErik Norvelle\n\n\n", "msg_date": "Wed, 3 Dec 2003 15:22:27 -0600", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Update performance ... Recommended configuration changes?" }, { "msg_contents": "> shared_buffers = 128 # min max_connections*2 or 16, 8KB each\n\nTry 1500.\n\n> sort_mem = 65535 # min 64, size in KB\n\nI'd pull this in. You only have 640MB ram, which means about 8 large\nsorts to swap.\n\nHow about 16000?\n\n> fsync = false\n\nI presume you understand the risks involved with this setting and\ndataloss.\n\n", "msg_date": "Wed, 03 Dec 2003 16:33:10 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update performance ... Recommended configuration" } ]
[ { "msg_contents": "To all,\n\nWe are building a data warehouse composed of essentially click stream \ndata. The DB is growing fairly quickly as to be expected, currently at \n90GB for one months data. The idea is to keep 6 months detailed data on \nline and then start aggregating older data to summary tables. We have 2 \nfact tables currently, one with about 68 million rows and the other with \nabout 210 million rows. Numerous dimension tables ranging from a dozen \nrows to millions.\n\nWe are currently running on a Dell 2650 with 2 Xeon 2.8 processors in \nhyper-threading mode, 4GB of ram, and 5 SCSI drives in a RAID 0, Adaptec \nPERC3/Di, configuration. I believe they are 10k drives. Files system \nis EXT3. We are running RH9 Linux kernel 2.4.20-20.9SMP with bigmem \nturned on. This box is used only for the warehouse. All the ETL work \nis done on this machine as well. DB version is postgreSQL 7.4.\n\nWe are running into issues with IO saturation obviously. Since this \nthing is only going to get bigger we are looking for some advice on how \nto accommodate DB's of this size.\n\nFirst question is do we gain anything by moving the RH Enterprise \nversion of Linux in terms of performance, mainly in the IO realm as we \nare not CPU bound at all? Second and more radical, has anyone run \npostgreSQL on the new Apple G5 with an XRaid system? This seems like a \ngreat value combination. Fast CPU, wide bus, Fibre Channel IO, 2.5TB \nall for ~17k.\n\nI keep see references to terabyte postgreSQL installations, I was \nwondering if anyone on this list is in charge of one of those and can \noffer some advice on how to position ourselves hardware wise.\n\nThanks.\n\n--sean\n\n", "msg_date": "Wed, 03 Dec 2003 16:40:37 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Has anyone run on the new G5 yet" }, { "msg_contents": "I should also add that we have already done a ton of tuning based on the \narchives of this list so we are not starting from scratch here.\n\nThanks.\n\n--sean\n\nSean Shanny wrote:\n\n> To all,\n>\n> We are building a data warehouse composed of essentially click stream \n> data. The DB is growing fairly quickly as to be expected, currently \n> at 90GB for one months data. The idea is to keep 6 months detailed \n> data on line and then start aggregating older data to summary tables. \n> We have 2 fact tables currently, one with about 68 million rows and \n> the other with about 210 million rows. Numerous dimension tables \n> ranging from a dozen rows to millions.\n>\n> We are currently running on a Dell 2650 with 2 Xeon 2.8 processors in \n> hyper-threading mode, 4GB of ram, and 5 SCSI drives in a RAID 0, \n> Adaptec PERC3/Di, configuration. I believe they are 10k drives. \n> Files system is EXT3. We are running RH9 Linux kernel 2.4.20-20.9SMP \n> with bigmem turned on. This box is used only for the warehouse. All \n> the ETL work is done on this machine as well. DB version is \n> postgreSQL 7.4.\n>\n> We are running into issues with IO saturation obviously. Since this \n> thing is only going to get bigger we are looking for some advice on \n> how to accommodate DB's of this size.\n>\n> First question is do we gain anything by moving the RH Enterprise \n> version of Linux in terms of performance, mainly in the IO realm as we \n> are not CPU bound at all? Second and more radical, has anyone run \n> postgreSQL on the new Apple G5 with an XRaid system? This seems like \n> a great value combination. 
Fast CPU, wide bus, Fibre Channel IO, \n> 2.5TB all for ~17k.\n>\n> I keep see references to terabyte postgreSQL installations, I was \n> wondering if anyone on this list is in charge of one of those and can \n> offer some advice on how to position ourselves hardware wise.\n>\n> Thanks.\n>\n> --sean\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 03 Dec 2003 16:54:13 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "Sean\n\n> Second and more radical, has anyone run postgreSQL on the new Apple \n> G5 with an XRaid system? This seems like a great value combination. \n> Fast CPU, wide bus, Fibre Channel IO, 2.5TB all for ~17k.\n>\n> I keep see references to terabyte postgreSQL installations, I was \n> wondering if anyone on this list is in charge of one of those and can \n> offer some advice on how to position ourselves hardware wise.\n\n From my (admittedly low end) OSX experience, you just don't have the \nfilesystem options on OSX that you have on linux, from the noatime \nmount, filesystem types, and the raid options. I also feel that the \nsoftware stack is a bit more mature and tested on the linux side of \nthings.\n\nI doubt that the g5 hardware is that much faster than what you have \nright now. The raid hardware might be a good deal for you even on a \nlinux platform. There are reports of it 'just working' with x86 linux \nhardware.\n\neric\n\n", "msg_date": "Wed, 3 Dec 2003 14:12:08 -0800", "msg_from": "Eric Soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "> We are running into issues with IO saturation obviously. Since this\n> thing is only going to get bigger we are looking for some advice on \n> how to accommodate DB's of this size.\n<snip>\n> Second and more radical, has anyone run \n> postgreSQL on the new Apple G5 with an XRaid system? This seems like \n> a great value combination. Fast CPU, wide bus, Fibre Channel IO, \n> 2.5TB all for ~17k.\n<snip>\nIf you are going for I/O performance you are best off with one of the\nXserve competitors listed at http://www.apple.com/xserve/raid/. The\nXserve is based on IDE drives which have a lower seek time (say 8.9 ms)\ncompared to scsi (3.6 ms for seagate cheetah). For small random\nread/write operations (like databases) this will give you a noticable\nimprovement in performance over ide drives. Also make sure to get as\nmany drives as possible, more spindles equals better I/O performance.\n\n> I keep see references to terabyte postgreSQL installations, I was\n> wondering if anyone on this list is in charge of one of those and can \n> offer some advice on how to position ourselves hardware wise.\n\nI've gone to about half terabyte size and all I can say is you should\nplan for at least one quarter to one half a rack of drivespace (assuming\n14 drives per 4u that's 42 to 84 drives). Do yourself a favor and get\nmore rather than less, you will really appreciate it. I averaged about\n2 mb/s average per drive via the raid controller stats on 14 drive array\nduring I/O bound seek and update operations in 2 raid 10 arrays (half\nxlogs and half data). 
That comes out to around 2 hours for a terabyte\nwith 70 drives assuming a constant scaling. You may be able to get more\nor less depending on your setup and query workload.\n\n> Thanks.\n>\n> --sean\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Wed, 3 Dec 2003 14:35:45 -0800", "msg_from": "\"Fred Moyer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "Sean Shanny wrote:\n\n> We are currently running on a Dell 2650 with 2 Xeon 2.8 processors in \n> hyper-threading mode, 4GB of ram, and 5 SCSI drives in a RAID 0, Adaptec \n> PERC3/Di, configuration. I believe they are 10k drives. Files system \n> is EXT3. We are running RH9 Linux kernel 2.4.20-20.9SMP with bigmem \n> turned on. This box is used only for the warehouse. All the ETL work \n> is done on this machine as well. DB version is postgreSQL 7.4.\n\nAre you experiencing improvment using the hyper-threading ?\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Thu, 04 Dec 2003 00:50:11 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "Gaetano,\n\nI don't believe we have ever run the system without it turned on. \nAnother switch to fiddle with. :-)\n\n--sean\n\nGaetano Mendola wrote:\n\n> Sean Shanny wrote:\n>\n>> We are currently running on a Dell 2650 with 2 Xeon 2.8 processors in \n>> hyper-threading mode, 4GB of ram, and 5 SCSI drives in a RAID 0, \n>> Adaptec PERC3/Di, configuration. I believe they are 10k drives. \n>> Files system is EXT3. We are running RH9 Linux kernel 2.4.20-20.9SMP \n>> with bigmem turned on. This box is used only for the warehouse. All \n>> the ETL work is done on this machine as well. DB version is \n>> postgreSQL 7.4.\n>\n>\n> Are you experiencing improvment using the hyper-threading ?\n>\n>\n> Regards\n> Gaetano Mendola\n>\n>\n\n", "msg_date": "Wed, 03 Dec 2003 19:24:21 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "Sean Shanny wrote:\n> \n> First question is do we gain anything by moving the RH Enterprise \n> version of Linux in terms of performance, mainly in the IO realm as we \n> are not CPU bound at all? Second and more radical, has anyone run \n> postgreSQL on the new Apple G5 with an XRaid system? This seems like a \n> great value combination. Fast CPU, wide bus, Fibre Channel IO, 2.5TB \n> all for ~17k.\n\nSeems like a great value but until Apple produces a G5 that supports \nECC, I'd pass on them.\n\n", "msg_date": "Thu, 04 Dec 2003 09:06:33 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Has anyone run on the new G5 yet" } ]
[ { "msg_contents": "Just wondering if anyone has done any testing on the amount of overhead\nfor insert you might gain by adding a serial column to a table. I'm \nthinking of adding a few to some tables that get an average of 30 - 40\ninserts per second, sometimes bursting over 100 inserts per second and\nwondering if there will be any noticeable impact. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "03 Dec 2003 17:32:18 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "sequence overhead" } ]
[ { "msg_contents": "\nDear all\n\n\nSorry for my mistake on the 15000 recs per day.\n\nIn fact, this server is planned as a OLTP database server for a retailer.\nOur intention is either to setup 1 or 2 Postgresql db in the server.\n\nThe proper sizing info for the 1st Postgresql db should be:\n\nNo. of item master : 200,000\n(This item master grows at 0.5% daily).\n\nNo. of transactions from Point-of-Sales machines: 25,000\n\nPlus other tables, the total sizing that I estimated is 590,000 records\ndaily.\n\nThe 2nd Postgresql db will be used by end users on client machines linked\nvia ODBC, doing manual data entry.\nThis will house the item master, loyalty card master and other Historical\ndata to be kept for at least 1.5 years.\n\nTherefore total sizing for this db is around 165,000,000 recs at any time.\n\nIn summary, the single machine must be able to take up around 100 users\nconnections via both socket and ODBC. And house the above number of\nrecords.\n\n\nThank you,\nREgards.\n\n\n\n\n \n Christopher Browne \n <[email protected]> To: [email protected] \n Sent by: cc: \n pgsql-performance-owner@pos Subject: Re: [PERFORM] Minimum hardware requirements for Postgresql db \n tgresql.org \n \n \n 03/12/2003 12:44 PM \n \n \n\n\n\n\nAfter takin a swig o' Arrakan spice grog, [email protected] belched out:\n> We would be recommending to our ct. on the use of Postgresql db as\n> compared to MS SQL Server. We are targetting to use Redhat Linux ES\n> v2.1, Postgresql v7.3.4 and Postgresql ODBC 07.03.0100.\n>\n> We would like to know the minimum specs required for our below\n> target. The minimum specs is referring to no. of CPU, memory,\n> harddisk capacity, RAID technology etc. And also the Postgresql\n> parameters and configuration to run such a system.\n>\n> 1) We will be running 2 x Postgresql db in the machine.\n>\n> 2) Total number of connections to be around 100. The connections\n> from the clients machines will be in ODBC and socket connections.\n>\n> 3) Estimated number of transactions to be written into the\n> Postgresql db is around 15000 records per day.\n>\n> The growth rate in terms of number of connections is around 10% per\n> year and the data retention is kept on average at least for 18\n> months for the 2 databases.\n>\n> Are there any reference books or sites that I can tap on for the\n> above requirement?\n\nPerhaps the best reference on detailed performance information is the\n\"General Bits\" documents.\n\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html>\n\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html>\n\nThese don't point particularly at minimal hardware requirements, but\nrather at how to configure the DBMS to best reflect what hardware you\nhave. But there's some degree to which you can work backwards...\n\nIf you'll need to support 100 concurrent connections, then minimum\nshared_buffers is 200, which implies 1600K of RAM required for shared\nbuffers.\n\n100 connections probably implies around 100MB of memory for the\nbackend processes to support the connections.\n\nThat all points to the notion that you'd more than probably get\nhalf-decent performance if you had a mere 256MB of RAM, which is about\n$50 worth these days.\n\nNone of it sounds terribly challenging; 15K records per day is 625\nrecords per hour which represents an INSERT every 6 seconds. Even if\nthat has to fit into an 8 hour day, that's still not a high number of\ntransactions per second. 
That _sounds like_ an application that could\nwork on old, obsolete hardware. I would imagine that my old Intel\nPentium Pro 200 might cope with the load, in much the way that that\nserver is more than capable of supporting a web server that would\nserve a local workgroup. (I only have 64MB of RAM on that box, which\nwould be a mite low, but it's an _ancient_ server...)\n\nThe only thing that makes me a little suspicious that there's\nsomething funny about the prescription is your indication of having\n100 concurrent users, which is really rather heavyweight in comparison\nwith the comparatively tiny number of transactions. Is this for some\nsort of \"departmental application\"? Where there's a lot of manual\ndata entry, so that each user would generate a transaction every 3-4\nminutes? That actually sounds about right...\n\nLet me suggest that the \"cost driver\" in this will _not_ be the cost\nof the hardware to support the database itself; it will instead be in\nhaving redundant hardware and backup hardware to ensure reliability.\n\nIt would seem likely that just about any sort of modern hardware would\nbe pretty adequate to the task. You can hardly _buy_ a system with\nless than Gigahertz-speed CPUs, 40GB of disk, and 256MB of RAM.\nUpgrade to have 2 SCSI disks, 512MB (or more, which is better) of RAM,\nand the cost of a suitable system still won't be outrageous.\n\nDouble it, buying a standby server, and the cost still oughtn't be\nreal scary. And if the application is important, you _should_ have a\nstandby server, irrespective of what software you might be running.\n--\n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/x.html\nRules of the Evil Overlord #199. \"I will not make alliances with those\nmore powerful than myself. Such a person would only double-cross me in\nmy moment of glory. I will make alliances with those less powerful\nthan myself. I will then double-cross them in their moment of glory.\"\n<http://www.eviloverlord.com/>\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n\n\n", "msg_date": "Thu, 4 Dec 2003 11:09:59 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" }, { "msg_contents": "[email protected] wrote:\n> Sorry for my mistake on the 15000 recs per day.\n\nIt was useful for us to pick at that a bit; it was certainly looking a\nmite suspicious.\n\n> In fact, this server is planned as a OLTP database server for a retailer.\n> Our intention is either to setup 1 or 2 Postgresql db in the server.\n>\n> The proper sizing info for the 1st Postgresql db should be:\n>\n> No. of item master : 200,000\n> (This item master grows at 0.5% daily).\n>\n> No. 
of transactions from Point-of-Sales machines: 25,000\n\n> Plus other tables, the total sizing that I estimated is 590,000\n> records daily.\n\nSo that's more like 7 TPS, with, more than likely, a peak load several\ntimes that.\n\n> The 2nd Postgresql db will be used by end users on client machines linked\n> via ODBC, doing manual data entry.\n> This will house the item master, loyalty card master and other Historical\n> data to be kept for at least 1.5 years.\n>\n> Therefore total sizing for this db is around 165,000,000 recs at any time.\n\nFYI, it is useful to plan for purging the old data from the very\nbeginning; if you don't, things can get ugly :-(.\n\n> In summary, the single machine must be able to take up around 100\n> users connections via both socket and ODBC. And house the above\n> number of records.\n\nBased on multiplying the load by 40, we certainly move from\n\"pedestrian hardware where anything will do\" to something requiring\nmore exotic hardware. \n\n- You _definitely_ want a disk array, with a bunch of SCSI disks.\n\n- You _definitely_ will want some form of RAID controller with\n battery-backed cache.\n\n- You probably want multiple CPUs.\n\n- You almost certainly will want a second (and maybe third) complete\n redundant system that you replicate data to.\n\n- The thing that will have _wild_ effects on whether this is enough,\n or whether you need to go for something even _more_ exotic\n (e.g. - moving to big iron UNIX(tm), whether that be Solaris,\n AIX, or HP/UX) is the issue of how heavily the main database gets\n hit by queries.\n\n If \"all\" it is doing is consolidating transactions, and there is\n little query load from the POS systems, that is a very different\n level of load from what happens if it is also servicing pricing\n queries.\n\n Performance will get _destroyed_, regardless of how heavy the iron\n is, if you hit the OLTP system with a lot of transaction reports.\n You'll want a secondary replicated system to draw that load off.\n\nEvaluating whether it needs to be \"big\" hardware or \"really enormous\"\nhardware is not realistic based on what you have said. There are\n_big_ variations possible based notably on:\n\n 1. What kind of query load does the OLTP server have to serve up?\n\n If the answer is \"lots,\" then everything gets more expensive.\n\n 2. How was the database schema and the usage of the clients designed?\n\n How well it is done will have a _large_ impact on how many TPS the\n system can cope with.\n\nYou'll surely need to do some prototyping, and be open to\npossibilities such as that you'll need to consider alternative OSes.\nOn Intel/AMD hardware, it may be worth considering FreeBSD; it may\nalso be needful to consider \"official UNIX(tm)\" hardware. It would be\nunrealistic to pretend more certainty...\n-- \n(reverse (concatenate 'string \"ac.notelrac.teneerf\" \"@\" \"454aa\"))\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\n\"Being really good at C++ is like being really good at using rocks to\nsharpen sticks.\" -- Thant Tessman\n", "msg_date": "Thu, 04 Dec 2003 08:10:50 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minimum hardware requirements for Postgresql db" } ]
[ { "msg_contents": "I am using Asynchronous Query Processing interface from libpq library.\nAnd I got some strange results on Solaris\n\nMy test select query is 'SELECT * from pg_user;'\nand I use select system synchronous I/O multiplexer in 'C'\n\nThe first test sends 10000 select queries using 10 nonblocking connections\nto database ( PQsendQuery ).\nThe second test sends the same 10000 select queries using 1 connection (\nPQexec ).\n\nOn FreeBSD there is a huge difference between the async and the sync tests.\nThe async test is much faster than sync test.\nOn Solaris there is no speed difference between async and sync test,\nactually async test is even slower than sync test.\n\nQ. Why ?\n\nOn FreeBSD:\n\n/usr/bin/time ./PgAsyncManager async\nasync test start ... 10000 done\n9.46 real 3.48 user 1.25 sys\n\n/usr/bin/time ./PgAsyncManager sync\nsync test start ... 10000 done\n22.64 real 3.35 user 1.24 sys\n\nOn Solaris:\n\n/usr/bin/time ./PgAsyncManager async\nasync test start ... 10000 done\n\nreal 20.6\nuser 2.1\nsys 0.4\n\n/usr/bin/time ./PgAsyncManager sync\nsync test start ... 10000 done\n\nreal 18.4\nuser 1.1\nsys 0.5\n", "msg_date": "Thu, 4 Dec 2003 01:22:08 -0500 ", "msg_from": "\"Passynkov, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Async Query Processing on Solaris" } ]
[ { "msg_contents": "(hope I'm posting this correctly)\n\nYou wrote:\n\n>First question is do we gain anything by moving the RH Enterprise\n>version of Linux in terms of performance, mainly in the IO realm as we\n>are not CPU bound at all? Second and more radical, has anyone run\n>postgreSQL on the new Apple G5 with an XRaid system? This seems like a\n>great value combination. Fast CPU, wide bus, Fibre Channel IO, 2.5TB\n>all for ~17k.\n\nWow, funny coincidence: I've got a pair of dual xeons w. 8G + 14disk\nfcal arrays, and an xserve with an XRaid that I've been screwing around\nwith. If you have specific tests you'd like to see, let me know.\n\n--- so, for the truly IO bound, here's my recent messin' around summary:\n\nIn the not-so-structured tests I've done, I've been disappointed with\nRedhat AS 2.1. IO thruput. I've had difficulty driving a lot of IO\nthru my dual fcal channels: I can only get one going at 60M/sec, and\nwhen I drive IO to the second, I still see only about 60M/sec combined.\nand when I does get that high it uses about 30% CPU on a dual xeon\nhyperthreaded box, all in sys (by vmstat). something very wrong there,\nand the only thing I can conclude is that I'm serializing in the driver\nsomehow (qla2200 driver), thus parallel channels do the same as one, and\ninterrupt madness drives the cpu up just to do this contentious IO.\n\nThis contrasts with the Redhat 9 I just installed on a similar box, that\ngot 170M/sec on 2 fcal channels, and the expected 5-6% cpu.\n\nThe above testing was dd straight from /dev/rawX devices, so no buffer\ncache confusion there. \n\nAlso had problems getting the Redhat AS to bind to my newer qla2300\nadapters at all, whereas they bound fine under RH9. \n\nRedhat makes the claim of finer grained locks/semaphores in the qla and\nAIC drivers in RH AS, but my tests seem to show that the 2 fcal ports\nwere serializing against eachother in the kernel under RH AS, and not so\nunder RH9. Maybe I'm useing the wrong driver under AS. eh.\n\nso sort story long, it seems like you're better of with RH9. But again,\nbefore you lay out serious coin for xserve or others, if you have\nspecific tests you want to see, I'll take a little time to contrast w.\nexserve. One of the xeons also has an aic7x scsi controler w 4 drives\nso It might match your rig better.\n\nI also did some token testing on the xserve I have which I believe may\nonly have one processor (how do you tell on osX?) and the xraid has 5\nspindles in it. I did a cursory build of postgres on it and also a io\ntest (to the filesystem) and saw about 90M/sec. Dunno if it has dual\npaths (if you guys know how to tell, let me know)\n\n\nBiggest problem I've had in the past w. linux in general is that it\nseems to make poor VM choices under heavy filesystem IO. I don't really\nget exactly where it's going wrong , but I've had numerous experiences\non older systems where bursty IO would seem to cause paging on the box\n(pageout of pieces of the oracle SGA shared memory) which is a\nperformance disaseter. It seems to happen even when the shared memory\nwas sized reasonably below the size of physical ram, presumably because\nlinux is too aggressive in allocating filesystem cache (?) anyway, it\nseems to make decisions based on desire for zippy workstation\nperformance and gets burned on thruput on database servers. I'm\nguessing this may be an issue for you , when doing heavy IO. 
Thing is,\nit'll show like you're IO bound kindof because you're thrashing.\n\n", "msg_date": "04 Dec 2003 00:24:50 -0800", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Has anyone run on the new G5 yet" }, { "msg_contents": "Paul Tuckfield wrote:\n> Biggest problem I've had in the past w. linux in general is that it\n> seems to make poor VM choices under heavy filesystem IO. I don't really\n> get exactly where it's going wrong , but I've had numerous experiences\n> on older systems where bursty IO would seem to cause paging on the box\n> (pageout of pieces of the oracle SGA shared memory) which is a\n> performance disaseter. It seems to happen even when the shared memory\n> was sized reasonably below the size of physical ram, presumably because\n> linux is too aggressive in allocating filesystem cache (?) anyway, it\n> seems to make decisions based on desire for zippy workstation\n> performance and gets burned on thruput on database servers. I'm\n> guessing this may be an issue for you , when doing heavy IO. Thing is,\n> it'll show like you're IO bound kindof because you're thrashing.\n\nThis is not surprising. There has always been an issue with dynamic\nbuffer cache systems contending with memory used by processes. It takes\na long time to get the balance right, and still there might be cases\nwhere it gets things wrong. Isn't there a Linux option to lock shared\nmemory in to RAM? If so, we should document this in our manuals, but\nright now, there is no mention of it.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 6 Dec 2003 08:19:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Has anyone run on the new G5 yet" } ]
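On the question raised above about a Linux option to lock shared memory into RAM: Linux does expose this through shmctl() with the SHM_LOCK command, which pins an existing System V segment so it cannot be paged out (it requires root or the CAP_IPC_LOCK capability, and PostgreSQL itself did not issue this call at the time). The fragment below is only a hedged illustration of that OS facility, assuming you already have the segment id from ipcs or shmget().

/* Hedged sketch: pin an existing System V shared memory segment into RAM
   on Linux via SHM_LOCK.  The shmid is assumed to come from ipcs/shmget();
   needs root or CAP_IPC_LOCK.  Purely an illustration of the facility being
   asked about, not something PostgreSQL does for you. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    int shmid;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <shmid>\n", argv[0]);
        return 1;
    }
    shmid = atoi(argv[1]);

    if (shmctl(shmid, SHM_LOCK, NULL) != 0)   /* lock the segment's pages in RAM */
    {
        perror("shmctl(SHM_LOCK)");
        return 1;
    }
    printf("segment %d locked into RAM\n", shmid);
    return 0;
}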
[ { "msg_contents": "I am using Asynchronous Query Processing interface from libpq library.\nAnd I got some strange results on Solaris\n\nMy test select query is 'SELECT * from pg_user;'\nand I use select system synchronous I/O multiplexer in 'C'\n\nThe first test sends 10000 select queries using 10 nonblocking connections\nto database ( PQsendQuery ).\nThe second test sends the same 10000 select queries using 1 connection (\nPQexec ).\n\nOn FreeBSD there is a huge difference between the async and the sync tests.\nThe async test is much faster than sync test.\nOn Solaris there is no speed difference between async and sync test,\nactually async test is even slower than sync test.\n\nQ. Why ?\n\nOn FreeBSD:\n\n/usr/bin/time ./PgAsyncManager async\nasync test start ... 10000 done\n9.46 real 3.48 user 1.25 sys\n\n/usr/bin/time ./PgAsyncManager sync\nsync test start ... 10000 done\n22.64 real 3.35 user 1.24 sys\n\nOn Solaris:\n\n/usr/bin/time ./PgAsyncManager async\nasync test start ... 10000 done\n\nreal 20.6\nuser 2.1\nsys 0.4\n\n/usr/bin/time ./PgAsyncManager sync\nsync test start ... 10000 done\n\nreal 18.4\nuser 1.1\nsys 0.5\n\n", "msg_date": "Thu, 4 Dec 2003 07:48:12 -0500 ", "msg_from": "\"Passynkov, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Async Query Processing on Solaris" } ]
[ { "msg_contents": "Hi, \n\nI have the following table:\nCREATE TABLE public.rights (\nid int4 DEFAULT nextval('\"rights_id_seq\"'::text) NOT NULL, \nid_user int4 NOT NULL, \nid_modull int4 NOT NULL, \nCONSTRAINT rights_pkey PRIMARY KEY (id)\n) \n\nand I created the following indexes:\n\nCREATE INDEX right_id_modull_idx ON rights USING btree (id_modull);\nCREATE INDEX right_id_user_idx ON rights USING btree (id_user);\n\nNow the problem:\n\nEXPLAIN SELECT * FROM rights r WHERE r.id_modull =15\nreturnes:\nSeq Scan on rights r (cost=0.00..12.30 rows=42 width=12)\nFilter: (id_modull = 15)\n\nEXPLAIN SELECT * FROM rights r WHERE r.id_user =15\nreturnes:\nIndex Scan using right_id_user_idx on rights r (cost=0.00..8.35 rows=11 width=12)\nIndex Cond: (id_user = 15)\n\nQuestion: Why the right_id_modull_idx is NOT USED at the 1st query and the second query the right_id_user_idx index is used. \n\nI don't understand this. \n\nThanx in advance.\nAndy.\n\n\n\n\n\n\n\n\n\n\nHi, \n \nI have the following table:\n\nCREATE TABLE public.rights (id int4 DEFAULT \nnextval('\"rights_id_seq\"'::text) NOT NULL, id_user int4 NOT NULL, id_modull int4 NOT NULL, CONSTRAINT rights_pkey PRIMARY KEY (id)) \n\nand I created the following indexes:\nCREATE INDEX right_id_modull_idx ON \nrights USING btree (id_modull);CREATE INDEX right_id_user_idx \nON rights USING btree \n(id_user);\nNow the problem:\nEXPLAIN SELECT * FROM rights r WHERE r.id_modull =15returnes:Seq Scan on \nrights r (cost=0.00..12.30 rows=42 width=12)Filter: (id_modull = \n15)\nEXPLAIN SELECT * FROM rights r WHERE r.id_user =15returnes:Index Scan using right_id_user_idx on \nrights r (cost=0.00..8.35 rows=11 width=12)Index \nCond: (id_user = 15)\nQuestion: Why the right_id_modull_idx is NOT USED at \nthe 1st query and the second query the right_id_user_idx index is used. \n\nI don't understand this. \nThanx in \nadvance.Andy.", "msg_date": "Thu, 4 Dec 2003 16:57:51 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used. WHY?" }, { "msg_contents": "\nOn Thu, 4 Dec 2003, Andrei Bintintan wrote:\n\n> Hi,\n>\n> I have the following table:\n> CREATE TABLE public.rights (\n> id int4 DEFAULT nextval('\"rights_id_seq\"'::text) NOT NULL,\n> id_user int4 NOT NULL,\n> id_modull int4 NOT NULL,\n> CONSTRAINT rights_pkey PRIMARY KEY (id)\n> )\n>\n> and I created the following indexes:\n>\n> CREATE INDEX right_id_modull_idx ON rights USING btree (id_modull);\n> CREATE INDEX right_id_user_idx ON rights USING btree (id_user);\n>\n> Now the problem:\n>\n> EXPLAIN SELECT * FROM rights r WHERE r.id_modull =15\n> returnes:\n> Seq Scan on rights r (cost=0.00..12.30 rows=42 width=12)\n> Filter: (id_modull = 15)\n>\n> EXPLAIN SELECT * FROM rights r WHERE r.id_user =15\n> returnes:\n> Index Scan using right_id_user_idx on rights r (cost=0.00..8.35 rows=11 width=12)\n> Index Cond: (id_user = 15)\n>\n> Question: Why the right_id_modull_idx is NOT USED at the 1st query and\n> the second query the right_id_user_idx index is used.\n\nAs a note, pgsql-performance is a better list for these questions.\n\nSo, standard questions:\n\nHow many rows are in the table, what does EXPLAIN ANALYZE show for the\nqueries, if you force index usage (set enable_seqscan=off) on the first\nwhat does EXPLAIN ANALYZE show then, have you used ANALYZE/VACUUM ANALYZE\nrecently?\n\n", "msg_date": "Thu, 4 Dec 2003 07:19:49 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used. 
WHY?" }, { "msg_contents": "There are around 700 rows in this table.\nIf I set enable_seqscan=off then the index is used and I also used Vacuum\nAnalyze recently.\n\nI find it strange because the number of values of id_user and id_modull are\nsomehow in the same distribution and when I search the table the id_user\nindex is used but the id_modull index is not used.\n\nDoes somehow postgre know that a seq scan runs faster in this case as a\nindex scan? Should I erase this index?\nI have to say that the data's in this table are not changed offen, but there\nare a LOT of joins made with this table.\n\nBest regards.\nAndy.\n\n\n----- Original Message -----\nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"Andrei Bintintan\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Thursday, December 04, 2003 5:19 PM\nSubject: Re: [ADMIN] Index not used. WHY?\n\n\n>\n> On Thu, 4 Dec 2003, Andrei Bintintan wrote:\n>\n> > Hi,\n> >\n> > I have the following table:\n> > CREATE TABLE public.rights (\n> > id int4 DEFAULT nextval('\"rights_id_seq\"'::text) NOT NULL,\n> > id_user int4 NOT NULL,\n> > id_modull int4 NOT NULL,\n> > CONSTRAINT rights_pkey PRIMARY KEY (id)\n> > )\n> >\n> > and I created the following indexes:\n> >\n> > CREATE INDEX right_id_modull_idx ON rights USING btree (id_modull);\n> > CREATE INDEX right_id_user_idx ON rights USING btree (id_user);\n> >\n> > Now the problem:\n> >\n> > EXPLAIN SELECT * FROM rights r WHERE r.id_modull =15\n> > returnes:\n> > Seq Scan on rights r (cost=0.00..12.30 rows=42 width=12)\n> > Filter: (id_modull = 15)\n> >\n> > EXPLAIN SELECT * FROM rights r WHERE r.id_user =15\n> > returnes:\n> > Index Scan using right_id_user_idx on rights r (cost=0.00..8.35 rows=11\nwidth=12)\n> > Index Cond: (id_user = 15)\n> >\n> > Question: Why the right_id_modull_idx is NOT USED at the 1st query and\n> > the second query the right_id_user_idx index is used.\n>\n> As a note, pgsql-performance is a better list for these questions.\n>\n> So, standard questions:\n>\n> How many rows are in the table, what does EXPLAIN ANALYZE show for the\n> queries, if you force index usage (set enable_seqscan=off) on the first\n> what does EXPLAIN ANALYZE show then, have you used ANALYZE/VACUUM ANALYZE\n> recently?\n>\n\n", "msg_date": "Fri, 5 Dec 2003 10:11:11 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Index not used. WHY?" }, { "msg_contents": "Andrei Bintintan wrote:\n\n> There are around 700 rows in this table.\n> If I set enable_seqscan=off then the index is used and I also used Vacuum\n> Analyze recently.\n\nFor 700 rows I think seq. would work best.\n> \n> I find it strange because the number of values of id_user and id_modull are\n> somehow in the same distribution and when I search the table the id_user\n> index is used but the id_modull index is not used.\n> \n> Does somehow postgre know that a seq scan runs faster in this case as a\n> index scan? Should I erase this index?\n> I have to say that the data's in this table are not changed offen, but there\n> are a LOT of joins made with this table.\n\nIf table is cached then it does not matter. Unless it grows substantially, say \nto around hundred thousand rows(Note your table is small), idex wouldn't be that \nuseful.\n\n Shridhar\n\n", "msg_date": "Fri, 05 Dec 2003 14:09:51 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Index not used. WHY?" 
}, { "msg_contents": "On Fri, 5 Dec 2003, Andrei Bintintan wrote:\n\n> There are around 700 rows in this table.\n> If I set enable_seqscan=off then the index is used and I also used Vacuum\n> Analyze recently.\n>\n> I find it strange because the number of values of id_user and id_modull are\n> somehow in the same distribution and when I search the table the id_user\n> index is used but the id_modull index is not used.\n\nIt was guessing that one would return 11 rows and the other 42 which is\nwhy one used the index and the other wouldn't. If those numbers aren't\nrealistic, you may want to raise the statistics target for the columns\n(see ALTER TABLE) and re-run analyze.\n\n> Does somehow postgre know that a seq scan runs faster in this case as a\n> index scan? Should I erase this index?\n\nIt's making an educated guess. When you're doing an index scan, it needs\nto read through the index and then get matching rows from the table.\nHowever, because those reads from the table are in a potentially random\norder, there's usually a higher cost associated with those reads than if\nthe table was read in order (barring cases where you know your database\nshould always stay cached in disk cache, etc...). If there's say 50 pages\nin the entire table, a sequence scan does 50 sequential page reads and is\nchecking all those tuples. If you're getting say 42 rows through an\nindex, you're first reading through the index, and then getting <n> pages\nin a random order from the table where <n> depends on the distribution of\nvalues throughout the table. There's a variable in the configuration,\nrandom_page_cost which controls the ratio of cost between a sequential\nread and a random one (defaulting to 4).\n\n", "msg_date": "Fri, 5 Dec 2003 07:34:45 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Index not used. WHY?" } ]
[ { "msg_contents": "Hi,\n\nsorry for duplication, I asked this on pgsql-admin first before\nrealizing it wasn't the appropriate list.\n\nI'm having trouble optimizing PostgreSQL for an admittedly heinous\nworst-case scenario load.\n\ntestbed:\ndual P3 1.3 GHz box with 2GB RAM\ntwo IDE 120G drives on separate channels (DMA on), OS on one, DB on the\nother, some swap on each (totalling 2.8G).\nRH Linux 8.\n\nI've installed PG 7.3.4 from source (./configure && make && make\ninstall) and from PGDG RPMs and can switch back and forth. I also have\nthe 7.4 source but haven't done any testing with it yet aside from\nstarting it and importing some data.\n\nThe application is on another server, and does this torture test: it\nbuilds a large table (~6 million rows in one test, ~18 million in\nanother). Rows are then pulled in chunks of 4 to 6 thousand, acted on,\nand inserted back into another table (which will of course eventually\ngrow to the full size of the first).\n\nThe problem is that pulling the 4 to 6 thousand rows puts PostgreSQL\ninto a tail spin: postmaster hammers on CPU anywhere from 90 seconds to\nfive minutes before returning the data. During this time vmstat shows\nthat disk activity is up of course, but it doesn't appear to be with\npage swapping (free and top and vmstat).\n\nAnother problem is that performance of the 6 million row job is decent\nif I stop the job and run a vacuumdb --analyze before letting it\ncontinue; is this something that 7.4 will help with? vacuumb --analyze\ndoesn't seem to have much effect on the 18 million row job.\n\nI've tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum\nmemory to 8192, and effective cache size to 10000.\n/proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max\nis set to 65536. Ulimit -n 3192.\n\nI've read several sites and postings on tuning PG and have tried a\nnumber of different theories, but I'm still not getting the architecture\nof how things work.\n\nthanks,\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 08:06:23 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "tuning questions" }, { "msg_contents": "Jack,\n\n> The application is on another server, and does this torture test: it\n> builds a large table (~6 million rows in one test, ~18 million in\n> another). Rows are then pulled in chunks of 4 to 6 thousand, acted on,\n> and inserted back into another table (which will of course eventually\n> grow to the full size of the first).\n\n>e tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum\n> memory to 8192, and effective cache size to 10000.\n> /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max\n> is set to 65536. Ulimit -n 3192.\n\nHave you read this?\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nActually, your situation is not \"worst case\". For one thing, your process is \neffectively single-user; this allows you to throw all of your resources at \none user. The problem is that your settings have effectively throttled PG \nat a level appropriate to a many-user and/or multi-purpose system. 
You need \nto \"open them up\".\n\nFor something involving massive updating/transformation like this, once you've \ndone the basics (see that URL above) the main settings which will affect you \nare sort_mem and checkpoint_segments, both of which I'd advise jacking way up \n(test by increments). Raising wal_buffers wouldn't hurt either.\n\nAlso, give some thought to running VACUUM and/or ANALYZE between segments of \nyour procedure. Particularly if you do updates to many rows of a table and \nthen query based on the changed data, it is vital to run an ANALYZE first, \nand usually a good idea to run a VACUUM if it was an UPDATE or DELETE and not \nan INSERT.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 4 Dec 2003 08:59:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 04 Dec 2003 08:06:23 -0800\nJack Coates <[email protected]> wrote:\n\n> testbed:\n> dual P3 1.3 GHz box with 2GB RAM\n> two IDE 120G drives on separate channels (DMA on), OS on one, DB on\n> the other, some swap on each (totalling 2.8G).\n> RH Linux 8.\n\nSide Note: be sure to turn off write caching on those disks or you may\nhave data corruption in the event of a failure\n\n> The problem is that pulling the 4 to 6 thousand rows puts PostgreSQL\n> into a tail spin: postmaster hammers on CPU anywhere from 90 seconds\n> to five minutes before returning the data. During this time vmstat\n> shows that disk activity is up of course, but it doesn't appear to be\n> with page swapping (free and top and vmstat).\n> \nHave you tried modifying the app to retrieve the rows in smaller chunks?\n(use a cursor). this way it only needs to alloate memory to hold say,\n100 rows at a time instead of 6000. \n\nAlso, have you explain analyze'd your queries to make sure PG is picking\na good plan to execute?\n\n> I've tweaked shared buffers to 8192, pushed sort memory to 2048,\n> vacuum memory to 8192, and effective cache size to 10000.\n> /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max\n> is set to 65536. Ulimit -n 3192.\n\nyou should set effective cache size bigger, especially with 2GB of\nmemory. effective_cache_size tells PG 'about' how much data it cna\nexpect the OS to cache. \n\nand.. I'm not sure about your query, but perhaps the sort of those 6000\nrows is spilling to disk? If you look in explain analyze you'll see in\nthe \"Sort\" step(s) it will tell you how many rows and how \"wide\" they\nare. If rows * width > sort_mem, it will have to spill the sort to\ndisk, which is slow.\n\nIf you post query info and explain analyze's we can help optimize the\nquery itself.\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Thu, 4 Dec 2003 11:59:32 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "\n> \n> I've tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum\n> memory to 8192, and effective cache size to 10000.\n> /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max\n> is set to 65536. Ulimit -n 3192.\n\nYour sharedmemory is too high, and not even being used effectivey. 
Your \nother settings are too low.\n\nBall park guessing here, but I'd say first read (and understand) this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nThen make shared memory about 10-20% available ram, and set:\n\n((shmmax/1024) - ( 14.2 * max_connections ) - 250 ) / 8.2 = shared_buffers\n\ndecrease random_page_cost to 0.3 and wack up sort mem by 16 times, \neffective cache size to about 50% RAM (depending on your other settings) \nand try that for starters.\n\n\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n\n", "msg_date": "Thu, 04 Dec 2003 17:13:19 +0000", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 4 Dec 2003, Jack Coates wrote:\n\n> Another problem is that performance of the 6 million row job is decent\n> if I stop the job and run a vacuumdb --analyze before letting it\n> continue; is this something that 7.4 will help with? vacuumb --analyze\n> doesn't seem to have much effect on the 18 million row job.\n\nJust to add to what the others have said here, you probably want to run \nthe pg_autovacuum daemon in the background. It comes with 7.4 but will \nwork fine with 7.3. \n\n\n", "msg_date": "Thu, 4 Dec 2003 10:26:38 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, Dec 04, 2003 at 11:59:32AM -0500, Jeff wrote:\n> On Thu, 04 Dec 2003 08:06:23 -0800\n> Jack Coates <[email protected]> wrote:\n> \n> > testbed:\n> > dual P3 1.3 GHz box with 2GB RAM\n> > two IDE 120G drives on separate channels (DMA on), OS on one, DB on\n> > the other, some swap on each (totalling 2.8G).\n> > RH Linux 8.\n> \n> Side Note: be sure to turn off write caching on those disks or you may\n> have data corruption in the event of a failure\n\nI've seen this comment several times from different people.\nWould someone care to explain how you would get data corruption? I\nthought that the whole idea of the log is to provide a journal similar\nto what you get in a journaling file system. \n\nIn other words, the db writes a series of transactions to the log and marks \nthat \"log entry\" (don't know the right nomeclature) as valid. When the db\ncrashes, it reads the log, and discards the last \"log entry\" if it wasn't\nmarked as valid, and \"replays\" any transactions that haven't been\ncommited ot the db. The end result being that you might loose your last\ntransaction(s) if the db crashes, but nothing ever gets corrupted.\n\nSo what am I missing in this picture?\n\nRegards,\n\nDror\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 4 Dec 2003 09:57:38 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Scott,\n\n> Just to add to what the others have said here, you probably want to run \n> the pg_autovacuum daemon in the background. It comes with 7.4 but will \n> work fine with 7.3. \n\nI don't recommend using pg_autovacuum with a data transformation task. 
pg_av \nis designed for \"regular use\" not huge batch tasks.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 4 Dec 2003 10:03:52 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "If I understand the problem correctly, the issue is that IDE drives\nsignal that data has been written to disk when they actually are holding\nthe data in the write cache. In the case of a power down (and I remember\nsomeone showing some test results confirming this, check the list\narchive) the data in the drive write cache is lost, resulting in\ncorrupted logs. \n\nAnyone else have more details?\n\nJord Tanner\n\nOn Thu, 2003-12-04 at 09:57, Dror Matalon wrote:\n> On Thu, Dec 04, 2003 at 11:59:32AM -0500, Jeff wrote:\n> > On Thu, 04 Dec 2003 08:06:23 -0800\n> > Jack Coates <[email protected]> wrote:\n> > \n> > > testbed:\n> > > dual P3 1.3 GHz box with 2GB RAM\n> > > two IDE 120G drives on separate channels (DMA on), OS on one, DB on\n> > > the other, some swap on each (totalling 2.8G).\n> > > RH Linux 8.\n> > \n> > Side Note: be sure to turn off write caching on those disks or you may\n> > have data corruption in the event of a failure\n> \n> I've seen this comment several times from different people.\n> Would someone care to explain how you would get data corruption? I\n> thought that the whole idea of the log is to provide a journal similar\n> to what you get in a journaling file system. \n> \n> In other words, the db writes a series of transactions to the log and marks \n> that \"log entry\" (don't know the right nomeclature) as valid. When the db\n> crashes, it reads the log, and discards the last \"log entry\" if it wasn't\n> marked as valid, and \"replays\" any transactions that haven't been\n> commited ot the db. The end result being that you might loose your last\n> transaction(s) if the db crashes, but nothing ever gets corrupted.\n> \n> So what am I missing in this picture?\n> \n> Regards,\n> \n> Dror\n-- \nJord Tanner <[email protected]>\n\n", "msg_date": "Thu, 04 Dec 2003 10:07:56 -0800", "msg_from": "Jord Tanner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, Dec 04, 2003 at 09:57:38AM -0800, Dror Matalon wrote:\n> \n> I've seen this comment several times from different people.\n> Would someone care to explain how you would get data corruption? I\n> thought that the whole idea of the log is to provide a journal similar\n> to what you get in a journaling file system. \n\n> So what am I missing in this picture?\n\nThat a journalling file system can _also_ have file corruption if you\nhave write caching enabled and no battery back up. If the drive\ntells the OS, \"Yep! It's all on the disk!\" bit it is _not_ actually\nscribed in the little bitty magnetic patterns -- and at that very\nmoment, the power goes away -- the data that was reported to have been\non the disk, but which was actually _not_ on the disk, is no longer\nanywhere. (Well, except in the past. But time travel was disabled\nsome versions ago. 
;-)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 4 Dec 2003 13:11:52 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 4 Dec 2003, Josh Berkus wrote:\n\n> Scott,\n> \n> > Just to add to what the others have said here, you probably want to run \n> > the pg_autovacuum daemon in the background. It comes with 7.4 but will \n> > work fine with 7.3. \n> \n> I don't recommend using pg_autovacuum with a data transformation task. pg_av \n> is designed for \"regular use\" not huge batch tasks.\n\nWhat bad thing is likely to happen if it's used here? Fire too often or \nuse too much I/O bandwidth? Would that be fixed by the patch being tested \nto introduce a delay every x pages of vacuuming?\n\n", "msg_date": "Thu, 4 Dec 2003 11:12:30 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 09:12, Rob Fielding wrote:\n> > \n> > I've tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum\n> > memory to 8192, and effective cache size to 10000.\n> > /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max\n> > is set to 65536. Ulimit -n 3192.\n> \n> Your sharedmemory is too high, and not even being used effectivey. Your \n> other settings are too low.\n> \n> Ball park guessing here, but I'd say first read (and understand) this:\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nI've read it many times, understanding is slower :-)\n\n> \n> Then make shared memory about 10-20% available ram, and set:\n> \n> ((shmmax/1024) - ( 14.2 * max_connections ) - 250 ) / 8.2 = shared_buffers\n> \n> decrease random_page_cost to 0.3 and wack up sort mem by 16 times, \n> effective cache size to about 50% RAM (depending on your other settings) \n> and try that for starters.\n\nFollowing this, I've done:\n2gb ram\n=\n 2,000,000,000\nbytes\n\n15 % of that\n=\n 300,000,000\nbytes\n\ndivided by\n1024\n=\n 292,969\nkbytes\n\nmax_conn *\n14.2\n=\n 454\nkbytes\n\nsubtract c4\n=\n 292,514\nkbytes\n\nsubtract 250\n=\n 292,264\nkbytes\n\ndivide by 8.2\n=\n 35,642\nshared_buffers\n\nperformance is unchanged for the 18M job -- pg continues to use ~\n285-300M, system load and memory usage stay the same. I killed that,\ndeleted from the affected tables, inserted a 6M job, and started a\nvacuumdb --anaylze. It's been running for 20 minutes now...\n\ngetting the SQL query better optimized for PG is on my todo list, but\nnot something I can do right now -- this application is designed to be\ncross-platform with MS-SQL, PG, and Oracle so tweaking SQL is a touchy\nsubject.\n\nThe pgavd conversation is intriguing, but I don't really understand the\nrole of vacuuming. 
Would this be a correct statement: \"PG needs to\nregularly re-evaluate the database in order to adjust itself?\" I'm\nimagining that it continues to treat the table as a small one until\nvacuum informs it that the table is now large?\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 11:16:51 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "\nOn Dec 4, 2003, at 10:11 AM, Andrew Sullivan wrote:\n\n> On Thu, Dec 04, 2003 at 09:57:38AM -0800, Dror Matalon wrote:\n>>\n>> I've seen this comment several times from different people.\n>> Would someone care to explain how you would get data corruption? I\n>> thought that the whole idea of the log is to provide a journal similar\n>> to what you get in a journaling file system.\n>\n>> So what am I missing in this picture?\n>\n> That a journalling file system can _also_ have file corruption if you\n> have write caching enabled and no battery back up. If the drive\n> tells the OS, \"Yep! It's all on the disk!\" bit it is _not_ actually\n> scribed in the little bitty magnetic patterns -- and at that very\n> moment, the power goes away -- the data that was reported to have been\n> on the disk, but which was actually _not_ on the disk, is no longer\n> anywhere. (Well, except in the past. But time travel was disabled\n> some versions ago. ;-)\n\nIt's not just a theoretical problem. It's happened to me on a laptop \ndrive in the last week or so.\n\nI was testing out dbmail by hammering on it on Panther laptop, hfs+ \njournaling enabled, psql 7.4, latest and greatest. I managed to hang \nthe system hard, requiring a reboot. Psql wouldn't start after the \ncrash, complaining of a damaged relation and helpfully telling me that \n'you may need to restore from backup'.\n\nNo big deal on the data loss, since it was a test/hammering \ninstallation. It would have been nice to be able to drop that relation \nor prune the entire database, but I'm sure that would ultimately run \ninto referential integrity problems.\n\neric\n\n\n", "msg_date": "Thu, 4 Dec 2003 11:17:08 -0800", "msg_from": "Eric Soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack,\n\n> Following this, I've done:\n> 2gb ram\n> =\n> 2,000,000,000\n> bytes\n\nThis calculation is fun, but I really don't know where you got it from. It \nseems quite baroque. What are you trying to set, exactly?\n\n> getting the SQL query better optimized for PG is on my todo list, but\n> not something I can do right now -- this application is designed to be\n> cross-platform with MS-SQL, PG, and Oracle so tweaking SQL is a touchy\n> subject.\n\nWell, if you're queries are screwed up, no amount of .conf optimization is \ngoing to help you much. You could criticize that PG is less adept than \nsome other systems at re-writing \"bad queries\", and you would be correct. \nHowever, there's not much to do about that on existing systems.\n\nHow about posting some sample code?\n\n> The pgavd conversation is intriguing, but I don't really understand the\n> role of vacuuming. 
Would this be a correct statement: \"PG needs to\n> regularly re-evaluate the database in order to adjust itself?\" I'm\n> imagining that it continues to treat the table as a small one until\n> vacuum informs it that the table is now large?\n\nNot Vacuum, Analyze. Otherwise correct. Mind you, in \"regular use\" where \nonly a small % of the table changes per hour, periodic ANALYZE is fine. \nHowever, in \"batch data transform\" analyze statements need to be keyed to the \nupdates and/or imports.\n\nBTW, I send a couple of e-mails to the Lyris documentation maintainer about \nupdating out-of-date information about setting up PostgreSQL. I never got a \nresponse, and I don't think my changes were made.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 4 Dec 2003 11:20:21 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 11:20, Josh Berkus wrote:\n> Jack,\n> \n> > Following this, I've done:\n> > 2gb ram\n> > =\n> > 2,000,000,000\n> > bytes\n> \n> This calculation is fun, but I really don't know where you got it from. It \n> seems quite baroque. What are you trying to set, exactly?\nMessage-ID: <[email protected]>\nDate: Thu, 04 Dec 2003 17:12:11 +0000\nFrom: Rob Fielding <[email protected]\n\nI'm trying to set Postgres's shared memory usage in a fashion that\nallows it to return requested results quickly. Unfortunately, none of\nthese changes allow PG to use more than a little under 300M RAM.\nvacuumdb --analyze is now taking an inordinate amount of time as well\n(40 minutes and counting), so that change needs to be rolled back.\n\n> \n> > getting the SQL query better optimized for PG is on my todo list, but\n> > not something I can do right now -- this application is designed to be\n> > cross-platform with MS-SQL, PG, and Oracle so tweaking SQL is a touchy\n> > subject.\n> \n> Well, if you're queries are screwed up, no amount of .conf optimization is \n> going to help you much. You could criticize that PG is less adept than \n> some other systems at re-writing \"bad queries\", and you would be correct. \n> However, there's not much to do about that on existing systems.\n> \n> How about posting some sample code?\n\nTracking that down in CVS and translating from C++ is going to take a\nwhile -- is there a way to get PG to log the queries it's receiving?\n\n> \n> > The pgavd conversation is intriguing, but I don't really understand the\n> > role of vacuuming. Would this be a correct statement: \"PG needs to\n> > regularly re-evaluate the database in order to adjust itself?\" I'm\n> > imagining that it continues to treat the table as a small one until\n> > vacuum informs it that the table is now large?\n> \n> Not Vacuum, Analyze. Otherwise correct. Mind you, in \"regular use\" where \n> only a small % of the table changes per hour, periodic ANALYZE is fine. \n> However, in \"batch data transform\" analyze statements need to be keyed to the \n> updates and/or imports.\n> \n> BTW, I send a couple of e-mails to the Lyris documentation maintainer about \n> updating out-of-date information about setting up PostgreSQL. 
I never got a \n> response, and I don't think my changes were made.\n\nShe sits on the other side of the cube wall from me, and if I find a\ndecent config it's going into the manual -- consider this a golden\nopportunity :-)\n\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 11:50:55 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thursday 04 December 2003 19:50, Jack Coates wrote:\n>\n> I'm trying to set Postgres's shared memory usage in a fashion that\n> allows it to return requested results quickly. Unfortunately, none of\n> these changes allow PG to use more than a little under 300M RAM.\n> vacuumdb --analyze is now taking an inordinate amount of time as well\n> (40 minutes and counting), so that change needs to be rolled back.\n\nYou don't want PG to use all your RAM, it's designed to let the underlying OS \ndo a lot of caching for it. Probably worth having a look at vmstat/iostat and \nsee if it's saturating on I/O.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 4 Dec 2003 20:27:22 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 12:27, Richard Huxton wrote:\n> On Thursday 04 December 2003 19:50, Jack Coates wrote:\n> >\n> > I'm trying to set Postgres's shared memory usage in a fashion that\n> > allows it to return requested results quickly. Unfortunately, none of\n> > these changes allow PG to use more than a little under 300M RAM.\n> > vacuumdb --analyze is now taking an inordinate amount of time as well\n> > (40 minutes and counting), so that change needs to be rolled back.\n> \n> You don't want PG to use all your RAM, it's designed to let the underlying OS \n> do a lot of caching for it. Probably worth having a look at vmstat/iostat and \n> see if it's saturating on I/O.\n\nlatest changes:\nshared_buffers = 35642\nmax_fsm_relations = 1000\nmax_fsm_pages = 10000\nwal_buffers = 64\nsort_mem = 32768\nvacuum_mem = 32768\neffective_cache_size = 10000\n\n/proc/sys/kernel/shmmax = 500000000\n\nIO is active, but hardly saturated. 
CPU load is hefty though, load\naverage is at 4 now.\n\n procs memory swap io \nsystem cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 2 1 2808 11436 39616 1902988 0 0 240 896 765 469 \n2 11 87\n 0 2 1 2808 11432 39616 1902988 0 0 244 848 768 540 \n4 3 93\n 0 2 1 2808 11432 39616 1902984 0 0 204 876 788 507 \n3 4 93\n 0 2 1 2808 11432 39616 1902984 0 0 360 416 715 495 \n4 1 96\n 0 2 1 2808 11432 39616 1902984 0 0 376 328 689 441 \n2 1 97\n 0 2 0 2808 11428 39616 1902976 0 0 464 360 705 479 \n2 1 97\n 0 2 1 2808 11428 39616 1902976 0 0 432 380 718 547 \n3 1 97\n 0 2 1 2808 11428 39616 1902972 0 0 440 372 742 512 \n1 3 96\n 0 2 1 2808 11428 39616 1902972 0 0 416 364 711 504 \n3 1 96\n 0 2 1 2808 11424 39616 1902972 0 0 456 492 743 592 \n2 1 97\n 0 2 1 2808 11424 39616 1902972 0 0 440 352 707 494 \n2 1 97\n 0 2 1 2808 11424 39616 1902972 0 0 456 360 709 494 \n2 2 97\n 0 2 1 2808 11436 39616 1902968 0 0 536 516 807 708 \n3 2 94\n\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 12:37:45 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 4 Dec 2003, Jack Coates wrote:\n\n> On Thu, 2003-12-04 at 12:27, Richard Huxton wrote:\n> > On Thursday 04 December 2003 19:50, Jack Coates wrote:\n> > >\n> > > I'm trying to set Postgres's shared memory usage in a fashion that\n> > > allows it to return requested results quickly. Unfortunately, none of\n> > > these changes allow PG to use more than a little under 300M RAM.\n> > > vacuumdb --analyze is now taking an inordinate amount of time as well\n> > > (40 minutes and counting), so that change needs to be rolled back.\n> > \n> > You don't want PG to use all your RAM, it's designed to let the underlying OS \n> > do a lot of caching for it. Probably worth having a look at vmstat/iostat and \n> > see if it's saturating on I/O.\n> \n> latest changes:\n> shared_buffers = 35642\n> max_fsm_relations = 1000\n> max_fsm_pages = 10000\n> wal_buffers = 64\n> sort_mem = 32768\n> vacuum_mem = 32768\n> effective_cache_size = 10000\n> \n> /proc/sys/kernel/shmmax = 500000000\n> \n> IO is active, but hardly saturated. CPU load is hefty though, load\n> average is at 4 now.\n\nPostgresql is busily managing a far too large shared buffer. Let the \nkernel do that. Postgresql's shared buffers should be bug enough to hold \nas much of the current working set as it can, up to about 25% or so of the \nservers memory, or 512Meg, whichever comes first. Unless a single query \nwill actually use all of the buffer at once, you're not likely to see an \nimprovement.\n\nAlso, your effective cache size is really small. 
On a typical Postgresql \nserver with 2 gigs of ram, you'll have about 1 to 1.5 gigs as kernel cache \nand buffer, and if it's dedicated to postgresql, then the effective cache \nsetting for 1 gig would be 131072 (assuming 8k pages).\n\nIf you're updating a lot of tuples without vacuums, you'll likely want to \nup your fsm settings.\n\nNote you can change things like sort_mem, effective_cache_size and \nrandom_page_cost on the fly (but not buffers, they're allocated at \nstartup, nor fsm, they are as well.)\n\nso, if you're gonna have one huge honkin query that needs to sort a \nhundred megs at a time, but you'd rather not up your sort memory that high \n(sort mem is PER SORT, not per backend or per database, so it can get out \nof hand quickly) then you can just \n\nset sort_mem=128000;\n\nbefore throwing out the big queries that need all the sort.\n\n", "msg_date": "Thu, 4 Dec 2003 14:10:41 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack,\n\n> latest changes:\n> shared_buffers = 35642\n\nThis is fine, it's about 14% of available RAM. Though the way you calculated \nit still confuses me. It's not complicated; it should be between 6% and 15% \nof available RAM; since you're doing a data-transformation DB, yours should \nbe toward the high end. \n\n> max_fsm_relations = 1000\n> max_fsm_pages = 10000\n\nYou want to raise this a whole lot if your data transformations involve large \ndelete or update batches. I'd suggest running \"vacuum analyze verbose\" \nbetween steps to see how many dead pages you're accumulating.\n\n> wal_buffers = 64\n> sort_mem = 32768\n> vacuum_mem = 32768\n> effective_cache_size = 10000\n\nThis is way the heck too low. it's supposed to be the size of all available \nRAM; I'd set it to 2GB*65% as a start.\n\n> IO is active, but hardly saturated. CPU load is hefty though, load\n> average is at 4 now.\n\nUnless you're doing huge statistical aggregates (like radar charts), or heavy \nnumerical calculations-by-query, high CPU and idle I/O usually indicates a \nreally bad query, like badly mismatched data types on a join or unconstrained \njoins or overblown formatting-by-query.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 4 Dec 2003 13:24:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": ">\n> IO is active, but hardly saturated. CPU load is hefty though, load\n> average is at 4 now.\n>\n> procs memory swap io\n> system cpu\n> r b w swpd free buff cache si so bi bo in cs \n> us sy id\n\n> 0 2 1 2808 11432 39616 1902984 0 0 204 876 788 507 \n> 3 4 93\n\nYou're getting a load average of 4 with 93% idle?\n\nThat's a reasonable number of context switches, and if the blocks \nyou're reading/writing are discontinous, I could see io saturation \nrearing it's head.\n\nThis looks to me like you're starting and killing a lot of processes.\n\nIs this thrashing psql connections, or is it one big query? 
What are \nyour active processes?\n\nYour effective cache size looks to be about 1900 megs (+- binary), \nassuming all of it is pg.\n\neric\n \n\n", "msg_date": "Thu, 4 Dec 2003 14:59:45 -0800", "msg_from": "Eric Soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 13:24, Josh Berkus wrote:\n> Jack,\n> \n> > latest changes:\n> > shared_buffers = 35642\n> \n> This is fine, it's about 14% of available RAM. Though the way you calculated \n> it still confuses me. It's not complicated; it should be between 6% and 15% \n> of available RAM; since you're doing a data-transformation DB, yours should \n> be toward the high end. \n> \n> > max_fsm_relations = 1000\n> > max_fsm_pages = 10000\n> \n> You want to raise this a whole lot if your data transformations involve large \n> delete or update batches. I'd suggest running \"vacuum analyze verbose\" \n> between steps to see how many dead pages you're accumulating.\n\nThis looks really difficult to tune, and based on the load I'm giving\nit, it looks really important. I've tried the verbose analyze and I've\nlooked at the rules of thumb, neither approach seems good for the\npattern of \"hammer the system for a day or two, then leave it alone for\na week.\" I'm setting it to 500000 (half of the biggest table size\ndivided by a 6k page size), but I'll keep tweaking this.\n\n> \n> > wal_buffers = 64\n> > sort_mem = 32768\n> > vacuum_mem = 32768\n> > effective_cache_size = 10000\n> \n> This is way the heck too low. it's supposed to be the size of all available \n> RAM; I'd set it to 2GB*65% as a start.\n\nThis makes a little bit of difference. I set it to 65% (15869 pages).\nNow we have some real disk IO:\n procs memory swap io \nsystem cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 3 1 2804 10740 40808 1899856 0 0 26624 0 941 4144 \n13 24 63\n 1 2 1 2804 10808 40808 1899848 0 0 21748 60 1143 3655 \n9 22 69\n\nstill high cpu (3-ish load) though, and there's no noticeable\nimprovement in query speed.\n\n> \n> > IO is active, but hardly saturated. CPU load is hefty though, load\n> > average is at 4 now.\n> \n> Unless you're doing huge statistical aggregates (like radar charts), or heavy \n> numerical calculations-by-query, high CPU and idle I/O usually indicates a \n> really bad query, like badly mismatched data types on a join or unconstrained \n> joins or overblown formatting-by-query.\n\nRan that by the programmer responsible for this area and watched the\nstatements go by with tcpdump -X. Looks like really simple stuff to me:\nselect a handful of values, then insert into one table and delete from\nanother.\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 15:16:11 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 14:59, Eric Soroos wrote:\n> >\n> > IO is active, but hardly saturated. 
CPU load is hefty though, load\n> > average is at 4 now.\n> >\n> > procs memory swap io\n> > system cpu\n> > r b w swpd free buff cache si so bi bo in cs \n> > us sy id\n> \n> > 0 2 1 2808 11432 39616 1902984 0 0 204 876 788 507 \n> > 3 4 93\n> \n> You're getting a load average of 4 with 93% idle?\ndown a bit since my last set of tweaks, but yeah:\n 3:18pm up 2 days, 3:37, 3 users, load average: 3.42, 3.31, 2.81\n66 processes: 65 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 2.0% user, 3.4% system, 0.0% nice, 93.4% idle\nCPU1 states: 1.3% user, 2.3% system, 0.0% nice, 95.2% idle\nMem: 2064656K av, 2053896K used, 10760K free, 0K shrd, 40388K\nbuff\nSwap: 2899716K av, 2800K used, 2896916K free 1896232K\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n23103 root 15 0 1072 1072 840 R 1.3 0.0 0:01 top\n23046 postgres 15 0 33364 32M 32220 S 0.5 1.6 0:12 postmaster\n> \n> That's a reasonable number of context switches, and if the blocks \n> you're reading/writing are discontinous, I could see io saturation \n> rearing it's head.\n> \n> This looks to me like you're starting and killing a lot of processes.\n\nisn't that by design though? I've been looking at other postgres servers\naround the company and they seem to act pretty similar under load (none\nis being pounded to this level, though).\n\n> \n> Is this thrashing psql connections, or is it one big query? What are \n> your active processes?\n\n[root@postgres root]# ps auxw | grep postgres\npostgres 23042 0.0 0.4 308808 8628 pts/0 S 14:46 0:00\n/usr/bin/postmaster -p 5432\npostgres 23043 0.0 0.4 309788 8596 pts/0 S 14:46 0:00 postgres:\nstats buffer process \npostgres 23044 0.0 0.4 308828 8620 pts/0 S 14:46 0:00 postgres:\nstats collector process \npostgres 23046 0.6 1.4 309952 29872 pts/0 R 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 INSERT waiting\npostgres 23047 1.4 14.7 310424 304240 pts/0 S 14:46 0:21 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23048 0.4 14.7 310044 304368 pts/0 S 14:46 0:07 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23049 0.0 0.5 309820 10352 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23050 0.0 0.6 310424 13352 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23051 0.0 0.6 309940 12992 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23052 0.0 0.5 309880 11916 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23053 0.0 0.6 309924 12872 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23054 0.0 0.6 310012 13460 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23055 0.0 0.5 309932 12284 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23056 2.0 14.7 309964 304072 pts/0 S 14:46 0:30 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23057 2.4 14.7 309916 304104 pts/0 S 14:46 0:37 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23058 0.0 0.6 310392 13168 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23059 0.5 14.7 310424 304072 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23060 0.0 0.6 309896 13212 pts/0 S 14:46 0:00 postgres:\nlmuser lmdb 10.0.0.2 idle\npostgres 23061 0.5 1.4 309944 29832 pts/0 R 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 INSERT\npostgres 23062 0.6 1.4 309936 29832 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 INSERT waiting\npostgres 23063 0.6 1.4 309944 30028 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 INSERT waiting\npostgres 23064 0.6 1.4 309944 29976 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 
10.0.0.2 INSERT waiting\npostgres 23065 1.4 14.7 310412 304112 pts/0 S 14:46 0:21 postgres:\nlmuser lmdb 216.91.56.200 idle\npostgres 23066 0.5 1.4 309944 29496 pts/0 S 14:46 0:08 postgres:\nlmuser lmdb 216.91.56.200 INSERT waiting\npostgres 23067 0.5 1.4 310472 30040 pts/0 D 14:46 0:09 postgres:\nlmuser lmdb 216.91.56.200 idle\npostgres 23068 0.6 1.4 309936 30104 pts/0 R 14:46 0:09 postgres:\nlmuser lmdb 216.91.56.200 INSERT waiting\npostgres 23069 0.5 1.4 309936 29716 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 216.91.56.200 INSERT waiting\npostgres 23070 0.6 1.4 309944 29744 pts/0 S 14:46 0:09 postgres:\nlmuser lmdb 10.0.0.2 INSERT waiting\n\nten-ish stay idle all the time, the inserts go to update when the big\nselect is done and rows get moved from the active to the completed\ntable.\n\n> Your effective cache size looks to be about 1900 megs (+- binary), \n> assuming all of it is pg.\n> \n> eric\n> \n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 15:20:08 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thursday 04 December 2003 23:16, Jack Coates wrote:\n>\n> > > effective_cache_size = 10000\n> >\n> > This is way the heck too low. it's supposed to be the size of all\n> > available RAM; I'd set it to 2GB*65% as a start.\n>\n> This makes a little bit of difference. I set it to 65% (15869 pages).\n\nThat's still only about 127MB (15869 * 8KB).\n\n> Now we have some real disk IO:\n> procs memory swap io\n> system cpu\n> r b w swpd free buff cache si so bi bo in cs us\n> sy id\n> 0 3 1 2804 10740 40808 1899856 0 0 26624 0 941 4144\n\nAccording to this your cache is currently 1,899,856 KB which in 8KB blocks is \n237,482 - be frugal and say effective_cache_size = 200000 (or even 150000 if \nthe trace above isn't typical).\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 4 Dec 2003 23:47:54 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Thu, 2003-12-04 at 15:47, Richard Huxton wrote:\n> On Thursday 04 December 2003 23:16, Jack Coates wrote:\n> >\n> > > > effective_cache_size = 10000\n> > >\n> > > This is way the heck too low. it's supposed to be the size of all\n> > > available RAM; I'd set it to 2GB*65% as a start.\n> >\n> > This makes a little bit of difference. I set it to 65% (15869 pages).\n> \n> That's still only about 127MB (15869 * 8KB).\n\nyeah, missed the final digit when I copied it into the postgresql.conf\n:-( Just reloaded with 158691 pages.\n> \n> > Now we have some real disk IO:\n> > procs memory swap io\n> > system cpu\n> > r b w swpd free buff cache si so bi bo in cs us\n> > sy id\n> > 0 3 1 2804 10740 40808 1899856 0 0 26624 0 941 4144\n> \n> According to this your cache is currently 1,899,856 KB which in 8KB blocks is \n> 237,482 - be frugal and say effective_cache_size = 200000 (or even 150000 if \n> the trace above isn't typical).\n\nd'oh, just realized what you're telling me here. /me smacks forehead.\nLet's try effective_cache of 183105... (75%). 
Starting both servers,\nwaiting for big fetch to start, and...\n\n procs memory swap io \nsystem cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 0 0 2800 11920 40532 1906516 0 0 0 0 521 8 \n0 0 100\n 0 1 0 2800 11920 40532 1906440 0 0 356 52 611 113 \n1 3 97\n 0 1 0 2800 11920 40532 1906424 0 0 20604 0 897 808 \n1 18 81\n 0 1 0 2800 11920 40532 1906400 0 0 26112 0 927 820 \n1 13 87\n 0 1 0 2800 11920 40532 1906384 0 0 26112 0 923 812 \n1 12 87\n 0 1 0 2800 11920 40532 1906372 0 0 24592 0 921 805 \n1 13 87\n 0 1 0 2800 11920 40532 1906368 0 0 3248 48 961 1209 \n0 4 96\n 0 1 0 2800 11920 40532 1906368 0 0 2600 0 845 1631 \n0 2 98\n 0 1 0 2800 11920 40532 1906364 0 0 2728 0 871 1714 \n0 2 98\n\nbetter in vmstat... but the query doesn't work any better unfortunately.\n\nThe frustrating thing is, we also have a UP P3-500 with 512M RAM and two\nIDE drives with the same PG install which is doing okay with this load\n-- still half the speed of MS-SQL2K, but usable. I'm at a loss.\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Thu, 04 Dec 2003 16:32:06 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": ">\n> d'oh, just realized what you're telling me here. /me smacks forehead.\n> Let's try effective_cache of 183105... (75%). Starting both servers,\n> waiting for big fetch to start, and...\n>\n> procs memory swap io\n> system cpu\n> r b w swpd free buff cache si so bi bo in cs us\n> sy id\n> 0 0 0 2800 11920 40532 1906516 0 0 0 0 521 8\n> 0 0 100\n> 0 1 0 2800 11920 40532 1906440 0 0 356 52 611 113\n> 1 3 97\n> 0 1 0 2800 11920 40532 1906424 0 0 20604 0 897 808\n> 1 18 81\n> 0 1 0 2800 11920 40532 1906400 0 0 26112 0 927 820\n> 1 13 87\n> 0 1 0 2800 11920 40532 1906384 0 0 26112 0 923 812\n> 1 12 87\n> 0 1 0 2800 11920 40532 1906372 0 0 24592 0 921 805\n> 1 13 87\n> 0 1 0 2800 11920 40532 1906368 0 0 3248 48 961 1209\n> 0 4 96\n> 0 1 0 2800 11920 40532 1906368 0 0 2600 0 845 1631\n> 0 2 98\n> 0 1 0 2800 11920 40532 1906364 0 0 2728 0 871 1714\n> 0 2 98\n>\n> better in vmstat... but the query doesn't work any better \n> unfortunately.\n\nYour io now looks like you're getting a few seconds of continuous read, \nand then you're getting into maxing out random reads. These look about \nright for a single ide drive.\n\n> The frustrating thing is, we also have a UP P3-500 with 512M RAM and \n> two\n> IDE drives with the same PG install which is doing okay with this load\n> -- still half the speed of MS-SQL2K, but usable. I'm at a loss.\n\nI wonder if you're doing table scans. From the earlier trace, it looked \nlike you have a few parallel select/process/insert processes going.\n\nIf that's the case, you might be getting a big sequential scan at \nfirst, then at some point you have enough selects going that it wtarts \nlooking more like random access.\n\nCan you run one of the selects from the psql console and see how fast \nit runs? Do your inserts have any foreign key relations?\n\nOne thing you might try is to shut down the postmaster and move the \npg_clog and pg_xlog directories to the other drive, and leave symlinks \npointing back. That should help your insert performance by putting the \nwal on a seperate drive from the table data. It will really help if you \nwind up having uncached read and write access at the same time. 
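As a rough sketch of those steps (the paths here are only examples, not taken from this setup, and the postmaster has to be shut down first), the move-and-symlink could be scripted like this:

    import os
    import shutil

    PGDATA = "/var/lib/pgsql/data"    # assumed data directory
    TARGET = "/mnt/disk2/pgparts"     # assumed mount point on the second drive

    if not os.path.isdir(TARGET):
        os.makedirs(TARGET)

    for name in ("pg_xlog", "pg_clog"):
        src = os.path.join(PGDATA, name)
        dst = os.path.join(TARGET, name)
        shutil.move(src, dst)   # relocate the directory onto the other spindle
        os.symlink(dst, src)    # leave a symlink in $PGDATA pointing back
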
You \nalso might gain by using software raid 0 (with large stripe size, 512k \nor so) across both drives, but if you don't have the appropriate \nparitions in there now it's going to be a bunch of work.\n\neric\n\n", "msg_date": "Thu, 4 Dec 2003 20:52:22 -0800", "msg_from": "Eric Soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack Coates wrote:\n\n>\n> latest changes:\n> shared_buffers = 35642\n> max_fsm_relations = 1000\n> max_fsm_pages = 10000\n> wal_buffers = 64\n> sort_mem = 32768\n> vacuum_mem = 32768\n> effective_cache_size = 10000\n>\n> /proc/sys/kernel/shmmax = 500000000\n>\n> IO is active, but hardly saturated. CPU load is hefty though, load\n> average is at 4 now.\n>\n> procs memory swap io\n> system cpu\n> r b w swpd free buff cache si so bi bo in cs us\n> sy id\n> 0 2 1 2808 11436 39616 1902988 0 0 240 896 765 469\n> 2 11 87\n> 0 2 1 2808 11432 39616 1902988 0 0 244 848 768 540\n> 4 3 93\n> 0 2 1 2808 11432 39616 1902984 0 0 204 876 788 507\n> 3 4 93\n> 0 2 1 2808 11432 39616 1902984 0 0 360 416 715 495\n> 4 1 96\n> 0 2 1 2808 11432 39616 1902984 0 0 376 328 689 441\n> 2 1 97\n> 0 2 0 2808 11428 39616 1902976 0 0 464 360 705 479\n> 2 1 97\n> 0 2 1 2808 11428 39616 1902976 0 0 432 380 718 547\n> 3 1 97\n> 0 2 1 2808 11428 39616 1902972 0 0 440 372 742 512\n> 1 3 96\n> 0 2 1 2808 11428 39616 1902972 0 0 416 364 711 504\n> 3 1 96\n> 0 2 1 2808 11424 39616 1902972 0 0 456 492 743 592\n> 2 1 97\n> 0 2 1 2808 11424 39616 1902972 0 0 440 352 707 494\n> 2 1 97\n> 0 2 1 2808 11424 39616 1902972 0 0 456 360 709 494\n> 2 2 97\n> 0 2 1 2808 11436 39616 1902968 0 0 536 516 807 708\n> 3 2 94\n>\n\nHi Jack,\n\nAs show by vmstat, your Operating System is spending 96% of its time in Idle. On\nRedHat 8.0 IA32, Idle means idle and Wait I/O.\nIn your case, i think they are Wait I/O as you are working on 2.8 GB DB with only\n2GB RAM, but it should be arround 30%.\nYour performances whould increase only if User CPU increase otherwise, for exemple\nif your system swap, only Sys CPU whould increase and your application will stay\nslow.\n\nYou can better check your I/O with : iostat 3 1000, and check that the max tps are\non the database filesystem.\n\nSo, all the Postgres tuning you have tried do not change a lot as the bottleneck is\nyour I/O throuput.\nBut, one thing you can check is which parts of Postgres need a lot of I/O.\nTo do that, after shuting down PG, move your database on an other disk (OS disk ?)\nfor exemple /mypg/data and create a symblolic link for /mypg/data/<mydb> to\n$PGDATA/base.\n\nRestart PG, and while you execute your application, check with iostat which disk as\nthe max of tps. I bet, it is the disk where the WAL buffer are logged.\n\nOne more thing about I/O, for an IDE disk, the maximum number of Write Block + Read\nBlock per sec is about 10000 based on the I/O block size is 1 K. That means 10\nMb/s. 
if you need more, you can try Stripped SCSI disks or RAID0 subsystem disks.\n\nThierry Missimilly\n\n>\n> --\n> Jack Coates, Lyris Technologies Applications Engineer\n> 510-549-4350 x148, [email protected]\n> \"Interoperability is the keyword, uniformity is a dead end.\"\n> --Olivier Fourdan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly", "msg_date": "Fri, 05 Dec 2003 10:13:09 +0100", "msg_from": "Thierry Missimilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack,\n\n> The frustrating thing is, we also have a UP P3-500 with 512M RAM and two\n> IDE drives with the same PG install which is doing okay with this load\n> -- still half the speed of MS-SQL2K, but usable. I'm at a loss.\n\nOverall, I'm really getting the feeling that this procedure was optimized for \nOracle and/or MSSQL and is hitting some things that aren't such a good idea \nfor PostgreSQL. I highly suggest that you try using log_duration and \nlog_statement (and in 7.4 log_min_duration_statement) to try to locate which \nparticular statements are taking the longest.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 5 Dec 2003 09:26:05 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Fri, 2003-12-05 at 09:26, Josh Berkus wrote:\n> Jack,\n> \n> > The frustrating thing is, we also have a UP P3-500 with 512M RAM and two\n> > IDE drives with the same PG install which is doing okay with this load\n> > -- still half the speed of MS-SQL2K, but usable. I'm at a loss.\n> \n> Overall, I'm really getting the feeling that this procedure was optimized for \n> Oracle and/or MSSQL and is hitting some things that aren't such a good idea \n> for PostgreSQL. I highly suggest that you try using log_duration and \n> log_statement (and in 7.4 log_min_duration_statement) to try to locate which \n> particular statements are taking the longest.\n\nI'll definitely buy that as round two of optimization, but round one is\nstill \"it's faster on the slower server.\"\n\nhdparm -I is identical between the boxes, filesystem structure layout is\nidentical, disk organization isn't identical, but far worse: the UP low\nram box has PG on /dev/hdb, ew. Predictably, vmstat shows low numbers...\nbut steady numbers.\n\ndev is the box which goes fast, and I was wrong, it's actually a 2GHz\nP4. rufus is the box which goes slow. During the big fetch:\ndev bi sits around 2000 blocks for twenty seconds while bo is around 50\nblocks, then bo jumps to 800 or so while the data is returned, then\nwe're done.\n\nrufus bi starts at 16000 blocks, then drops steadily while bo climbs.\nAfter a minute or so, bi stabilizes at 4096 blocks, then bo bursts to\nreturn the data. 
Then the next fetch starts, and it's bi of 500, bo of\n300 for several minutes.\n\nThese observations certainly all point to Eric and Thierry's\nrecommendations to better organize the filesystem and get faster disks..\nexcept that the dev box gets acceptable performance.\n\nSo, I've dug into postgresql.conf on dev and rufus, and here's what I\nfound:\n\n
RUFUS:\n- how much ram do you have? 75% of that, converted to 8K pages, for effective_cache\n- 15% of that or 512M, whichever is larger, converted to 8K pages, for shared_buffers\n- 15% of that, converted to 8K pages, for vacuum_mem\n- how many messages will you send between vacuums? divide that by 2 and divide by 6 for max_fsm_pages\n\n
DEV:\n- how much ram do you have? 48% of that, converted to 8K pages, for effective_cache\n- 6.5% of that or 512M, whichever is larger, converted to 8K pages, for shared_buffers\n- 52% of that, converted to 8K pages, for vacuum_mem\n- max_fsm_pages untouched on this box.\n\n
I adjusted rufus's configuration to match those percentages, but left\nmax_fsm_pages dialed up to 500000. Now Rufus's vmstat shows much better\nbehavior: bi 12000 blocks gradually sloping down to 3000 during the big\nselect, bo steady until it's ready to return. As more jobs come in, we\nsee overlap areas where bi is 600-ish and bo is 200-ish, but they only\nlast a few tens of seconds.\n\nThe big selects are still a lot slower than they are on the smaller\ndatabase and overall performance is still unacceptable. Next I dialed\nmax_fsm_pages back down to 10000 -- no change. Hm, maybe it's been too\nlong since the last vacuumdb --analyze, let's give it another.\n\nhdparm -Tt shows that disk performance is crappo on rufus, half what it\nis on dev -- and freaking dev is using 16 bit IO! This is a motherboard\nIDE controller issue.\n\nSouth Bridge: VIA vt8233\nRevision: ISA 0x0 IDE 0x6\n\nThat's it, I'm throwing out this whole test series and starting over\nwith different hardware. Database server is now a dual 2GHz Xeon with\n2GB RAM & 2940UW SCSI, OS and PG's logs on 36G drive, PG data on 9GB\ndrive. Data is importing now and I'll restart the tests tonight.\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Fri, 05 Dec 2003 17:22:42 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "On Fri, 2003-12-05 at 17:22, Jack Coates wrote:\n...\n> That's it, I'm throwing out this whole test series and starting over\n> with different hardware. Database server is now a dual 2GHz Xeon with\n> 2GB RAM & 2940UW SCSI, OS and PG's logs on 36G drive, PG data on 9GB\n> drive. Data is importing now and I'll restart the tests tonight.\n\nSorry to reply at myself, but thought I'd note that the performance is\npractically unchanged by moving to better hardware and separating logs\nand data onto different spindles. 
Although the disks are twice as fast\nby hdparm -Tt, their behavior as shown by iostat and vmstat is little\ndifferent between dual and dev (single P4-2GHz/512MB/(2)IDE drives).\nDual is moderately faster than my first, IDE-based testbed (about 8%),\nbut still only 30% as fast as the low-powered dev.\n\nI've been running vacuumdb --analyze and/or vaccuumdb --full between\neach config change, and I also let the job run all weekend. Saturday it\ngot --analyze every three hours or so, Sunday it got --analyze once in\nthe morning. None of these vacuumdb's are making any difference.\n\nTheories at this point, in no particular order:\n\na) major differences between my 7.3.4 from source (compiled with no\noptions) and dev's 7.3.2-1PGDG RPMs. Looking at the spec file doesn't\nreveal anything glaring to me, but is there something I'm missing?\n\nb) major differences between my kernel 2.4.18-14smp (RH8) and dev's\nkernel 2.4.18-3 (RH7.3).\n\nc) phase of the moon.\n\nWhile SQL optimization is likely to improve performance across the\nboard, it doesn't explain the differences between these two systems and\nI'd like to avoid it as a theory until the fast box can perform as well\nas the slow box.\n\nAny ideas? Thanks in advance,\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Mon, 08 Dec 2003 09:43:45 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack Coates <[email protected]> writes:\n> Theories at this point, in no particular order:\n\n> a) major differences between my 7.3.4 from source (compiled with no\n> options) and dev's 7.3.2-1PGDG RPMs. Looking at the spec file doesn't\n> reveal anything glaring to me, but is there something I'm missing?\n\nThere are quite a few performance-related patches between 7.3.2 and\n7.3.4. Most of them should be in 7.3.4's favor but there are some\nplaces where we had to take a performance hit in order to have a\nsuitably low-risk fix for a bug. You haven't told us enough about\nthe problem to know if any of those cases apply, though. AFAIR\nyou have not actually showed either the slow query or EXPLAIN ANALYZE\nresults for it on the two boxes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Dec 2003 14:19:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions " }, { "msg_contents": "On Mon, 2003-12-08 at 11:19, Tom Lane wrote:\n> Jack Coates <[email protected]> writes:\n> > Theories at this point, in no particular order:\n> \n> > a) major differences between my 7.3.4 from source (compiled with no\n> > options) and dev's 7.3.2-1PGDG RPMs. Looking at the spec file doesn't\n> > reveal anything glaring to me, but is there something I'm missing?\n> \n> There are quite a few performance-related patches between 7.3.2 and\n> 7.3.4. Most of them should be in 7.3.4's favor but there are some\n> places where we had to take a performance hit in order to have a\n> suitably low-risk fix for a bug. You haven't told us enough about\n> the problem to know if any of those cases apply, though. 
AFAIR\n> you have not actually showed either the slow query or EXPLAIN ANALYZE\n> results for it on the two boxes ...\n> \n> \t\t\tregards, tom lane\n\nRight, because re-architecture of a cross-platform query makes sense if\nperformance is bad on all systems, but is questionable activity when\nperformance is fine on some systems and lousy on others. Hence my\nstatement that while SQL optimization is certainly something we want to\ndo for across-the-board performance increase, I wanted to focus on other\nissues for troubleshooting this problem. I will be back to ask about\ndata access models later :-)\n\nI ended up going back to a default postgresql.conf and reapplying the\nvarious tunings one-by-one. Turns out that while setting fsync = false\nhad little effect on the slow IDE box, it had a drastic effect on this\nfaster SCSI box and performance is quite acceptable now (aside from the\nexpected falloff of about 30% after the first twenty minutes, which I\nbelieve comes from growing and shrinking tables without vacuumdb\n--analyzing).\n\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n", "msg_date": "Tue, 09 Dec 2003 08:57:53 -0800", "msg_from": "Jack Coates <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning questions" }, { "msg_contents": "> I ended up going back to a default postgresql.conf and reapplying the\n> various tunings one-by-one. Turns out that while setting fsync = false\n> had little effect on the slow IDE box, it had a drastic effect on this\n> faster SCSI box and performance is quite acceptable now (aside from the\n> expected falloff of about 30% after the first twenty minutes, which I\n> believe comes from growing and shrinking tables without vacuumdb\n> --analyzing).\n\nHmm. I wonder if that could be related to the issue where many IDE drives have write-caching enabled. With the write cache enabled\nfsyncs are nearly immediate, so setting fsync=false makes little difference...\n\n\n\n\n", "msg_date": "Tue, 9 Dec 2003 17:07:53 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Jack,\n\n> Right, because re-architecture of a cross-platform query makes sense if\n> performance is bad on all systems, but is questionable activity when\n> performance is fine on some systems and lousy on others. Hence my\n> statement that while SQL optimization is certainly something we want to\n> do for across-the-board performance increase, I wanted to focus on other\n> issues for troubleshooting this problem. I will be back to ask about\n> data access models later :-)\n\nYes, but an EXPLAIN ANALYZE will also help show issues like sorts running out \nof memory, etc. Really, we don't currently have enough information to do \nmore than speculate; it's like trying to repair a car engine wearing a \nblindfold.\n\nParticularly since it's possible that there are only 1 or 2 \"bad queries\" \nwhich are messing everything else up.\n\nFor that matter, it would really help to know:\n-- How many simulatneous connections are running update queries during this \nprocess?\n-- How about some sample VACUUM VERBOSE results for the intra-process vacuums?\n\n> I ended up going back to a default postgresql.conf and reapplying the\n> various tunings one-by-one. 
Turns out that while setting fsync = false\n> had little effect on the slow IDE box, it had a drastic effect on this\n> faster SCSI box and performance is quite acceptable now (aside from the\n> expected falloff of about 30% after the first twenty minutes, which I\n> believe comes from growing and shrinking tables without vacuumdb\n> --analyzing).\n\nWell, that brings 2 things immediately to mind:\n1) That may improve performance, but it does mean that if your machine loses \npower you *will* be restoring from backup. It's risky to do.\n\n2) Your IDE system has write-caching enabled. Once again, this is a nice \nperformmance boost, if you don't mind database corruption in a power-out.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 9 Dec 2003 09:35:04 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning questions" }, { "msg_contents": "Hi,\n\nI have a question about the COPY statement. I am using PGSQL(7.3.4) with\npython-2.3 on RedHat v8 machine. The problem I have is the following.\n\nUsing pg module in python I am trying to run the COPY command to populate\nthe large table. I am using this to replace the INSERT which takes about\nfew hours to add 70000 entries where copy takes minute and a half. Now\nthese stats come from the NetBSD machine I also use which doesn't have\nthis problem but has same python and same pgsql installed.\n\nMy understanding is that COPY workes FROM 'filename' or STDIN where the\nlast characters are '.\\\\n'. I tried using the copy from 'filename' and as\nI said NetBSD is not complaining where I get the following error on Linux\nmachine even if permissions on the data file are 777:\n\n _pg.error: ERROR: COPY command, running in backend with effective uid\n 26, could not open file '/home/slavisa/.nimrod/experiments/demo/ejdata'\n for reading. Errno = Permission denied (13).\n\nI can't figure out why would this be occuring so I wanted to switch to\nFROM STDIN option but I got stuck here due to lack of knowledge I have to\nadmit. \n\nWhat I would like to ask anyone who knows anything about this. If you know\nwhat the problem is with FROM file option or you know how to get COPY FROM\nSTDIN working from within the python (or any other) program, help would be\ngreatly appreciated,\n\nRegards,\nSlavisa\n\n", "msg_date": "Tue, 03 Feb 2004 11:55:31 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": false, "msg_subject": "COPY from question" }, { "msg_contents": "\n\nOn Tue, 3 Feb 2004, Slavisa Garic wrote:\n\n> My understanding is that COPY workes FROM 'filename' or STDIN where the\n> last characters are '.\\\\n'. I tried using the copy from 'filename' and as\n> I said NetBSD is not complaining where I get the following error on Linux\n> machine even if permissions on the data file are 777:\n>\n> _pg.error: ERROR: COPY command, running in backend with effective uid\n> 26, could not open file '/home/slavisa/.nimrod/experiments/demo/ejdata'\n> for reading. Errno = Permission denied (13).\n>\n\nThis is probably a permissions problem at a higher level, check the\npermissions on the directories in the path.\n\nKris Jurka\n\n", "msg_date": "Mon, 2 Feb 2004 23:01:35 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY from question" }, { "msg_contents": "Slavisa Garic <[email protected]> writes:\n> ... 
I get the following error on Linux\n> machine even if permissions on the data file are 777:\n\n> _pg.error: ERROR: COPY command, running in backend with effective uid\n> 26, could not open file '/home/slavisa/.nimrod/experiments/demo/ejdata'\n> for reading. Errno = Permission denied (13).\n\nMost likely the postgres user doesn't have read permission for one of\nthe directories in that path.\n\n\t\t\tregards, tom lane\n\nPS: this didn't really belong on pghackers.\n", "msg_date": "Mon, 02 Feb 2004 23:07:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY from question " }, { "msg_contents": "Slavisa Garic wrote:\n> Using pg module in python I am trying to run the COPY command to populate\n> the large table. I am using this to replace the INSERT which takes about\n> few hours to add 70000 entries where copy takes minute and a half. \n\nThat difference in speed seems quite large. Too large. Are you batching\nyour INSERTs into transactions (you should be in order to get good\nperformance)? Do you have a ton of indexes on the table? Does it have\ntriggers on it or some other thing (if so then COPY may well wind up doing\nthe wrong thing since the triggers won't fire for the rows it inserts)?\n\nI don't know what kind of schema you're using, but it takes perhaps a\ncouple of hours to insert 2.5 million rows on my system. But the rows\nin my schema may be much smaller than yours.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Tue, 3 Feb 2004 02:57:46 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY from question" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Slavisa Garic wrote:\n>> Using pg module in python I am trying to run the COPY command to populate\n>> the large table. I am using this to replace the INSERT which takes about\n>> few hours to add 70000 entries where copy takes minute and a half. \n\n> That difference in speed seems quite large. Too large. Are you batching\n> your INSERTs into transactions (you should be in order to get good\n> performance)? Do you have a ton of indexes on the table? Does it have\n> triggers on it or some other thing (if so then COPY may well wind up doing\n> the wrong thing since the triggers won't fire for the rows it\n> inserts)?\n\nCOPY *does* fire triggers, and has done so for quite a few releases.\n\nMy bet is that the issue is failing to batch individual INSERTs into\ntransactions. On a properly-set-up machine you can't get more than one\ntransaction commit per client per disk revolution, so the penalty for\ntrivial transactions like single inserts is pretty steep.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Feb 2004 10:10:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY from question " }, { "msg_contents": "Hi Kevin,\n\nOn Tue, 3 Feb 2004, Kevin Brown wrote:\n\n> Slavisa Garic wrote:\n> > Using pg module in python I am trying to run the COPY command to populate\n> > the large table. I am using this to replace the INSERT which takes about\n> > few hours to add 70000 entries where copy takes minute and a half. \n> \n> That difference in speed seems quite large. Too large. Are you batching\n> your INSERTs into transactions (you should be in order to get good\n> performance)? Do you have a ton of indexes on the table? 
Does it have\n> triggers on it or some other thing (if so then COPY may well wind up doing\n> the wrong thing since the triggers won't fire for the rows it inserts)?\n> \n> I don't know what kind of schema you're using, but it takes perhaps a\n> couple of hours to insert 2.5 million rows on my system. But the rows\n> in my schema may be much smaller than yours.\n\nYou are right about the indexes. There is quite a few of them (5-6 without\nlooking at the schema). The problem is that I do need those indexes as I\nhave a lot of SELECTs on that table and inserts are only happening once.\n\nYou are also right about the rows (i think) as I have about 15-20 columns.\nThis could be split into few other table and it used to be but I have\nmerged them because of the requirement for the faster SELECTs. With the\ncurrent schema there most of my modules that access the database are not\nrequired to do expensive JOINs as they used to. Because faster SELECTs are\nmore important to me then faster INSERTs I had to do this. THis wasn't a\nproblem for me until I have started creating experiments which had more\nthan 20 thousand jobs which translates to 20 thousand rows in this big\ntable.\n\nI do batch INSERTs into one big transaction (1000 rows at a time). While i\ndid get some improvement compared to the single transaction per insert it\nwas still not fast enough (well not for me :) ). Could you please\nelaborate on the triggers? I have no idea what kind of triggers there are\nin PGSQL or relational databases.\n\nWith regards to my problem, I did solve it by piping the data into the\nCOPY stdin. Now I have about 75000 rows inserted in 40 seconds which is\nextremely good for me.\n\nThank you for your help,\nRegards,\nSlavisa\n\n \n> -- \n> Kevin Brown\t\t\t\t\t [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 05 Feb 2004 11:42:32 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY from question" }, { "msg_contents": "Hi,\n\nI have a quick question. In order to speed up insertion of large number of\nrows (100s of thousands) I replaced the INSERT with the COPY. This works\nfine but one question popped into my mind. Does copy updates indexes on\nthat table if there are some defined?\n\nThanks,\nSlavisa\n\n\n", "msg_date": "Fri, 06 Feb 2004 11:46:57 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": false, "msg_subject": "COPY with INDEXES question" }, { "msg_contents": "Thanks for the reply and thanks even more for the good one :).\n\nCheers,\nSlavisa\n\nOn Fri, 6 Feb 2004, Christopher Kings-Lynne wrote:\n\n> > I have a quick question. In order to speed up insertion of large number of\n> > rows (100s of thousands) I replaced the INSERT with the COPY. This works\n> > fine but one question popped into my mind. Does copy updates indexes on\n> > that table if there are some defined?\n> \n> Yes, of course. Runs triggers and stuff as well.\n> \n> Chris\n> \n> \n\n", "msg_date": "Fri, 06 Feb 2004 13:52:28 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with INDEXES question" }, { "msg_contents": "> I have a quick question. In order to speed up insertion of large number of\n> rows (100s of thousands) I replaced the INSERT with the COPY. This works\n> fine but one question popped into my mind. 
Does copy updates indexes on\n> that table if there are some defined?\n\nYes, of course. Runs triggers and stuff as well.\n\nChris\n\n", "msg_date": "Fri, 06 Feb 2004 10:54:12 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with INDEXES question" }, { "msg_contents": "On Thu, 2004-02-05 at 19:46, Slavisa Garic wrote:\n> Hi,\n> \n> I have a quick question. In order to speed up insertion of large number of\n> rows (100s of thousands) I replaced the INSERT with the COPY. This works\n> fine but one question popped into my mind. Does copy updates indexes on\n> that table if there are some defined?\n\nCopy does nearly everything that standard inserts to. RULES are the only\nthing that come to mind. Triggers, indexes, constraints, etc. are all\napplied.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "Thu, 05 Feb 2004 21:55:28 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with INDEXES question" } ]
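Taken together, the two speedups that come out of the COPY discussion above are: commit many INSERTs in a single transaction instead of one commit per row, and use COPY for bulk loads. A rough sketch of both in DB-API-style Python follows; the table, column, file, and database names are placeholders only, and the connect call assumes the pyPgSQL driver used in the thread:

    from pyPgSQL import PgSQL

    db = PgSQL.connect(database="mydb")   # placeholder database name
    cur = db.cursor()

    # 1) Batch the INSERTs: one commit for the whole batch, not one per row.
    for i in range(70000):
        cur.execute("INSERT INTO jobs (recid, status) VALUES (%s, %s)", i, "pending")
    db.commit()

    # 2) Or load a pre-built data file with COPY.  The file is opened by the
    #    backend, not the client, so it and every directory above it must be
    #    readable by the postgres user (the "Errno = Permission denied (13)"
    #    problem discussed above).
    cur.execute("COPY jobs FROM '/tmp/jobs.dat'")
    db.commit()

COPY FROM STDIN, which is what Slavisa ended up piping the data into, avoids the file-permission issue entirely because the client streams the rows; the exact call for that depends on the driver.
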
[ { "msg_contents": "Hello!\n\nI am relative newcomer to SQL and PostgreSQL world, so please forgive me\nif this question is stupid.\n\nI am experiencing strange behaviour, where simple UPDATE of one field is\nvery slow, compared to INSERT into table with multiple indexes. I have\ntwo tables - one with raw data records (about 24000), where one field\ncontains status information (varchar(10)). First table has no indexes,\nonly primary key (recid). Second table contains processed records - some\nfields are same as first table, others are calculated during processing.\nRecords are processed by Python script, which uses PyPgSQL for PostgreSQL\naccess.\n\nProcessing is done by selecting all records from table1 where status\nmatches certain criteria (import). Each record is processed and results\nare inserted into table2, after inserting status field on same record in\ntable1 is updated with new value (done). Update statement itself is\nextremely simple: \"update table1 set status = 'done' where recid = ...\"\n\nMost interesting is, that insert takes 0.004 seconds in average, but\nupdate takes 0.255 seconds in average. Processing of 24000 records took\naround 1 hour 20 minutes.\n\nThen i changed processing logic not to update every record in table1\nafter processing. Instead i did insert recid value into temporary table\nand updated records in table1 after all records were processed and\ninserted into table2:\nUPDATE table1 SET Status = 'done' WHERE recid IN (SELECT recid FROM temptable)\n\nThis way i got processing time of 24000 records down to about 16 minutes.\nAbout 13 minutes from this took last UPDATE statement.\n\nWhy is UPDATE so slow compared to INSERT? I would expect more or less\nsimilar performance, or slower on insert since table2 has four indexes\nin addition to primary key, table1 has only primary key, which is used\non update. Am i doing something wrong or is this normal?\n\nI am using PostgreSQL 7.3.4, Debian/GNU Linux 3.0 (Woody),\nkernel 2.4.21, Python 2.3.2, PyPgSQL 2.4\n\n-- \nIvar Zarans\n\n", "msg_date": "Thu, 4 Dec 2003 20:57:51 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thu, 4 Dec 2003 20:57:51 +0200\nIvar Zarans <[email protected]> wrote:\n.\n\n> table1 is updated with new value (done). Update statement itself is\n> extremely simple: \"update table1 set status = 'done' where recid =\n> ...\"\n> \n> Most interesting is, that insert takes 0.004 seconds in average, but\n> update takes 0.255 seconds in average. Processing of 24000 records\n> took around 1 hour 20 minutes.\n\nDo you have an index on recid?\n\nand did you vacuum analyze after you loaded up the data?\n\n> \n> Then i changed processing logic not to update every record in table1\n> after processing. Instead i did insert recid value into temporary\n> table and updated records in table1 after all records were processed\n> and inserted into table2:\n> UPDATE table1 SET Status = 'done' WHERE recid IN (SELECT recid FROM\n> temptable)\n> \n\n\"IN\" queries are terribly slow on versions before 7.4\n\n> Why is UPDATE so slow compared to INSERT? I would expect more or less\n> similar performance, or slower on insert since table2 has four indexes\n> in addition to primary key, table1 has only primary key, which is used\n> on update. Am i doing something wrong or is this normal?\n> \n\nRemember, UPDATE has to do all the work of select and more. 
\n\nAnd if you have 4 indexes those will also add to the time (Since it has\nto update/add them to the tree)\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Thu, 4 Dec 2003 14:23:20 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thu, Dec 04, 2003 at 02:23:20PM -0500, Jeff wrote:\n\n> > Most interesting is, that insert takes 0.004 seconds in average, but\n> > update takes 0.255 seconds in average. Processing of 24000 records\n> > took around 1 hour 20 minutes.\n> \n> Do you have an index on recid?\n\nYes, this is primary key of table1\n\n> and did you vacuum analyze after you loaded up the data?\n\nNo, this is running as nightly cronjob. All tests were done during one\nday, so no vacuum was done.\n\n> \"IN\" queries are terribly slow on versions before 7.4\n\nOK, this is useful to know :)\n\n> > Why is UPDATE so slow compared to INSERT? I would expect more or less\n> > similar performance, or slower on insert since table2 has four indexes\n> > in addition to primary key, table1 has only primary key, which is used\n> > on update. Am i doing something wrong or is this normal?\n\n> Remember, UPDATE has to do all the work of select and more. \n> \n> And if you have 4 indexes those will also add to the time (Since it has\n> to update/add them to the tree)\n\nMy primary concern is performance difference between INSERT and UPDATE\nin my first tests. There i did select from table1, fetched record,\nprocessed it and inserted into table2. Then updated status of fetched\nrecord in table1. Repeated in cycle as long as fetch returned record.\nAverage time for INSERT was 0.004 seconds, average time for UPDATE 0.255\nseconds. Update was done as \"update table1 set status = 'done' where\nrecid = xxxx\". As far as i understand, this type of simple update should\nbe faster, compared to INSERT into table with four indexes, but in my\ncase it is more than 60 times slower. Why??\n\nMy second tests were done with temporary table and update query as: \n\"UPDATE table1 SET Status = 'done' WHERE recid IN (SELECT recid FROM\ntemptable)\". It is still slower than INSERT, but more or less\nacceptable. Compared to my first tests overall processing time dropped\nfrom 1 hour and 20 minutes to 16 minutes.\n\nSo, my question remains - why is simple update more than 60 times\nslower, compared to INSERT? Any ideas?\n\n-- \nIvar Zarans\n", "msg_date": "Thu, 4 Dec 2003 21:51:21 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Ivar Zarans wrote:\n> \n> I am experiencing strange behaviour, where simple UPDATE of one field is\n> very slow, compared to INSERT into table with multiple indexes. I have\n> two tables - one with raw data records (about 24000), where one field\n\nIn Postgres and any other DB that uses MVCC (multi-version concurrency), \nUPDATES will always be slower than INSERTS. With MVCC, what the DB does \nis makes a copy of the record, updates that record and then invalidates \nthe previous record. 
This allows maintains a consistent view for anybody \nwho's reading the DB and also avoids the requirement of row locks.\n\nIf you have to use UPDATE, make sure (1) your UPDATE WHERE clause is \nproperly indexed and (2) you are running ANALYZE/VACUUM periodically so \nthe query planner can optimize for your UPDATE statements.\n\n", "msg_date": "Thu, 04 Dec 2003 11:59:01 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thursday 04 December 2003 19:51, Ivar Zarans wrote:\n>\n> My second tests were done with temporary table and update query as:\n> \"UPDATE table1 SET Status = 'done' WHERE recid IN (SELECT recid FROM\n> temptable)\". It is still slower than INSERT, but more or less\n> acceptable. Compared to my first tests overall processing time dropped\n> from 1 hour and 20 minutes to 16 minutes.\n\nAh - it's probably not the update but the IN. You can rewrite it using PG's \nnon-standard FROM:\n\nUPDATE t1 SET status='done' FROM t_tmp WHERE t1.rec_id = t_tmp.rec_id;\n\nNow that doesn't explain why the update is taking so long. One fifth of a \nsecond is extremely slow. Are you certain that the index is being used?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 4 Dec 2003 20:23:36 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thu, Dec 04, 2003 at 08:23:36PM +0000, Richard Huxton wrote:\n\n> Ah - it's probably not the update but the IN. You can rewrite it using PG's \n> non-standard FROM:\n> \n> UPDATE t1 SET status='done' FROM t_tmp WHERE t1.rec_id = t_tmp.rec_id;\n\nThanks for the hint. I'll try this.\n\n> Now that doesn't explain why the update is taking so long. One fifth of a \n> second is extremely slow. Are you certain that the index is being used?\n\nExplain shows following output:\n\nexplain update table1 set status = 'PROC' where recid = '199901';\n\nIndex Scan using table1_pkey on table1 (cost=0.00..6.01 rows=1 width=198)\n Index Cond: (recid = 199901::bigint)\n (2 rows)\n\n\n\n-- \nIvar Zarans\n\n", "msg_date": "Thu, 4 Dec 2003 22:43:26 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thu, Dec 04, 2003 at 08:23:36PM +0000, Richard Huxton wrote:\n\n> Ah - it's probably not the update but the IN. You can rewrite it using PG's \n> non-standard FROM:\n> \n> UPDATE t1 SET status='done' FROM t_tmp WHERE t1.rec_id = t_tmp.rec_id;\n\nThis was one *very useful* hint! Using this method i got my processing\ntime of 24000 records down to around 3 minutes 10 seconds. Comparing\nwith initial 1 hour 20 minutes and then 16 minutes, this is impressive\nimprovement!\n\n> Now that doesn't explain why the update is taking so long. One fifth of a \n> second is extremely slow. Are you certain that the index is being used?\n\nI posted results of \"EXPLAIN\" in my previous message. Meanwhile i tried\nto update just one record, using \"psql\". Also tried out \"EXPLAIN\nANALYZE\". This way i did not see any big delay - total runtime for one\nupdate was around 1 msec.\n\nI am confused - has slowness of UPDATE something to do with Python and\nPyPgSQL, since \"psql\" seems to have no delay whatsoever? Or is this\nrelated to using two cursors, one for select results and other for\nupdate? 
Even if this is related to Python or cursors, how am i getting\nso big speed improvement only by using different query? \n\n-- \nIvar\n\n", "msg_date": "Fri, 5 Dec 2003 00:13:12 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thursday 04 December 2003 19:59, William Yu wrote:\n> Ivar Zarans wrote:\n> > I am experiencing strange behaviour, where simple UPDATE of one field is\n> > very slow, compared to INSERT into table with multiple indexes. I have\n> > two tables - one with raw data records (about 24000), where one field\n>\n> In Postgres and any other DB that uses MVCC (multi-version concurrency),\n> UPDATES will always be slower than INSERTS. With MVCC, what the DB does\n> is makes a copy of the record, updates that record and then invalidates\n> the previous record. \n[snip]\n\nYes, but he's seeing 0.25secs to update one row - that's something odd.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 4 Dec 2003 22:37:28 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thursday 04 December 2003 22:13, Ivar Zarans wrote:\n> On Thu, Dec 04, 2003 at 08:23:36PM +0000, Richard Huxton wrote:\n> > Ah - it's probably not the update but the IN. You can rewrite it using\n> > PG's non-standard FROM:\n> >\n> > UPDATE t1 SET status='done' FROM t_tmp WHERE t1.rec_id = t_tmp.rec_id;\n>\n> This was one *very useful* hint! Using this method i got my processing\n> time of 24000 records down to around 3 minutes 10 seconds. Comparing\n> with initial 1 hour 20 minutes and then 16 minutes, this is impressive\n> improvement!\n\nBe aware, this is specific to PG - I'm not aware of this construction working \non any other DB. Three minutes still doesn't sound brilliant, but that could \nbe tuning issues.\n\n> > Now that doesn't explain why the update is taking so long. One fifth of a\n> > second is extremely slow. Are you certain that the index is being used?\n>\n> I posted results of \"EXPLAIN\" in my previous message. Meanwhile i tried\n> to update just one record, using \"psql\". Also tried out \"EXPLAIN\n> ANALYZE\". This way i did not see any big delay - total runtime for one\n> update was around 1 msec.\n\nYep - the explain looked fine. If you run EXPLAIN ANALYSE it will give you \ntimings too (actual timings will be slightly less than reported ones since PG \nwon't be timing/reporting).\n\n> I am confused - has slowness of UPDATE something to do with Python and\n> PyPgSQL, since \"psql\" seems to have no delay whatsoever? Or is this\n> related to using two cursors, one for select results and other for\n> update? Even if this is related to Python or cursors, how am i getting\n> so big speed improvement only by using different query?\n\nHmm - you didn't mention cursors. If this was a problem with PyPgSQL in \ngeneral I suspect we'd know about it by now. It could however be some \ncursor-related issue. In general, you're probably better off trying to do \nupdates/inserts as a single statement and letting PG manage things rather \nthan processing one row at a time.\n\nIf you've got the time, try putting together a small test-script with some \ndummy data and see if it's reproducible. 
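A minimal sketch of such a script (names and sizes are invented, and it assumes the same pyPgSQL driver used above) would create a dummy table with a BIGINT primary key, load it, and time one UPDATE with the key passed as a bare number against the same UPDATE with the key quoted -- the detail that turns out to matter further down this thread:

    import time
    from pyPgSQL import PgSQL

    db = PgSQL.connect(database="testdb")   # assumption: a scratch database
    cur = db.cursor()

    cur.execute("CREATE TABLE test1 (recid BIGINT PRIMARY KEY, status VARCHAR(10))")
    for i in range(50000):
        cur.execute("INSERT INTO test1 VALUES (%s, %s)", i, "new")
    db.commit()
    # run "vacuumdb --analyze" from the shell at this point so the planner has statistics

    def timed(query):
        start = time.time()
        cur.execute(query)
        db.commit()
        return time.time() - start

    t1 = timed("UPDATE test1 SET status = 'done' WHERE recid = 49000")    # bare integer literal
    t2 = timed("UPDATE test1 SET status = 'done' WHERE recid = '49000'")  # quoted, coerced to bigint
    print("unquoted: %.4f sec, quoted: %.4f sec" % (t1, t2))
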
I'm sure the other Python users \nwould be interested in seeing where the problem is.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 4 Dec 2003 22:45:21 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Thu, Dec 04, 2003 at 10:45:21PM +0000, Richard Huxton wrote:\n\n> If you've got the time, try putting together a small test-script with some \n> dummy data and see if it's reproducible. I'm sure the other Python users \n> would be interested in seeing where the problem is.\n\nTried with test-script, but this functioned normally (Murphy's law!).\nThen tweaked postrgesql.conf and switched on debugging options. Results\nshow (in my opinion) that Python has nothing to do with slow UPDATE.\nTiming from postgresql itself shows duration of 0.29 sec.\n\n===\npostgres[21247]: [2707] DEBUG: StartTransactionCommand\npostgres[21247]: [2708-1] LOG: query: \npostgres[21247]: [2708-2] UPDATE\npostgres[21247]: [2708-3] imp_cdr_200311\npostgres[21247]: [2708-4] SET\npostgres[21247]: [2708-5] Status = 'SKIP'\npostgres[21247]: [2708-6] WHERE\npostgres[21247]: [2708-7] ImpRecID = '202425'\n...\nSkipped rewritten parse tree\n...\npostgres[21247]: [2710-1] LOG: plan:\npostgres[21247]: [2710-2] { INDEXSCAN \npostgres[21247]: [2710-3] :startup_cost 0.00 \npostgres[21247]: [2710-4] :total_cost 6.01 \npostgres[21247]: [2710-5] :rows 1 \npostgres[21247]: [2710-6] :width 199 \npostgres[21247]: [2710-7] :qptargetlist (\n...\nSkipped target list\n...\npostgres[21247]: [2711] DEBUG: CommitTransactionCommand\npostgres[21247]: [2712] LOG: duration: 0.292529 sec\n===\n\nAny suggestions for further investigation?\n\n-- \nIvar Zarans\n\n", "msg_date": "Fri, 5 Dec 2003 03:45:14 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "\nI have played around with explain and explain analyze and noticed one\ninteresting oddity:\n\n===\nexplain UPDATE table1 SET status = 'SKIP' WHERE recid = 196641;\n\n Seq Scan on table1 (cost=0.00..16709.97 rows=1 width=199)\n Filter: (recid = 196641)\n\n=== \n\nexplain UPDATE table1 SET status = 'SKIP' WHERE recid = '196641';\n \n Index Scan using table1_pkey on table1 (cost=0.00..6.01 rows=1 width=199)\n Index Cond: (recid = 196641::bigint)\n\n===\n\nexplain UPDATE table1 SET status = 'SKIP' WHERE recid = 196641::bigint;\n \n Index Scan using table1_pkey on table1 (cost=0.00..6.01 rows=1 width=199)\n Index Cond: (recid = 196641::bigint)\n \n===\n\nWhy first example, where recid is given as numeric constant, is using\nsequential scan, but second example, where recid is given as string\nconstant works with index scan, as expected? Third example shows, that\nnumeric constant must be typecasted in order to function properly.\n\nIs this normal behaviour of fields with bigint type?\n\n-- \nIvar Zarans\n\n", "msg_date": "Fri, 5 Dec 2003 04:07:28 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "> Why first example, where recid is given as numeric constant, is using\n> sequential scan, but second example, where recid is given as string\n> constant works with index scan, as expected? 
Third example shows, that\n> numeric constant must be typecasted in order to function properly.\n> \n> Is this normal behaviour of fields with bigint type?\n\nYes, it's a known performance problem in PostgreSQL 7.4 and below. I \nbelieve it's been fixed in 7.5 CVS already.\n\nChris\n\n\n", "msg_date": "Fri, 05 Dec 2003 10:15:52 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Friday 05 December 2003 02:07, Ivar Zarans wrote:\n> I have played around with explain and explain analyze and noticed one\n> interesting oddity:\n[snip]\n> Why first example, where recid is given as numeric constant, is using\n> sequential scan, but second example, where recid is given as string\n> constant works with index scan, as expected? Third example shows, that\n> numeric constant must be typecasted in order to function properly.\n>\n> Is this normal behaviour of fields with bigint type?\n\nAs Christopher says, normal (albeit irritating). Not sure it applies here - \nall the examples you've shown me are using the index.\n\nWell - I must admit I'm stumped. Unless you have a *lot* of indexes and \nforeign keys to check, I can't see why it would take so long to update a \nsingle row. Can you post the schema for the table?\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Dec 2003 10:08:20 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Fri, Dec 05, 2003 at 10:08:20AM +0000, Richard Huxton wrote:\n\n> > numeric constant must be typecasted in order to function properly.\n> >\n> > Is this normal behaviour of fields with bigint type?\n> \n> As Christopher says, normal (albeit irritating). Not sure it applies here - \n> all the examples you've shown me are using the index.\n\nI guess i have solved this mystery. Problem appears to be exactly with\nthis - numeric constant representation in query.\n\nI am using PyPgSQL for PostgreSQL access and making update queries as this:\n\nqry = \"UPDATE table1 SET status = %s WHERE recid = %s\"\ncursor.execute(qry, status, recid)\n\nExecute method of cursor object is supposed to merge \"status\" and\n\"recid\" values into \"qry\", using proper quoting. When i started to play\naround with debug information i noticed, that this query used sequential\nscan for \"recid\". Then i also noticed, that query, sent to server looked\nlike this:\n\"UPDATE table1 SET status = 'SKIP' WHERE recid = 199901\"\n\nSure enough, when i used psql and EXPLAIN on this query, i got query\nplan with sequential scan. And using recid value as string or typecasted\ninteger gave correct results with index scan. I wrote about this in my\nprevious message.\n\nIt seems, that PyPgSQL query quoting is not aware of this performance\nproblem (to which Cristopher referred) and final query, sent to server\nis correct SQL, but not correct, considering PostgreSQL bugs.\n\nOne more explanation - previously i posted some logs, showing correct\nquery, using index scan, but still taking 0.29 seconds. Reason for this\ndelay is logging itself - it generates enough IO traffic to have impact\non query speed. With logging disabled, this query takes around 0.0022\nseconds, which is perfectly normal.\n\nFinally - what would be correct solution to this problem? 
Upgrading to\n7.5 CVS is not an option :) One possibility is not to use PyPgSQL\nvariable substitution and create every query \"by hand\" - not very nice\nsolution, since variable substitution and quoting is quite convenient.\n\nSecond (and better) possibility is to ask PyPgSQL develeopers to take care\nof PostgreSQL oddities.\n\nAny other suggestions?\n\n-- \nIvar\n\n", "msg_date": "Fri, 5 Dec 2003 14:38:43 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Ivar Zarans wrote:\n> It seems, that PyPgSQL query quoting is not aware of this performance\n> problem (to which Cristopher referred) and final query, sent to server\n> is correct SQL, but not correct, considering PostgreSQL bugs.\n\nPersonally I don't consider a bug but anyways.. You are the one facing problem \nso I understand..\n\n> Finally - what would be correct solution to this problem? Upgrading to\n> 7.5 CVS is not an option :) One possibility is not to use PyPgSQL\n> variable substitution and create every query \"by hand\" - not very nice\n> solution, since variable substitution and quoting is quite convenient.\n> \n> Second (and better) possibility is to ask PyPgSQL develeopers to take care\n> of PostgreSQL oddities.\n> \n> Any other suggestions?\n\nI know zero in python but just guessing..\n\nWill following help?\n\nqry = \"UPDATE table1 SET status = %s WHERE recid = '%s'\"\ncursor.execute(qry, status, recid)\n\n Just a thought..\n\n Shridhar\n\n", "msg_date": "Fri, 05 Dec 2003 18:19:46 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Fri, Dec 05, 2003 at 06:19:46PM +0530, Shridhar Daithankar wrote:\n\n> >is correct SQL, but not correct, considering PostgreSQL bugs.\n> \n> Personally I don't consider a bug but anyways.. You are the one facing \n> problem so I understand..\n\nWell, if this is not bug, then what is consideration behind this\nbehaviour? BTW, according to Cristopher it is fixed in 7.5 CVS.\nWhy fix it if this is not a bug? :))\n\nOne more question - is this \"feature\" related only to \"bigint\" fields,\nor are other datatypes affected as well?\n\n> Will following help?\n> \n> qry = \"UPDATE table1 SET status = %s WHERE recid = '%s'\"\n> cursor.execute(qry, status, recid)\n\nYes, this helps. But then it sort of obsoletes PyPgSQL-s own quoting\nlogic. I would prefer to take care of this all by myself or trust some\nunderlying code to do this for me. And PyPgSQL is quite nice - it\nchecks datatype and acts accordingly.\n\n-- \nIvar\n\n", "msg_date": "Fri, 5 Dec 2003 15:13:25 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Friday 05 December 2003 12:49, Shridhar Daithankar wrote:\n> Ivar Zarans wrote:\n> > It seems, that PyPgSQL query quoting is not aware of this performance\n> > problem (to which Cristopher referred) and final query, sent to server\n> > is correct SQL, but not correct, considering PostgreSQL bugs.\n\n>\n> Will following help?\n>\n> qry = \"UPDATE table1 SET status = %s WHERE recid = '%s'\"\n> cursor.execute(qry, status, recid)\n\nBetter IMHO would be: \"UPDATE table1 SET status = %s WHERE recid = %s::int8\"\n\nPG is very strict regarding types - normally a good thing, but it can hit you \nunexpectedly in this scenario. 
The reason is that the literal number is \ntreated as int4, whereas quoted it is marked as type unknown. Unkown gets \ncast to int8, whereas int4 gets left as-is. If you want to know why int4 \ndoesn't get promoted to int8 automatically, browse the hackers list for the \nlast couple of years.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Dec 2003 13:23:43 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Ivar Zarans wrote:\n> On Fri, Dec 05, 2003 at 06:19:46PM +0530, Shridhar Daithankar wrote:\n> \n> \n>>>is correct SQL, but not correct, considering PostgreSQL bugs.\n>>Personally I don't consider a bug but anyways.. You are the one facing \n>>problem so I understand..\n> Well, if this is not bug, then what is consideration behind this\n> behaviour? BTW, according to Cristopher it is fixed in 7.5 CVS.\n> Why fix it if this is not a bug? :))\n\nThis is not a bug. It is just that people find it confusing when postgresql \nplanner consider seemingly same type as different. e.g. treating int8 as \ndifferent than int4. Obvious thinking is they should be same. But given \npostgresql's flexibility with create type, it is difficult to promote.\n\nAFAIK, the fix in CVS is to make indexes operatable with seemingly compatible \ntypes. Which does not change the fact that postgresql can not upgrade data types \non it's own.\n\nWrite good queries which adhere to strict data typing. It is better to \nunderstand anyway.\n\n> One more question - is this \"feature\" related only to \"bigint\" fields,\n> or are other datatypes affected as well?\n\nEvery data type is affected. int2 will not use a int4 index and so on.\n\n>>Will following help?\n>>\n>>qry = \"UPDATE table1 SET status = %s WHERE recid = '%s'\"\n>>cursor.execute(qry, status, recid)\n> \n> \n> Yes, this helps. But then it sort of obsoletes PyPgSQL-s own quoting\n> logic. I would prefer to take care of this all by myself or trust some\n> underlying code to do this for me. And PyPgSQL is quite nice - it\n> checks datatype and acts accordingly.\n\nWell, then pypgsql should be upgraded to query the pg catalogd to find exact \ntype of column. But that would be too cumbersome I guess.\n\n Shridhar\n\n", "msg_date": "Fri, 05 Dec 2003 19:21:38 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Fri, Dec 05, 2003 at 01:23:43PM +0000, Richard Huxton wrote:\n\n> Better IMHO would be: \"UPDATE table1 SET status = %s WHERE recid = %s::int8\"\n\nThanks for the hint!\n\n> unexpectedly in this scenario. The reason is that the literal number is \n> treated as int4, whereas quoted it is marked as type unknown. Unkown gets \n> cast to int8, whereas int4 gets left as-is.\n\nThis explains a lot. Thanks!\nBTW, is this mentioned somewhere in PostgreSQL documentation? I can't\nremember anything on this subject. Maybe i just somehow skipped it...\n\n-- \nIvar\n\n", "msg_date": "Fri, 5 Dec 2003 18:47:43 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "On Fri, Dec 05, 2003 at 07:21:38PM +0530, Shridhar Daithankar wrote:\n\n> planner consider seemingly same type as different. e.g. treating int8 as \n> different than int4. Obvious thinking is they should be same. 
But given \n> postgresql's flexibility with create type, it is difficult to promote.\n\nOK, this makes sense and explains a lot. Thanks!\n\n> Well, then pypgsql should be upgraded to query the pg catalogd to find \n> exact type of column. But that would be too cumbersome I guess.\n\nYes, so it seems. Time to rewrite my queries :)\nThanks again for help and explanations!\n\n-- \nIvar\n\n", "msg_date": "Fri, 5 Dec 2003 18:52:53 +0200", "msg_from": "Ivar Zarans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "I just spent 2 days tracking this error down in my own code, actually. \nWhat I wound up doing is having the two places where I generate the \nqueries (everything in my system goes through those two points, as I'm \nusing a middleware layer) check values used as identifying fields for \nthe presence of a bigint, and if one exists, replaces it with a wrapper \nthat does the coerced-string representation:\n\n class Wrap:\n def __init__( self, value ):\n self.value = value\n def __str__( self ):\n return \"'%s'::bigint\"%(self.value,)\n __repr__ = __str__\n value = Wrap(value)\n\nJust doing that for the indexing/identifying values ATM. pyPgSQL will \nback up to using simple repr for the object (rather than raising an \nerror as it would if you were using a formatted string), but will \notherwise treat it as a regular value for quoting and the like, so no \nother modifications to the code required.\n\nBy no means an elegant fix, but since your post (well, the resulting \nthread) managed to solve my problem, figured I should at least tell \neveryone thanks and how I worked around the problem. You wouldn't want \nthis kind of hack down in the pyPgSQL level I would think, as it's \nDB-version specific. I suppose you could alter the __repr__ of the \nPgInt8 class/type to always use the string or coerced form, but it seems \nwrong to me. I'm actually hesitant to include it in our own middleware \nlayer, but oh well, it does seem to be necessary for even somewhat \nreasonable performance.\n\nBTW, my case was a largish (88,000 record) table with a non-unique \nbigint key, explain on update shows sequential search, while with \n'int'::bigint goes to index search. Using pyPgSQL as the interface to \n7.3.4 and 7.3.3.\n\nEnjoy,\nMike\n\nIvar Zarans wrote:\n\n>On Fri, Dec 05, 2003 at 10:08:20AM +0000, Richard Huxton wrote:\n> \n>\n...\n\n>I am using PyPgSQL for PostgreSQL access and making update queries as this:\n> \n>\n...\n\n>It seems, that PyPgSQL query quoting is not aware of this performance\n>problem (to which Cristopher referred) and final query, sent to server\n>is correct SQL, but not correct, considering PostgreSQL bugs.\n> \n>\n...\n\n>Finally - what would be correct solution to this problem? Upgrading to\n>7.5 CVS is not an option :) One possibility is not to use PyPgSQL\n>variable substitution and create every query \"by hand\" - not very nice\n>solution, since variable substitution and quoting is quite convenient.\n>\n>Second (and better) possibility is to ask PyPgSQL develeopers to take care\n>of PostgreSQL oddities.\n>\n>Any other suggestions?\n> \n>\n\n_______________________________________\n Mike C. Fletcher\n Designer, VR Plumber, Coder\n http://members.rogers.com/mcfletch/\n\n\n\n", "msg_date": "Fri, 05 Dec 2003 12:05:45 -0500", "msg_from": "\"Mike C. 
Fletcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Ivar Zarans wrote:\n\n>On Fri, Dec 05, 2003 at 01:23:43PM +0000, Richard Huxton wrote:\n> \n>\n>>Better IMHO would be: \"UPDATE table1 SET status = %s WHERE recid = %s::int8\"\n>> \n>>\n>\n>Thanks for the hint!\n> \n>\nWhich makes the wrapper class need:\n def __str__( self ):\n return \"%s::int8\"%(self.value,)\n\nEnjoy,\nMike\n\n_______________________________________\n Mike C. Fletcher\n Designer, VR Plumber, Coder\n http://members.rogers.com/mcfletch/\n\n\n\n", "msg_date": "Fri, 05 Dec 2003 12:12:07 -0500", "msg_from": "\"Mike C. Fletcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Ivar Zarans <[email protected]> writes:\n\n> > qry = \"UPDATE table1 SET status = %s WHERE recid = '%s'\"\n> > cursor.execute(qry, status, recid)\n> \n> Yes, this helps. But then it sort of obsoletes PyPgSQL-s own quoting\n> logic. I would prefer to take care of this all by myself or trust some\n> underlying code to do this for me. And PyPgSQL is quite nice - it\n> checks datatype and acts accordingly.\n\nYou should tell the PyPgSQL folk to use the new binary protocol for parameters\nso that there are no quoting issues at all.\n\nBut if it's going to interpolate strings into the query then pyPgSQL really\nought to be doing '%s' as above even for numbers. This lets postgres decide\nwhat the optimal datatype is based on what you're comparing it to. Skipping\nthe quotes will only cause headaches. \n\n-- \ngreg\n\n", "msg_date": "05 Dec 2003 12:28:30 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" }, { "msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> This is not a bug. It is just that people find it confusing when\n> postgresql planner consider seemingly same type as different.\n\nIt certainly is a bug, or at least a deficiency: PostgreSQL planner\n*could* use the index to process the query, but the planner doesn't\nconsider doing so. The fact that it isn't able to do the necessary\ntype coercion is the *cause* of the bug, not a defence for this\nbehavior.\n\n> AFAIK, the fix in CVS is to make indexes operatable with seemingly\n> compatible types. Which does not change the fact that postgresql can\n> not upgrade data types on it's own.\n\nI'm not sure what you mean by that. In any case, I just checked, and\nit does seem Tom has fixed this in CVS:\n\ntemplate1=# create table abc (b int8);\nCREATE TABLE\ntemplate1=# set enable_seqscan = false;\nSET\ntemplate1=# create index abc_b_idx on abc (b);\nCREATE INDEX\ntemplate1=# explain select * from abc where b = 4;\n QUERY PLAN \n----------------------------------------------------------------------\n Index Scan using abc_b_idx on abc (cost=0.00..17.07 rows=5 width=8)\n Index Cond: (b = 4)\n(2 rows)\n\nCool!\n\n-Neil\n\n", "msg_date": "Sat, 06 Dec 2003 14:54:03 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow UPADTE, compared to INSERT" } ]
[ { "msg_contents": "Ace's Hardware has put together a fairly comprehensive comparison \nbetween Xeon & Opteron platforms running server apps. Unfortunately, \nonly MySQL \"data mining\" benchmarks as the review crew doesn't have that \nmuch experience with OLTP-type systems but I'm gonna try to convince \nthem to add the ODSL DB benchmarks assuming they work fairly well with \nPostgres.\n\nRead up the goodies here:\n\nhttp://www.aceshardware.com/read.jsp?id=60000275\n\n", "msg_date": "Fri, 05 Dec 2003 11:06:25 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Slightly OT -- Xeon versus Opteron Comparison" } ]
[ { "msg_contents": "Hello,\nI use php as front-end to query our database. When I use System Monitor to check the usage of cpu and memory, I noticed that the cpu very easily gets up to 100%. Is that normal? if not, could someone points out possible reason? \n\n \nI am using linux7.3, pgsql 7.3.4, 1G Memory and 2GHz CPU. \n\nRegards,\nWilliam\n\n", "msg_date": "Fri, 05 Dec 2003 13:03:42 -0800", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "query using cpu nearly 100%, why?" }, { "msg_contents": "On Friday 05 December 2003 21:03, LIANHE SHAO wrote:\n> Hello,\n> I use php as front-end to query our database. When I use System Monitor to\n> check the usage of cpu and memory, I noticed that the cpu very easily gets\n> up to 100%. Is that normal? if not, could someone points out possible\n> reason?\n\nThe idea is that CPU should go to 100% when there's work to be done, and drop \noff when the system is idle. There's nothing to be gained with having the CPU \nat 50% and taking twice as long to perform a task.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Sat, 6 Dec 2003 17:46:32 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query using cpu nearly 100%, why?" }, { "msg_contents": "LIANHE SHAO <[email protected]> writes:\n> Hello, I use php as front-end to query our database. When I use\n> System Monitor to check the usage of cpu and memory, I noticed that\n> the cpu very easily gets up to 100%. Is that normal? if not, could\n> someone points out possible reason?\n\nYou haven't given us nearly enough information about the problem to\nallow us to provide any meaningful advice.\n\n-Neil\n\n", "msg_date": "Sat, 06 Dec 2003 14:41:36 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query using cpu nearly 100%, why?" } ]
[ { "msg_contents": "\nI need some help tracking down a sudden, massive slowdown\nin inserts in one of our databases.\n\nPG: 7.2.3 (RedHat 8.0)\n\nBackground. We currently run nearly identical systems\nat two sites: Site A is a 'lab' site used for development,\nSite B is a production site.\n\nThe databases in question have identical structure:\n\n A simple table with 4 columns with a trigger function\n on inserts (which checks to see if the entry already\n exists, and if so, changes the insert into an update...)\n A simple view with 4 columns into the above table.\n\nAll access is through jdbc (JDK 1.3.1, jdbc 7.1-1.3),\npostgresql.conf's are identical.\n\nThe two sites were performing at comparable speeds until\na few days ago, when we deleted several million records\nfrom each database and then did a vacuum full; analyze\non both. Now inserts at Site B are several orders of\nmagnitude slower than at Site A. The odd thing is that\nSite B's DB now has only 60,000 records while Site A's is\nup around 3 million. Inserts at A average 63ms, inserts\nat B are now up at 4.5 seconds!\n\nEXPLAIN doesn't show any difference between the two.\n\nCan someone suggest ways to track this down? I don't know\nmuch about postgresql internals/configuration.\n\nThanks!\nSteve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Fri, 05 Dec 2003 14:51:48 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": true, "msg_subject": "Help tracking down problem with inserts slowing down..." }, { "msg_contents": "Steve Wampler <[email protected]> writes:\n> PG: 7.2.3 (RedHat 8.0)\n\nYou're using PG 7.2.3 with the PG 7.1 JDBC driver; FWIW, upgrading to\nnewer software is highly recommended.\n\n> The two sites were performing at comparable speeds until a few days\n> ago, when we deleted several million records from each database and\n> then did a vacuum full; analyze on both. Now inserts at Site B are\n> several orders of magnitude slower than at Site A.\n\nTwo thoughts:\n\n (1) Can you confirm that the VACUUM FULL on site B actually\n removed all the tuples you intended it to remove? Concurrent\n transactions can limit the amount of data that VACUUM FULL is\n able to reclaim. If you run contrib/pgstattuple (or compare\n the database's disk consumption with the number of live rows\n in it), you should be able to tell.\n\n (2) Look at the EXPLAIN for the SELECTs generated by the ON INSERT\n trigger -- is there any difference between site A and B?\n\n-Neil\n\n", "msg_date": "Fri, 05 Dec 2003 18:38:47 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tracking down problem with inserts slowing" }, { "msg_contents": "On Friday 05 December 2003 16:51, Steve Wampler wrote:\n> I need some help tracking down a sudden, massive slowdown\n> in inserts in one of our databases.\n>\n> PG: 7.2.3 (RedHat 8.0)\n>\n> Background. 
We currently run nearly identical systems\n> at two sites: Site A is a 'lab' site used for development,\n> Site B is a production site.\n>\n> The databases in question have identical structure:\n>\n> A simple table with 4 columns with a trigger function\n> on inserts (which checks to see if the entry already\n> exists, and if so, changes the insert into an update...)\n> A simple view with 4 columns into the above table.\n>\n> All access is through jdbc (JDK 1.3.1, jdbc 7.1-1.3),\n> postgresql.conf's are identical.\n>\n> The two sites were performing at comparable speeds until\n> a few days ago, when we deleted several million records\n> from each database and then did a vacuum full; analyze\n> on both. Now inserts at Site B are several orders of\n> magnitude slower than at Site A. The odd thing is that\n> Site B's DB now has only 60,000 records while Site A's is\n> up around 3 million. Inserts at A average 63ms, inserts\n> at B are now up at 4.5 seconds!\n>\n> EXPLAIN doesn't show any difference between the two.\n>\n> Can someone suggest ways to track this down? I don't know\n> much about postgresql internals/configuration.\n>\n\nWhat does explain analyze show for the insert query?\n\nAre there FK and/or Indexes involved here? Did you you reindex?\nA vacuum verbose could give you a good indication if you need to reindex, \ncompare the # of pages in the index with the # in the table. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Fri, 5 Dec 2003 21:54:52 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tracking down problem with inserts slowing down..." }, { "msg_contents": "On Fri, Dec 05, 2003 at 09:54:52PM -0500, Robert Treat wrote:\n> On Friday 05 December 2003 16:51, Steve Wampler wrote:\n> > I need some help tracking down a sudden, massive slowdown\n> > in inserts in one of our databases.\n> >\n> > PG: 7.2.3 (RedHat 8.0)\n> >\n> > Background. We currently run nearly identical systems\n> > at two sites: Site A is a 'lab' site used for development,\n> > Site B is a production site.\n> >\n> > The databases in question have identical structure:\n> >\n> > A simple table with 4 columns with a trigger function\n> > on inserts (which checks to see if the entry already\n> > exists, and if so, changes the insert into an update...)\n> > A simple view with 4 columns into the above table.\n> >\n> > All access is through jdbc (JDK 1.3.1, jdbc 7.1-1.3),\n> > postgresql.conf's are identical.\n> >\n> > The two sites were performing at comparable speeds until\n> > a few days ago, when we deleted several million records\n> > from each database and then did a vacuum full; analyze\n> > on both. Now inserts at Site B are several orders of\n> > magnitude slower than at Site A. The odd thing is that\n> > Site B's DB now has only 60,000 records while Site A's is\n> > up around 3 million. Inserts at A average 63ms, inserts\n> > at B are now up at 4.5 seconds!\n> >\n> > EXPLAIN doesn't show any difference between the two.\n> >\n> > Can someone suggest ways to track this down? I don't know\n> > much about postgresql internals/configuration.\n> >\n> \n> What does explain analyze show for the insert query?\n> \n> Are there FK and/or Indexes involved here? Did you you reindex?\n> A vacuum verbose could give you a good indication if you need to reindex, \n> compare the # of pages in the index with the # in the table. \n\nThanks Robert!\n\nIt looks like reindex did the trick. 
\n\nNow I have a general question - what are the relationships between:\nvacuum, analyze, reindex, and dropping/recreating the indices?\nThat is, which is the following is 'best' (or is there a different\nordering that is better)?:\n\n(1) vacuum\n analyze\n reindex\n\n(2) vacuum\n reindex\n analyze\n\n(3) drop indices\n vacuum\n create indices\n analyze\n\n(4) drop indices\n vacuum\n analyze\n create indices\n\nAnd, is reindex equivalent to dropping, then recreating the indices?\n [it appears to be \"no\", from what I've just seen, but I don't know...]\n\nThanks!\nSteve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Sun, 7 Dec 2003 07:28:16 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tracking down problem with inserts slowing down..." }, { "msg_contents": "On Fri, Dec 05, 2003 at 09:54:52PM -0500, Robert Treat wrote:\n>...\n> A vacuum verbose could give you a good indication if you need to reindex, \n> compare the # of pages in the index with the # in the table. \n\nHmmm, I have a feeling that's not as obvious as I thought... I can't\nidentify the index (named 'id_index') in the output of vacuum verbose.\nThe closest I can find is:\n\nNOTICE: --Relation pg_index--\nNOTICE: Pages 2: Changed 0, Empty 0; Tup 56: Vac 0, Keep 0, UnUsed 42.\n Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nWhich probably isn't correct, right (the name doesn't seem to match)?\n\nThe table's entry is:\n\nNOTICE: --Relation attributes_table--\nNOTICE: Pages 639: Changed 0, Empty 0; Tup 52846: Vac 0, Keep 0, UnUsed 48.\n Total CPU 0.00s/0.01u sec elapsed 0.01 sec.\n\nThanks!\nSteve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Sun, 7 Dec 2003 07:52:35 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tracking down problem with inserts slowing down..." }, { "msg_contents": "Steve Wampler <[email protected]> writes:\n> Hmmm, I have a feeling that's not as obvious as I thought... I can't\n> identify the index (named 'id_index') in the output of vacuum verbose.\n\nIn 7.2, the index reports look like\n\tIndex %s: Pages %u; Tuples %.0f.\nand should appear in the part of the printout that deals with their\nowning table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Dec 2003 11:52:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tracking down problem with inserts slowing down... " }, { "msg_contents": "On Fri, 2003-12-05 at 16:38, Neil Conway wrote:\n\n> \n> (1) Can you confirm that the VACUUM FULL on site B actually\n> removed all the tuples you intended it to remove? Concurrent\n> transactions can limit the amount of data that VACUUM FULL is\n> able to reclaim. If you run contrib/pgstattuple (or compare\n> the database's disk consumption with the number of live rows\n> in it), you should be able to tell.\n\nHmmm, I installed 7.2.3 from RPMs, but the contrib package seems\nto be missing the pgstattuple library code. 
(According to the\nreadme, I should do:\n\n $ make\n $ make install\n $ psql -e -f /usr/local/pgsql/share/contrib/pgstattuple.sql test\n\nbut the first two lines don't make sense with the binary rpm\ndistribution and trying the last line as (for my world):\n\n ->psql -e -f /usr/share/pgsql/contrib/pgstattuple.sql\nfarm.devel.configdb\n\nyields:\n\n DROP FUNCTION pgstattuple(NAME);\n psql:/usr/share/pgsql/contrib/pgstattuple.sql:1: ERROR: \nRemoveFunction: function 'pgstattuple(name)' does not exist\n CREATE FUNCTION pgstattuple(NAME) RETURNS FLOAT8\n AS '$libdir/pgstattuple', 'pgstattuple'\n LANGUAGE 'c' WITH (isstrict);\n psql:/usr/share/pgsql/contrib/pgstattuple.sql:4: ERROR: stat failed\non file '$libdir/pgstattuple': No such file or directory\n\nI don't need this right now (a reindex seems to have fixed\nour problem for now...), but it sounds like it would be useful\nin the future.\n\nThanks!\nSteve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Sun, 07 Dec 2003 10:28:50 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tracking down problem with inserts slowing" }, { "msg_contents": "On Sun, 2003-12-07 at 09:52, Tom Lane wrote:\n> Steve Wampler <[email protected]> writes:\n> > Hmmm, I have a feeling that's not as obvious as I thought... I can't\n> > identify the index (named 'id_index') in the output of vacuum verbose.\n> \n> In 7.2, the index reports look like\n> \tIndex %s: Pages %u; Tuples %.0f.\n> and should appear in the part of the printout that deals with their\n> owning table.\n\nThanks, Tom. Are there any reasons why it would not appear?:\n-------------------------------------------------------------\nfarm.devel.configdb=# vacuum verbose attributes_table;\nNOTICE: --Relation attributes_table--\nNOTICE: Pages 1389: Changed 0, Empty 0; Tup 111358: Vac 0, Keep 0,\nUnUsed 51.\n Total CPU 0.00s/0.02u sec elapsed 0.03 sec.\nNOTICE: --Relation pg_toast_1743942--\nNOTICE: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nfarm.devel.configdb=# \\d attributes_table\n Table \"attributes_table\"\n Column | Type | Modifiers \n--------+--------------------------+---------------\n id | character varying(64) | not null\n name | character varying(64) | not null\n units | character varying(32) | \n value | text | \n time | timestamp with time zone | default now()\nIndexes: id_index\nPrimary key: attributes_table_pkey\nTriggers: trigger_insert\n---------------------------------------------------------------\n\nThe odd thing is that I could have sworn it appeared yesterday...\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Mon, 08 Dec 2003 08:14:53 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help tracking down problem with inserts slowing" }, { "msg_contents": "Steve Wampler <[email protected]> writes:\n> Thanks, Tom. Are there any reasons why it would not appear?:\n\nOh, I shoulda read the code more carefully. 
I was looking at the bottom\nof lazy_scan_index, where the printout is done, and failed to notice the\ntest at the top:\n\n /*\n * If the index is not partial, skip the scan, and just assume it has\n * the same number of tuples as the heap.\n */\n\nSo for ordinary indexes, nothing will appear unless vacuum has actual\nwork to do (that is, it recycled at least one dead tuple in the table).\n\nShort answer: update or delete some row in the table, and then try\nvacuum verbose.\n\nAlternatively, you can just look at the pg_class row for the index.\nrelpages and reltuples will contain the info you are after ... and\nthey are certainly up to date at this point ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Dec 2003 10:35:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help tracking down problem with inserts slowing " } ]
[ { "msg_contents": "Greetings all, \n\nI'm wondering is there a website where people can submit their pgbench\nresults along with their hardware and configuration's? If so where are they\nat? I have yet to find any. I think this could be a very useful tool not\nonly for people looking at setting up a new server but for people trying to\ntune their db...\n\nThanks\nRob \n\n", "msg_date": "Sat, 6 Dec 2003 11:09:05 -0600", "msg_from": "\"Rob Sell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Pgbench results" } ]
[ { "msg_contents": "has anyone else noticed a huge difference in \"DELETE TABLE <lol>\"\nvs. \"TRUNCATE <lol>\" starting w/postgres 7.4?\nputting aside details (num rows, indexes....): ca. 300 tables\n(already empty if desired...) ALL to be emptied (via batch file).\nhere's a small \"time pgsql -f kill_all\" output:\n\nDELETE:\n1) 0.03u 0.04s 0:02.46 2.8% (already empty)\n2) 0.05u 0.06s 0:01.19 9.2% (already empty)\n\nTRUNCATE:\n1) 0.10u 0.06s 6:58.66 0.0% (already empty, compile runnig simult.)\n2) 0.10u 0.02s 2:51.71 0.0% (already empty)\n\nlovely, innit?\n\nsettings in 7.4 (wal, shm...) are as for 7.3.x unless dead or (in their\n7.4 default version) even higher.\n\nglimpsing at the quantify output (of the truncate version) it looks\nas if this is \"for (i = 0; i < all; i++)\" whereas (from exec. time)\ndelete does \"\\rm -rf\"\n\nis this a pay-off for autocommit gone away?\na conspiracy?\n...what am i saying...\n\nwe are using TRUNCATE btw, because someone once noted that this was\n\"good style\", saying: \"yes, i want to empty the whole thing\", not:\n\"oops! forgot the where-clause, sorry for your table!\"\n\nwell, enlight me, please!\n\nP.S.: Grammarians dispute - and the case is still before the courts.\n - Horace, Epistles (Ars Poetica)\n\n-- \nHartmut \"Hardy\" Raschick / Dept. t2\nke Kommunikations-Elektronik GmbH\nWohlenberstr. 3, 30179 Hannover\nPhone: ++49 (0)511 6747-564\nFax: ++49 (0)511 6747-340\ne-Mail: [email protected]\nhttp://www.ke-elektronik.de\n", "msg_date": "Mon, 08 Dec 2003 15:03:34 +0100", "msg_from": "Hartmut Raschick <[email protected]>", "msg_from_op": true, "msg_subject": "TRUNCATE veeeery slow compared to DELETE in 7.4" }, { "msg_contents": "Hartmut,\n\n> DELETE:\n> 1) 0.03u 0.04s 0:02.46 2.8% (already empty)\n> 2) 0.05u 0.06s 0:01.19 9.2% (already empty)\n>\n> TRUNCATE:\n> 1) 0.10u 0.06s 6:58.66 0.0% (already empty, compile runnig simult.)\n> 2) 0.10u 0.02s 2:51.71 0.0% (already empty)\n\nHow about some times for a full table?\n\nIncidentally, I believe that TRUNCATE has always been slightly slower than \nDROP TABLE.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 10 Dec 2003 09:18:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE veeeery slow compared to DELETE in 7.4" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Incidentally, I believe that TRUNCATE has always been slightly slower than \n> DROP TABLE.\n\nWell, it would be: it has to delete the original files and then create\nnew ones. I imagine the time to create new, empty indexes is the bulk\nof the time Hartmut is measuring. (Remember that an \"empty\" index has\nat least one page in it, the metadata page, for all of our index types,\nso there is some actual I/O involved to do this.)\n\nIt does not bother me that TRUNCATE takes nonzero time; it's intended\nto be used in situations where DELETE would take huge amounts of time\n(especially after you factor in the subsequent VACUUM activity).\nThe fact that DELETE takes near-zero time on a zero-length table is\nnot very relevant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Dec 2003 14:54:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE veeeery slow compared to DELETE in 7.4 " }, { "msg_contents": "for the clearer understanding: this is NOT about TRUNCATE being\nslow \"as such\" vs. DELETE, but about a change in the order of\na (...) 
magnitude from 7.3.4 to 7.4...\n\nhere's some more info, plus test results w/a \"full\" db:\n\n300 tables, 20000 pieces of modelled hw, so there's one table\nw/20000 entries, each model has a special table (per type), too;\nso, entries over all of them sum up to 20000; not all types are\npresent.\nplus: some types (w/not many instances) have \"very special\" tables,\ntoo, these sometimes w/lots of columns 500-1600...\n\nalone on a sun fire-280 w/2 U-IIIi cpu's (well, only need one...):\nall the time of the test, no vacuum anything was performed,\nthus - by the book - making things worse... for the DELETE case.\n\n7.4:\n----\n\"full\" database:\nTRUNCATE: 0.03u 0.03s 1:21.40 0.0%\nDELETE: 0.05u 0.01s 0:04.46 1.3%\n\nempty database:\nTRUNCATE:0.02u 0.05s 1:21.00 0.0%\nDELETE: 0.04u 0.04s 0:01.32 6.0%\n\nnow for 7.3.4 database server (same machine, of cause):\n--------------\n\"full\" database:\nTRUNCATE: 0.04u 0.04s 0:03.79 2.1%\nDELETE: 0.03u 0.03s 0:06.51 0.9%\n\nempty database:\nTRUNCATE: 0.04u 0.05s 0:01.51 5.9%\nDELETE: 0.01u 0.02s 0:01.00 3.0%\n\nwhat can i say...\n...please find the attached configs.\n\ni reeeeally don't think this can be explained by table/index\ncomplexity, it's the _same_ schema and contents for both cases,\nthey both were started w/createdb, they both were filled the same\nway (by our server prog), there was no vacuum nowhere, test execution\norder was the same in both cases.\n\nP.S.: Mon pessimisme va jusqu'à suspecter la sincérité des pessimistes.\n - Jean Rostand (1894-1977), Journal d'un caractère, 1931\n\n-- \nHartmut \"Hardy\" Raschick / Dept. t2\nke Kommunikations-Elektronik GmbH\nWohlenberstr. 3, 30179 Hannover\nPhone: ++49 (0)511 6747-564\nFax: ++49 (0)511 6747-340\ne-Mail: [email protected]\nhttp://www.ke-elektronik.de", "msg_date": "Fri, 12 Dec 2003 11:47:54 +0100", "msg_from": "Hartmut Raschick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE veeeery slow compared to DELETE in 7.4" }, { "msg_contents": "Hartmut Raschick <[email protected]> writes:\n> [ TRUNCATE is much slower in 7.4 than in 7.3 ]\n\nAfter looking into this, I think this is because when Rod Taylor\nreimplemented TRUNCATE to make it transaction-safe, he essentially\nturned it into a variant of CLUSTER. It is slow because it is creating\nand deleting dummy tables and indexes. I think this is not really\nnecessary and it could be done better while still being\ntransaction-safe. All we really need is to create a new empty table\nfile, update the table's pg_class row with the new relfilenode, mark\nthe old file for deletion, and then run REINDEX TABLE (which will\nperform similar shenanigans with the indexes).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Dec 2003 13:21:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE veeeery slow compared to DELETE in 7.4 " } ]
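For reference, a minimal psql sketch of the comparison being made in this thread; the table name is hypothetical, and \timing reports per-statement elapsed time so the batch does not have to be wrapped in an external time command:

\timing
DELETE FROM model_table;     -- near-instant when the table is already empty
TRUNCATE TABLE model_table;  -- also pays for creating new relation files and empty indexes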
[ { "msg_contents": "I'm running PG 7.2.2 on RH Linux 8.0.\n\nI'd like to know why \"VACUUM ANALYZE <table>\" is extemely slow (hours) for \ncertain tables. Here's what the log file shows when I run this command on \nmy \"employees\" table, which has just 5 columns and 55 records:\n\nVACUUM ANALYZE employees\n\nDEBUG: --Relation employees--\nDEBUG: index employees_pkey: Pages 2; Tuples 55: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nDEBUG: index emp_dept_id_idx: Pages 2; Tuples 55: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nDEBUG: index emp_emp_num_idx: Pages 2; Tuples 55: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nDEBUG: recycled transaction log file 00000000000000CC\nDEBUG: geqo_main: using edge recombination crossover [ERX]\n\n(When I get a chance I will enable timestamping of log file entries.)\n\nThanks for any insight. Please reply to me personally ([email protected])\nas well as to the list.\n\n-David\n", "msg_date": "Tue, 9 Dec 2003 14:14:44 -0700", "msg_from": "\"David Shadovitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why is VACUUM ANALYZE <table> so slow?" }, { "msg_contents": "\"David Shadovitz\" <[email protected]> writes:\n> I'm running PG 7.2.2 on RH Linux 8.0.\n\nNote that this version of PostgreSQL is quite old.\n\n> I'd like to know why \"VACUUM ANALYZE <table>\" is extemely slow (hours) for \n> certain tables.\n\nIs there another concurrent transaction that has modified the table\nbut has not committed? VACUUM ANALYZE will need to block waiting for\nit. You might be able to get some insight into this by examining the\npg_locks system view:\n\nhttp://www.postgresql.org/docs/current/static/monitoring-locks.html\n\nAs well as the pg_stat_activity view.\n\n-Neil\n\n", "msg_date": "Tue, 16 Dec 2003 17:51:18 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is VACUUM ANALYZE <table> so slow?" } ]
[ { "msg_contents": "Hello, \nToday I met a very strange query problem, which I\nspend several hours on it but have no clue. To make\nthing clear, let me write somewhat in detail.\n\nI have two almost exactly same queries, except that\none is: lower(annotation) = lower (chip), another\nis: annotation = chip. While the first one can get\nresult in less 10 seconds, the second one will hange\nfor more that 5 minutes. What a big differents !! \n\nI checked the indexes, there are both index for\nlower() and without lower(). I even droped these\nindexes and recreated them, then use vacuum analyze,\nreindex, but thing does not change. the query plan\ngive quite different paths.\n \nCould somebody give any clues where difference comes\nfrom? Thanks a lot.\n\nThe first query, which get results in less than 10\nseconds\n\n PGA=> explain select ei.expid, er.geneid,\ner.sampleid, ei.annotation, si.samplename, \nei.title as exp_name, aaa.chip,\naaa.sequence_derived_from as accession_number,\naaa.gene_symbol, aaa.title as gene_function,\ner.exprs, er.mas5exprs from expressiondata er,\nexperimentinfo ei, sampleinfo si,\naffy_array_annotation aaa where exists (select\ndistinct ei.expid from experimentinfo) and\nlower(ei.annotation) = lower (aaa.chip) and (lower\n(aaa.title) like '%mif%' or\nlower(aaa.sequence_description) like '%mif%') and\nexists (select distinct ei.annotation from\nexperimentinfo) and ei.expid = er.expid and er.expid\n= si.expid and er.sampleid = si.sampleid and\ner.geneid = aaa.probeset_id order by si.sampleid\nlimit 20;\n \n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n-------------------\n Limit (cost=24289.05..24289.10 rows=19 width=256)\n -> Sort (cost=24289.05..24289.10 rows=19 width=256)\n Sort Key: si.sampleid\n -> Hash Join (cost=6.11..24288.64 rows=19\nwidth=256)\n Hash Cond: (\"outer\".expid =\n\"inner\".expid)\n Join Filter: (\"outer\".sampleid =\n\"inner\".sampleid)\n -> Nested Loop (cost=0.00..24278.66\nrows=27 width=217)\n Join Filter: (\"outer\".expid =\n\"inner\".expid)\n -> Nested Loop \n(cost=0.00..18378.77 rows=45 width=180)\n -> Seq Scan on\nexperimentinfo ei (cost=0.00..374.50 rows=5 width=99)\n Filter: ((subplan)\nAND (subplan))\n SubPlan\n -> Unique \n(cost=8.67..8.78 rows=2 width=0)\n -> Sort \n(cost=8.67..8.72 rows=21 width=0)\n Sort\nKey: $0\n -> \nSeq Scan on experimentinfo (cost=0.00..8.21 rows=21\nwidth=0)\n -> Unique \n(cost=8.67..8.78 rows=2 width=0)\n -> Sort \n(cost=8.67..8.72 rows=21 width=0)\n Sort\nKey: $1\n -> \nSeq Scan on experimentinfo (cost=0.00..8.21 rows=21\nwidth=0)\n -> Index Scan using\naffy_array_annotation_lower_chip_idx on\naffy_array_annotation aaa (cost=0.00..3429.2\n4 rows=9 width=81)\n Index Cond:\n(lower((\"outer\".annotation)::text) =\nlower((aaa.chip)::text))\n Filter:\n((lower(title) ~~ '%mif%'::text) OR\n(lower(sequence_description) ~~ '%mif%'::text))\n -> Index Scan using\nexpressiondata_geneid_idx on expressiondata er \n(cost=0.00..130.96 rows=34 width=37)\n Index Cond: (er.geneid =\n\"outer\".probeset_id)\n -> Hash (cost=4.55..4.55 rows=155\nwidth=39)\n -> Seq Scan on sampleinfo si \n(cost=0.00..4.55 rows=155 width=39)\n(27 rows)\n\n=====================\nThe second query, which hangs. 
\n\n\nPGA=> explain select ei.expid, er.geneid,\ner.sampleid, ei.annotation, si.samplename, \nei.title as exp_name, aaa.chip,\naaa.sequence_derived_from as accession_number,\naaa.gene_symbol, aaa.title as gene_function,\ner.exprs, er.mas5exprs from expressiondata er,\nexperimentinfo ei, sampleinfo si,\naffy_array_annotation aaa where exists (select\ndistinct ei.expid from experimentinfo) and\nei.annotation = aaa.chip and (lower (aaa.title)\nlike '%mif%' or lower(aaa.sequence_description) like\n'%mif%') and exists (select distinct ei.annotation\nfrom experimentinfo) and ei.expid = er.expid and\ner.expid = si.expid and er.sampleid = si.sampleid\nand er.geneid = aaa.probeset_id order by si.sampleid\nlimit 20;\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=157127.91..157128.38 rows=20 width=256)\n -> Merge Join (cost=157127.91..157137.33\nrows=401 width=256)\n Merge Cond: ((\"outer\".sampleid =\n\"inner\".sampleid) AND (\"outer\".expid = \"inner\".expid))\n -> Sort (cost=157117.73..157119.11\nrows=553 width=217)\n Sort Key: er.sampleid, er.expid\n -> Merge Join \n(cost=154417.78..157092.52 rows=553 width=217)\n Merge Cond:\n((\"outer\".annotation = \"inner\".chip) AND\n(\"outer\".geneid = \"inner\".probeset_id))\n -> Sort \n(cost=96501.38..97830.62 rows=531694 width=136)\n Sort Key: ei.annotation,\ner.geneid\n -> Nested Loop \n(cost=0.00..20188.81 rows=531694 width=136)\n -> Seq Scan on\nexperimentinfo ei (cost=0.00..374.50 rows=5 width=99)\n Filter:\n((subplan) AND (subplan))\n SubPlan\n -> Unique\n (cost=8.67..8.78 rows=2 width=0)\n -> \nSort (cost=8.67..8.72 rows=21 width=0)\n \n Sort Key: $0\n \n -> Seq Scan on experimentinfo (cost=0.00..8.21\nrows=21 width=0)\n -> Unique\n (cost=8.67..8.78 rows=2 width=0)\n -> \nSort (cost=8.67..8.72 rows=21 width=0)\n \n Sort Key: $1\n \n -> Seq Scan on experimentinfo (cost=0.00..8.21\nrows=21 width=0)\n -> Index Scan\nusing expressiondata_expid_idx on expressiondata er\n (cost=0.00..2508.21 rows=101275 width=37)\n Index Cond:\n(\"outer\".expid = er.expid)\n -> Sort \n(cost=57916.40..57920.67 rows=1710 width=81)\n Sort Key: aaa.chip,\naaa.probeset_id\n -> Seq Scan on\naffy_array_annotation aaa (cost=0.00..57824.60\nrows=1710 width=81)\n Filter:\n((lower(title) ~~ '%mif%'::text) OR\n(lower(sequence_description) ~~ '%mif%'::text))\n -> Sort (cost=10.19..10.58 rows=155 width=39)\n Sort Key: si.sampleid, si.expid\n -> Seq Scan on sampleinfo si \n(cost=0.00..4.55 rows=155 width=39)\n(30 rows)\n\n=================\nThe related tables:\n\n Table \"public.experimentinfo\"\n Column | Type | Modifiers\n---------------+------------------------+-----------\n expid | integer |\n name | character varying(128) |\n lab | character varying(128) |\n contact | character varying(128) |\n title | character varying(128) |\n abstract | text |\n nsamples | integer |\n disease_type | character varying(32) |\n annotation | character varying(32) |\nIndexes: experimetininfo_annotation_idx btree\n(annotation),\n experimetininfo_lower_annotation_idx btree\n(lower(annotation)),\n expinfo btree (expid)\n\n\n Table \"public.affy_array_annotation\"\n Column | Type \n | Modifiers\n-----------------------------------+------------------------+-----------\n chip | character\nvarying(32) | not null\n organism | character\nvarying(24) |\n annotation_date | character\nvarying(24) |\n sequence_type | character\nvarying(24) |\n sequence_source | character\nvarying(32) 
|\n sequence_derived_from | character\nvarying(32) |\n sequence_description | text \n |\n sequence_id | text \n |\n transcript_id | character\nvarying(32) |\n group_id | character\nvarying(64) |\n title | text \n |\n gene_symbol | character\nvarying(64) |\n\nIndexes: affy_array_annotation_chip_idx btree (chip),\n affy_array_annotation_idx_gene_symbol btree\n(gene_symbol),\n affy_array_annotation_idx_locuslink btree\n(locuslink),\n affy_array_annotation_idx_omim btree (omim),\n affy_array_annotation_idx_pfam btree (pfam),\n \naffy_array_annotation_idx_sequence_derived_from\nbtree (sequence_derived_from),\n \naffy_array_annotation_idx_sequence_description btree\n(sequence_description),\n \n affy_array_annotation_idx_title btree (title),\n \n affy_array_annotation_lower_chip_idx btree\n(lower(chip)),\n affy_array_annotation_lower_gene_symbol_idx\nbtree (lower(gene_symbol)),\n \n affy_array_annotation_lower_probeset_id_idx\nbtree (lower(probeset_id)),\n \naffy_array_annotation_lower_sequence_description_idx\nbtree (lower(sequence_description)),\n affy_array_annotation_lower_title_idx btree\n(lower(title)),\n \n affy_array_annotation_pkey btree\n(probeset_id, chip),\n affy_array_annotation_probeset_id_idx btree\n(probeset_id),\n \n\n\n\n\n\nRegards,\nWilliam\n\n", "msg_date": "Tue, 09 Dec 2003 23:24:19 +0000 (GMT)", "msg_from": "LIANHE SHAO <[email protected]>", "msg_from_op": true, "msg_subject": "Index problem or function problem?" }, { "msg_contents": "LIANHE SHAO <[email protected]> writes:\n> PGA=> explain select ei.expid, er.geneid,\n> er.sampleid, ei.annotation, si.samplename, \n> ei.title as exp_name, aaa.chip,\n> aaa.sequence_derived_from as accession_number,\n> aaa.gene_symbol, aaa.title as gene_function,\n> er.exprs, er.mas5exprs from expressiondata er,\n> experimentinfo ei, sampleinfo si,\n> affy_array_annotation aaa where exists (select\n> distinct ei.expid from experimentinfo) and\n> ei.annotation = aaa.chip and (lower (aaa.title)\n> like '%mif%' or lower(aaa.sequence_description) like\n> '%mif%') and exists (select distinct ei.annotation\n> from experimentinfo) and ei.expid = er.expid and\n> er.expid = si.expid and er.sampleid = si.sampleid\n> and er.geneid = aaa.probeset_id order by si.sampleid\n> limit 20;\n\nWhat is the purpose of the EXISTS() clauses? They are almost surely not\ndoing what you intended, because AFAICS they are just an extremely\nexpensive means of producing a constant-TRUE result. In\n\texists (select distinct ei.expid from experimentinfo)\n\"ei.expid\" is an outer reference, which will necessarily be the same\nvalue over all rows of the sub-select. After computing this same value\nfor every row of experimentinfo, the system performs a DISTINCT\noperation (sort + unique, not cheap) ... and then all it checks for is\nwhether at least one row was produced, which means the DISTINCT\noperation was completely unnecessary. 
The only way the EXISTS could\nreturn false is if experimentinfo were empty, but if it were so then the\nouter FROM would've produced no rows and we'd not have got to WHERE\nanyway.\n\nI'm not sure why you get a worse plan for the simpler variant of the\nquery; it would help to see EXPLAIN ANALYZE rather than EXPLAIN output.\nBut it's not worth trying to improve the performance until you are\ncalculating correct answers, and I suspect the above is not doing\nwhat you are after at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Dec 2003 13:05:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index problem or function problem? " } ]
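Following Tom's reading, the EXISTS clauses can simply be dropped; a sketch of the trimmed query, using the same tables and join conditions as the faster variant shown earlier in the thread:

SELECT ei.expid, er.geneid, er.sampleid, ei.annotation, si.samplename,
       ei.title AS exp_name, aaa.chip, aaa.sequence_derived_from AS accession_number,
       aaa.gene_symbol, aaa.title AS gene_function, er.exprs, er.mas5exprs
FROM expressiondata er, experimentinfo ei, sampleinfo si, affy_array_annotation aaa
WHERE lower(ei.annotation) = lower(aaa.chip)
  AND (lower(aaa.title) LIKE '%mif%' OR lower(aaa.sequence_description) LIKE '%mif%')
  AND ei.expid = er.expid
  AND er.expid = si.expid
  AND er.sampleid = si.sampleid
  AND er.geneid = aaa.probeset_id
ORDER BY si.sampleid
LIMIT 20;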
[ { "msg_contents": "This is a well-worn thread title - apologies, but these results seemed \ninteresting, and hopefully useful in the quest to get better performance \non Solaris:\n\nI was curious to see if the rather uninspiring pgbench performance \nobtained from a Sun 280R (see General: ATA Disks and RAID controllers \nfor database servers) could be improved if more time was spent \ntuning. \n\nWith the help of a fellow workmate who is a bit of a Solaris guy, we \ndecided to have a go.\n\nThe major performance killer appeared to be mounting the filesystem with \nthe logging option. The next most significant seemed to be the choice of \nsync_method for Pg - the default (open_datasync), which we initially \nthought should be the best - appears noticeably slower than fdatasync.\n\nWe also tried changing some of the tuneable filesystem options using \ntunefs - without any measurable effect.\n\nAre Pg/Solaris folks running with logging on and sync_method default out \nthere ? - or have most of you been through this already ?\n\n\nPgbench Results (no. clients and transactions/s ) :\n\nSetup 1: filesystem mounted with logging\n\nNo. tps\n-----------\n1 17\n2 17\n4 22\n8 22\n16 28\n32 32\n64 37\n\nSetup 2: filesystem mounted without logging\n\nNo. tps\n-----------\n1 48\n2 55\n4 57\n8 62\n16 65\n32 82\n64 95\n\nSetup 3 : filesystem mounted without logging, Pg sync_method = fdatasync\n\nNo. tps\n-----------\n1 89\n2 94\n4 95\n8 93\n16 99\n32 115\n64 122\n\nNote : The Pgbench runs were conducted using -s 10 and -t 1000 -c 1->64, \n2 - 3 runs of each setup were performed (averaged figures shown).\n\nMark\n\n", "msg_date": "Wed, 10 Dec 2003 18:56:38 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Solaris Performance (Again)" }, { "msg_contents": "On Wed, 10 Dec 2003 18:56:38 +1300\nMark Kirkwood <[email protected]> wrote:\n\n> The major performance killer appeared to be mounting the filesystem\n> with the logging option. The next most significant seemed to be the\n> choice of sync_method for Pg - the default (open_datasync), which we\n> initially thought should be the best - appears noticeably slower than\n> fdatasync.\n> \n\nSome interesting stuff, I'll have to play with it. Currently I'm pleased\nwith my solaris performance.\n\nWhat version of PG?\n\nIf it is before 7.4 PG compiles with _NO_ optimization by default and\nwas a huge part of the slowness of PG on solaris. 
\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Wed, 10 Dec 2003 08:53:23 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris Performance (Again)" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> Note : The Pgbench runs were conducted using -s 10 and -t 1000 -c\n> 1->64, 2 - 3 runs of each setup were performed (averaged figures\n> shown).\n\nFYI, the pgbench docs state:\n\n NOTE: scaling factor should be at least as large as the largest\n number of clients you intend to test; else you'll mostly be\n measuring update contention.\n\n-Neil\n\n", "msg_date": "Wed, 10 Dec 2003 14:15:35 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris Performance (Again)" }, { "msg_contents": "Good point -\n\nIt is Pg 7.4beta1 , compiled with\n\nCFLAGS += -O2 -funroll-loops -fexpensive-optimizations\n\nJeff wrote:\n\n>\n>What version of PG?\n>\n>If it is before 7.4 PG compiles with _NO_ optimization by default and\n>was a huge part of the slowness of PG on solaris. \n>\n>\n> \n>\n\n", "msg_date": "Thu, 11 Dec 2003 19:04:15 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris Performance (Again)" }, { "msg_contents": "yes - originally I was going to stop at 8 clients, but once the bit was \nbetween the teeth....If I get another box to myself I will try -s 50 or \n100 and see what that shows up.\n\ncheers\n\nMark\n\nNeil Conway wrote:\n\n> FYI, the pgbench docs state:\n>\n> NOTE: scaling factor should be at least as large as the largest\n> number of clients you intend to test; else you'll mostly be\n> measuring update contention.\n>\n>-Neil\n>\n> \n>\n\n", "msg_date": "Thu, 11 Dec 2003 19:09:47 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris Performance (Again)" }, { "msg_contents": "Mark Kirkwood wrote:\n> This is a well-worn thread title - apologies, but these results seemed \n> interesting, and hopefully useful in the quest to get better performance \n> on Solaris:\n> \n> I was curious to see if the rather uninspiring pgbench performance \n> obtained from a Sun 280R (see General: ATA Disks and RAID controllers \n> for database servers) could be improved if more time was spent \n> tuning. \n> \n> With the help of a fellow workmate who is a bit of a Solaris guy, we \n> decided to have a go.\n> \n> The major performance killer appeared to be mounting the filesystem with \n> the logging option. The next most significant seemed to be the choice of \n> sync_method for Pg - the default (open_datasync), which we initially \n> thought should be the best - appears noticeably slower than fdatasync.\n\nI thought the default was fdatasync, but looking at the code it seems\nthe default is open_datasync if O_DSYNC is available.\n\nI assume the logic is that we usually do only one write() before\nfsync(), so open_datasync should be faster. 
Why do we not use O_FSYNC\nover fsync().\n\nLooking at the code:\n\t\n\t#if defined(O_SYNC)\n\t#define OPEN_SYNC_FLAG O_SYNC\n\t#else\n\t#if defined(O_FSYNC)\n\t#define OPEN_SYNC_FLAG O_FSYNC\n\t#endif\n\t#endif\n\t\n\t#if defined(OPEN_SYNC_FLAG)\n\t#if defined(O_DSYNC) && (O_DSYNC != OPEN_SYNC_FLAG)\n\t#define OPEN_DATASYNC_FLAG O_DSYNC\n\t#endif\n\t#endif\n\t\n\t#if defined(OPEN_DATASYNC_FLAG)\n\t#define DEFAULT_SYNC_METHOD_STR \"open_datasync\"\n\t#define DEFAULT_SYNC_METHOD SYNC_METHOD_OPEN\n\t#define DEFAULT_SYNC_FLAGBIT OPEN_DATASYNC_FLAG\n\t#else\n\t#if defined(HAVE_FDATASYNC)\n\t#define DEFAULT_SYNC_METHOD_STR \"fdatasync\"\n\t#define DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC\n\t#define DEFAULT_SYNC_FLAGBIT 0\n\t#else\n\t#define DEFAULT_SYNC_METHOD_STR \"fsync\"\n\t#define DEFAULT_SYNC_METHOD SYNC_METHOD_FSYNC\n\t#define DEFAULT_SYNC_FLAGBIT 0\n\t#endif\n\t#endif\n\nI think the problem is that we prefer O_DSYNC over fdatasync, but do not\nprefer O_FSYNC over fsync.\n\nRunning the attached test program shows on BSD/OS 4.3:\n\n\twrite 0.000360\n\twrite & fsync 0.001391\n\twrite, close & fsync 0.001308\n\topen o_fsync, write 0.000924\n\nshowing O_FSYNC faster than fsync().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n/*\n *\ttest_fsync.c\n *\t\ttests if fsync can be done from another process than the original write\n */\n\n#include <sys/types.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include <unistd.h>\n\nvoid die(char *str);\nvoid print_elapse(struct timeval start_t, struct timeval elapse_t);\n\nint main(int argc, char *argv[])\n{\n\tstruct timeval start_t;\n\tstruct timeval elapse_t;\n\tint tmpfile;\n\tchar *strout = \"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\";\n\n\t/* write only */\t\n\tgettimeofday(&start_t, NULL);\n\tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n\t\tdie(\"can't open /var/tmp/test_fsync.out\");\n\twrite(tmpfile, &strout, 200);\n\tclose(tmpfile);\t\t\n\tgettimeofday(&elapse_t, NULL);\n\tunlink(\"/var/tmp/test_fsync.out\");\n\tprintf(\"write \");\n\tprint_elapse(start_t, elapse_t);\n\tprintf(\"\\n\");\n\n\t/* write & fsync */\n\tgettimeofday(&start_t, NULL);\n\tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n\t\tdie(\"can't open /var/tmp/test_fsync.out\");\n\twrite(tmpfile, &strout, 200);\n\tfsync(tmpfile);\n\tclose(tmpfile);\t\t\n\tgettimeofday(&elapse_t, NULL);\n\tunlink(\"/var/tmp/test_fsync.out\");\n\tprintf(\"write & fsync \");\n\tprint_elapse(start_t, elapse_t);\n\tprintf(\"\\n\");\n\n\t/* write, close & fsync */\n\tgettimeofday(&start_t, NULL);\n\tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n\t\tdie(\"can't open /var/tmp/test_fsync.out\");\n\twrite(tmpfile, &strout, 200);\n\tclose(tmpfile);\n\t/* reopen file */\n\tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n\t\tdie(\"can't open /var/tmp/test_fsync.out\");\n\tfsync(tmpfile);\n\tclose(tmpfile);\t\t\n\tgettimeofday(&elapse_t, NULL);\n\tunlink(\"/var/tmp/test_fsync.out\");\n\tprintf(\"write, close & fsync \");\n\tprint_elapse(start_t, elapse_t);\n\tprintf(\"\\n\");\n\n\t/* open_fsync, write */\n\tgettimeofday(&start_t, NULL);\n\tif 
((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT | O_FSYNC)) == -1)\n\t\tdie(\"can't open /var/tmp/test_fsync.out\");\n\twrite(tmpfile, &strout, 200);\n\tclose(tmpfile);\n\tgettimeofday(&elapse_t, NULL);\n\tunlink(\"/var/tmp/test_fsync.out\");\n\tprintf(\"open o_fsync, write \");\n\tprint_elapse(start_t, elapse_t);\n\tprintf(\"\\n\");\n\n\treturn 0;\n}\n\nvoid print_elapse(struct timeval start_t, struct timeval elapse_t)\n{\n\tif (elapse_t.tv_usec < start_t.tv_usec)\n\t{\n\t\telapse_t.tv_sec--;\n\t\telapse_t.tv_usec += 1000000;\n\t}\n\n\tprintf(\"%ld.%06ld\", (long) (elapse_t.tv_sec - start_t.tv_sec),\n\t\t\t\t\t (long) (elapse_t.tv_usec - start_t.tv_usec));\n}\n\nvoid die(char *str)\n{\n\tfprintf(stderr, \"%s\", str);\n\texit(1);\n}", "msg_date": "Fri, 12 Dec 2003 01:49:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "fsync method checking" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\twrite 0.000360\n>\twrite & fsync 0.001391\n>\twrite, close & fsync 0.001308\n>\topen o_fsync, write 0.000924\n> \n>\nThat's 1 milliseconds vs. 1.3 milliseconds. Neither value is realistic - \nI guess the hw cache on and the os doesn't issue cache flush commands. \nRealistic values are probably 5 ms vs 5.3 ms - 6%, not 30%. How large is \nthe syscall latency with BSD/OS 4.3?\n\nOne advantage of a seperate write and fsync call is better performance \nfor the writes that are triggered within AdvanceXLInsertBuffer: I'm not \nsure how often that's necessary, but it's a write while holding both the \nWALWriteLock and WALInsertLock. If every write contains an implicit \nsync, that call would be much more expensive than necessary.\n\n--\n Manfred\n\n", "msg_date": "Fri, 12 Dec 2003 21:54:34 +0100", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Manfred Spraul <[email protected]> writes:\n> One advantage of a seperate write and fsync call is better performance \n> for the writes that are triggered within AdvanceXLInsertBuffer: I'm not \n> sure how often that's necessary, but it's a write while holding both the \n> WALWriteLock and WALInsertLock. If every write contains an implicit \n> sync, that call would be much more expensive than necessary.\n\nIdeally that path isn't taken very often. But I'm currently having a\ndiscussion off-list with a CMU student who seems to be seeing a case\nwhere it happens a lot. (She reports that both WALWriteLock and\nWALInsertLock are causes of a lot of process blockages, which seems to\nmean that a lot of the WAL I/O is being done with both held, which would\nhave to mean that AdvanceXLInsertBuffer is doing the I/O. More when we\nfigure out what's going on exactly...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Dec 2003 16:28:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "\nI have been poking around with our fsync default options to see if I can\nimprove them. One issue is that we never default to O_SYNC, but default\nto O_DSYNC if it exists, which seems strange.\n\nWhat I did was to beef up my test program and get it into CVS for folks\nto run. What I found was that different operating systems have\ndifferent optimal defaults. 
On BSD/OS and FreeBSD, fdatasync/fsync was\nbetter, but on Linux, O_DSYNC/O_SYNC was faster.\n\nBSD/OS 4.3:\n\tSimple write timing:\n\t write 0.000055\n\t\n\tCompare fsync before and after write's close:\n\t write, fsync, close 0.000707\n\t write, close, fsync 0.000808\n\t\n\tCompare one o_sync write to two:\n\t one 16k o_sync write 0.009762\n\t two 8k o_sync writes 0.008799\n\t\n\tCompare file sync methods with one 8k write:\n\t (o_dsync unavailable)\n\t open o_sync, write 0.000658\n\t (fdatasync unavailable)\n\t write, fsync, 0.000702\n\t\n\tCompare file sync methods with 2 8k writes:\n\t(The fastest should be used for wal_sync_method)\n\t (o_dsync unavailable)\n\t open o_sync, write 0.010402\n\t (fdatasync unavailable)\n\t write, fsync, 0.001025\n\nThis shows terrible O_SYNC performance for 2 8k writes, but is faster\nfor a single 8k write. Strange.\n\nFreeBSD 4.9:\n\tSimple write timing:\n\t write 0.000083\n\t\n\tCompare fsync before and after write's close:\n\t write, fsync, close 0.000412\n\t write, close, fsync 0.000453\n\t\n\tCompare one o_sync write to two:\n\t one 16k o_sync write 0.000409\n\t two 8k o_sync writes 0.000993\n\t\n\tCompare file sync methods with one 8k write:\n\t (o_dsync unavailable)\n\t open o_sync, write 0.000683\n\t (fdatasync unavailable)\n\t write, fsync, 0.000405\n\t\n\tCompare file sync methods with 2 8k writes:\n\t (o_dsync unavailable)\n\t open o_sync, write 0.000789\n\t (fdatasync unavailable)\n\t write, fsync, 0.000414\n\nThis shows fsync to be fastest in both cases.\n\nLinux 2.4.9:\n\tSimple write timing:\n\t write 0.000061\n\t\n\tCompare fsync before and after write's close:\n\t write, fsync, close 0.000398\n\t write, close, fsync 0.000407\n\t\n\tCompare one o_sync write to two:\n\t one 16k o_sync write 0.000570\n\t two 8k o_sync writes 0.000340\n\t\n\tCompare file sync methods with one 8k write:\n\t (o_dsync unavailable)\n\t open o_sync, write 0.000166\n\t write, fdatasync 0.000462\n\t write, fsync, 0.000447\n\t\n\tCompare file sync methods with 2 8k writes:\n\t (o_dsync unavailable)\n\t open o_sync, write 0.000334\n\t write, fdatasync 0.000445\n\t write, fsync, 0.000447\n\t\nThis shows O_SYNC to be fastest, even for 2 8k writes.\n\nThis unapplied patch:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/fsync\n\nadds DEFAULT_OPEN_SYNC to the bsdi/freebsd/linux template files, which\ncontrols the default for those platforms. Platforms with no template\ndefault to fdatasync/fsync.\n\nWould other users run src/tools/fsync and report their findings so I can\nupdate the template files for their OS's? This is a process similar to\nour thread testing.\n\nThanks.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Mark Kirkwood wrote:\n> > This is a well-worn thread title - apologies, but these results seemed \n> > interesting, and hopefully useful in the quest to get better performance \n> > on Solaris:\n> > \n> > I was curious to see if the rather uninspiring pgbench performance \n> > obtained from a Sun 280R (see General: ATA Disks and RAID controllers \n> > for database servers) could be improved if more time was spent \n> > tuning. \n> > \n> > With the help of a fellow workmate who is a bit of a Solaris guy, we \n> > decided to have a go.\n> > \n> > The major performance killer appeared to be mounting the filesystem with \n> > the logging option. 
The next most significant seemed to be the choice of \n> > sync_method for Pg - the default (open_datasync), which we initially \n> > thought should be the best - appears noticeably slower than fdatasync.\n> \n> I thought the default was fdatasync, but looking at the code it seems\n> the default is open_datasync if O_DSYNC is available.\n> \n> I assume the logic is that we usually do only one write() before\n> fsync(), so open_datasync should be faster. Why do we not use O_FSYNC\n> over fsync().\n> \n> Looking at the code:\n> \t\n> \t#if defined(O_SYNC)\n> \t#define OPEN_SYNC_FLAG O_SYNC\n> \t#else\n> \t#if defined(O_FSYNC)\n> \t#define OPEN_SYNC_FLAG O_FSYNC\n> \t#endif\n> \t#endif\n> \t\n> \t#if defined(OPEN_SYNC_FLAG)\n> \t#if defined(O_DSYNC) && (O_DSYNC != OPEN_SYNC_FLAG)\n> \t#define OPEN_DATASYNC_FLAG O_DSYNC\n> \t#endif\n> \t#endif\n> \t\n> \t#if defined(OPEN_DATASYNC_FLAG)\n> \t#define DEFAULT_SYNC_METHOD_STR \"open_datasync\"\n> \t#define DEFAULT_SYNC_METHOD SYNC_METHOD_OPEN\n> \t#define DEFAULT_SYNC_FLAGBIT OPEN_DATASYNC_FLAG\n> \t#else\n> \t#if defined(HAVE_FDATASYNC)\n> \t#define DEFAULT_SYNC_METHOD_STR \"fdatasync\"\n> \t#define DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC\n> \t#define DEFAULT_SYNC_FLAGBIT 0\n> \t#else\n> \t#define DEFAULT_SYNC_METHOD_STR \"fsync\"\n> \t#define DEFAULT_SYNC_METHOD SYNC_METHOD_FSYNC\n> \t#define DEFAULT_SYNC_FLAGBIT 0\n> \t#endif\n> \t#endif\n> \n> I think the problem is that we prefer O_DSYNC over fdatasync, but do not\n> prefer O_FSYNC over fsync.\n> \n> Running the attached test program shows on BSD/OS 4.3:\n> \n> \twrite 0.000360\n> \twrite & fsync 0.001391\n> \twrite, close & fsync 0.001308\n> \topen o_fsync, write 0.000924\n> \n> showing O_FSYNC faster than fsync().\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\n> /*\n> *\ttest_fsync.c\n> *\t\ttests if fsync can be done from another process than the original write\n> */\n> \n> #include <sys/types.h>\n> #include <fcntl.h>\n> #include <stdio.h>\n> #include <stdlib.h>\n> #include <time.h>\n> #include <unistd.h>\n> \n> void die(char *str);\n> void print_elapse(struct timeval start_t, struct timeval elapse_t);\n> \n> int main(int argc, char *argv[])\n> {\n> \tstruct timeval start_t;\n> \tstruct timeval elapse_t;\n> \tint tmpfile;\n> \tchar *strout = \"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\";\n> \n> \t/* write only */\t\n> \tgettimeofday(&start_t, NULL);\n> \tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n> \t\tdie(\"can't open /var/tmp/test_fsync.out\");\n> \twrite(tmpfile, &strout, 200);\n> \tclose(tmpfile);\t\t\n> \tgettimeofday(&elapse_t, NULL);\n> \tunlink(\"/var/tmp/test_fsync.out\");\n> \tprintf(\"write \");\n> \tprint_elapse(start_t, elapse_t);\n> \tprintf(\"\\n\");\n> \n> \t/* write & fsync */\n> \tgettimeofday(&start_t, NULL);\n> \tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n> \t\tdie(\"can't open /var/tmp/test_fsync.out\");\n> \twrite(tmpfile, &strout, 200);\n> \tfsync(tmpfile);\n> \tclose(tmpfile);\t\t\n> \tgettimeofday(&elapse_t, NULL);\n> \tunlink(\"/var/tmp/test_fsync.out\");\n> \tprintf(\"write & fsync \");\n> \tprint_elapse(start_t, elapse_t);\n> \tprintf(\"\\n\");\n> \n> \t/* write, close & fsync */\n> \tgettimeofday(&start_t, NULL);\n> \tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n> \t\tdie(\"can't open /var/tmp/test_fsync.out\");\n> \twrite(tmpfile, &strout, 200);\n> \tclose(tmpfile);\n> \t/* reopen file */\n> \tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT)) == -1)\n> \t\tdie(\"can't open /var/tmp/test_fsync.out\");\n> \tfsync(tmpfile);\n> \tclose(tmpfile);\t\t\n> \tgettimeofday(&elapse_t, NULL);\n> \tunlink(\"/var/tmp/test_fsync.out\");\n> \tprintf(\"write, close & fsync \");\n> \tprint_elapse(start_t, elapse_t);\n> \tprintf(\"\\n\");\n> \n> \t/* open_fsync, write */\n> \tgettimeofday(&start_t, NULL);\n> \tif ((tmpfile = open(\"/var/tmp/test_fsync.out\", O_RDWR | O_CREAT | O_FSYNC)) == -1)\n> \t\tdie(\"can't open /var/tmp/test_fsync.out\");\n> \twrite(tmpfile, &strout, 200);\n> \tclose(tmpfile);\n> \tgettimeofday(&elapse_t, NULL);\n> \tunlink(\"/var/tmp/test_fsync.out\");\n> \tprintf(\"open o_fsync, write \");\n> \tprint_elapse(start_t, elapse_t);\n> \tprintf(\"\\n\");\n> \n> \treturn 0;\n> }\n> \n> void print_elapse(struct timeval start_t, struct timeval elapse_t)\n> {\n> \tif (elapse_t.tv_usec < start_t.tv_usec)\n> \t{\n> \t\telapse_t.tv_sec--;\n> \t\telapse_t.tv_usec += 1000000;\n> \t}\n> \n> \tprintf(\"%ld.%06ld\", (long) (elapse_t.tv_sec - start_t.tv_sec),\n> \t\t\t\t\t (long) (elapse_t.tv_usec - start_t.tv_usec));\n> }\n> \n> void die(char *str)\n> {\n> \tfprintf(stderr, \"%s\", str);\n> \texit(1);\n> }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 12:46:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n>I have been poking around with our fsync default options to see if I can\n>improve them. One issue is that we never default to O_SYNC, but default\n>to O_DSYNC if it exists, which seems strange.\n>\n>What I did was to beef up my test program and get it into CVS for folks\n>to run. What I found was that different operating systems have\n>different optimal defaults. On BSD/OS and FreeBSD, fdatasync/fsync was\n>better, but on Linux, O_DSYNC/O_SYNC was faster.\n>\n>[snip]\n>\n>Linux 2.4.9:\n>\t\n>\n\nThis is a pretty old kernel (I am writing from a machine running 2.4.22)\n\nMaybe before we do this for Linux testing on a more modern kernel might \nbe wise.\n\ncheers\n\nandrew\n\n", "msg_date": "Thu, 18 Mar 2004 13:23:14 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync method checking" }, { "msg_contents": "Andrew Dunstan wrote:\n> \n> \n> Bruce Momjian wrote:\n> \n> >I have been poking around with our fsync default options to see if I can\n> >improve them. One issue is that we never default to O_SYNC, but default\n> >to O_DSYNC if it exists, which seems strange.\n> >\n> >What I did was to beef up my test program and get it into CVS for folks\n> >to run. What I found was that different operating systems have\n> >different optimal defaults. On BSD/OS and FreeBSD, fdatasync/fsync was\n> >better, but on Linux, O_DSYNC/O_SYNC was faster.\n> >\n> >[snip]\n> >\n> >Linux 2.4.9:\n> >\t\n> >\n> \n> This is a pretty old kernel (I am writing from a machine running 2.4.22)\n> \n> Maybe before we do this for Linux testing on a more modern kernel might \n> be wise.\n\nSure, I am sure someone will post results.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 13:40:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync method checking" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have been poking around with our fsync default options to see if I can\n> improve them. One issue is that we never default to O_SYNC, but default\n> to O_DSYNC if it exists, which seems strange.\n\nAs I recall, that was based on testing on some different platforms.\nIt's not particularly \"strange\": O_SYNC implies writing at least two\nplaces on the disk (file and inode). O_DSYNC or fdatasync should\ntheoretically be the fastest alternatives, O_SYNC and fsync the worst.\n\t\n> \tCompare fsync before and after write's close:\n> \t write, fsync, close 0.000707\n> \t write, close, fsync 0.000808\n\nWhat does that mean? You can't fsync a closed file.\n\n> This shows terrible O_SYNC performance for 2 8k writes, but is faster\n> for a single 8k write. Strange.\n\nI'm not sure I believe these numbers at all... 
my experience is that\ngetting trustworthy disk I/O numbers is *not* easy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 13:44:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I have been poking around with our fsync default options to see if I can\n> > improve them. One issue is that we never default to O_SYNC, but default\n> > to O_DSYNC if it exists, which seems strange.\n> \n> As I recall, that was based on testing on some different platforms.\n> It's not particularly \"strange\": O_SYNC implies writing at least two\n> places on the disk (file and inode). O_DSYNC or fdatasync should\n> theoretically be the fastest alternatives, O_SYNC and fsync the worst.\n\nBut why perfer O_DSYNC over fdatasync if you don't prefer O_SYNC over\nfsync?\n\n> \t\n> > \tCompare fsync before and after write's close:\n> > \t write, fsync, close 0.000707\n> > \t write, close, fsync 0.000808\n> \n> What does that mean? You can't fsync a closed file.\n\nYou reopen and fsync.\n\n> > This shows terrible O_SYNC performance for 2 8k writes, but is faster\n> > for a single 8k write. Strange.\n> \n> I'm not sure I believe these numbers at all... my experience is that\n> getting trustworthy disk I/O numbers is *not* easy.\n\nThese numbers were reproducable on all the platforms I tested.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 13:50:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On Thu, Mar 18, 2004 at 01:50:32PM -0500, Bruce Momjian wrote:\n> > I'm not sure I believe these numbers at all... my experience is that\n> > getting trustworthy disk I/O numbers is *not* easy.\n> \n> These numbers were reproducable on all the platforms I tested.\n\nIt's not because they are reproducable that they mean anything in\nthe real world.\n\n\nKurt\n\n", "msg_date": "Thu, 18 Mar 2004 20:18:40 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Kurt Roeckx wrote:\n> On Thu, Mar 18, 2004 at 01:50:32PM -0500, Bruce Momjian wrote:\n> > > I'm not sure I believe these numbers at all... my experience is that\n> > > getting trustworthy disk I/O numbers is *not* easy.\n> > \n> > These numbers were reproducable on all the platforms I tested.\n> \n> It's not because they are reproducable that they mean anything in\n> the real world.\n\nOK, what better test do you suggest? Right now, there has been no\ntesting of these.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 14:22:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> As I recall, that was based on testing on some different platforms.\n\n> But why perfer O_DSYNC over fdatasync if you don't prefer O_SYNC over\n> fsync?\n\nIt's what tested out as the best bet. I think we were using pgbench\nas the test platform, which as you know I have doubts about, but at\nleast it is testing one actual write/sync pattern Postgres can generate.\nThe choice between the open flags and fdatasync/fsync depends a whole\nlot on your writing patterns (how much data you tend to write between\nfsync points), so I don't have a lot of faith in randomly-chosen test\nprograms as a guide to what to use for Postgres.\n\n>> What does that mean? You can't fsync a closed file.\n\n> You reopen and fsync.\n\nUm. I just looked at that test program, and I think it needs a whole\nlot of work yet.\n\n* Some of the test cases count open()/close() overhead, some don't.\n This is bad, especially on platforms like Solaris where open() is\n notoriously expensive.\n\n* You really cannot put any faith in measuring a single write,\n especially on a machine that's not *completely* idle otherwise.\n I'd feel somewhat comfortable if you wrote, say, 1000 8K blocks and\n measured the time for that. (And you have to think about how far\n apart the fsyncs are in that sequence; you probably want to repeat the\n measurement with several different fsync spacings.) It would also be\n a good idea to compare writing 1000 successive blocks with rewriting\n the same block 1000 times --- if the latter does not happen roughly\n at the disk RPM rate, then we know the drive is lying and all the\n numbers should be discarded as meaningless.\n\n* The program is claimed to test whether you can write from one process\n and fsync from another, but it does no such thing AFAICS.\n\nBTW, rather than hard-wiring the test file name, why don't you let it be\nspecified on the command line? That would make it lots easier for\npeople to compare the performance of several disk drives, if they have\n'em.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 14:28:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> As I recall, that was based on testing on some different platforms.\n> \n> > But why perfer O_DSYNC over fdatasync if you don't prefer O_SYNC over\n> > fsync?\n> \n> It's what tested out as the best bet. I think we were using pgbench\n> as the test platform, which as you know I have doubts about, but at\n> least it is testing one actual write/sync pattern Postgres can generate.\n> The choice between the open flags and fdatasync/fsync depends a whole\n> lot on your writing patterns (how much data you tend to write between\n> fsync points), so I don't have a lot of faith in randomly-chosen test\n> programs as a guide to what to use for Postgres.\n\nI assume pgbench has so much variance that trying to see fsync changes\nin there would be hopeless.\n\n> >> What does that mean? You can't fsync a closed file.\n> \n> > You reopen and fsync.\n> \n> Um. 
I just looked at that test program, and I think it needs a whole\n> lot of work yet.\n> \n> * Some of the test cases count open()/close() overhead, some don't.\n> This is bad, especially on platforms like Solaris where open() is\n> notoriously expensive.\n\nThe only one I saw that had an extra open() was the fsync after close\ntest. I add a do-nothing open/close to the previous test so they are\nthe same.\n\n> * You really cannot put any faith in measuring a single write,\n> especially on a machine that's not *completely* idle otherwise.\n> I'd feel somewhat comfortable if you wrote, say, 1000 8K blocks and\n> measured the time for that. (And you have to think about how far\n\nOK, it now measures a loop of 1000.\n\n> apart the fsyncs are in that sequence; you probably want to repeat the\n> measurement with several different fsync spacings.) It would also be\n> a good idea to compare writing 1000 successive blocks with rewriting\n> the same block 1000 times --- if the latter does not happen roughly\n> at the disk RPM rate, then we know the drive is lying and all the\n> numbers should be discarded as meaningless.\n\n\n> \n> * The program is claimed to test whether you can write from one process\n> and fsync from another, but it does no such thing AFAICS.\n\nIt really just shows whether the fsync fater the close has similar\ntiming to the one before the close. That was the best way I could think\nto test it.\n\n> BTW, rather than hard-wiring the test file name, why don't you let it be\n> specified on the command line? That would make it lots easier for\n> people to compare the performance of several disk drives, if they have\n> 'em.\n\nI have updated the test program in CVS.\n\nNew BSD/OS results:\n\nSimple write timing:\n write 0.034801\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 0.868831\n write, close, fsync 0.717281\n\nCompare one o_sync write to two:\n one 16k o_sync write 10.121422\n two 8k o_sync writes 4.405151\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 1.542213\n (fdatasync unavailable)\n write, fsync, 1.703689\n\nCompare file sync methods with 2 8k writes:\n(The fastest should be used for wal_sync_method)\n (o_dsync unavailable)\n open o_sync, write 4.498607\n (fdatasync unavailable)\n write, fsync, 2.473842\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 14:55:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On Thu, Mar 18, 2004 at 02:22:10PM -0500, Bruce Momjian wrote:\n> \n> OK, what better test do you suggest? Right now, there has been no\n> testing of these.\n\nI suggest you start by doing atleast preallocating a 16 MB file\nand do the tests on that, to atleast be somewhat simular to what\nWAL does.\n\nI have no idea what the access pattern is for normal WAL\noperations or how many times it gets synched. 
Does it only do\nf(data)sync() at commit time, or for every block it writes?\n\nI think if you write more data you'll see more differences\nbetween O_(D)SYNC and f(data)sync().\n\nI guess it can depend on if you have lots of small transactions,\nor more big ones.\n\nAtleast try to make something that covers different access\npatterns.\n\n\nKurt\n\n", "msg_date": "Thu, 18 Mar 2004 21:03:59 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> It's what tested out as the best bet. I think we were using pgbench\n>> as the test platform, which as you know I have doubts about, but at\n>> least it is testing one actual write/sync pattern Postgres can generate.\n\n> I assume pgbench has so much variance that trying to see fsync changes\n> in there would be hopeless.\n\nThe results were fairly reproducible, as I recall; else we'd have looked\nfor another test method. You may want to go back and consult the\npghackers archives.\n\n>> * Some of the test cases count open()/close() overhead, some don't.\n\n> The only one I saw that had an extra open() was the fsync after close\n> test. I add a do-nothing open/close to the previous test so they are\n> the same.\n\nWhy is it sensible to include open/close overhead in the \"simple write\"\ncase and not in the \"o_sync write\" cases, for instance? Doesn't seem\nlike a fair comparison to me. Adding the open overhead to all cases\nmight make it \"fair\", but it would also make it not what we want to\nmeasure.\n\n>> * The program is claimed to test whether you can write from one process\n>> and fsync from another, but it does no such thing AFAICS.\n\n> It really just shows whether the fsync fater the close has similar\n> timing to the one before the close. That was the best way I could think\n> to test it.\n\nSure, but where's the \"separate process\" part? What this seems to test\nis whether a single process can sync its own writes through a different\nfile descriptor; which is interesting but by no means the only thing we\nneed to be sure of if we want to make the bgwriter handle syncing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 15:08:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "Kurt Roeckx wrote:\n> On Thu, Mar 18, 2004 at 02:22:10PM -0500, Bruce Momjian wrote:\n> > \n> > OK, what better test do you suggest? Right now, there has been no\n> > testing of these.\n> \n> I suggest you start by doing atleast preallocating a 16 MB file\n> and do the tests on that, to atleast be somewhat simular to what\n> WAL does.\n> \n> I have no idea what the access pattern is for normal WAL\n> operations or how many times it gets synched. Does it only do\n> f(data)sync() at commit time, or for every block it writes?\n> \n> I think if you write more data you'll see more differences\n> between O_(D)SYNC and f(data)sync().\n> \n> I guess it can depend on if you have lots of small transactions,\n> or more big ones.\n> \n> Atleast try to make something that covers different access\n> patterns.\n\nOK, I preallocated 16mb. 
New results:\n\nSimple write timing:\n write 0.037900\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 0.692942\n write, close, fsync 0.762524\n\nCompare one o_sync write to two:\n one 16k o_sync write 8.494621\n two 8k o_sync writes 4.177680\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 1.836835\n (fdatasync unavailable)\n write, fsync, 1.780872\n\nCompare file sync methods with 2 8k writes:\n(The fastest should be used for wal_sync_method)\n (o_dsync unavailable)\n open o_sync, write 4.255614\n (fdatasync unavailable)\n write, fsync, 2.120843\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 15:09:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Kurt Roeckx <[email protected]> writes:\n> I have no idea what the access pattern is for normal WAL\n> operations or how many times it gets synched. Does it only do\n> f(data)sync() at commit time, or for every block it writes?\n\nIf we are using fsync/fdatasync, we issue those at commit time or when\ncompleting a WAL segment. If we are using the open flags, then of\ncourse there's no separate sync call.\n\nMy previous point about checking different fsync spacings corresponds to\ndifferent assumptions about average transaction size. I think a useful\ntool for determining wal_sync_method has got to be able to reflect that\nrange of possibilities.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 15:20:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "Here are my results on Linux 2.6.1 using cvs version 1.7.\n\nThose times with > 20 seconds, you really hear the disk go crazy.\n\nAnd I have the feeling something must be wrong. Those results\nare reproducible.\n\n\nKurt\n\n\nSimple write timing:\n write 0.139558\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 8.249364\n write, close, fsync 8.356813\n\nCompare one o_sync write to two:\n one 16k o_sync write 28.487650\n two 8k o_sync writes 2.310304\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 1.010688\n write, fdatasync 25.109604\n write, fsync, 26.051218\n\nCompare file sync methods with 2 8k writes:\n(The fastest should be used for wal_sync_method)\n (o_dsync unavailable)\n open o_sync, write 2.212223\n write, fdatasync 27.439907\n write, fsync, 27.772294\n\n", "msg_date": "Thu, 18 Mar 2004 21:26:21 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Kurt Roeckx wrote:\n> Here are my results on Linux 2.6.1 using cvs version 1.7.\n> \n> Those times with > 20 seconds, you really hear the disk go crazy.\n> \n> And I have the feeling something must be wrong. Those results\n> are reproducible.\n> \n\nWow, your O_SYNC times are great. Where can I buy some? 
:-)\n\nAnyway, we do need to find a way to test this because obviously there is\nhuge platform variability.\n\n---------------------------------------------------------------------------\n\n\n> \n> Kurt\n> \n> \n> Simple write timing:\n> write 0.139558\n> \n> Compare fsync times on write() and non-write() descriptor:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> write, fsync, close 8.249364\n> write, close, fsync 8.356813\n> \n> Compare one o_sync write to two:\n> one 16k o_sync write 28.487650\n> two 8k o_sync writes 2.310304\n> \n> Compare file sync methods with one 8k write:\n> (o_dsync unavailable)\n> open o_sync, write 1.010688\n> write, fdatasync 25.109604\n> write, fsync, 26.051218\n> \n> Compare file sync methods with 2 8k writes:\n> (The fastest should be used for wal_sync_method)\n> (o_dsync unavailable)\n> open o_sync, write 2.212223\n> write, fdatasync 27.439907\n> write, fsync, 27.772294\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 15:34:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Tom, Bruce,\n\n> My previous point about checking different fsync spacings corresponds to\n> different assumptions about average transaction size. I think a useful\n> tool for determining wal_sync_method has got to be able to reflect that\n> range of possibilities.\n\nQuestions:\n1) This is an OSS project. Why not just recruit a bunch of people on \nPERFORMANCE and GENERAL to test the 4 different synch methods using real \ndatabases? No test like reality, I say ....\n\n2) Won't Jan's work on 7.5 memory and I/O management mean that we have to \nre-evaluate synching anyway?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 18 Mar 2004 12:39:58 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Josh Berkus wrote:\n> Tom, Bruce,\n> \n> > My previous point about checking different fsync spacings corresponds to\n> > different assumptions about average transaction size. I think a useful\n> > tool for determining wal_sync_method has got to be able to reflect that\n> > range of possibilities.\n> \n> Questions:\n> 1) This is an OSS project. Why not just recruit a bunch of people on \n> PERFORMANCE and GENERAL to test the 4 different synch methods using real \n> databases? No test like reality, I say ....\n\nWell, I wrote the program to allow testing. I don't see a complex test\nas being that much better than simple one. We don't need accurate\nnumbers. We just need to know if fsync or O_SYNC is faster.\n\n> \n> 2) Won't Jan's work on 7.5 memory and I/O management mean that we have to \n> re-evaluate synching anyway?\n\nNo, it should not change sync issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 15:49:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> 1) This is an OSS project. Why not just recruit a bunch of people on \n> PERFORMANCE and GENERAL to test the 4 different synch methods using real \n> databases? No test like reality, I say ....\n\nI agree --- that is likely to yield *far* more useful results than\nany standalone test program, for the purpose of finding out what\nwal_sync_method to use in real databases. However, there's a second\nissue here: we would like to move sync/checkpoint responsibility into\nthe bgwriter, and that requires knowing whether it's valid to let one\nprocess fsync on behalf of writes that were done by other processes.\nThat's got nothing to do with WAL sync performance. I think that it\nwould be sensible to make a test program that focuses on this one\nspecific question. (There has been some handwaving to the effect that\neverybody knows this is safe on Unixen, but I question whether the\nhandwavers have seen the internals of HPUX or AIX for instance; and\nbesides we need to worry about Windows now.)\n\nA third reason for having a simple test program is to confirm whether\nyour drives are syncing at all (cf. hdparm discussion).\n\n> 2) Won't Jan's work on 7.5 memory and I/O management mean that we have to \n> re-evaluate synching anyway?\n\nSo far nothing's been done that touches WAL writing. However, I am\nthinking about making the bgwriter process take some of the load of\nwriting WAL buffers (right now it only writes data-file buffers).\nAnd you're right, after that happens we will need to re-measure.\nThe open flags will probably become considerably more attractive than\nthey are now, if the bgwriter handles most non-commit writes of WAL.\n(We might also think of letting the bgwriter use a different sync method\nthan the backends do.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 16:00:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, I wrote the program to allow testing. I don't see a complex test\n> as being that much better than simple one. We don't need accurate\n> numbers. We just need to know if fsync or O_SYNC is faster.\n\nFaster than what? The thing everyone is trying to point out here is\nthat it depends on context, and we have little faith that this test\nprogram creates a context similar to a live Postgres database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 16:04:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "On Thu, Mar 18, 2004 at 03:34:21PM -0500, Bruce Momjian wrote:\n> Kurt Roeckx wrote:\n> > Here are my results on Linux 2.6.1 using cvs version 1.7.\n> > \n> > Those times with > 20 seconds, you really hear the disk go crazy.\n> > \n> > And I have the feeling something must be wrong. Those results\n> > are reproducible.\n> > \n> \n> Wow, your O_SYNC times are great. Where can I buy some? 
:-)\n> \n> Anyway, we do need to find a way to test this because obviously there is\n> huge platform variability.\n\nNew results with version 1.8:\n\nSimple write timing:\n write 0.150613\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 9.170472\n write, close, fsync 8.851715\n\nCompare one o_sync write to two:\n one 16k o_sync write 2.617860\n two 8k o_sync writes 2.563437\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 1.031721\n write, fdatasync 25.599010\n write, fsync, 26.192824\n\nCompare file sync methods with 2 8k writes:\n(The fastest should be used for wal_sync_method)\n (o_dsync unavailable)\n open o_sync, write 2.268718\n write, fdatasync 27.029396\n write, fsync, 27.399243\n\n", "msg_date": "Thu, 18 Mar 2004 22:09:51 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Well, I wrote the program to allow testing. I don't see a complex test\n> > as being that much better than simple one. We don't need accurate\n> > numbers. We just need to know if fsync or O_SYNC is faster.\n> \n> Faster than what? The thing everyone is trying to point out here is\n> that it depends on context, and we have little faith that this test\n> program creates a context similar to a live Postgres database.\n\nNote, too, that the preferred method isn't likely to depend just on the\noperating system, it's likely to depend also on the filesystem type\nbeing used.\n\nLinux provides quite a few of them: ext2, ext3, jfs, xfs, and reiserfs,\nand that's just off the top of my head. I imagine the performance of\nthe various syncing methods will vary significantly between them.\n\n\nIt seems reasonable to me that decisions such as which sync method to\nuse should initially be made at installation time: have the test program\nrun on the target filesystem as part of the installation process, and\nbuild the initial postgresql.conf based on the results. You might even\nbe able to do some additional testing such as measuring the difference\nbetween random block access and sequential access, and again feed the\nresults into the postgresql.conf file. This is no substitute for\nexperience with the platform, but I expect it's likely to get you closer\nto something optimal than doing nothing. The only question, of course,\nis whether or not it's worth going to the effort when it may or may not\ngain you a whole lot. Answering that is going to require some\nexperimentation with such an automatic configuration system.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 18 Mar 2004 16:41:12 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Tom Lane wrote:\n> > It really just shows whether the fsync fater the close has similar\n> > timing to the one before the close. That was the best way I could think\n> > to test it.\n> \n> Sure, but where's the \"separate process\" part? 
What this seems to test\n> is whether a single process can sync its own writes through a different\n> file descriptor; which is interesting but by no means the only thing we\n> need to be sure of if we want to make the bgwriter handle syncing.\n\nI am not sure how to easily test if a separate process can do the same. \nI am sure it can be done, but for me it was enough to see that it works\nin a single process. Unix isn't very process-centered for I/O, so I\ndon't think it would make much of a difference. Now, Win32, that might\nbe an issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 23:08:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "I wrote:\n> Note, too, that the preferred method isn't likely to depend just on the\n> operating system, it's likely to depend also on the filesystem type\n> being used.\n> \n> Linux provides quite a few of them: ext2, ext3, jfs, xfs, and reiserfs,\n> and that's just off the top of my head. I imagine the performance of\n> the various syncing methods will vary significantly between them.\n\nFor what it's worth, my database throughput for transactions involving\na lot of inserts, updates, and deletes is about 12% faster using\nfdatasync() than O_SYNC under Linux using JFS.\n\nI'll run the test program and report my results with it as well, so\nwe'll be able to see if there's any consistency between it and the live\ndatabase.\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 19 Mar 2004 19:48:17 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On 18 Mar, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> 1) This is an OSS project. Why not just recruit a bunch of people on \n>> PERFORMANCE and GENERAL to test the 4 different synch methods using real \n>> databases? No test like reality, I say ....\n> \n> I agree --- that is likely to yield *far* more useful results than\n> any standalone test program, for the purpose of finding out what\n> wal_sync_method to use in real databases. However, there's a second\n> issue here: we would like to move sync/checkpoint responsibility into\n> the bgwriter, and that requires knowing whether it's valid to let one\n> process fsync on behalf of writes that were done by other processes.\n> That's got nothing to do with WAL sync performance. I think that it\n> would be sensible to make a test program that focuses on this one\n> specific question. (There has been some handwaving to the effect that\n> everybody knows this is safe on Unixen, but I question whether the\n> handwavers have seen the internals of HPUX or AIX for instance; and\n> besides we need to worry about Windows now.)\n\nI could certainly do some testing if you want to see how DBT-2 does.\nJust tell me what to do. ;)\n\nMark\n", "msg_date": "Mon, 22 Mar 2004 09:33:59 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "[email protected] writes:\n> I could certainly do some testing if you want to see how DBT-2 does.\n> Just tell me what to do. ;)\n\nJust do some runs that are identical except for the wal_sync_method\nsetting. 
Note that this should not have any impact on SELECT\nperformance, only insert/update/delete performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 2004 12:41:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "[email protected] wrote:\n> On 18 Mar, Tom Lane wrote:\n> > Josh Berkus <[email protected]> writes:\n> >> 1) This is an OSS project. Why not just recruit a bunch of people on \n> >> PERFORMANCE and GENERAL to test the 4 different synch methods using real \n> >> databases? No test like reality, I say ....\n> > \n> > I agree --- that is likely to yield *far* more useful results than\n> > any standalone test program, for the purpose of finding out what\n> > wal_sync_method to use in real databases. However, there's a second\n> > issue here: we would like to move sync/checkpoint responsibility into\n> > the bgwriter, and that requires knowing whether it's valid to let one\n> > process fsync on behalf of writes that were done by other processes.\n> > That's got nothing to do with WAL sync performance. I think that it\n> > would be sensible to make a test program that focuses on this one\n> > specific question. (There has been some handwaving to the effect that\n> > everybody knows this is safe on Unixen, but I question whether the\n> > handwavers have seen the internals of HPUX or AIX for instance; and\n> > besides we need to worry about Windows now.)\n> \n> I could certainly do some testing if you want to see how DBT-2 does.\n> Just tell me what to do. ;)\n\nTo test, you would run from CVS version src/tools/fsync, find the\nfastest fsync method from the last group of outputs, then try the\nwal_fsync_method setting to see if the one that tools/fsync says is\nfastest is actually fastest. However, it might be better to run your\ntests and get some indication of how frequently writes and fsync's are\ngoing to WAL and modify tools/fsync to match what your DBT-2 test does.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 22 Mar 2004 12:42:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Tom Lane wrote:\n\n>[email protected] writes:\n> \n>\n>>I could certainly do some testing if you want to see how DBT-2 does.\n>>Just tell me what to do. ;)\n>> \n>>\n>\n>Just do some runs that are identical except for the wal_sync_method\n>setting. Note that this should not have any impact on SELECT\n>performance, only insert/update/delete performance.\n> \n>\nI've made a test run that compares fsync and fdatasync: The performance \nwas identical:\n- with fdatasync:\n\nhttp://khack.osdl.org/stp/290607/\n\n- with fsync:\nhttp://khack.osdl.org/stp/290483/\n\nI don't understand why. Mark - is there a battery backed write cache in \nthe raid controller, or something similar that might skew the results? \nThe test generates quite a lot of wal traffic - around 1.5 MB/sec. 
\nPerhaps the writes are so large that the added overhead of syncing the \ninode is not noticable?\nIs the pg_xlog directory on a seperate drive?\n\nBtw, it's possible to request such tests through the web-interface, see\nhttp://www.osdl.org/lab_activities/kernel_testing/stp/script_param.html\n\n--\n Manfred\n\n", "msg_date": "Thu, 25 Mar 2004 07:21:35 +0100", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On 25 Mar, Manfred Spraul wrote:\n> Tom Lane wrote:\n> \n>>[email protected] writes:\n>> \n>>\n>>>I could certainly do some testing if you want to see how DBT-2 does.\n>>>Just tell me what to do. ;)\n>>> \n>>>\n>>\n>>Just do some runs that are identical except for the wal_sync_method\n>>setting. Note that this should not have any impact on SELECT\n>>performance, only insert/update/delete performance.\n>> \n>>\n> I've made a test run that compares fsync and fdatasync: The performance \n> was identical:\n> - with fdatasync:\n> \n> http://khack.osdl.org/stp/290607/\n> \n> - with fsync:\n> http://khack.osdl.org/stp/290483/\n> \n> I don't understand why. Mark - is there a battery backed write cache in \n> the raid controller, or something similar that might skew the results? \n> The test generates quite a lot of wal traffic - around 1.5 MB/sec. \n> Perhaps the writes are so large that the added overhead of syncing the \n> inode is not noticable?\n> Is the pg_xlog directory on a seperate drive?\n> \n> Btw, it's possible to request such tests through the web-interface, see\n> http://www.osdl.org/lab_activities/kernel_testing/stp/script_param.html\n\nWe have 2 Adaptec 2200s controllers, without the battery backed add-on,\nconnected to four 10-disk arrays in those systems. I can't think of\nanything off hand that would skew the results.\n\nThe pg_xlog directory is not on a separate drive. I haven't found the\nbest way to lay out of the drives on those systems yet, so I just have\neverything on a 28 drive lvm2 volume.\n\nMark\n", "msg_date": "Thu, 25 Mar 2004 09:16:40 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "[email protected] wrote:\n> > I've made a test run that compares fsync and fdatasync: The performance \n> > was identical:\n> > - with fdatasync:\n> > \n> > http://khack.osdl.org/stp/290607/\n> > \n> > - with fsync:\n> > http://khack.osdl.org/stp/290483/\n> > \n> > I don't understand why. Mark - is there a battery backed write cache in \n> > the raid controller, or something similar that might skew the results? \n> > The test generates quite a lot of wal traffic - around 1.5 MB/sec. \n> > Perhaps the writes are so large that the added overhead of syncing the \n> > inode is not noticable?\n> > Is the pg_xlog directory on a seperate drive?\n> > \n> > Btw, it's possible to request such tests through the web-interface, see\n> > http://www.osdl.org/lab_activities/kernel_testing/stp/script_param.html\n> \n> We have 2 Adaptec 2200s controllers, without the battery backed add-on,\n> connected to four 10-disk arrays in those systems. I can't think of\n> anything off hand that would skew the results.\n> \n> The pg_xlog directory is not on a separate drive. 
I haven't found the\n> best way to lay out of the drives on those systems yet, so I just have\n> everything on a 28 drive lvm2 volume.\n\nWe don't actually extend the WAL file during writes (preallocated), and\nthe access/modification timestamp is only in seconds, so I wonder of the\nOS only updates the inode once a second. What else would change in the\ninode more frequently than once a second?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 25 Mar 2004 13:52:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "Bruce,\n\n> We don't actually extend the WAL file during writes (preallocated), and\n> the access/modification timestamp is only in seconds, so I wonder of the\n> OS only updates the inode once a second. What else would change in the\n> inode more frequently than once a second?\n\nWhat about really big writes, when WAL files are getting added/recycled?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 25 Mar 2004 11:10:55 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On 22 Mar, Tom Lane wrote:\n> [email protected] writes:\n>> I could certainly do some testing if you want to see how DBT-2 does.\n>> Just tell me what to do. ;)\n> \n> Just do some runs that are identical except for the wal_sync_method\n> setting. Note that this should not have any impact on SELECT\n> performance, only insert/update/delete performance.\n\nOk, here are the results I have from my 4-way xeon system, a 14 disk\nvolume for the log and a 52 disk volume for everything else:\n\thttp://developer.osdl.org/markw/pgsql/wal_sync_method.html\n\n7.5devel-200403222 \n\nwal_sync_method metric\ndefault (fdatasync) 1935.28\nfsync 1613.92\n\n# ./test_fsync -f /opt/pgdb/dbt2/pg_xlog/test.out\nSimple write timing:\n write 0.018787\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 13.057781\n write, close, fsync 13.311313\n\nCompare one o_sync write to two:\n one 16k o_sync write 6.515122\n two 8k o_sync writes 12.455124\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable) \n open o_sync, write 6.270724\n write, fdatasync 13.275225\n write, fsync, 13.359847\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable) \n open o_sync, write 12.479563\n write, fdatasync 13.651709\n write, fsync, 14.000240\n", "msg_date": "Thu, 25 Mar 2004 13:46:56 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking " }, { "msg_contents": "[email protected] wrote:\n\n>Compare file sync methods with one 8k write:\n> (o_dsync unavailable) \n> open o_sync, write 6.270724\n> write, fdatasync 13.275225\n> write, fsync, 13.359847\n> \n>\nOdd. Which filesystem, which kernel? 
It seems fdatasync is broken and \nsyncs the inode, too.\n\n--\n Manfred\n\n", "msg_date": "Fri, 26 Mar 2004 07:25:53 +0100", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On 26 Mar, Manfred Spraul wrote:\n> [email protected] wrote:\n> \n>>Compare file sync methods with one 8k write:\n>> (o_dsync unavailable) \n>> open o_sync, write 6.270724\n>> write, fdatasync 13.275225\n>> write, fsync, 13.359847\n>> \n>>\n> Odd. Which filesystem, which kernel? It seems fdatasync is broken and \n> syncs the inode, too.\n\nIt's linux-2.6.5-rc1 with ext2 filesystems.\n", "msg_date": "Fri, 26 Mar 2004 08:09:43 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "[email protected] wrote:\n> On 26 Mar, Manfred Spraul wrote:\n> > [email protected] wrote:\n> > \n> >>Compare file sync methods with one 8k write:\n> >> (o_dsync unavailable) \n> >> open o_sync, write 6.270724\n> >> write, fdatasync 13.275225\n> >> write, fsync, 13.359847\n> >> \n> >>\n> > Odd. Which filesystem, which kernel? It seems fdatasync is broken and \n> > syncs the inode, too.\n> \n> It's linux-2.6.5-rc1 with ext2 filesystems.\n\nWould you benchmark open_sync for wal_sync_method too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 26 Mar 2004 11:54:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On 26 Mar, Bruce Momjian wrote:\n> [email protected] wrote:\n>> On 26 Mar, Manfred Spraul wrote:\n>> > [email protected] wrote:\n>> > \n>> >>Compare file sync methods with one 8k write:\n>> >> (o_dsync unavailable) \n>> >> open o_sync, write 6.270724\n>> >> write, fdatasync 13.275225\n>> >> write, fsync, 13.359847\n>> >> \n>> >>\n>> > Odd. Which filesystem, which kernel? It seems fdatasync is broken and \n>> > syncs the inode, too.\n>> \n>> It's linux-2.6.5-rc1 with ext2 filesystems.\n> \n> Would you benchmark open_sync for wal_sync_method too?\n\nOh yeah. Will try to get results later today.\n\nMark \n\n", "msg_date": "Fri, 26 Mar 2004 09:00:56 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "On Fri, Mar 26, 2004 at 07:25:53AM +0100, Manfred Spraul wrote:\n\n> >Compare file sync methods with one 8k write:\n> > (o_dsync unavailable) \n> > open o_sync, write 6.270724\n> > write, fdatasync 13.275225\n> > write, fsync, 13.359847\n> > \n> >\n> Odd. Which filesystem, which kernel? It seems fdatasync is broken and \n> syncs the inode, too.\n\nThis may be relevant.\n\n From the man page for fdatasync on a moderately recent RedHat installation:\n\n BUGS\n Currently (Linux 2.2) fdatasync is equivalent to fsync.\n\nCheers,\n Steve\n", "msg_date": "Fri, 26 Mar 2004 15:14:59 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" } ]
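A minimal, self-contained sketch of the kind of timing comparison discussed in the thread above (write() followed by fsync() versus writes through an O_SYNC descriptor) might look like the C program below. It is not the src/tools/fsync test program referred to in the messages: the file name, the 8k block size, and the 1000-iteration loop count are arbitrary choices, it rewrites a single block rather than writing new ones, and O_DSYNC/fdatasync are left out since their availability varies by platform.

/*
 * fsync_compare.c -- a minimal sketch, not the src/tools/fsync program
 * discussed in this thread.  It only contrasts write()+fsync() with
 * writes through an O_SYNC descriptor; file name, block size and loop
 * count are arbitrary.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define LOOPS   1000
#define BLOCKSZ 8192

static double elapsed(struct timeval start, struct timeval end)
{
	return (end.tv_sec - start.tv_sec) +
		   (end.tv_usec - start.tv_usec) / 1000000.0;
}

int main(void)
{
	char            buf[BLOCKSZ];
	struct timeval  start, end;
	int             fd, i;

	memset(buf, 'a', sizeof(buf));

	/* write() followed by an explicit fsync(), LOOPS times */
	fd = open("test_sync.out", O_RDWR | O_CREAT, 0600);
	if (fd < 0) { perror("open"); exit(1); }
	gettimeofday(&start, NULL);
	for (i = 0; i < LOOPS; i++)
	{
		if (write(fd, buf, BLOCKSZ) != BLOCKSZ) { perror("write"); exit(1); }
		if (fsync(fd) != 0) { perror("fsync"); exit(1); }
		lseek(fd, 0, SEEK_SET);		/* rewrite the same block each time */
	}
	gettimeofday(&end, NULL);
	close(fd);
	printf("write + fsync      : %.6f sec\n", elapsed(start, end));

	/* the same workload through an O_SYNC descriptor, no separate sync call */
	fd = open("test_sync.out", O_RDWR | O_SYNC);
	if (fd < 0) { perror("open O_SYNC"); exit(1); }
	gettimeofday(&start, NULL);
	for (i = 0; i < LOOPS; i++)
	{
		if (write(fd, buf, BLOCKSZ) != BLOCKSZ) { perror("write"); exit(1); }
		lseek(fd, 0, SEEK_SET);
	}
	gettimeofday(&end, NULL);
	close(fd);
	unlink("test_sync.out");
	printf("open O_SYNC, write : %.6f sec\n", elapsed(start, end));

	return 0;
}

As several of the messages point out, numbers from a sketch like this only mean much on an otherwise idle machine, on the filesystem that will actually hold pg_xlog, and with the drive's write cache taken into account.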
[ { "msg_contents": "I have some problems on performance using postgresql v. 7.3.2 running on Linux RedHat 9. An update involving several rows (about 500000) on a table having 2800000 tuples takes in the order of 6 minutes. It is more than it takes on other plataforms (SqlServer, FOX). I think that there�s something wrong on my configuration. I�ve already adjusted some parameters as I could understand memory and disk usage. Next, I send a description of parameters changed in postgresql.conf, a scheme of the table, and an EXPLAIN ANALYZE of the command. The hardware configuration is a Pentium III 1 Ghz, 512 MB of memory, and an SCSI drive of 20 GB. Following goes the description:\n\n-- Values changed in postgresql.conf\n\ntcpip_socket = true\nmax_connections = 64\nshared_buffers = 4096\nwal_buffers = 100\nvacuum_mem = 16384\nvacuum_mem = 16384\nsort_mem = 32168\ncheckpoint_segments = 8\neffective_cache_size = 10000\n\n\n--\n-- PostgreSQL database dump\n--\n\n\\connect - nestor\n\nSET search_path = public, pg_catalog;\n\n--\n-- TOC entry 2 (OID 22661417)\n-- Name: jugadas; Type: TABLE; Schema: public; Owner: nestor\n--\n\nCREATE TABLE jugadas (\n fecha_ju character(8),\n hora_ju character(4),\n juego character(2),\n juego_vta character(2),\n sorteo_p character(5),\n sorteo_v character(5),\n nro_servidor character(1),\n ticket character(9),\n terminal character(4),\n sistema character(1),\n agente character(5),\n subagente character(3),\n operador character(2),\n importe character(7),\n anulada character(1),\n icode character(15),\n codseg character(15),\n tipo_moneda character(1),\n apuesta character(100),\n extraido character(1)\n);\n\n\n--\n-- TOC entry 4 (OID 25553754)\n-- Name: key_jug_1; Type: INDEX; Schema: public; Owner: nestor\n--\n\nCREATE UNIQUE INDEX key_jug_1 ON jugadas USING btree (juego, juego_vta, sorteo_p, nro_servidor, ticket);\n\nboss=# explain analyze update jugadas set extraido = 'S' where juego = '03' and\njuego_vta = '03' and sorteo_p = '89353' and extraido = 'N';\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on jugadas (cost=0.00..174624.96 rows=70061 width=272) (actual time=21223.88..51858.07 rows=517829 loops=1)\n Filter: ((juego = '03'::bpchar) AND (juego_vta = '03'::bpchar) AND (sorteo_p\n= '89353'::bpchar) AND (extraido = 'N'::bpchar))\n Total runtime: 291167.36 msec\n(3 rows)\n\nboss=# show enable_seqscan;\n enable_seqscan\n----------------\n on\n(1 row)\n\n\n************* FORCING INDEX SCAN ***********************************\n\nboss=# set enable_seqscan = false;\nSET\n\nboss=# explain analyze update jugadas set extraido = 'N' where juego = '03' and\njuego_vta = '03' and sorteo_p = '89353' and extraido = 'S';\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using key_jug_1 on jugadas (cost=0.00..597959.76 rows=98085 width=272) (actual time=9.93..39947.93 rows=517829 loops=1)\n Index Cond: ((juego = '03'::bpchar) AND (juego_vta = '03'::bpchar) AND (sorteo_p = '89353'::bpchar))\n Filter: (extraido = 'S'::bpchar)\n Total runtime: 335280.56 msec\n(4 rows)\n\nboss=#\n\nThank you in advance for any help.\n\nNestor\n\n", "msg_date": "Wed, 10 Dec 2003 16:29:58 -0300 (GMT+3)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "\nOn Wed, 10 Dec 2003 [email protected] wrote:\n\n> I have some problems on 
performance using postgresql v. 7.3.2 running on\n> Linux RedHat 9. An update involving several rows (about 500000) on a\n> table having 2800000 tuples takes in the order of 6 minutes. It is more\n> than it takes on other plataforms (SqlServer, FOX). I think that there�s\n> something wrong on my configuration. I�ve already adjusted some\n> parameters as I could understand memory and disk usage. Next, I send a\n> description of parameters changed in postgresql.conf, a scheme of the\n> table, and an EXPLAIN ANALYZE of the command. The hardware configuration\n> is a Pentium III 1 Ghz, 512 MB of memory, and an SCSI drive of 20 GB.\n> Following goes the description:\n\n> -- Values changed in postgresql.conf\n\n> CREATE TABLE jugadas (\n> fecha_ju character(8),\n> hora_ju character(4),\n> juego character(2),\n> juego_vta character(2),\n> sorteo_p character(5),\n> sorteo_v character(5),\n> nro_servidor character(1),\n> ticket character(9),\n> terminal character(4),\n> sistema character(1),\n> agente character(5),\n> subagente character(3),\n> operador character(2),\n> importe character(7),\n> anulada character(1),\n> icode character(15),\n> codseg character(15),\n> tipo_moneda character(1),\n> apuesta character(100),\n> extraido character(1)\n> );\n\nAre there any tables that reference this one or other triggers? If so,\nwhat do the tables/contraints/triggers involve look like?\n\nI'm guessing there might be given the difference in the actual time\nnumbers to the total runtime on the explain analyze.\n\n", "msg_date": "Wed, 10 Dec 2003 12:35:05 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
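Two checks that are sometimes useful for this kind of flag-driven bulk UPDATE are sketched below; neither statement comes from the thread itself, the index name is made up, and the catalog query assumes the 7.3-era pg_trigger layout. The first lists the triggers attached to the table (on 7.3, foreign keys referencing it appear as RI triggers, which would account for a large gap between the scan time and the total runtime in EXPLAIN ANALYZE). The second builds a partial index matching the UPDATE's filter, so only rows still marked 'N' have to be visited, and rows drop out of the index as soon as they are set to 'S'.

-- Illustrative only: hypothetical index name, 7.3-era catalogs assumed.
SELECT tgname
  FROM pg_trigger
 WHERE tgrelid = 'jugadas'::regclass;

CREATE INDEX jugadas_no_extraidas
    ON jugadas (juego, juego_vta, sorteo_p)
    WHERE extraido = 'N';

ANALYZE jugadas;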
[ { "msg_contents": "Hello,\n\nI am facing a problem trying to put 500 concurrent users accessing\na postgresql instance. Basically, the machine begins to do a lot i/o...\nswap area increases more and more...\n\nThe vmstat began with 9200 (swpd) and after 20 minutes it was like that:\n\nVMSTAT:\n\n procs memory swap io system \n cpu\n r b w swpd free buff cache si so bi bo in cs us \n sy id\n 2 29 1 106716 9576 7000 409876 32 154 5888 1262 616 1575 8 \n 12 80\n 0 29 1 107808 9520 6896 409904 60 220 5344 1642 662 1510 9 \n 15 76\n 0 89 1 108192 9528 6832 410184 172 138 6810 1750 693 2466 11 \n 16 73\n 0 27 1 108192 9900 6824 409852 14 112 4488 1294 495 862 2 \n 9 88\n 8 55 1 108452 9552 6800 410284 26 12 6266 1082 651 2284 8 \n 11 81\n 5 78 2 109220 8688 6760 410816 148 534 6318 1632 683 1230 6 \n 13 81\n\n\nThe application that I am trying to running mimmics the tpc-c benchmark...\nActually, I am simulating the tpc-c workload without considering\nscreens and other details. The only interesting is\non the database workload proposed by the benchmark and its distributions.\n\nThe machine is a dual-processor pentium III, with 1GB, external storage \ndevice. It runs Linux version 2.4.21-dt1 (root@dupond) (gcc version 2.96 \n20000731 (Red Hat Linux 7.3 2.96-113)) #7 SMP Mon Apr 21 19:43:17 GMT \n2003, Postgresql 7.5devel.\n\nPostgresql configuration:\n\neffective_cache_size = 35000\nshared_buffers = 5000\nrandom_page_cost = 2\ncpu_index_tuple_cost = 0.0005\nsort_mem = 10240\n\nI would like to know if this behaivor is normal considering\nthe number of clients, the workload and the database size (7.8 GB) ?\nOr if there is something that I can change to get better results.\n\nBest regards,\n\nAlfranio Junior.\n\n", "msg_date": "Thu, 11 Dec 2003 04:13:28 +0000", "msg_from": "Alfranio Correia Junior <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with a higher number of clients" }, { "msg_contents": "Alfranio Correia Junior wrote:\n> Postgresql configuration:\n> \n> effective_cache_size = 35000\n> shared_buffers = 5000\n> random_page_cost = 2\n> cpu_index_tuple_cost = 0.0005\n> sort_mem = 10240\n\nLower sort mem to say 2000-3000, up shared buffers to 10K and up effective cache \nsize to around 65K. That should make it behave bit better.\n\nI guess tuning sort mem alone would give you performance you are expecting.. \nTune them one by one.\n\nHTH\n\n Shridhar\n", "msg_date": "Thu, 11 Dec 2003 12:06:23 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with a higher number of clients" }, { "msg_contents": "On Thu, 11 Dec 2003 04:13:28 +0000\nAlfranio Correia Junior <[email protected]> wrote:\n\n> r b w swpd free buff cache si so bi bo in cs \n> us sy id\n> 2 29 1 106716 9576 7000 409876 32 154 5888 1262 616 1575 \n> 8 12 80\n\nOn linux I've found as soon as it has to swap its oh-so-wonderful VM\nbrings the machine to a screeching halt. \n\n\n> sort_mem = 10240\n> \nHere's a big problem\n\nThis gives _EACH SORT_ 10MB (No more, no less) to play with. \n10MB * 500 connections == 5000MB in one case.. Some queries may\nhave more sort steps. It is possible 1 connection could be using\n30-40MB of sort_mem. You'll need to bring that value down to prevent\nswapping.\n\nIf you have a few \"common\" queries that are run a lot check out hte\nexplain analyze. You can see about how much sort_mem you'll need. Look\nin the sort step. it should tell you the width and the # of rows.\nMultiply those. 
That is sort of how much memory you'll need (I'd round\nit up a bit)\n\nIf under normal workload your DB is swapping you have problems. You'll\nneed to either tune your config or get bigger hardware. You may want to\nalso consider an OS that deals with that situation a bit better.\n\ngood luck.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Thu, 11 Dec 2003 08:28:26 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with a higher number of clients" }, { "msg_contents": "Alfranio Correia Junior <[email protected]> writes:\n> I am facing a problem trying to put 500 concurrent users accessing\n> a postgresql instance.\n\nI think you're going to need to buy more RAM. 1Gb of RAM means there\nis a maximum of 2Mb available per Postgres process before you start\nto go into swap hell --- in practice a lot less, since you have to allow\nfor other things like the kernel and other applications.\n\nAFAIR TPC-C doesn't involve any complex queries, so it's possible you\ncould run it with only 1Mb of workspace per process, but not when\nyou've configured\n\n> sort_mem = 10240\n\nThat's ten times more than your configuration can possibly support.\n(I don't recall whether TPC-C uses any queries that would sort, so\nit's possible this setting isn't affecting you; but if you are doing\nany sorts then it's killing you.)\n\nBottom line is you probably need more RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Dec 2003 09:54:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with a higher number of clients " }, { "msg_contents": "Thanks for the advices,\nThe performance is a bit better now. 
Unfortunately, the machine does not \nallow\nto put more than 200 - ~250 users without noticing swap hell.\nI have to face the fact that I don't have enough memory....\n\nI used the following configuration:\n\neffective_cache_size = 65000 \nshared_buffers = 10000 \nrandom_page_cost = 2 \ncpu_index_tuple_cost = 0.0005 \nsort_mem = 512 - I tested each query to see the amount of space \nrequired to sort as Jeff suggested --> nothing above this value\n\nI tested the system with 100, 200, 300, 400, 500 and finally 250 users.\nUntil ~250 users the system presents good response time and the swap \nalmost does not exist.\nDuring these expirements, I also started psql and tried to run some \nqueries.\nUnfortunately, even with ~250 users there is one query that takes too \nlong to finish...\nIn fact, I canceled its execution after 5 minutes waiting to see anything.\n\nThis is the query:\n\nselect count(distinct(s_i_id))\n from stock, order_line\n where ol_w_id = _xx_ and\n ol_d_id = _xx_ and\n ol_o_id between _xx_ and\n _xx_ and\n s_w_id = ol_w_id and\n s_i_id = ol_i_id and\n s_quantity < _xx_;\n\nWhen the system has no load, after a vacuum -f, I can execute the query \nand the plan produced is presented as follows:\n Aggregate (cost=49782.16..49782.16 rows=1 width=4) (actual \ntime=52361.573..52361.574 rows=1 loops=1)\n -> Nested Loop (cost=0.00..49780.24 rows=768 width=4) (actual \ntime=101.554..52328.913 rows=952 loops=1)\n -> Index Scan using pk_order_line on order_line o \n(cost=0.00..15779.32 rows=8432 width=4) (actual time=84.352..151.345 \nrows=8964 loops=1)\n Index Cond: ((ol_w_id = 4) AND (ol_d_id = 4) AND (ol_o_id \n >= 100) AND (ol_o_id <= 1000))\n -> Index Scan using pk_stock on stock (cost=0.00..4.02 rows=1 \nwidth=4) (actual time=5.814..5.814 rows=0 loops=8964)\n Index Cond: ((stock.s_w_id = 4) AND (stock.s_i_id = \n\"outer\".ol_i_id))\n Filter: (s_quantity < 20)\n Total runtime: 52403.673 ms\n(8 rows)\n\nThe talbes are designed as follows:\n\n--ROWS ~5000000\nCREATE TABLE stock (\n s_i_id int NOT NULL ,\n s_w_id int NOT NULL ,\n s_quantity int NULL ,\n s_dist_01 char (24) NULL ,\n s_dist_02 char (24) NULL ,\n s_dist_03 char (24) NULL ,\n s_dist_04 char (24) NULL ,\n s_dist_05 char (24) NULL ,\n s_dist_06 char (24) NULL ,\n s_dist_07 char (24) NULL ,\n s_dist_08 char (24) NULL ,\n s_dist_09 char (24) NULL ,\n s_dist_10 char (24) NULL ,\n s_ytd int NULL ,\n s_order_cnt int NULL ,\n s_remote_cnt int NULL ,\n s_data char (50) NULL\n);\n\n--ROWS ~15196318\nCREATE TABLE order_line (\n ol_o_id int NOT NULL ,\n ol_d_id int NOT NULL ,\n ol_w_id int NOT NULL ,\n ol_number int NOT NULL ,\n ol_i_id int NULL ,\n ol_supply_w_id int NULL ,\n ol_delivery_d timestamp NULL ,\n ol_quantity int NULL ,\n ol_amount numeric(6, 2) NULL ,\n ol_dist_info char (24) NULL\n);\n\nALTER TABLE stock ADD\nCONSTRAINT PK_stock PRIMARY KEY\n (\n s_w_id,\n s_i_id\n );\nALTER TABLE order_line ADD\n CONSTRAINT PK_order_line PRIMARY KEY\n (\n ol_w_id,\n ol_d_id,\n ol_o_id,\n ol_number\n );\nCREATE INDEX IX_order_line ON order_line(ol_i_id);\n\nAny suggestion ?\n\n\nTom Lane wrote:\n\n>Alfranio Correia Junior <[email protected]> writes:\n> \n>\n>>I am facing a problem trying to put 500 concurrent users accessing\n>>a postgresql instance.\n>> \n>>\n>\n>I think you're going to need to buy more RAM. 
1Gb of RAM means there\n>is a maximum of 2Mb available per Postgres process before you start\n>to go into swap hell --- in practice a lot less, since you have to allow\n>for other things like the kernel and other applications.\n>\n>AFAIR TPC-C doesn't involve any complex queries, so it's possible you\n>could run it with only 1Mb of workspace per process, but not when\n>you've configured\n>\n> \n>\n>>sort_mem = 10240\n>> \n>>\n>\n>That's ten times more than your configuration can possibly support.\n>(I don't recall whether TPC-C uses any queries that would sort, so\n>it's possible this setting isn't affecting you; but if you are doing\n>any sorts then it's killing you.)\n>\n>Bottom line is you probably need more RAM.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n", "msg_date": "Fri, 12 Dec 2003 02:35:16 +0000", "msg_from": "Alfranio Tavares Correia Junior <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with a higher number of clients" } ]
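Jeff's arithmetic above -- sort_mem is granted per sort step, per backend, so 500 backends at 10 MB each can demand gigabytes -- can be checked one query at a time. A rough sketch; the query is illustrative only, built on the order_line table from the thread, and the 2048 figure is an assumption, not a recommendation:

    SET sort_mem = 2048;          -- in KB; allocated per sort operation, per backend

    EXPLAIN ANALYZE
    SELECT ol_i_id, sum(ol_amount)
    FROM order_line
    WHERE ol_w_id = 4
    GROUP BY ol_i_id
    ORDER BY ol_i_id;

If a Sort node appears in the output, its rows and width multiplied together give a rough lower bound on the memory that sort needs to stay in RAM, which is the per-query check Alfranio describes doing before settling on sort_mem = 512.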
[ { "msg_contents": "Hi list,\n\nI need to know if there is anything like hints of Oracle in \nPostgres..otherwise..I wish to find a way to force a query plan to use the \nindexes or tell the optimizer things like \"optimize based in statistics\", \"I \nwant to define the order of the a join\" , \"optimize based on a execution \nplan that I consider the best\" ...\n\nthanks.\n\n_________________________________________________________________\nLas mejores tiendas, los precios mas bajos, entregas en todo el mundo, \nYupiMSN Compras: http://latam.msn.com/compras/\n\n", "msg_date": "Thu, 11 Dec 2003 11:00:19 -0500", "msg_from": "\"sandra ruiz\" <[email protected]>", "msg_from_op": true, "msg_subject": "hints in Postgres?" }, { "msg_contents": "hello\n\nmaybe\n\nhttp://www.gtsm.com/oscon2003/toc.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nbye\nPavel\n\n\nOn Thu, 11 Dec 2003, sandra ruiz wrote:\n\n> Hi list,\n> \n> I need to know if there is anything like hints of Oracle in \n> Postgres..otherwise..I wish to find a way to force a query plan to use the \n> indexes or tell the optimizer things like \"optimize based in statistics\", \"I \n> want to define the order of the a join\" , \"optimize based on a execution \n> plan that I consider the best\" ...\n> \n> thanks.\n> \n> _________________________________________________________________\n> Las mejores tiendas, los precios mas bajos, entregas en todo el mundo, \n> YupiMSN Compras: http://latam.msn.com/compras/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Thu, 11 Dec 2003 17:22:13 +0100 (CET)", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hints in Postgres?" }, { "msg_contents": "Quoth [email protected] (\"sandra ruiz\"):\n> I need to know if there is anything like hints of Oracle in\n> Postgres..otherwise..I wish to find a way to force a query plan to use\n> the indexes or tell the optimizer things like \"optimize based in\n> statistics\", \"I want to define the order of the a join\" , \"optimize\n> based on a execution plan that I consider the best\" ...\n\nIt is commonly considered a MISFEATURE of Oracle that it forces you to\ntweak all of those sorts of 'knobs.'\n\nThe approach taken with PostgreSQL is to use problems discovered to\ntry to improve the quality of the query optimizer. It is usually\nclever enough to do a good job, and if it can be improved to\nautomatically notice that \"better\" plan, then that is a better thing\nthan imposing the burden of tuning each query on you.\n\nTom Lane is \"Doctor Optimization,\" and if you look at past discussion\nthreads of this sort, you'll see that he tends to rather strongly\noppose the introduction of \"hints.\"\n-- \nselect 'aa454' || '@' || 'freenet.carleton.ca';\nhttp://www3.sympatico.ca/cbbrowne/linux.html\nAs of next Monday, COMSAT will be flushed in favor of a string and two tin\ncans. Please update your software.\n", "msg_date": "Thu, 11 Dec 2003 11:31:45 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hints in Postgres?" 
}, { "msg_contents": "On Thu, Dec 11, 2003 at 11:00:19 -0500,\n sandra ruiz <[email protected]> wrote:\n> Hi list,\n> \n> I need to know if there is anything like hints of Oracle in \n> Postgres..otherwise..I wish to find a way to force a query plan to use the \n> indexes or tell the optimizer things like \"optimize based in statistics\", \n> \"I want to define the order of the a join\" , \"optimize based on a execution \n> plan that I consider the best\" ...\n\nThere are a few things you can do.\n\nYou can explicitly fix the join order using INNER JOIN (in 7.4 you have to set\na GUC variable for this to force join order).\n\nYou can disable specific plan types (though sequential just becomes very\nexpensive as sometimes there is no other way to do things).\n\nYou can set tuning values to properly express the relative cost of things\nlike CPU time, sequential disk reads and random disk reads.\n\nThese are done by setting GUC variables either in the postgres config\nfile or using SET commands. They are per backend so some queries can\nbe done using one set of values while others going on at the same time\nuse different values.\n", "msg_date": "Thu, 11 Dec 2003 14:27:55 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hints in Postgres?" } ]
[ { "msg_contents": "I am working on migrating to postgres and had some questions regarding \noptimization that I could not find references in the documentation:\n\n\n1. Is there any performance difference for declaring a primary or \nforeign key a column or table contraint? From the documentation, which \nway is faster and/or scales better:\n\n\nCREATE TABLE distributors (\n did integer,\n name varchar(40),\n PRIMARY KEY(did)\n);\n\nCREATE TABLE distributors (\n did integer PRIMARY KEY,\n name varchar(40)\n);\n\n\n2. Is DEFERRABLE and INITIALLY IMMEDIATE or INITIALLY DEFERRABLE \nperferred for performance? We generally have very small transactions \n(web app) but we utilize a model of:\n\nview (limit scope for security) -> rules -> before triggers (validate \npermissions and to set proper permissions) -> tables.\n\nI know there were some issues with deferring that was fixed but does it \nbenefit performance or cause any reliability issues?\n\n\nThank you for your assistance and let me know if I can offer additional \ninformation.\n\n\t\t\t\t\t\t\t--spt\n\n\n\n", "msg_date": "Thu, 11 Dec 2003 11:38:10 -0500", "msg_from": "\"Sean P. Thomas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing FK & PK performance..." }, { "msg_contents": "\"Sean P. Thomas\" <[email protected]> writes:\n> 1. Is there any performance difference for declaring a primary or\n> foreign key a column or table contraint? From the documentation,\n> which way is faster and/or scales better:\n>\n> CREATE TABLE distributors (\n> did integer,\n> name varchar(40),\n> PRIMARY KEY(did)\n> );\n>\n> CREATE TABLE distributors (\n> did integer PRIMARY KEY,\n> name varchar(40)\n> );\n\nThese are equivalent -- the performance should be the same.\n\n-Neil\n\n", "msg_date": "Tue, 16 Dec 2003 17:47:59 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing FK & PK performance..." }, { "msg_contents": "> 1. Is there any performance difference for declaring a primary or \n> foreign key a column or table contraint? From the documentation, which \n> way is faster and/or scales better:\n> \n> \n> CREATE TABLE distributors (\n> did integer,\n> name varchar(40),\n> PRIMARY KEY(did)\n> );\n> \n> CREATE TABLE distributors (\n> did integer PRIMARY KEY,\n> name varchar(40)\n> );\n\nNo difference - they're parsed to exactly the same thing (the first \nversion).\n\n> 2. Is DEFERRABLE and INITIALLY IMMEDIATE or INITIALLY DEFERRABLE \n> perferred for performance? We generally have very small transactions \n> (web app) but we utilize a model of:\n\nNo idea on this one :/\n\nChris\n\n", "msg_date": "Wed, 17 Dec 2003 09:17:28 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing FK & PK performance..." } ]
[ { "msg_contents": "show\n", "msg_date": "Thu, 11 Dec 2003 13:45:17 -0300 (GMT+3)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Command" } ]
[ { "msg_contents": "\nHi everyone,\n\nI want to pick your brains for hardware suggestions about a \nLinux-based PostgreSQL 7.4 server. It will be a dedicated DB server \nbacking our web sites and hit by application servers (which do \nconnection pooling). I've hopefully provided all relevant \ninformation below. Any thoughts, comments or suggestions are welcome.\n\nOur current server and database:\n\tMac OS X Server 10.2.8\n\tsingle 1.25GHz G4\n\t2 GB 333MHz RAM\n\t7200 rpm SCSI drive for OS, logs\n\t15k rpm SCSI drive for data\n\n\tPostgreSQL 7.3.4\n\t1 database, 1.1 GB in size, growing by ~15 MB / week\n\t60 tables, 1 schema, largest is 1m rows, 1 at 600k, 3 at 100k\n\tPeak traffic:\n\t\t500 UPDATEs, INSERTs and DELETEs / minute\n\t\t6000 SELECTs / minutes\n\t\t90 connections\n\nPerformance is fine most of the time, but not during peak loads. \nWe're never swapping and disk IO during the SELECT peaks is hardly \nanything (under 3MB/sec). I think UPDATE peaks might be saturating \ndisk IO. Normally, most queries finish in under .05 seconds. Some \ntake 2-3 seconds. During peaks, the fast queries are just OK and the \nslower ones take too long (like over 8 seconds).\n\nWe're moving to Linux from OS X for improved stability and more \nhardware options. We need to do this soon. The current server is \nmax'd out at 2GB RAM and I'm afraid might start swapping in a month.\n\nProjected database/traffic in 12 months:\n\tDatabase size will be at least 2.5 GB\n\tLargest table still 1m rows, but 100k tables will grow to 250k\n\tWill be replicated to a suitable standby slave machine\n\tPeak traffic:\n\t\t2k UPDATEs, INSERTs, DELETEs / minute\n\t\t20k SELECTs / minute\n\t\t150 - 200 connections\n\nWe're willing to shell out extra bucks to get something that will \nundoubtedly handle the projected peak load in 12 months with \nexcellent performance. But we're not familiar with PG's performance \non Linux and don't like to waste money.\n\nI've been thinking of this (overkill? not enough?):\n\t2 Intel 32-bit CPUs\n\tLowest clock speed chip for the fastest available memory bus\n\t4 GB RAM (maybe we only need 3 GB to start with?)\n\tSCSI RAID 1 for OS\n\tFor PostgreSQL data and logs ...\n\t\t15k rpm SCSI disks\n\t\tRAID 5, 7 disks, 256MB battery-backed write cache\n\t\t(Should we save $ and get a 4-disk RAID 10 array?)\n\nI wonder about the 32bit+bigmem vs. 64bit question. At what database \nsize will we need more than 4GB RAM?\n\nWe'd like to always have enough RAM to cache the entire database. \nWhile 64bit is in our long-term future, we're willing to stick with \n32bit Linux until 64bit Linux on Itanium/Opteron and 64bit PostgreSQL \n\"settle in\" to proven production-quality.\n\nTIA,\n- Jeff\n\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Thu, 11 Dec 2003 13:19:42 -0700", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "Jeff Bohmer wrote:\n> We're willing to shell out extra bucks to get something that will \n> undoubtedly handle the projected peak load in 12 months with excellent \n> performance. But we're not familiar with PG's performance on Linux and \n> don't like to waste money.\n\nProperly tuned, PG on Linux runs really nice. A few people have \nmentioned the VM swapping algorithm on Linux is semi-dumb. 
I get around \nthat problem by having a ton of memory and almost no swap.\n\n> I've been thinking of this (overkill? not enough?):\n> 2 Intel 32-bit CPUs\n> Lowest clock speed chip for the fastest available memory bus\n> 4 GB RAM (maybe we only need 3 GB to start with?)\n> SCSI RAID 1 for OS\n> For PostgreSQL data and logs ...\n> 15k rpm SCSI disks\n> RAID 5, 7 disks, 256MB battery-backed write cache\n> (Should we save $ and get a 4-disk RAID 10 array?)\n> \n> I wonder about the 32bit+bigmem vs. 64bit question. At what database \n> size will we need more than 4GB RAM?\n\nWith 4GB of RAM, you're already running into bigmem. By default, Linux \ngives 2GB of address space to programs and 2GB to kernel. I usually see \npeople quote 5%-15% penalty in general for using PAE versus a flat \naddress space. I've seen simple MySQL benchmarks where 64-bit versions \nrun 35%+ faster versus 32-bit+PAE but how that translates to PG, I dunno \nyet.\n\n> We'd like to always have enough RAM to cache the entire database. While \n> 64bit is in our long-term future, we're willing to stick with 32bit \n> Linux until 64bit Linux on Itanium/Opteron and 64bit PostgreSQL \"settle \n> in\" to proven production-quality.\n\nWell if this is the case, you probably should get an Opteron server \n*now* and just run 32-bit Linux on it until you're sure about the \nsoftware. No point in buying a Xeon and then throwing the machine away \nin a year when you decide you need 64-bit for more speed.\n\n", "msg_date": "Thu, 11 Dec 2003 12:50:06 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "\n>Properly tuned, PG on Linux runs really nice. A few people have \n>mentioned the VM swapping algorithm on Linux is semi-dumb. I get \n>around that problem by having a ton of memory and almost no swap.\n\nI think we want your approach: enough RAM to avoid swapping altogether.\n\n\n\n>With 4GB of RAM, you're already running into bigmem. By default, \n>Linux gives 2GB of address space to programs and 2GB to kernel.\n\nIt seems I don't fully understand the bigmem situation. I've \nsearched the archives, googled, checked RedHat's docs, etc. But I'm \ngetting conflicting, incomplete and/or out of date information. Does \nanyone have pointers to bigmem info or configuration for the 2.4 \nkernel?\n\nIf Linux is setup with 2GB for kernel and 2GB for user, would that be \nOK with a DB size of 2-2.5 GB? I'm figuring the kernel will cache \nmost/all of the DB in it's 2GB and there's 2GB left for PG processes. \nWhere does PG's SHM buffers live, kernel or user? (I don't plan on \ngoing crazy with buffers, but will guess we'd need about 128MB, 256MB \nat most.)\n\n\n\n>I usually see people quote 5%-15% penalty in general for using PAE \n>versus a flat address space. I've seen simple MySQL benchmarks where \n>64-bit versions run 35%+ faster versus 32-bit+PAE but how that \n>translates to PG, I dunno yet.\n>\n>>We'd like to always have enough RAM to cache the entire database. \n>>While 64bit is in our long-term future, we're willing to stick with \n>>32bit Linux until 64bit Linux on Itanium/Opteron and 64bit \n>>PostgreSQL \"settle in\" to proven production-quality.\n>\n>Well if this is the case, you probably should get an Opteron server \n>*now* and just run 32-bit Linux on it until you're sure about the \n>software. 
No point in buying a Xeon and then throwing the machine \n>away in a year when you decide you need 64-bit for more speed.\n\nThat's a good point. I had forgotten about the option to run 32bit \non an Operton. If we had 3GB or 4GB initially on an Opteron, we'd \nneed bigmem for 32bit Linux, right?\n\nThis might work nicely since we'd factor in the penalty from PAE for \nnow and have the performance boost from moving to 64bit available on \ndemand. Not having to build another DB server in a year would also \nbe nice.\n\nFYI, we need stability first and performance second.\n\nThank you,\n- Jeff\n\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Thu, 11 Dec 2003 15:02:11 -0700", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "\nJust one more piece of advice, you might want to look into a good battery \nbacked cache hardware RAID controller. They work quite well for heavily \nupdated databases. The more drives you throw at the RAID array the faster \nit will be.\n\n", "msg_date": "Thu, 11 Dec 2003 15:48:58 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "Jeff Bohmer wrote:\n> It seems I don't fully understand the bigmem situation. I've searched \n> the archives, googled, checked RedHat's docs, etc. But I'm getting \n> conflicting, incomplete and/or out of date information. Does anyone \n> have pointers to bigmem info or configuration for the 2.4 kernel?\n\nBigmem is the name for Linux's PAE support.\n\n> If Linux is setup with 2GB for kernel and 2GB for user, would that be OK \n> with a DB size of 2-2.5 GB? I'm figuring the kernel will cache most/all \n> of the DB in it's 2GB and there's 2GB left for PG processes. Where does \n> PG's SHM buffers live, kernel or user? (I don't plan on going crazy \n> with buffers, but will guess we'd need about 128MB, 256MB at most.)\n\nPG's SHM buffers live in user. Whether Linux's OS caches lives in user \nor kernel, I think it's in kernel and I remember reading a max of ~950KB \nw/o bigmem which means your 3.5GB of available OS memory will definitely \nhave to be swapped in and out of kernel space using PAE.\n\n>> Well if this is the case, you probably should get an Opteron server \n>> *now* and just run 32-bit Linux on it until you're sure about the \n>> software. No point in buying a Xeon and then throwing the machine away \n>> in a year when you decide you need 64-bit for more speed.\n> \n> That's a good point. I had forgotten about the option to run 32bit on \n> an Operton. If we had 3GB or 4GB initially on an Opteron, we'd need \n> bigmem for 32bit Linux, right?\n> \n> This might work nicely since we'd factor in the penalty from PAE for now \n> and have the performance boost from moving to 64bit available on \n> demand. 
Not having to build another DB server in a year would also be \n> nice.\n> \n> FYI, we need stability first and performance second.\n\nWe ordered a 2x Opteron server the moment the CPU was released and it's \nbeen perfect -- except for one incident where the PCI riser card had \ndrifted out of the PCI slot due to the heavy SCSI cables connected to \nthe card.\n\nI think most of the Opteron server MBs are pretty solid but you want \nextra peace-of-mind, you could get a server from Newisys as they pack in \na cartload of extra monitoring features.\n\n", "msg_date": "Thu, 11 Dec 2003 15:32:47 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "Jeff Bohmer wrote:\n>> Well if this is the case, you probably should get an Opteron server \n>> *now* and just run 32-bit Linux on it until you're sure about the \n>> software. No point in buying a Xeon and then throwing the machine away \n>> in a year when you decide you need 64-bit for more speed.\n> \n> \n> That's a good point. I had forgotten about the option to run 32bit on \n> an Operton. If we had 3GB or 4GB initially on an Opteron, we'd need \n> bigmem for 32bit Linux, right?\n> \n> This might work nicely since we'd factor in the penalty from PAE for now \n> and have the performance boost from moving to 64bit available on \n> demand. Not having to build another DB server in a year would also be \n> nice.\n\nFWIW, there are only two pieces of software that need 64bit aware for a typical \nserver job. Kernel and glibc. Rest of the apps can do fine as 32 bits unless you \nare oracle and insist on outsmarting OS.\n\nIn fact running 32 bit apps on 64 bit OS has plenty of advantages like \neffectively using the cache. Unless you need 64bit, going for 64bit software is \nnot advised.\n\n Shridhar\n\n-- \n-----------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:[email protected]\nPhone:- +91-20-5676700 Extn.270\nFax :- +91-20-5676701\n-----------------------------\n\n", "msg_date": "Fri, 12 Dec 2003 12:05:49 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "Shridhar Daithankar wrote:\n> \n> FWIW, there are only two pieces of software that need 64bit aware for a \n> typical server job. Kernel and glibc. Rest of the apps can do fine as 32 \n> bits unless you are oracle and insist on outsmarting OS.\n> \n> In fact running 32 bit apps on 64 bit OS has plenty of advantages like \n> effectively using the cache. Unless you need 64bit, going for 64bit \n> software is not advised.\n\nThis is a good point. While doing research on this matter a few months \nback, I saw comments by people testing 64-bit MySQL that some operations \nwould run faster and some slower due to the use of 64-bit datatypes \nversus 32-bit. The best solution in the end is probably to run 32-bit \nPostgres under a 64-bit kernel -- unless your DB tends to have a lot of \n64-bit datatypes.\n\n", "msg_date": "Fri, 12 Dec 2003 08:35:17 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": ">Just one more piece of advice, you might want to look into a good battery\n>backed cache hardware RAID controller. They work quite well for heavily\n>updated databases. 
The more drives you throw at the RAID array the faster\n>it will be.\n\nI've seen this list often recommended such a setup. We'll probably \nget battery-backed write cache and start out with a 4 disk RAID 10 \narray. Then add more disks and change RAID 5 if more read \nperformance is needed.\n\nThanks,\n- Jeff\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Sat, 13 Dec 2003 15:57:04 -0700", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "\n>Shridhar Daithankar wrote:\n>>\n>>FWIW, there are only two pieces of software that need 64bit aware \n>>for a typical server job. Kernel and glibc. Rest of the apps can do \n>>fine as 32 bits unless you are oracle and insist on outsmarting OS.\n>>\n>>In fact running 32 bit apps on 64 bit OS has plenty of advantages \n>>like effectively using the cache. Unless you need 64bit, going for \n>>64bit software is not advised.\n>\n>This is a good point. While doing research on this matter a few \n>months back, I saw comments by people testing 64-bit MySQL that some \n>operations would run faster and some slower due to the use of 64-bit \n>datatypes versus 32-bit. The best solution in the end is probably to \n>run 32-bit Postgres under a 64-bit kernel -- unless your DB tends to \n>have a lot of 64-bit datatypes.\n\n\nThanks Shridhar and William,\n\nThis advice has been very helpful. I would imagine a lot of folks \nare, or will soon be looking at 32- vs. 64-bit just for memory \nreasons and not 64-bit apps.\n\n- Jeff\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Sat, 13 Dec 2003 16:00:32 -0700", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "I don't know what your budget is, but there are now 10k RPM SATA 150 \ndrives on the market. Their price/performance is impressive. You may \nwant to consider going with a bunch of these instead of SCSI disks (more \nspindles vs. faster spindles). 3ware makes a hardware raid card that can \ndrive up to 12 SATA disks. I have been told by a few people who have \nused it that the linux driver is very solid.\n\nDrew\n\n\nJeff Bohmer wrote:\n\n>> Just one more piece of advice, you might want to look into a good \n>> battery\n>> backed cache hardware RAID controller. They work quite well for heavily\n>> updated databases. The more drives you throw at the RAID array the \n>> faster\n>> it will be.\n>\n>\n> I've seen this list often recommended such a setup. We'll probably \n> get battery-backed write cache and start out with a 4 disk RAID 10 \n> array. Then add more disks and change RAID 5 if more read performance \n> is needed.\n>\n> Thanks,\n> - Jeff\n\n\n\n", "msg_date": "Sun, 14 Dec 2003 23:17:19 -0500", "msg_from": "\"Andrew G. Hammond\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": "In the last exciting episode, [email protected] (\"Andrew G. Hammond\") wrote:\n> I don't know what your budget is, but there are now 10k RPM SATA 150\n> drives on the market. Their price/performance is impressive. 
You may\n> want to consider going with a bunch of these instead of SCSI disks\n> (more spindles vs. faster spindles). 3ware makes a hardware raid\n> card that can drive up to 12 SATA disks. I have been told by a few\n> people who have used it that the linux driver is very solid.\n\nWe got a couple of those in for testing purposes; when opportunity\npresents itself, I'll have to check to see if they are any more honest\nabout commits than traditional IDE drives.\n\nIf they still \"lie\" the same way IDE drives do, it is entirely\npossible that they are NOT nearly as impressive as you presently\nimagine. It's not much good if they're \"way fast\" if you can't trust\nthem to actually store data when they claim it is stored...\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/lisp.html\n\"Much of this software was user-friendly, meaning that it was intended\nfor users who did not know anything about computers, and furthermore\nhad absolutely no intention whatsoever of learning.\"\n-- A. S. Tanenbaum, \"Modern Operating Systems, ch 1.2.4\"\n", "msg_date": "Mon, 15 Dec 2003 00:14:29 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" }, { "msg_contents": ">In the last exciting episode, [email protected] (\"Andrew G. Hammond\") wrote:\n>> I don't know what your budget is, but there are now 10k RPM SATA 150\n>> drives on the market. Their price/performance is impressive. You may\n>> want to consider going with a bunch of these instead of SCSI disks\n>> (more spindles vs. faster spindles). 3ware makes a hardware raid\n>> card that can drive up to 12 SATA disks. I have been told by a few\n>> people who have used it that the linux driver is very solid.\n>\n>We got a couple of those in for testing purposes; when opportunity\n>presents itself, I'll have to check to see if they are any more honest\n>about commits than traditional IDE drives.\n>\n>If they still \"lie\" the same way IDE drives do, it is entirely\n>possible that they are NOT nearly as impressive as you presently\n>imagine. It's not much good if they're \"way fast\" if you can't trust\n>them to actually store data when they claim it is stored...\n\nWe lost data because of this very problem when a UPS didn't signal \nthe shut down before it ran out of juice.\n\nHere's an excellent explanation of the problem:\nhttp://archives.postgresql.org/pgsql-general/2003-10/msg01343.php\n\nThis post indicates that SATA drives still have problems, but a new \nATA standard might fix things in the future:\nhttp://archives.postgresql.org/pgsql-general/2003-10/msg01395.php\n\nSATA RAID is a good option for a testing server, though.\n\n\n- Jeff\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Mon, 15 Dec 2003 09:37:29 -0700", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for Linux/PGSQL server" } ]
[ { "msg_contents": "Hi,\n\nI've got very slow insert performance on some \ntable which has trigger based on complex PL/pgSQL function.\nApparently insert is slow due some slow sql inside that function,\nsince CPU load is very high and disk usage is low during insert.\nI run Red Hat 9\nAnthlon 2.6\n1GB ram\nFast IDE Disk\n\nSetting following in postgres.conf apparently doesn't help:\nlog_statement = true\nlog_duration = true\nsince it logs only sql issued by client. It logs only once \nper session the sql text but during call to the PL/pgSQL function,\nbut of course no duration. \n\nDue the complexity of PL/pgSQL function trying to step by step \nsee the execution plans is very time consuming. \n\nQ1) Is there any way to see which statements are called for PL/pgSQL\nand their duration?\n\nI've tried to measure the duration of sql with printing out\n\"localtimestamp\" but for some reason during the same pg/plsql call it\nreturns the same \nvalue:\n\nExample:\nFollowing gets and prints out the localtimestamp value in the loop\ncreate or replace function foobar()\n returns integer as '\n declare \n v timestamp;\n begin \n loop\n select localtimestamp into v;\n raise notice ''Timestamp: %'', v;\n end loop;\n return null;\n end; ' language 'plpgsql'\n;\n\nand as result of \"select foobar();\" \n\ni constantly get the same value:\nNOTICE: Timestamp: 2003-12-12 01:51:35.768053\nNOTICE: Timestamp: 2003-12-12 01:51:35.768053\nNOTICE: Timestamp: 2003-12-12 01:51:35.768053\nNOTICE: Timestamp: 2003-12-12 01:51:35.768053\nNOTICE: Timestamp: 2003-12-12 01:51:35.768053\n\nQ2) what i do wrong here and what is the \"Proper Way\" to measure\nexecution time of sql called inside PG/plSQL.\n\nThanks in advance \n\nWBR\n--\nAram\n\n\n\n", "msg_date": "12 Dec 2003 02:17:06 +0100", "msg_from": "Aram Kananov <[email protected]>", "msg_from_op": true, "msg_subject": "Measuring execution time for sql called from PL/pgSQL" }, { "msg_contents": "> I've tried to measure the duration of sql with printing out\n> \"localtimestamp\" but for some reason during the same pg/plsql call \n> it returns the same value:\n\nAram,\n\n>From http://www.postgresql.org/docs/current/static/functions-datetime.html:\n\nThere is also the function timeofday(), which for historical reasons returns \na text string rather than a timestamp value: \n\nSELECT timeofday();\n Result: Sat Feb 17 19:07:32.000126 2001 EST\n\nIt is important to know that CURRENT_TIMESTAMP and related functions return \nthe start time of the current transaction; their values do not change during \nthe transaction. This is considered a feature: the intent is to allow a \nsingle transaction to have a consistent notion of the \"current\" time, so that \nmultiple modifications within the same transaction bear the same time stamp. \ntimeofday() returns the wall-clock time and does advance during transactions. \n\n-David\n", "msg_date": "Thu, 11 Dec 2003 18:50:31 -0700", "msg_from": "\"David Shadovitz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Measuring execution time for sql called from PL/pgSQL" }, { "msg_contents": "Dnia 2003-12-12 02:17, Uďż˝ytkownik Aram Kananov napisaďż˝:\n> select localtimestamp into v;\n> raise notice ''Timestamp: %'', v;\n\nDon't use localtimestamp, now() neither any transaction based time \nfunction. They all return the same value among whole transaction. 
The \nonly time function which can be used for performance tests is timeofday().\n\nYou can read more about time functions in the manual.\n\nRegards,\nTomasz Myrta\n\n\n\n", "msg_date": "Fri, 12 Dec 2003 08:46:37 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Measuring execution time for sql called from PL/pgSQL" } ]
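Putting the timeofday() advice to work, the loop from the original post can be reshaped into a small timing harness. A minimal sketch in 7.3/7.4-era PL/pgSQL; the statement being measured (a count over pg_class) is purely illustrative:

    CREATE OR REPLACE FUNCTION time_sql()
      RETURNS integer AS '
      DECLARE
        t0 timestamp;
        t1 timestamp;
        d  interval;
      BEGIN
        t0 := timeofday()::timestamp;     -- wall-clock time; advances within a transaction
        PERFORM count(*) FROM pg_class;   -- the SQL being measured
        t1 := timeofday()::timestamp;
        d  := t1 - t0;
        RAISE NOTICE ''elapsed: %'', d;
        RETURN 0;
      END; ' LANGUAGE 'plpgsql';

    SELECT time_sql();

Unlike localtimestamp or now(), successive timeofday() calls inside one function call return different values, so the elapsed interval here is meaningful.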
[ { "msg_contents": "Well, now that I have the plan for my slow-running query, what do I do? Where \nshould I focus my attention?\nThanks.\n-David\n\n\nHash Join (cost=16620.59..22331.88 rows=40133 width=266) (actual \ntime=118773.28..580889.01 rows=57076 loops=1)\n\t-> Hash Join (cost=16619.49..21628.48 rows=40133 width=249) (actual \ntime=118771.29..535709.47 rows=57076 loops=1)\n\t\t-> Hash Join (cost=16618.41..20724.39 rows=40133 width=240) (actual \ntime=118768.04..432327.82 rows=57076 loops=1)\n\t\t\t-> Hash Join (cost=16617.34..19920.66 rows=40133 width=223) (actual \ntime=118764.67..340333.78 rows=57076 loops=l)\n\t\t\t\t-> Hash Join (cost=16616.14..19217.14 rows=4Ol33 width=214) (actual \ntime=118761.38..258978.8l row=57076 loops=1)\n\t\t\t\t\t-> Merge Join (cost=16615.07..18413.42 rows=40133 width=205)\n\t\t\t\t\t (actual time=118758.74..187180.55 rows=57076 loops=i)\n\t\t\t\t\t\t-> Index Scan using grf_grf_id_idx on giraffes (cost=O.O0..1115.61 \nrows=53874 width=8)\n\t\t\t\t\t\t (actual \ntime=2.37..6802.38 rows=57077 loops=l)\n\t\t\t\t\t\t-> Sort (cost=l66l5.07..16615.07 rows=18554 width=197) (actual \ntime=118755.11..120261.06 rows=59416 loops=l)\n\t\t\t\t\t\t\t-> Hash Join (cost=8126.08..14152.54 rows=18554 width=197)\n\t\t\t\t\t\t\t (actual time=50615.72..l09853.7l rows=16310 loops=1)\n\t\t\t\t\t\t\t\t-> Hash Join (cost=8124.39..12690.30 rows=24907 width=179)\n\t\t\t\t\t\t\t\t (actual time=50607.36..86868.58 rows=iSBiS loops=1)\n\t\t\t\t\t\t\t\t\t-> Hash Join (cost=249.26..2375.23 rows=24907 width=131)\n\t\t\t\t\t\t\t\t\t (actual time=23476.42..35107.80 rows=16310 loops=l)\n\t\t\t\t\t\t\t\t\t\t-> Nested Loop (cost=248.2l..1938.31 rows=24907 width=118)\n\t\t\t\t\t\t\t\t\t\t (actual time=23474.70..28155.13 rows=16310 loops=1)\n\t\t\t\t\t\t\t\t\t\t\t-> Seq Scan on zebras (cost=0.00..l.0l rows=l width=14)\n\t\t\t\t\t\t\t\t\t\t\t (actual time=O.64..0.72 rows=1 ioops=1)\n\t\t\t\t\t\t\t\t\t\t\t-> Materialize (cost=1688.23..l688.23 rows=24907 width=104)\n\t\t\t\t\t\t\t\t\t\t\t (actual time=23473.77..23834.26 rows=16310 loops=l)\n\t\t\t\t\t\t\t\t\t\t\t\t\t-> Hash Join (cost=248.21..1688.23 rows=24907 width=lO4)\n\t\t\t\t\t\t\t\t\t\t\t\t\t (actual time=1199.26..23059.92 rows=16310 loops=l)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t-> Seq Scan on frogs (cost=0.00..755.07 rows=24907 width=83)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t (actual time=0.53..4629.58 rows=25702 \nloops=l)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t-> Hash (cost=225.57..225.57 rows=9057 width=21)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t (actual time=1198.0l..1198.01 rows=0 loops=1)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t-> Seq Scan on tigers (cost=0.00..225.57 rows=9057 width=21)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t (actual time=0.39..892.67 rows=9927 \nloops=1)\n\t\t\t\t\t\t\t\t\t\t-> Hash (cost=l.O4..1.-4 rows=4 width=13) (actual time=l.07..1.07 \nrows=0 loops=1)\n\t\t\t\t\t\t\t\t\t\t\t-> Seq Scan on deers (cost=0.0O..1.04 rows=4 width=13)\n\t\t\t\t\t\t\t\t\t\t\t (actual time=0.64..0.95 rows=4 loops=1)\n\t\t\t\t\t\t\t\t\t-> Hash (cost=4955.28..4955.28 rows=91528 width=48)\n\t\t\t\t\t\t\t\t\t (actual tlne=27O40.82..27040.82 rows=0 loops=1)\n\t\t\t\t\t\t\t\t\t\t-> Seq Scan on warthogs (cost=0.00..4955.28 rows=91528 width=48)\n\t\t\t\t\t\t\t\t\t\t (actual time=3.92..24031.27 rows=91528 \nloops=1)\n\t\t\t\t\t\t\t\t-> Hash (cost=1.55..1.55 rows=55 width=18) (actual time=7.l3..7.13 \nrows=0 loops=1)\n\t\t\t\t\t\t\t\t\t-> Seq Scan on monkeys (cost=0.00..l.55 rows=55 width=18)\n\t\t\t\t\t\t\t\t\t (actual time=0.64..5.38 rows=55 loops=1)\n\t\t\t\t\t-> Hash (cost=l.O5..1.05 
rows=S width=9) (actual time=1.16..l.l6 rows=0 \nloops=1)\n\t\t\t\t\t\t-> Seq Scan on worms (cost=0.00..1.05 rows=S width=9) (actual \ntime=0.65..1.00 rows=5 loops=1)\n\t\t\t\t-> Hash (cost=1.16..1.16 rows=16 width=9) (actual time=l.86..1.86 rows=0 \nloops=1)\n\t\t\t\t\t-> Seq Scan on lions (cost=0.00..l.16 rows=16 width=9) (actual \ntime=0.lO..1.36 rows=16 loops=1)\n\t\t\t-> Hash\t(cost=1.06..1.06 rows=6 width=17) (actual time=1.35..1.35 rows=0 \nloops=1)\n\t\t\t\t-> Seq Scan on dogs (cost=0.00..1.06 rows=6 width=17) (actual \ntime=0.65..1.16 rows=6 loops=l)\n\t\t-> Hash\t\t\t(cost=1.07..1.07 rows=3 width=9) (actual time=1.23..1.23 rows=0 \nloops=1)\n\t\t\t-> Seq Scan on parrots (cost=0.00..1.07 rows=3 width=9) (actual \ntime=0.69..1.13 rows=3 loops=1)\n\t-> Hash\t(cost=l.08..1.08 rows=8 width=17) (actual time=0.98..0.98 rows=0 \nloops=1)\n\t\t->\tSeq Scan on rhinos (cost=0.00..1.08 rows=8 width=17) (actual \ntime=0.10..0.73 rows=8 loops=1)\n\nTotal runtime: 58l341.00 msec\n\n", "msg_date": "Fri, 12 Dec 2003 00:18:12 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan - now what?" }, { "msg_contents": "David Shadovitz wrote:\n\n> Well, now that I have the plan for my slow-running query, what do I do? Where \n> should I focus my attention?\n\nBriefly looking over the plan and seeing the estimated v/s actual row mismatch,I \ncan suggest you following.\n\n1. Vacuum(full) the database. Probably you have already done it.\n2. Raise statistics_target to 500 or more and reanalyze the table(s) in question.\n3. Set enable_hash_join to false, before running the query and see if it helps.\n\n HTH\n\n Shridhar\n", "msg_date": "Fri, 12 Dec 2003 14:44:05 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan - now what?" }, { "msg_contents": "David Shadovitz <[email protected]> writes:\n> Well, now that I have the plan for my slow-running query, what do I\n> do?\n\nThis is not very informative when you didn't show us the query nor\nthe table schemas (column datatypes and the existence of indexes\nare the important parts). I have a feeling that you might be well\nadvised to fold the multiple tables into one \"animals\" table, but\nthere's not enough info here to make that recommendation for sure.\n\nBTW, what did you do with this, print and OCR it? It's full of the\nmost bizarre typos ... mostly \"l\" for \"1\", but others too ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Dec 2003 10:33:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan - now what? " } ]
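Shridhar's three suggestions map onto a handful of commands. A sketch only, since the real table and column names were sanitized out of the plan -- warthogs and join_col below are stand-ins -- and note the run-time parameter is actually spelled enable_hashjoin:

    VACUUM FULL ANALYZE;                        -- 1. compact the tables and refresh the statistics

    ALTER TABLE warthogs ALTER COLUMN join_col SET STATISTICS 500;   -- hypothetical column name
    ANALYZE warthogs;                           -- 2. larger statistics sample for a heavily joined column

    SET enable_hashjoin = off;                  -- 3. session-only; see whether another join method fares better
    -- re-run EXPLAIN ANALYZE on the problem query here
    RESET enable_hashjoin;

Re-running EXPLAIN ANALYZE after each step shows whether the estimated and actual row counts are converging and whether the join methods actually change.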
[ { "msg_contents": "> Running the attached test program shows on BSD/OS 4.3:\n> \n> \twrite 0.000360\n> \twrite & fsync 0.001391\n\nI think the \"write & fsync\" pays for the previous \"write\" test (same filename).\n\n> \twrite, close & fsync 0.001308\n> \topen o_fsync, write 0.000924\n\nI have tried to modify the program to more closely resemble WAL \nwrites (all writes to WAL are 8k), the file is usually already open, \nand test larger (16k) transactions.\n\nzeu@a82101002:~> test_sync1\nwrite 0.000625\nwrite & fsync 0.016748\nwrite & fdatasync 0.006650\nwrite, close & fsync 0.017084\nwrite, close & fdatasync 0.006890\nopen o_dsync, write 0.015997\nopen o_dsync, one write 0.007128\n\nFor the last line xlog.c would need to be modified, but the measurements\nseem to imply that it is only worth it on platforms that have O_DSYNC\nbut not fdatasync. \n\nAndreas", "msg_date": "Fri, 12 Dec 2003 13:22:00 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] fsync method checking" }, { "msg_contents": "\nI have updated my program with your suggested changes and put in\nsrc/tools/fsync. Please see how you like it.\n\n---------------------------------------------------------------------------\n\nZeugswetter Andreas SB SD wrote:\n> \n> > Running the attached test program shows on BSD/OS 4.3:\n> > \n> > \twrite 0.000360\n> > \twrite & fsync 0.001391\n> \n> I think the \"write & fsync\" pays for the previous \"write\" test (same filename).\n> \n> > \twrite, close & fsync 0.001308\n> > \topen o_fsync, write 0.000924\n> \n> I have tried to modify the program to more closely resemble WAL \n> writes (all writes to WAL are 8k), the file is usually already open, \n> and test larger (16k) transactions.\n> \n> zeu@a82101002:~> test_sync1\n> write 0.000625\n> write & fsync 0.016748\n> write & fdatasync 0.006650\n> write, close & fsync 0.017084\n> write, close & fdatasync 0.006890\n> open o_dsync, write 0.015997\n> open o_dsync, one write 0.007128\n> \n> For the last line xlog.c would need to be modified, but the measurements\n> seem to imply that it is only worth it on platforms that have O_DSYNC\n> but not fdatasync. \n> \n> Andreas\n\nContent-Description: test_sync1.c\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 18 Mar 2004 12:34:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync method checking" } ]
[ { "msg_contents": "Hi List,\n \n First of all, I tried to subcribe the ODBC list but it seems that the \nsubscription's link is broken ! So here it goes:\n\n I have a delphi software use ttable components that converts dbf information \nto PostgreSQL an Oracle Databases. My problem is that PostgreSQL is too slow, \nthe oracle db makes this convertion in 3.45 min and the Pg db makes int 29 min.\n The software is the same ( only the database reference is diferent ) , this \nsotware uses BDE to access the database with oracle native driver and using \npostgreSQL odbc driver version 5. Both databases are in the same machine ( \nPentium 4 1.8Ghz, 384MB RAM DDR ) running RH 9 , Oracle 9i and PostgreSQL 7.3.2-\n3.\n When I ran this conversion I \"snorted\" the communication between the server \nand the station to see how it does the sql requests , here it goes:\n\nORACLE :\n\n- select owner, object_name, object_type, created from sys.all_objects where \nobject_type in ('TABLE', 'VIEW' ) and owner = 'VENDAS' and object_name \n= 'FTCOFI00' order by 1 ASC, 2 ASC\n\n- select owner, index_name, uniqueness from sys.all_indexes where table_owner \n= 'VENDAS' and table_name = 'FTCOFI00' order by owner ASC, index_name ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCOFI01' order by column_position ASC\n\n- \nSELECT \"EMP\" ,\"FIL\" ,\"CODIGO_FISCAL\" ,\"CODIGO_FISCAL_ESTORNO\" ,\"DESCRICAO_FISCAL\n\" ,\"CODIGO_OPERACIONAL\" ,\"DESCRICAO_USUARIO\" ,\"COD_NATIPI\" ,\"COD_NATIBGE\" ,\"EXTO\n_NF1\" ,\"TEXTO_NF2\" ,\"NF_NORMALDIF\" ,\"NF_TRANSFILIAL\" ,\"COD_FILIAL\" ,\"COD_LANCTO_\nFILIAL\" ,\"NF_EXPORTACAO_DIRETA\" ,\"NF_EXPORTACAO_INDIRETA\" ,\"NF_SIMPREMESSA\" ,\"NF\n_DEVOLUCAO\" ,\"NF_ENTRADA\" ,\"NF_REPOSICAO\" ,\"NF_OUTRASERIE\" ,\"NF_CONSIGNACAO\" ,\"N\nF_PRODGRATIS\" ,\"NF_FATURANTECIP\" ,\"NF_DIFBASEICM\" ,\"NF_DIF_VALORICM\" ,\"NF_DIFBAS\nEIPI\" ,\"NF_DIFVALORIPI\" ,\"NF_DIFPRECO\" ,\"BLOQ_CREDITO\" ,\"LIBERA_CREDITO\" ,\"VER_P\nARAM_VENDAS\" ,\"ENTRA_COBRANCA\" ,\"BASECALC_VLRBRUTO\" ,\"DESCNF_REFICM\" ,\"ALIQICM_I\nGUALEST\" ,\"COD_TRIBICM\" ,\"COD_TRIBIPI\" ,\"ATUAL_ESTOQUE\" ,\"ATUAL_FABRICACAO\" ,\"AT\nUAL_FATURA\" ,\"ATUAL_OUTENTR\" ,\"ATUAL_OUTSAIDA\" ,\"ATUAL_TRANFIL\" ,\"ATUAL_SEMIACAB\n\" ,\"ATUAL_CARTPED\" ,\"ATUAL_ENTRSAID\" ,\"REV_CUSTMEDIO\" ,\"DIGITAR_FISICO\" ,\"DIGITA\nR_FINANCEIRO\" ,\"USAR_CUSTO_CMU_INFORMAR\" ,\"GRUPO_FATURAMENTO\" ,\"TIPO_NF\" ,\"RESUM\nO_FISCAL_CODIGO\" ,\n\"ATUAL_DISTRIB\" ,\"IMPR_OBS_NF_REG_ES\" ,\"DIFE_RECEITA\" ,\"COD_LANCTO\" ,\"SITUACAO\" \n FROM \"FTCOFI00\" ORDER BY \"EMP\" ASC , \"FIL\" ASC , \"CODIGO_FISCAL\" ASC\n\n- select owner, object_name, object_type, created from sys.all_objects where \nobject_type in ('TABLE', 'VIEW') and owner = 'VENDAS' and object_name \n= 'FTCLCR00' order by 1 ASC, 2 ASC\n\n- select owner, index_name, uniqueness from sys.all_indexes where table_owner \n= 'VENDAS' and table_name = 'FTCLCR00' order by owner ASC, index_name ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR01' order by column_position ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR02' order by column_position ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR03' order by column_position ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR04' order by column_position ASC\n\n- 
select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR05' order by column_position ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR06' order by column_position ASC\n\n- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and \nindex_name = 'FTCLCR07' order by column_position ASC\n\n- \nSELECT \"EMP\" ,\"FIL\" ,\"TIPO_CADASTRO\" ,\"CODIGO\" ,\"RAZAO_SOCIAL\" ,\"NOME_FANTASIA\" \n,\"EMP_ENDERECO\" ,\"EMP_NRO\" ,\"EMP_COMPLEMENTO\" ,\"EMP_BAIRRO\" ,\"EMP_CIDADE\" ,\"EMP_\nESTADO\" ,\"EMP_CEP\" ,\"EMP_PAIS\" ,\"EMP_EAN\" ,\"COB_ENDERECO\" ,\"COB_NRO\" ,\"COB_COMPL\nEMENTO\" ,\"COB_BAIRRO\" ,\"COB_CIDADE\" ,\"COB_ESTADO\" ,\"COB_CEP\" ,\"COB_PAIS\" ,\"COB_E\nAN\" ,\"ENT_ENDERECO\" ,\"ENT_NRO\" ,\"ENT_COMPLEMENTO\" ,\"ENT_BAIRRO\" ,\"ENT_CIDADE\" ,\"\nENT_ESTADO\",\n\"ENT_CEP\" ,\"ENT_PAIS\" ,\"ENT_EAN\" ,\"LOJA_EAN\" ,\"TELEFONE\" ,\"CELULAR\" ,\"FAX\" ,\"EMA\nIL\" ,\"SITE\" ,\"CONTATO_NOME\" ,\"CONTATO_TELEFONE\" ,\"CONTATO_EMAIL\" ,\"CONTATO_DDMM_\nANIV\" ,\"SITUACAO_CADASTRO\" ,\"OBSERVACOES\" ,\"DATA_CADASTRO\" ,\"DATA_ALTERACAO\" ,\"T\nIPO_CONTRIBUINTE\" ,\"CODIGO_CONTRIBUINTE\" ,\"TIPO_INSCRICAO\" ,\"CODIGO_INSCRICAO\",\"\nCODIGO_REDE\" ,\"CODIGO_TIPO_CLIENTE\" ,\"CODIGO_GRUPO_CLIENTE\" ,\"CODIGO_SUFRAMA\" ,\"\nDATA_VALIDADE_SUFRAMA\" ,\"LIMITE_CREDITO\" ,\n\"MARCA\" ,\"CLASSE\" ,\"BANDEIRA_CLIENTE\" ,\"CODIGO_TIPO_CREDOR\" ,\"NOME_REPRESENTANTE\n\" ,\"TIPO_CONDICAO_PGTO\" ,\"PRAZO_PGTO_01\" ,\"PRAZO_PGTO_02\" ,\"PRAZO_PGTO_03\" ,\"COD\nIGO_MOEDA_COMPRA\" ,\"FATOR_QUALIDADE\" ,\"DESPESA_FINANCEIRA\" ,\"CODIGO_DARF\" ,\"CODI\nGO_NATUREZA_RENDIMENTO\" ,\"CONTA_CORRENTE_BANCO\" ,\"CONTA_CORRENTE_AGENCIA\" ,\"CONT\nA_CORRENTE_NUMERO\" ,\"FORNECEDOR_SULPLASTIC\" ,\"SUFRAMA_TRIB_ICM\" ,\"SUFRAMA_TRIB_I\nPI\" ,\"CONTA_CORRENTE_AGENC_DC\" ,\n\"CONTA_CORRENTE_NUM_DC\" ,\"CONTA_COR_FORMA_PAGTO\" ,\"FORMA_CREDITO\" ,\"SENHA\" ,\"LIM\nITE_CREDITO_PUIG\" ,\"COD_REPRES\" ,\"COD_CLIENTE_TEP\" ,\"EDI_MERCADOR\" ,\"TIPO_NF\" ,\"\nBONIFIC_BALCAO\" FROM \"FTCLCR00\" ORDER BY \"EMP\" ASC , \"FIL\" \nASC , \"TIPO_CADASTRO\" ASC , \"CODIGO\" ASC\n\n \nPostgreSQL:\n\n- select relname, nspname, relkind from pg_catalog.pg_class, \npg_catalog.pg_namespace where relkind in ('r', 'v') and nspname like 'vendas' \nand relname like 'ftcofi00' and relname !~ '^pg_|^dd_' and pg_namespace.oid = \nrelnamespace order by nspname, relname\n\n- select u.nspname, c.relname, a.attname, a.atttypid, t.typname,a.attnum, \na.attlen, a.atttypmod, a.attnotnull, c.relhasrules, c.relkind from \npg_catalog.pg_namespace u, pg_catalog.pg_class c, pg_catalog.pg_attribute a, \npg_catalog.pg_type t where u.oid = c.relnamespace and (not a.attisdropped) and \nc.oid= a.attrelid and a.atttypid = t.oid and (a.attnum > 0) and c.relname \nlike 'ftcofi00' and u.nspname like 'vendas' order by u.nspname, c.relname, \nattnum\n\n- select u.nspname, c.relname, a.attname, a.atttypid, t.typname,a.attnum, \na.attlen, a.atttypmod, a.attnotnull, c.relhasrules, c.relkind from \npg_catalog.pg_namespace u, pg_catalog.pg_class c, pg_catalog.pg_attribute a, \npg_catalog.pg_type t where u.oid = c.relnamespace and (not a.attisdropped) and \nc.oid= a.attrelid and a.atttypid = t.oid and (a.attnum > 0) and c.relname \n= 'ftcofi00'and u.nspname = 'vendas' order by u.nspname, c.relname, attnum\n\n- select c.relname, i.indkey, i.indisunique, i.indisclustered, a.amname, \nc.relhasrules, n.nspname from pg_catalog.pg_index i, pg_catalog.pg_class c, 
\npg_catalog.pg_class d, pg_catalog.pg_am a, pg_catalog.pg_namespace n where \nd.relname = 'ftcofi00' and n.nspname = 'vendas' and n.oid = d.relnamespace and \nd.oid = i.indrelid and i.indexrelid = c.oid and c.relam = a.oid order by \ni.indisprimary desc, i.indisunique, n.nspname, c.relname\n\n- \nSELECT \"emp\" ,\"fil\" ,\"codigo_fiscal\" ,\"codigo_fiscal_estorno\" ,\"descricao_fiscal\n\" ,\"codigo_operacional\" ,\"descricao_usuario\" ,\"cod_natipi\" ,\"cod_natibge\" ,\"text\no_nf1\" ,\"texto_nf2\" ,\"nf_normaldif\" ,\"nf_transfilial\" ,\"cod_filial\" ,\"cod_lancto\n_filial\" ,\"nf_exportacao_direta\" ,\"nf_exportacao_indireta\" ,\"nf_simpremessa\" ,\"n\nf_devolucao\" ,\"nf_entrada\" ,\"nf_reposicao\" ,\"nf_outraserie\" ,\"nf_consignacao\" ,\"\nnf_prodgratis\" ,\"nf_faturantecip\" ,\"nf_difbaseicm\" ,\"nf_dif_valoricm\" ,\"nf_difba\nseipi\" ,\"nf_difvaloripi\" ,\"nf_difpreco\" ,\"bloq_credito\" ,\"libera_credito\" ,\"ver_\nparam_vendas\" ,\"entra_cobranca\" ,\"basecalc_vlrbruto\" ,\"descnf_reficm\" ,\"aliqicm_\nigualest\" ,\"cod_tribicm\" ,\"cod_tribipi\" ,\"atual_estoque\" ,\"atual_fabricacao\" ,\"a\ntual_fatura\" ,\"atual_outentr\" ,\"atual_outsaida\" ,\"atual_tranfil\" ,\"atual_semiaca\nb\" ,\"atual_cartped\" ,\"atual_entrsaid\" ,\"rev_custmedio\" ,\"digitar_fisico\" ,\"digit\nar_financeiro\",\"usar_custo_cmu_informar\" ,\"grupo_faturamento\" ,\"tipo_nf\" ,\"resum\no_fiscal_codigo\" ,\"atual_distrib\" ,\n\"impr_obs_nf_reg_es\" ,\"difer_receita\" ,\"cod_lancto\" ,\"situacao\" \nFROM \"vendas\".\"ftcofi00\" ORDER BY \"emp\" ASC , \"fil\" ASC , \"codigo_fiscal\" ASC \n\n- select relname, nspname, relkind from pg_catalog.pg_class, \npg_catalog.pg_namespace where relkind in ('r', 'v') and nspname like 'vendas' \nand relname like 'ftclcr00' and relname !~ '^pg_|^dd_' and pg_namespace.oid = \nrelnamespace order by nspname, relname\n\n- select u.nspname, c.relname, a.attname, a.atttypid, t.typname,a.attnum, \na.attlen, a.atttypmod, a.attnotnull, c.relhasrules, c.relkind from \npg_catalog.pg_namespace u, pg_catalog.pg_class c, pg_catalog.pg_attribute a, \npg_catalog.pg_type t where u.oid = c.relnamespace and (not a.attisdropped) and \nc.oid= a.attrelid and a.atttypid = t.oid and (a.attnum > 0) and c.relname \nlike 'ftclcr00' and u.nspname like 'vendas' order by u.nspname, c.relname, \nattnum\n\n- select u.nspname, c.relname, a.attname, a.atttypid, t.typname,a.attnum, \na.attlen, a.atttypmod, a.attnotnull, c.relhasrules, c.relkind from \npg_catalog.pg_namespace u, pg_catalog.pg_class c, pg_catalog.pg_attribute a, \npg_catalog.pg_type t where u.oid = c.relnamespace and (not a.attisdropped) and \nc.oid= a.attrelid and a.atttypid = t.oid and (a.attnum > 0) and c.relname \n= 'ftclcr00'and u.nspname = 'vendas' order by u.nspname, c.relname, attnum\n\n- select c.relname, i.indkey, i.indisunique, i.indisclustered, a.amname, \nc.relhasrules, n.nspname from pg_catalog.pg_index i, pg_catalog.pg_class c, \npg_catalog.pg_class d, pg_catalog.pg_am a, pg_catalog.pg_namespace n where \nd.relname = 'ftclcr00' and n.nspname = 'vendas' and n.oid = d.relnamespace and \nd.oid = i.indrelid and i.indexrelid = c.oid and c.relam = a.oid order by \ni.indisprimary desc, i.indisunique, n.nspname, c.relname\n\n- \nSELECT \"emp\" ,\"fil\" ,\"tipo_cadastro\" ,\"codigo\" ,\"razao_social\",\"nome_fantasia\" ,\n\"emp_endereco\" ,\"emp_nro\" ,\"emp_complemento\" ,\"emp_bairro\" ,\"emp_cidade\" ,\"emp_e\nstado\" ,\"emp_cep\" ,\"emp_pais\",\"emp_ean\" ,\"cob_endereco\" ,\"cob_nro\" ,\"cob_complem\nento\" ,\"cob_bairro\" 
,\"cob_cidade\" ,\"cob_estado\" ,\"cob_cep\" ,\"cob_pais\" ,\"cob_ean\n\" ,\"ent_endereco\" ,\"ent_nro\" ,\"ent_complemento\" ,\"ent_bairro\" ,\"ent_cidade\" ,\"en\nt_estado\" ,\"ent_cep\" ,\"ent_pais\" ,\"ent_ean\" ,\"loja_ean\" ,\"telefone\" ,\"celular\" ,\n\"fax\" ,\"email\" ,\"site\" ,\"contato_nome\" ,\"contato_telefone\" ,\"contato_email\" ,\"co\nntato_ddmm_aniv\" ,\"situacao_cadastro\" ,\"observacoes\" ,\"data_cadastro\" ,\"data_alt\neracao\" ,\"tipo_contribuinte\" ,\"codigo_contribuinte\" ,\"tipo_inscricao\" ,\"codigo_i\nnscricao\" ,\"codigo_rede\" ,\"codigo_tipo_cliente\" ,\"codigo_grupo_cliente\" ,\"codigo\n_suframa\" ,\"data_validade_suframa\" ,\"limite_credito\" ,\"marca\" ,\"classe\" ,\"bandei\nra_cliente\" ,\"codigo_tipo_credor\" ,\"nome_representante\" ,\"tipo_condicao_pgto\" ,\"\nprazo_pgto_01\" ,\"prazo_pgto_02\" ,\"prazo_pgto_03\" ,\n\"codigo_moeda_compra\" ,\"fator_qualidade\" ,\"despesa_financeira\" ,\"codigo_darf\" ,\"\ncodigo_natureza_rendimento\" ,\"conta_corrente_banco\" ,\"conta_corrente_agencia\" ,\"\nconta_corrente_numero\" ,\"fornecedor_sulplastic\" ,\"suframa_trib_icm\" ,\"suframa_tr\nib_ipi\" ,\"conta_corrente_agenc_dc\" ,\"conta_corrente_num_dc\" ,\"conta_cor_forma_pa\ngto\" ,\"forma_credito\" ,\"senha\" ,\"limite_credito_puig\" ,\"cod_repres\" ,\"cod_client\ne_tep\" ,\"edi_mercador\" ,\"tipo_nf\" ,\"bonific_balcao\" FROM \"vendas\".\"ftclcr00\" \nORDER BY \"emp\" ASC , \"fil\" ASC , \"tipo_cadastro\" ASC , \"codigo\" ASC\n\n So , this snort generated a 3MB file for Oracle and it didn't request a \nbigger windows swap file but PostgreSQL generated a 153 MB file and I needed a \n700 MB windows swap file ( this is unacceptable !!!! ).\n I tried changing the ttables components to a SQL Query but Pg did it in \n49min an Oracle in 29min ( it looks like a index problem but there's no way to \nforce an index in Pqsql ).\n I don't know whate else to do , and I really want to use PgSQL instead of \nOracle but to do this I must PgSQL working in a compatibile time !\n Any suggestions ?\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n", "msg_date": "Fri, 12 Dec 2003 10:39:59 -0200", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC Driver generates a too big \"windows swap file\" and it's too slow" }, { "msg_contents": "On Fri, 12 Dec 2003, Rhaoni Chiu Pereira wrote:\n\n\nHi, is there a switch in your pgsql/odbc connector to enable cursors? If \nso, try turning that on.\n\n", "msg_date": "Fri, 12 Dec 2003 08:48:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Driver generates a too big \"windows swap file\" and" } ]
[ { "msg_contents": "> This is not very informative when you didn't show us the query nor\n> the table schemas..\n\n> BTW, what did you do with this, print and OCR it?\n\nTom,\n\nI work in a classified environment, so I had to sanitize the query plan, print \nit, and OCR it. I spent a lot of time fixing typos, but I guess at midnight my \neyes missed some. This hassle is why I posted neither the query nor the \nschema. The database is normalized, though, but my use of animal names of \ncouse masks this.\n\nIf you think that you or anyone else would invest the time, I could post more \ninfo.\n\nI will also try Shridhar's suggestions on statistics_target and \nenable_hash_join.\n\nThanks.\n-David\n", "msg_date": "Fri, 12 Dec 2003 07:46:36 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan - now what? " }, { "msg_contents": "David Shadovitz <[email protected]> writes:\n> If you think that you or anyone else would invest the time, I could post more\n> info.\n\nI doubt you will get any useful help if you don't post more info.\n\n> I will also try Shridhar's suggestions on statistics_target and \n> enable_hash_join.\n\nIt seemed to me that the row estimates were not so far off that I would\ncall it a statistical failure; you can try increasing the stats target\nbut I'm not hopeful about that. My guess is that you will have to look\nto revising either the query or the whole database structure (ie,\nmerging tables). We'll need the info I asked for before we can make\nany recommendations, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Dec 2003 11:52:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan - now what? " } ]
[ { "msg_contents": "Dear all ,\n\nI have created my tables without OIDS now my doubts are :\n1. Will this speed up the data insertion process\n2. Though I have not written any code in my any of the pgsql functions \nwhich depend on OIDS\n 1. Will without OIDS the functions behave internally differently\n 2. Will my application break at any point\n3. I decided to work with out OIDS because\n 1. It has a limit of -2147483648 to +2147483647\n 2 Due to this limitation I would not like to drop recreate my \ndatabase because it is a bit difficult/dirty process\n\nAll links and suggestion pertaining to OIDS are most welcome my mail box \nis at your disposal and dont hassitate to\ndrop a two line comment.\n-----------------------\nMy Sys Config:\nRH 9.0\nPostgreSQL 7.3.4\nGCC 3.2.2\nPHP 4.3.4\n----------------------\nRegards,\nV Kashyap\n\n\n\n", "msg_date": "Fri, 12 Dec 2003 21:43:10 +0530", "msg_from": "Sai Hertz And Control Systems <[email protected]>", "msg_from_op": true, "msg_subject": "Tables Without OIDS and its effect" }, { "msg_contents": "Sai Hertz And Control Systems <[email protected]> writes:\n> I have created my tables without OIDS now my doubts are :\n> 1. Will this speed up the data insertion process\n\nSlightly. It means that each inserted row will be 4 bytes smaller (on\ndisk), which in turn means you can fit more tuples on a page, and\ntherefore you'll need fewer pages and less disk space. However, I'd be\nsurprised if the performance improvement is very significant.\n\n> 2. Though I have not written any code in my any of the pgsql functions\n> which depend on OIDS\n> 1. Will without OIDS the functions behave internally differently\n> 2. Will my application break at any point\n\nNo.\n\nBTW, we intend to phase out the use of OIDs for user tables in the\nlong term. There have been a few threads on -hackers that discuss the\nplans for doing this.\n\n-Neil\n\n", "msg_date": "Fri, 12 Dec 2003 18:10:21 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Tables Without OIDS and its effect" }, { "msg_contents": "Hello Neil Conway,\n\nWe are doing some test on our applications and will let know the \ncommunity if without OIDS we could gain\nmore speed .\n\n>>2. Though I have not written any code in my any of the pgsql functions\n>>which depend on OIDS\n>> 1. Will without OIDS the functions behave internally differently\n>> 2. Will my application break at any point\n>> \n>>\n>\n>No.\n>\n>BTW, we intend to phase out the use of OIDs for user tables in the\n>long term. There have been a few threads on -hackers that discuss the\n>plans for doing this.\n> \n>\nThis was a relief for us all , but an the same time we have found one \nincompatibility\n\nThis incompatibility is with\n1. StarOffice 7.0\n2. 
OpenOffice 1.1\nand the incompatibility is when I retrieve data into Star SpreadSheet \nor Open Office SpreadSheet\nI am greeted with an error field *OID* not found.\nThough these both are connecting to PostgreSQL 7.3.4 (Linux GCC 3.x) \nvia psqlODBC 07.02.0003\nOn the Same time WinSQL connects as usual via psqlODBC 07.02.0003 and \nis working fine.\n\nThough this does not effect us a lot since we are using PHP to show \nand retrieve data\nWe are posting this such that any one relying totally on OpenOffice for \ndata retrieve and display\nbetter know this ,\n\nOur Test config was:\n--------------------------\nClient :-\nO.S Win XP (No service pack)\nOpenOffice 1.1 Windows version\nStarOffice 7.0 Eval Pack\npsqlODBC 07.02.0003\nServer :-\nOS RH 9.0 kernel-2.4.20-24.9\nPostgreSQL 7.3.4\n\nPlease if anyone has a different story while using WITHOUT OIDS \nplease let us and every one know .\n\n\nRegards,\nV Kashyap\n\n\n\n\n\n\n\n\n\n\n\nHello  Neil Conway,\n\nWe are doing some test on our applications and will let know the\ncommunity  if without OIDS we could gain \nmore speed .\n\n\n2. Though I have not written any code in my any of the pgsql functions\nwhich depend on OIDS\n 1. Will without OIDS the functions behave internally differently\n 2. Will my application break at any point\n \n\n\nNo.\n\nBTW, we intend to phase out the use of OIDs for user tables in the\nlong term. There have been a few threads on -hackers that discuss the\nplans for doing this.\n \n\nThis was a relief  for us all , but an the same time we have found one\nincompatibility\n\nThis incompatibility is with\n1.  StarOffice 7.0 \n2. OpenOffice 1.1 \nand the incompatibility is when I retrieve data into Star SpreadSheet\nor  Open Office SpreadSheet\nI am greeted with an error field OID  not found.\nThough these both are connecting to PostgreSQL  7.3.4 (Linux GCC 3.x)\nvia psqlODBC  07.02.0003 \nOn the Same time WinSQL connects as usual via psqlODBC  07.02.0003  and\nis working fine.\n\nThough  this does not  effect us a lot since we are using PHP to show\nand retrieve data\nWe are posting this such that any one relying totally on OpenOffice for\ndata  retrieve and display\nbetter know this , \n\nOur Test config was:\n--------------------------\nClient :- \nO.S           Win XP (No service pack)\nOpenOffice    1.1  Windows version\nStarOffice      7.0   Eval Pack\npsqlODBC 07.02.0003\nServer  :-\nOS                      RH 9.0 kernel-2.4.20-24.9\nPostgreSQL       7.3.4\n\nPlease if anyone has a different story  while using  WITHOUT  OIDS\nplease let us and every one know .\n\n\nRegards,\nV Kashyap", "msg_date": "Sat, 13 Dec 2003 16:03:37 +0530", "msg_from": "Sai Hertz And Control Systems <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Tables Without OIDS and its effect" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> BTW, we intend to phase out the use of OIDs for user tables in the\n> long term.\n\nI don't believe anyone has proposed removing the facility altogether.\nThere's a big difference between making the default behavior be not\nto have OIDs and removing the ability to have OIDs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Dec 2003 17:52:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Tables Without OIDS and its effect " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> I don't believe anyone has proposed removing the facility\n> altogether. 
There's a big difference between making the default\n> behavior be not to have OIDs and removing the ability to have OIDs.\n\nRight, that's what I had meant to say. Sorry for the inaccuracy.\n\n-Neil\n\n", "msg_date": "Sun, 14 Dec 2003 21:10:15 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Tables Without OIDS and its effect" } ]
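For anyone following along, the table form under discussion and a catalog check are sketched below; the table name is an invented example. The OpenOffice/StarOffice failure reported above is consistent with the front end keying rows on the hidden oid column, so tables those tools edit directly may need to keep OIDs (or at least expose an explicit primary key).

    CREATE TABLE orders (
        order_id   integer PRIMARY KEY,
        placed_on  timestamp
    ) WITHOUT OIDS;

    -- which user tables still carry the hidden oid column:
    SELECT relname, relhasoids
    FROM pg_class
    WHERE relkind = 'r'
      AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public');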
[ { "msg_contents": "Hi everyone,\nI found that performance get worse as the size of a given table \nincreases. I mean, for example I�ve just run some scripts shown in \n\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\nI understand that those scripts are designed to see the behavior of postgresql under different filesystems. However, since them generate\na lot of I/O activity, I think they can be used to adjust some \nconfiguration parameters. In that way, I increased the number of tuples inserted in the initial table to 2000000 and 3000000. What \nI saw is that the running time goes from 3 min., to 11 min. My question is, is it possible to use that test to tune \nsome parameters?, if the answer is yes, what parameters should I change to get shorter running times?\n\nThanks a lot\n\nNestor\n", "msg_date": "Fri, 12 Dec 2003 15:04:49 -0300 (GMT+3)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Performance related to size of tables" }, { "msg_contents": "If you want to speed up the elapsed times, then the first thing would be \nto attempt to reduce the IO using some indexes, e.g. on test1(anumber), \ntest2(anumber), test3((anumber%13)), test3((anumber%5)) and \ntest4((anumber%27))\n\nHowever if you wish to keep hammering the IO then the you would not use \nany indexes. However elapsed times for operations like:\n\nCREATE TABLE test4 AS SELECT ... FROM test1 JOIN test2 ON \ntest1.anumber=test2.anumber;\n\nare going to increase non linearly with the size of the source table \ntest1 (unless there are indexes on the anumber columns).\n\nI think this particular test is designed as a testbed for measuring IO \nperformance - as opposed to Postgresql performance.\n\n\nregards\n\nMark\n\[email protected] wrote:\n\n>Hi everyone,\n>I found that performance get worse as the size of a given table \n>increases. I mean, for example I�ve just run some scripts shown in \n>\n>http://www.potentialtech.com/wmoran/postgresql.php\n>\n>I understand that those scripts are designed to see the behavior of postgresql under different filesystems. However, since them generate\n>a lot of I/O activity, I think they can be used to adjust some \n>configuration parameters. In that way, I increased the number of tuples inserted in the initial table to 2000000 and 3000000. What \n>I saw is that the running time goes from 3 min., to 11 min. My question is, is it possible to use that test to tune \n>some parameters?, if the answer is yes, what parameters should I change to get shorter running times?\n>\n>\n> \n>\n\n", "msg_date": "Sat, 13 Dec 2003 10:11:29 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance related to size of tables" } ]
[ { "msg_contents": "Some arbitrary data processing job\n\nWAL on single drive: 7.990 rec/s\nWAL on 2nd IDE drive: 8.329 rec/s\nWAL on tmpfs: 13.172 rec/s\n\nA huge jump in performance but a bit scary having a WAL that can \ndisappear at any time. I'm gonna workup a rsync script and do some \npower-off experiments to see how badly it gets mangled.\n\nThis could be good method though when you're dumping and restore an \nentire DB. Make a tmpfs mount, restore, shutdown DB and then copy the \nWAL back to the HD.\n\nI checked out the SanDisk IDE FlashDrives. They have a write cycle life \nof 2 million. I'll explore more expensive solid state drives.\n\n", "msg_date": "Fri, 12 Dec 2003 14:02:03 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Update on putting WAL on ramdisk/" }, { "msg_contents": "> WAL on single drive: 7.990 rec/s\n> WAL on 2nd IDE drive: 8.329 rec/s\n> WAL on tmpfs: 13.172 rec/s\n>\n> A huge jump in performance but a bit scary having a WAL that can\n> disappear at any time. I'm gonna workup a rsync script and do some\n> power-off experiments to see how badly it gets mangled.\n\nSurely this is just equivalent to disabling fsync? If you put a WAL on a\nvolatile file system, there's not a whole lot of point in having one at all.\n\n--------------------------------------------------------------------\nRuss Garrett [email protected]\n http://last.fm\n\n", "msg_date": "Fri, 12 Dec 2003 22:36:26 -0000", "msg_from": "\"Russell Garrett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update on putting WAL on ramdisk/" }, { "msg_contents": "Russell Garrett wrote:\n>>WAL on single drive: 7.990 rec/s\n>>WAL on 2nd IDE drive: 8.329 rec/s\n>>WAL on tmpfs: 13.172 rec/s\n>>\n>>A huge jump in performance but a bit scary having a WAL that can\n>>disappear at any time. I'm gonna workup a rsync script and do some\n>>power-off experiments to see how badly it gets mangled.\n> \n> \n> Surely this is just equivalent to disabling fsync? If you put a WAL on a\n> volatile file system, there's not a whole lot of point in having one at all.\n\nThese tests were all with fsync off.\n\nAnd no, it's not equivalent to fsync off since the WAL is always written \nimmediately regardless of fsync setting.\n\n", "msg_date": "Fri, 12 Dec 2003 14:45:31 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update on putting WAL on ramdisk/" } ]
[ { "msg_contents": "Hello everyone.\nCan anyone explain why this table which has never had more than a couple rows in it shows > 500k in the query planner even after running vacuum full. Its terribly slow to return 2 rows of data. The 2 rows in it are being updated a lot but I couldn't find any explanation for this behavior. Anything I could try besides droping db and recreating? \nThanks - Russ\n \ntoolshed=# explain analyze select * from stock_log_positions ;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Seq Scan on stock_log_positions (cost=0.00..10907.77 rows=613577 width=22) (actual time=701.39..701.41 rows=2 loops=1)\n Total runtime: 701.54 msec\n(2 rows)\n \ntoolshed=# vacuum full analyze verbose stock_log_positions;\nINFO: --Relation public.stock_log_positions--\nINFO: Pages 4773: Changed 1, reaped 767, Empty 0, New 0; Tup 613737: Vac 57620, Keep/VTL 613735/613713, UnUsed 20652, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 4322596/4322596; EndEmpty/Avail. Pages 0/4773.\n CPU 9.11s/13.68u sec elapsed 22.94 sec.\nINFO: Index idx_stock_log_positions_when_log_filename: Pages 9465; Tuples 613737: Deleted 57620.\n CPU 1.55s/1.27u sec elapsed 6.69 sec.\nINFO: Rel stock_log_positions: Pages: 4773 --> 4620; Tuple(s) moved: 59022.\n CPU 1.00s/4.45u sec elapsed 8.83 sec.\nINFO: Index idx_stock_log_positions_when_log_filename: Pages 9778; Tuples 613737: Deleted 2897.\n CPU 1.32s/0.44u sec elapsed 6.23 sec.\nINFO: Analyzing public.stock_log_positions\nVACUUM\n \ntoolshed=# explain analyze select * from stock_log_positions ;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Seq Scan on stock_log_positions (cost=0.00..10757.37 rows=613737 width=22) (actual time=789.21..789.24 rows=2 loops=1)\n Total runtime: 789.40 msec\n(2 rows)\n \ntoolshed=# select * from stock_log_positions ;\n when_log | filename | position \n------------+--------------+----------\n 2003-12-11 | ActiveTrader | 0\n 2003-12-11 | Headlines | 0\n(2 rows)\n\n\n \nHello everyone.\nCan anyone explain why this table which has never \nhad more than a couple rows in it shows > 500k in the query planner even \nafter running vacuum full.  Its terribly slow to return 2 rows of \ndata.  The 2 rows in it are being updated a lot but I couldn't find any \nexplanation for this behavior.  Anything I could try besides droping db and \nrecreating?  \nThanks - Russ\n \ntoolshed=# explain analyze select * from stock_log_positions \n;                                                       \nQUERY \nPLAN                                                        \n------------------------------------------------------------------------------------------------------------------------- Seq \nScan on stock_log_positions  (cost=0.00..10907.77 rows=613577 width=22) \n(actual time=701.39..701.41 rows=2 loops=1) Total runtime: 701.54 \nmsec(2 rows)\n \ntoolshed=# vacuum full analyze verbose stock_log_positions;INFO:  \n--Relation public.stock_log_positions--INFO:  Pages 4773: Changed 1, \nreaped 767, Empty 0, New 0; Tup 613737: Vac 57620, Keep/VTL 613735/613713, \nUnUsed 20652, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 4322596/4322596; \nEndEmpty/Avail. Pages 0/4773.        CPU \n9.11s/13.68u sec elapsed 22.94 sec.INFO:  Index \nidx_stock_log_positions_when_log_filename: Pages 9465; Tuples 613737: Deleted \n57620.        
CPU 1.55s/1.27u sec elapsed \n6.69 sec.INFO:  Rel stock_log_positions: Pages: 4773 --> 4620; \nTuple(s) moved: 59022.        CPU \n1.00s/4.45u sec elapsed 8.83 sec.INFO:  Index \nidx_stock_log_positions_when_log_filename: Pages 9778; Tuples 613737: Deleted \n2897.        CPU 1.32s/0.44u sec elapsed \n6.23 sec.INFO:  Analyzing public.stock_log_positionsVACUUM\n \ntoolshed=# explain analyze select * from stock_log_positions \n;                                                       \nQUERY \nPLAN                                                        \n------------------------------------------------------------------------------------------------------------------------- Seq \nScan on stock_log_positions  (cost=0.00..10757.37 rows=613737 width=22) \n(actual time=789.21..789.24 rows=2 loops=1) Total runtime: 789.40 \nmsec(2 rows)\n \ntoolshed=# select * from stock_log_positions ;  when_log  \n|   filename   | position \n------------+--------------+---------- 2003-12-11 | ActiveTrader \n|        0 2003-12-11 | \nHeadlines    |        0(2 \nrows)", "msg_date": "Fri, 12 Dec 2003 14:40:28 -0800", "msg_from": "\"Chadwick, Russell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Excessive rows/tuples seriously degrading query performance" }, { "msg_contents": "Chadwick, Russell kirjutas L, 13.12.2003 kell 00:40:\n> \n> Hello everyone.\n> Can anyone explain why this table which has never had more than a\n> couple rows in it shows > 500k in the query planner even after running\n> vacuum full. Its terribly slow to return 2 rows of data. The 2 rows\n> in it are being updated a lot but I couldn't find any explanation for\n> this behavior. \n\nIt can be that there is an idle transaction somewhere that has locked a\nlot of rows (i.e. all your updates have been running inside the same\ntransaction for hour or days)\n\ntry: \n$ ps ax| grep post\n\non my linux box this gives\n\n 1683 ? S 0:00 /usr/bin/postmaster -p 5432\n 1704 ? S 0:00 postgres: stats buffer process\n 1705 ? S 0:00 postgres: stats collector process\n 5520 ? S 0:00 postgres: hu hannu [local] idle in transaction\n 5524 pts/2 S 0:00 grep post\n\nwhere backend 5520 seems to be the culprit.\n\n> Anything I could try besides droping db and recreating? \n\nmake sure that no other backend is connected to db and do your \n> vacuum full; analyze;\n\n\nor if there seems to be something unidentifieable making your table\nunusable, then just recreate that table:\n\nbegin;\ncreate table stock_log_positions_tmp \n as select * from stock_log_positions;\ndrop table stock_log_positions;\nalter table stock_log_positions_tmp\n rename to stock_log_positions;\n-- if you have any constraints, indexes or foreign keys\n-- then recreate them here as well\ncommit;\n\n> Thanks - Russ\n> \n---------------\nhannu\n\n", "msg_date": "Tue, 16 Dec 2003 23:24:45 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Excessive rows/tuples seriously degrading query" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Chadwick, Russell kirjutas L, 13.12.2003 kell 00:40:\n>> Can anyone explain why this table which has never had more than a\n>> couple rows in it shows > 500k in the query planner even after running\n>> vacuum full.\n\n> It can be that there is an idle transaction somewhere that has locked a\n> lot of rows (i.e. 
all your updates have been running inside the same\n> transaction for hour or days)\n\nIn fact an old open transaction is surely the issue, given that the\nVACUUM report shows a huge number of \"kept\" tuples:\n\n>> INFO: Pages 4773: Changed 1, reaped 767, Empty 0, New 0; Tup 613737: Vac 57620, Keep/VTL 613735/613713, UnUsed 20652, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 4322596/4322596; EndEmpty/Avail. Pages 0/4773.\n>> CPU 9.11s/13.68u sec elapsed 22.94 sec.\n\n\"Keep\" is the number of tuples that are committed dead but can't be\nremoved yet because there is some other open transaction that is old\nenough that it should be able to see them if it looks.\n\nApparently the access pattern on this table is constant updates of the\ntwo logical rows, leaving lots and lots of dead versions. You need to\nvacuum it more often to keep down the amount of deadwood, and you need\nto avoid having very-long-running transactions open when you vacuum.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Dec 2003 13:34:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Excessive rows/tuples seriously degrading query " } ]
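In practice the diagnosis above comes down to two habits: make sure no client sits idle in transaction for hours, and vacuum the hot little table far more often than the rest of the database. A sketch using the database and table names from this thread:

    $ ps ax | grep 'idle in transaction'    # find the backend keeping old rows visible
    # fix or disconnect that client, then reclaim the dead row versions:
    $ psql toolshed -c 'VACUUM ANALYZE stock_log_positions;'

    # keep it small with a frequent cron entry, e.g. every five minutes:
    */5 * * * *  psql toolshed -c 'VACUUM ANALYZE stock_log_positions;'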
[ { "msg_contents": "\nHi!\n\nWe have been running a rather busy website using pg 7.3 as the database.\nPeak hitrate is something like 120 request / second without images and\nother static stuff. The site is a sort of image gallery for IRC users.\n\nI evaluated pg 7.4 on our development server and it looked just fine\nbut performance with production loads seems to be quite poor. Most of\nperformance problems are caused by nonsensical query plans but there's\nalso some strange slowness that I can't locate.\n\n\nI have included the essential tables, columns and indexes that\nparticipate to queries in this mail.\n\ntable rows\n------- ----\nusers 50k\nimage 400k\ncomment 17M\n\n Table \"public.users\"\n Column | Type |\n-------------+-----------------------------+\n uid | integer |\n nick | character varying(40) |\n status | character(1) |\nIndexes:\n \"users_pkey\" primary key, btree (uid)\n \"users_upper_nick\" unique, btree (upper((nick)::text))\n \"users_status\" btree (status)\n\n Table \"public.image\"\n Column | Type |\n----------------------+-----------------------------+\n image_id | integer |\n uid | integer |\n status | character(1) |\nIndexes:\n \"image_pkey\" primary key, btree (image_id)\n \"image_uid_status\" btree (uid, status)\n\n Table \"public.comment\"\n Column | Type |\n------------+-----------------------------+\n comment_id | integer |\n image_id | integer |\n uid_sender | integer |\n comment | character varying(255) |\nIndexes:\n \"comment_pkey\" primary key, btree (comment_id)\n \"comment_image_id\" btree (image_id)\n \"comment_uid_sender\" btree (uid_sender)\n\n\n\nPlanner estimates the cost of nested loop to be much higher than\nhash join _although_ the other side of join consists of only one\nrow (which is found using a unique index). 
Well, bad estimation.\nDifference in runtime is huge.\n\n\ngalleria=# explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM users u INNER JOIN image i ON i.uid = u.uid WHERE upper(u.nick) = upper('Intuitio') AND (i.status = 'd' OR i.status = 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=21690.23..21694.22 rows=1595 width=64) (actual time=2015.615..2015.637 rows=35 loops=1)\n Sort Key: i.status, i.stamp\n -> Hash Join (cost=907.20..21605.38 rows=1595 width=64) (actual time=891.400..2015.464 rows=35 loops=1)\n Hash Cond: (\"outer\".uid = \"inner\".uid)\n -> Seq Scan on image i (cost=0.00..18207.19 rows=330005 width=54) (actual time=0.012..1607.278 rows=341086 loops=1)\n Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n -> Hash (cost=906.67..906.67 rows=213 width=14) (actual time=0.128..0.128 rows=0 loops=1)\n -> Index Scan using users_upper_nick on users u (cost=0.00..906.67 rows=213 width=14) (actual time=0.120..0.122 rows=1 loops=1)\n Index Cond: (upper((nick)::text) = 'INTUITIO'::text)\n Filter: (status = 'a'::bpchar)\n Total runtime: 2015.756 ms\n\ngalleria=# set enable_hashjoin = false;\nSET\n\ngalleria=# explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM users u INNER JOIN image i ON i.uid = u.uid WHERE upper(u.nick) = upper('Intuitio') AND (i.status = 'd' OR i.status = 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=31090.72..31094.71 rows=1595 width=64) (actual time=5.240..5.267 rows=35 loops=1)\n Sort Key: i.status, i.stamp\n -> Nested Loop (cost=0.00..31005.87 rows=1595 width=64) (actual time=4.474..5.082 rows=35 loops=1)\n -> Index Scan using users_upper_nick on users u (cost=0.00..906.67 rows=213 width=14) (actual time=3.902..3.906 rows=1 loops=1)\n Index Cond: (upper((nick)::text) = 'INTUITIO'::text)\n Filter: (status = 'a'::bpchar)\n -> Index Scan using image_uid_status on image i (cost=0.00..141.03 rows=23 width=54) (actual time=0.537..0.961 rows=35 loops=1)\n Index Cond: (i.uid = \"outer\".uid)\n Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n Total runtime: 5.479 ms\n(10 rows)\n\nIs there anything to do for this besides forcing hashjoin off?\nI think there were similar problems with 7.3\n\n\n\nNow something specific to 7.4.\n\nThe following query selects all comments written to user's image. 
It worked\njust fine with pg 7.3 but there seems to be a Materialize in a bit strange place.\n\ngalleria=# explain SELECT s.nick, c.comment, c.private, c.admin, c.parsable, c.uid_sender, c.stamp, i.image_id, c.comment_id FROM users s, comment c, image i WHERE s.uid = c.uid_sender AND s.status = 'a' AND c.visible = 'y' AND c.image_id = i.image_id AND i.image_id = 184239 ORDER BY c.comment_id DESC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Sort (cost=1338.43..1339.41 rows=392 width=92)\n Sort Key: c.comment_id\n -> Nested Loop (cost=1308.41..1321.54 rows=392 width=92)\n -> Index Scan using image_pkey on image i (cost=0.00..5.29 rows=2 width=4)\n Index Cond: (image_id = 184239)\n -> Materialize (cost=1308.41..1310.37 rows=196 width=92)\n -> Nested Loop (cost=0.00..1308.41 rows=196 width=92)\n -> Index Scan using comment_image_id on \"comment\" c (cost=0.00..60.68 rows=207 width=82)\n Index Cond: (184239 = image_id)\n Filter: (visible = 'y'::bpchar)\n -> Index Scan using users_pkey on users s (cost=0.00..6.02 rows=1 width=14)\n Index Cond: (s.uid = \"outer\".uid_sender)\n Filter: (status = 'a'::bpchar)\n\n\nHowever, when the joins are written in a different style the plan seems to be just right.\n\ngalleria=# explain SELECT u.nick, c.comment, c.private, c.admin, c.parsable, c.uid_sender, c.stamp, i.image_id, c.comment_id FROM image i INNER JOIN comment c ON c.image_id = i.image_id INNER JOIN users u ON u.uid = c.uid_sender WHERE c.visible = 'y' AND c.image_id = i.image_id AND i.image_id = 184239 ORDER BY c.comment_id DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Sort (cost=17.76..17.76 rows=1 width=92)\n Sort Key: c.comment_id\n -> Nested Loop (cost=0.00..17.75 rows=1 width=92)\n -> Nested Loop (cost=0.00..11.72 rows=1 width=82)\n -> Index Scan using image_pkey on image i (cost=0.00..5.29 rows=2 width=4)\n Index Cond: (image_id = 184239)\n -> Index Scan using comment_image_id on \"comment\" c (cost=0.00..3.20 rows=1 width=82)\n Index Cond: ((c.image_id = \"outer\".image_id) AND (184239 = c.image_id))\n Filter: (visible = 'y'::bpchar)\n -> Index Scan using users_pkey on users u (cost=0.00..6.01 rows=1 width=14)\n Index Cond: (u.uid = \"outer\".uid_sender)\n(11 rows)\n\n\n\nI happened to look into this query when one of them got stuck. Normally\npostgres performs tens of these in a second, but after shutting down\nthe web server one them was still running. I gathered some statistics\nand the runtime was something like half an hour! It was causing pretty\nmuch disk io but quite little cpu load. 
Don't know what it was doing...\n\n\ngalleria=# select * from pg_stat_activity where current_query != '<IDLE>';\n datid | datname | procpid | usesysid | usename | current_query | query_start\n-------+----------+---------+----------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------\n 17144 | galleria | 27849 | 100 | galleria | SELECT s.nick, c.comment, c.private, c.admin, c.parsable, c.uid_sender, c.stamp, i.image_id, c.comment_id FROM users s, comment c, image i WHERE s.uid = c.uid_sender AND s.status = 'a' AND c.visible = 'y' AND c.image_id = i.image_id AND i.image_id = 95406 | 2003-12-08 19:15:10.218859+02\n(1 row)\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ Command\n29756 tuner 25 0 1212 1212 800 R 42.7 0.0 2:51.03 top\n27849 postgres 15 0 783m 783m 780m D 6.2 20.0 0:55.86 postmaster\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 90912 551104 16180 3195612 0 0 2724 0 692 628 5 0 95 0\n 0 1 90912 548280 16192 3198464 0 0 2864 0 810 689 3 2 95 0\n 1 0 90912 545644 16192 3201068 0 0 2604 0 686 663 5 1 95 0\n 0 1 90912 542980 16192 3203712 0 0 2644 0 684 673 3 1 96 0\n 1 0 90912 540260 16220 3206480 0 0 2780 40 827 684 4 1 95 0\n 0 1 90912 537724 16224 3209032 0 0 2556 0 613 666 3 0 97 0\n 0 1 90912 534920 16224 3211840 0 0 2808 0 658 714 6 0 94 0\n 0 1 90912 532172 16224 3214596 0 0 2756 0 678 769 5 0 95 0\n\n\n\nThere's some other slowness with 7.4 as well. After running just fine for several\nhours pg starts to eat a LOT of cpu. Query plans are just like normally and\npg_stat_activity shows nothing special. Disconnecting and reconnecting all clients\nseems to help (restarting web server).\n\n\nhw/sw configuration is something like this:\nDual Xeon 2.8GHz with 4GB of memory. RAID is fast enuff.\nLinux 2.4 (Debian)\n\nPostgres is compiled using gcc 3.2, cflags: CFLAGS=-march=pentium4 -O3 -msse2 -mmmx\n\n\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n", "msg_date": "Sat, 13 Dec 2003 20:38:52 +0200 (EET)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "a lot of problems with pg 7.4" }, { "msg_contents": "On Sat, 13 Dec 2003, Kari Lavikka wrote:\n\n> I evaluated pg 7.4 on our development server and it looked just fine\n> but performance with production loads seems to be quite poor. Most of\n> performance problems are caused by nonsensical query plans but there's\n> also some strange slowness that I can't locate.\n\n\tI had the same problem. I use Fedora Core 1 and after I updated\nfrom 7.4RC1/7.4RC2 (I build my own RPMs) to 7.4 using the binary RPMs\nfrom a mirror site and sometimes I had to restart postmaster to make\nsomething work.\n\tI rebuilt the src.rpm from current rawhide (7.4-5) and now\neverything is ok. The guys from redhat/fedora also add some patches\n(rpm-pgsql-7.4.patch seems to be the most important, the rest seem to be\nfor a proper compile) but I didn't have the time to test if the loss of\nperformance is because in the original binary RPMs from postgresql.org\nthe patch(es) is(are) not present, because of the compiler and optflags\nused to build the RPMs are not chosed well or something else. 
I used gcc\n3.3.2 (from FC1 distro) and the following optflags:\n\n- On a P4 machine: optflags: i686 -O2 -g -march=pentium4 -msse2 -mfpmath=sse -fomit-frame-pointer -fforce-addr -fforce-mem -maccumulate-outgoing-args -finline-limit=2048\n\n- On a Celeron Tualatin: optflags: i686 -O2 -g -march=pentium3 -msse -mfpmath=sse -fomit-frame-pointer -fforce-addr -fforce-mem -maccumulate-outgoing-args -finline-limit=2048\n\n\tSo, if you use the original binaries from postgresql.org try to\nrecompile from sources setting CFLAGS and CXXFLAGS to proper values \n(maybe -msse2 -mfpmath=sse are not a good choice, you can try removing \nthem).\n\tIf not then review your postgresql configuration (buffers,\nmemory, page cost, etc), because 7.4 seems to be faster than 7.3 and \nthere is no reason for it to run slower on your system.\n\n-- \nAny views or opinions presented within this e-mail are solely those of\nthe author and do not necessarily represent those of any company, unless\notherwise expressly stated.\n", "msg_date": "Sun, 14 Dec 2003 11:54:41 +0200 (EET)", "msg_from": "Tarhon-Onu Victor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a lot of problems with pg 7.4" }, { "msg_contents": "On Sat, 13 Dec 2003, Kari Lavikka wrote:\n\n> I evaluated pg 7.4 on our development server and it looked just fine\n> but performance with production loads seems to be quite poor. Most of\n> performance problems are caused by nonsensical query plans\n\nSome of the estimates that pg made in the plans you showed was way off. I \nassume you have run VACUUM ANALYZE recently? If that does not help maybe \nyou need to increaste the statistics gathering on some columns so that pg \nmakes better estimates. With the wrong statistics it's not strange that pg \nchooses bad plans.\n\n-- \n/Dennis\n\n", "msg_date": "Sun, 14 Dec 2003 18:14:00 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a lot of problems with pg 7.4" }, { "msg_contents": "On Sat, 13 Dec 2003, Kari Lavikka wrote:\n\n> \n> Hi!\n> \n> We have been running a rather busy website using pg 7.3 as the database.\n> Peak hitrate is something like 120 request / second without images and\n> other static stuff. The site is a sort of image gallery for IRC users.\n> \n> I evaluated pg 7.4 on our development server and it looked just fine\n> but performance with production loads seems to be quite poor. Most of\n> performance problems are caused by nonsensical query plans but there's\n> also some strange slowness that I can't locate.\n\nHave you analyzed your database since putting the new data into it?\n\nAlso, you might need to increase your statistics target before analyzing \nto get proper results as well.\n\n", "msg_date": "Tue, 16 Dec 2003 09:37:28 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a lot of problems with pg 7.4" } ]
[ { "msg_contents": "Here are my query and schema. The ERD is at http://dshadovi.f2o.org/pg_erd.jpg \n(sorry about its resolution).\n-David\n\nSELECT\n zbr.zebra_name\n , dog.dog_name\n , mnk.monkey_name\n , wrm.abbreviation || ptr.abbreviation as abbrev2\n , whg.warthog_num\n , whg.color\n , rhn.rhino_name\n , der.deer_name\n , lin.designator\n , frg.frog_id\n , frg.sound_id\n , tgr.tiger_name\n , frg.leg_length\n , frg.jump_distance\nFROM\n frogs frg\n , deers der\n , warthogs whg\n , rhinos rhn\n , zebras zbr\n , dogs dog\n , monkeys mnk\n , worms wrm\n , parrots prt\n , giraffes grf\n , lions lin\n , tigers tgr\nWHERE 1 = 1\nAND frg.deer_id = der.deer_id\nAND whg.whg_id = frg.frg_id\nAND frg.rhino_id = rhn.rhino_id\nAND zbr.zebra_id = dog.zebra_id\nAND dog.dog_id = mky.dog_id\nAND mky.dog_id = whg.dog_id\nAND mky.monkey_num = whg.monkey_num\nAND whg.worm_id = wrm.worm_id\nAND whg.parrot_id = prt.parrot_id\nAND prt.beak = 'L'\nAND frg.frog_id = grf.frog_id\nAND grf.lion_id = lin.lion_id\nAND frg.tiger_id = tgr.tiger_id\n;\n\n\nCREATE TABLE zebras (\n zebra_id INTEGER NOT NULL,\n zebra_name VARCHAR(25),\n PRIMARY KEY (zebra_id),\n UNIQUE (zebra_name));\n\nCREATE TABLE dogs (\n zebra_id INTEGER NOT NULL,\n dog_id INTEGER NOT NULL,\n dog_name VARCHAR(25),\n FOREIGN KEY (zebra_id) REFERENCES zebras (zebra_id),\n PRIMARY KEY (dog_id),\n UNIQUE (dog_name, dog_num));\n\nCREATE TABLE monkeys (\n dog_id INTEGER NOT NULL,\n monkey_num INTEGER,\n monkey_name VARCHAR(25),\n PRIMARY KEY (dog_id, monkey_num),\n FOREIGN_KEY (dog_id) REFERENCES dogs (dog_id));\n\nCREATE INDEX mnk_dog_id_idx ON monkeys (dog_id);\nCREAIE INDEX mnk_mnk_num_idx ON monkeys (monkey_num);\n\nCREATE TABLE warthogs (\n warthog_id INTEGER NOT NULL,\n warthog_num INTEGER,\n color VARCHAR(25) NOT NULL,\n dog_id INTEGER NOT NULL,\n monkey_num INTEGER NOT NULL,\n parrot_id INTEGER,\n beak CHAR(l),\n worm_id INTEGER,\n PRIMARY KEY (warthog_id),\n FOREIGN KEY (parrot_id, beak) REFERENCES parrots (parrot_id, beak)\n FOREIGN KEY (dog_id, monkey_num) REFERENCES monkeys (dog_id, monkey_nun)\n FOREIGN KEY (worm_id) REFERENCES worms (worm_id));\n\nCREATE UNIQUE INDEX whg_whg_id_idx ON warthogs (warthog_id)\nCREATE INDEX whg_dog_id_idx ON warthogs (dog_id);\nCREATE INDEX whg_mnk_num_idx ON warthogs (monkey_num)\nCREATE INDEX whg_wrm_id_idx ON warthogs (worm_id);\nCREATE INDEX IDX_warthogs_1 ON warthogs (monkey_num, dog_id)\nCREATE INDEX lOX warthogs_2 ON warthogs (beak, parrot_id);\n\nCREATE TABLE worms (\n worm_id INTEGER NOT NULL,\n abbreviation CHAR(l),\n PRIMARY KEY worm_id));\n\nCREATE TABLE parrots (\n parrot_id INTEGER NOT NULL,\n beak CHAR(1) NOT NULL,\n abbreviation CHAR(1),\n PRIMARY KEY (parrot_id, beak));\n\nCREATE INDEX prt_prt_id_idx ON parrots (parrot_id)\nCREATE INDEX prt_beak_idx ON parrots (beak):\n\nCREATE TABLE deers (\n deer_id INTEGER NOT NULL,\n deer_name VARCHAR(40),\n PRIMARY KEY (deer_id));\n\nCREATE UNIQUE INDEX der_der_id_unq_idx ON deers (deer_id);\n\nCREATE TABLE rhinos (\n rhino_id INTEGER NOT NULL,\n rhino_name VARCHAR(255),\n CONSTRAINT rhn_rhn_name_unique UNIQUE,\n CONSTRAINT PK_rhn PRIMARY KEY (rhino_id));\n\nCREATE UNIQUE INDEX rhn_rhn_id_unq_idx ON rhinos (rhino_id);\n\nCREATE TABLE tigers (\n tiger_id INTEGER NOT NULL,\n tiger_name VARCHAR(255),\n PRIMARY KEY (tiger_id));\n\nCREATE UNIQUE INDEX tgr_tgr_id_unq_idx ON tigers (tiger_id);\n\nCREATE TABLE frogs (\n frog_id INTEGER NOT NULL,\n warthog_id INTEGER NOT NULL,\n rhino_id INTEGER NOT NULL,\n deer_id INTEGER NOT NULL,\n sound_id INTEGER,\n tiger_id 
INTEGER,\n leg_length VARCHAR(255),\n jump_distance VARCHAR(lOO),\n PRIMARY KEY (frog_id));\n\nALTER TABLE frogs ADD FOREIGN KEY (warthog_id) REFERENCES warthogs \n(warthog_id),\nALTER TABLE frogs ADD FOREIGN KEY (rhino_id) REFERENCES rhinos (rhino_id);\nALTER TABLE frogs ADD FOREIGN KEY (deer id) REFERENCES deers (deer_id)\nALTER TABLE frogs ADD FOREIGN KEY (sound_id) REFERENCES sounds (sound id);\nALTER TABLE frogs ADD FOREIGN KEY (tiger_id) REFERENCES tigers (tiger_id);\n\nCREATE UNIQUE INDEX frg_frg_id_unq_idx ON frogs (frog_id);\nCREATE UNIQUE INDEX frg_w_r_d_t_unq_idx ON frogs (warthog_id, rhino_id, \ndeer_id, tiger_id);\nCREATE INDEX frg_whg_id_idx ON frogs (warthog_id);\nCREATE INDEX frg rhn_id_idx ON frogs (rhino_id);\nCREATE INDEX frg_der_id_idx ON frogs (deer_id);\nCREATE INDEX frg_snd_id_idx ON frogs (sound_id);\nCREATE INDEX frg_tgr_id_idx ON frogs (tiger_id);\n\nCREATE TABLE lions (\n lion_id INTEGER NOT NULL,\n deer_id INTEGER,\n PRIMARY KEY (lion_id));\n\nCREATE UNIQUE INDEX lin_lin_id_unq_idx ON lions (lion_id);\n\nCREATE TABLE frogs_lions (\n frog_id INTEGER NOT NULL,\n lion_id INTEGER NOT NULL,\n PRIMARY KEY (frog_id, lion_id));\n\nALTER TABLE frogs_lions ADD FOREIGN KEY (lion_id) REFERENCES lions (lion_id);\nALTER TABLE frogs_lions ADD FOREIGN KEY (frog id) REFERENCES frogs (frog_id);\n\nCREATE UNIQUE INDEX frg_lin_frg_id_lin_id_unq_idx ON frogs_lions (frog_id, \nlion_id);\nCREATE INDEX frg_lin_lin_id_idx ON frogs_lions (lion_id);\nCREATE INDEX frg_lin_frg_id_idx ON frogs_lions (frog_id);\n\n\n", "msg_date": "Sat, 13 Dec 2003 23:21:36 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan - now what? " } ]
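One thing the posted query makes hard to see is the join order across twelve tables. On 7.3, explicit JOIN syntax fixes the join order outright, and 7.4 added join_collapse_limit to control the same behaviour, so rewriting the FROM list that way is a legitimate way to steer the planner. A partial sketch over a few of the animal tables follows; it assumes the OCR-garbled condition "whg.whg_id = frg.frg_id" was really a warthog_id = warthog_id join, as the schema suggests.

    SET join_collapse_limit = 1;   -- 7.4: keep the written JOIN order
    SELECT frg.frog_id, whg.warthog_num, der.deer_name, rhn.rhino_name
    FROM warthogs whg
      JOIN frogs   frg ON frg.warthog_id = whg.warthog_id
      JOIN deers   der ON der.deer_id    = frg.deer_id
      JOIN rhinos  rhn ON rhn.rhino_id   = frg.rhino_id
      JOIN parrots prt ON prt.parrot_id  = whg.parrot_id AND prt.beak = whg.beak
    WHERE prt.beak = 'L';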
[ { "msg_contents": "Hi, I am developing a program using postgres and linux like operating\nsystem. My problem is this:\nI have a quite complicated view with roughly 10000 record. When I execute a\nsimple query like this \n\t\"select * from myview\"\npostgres respond after 50 - 55 minutes roughly. I hope that someone can help\nme with some suggestion about reason of this behavior and some solution to\nreduce time ti have results. Thank you for your attentions and I hope to\nreceive some feedback as soon as possible\n\n\n\n\n\nPostgres respond after toomany times to a query view\n\n\nHi, I am developing a program using postgres and linux like operating system. My problem is this:\nI have a quite complicated view with roughly 10000 record. When I execute a simple query like this \n        \"select * from myview\"\npostgres respond after 50 - 55 minutes roughly. I hope that someone can help me with some suggestion about reason of this behavior and some solution to reduce time ti have results. Thank you for your attentions and I hope to receive some feedback as soon as possible", "msg_date": "Tue, 16 Dec 2003 16:40:03 +0100", "msg_from": "Claudia D'amato <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres respond after toomany times to a query view" } ]
[ { "msg_contents": "Hi-\n\nI'm trying to optimize a query that I *think* should run very fast.\nEssentially, I'm joining two tables that have very selective indexes and\nconstraining the query on an indexed field. (There's a third small lookup\ntable in the mix, but it doesn't really affect the bottom line.)\n\nactor is a table containing roughly 3 million rows with an index on\nactor_full_name_uppercase and a unique index on actor_id.\n\nactor_summary also contains roughly 3 million rows. Its PK is a unique\ncombined index on (actor_id, county_id, case_disp_global_code).\n\nThe vast majority of the rows in actor correspond to a single row in\nactor_summary I'd estimate this at 95% or more. The remaining actors with\nmultiple records generally have two corresponding rows in actor summary.\nActor summary was created as a performance enhancer, where we can store some\npre-calculated values such as the number of court cases an actor is involved\nin.\n\nThe constraint is applied first, with reasonable speed. In the example\nbelow, it takes about 15 seconds to gather the matches in actor.\n\nI'm unsure what is happening next. I notice that an index scan is occurring\non actor_summary_pk, with an \"actual time\" of 9.15, but then it looks like a\nnested loop occurs at the next level to join these tables. Does this mean\nthat each probe of the actor_summary index will take 9.15 msec, but the\nnested loop is going to do this once for each actor_id?\n\nThe nested loop appears to be where most of my time is going, so I'm\nfocusing on this area, but don't know if there is a better approach to this\njoin.\n\nIs there a more efficient means than a nested loop to handle such a join?\nWould a different method be chosen if there was exactly one row in\nactor_summary for every row in actor?\n\n-Nick\n\nThe query & explain analyze:\n\n\nalpha=#\nalpha=#\nalpha=# explain analyze\nalpha-# select\nalpha-# min(actor.actor_id) as actor_id,\nalpha-# min(actor.actor_entity_type) as actor_entity_type,\nalpha-# min(actor.role_class_code) as role_class_code,\nalpha-# min(actor.actor_full_name) as actor_full_name,\nalpha-# min(actor.actor_person_date_of_birth) as\nactor_person_date_of_birth,\nalpha-# min(actor.actor_entity_acronym) as actor_entity_acronym,\nalpha-# min(actor.actor_person_last_name) as actor_person_last_name,\nalpha-# min(actor.actor_person_first_name) as actor_person_first_name,\nalpha-# min(actor.actor_person_middle_name) as actor_person_middle_name,\nalpha-# min(actor.actor_person_name_suffix) as actor_person_name_suffix,\nalpha-# min(actor.actor_person_place_of_birth) as\nactor_person_place_of_birth,\nalpha-# min(actor.actor_person_height) as actor_person_height,\nalpha-# min(actor.actor_person_height_unit) as actor_person_height_unit,\nalpha-# min(actor.actor_person_weight) as actor_person_weight,\nalpha-# min(actor.actor_person_weight_unit) as actor_person_weight_unit,\nalpha-# min(actor.actor_person_ethnicity) as actor_person_ethnicity,\nalpha-# min(actor.actor_person_citizenship_count) as\nactor_person_citizenship_count,\nalpha-# min(actor.actor_person_hair_color) as actor_person_hair_color,\nalpha-# min(actor.actor_person_scars_marks_tatto) as\nactor_person_scars_marks_tatto,\nalpha-# min(actor.actor_person_marital_status) as\nactor_person_marital_status,\nalpha-# min(actor.actor_alias_for_actor_id) as actor_alias_for_actor_id,\nalpha-# min(to_char(data_source.source_last_update, 'MM/DD/YYYY HH12:MI\nAM TZ')) as last_update,\nalpha-# min(actor_summary.single_case_public_id) as 
case_public_id,\nalpha-# min(actor_summary.single_case_id) as case_id,\nalpha-# sum(actor_summary.case_count)as case_count\nalpha-# from\nalpha-# actor,\nalpha-# actor_summary,\nalpha-# data_source\nalpha-# where\nalpha-# actor.actor_id = actor_summary.actor_id\nalpha-# and data_source.source_id = actor.source_id\nalpha-# and actor_full_name_uppercase like upper('sanders%')\nalpha-# group by\nalpha-# actor.actor_id\nalpha-# order by\nalpha-# min(actor.actor_full_name_uppercase),\nalpha-# case_count desc,\nalpha-# min(actor_summary.case_disp_global_code)\nalpha-# limit\nalpha-# 1000\nalpha-# ;\n\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Limit (cost=2555.58..2555.59 rows=1 width=547) (actual\ntime=48841.76..48842.90 rows=1000 loops=1)\n -> Sort (cost=2555.58..2555.59 rows=1 width=547) (actual\ntime=48841.76..48842.18 rows=1001 loops=1)\n Sort Key: min((actor.actor_full_name_uppercase)::text),\nsum(actor_summary.case_count),\nmin((actor_summary.case_disp_global_code)::text)\n -> Aggregate (cost=2555.50..2555.57 rows=1 width=547) (actual\ntime=48604.17..48755.28 rows=3590 loops=1)\n -> Group (cost=2555.50..2555.50 rows=1 width=547) (actual\ntime=48604.04..48647.91 rows=3594 loops=1)\n -> Sort (cost=2555.50..2555.50 rows=1 width=547)\n(actual time=48604.01..48605.70 rows=3594 loops=1)\n Sort Key: actor.actor_id\n -> Nested Loop (cost=1.14..2555.49 rows=1\nwidth=547) (actual time=69.09..48585.83 rows=3594 loops=1)\n -> Hash Join (cost=1.14..900.39 rows=204\nwidth=475) (actual time=46.92..15259.02 rows=3639 loops=1)\n Hash Cond: (\"outer\".source_id =\n\"inner\".source_id)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..895.04 rows=222 width=463)\n(actual time=46.54..15220.77 rows=3639 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Hash (cost=1.11..1.11 rows=11\nwidth=12) (actual time=0.05..0.05 rows=0 loops=1)\n -> Seq Scan on data_source\n(cost=0.00..1.11 rows=11 width=12) (actual time=0.02..0.04 rows=11 loops=1)\n -> Index Scan using actor_summary_pk on\nactor_summary (cost=0.00..8.11 rows=1 width=72) (actual time=9.14..9.15\nrows=1 loops=3639)\n Index Cond: (\"outer\".actor_id =\nactor_summary.actor_id)\n Total runtime: 48851.85 msec\n(18 rows)\n\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n", "msg_date": "Tue, 16 Dec 2003 12:06:20 -0500", "msg_from": "\"Nick Fankhauser - Doxpop\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop question" }, { "msg_contents": "On Tuesday 16 December 2003 17:06, Nick Fankhauser - Doxpop wrote:\n> Hi-\n>\n> I'm trying to optimize a query that I *think* should run very fast.\n> Essentially, I'm joining two tables that have very selective indexes and\n> constraining the query on an indexed field. (There's a third small lookup\n> table in the mix, but it doesn't really affect the bottom line.)\n\n> I'm unsure what is happening next. I notice that an index scan is occurring\n> on actor_summary_pk, with an \"actual time\" of 9.15, but then it looks like\n> a nested loop occurs at the next level to join these tables. 
Does this mean\n> that each probe of the actor_summary index will take 9.15 msec, but the\n> nested loop is going to do this once for each actor_id?\n\nThat's right - you need to multiply the actual time by the number of loops. In \nyour case this would seem to be about 33 seconds.\n\n> -> Index Scan using actor_summary_pk on\n> actor_summary (cost=0.00..8.11 rows=1 width=72) (actual time=9.14..9.15\n> rows=1 loops=3639)\n> Index Cond: (\"outer\".actor_id =\n> actor_summary.actor_id)\n\n> The nested loop appears to be where most of my time is going, so I'm\n> focusing on this area, but don't know if there is a better approach to this\n> join.\n>\n> Is there a more efficient means than a nested loop to handle such a join?\n> Would a different method be chosen if there was exactly one row in\n> actor_summary for every row in actor?\n\nHmm - tricky to say in your case. PG has decided to filter on actor then look \nup the corresponding values in actor_summary. Given that you have 3 million \nrows in both tables that seems a reasonable approach. You could always try \nforcing different plans by switching the various ENABLE_HASHJOIN etc options \n(see the runtime configuration section of the manuals). I'm not sure that \nwill help you here though.\n\nThe fact that it's taking you 9ms to do each index lookup suggests to me that \nit's going to disk each time. Does that sound plausible, or do you think you \nhave enough RAM to cache your large indexes?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 17 Dec 2003 09:52:15 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop question" }, { "msg_contents": "\n> The fact that it's taking you 9ms to do each index lookup\n> suggests to me that\n> it's going to disk each time. Does that sound plausible, or do\n> you think you\n> have enough RAM to cache your large indexes?\n\nI'm sure we don't have enough RAM to cache all of our large indexes, so your\nsupposition makes sense. We have 1GB on this machine. In responding to the\nperformance problems we're having, one of the questions has been adding\nmemory vs crafting \"helper\" tables to speed things up. The issue is that\nthis database needs to be able to scale easily to about 10 times the size,\nso although we could easily triple the memory at reasonable expense, we'd\neventually hit a wall.\n\nIs there any solid method to insure that a particular index always resides\nin memory? A hybrid approach that might scale reliably would be to bump up\nour memory and then make sure key indexes are cached. however, I'm concerned\nthat if we didn't have a way to ensure that the indexes that we choose\nremain cached, we would have very inconsistent responses.\n\nThanks for your ideas!\n\n-Nick\n\n\n\n", "msg_date": "Wed, 17 Dec 2003 10:26:19 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop question" } ]
[ { "msg_contents": "Hi-\n\nI'm trying to optimize a query that I *think* should run very fast.\nEssentially, I'm joining two tables that have very selective indexes and\nconstraining the query on an indexed field. (There's a third small lookup\ntable in the mix, but it doesn't really affect the bottom line.)\n\nactor is a table containing roughly 3 million rows with an index on\nactor_full_name_uppercase and a unique index on actor_id.\n\nactor_summary also contains roughly 3 million rows. Its PK is a unique\ncombined index on (actor_id, county_id, case_disp_global_code).\n\nThe vast majority of the rows in actor correspond to a single row in\nactor_summary I'd estimate this at 95% or more. The remaining actors with\nmultiple records generally have two corresponding rows in actor summary.\nActor summary was created as a performance enhancer, where we can store some\npre-calculated values such as the number of court cases an actor is involved\nin.\n\nThe constraint is applied first, with reasonable speed. In the example\nbelow, it takes about 15 seconds to gather the matches in actor.\n\nI'm unsure what is happening next. I notice that an index scan is occurring\non actor_summary_pk, with an \"actual time\" of 9.15, but then it looks like a\nnested loop occurs at the next level to join these tables. Does this mean\nthat each probe of the actor_summary index will take 9.15 msec, but the\nnested loop is going to do this once for each actor_id?\n\nThe nested loop appears to be where most of my time is going, so I'm\nfocusing on this area, but don't know if there is a better approach to this\njoin.\n\nIs there a more efficient means than a nested loop to handle such a join?\nWould a different method be chosen if there was exactly one row in\nactor_summary for every row in actor?\n\n-Nick\n\nThe query & explain analyze:\n\n\nalpha=#\nalpha=#\nalpha=# explain analyze\nalpha-# select\nalpha-# min(actor.actor_id) as actor_id,\nalpha-# min(actor.actor_entity_type) as actor_entity_type,\nalpha-# min(actor.role_class_code) as role_class_code,\nalpha-# min(actor.actor_full_name) as actor_full_name,\nalpha-# min(actor.actor_person_date_of_birth) as\nactor_person_date_of_birth,\nalpha-# min(actor.actor_entity_acronym) as actor_entity_acronym,\nalpha-# min(actor.actor_person_last_name) as actor_person_last_name,\nalpha-# min(actor.actor_person_first_name) as actor_person_first_name,\nalpha-# min(actor.actor_person_middle_name) as actor_person_middle_name,\nalpha-# min(actor.actor_person_name_suffix) as actor_person_name_suffix,\nalpha-# min(actor.actor_person_place_of_birth) as\nactor_person_place_of_birth,\nalpha-# min(actor.actor_person_height) as actor_person_height,\nalpha-# min(actor.actor_person_height_unit) as actor_person_height_unit,\nalpha-# min(actor.actor_person_weight) as actor_person_weight,\nalpha-# min(actor.actor_person_weight_unit) as actor_person_weight_unit,\nalpha-# min(actor.actor_person_ethnicity) as actor_person_ethnicity,\nalpha-# min(actor.actor_person_citizenship_count) as\nactor_person_citizenship_count,\nalpha-# min(actor.actor_person_hair_color) as actor_person_hair_color,\nalpha-# min(actor.actor_person_scars_marks_tatto) as\nactor_person_scars_marks_tatto,\nalpha-# min(actor.actor_person_marital_status) as\nactor_person_marital_status,\nalpha-# min(actor.actor_alias_for_actor_id) as actor_alias_for_actor_id,\nalpha-# min(to_char(data_source.source_last_update, 'MM/DD/YYYY HH12:MI\nAM TZ')) as last_update,\nalpha-# min(actor_summary.single_case_public_id) as 
case_public_id,\nalpha-# min(actor_summary.single_case_id) as case_id,\nalpha-# sum(actor_summary.case_count)as case_count\nalpha-# from\nalpha-# actor,\nalpha-# actor_summary,\nalpha-# data_source\nalpha-# where\nalpha-# actor.actor_id = actor_summary.actor_id\nalpha-# and data_source.source_id = actor.source_id\nalpha-# and actor_full_name_uppercase like upper('sanders%')\nalpha-# group by\nalpha-# actor.actor_id\nalpha-# order by\nalpha-# min(actor.actor_full_name_uppercase),\nalpha-# case_count desc,\nalpha-# min(actor_summary.case_disp_global_code)\nalpha-# limit\nalpha-# 1000\nalpha-# ;\n\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Limit (cost=2555.58..2555.59 rows=1 width=547) (actual\ntime=48841.76..48842.90 rows=1000 loops=1)\n -> Sort (cost=2555.58..2555.59 rows=1 width=547) (actual\ntime=48841.76..48842.18 rows=1001 loops=1)\n Sort Key: min((actor.actor_full_name_uppercase)::text),\nsum(actor_summary.case_count),\nmin((actor_summary.case_disp_global_code)::text)\n -> Aggregate (cost=2555.50..2555.57 rows=1 width=547) (actual\ntime=48604.17..48755.28 rows=3590 loops=1)\n -> Group (cost=2555.50..2555.50 rows=1 width=547) (actual\ntime=48604.04..48647.91 rows=3594 loops=1)\n -> Sort (cost=2555.50..2555.50 rows=1 width=547)\n(actual time=48604.01..48605.70 rows=3594 loops=1)\n Sort Key: actor.actor_id\n -> Nested Loop (cost=1.14..2555.49 rows=1\nwidth=547) (actual time=69.09..48585.83 rows=3594 loops=1)\n -> Hash Join (cost=1.14..900.39 rows=204\nwidth=475) (actual time=46.92..15259.02 rows=3639 loops=1)\n Hash Cond: (\"outer\".source_id =\n\"inner\".source_id)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..895.04 rows=222 width=463)\n(actual time=46.54..15220.77 rows=3639 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Hash (cost=1.11..1.11 rows=11\nwidth=12) (actual time=0.05..0.05 rows=0 loops=1)\n -> Seq Scan on data_source\n(cost=0.00..1.11 rows=11 width=12) (actual time=0.02..0.04 rows=11 loops=1)\n -> Index Scan using actor_summary_pk on\nactor_summary (cost=0.00..8.11 rows=1 width=72) (actual time=9.14..9.15\nrows=1 loops=3639)\n Index Cond: (\"outer\".actor_id =\nactor_summary.actor_id)\n Total runtime: 48851.85 msec\n(18 rows)\n\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n", "msg_date": "Tue, 16 Dec 2003 12:11:59 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop performance" }, { "msg_contents": "On Tue, Dec 16, 2003 at 12:11:59PM -0500, Nick Fankhauser wrote:\n> \n> I'm trying to optimize a query that I *think* should run very fast.\n> Essentially, I'm joining two tables that have very selective indexes and\n> constraining the query on an indexed field. (There's a third small lookup\n> table in the mix, but it doesn't really affect the bottom line.)\n> \n> actor is a table containing roughly 3 million rows with an index on\n> actor_full_name_uppercase and a unique index on actor_id.\n> \n> actor_summary also contains roughly 3 million rows. 
Its PK is a unique\n> combined index on (actor_id, county_id, case_disp_global_code).\n\n...\n\n> I'm unsure what is happening next. I notice that an index scan is occurring\n> on actor_summary_pk, with an \"actual time\" of 9.15, but then it looks like a\n> nested loop occurs at the next level to join these tables. Does this mean\n> that each probe of the actor_summary index will take 9.15 msec, but the\n> nested loop is going to do this once for each actor_id?\n\n...\n\n> Is there a more efficient means than a nested loop to handle such a join?\n> Would a different method be chosen if there was exactly one row in\n> actor_summary for every row in actor?\n\nIt seems that your basic problem is that you're fetching lots of rows\nfrom two big ol' tables. The innermost estimation mistake being made\nby the planner is that the restriction on actor_full_name_uppercase\nwill be much more selective than it is; it thinks there will be 222\nmatching actors and in fact there are 3639. But being right about this\nwouldn't make things a lot quicker, if it would make them quicker at\nall; the index scan for them is taking about 15 seconds and presumably\na sequential scan of that table would be at least in the same ballpark.\n\nOnce it's got those rows it needs to look up matches for them in\nactor_summary. Again, that's 3639 index scans of an index into a\nwide-ish table; your interpretation of the 9.15 is correct. (9 ms *\n3639 rows =~ 30 seconds). \n\nIt doesn't seem to me that there would be a substantially better plan\nfor this query with your tables as they stand. If your data were more\nnormalised, then your big scans might be quicker (because their rows\nwould be smaller so they would hit fewer disk pages), and the extra\nlookups in your detail tables would only be done for the rows which\nactually ended up getting returned - but that would hardly be likely\nto make an order-of-magnitude difference to your overall speed.\n\nIf it were my query and I really really needed it to be considerably\nfaster, I'd think about hyper-normalising in the hope that my main\ntables would shrink so far I could keep them in RAM effectively all\nthe time. 
The answers to your direct questions are (1) yes, (2) no,\nnot really, and (3) no.\n\nRichard\n", "msg_date": "Tue, 16 Dec 2003 23:55:41 +0000", "msg_from": "Richard Poole <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop performance" }, { "msg_contents": "\nOn Tue, 16 Dec 2003, Nick Fankhauser wrote:\n\n> Is there a more efficient means than a nested loop to handle such a join?\n> Would a different method be chosen if there was exactly one row in\n> actor_summary for every row in actor?\n\nAs a question, what does explain analyze give you if you\nset enable_nestloop=false; before trying the query?\n", "msg_date": "Tue, 16 Dec 2003 17:17:26 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop performance" }, { "msg_contents": "\n> As a question, what does explain analyze give you if you\n> set enable_nestloop=false; before trying the query?\n\nHere are the results- It looks quite a bit more painful than the other plan,\nalthough the wall time is in the same ballpark.\n\nalpha=# explain analyze\nalpha-# select\nalpha-# min(actor.actor_id) as actor_id,\nalpha-# min(actor.actor_entity_type) as actor_entity_type,\nalpha-# min(actor.role_class_code) as role_class_code,\nalpha-# min(actor.actor_full_name) as actor_full_name,\nalpha-# min(actor.actor_person_date_of_birth) as\nactor_person_date_of_birth,\nalpha-# min(actor.actor_entity_acronym) as actor_entity_acronym,\nalpha-# min(actor.actor_person_last_name) as actor_person_last_name,\nalpha-# min(actor.actor_person_first_name) as actor_person_first_name,\nalpha-# min(actor.actor_person_middle_name) as actor_person_middle_name,\nalpha-# min(actor.actor_person_name_suffix) as actor_person_name_suffix,\nalpha-# min(actor.actor_person_place_of_birth) as\nactor_person_place_of_birth,\nalpha-# min(actor.actor_person_height) as actor_person_height,\nalpha-# min(actor.actor_person_height_unit) as actor_person_height_unit,\nalpha-# min(actor.actor_person_weight) as actor_person_weight,\nalpha-# min(actor.actor_person_weight_unit) as actor_person_weight_unit,\nalpha-# min(actor.actor_person_ethnicity) as actor_person_ethnicity,\nalpha-# min(actor.actor_person_citizenship_count) as\nactor_person_citizenship_count,\nalpha-# min(actor.actor_person_hair_color) as actor_person_hair_color,\nalpha-# min(actor.actor_person_scars_marks_tatto) as\nactor_person_scars_marks_tatto,\nalpha-# min(actor.actor_person_marital_status) as\nactor_person_marital_status,\nalpha-# min(actor.actor_alias_for_actor_id) as actor_alias_for_actor_id,\nalpha-# min(to_char(data_source.source_last_update, 'MM/DD/YYYY HH12:MI\nAM TZ')) as last_update,\nalpha-# min(actor_summary.single_case_public_id) as case_public_id,\nalpha-# min(actor_summary.single_case_id) as case_id,\nalpha-# sum(actor_summary.case_count)as case_count\nalpha-# from\nalpha-# actor,\nalpha-# actor_summary,\nalpha-# data_source\nalpha-# where\nalpha-# actor.actor_id = actor_summary.actor_id\nalpha-# and data_source.source_id = actor.source_id\nalpha-# and actor.actor_full_name_uppercase like upper('sanders%')\nalpha-# group by\nalpha-# actor.actor_id\nalpha-# order by\nalpha-# min(actor.actor_full_name_uppercase),\nalpha-# case_count desc,\nalpha-# min(actor_summary.case_disp_global_code)\nalpha-# limit\nalpha-# 1000;\n\nQUERY 
PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------\n----------------------\n Limit (cost=168919.98..168920.03 rows=20 width=548) (actual\ntime=91247.95..91249.05 rows=1000 loops=1)\n -> Sort (cost=168919.98..168920.03 rows=20 width=548) (actual\ntime=91247.95..91248.35 rows=1001 loops=1)\n Sort Key: min((actor.actor_full_name_uppercase)::text),\nsum(actor_summary.case_count),\nmin((actor_summary.case_disp_global_code)::text)\n -> Aggregate (cost=168904.95..168919.54 rows=20 width=548)\n(actual time=91015.00..91164.68 rows=3590 loops=1)\n -> Group (cost=168904.95..168905.95 rows=201 width=548)\n(actual time=90999.87..91043.25 rows=3594 loops=1)\n -> Sort (cost=168904.95..168905.45 rows=201\nwidth=548) (actual time=90999.83..91001.57 rows=3594 loops=1)\n Sort Key: actor.actor_id\n -> Hash Join (cost=903.08..168897.24 rows=201\nwidth=548) (actual time=25470.63..90983.45 rows=3594 loops=1)\n Hash Cond: (\"outer\".actor_id =\n\"inner\".actor_id)\n -> Seq Scan on actor_summary\n(cost=0.00..150715.43 rows=3455243 width=73) (actual time=8.03..52902.24\nrows=3455243 loops=1)\n -> Hash (cost=902.57..902.57 rows=204\nwidth=475) (actual time=25459.92..25459.92 rows=0 loops=1)\n -> Hash Join (cost=1.14..902.57\nrows=204 width=475) (actual time=155.92..25451.25 rows=3639 loops=1)\n Hash Cond: (\"outer\".source_id =\n\"inner\".source_id)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..897.20 rows=223 width=463)\n(actual time=144.93..25404.\n10 rows=3639 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::\ncharacter varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Hash (cost=1.11..1.11\nrows=11 width=12) (actual time=10.66..10.66 rows=0 loops=1)\n -> Seq Scan on\ndata_source (cost=0.00..1.11 rows=11 width=12) (actual time=10.63..10.64\nrows=11 loops=1)\n Total runtime: 91275.18 msec\n(19 rows)\n\nalpha=#\n\n\n", "msg_date": "Wed, 17 Dec 2003 10:26:20 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested loop performance" }, { "msg_contents": "> It seems that your basic problem is that you're fetching lots of rows\n> from two big ol' tables.\n\n> It doesn't seem to me that there would be a substantially better plan\n> for this query with your tables as they stand.\n\nThat's more or less the conclusion I had come to. I was just hoping someone\nelse could point out an approach I've been missing. (sigh!)\n\n\n\n> If your data were more\n> normalised, then your big scans might be quicker (because their rows\n> would be smaller so they would hit fewer disk pages),\n\nThis started off as a 5-table join on well-normalized data. Unfortunately,\nthe actor table doesn't get any smaller, and the work involved in\ncalculating the \"case_count\" information on the fly was clearly becoming a\nproblem- particularly with actors that had a heavy caseload. (Busy attorneys\nand judges.) The actor_summary approach makes these previous problem cases\ngo away, but the payback is that (as you correctly pointed out) queries on\naverage citizens who only have one case suffer from the de-normalized\napproach.\n\nWe're currently considering the approach of just returning all of the rows\nto our application, and doing the aggregation and limit work in the app. 
The\ninconsistency of the data makes it very tough for the query planner to come\nup with an strategy that is always a winner.\n\nThanks for your thoughts!\n\n-Nick\n\n\n", "msg_date": "Wed, 17 Dec 2003 10:26:25 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested loop performance" } ]
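One further experiment that follows from Richard Poole's analysis (the 15 seconds are mostly scattered heap fetches behind the actor_full_name_uppercase index scan): physically reorder the big table on that index so a name-range scan touches far fewer pages. This is only a sketch against the schema described above; CLUSTER rewrites the table under an exclusive lock and the ordering decays as new rows arrive, so it has to be repeated periodically.

-- 7.3+ syntax: CLUSTER indexname ON tablename, then refresh the statistics
CLUSTER actor_full_name_uppercase ON actor;
ANALYZE actor;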
[ { "msg_contents": "I have stored procedure written in pl/pgsql which takes about 13 seconds\nto finish. I was able to identify that the slowness is caused by one\nupdate SQL:\n\nUPDATE shopping_cart SET sc_sub_total=sc_subtotal, sc_date=now() \nWHERE sc_id=sc_id;\n\nIf I comment this sql out, the stored procedure returns within 1 second.\n\nWhat puzzles me is that if I execute the same update SQL in psql\ninterface, it returns very fast. The following is the explain analyze\noutput for that SQL. \n\n#>explain analyze UPDATE shopping_cart SET sc_sub_total=1, sc_date=now()\nwhere sc_id=260706;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using shopping_cart_pkey on shopping_cart (cost=0.00..5.01\nrows=1 width=144) (actual time=0.22..0.37 rows=1 loops=1)\n Index Cond: (sc_id = 260706::numeric)\n Total runtime: 1.87 msec\n(3 rows)\n\nIs it true that using pl/pgsql increases the overhead that much?\n\nTIA,\nJenny\n-- \nJenny Zhang\nOpen Source Development Lab\n12725 SW Millikan Way, Suite 400\nBeaverton, OR 97005\n(503)626-2455 ext 31\n\n\n", "msg_date": "Tue, 16 Dec 2003 15:52:41 -0800", "msg_from": "Jenny Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "update slows down in pl/pgsql function" }, { "msg_contents": "On Tue, 16 Dec 2003, Jenny Zhang wrote:\n\n> I have stored procedure written in pl/pgsql which takes about 13 seconds\n> to finish. I was able to identify that the slowness is caused by one\n> update SQL:\n>\n> UPDATE shopping_cart SET sc_sub_total=sc_subtotal, sc_date=now()\n> WHERE sc_id=sc_id;\n\nUmm, is that exactly the condition you're using? Isn't that going to\nupdate the entire table?\n", "msg_date": "Tue, 16 Dec 2003 15:54:34 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update slows down in pl/pgsql function" }, { "msg_contents": "Oops, I named the var name the same as the column name. Changing it to\nsomething else solved the problem.\n\nThanks,\nJenny\nOn Tue, 2003-12-16 at 15:54, Stephan Szabo wrote:\n> On Tue, 16 Dec 2003, Jenny Zhang wrote:\n> \n> > I have stored procedure written in pl/pgsql which takes about 13 seconds\n> > to finish. I was able to identify that the slowness is caused by one\n> > update SQL:\n> >\n> > UPDATE shopping_cart SET sc_sub_total=sc_subtotal, sc_date=now()\n> > WHERE sc_id=sc_id;\n> \n> Umm, is that exactly the condition you're using? Isn't that going to\n> update the entire table?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n", "msg_date": "Tue, 16 Dec 2003 16:07:26 -0800", "msg_from": "Jenny Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update slows down in pl/pgsql function" } ]
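The failure mode in this thread is easy to reproduce and worth a minimal illustration (the function below is hypothetical, not the poster's real code). Because the PL/pgSQL variable shadowed the column, "WHERE sc_id = sc_id" compared the variable with itself, was true for every row, and the UPDATE rewrote the whole table; renaming or prefixing the variables removes the ambiguity.

-- hypothetical sketch of the fix, in pre-8.0 ALIAS style
CREATE OR REPLACE FUNCTION set_cart_total(numeric, numeric)
RETURNS integer AS '
DECLARE
    p_sc_id     ALIAS FOR $1;   -- cannot collide with the sc_id column
    p_sub_total ALIAS FOR $2;
BEGIN
    UPDATE shopping_cart
       SET sc_sub_total = p_sub_total, sc_date = now()
     WHERE sc_id = p_sc_id;
    RETURN 1;
END;
' LANGUAGE plpgsql;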
[ { "msg_contents": "Neil,\n\nThanks for the good advice. I noticed that I had some sessions for which I \ncould not account, and I think even a 2nd postmaster running. It looks like \nI've cleaned everything up, and now I can VACUUM and I can DROP an index which \nwouldn't drop.\n\nAnd I'm looking into upgrading PostgreSQL.\n\n-David\n\nOn Tuesday, December 16, 2003 2:51 PM, Neil Conway [SMTP:[email protected]] \nwrote:\n> \"David Shadovitz\" <[email protected]> writes:\n> > I'm running PG 7.2.2 on RH Linux 8.0.\n>\n> Note that this version of PostgreSQL is quite old.\n>\n> > I'd like to know why \"VACUUM ANALYZE <table>\" is extemely slow (hours) for\n> > certain tables.\n>\n> Is there another concurrent transaction that has modified the table\n> but has not committed? VACUUM ANALYZE will need to block waiting for\n> it. You might be able to get some insight into this by examining the\n> pg_locks system view:\n>\n> http://www.postgresql.org/docs/current/static/monitoring-locks.html\n>\n> As well as the pg_stat_activity view.\n>\n> -Neil\n", "msg_date": "Tue, 16 Dec 2003 20:37:02 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is VACUUM ANALYZE <table> so slow?" } ]
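For anyone who hits the same symptom, a rough way to see what a stalled VACUUM is waiting on is to join the two views Neil mentions. This is only a sketch using the 7.3/7.4 column names (current_query shows up only if stats_command_string is enabled); rows with granted = f are the waiters.

SELECT l.pid, l.relation::regclass AS locked_relation, l.mode, l.granted,
       a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;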
[ { "msg_contents": "I backed up my database using pg_dump, and then restored it onto a different \nserver using psql. I see that the query \"SELECT COUNT(*) FROM myTable\" \nexecutes immediately on the new server but takes several seconds on the old \none. (The servers are identical.)\n\nWhat could account for this difference? Clustering? How can I get the \noriginal server to perform as well as the new one?\n\nThanks.\n-David\n", "msg_date": "Tue, 16 Dec 2003 20:42:58 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Why is restored database faster?" }, { "msg_contents": "David Shadovitz <[email protected]> writes:\n> What could account for this difference?\n\nLots of things -- disk fragmentation, expired tuples that aren't being\ncleaned up by VACUUM due to a long-lived transaction, the state of the\nkernel buffer cache, the configuration of the kernel, etc.\n\n> How can I get the original server to perform as well as the new one?\n\nWell, you can start by giving us some more information. For example,\nwhat is the output of VACUUM VERBOSE on the slow server? How much disk\nspace does the database directory take up on both machines?\n\n(BTW, \"SELECT count(*) FROM table\" isn't a particularly good DBMS\nperformance indication...)\n\n-Neil\n\n", "msg_date": "Wed, 17 Dec 2003 01:00:13 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is restored database faster?" }, { "msg_contents": "Neil Conway wrote:\n\n>>How can I get the original server to perform as well as the new one?\n\nWell, you have the answer. Dump the database, stop postmaster and restore it. \nThat should be faster than original one.\n\n> \n> (BTW, \"SELECT count(*) FROM table\" isn't a particularly good DBMS\n> performance indication...)\n\nParticularly in case of postgresql..:-)\n\n Shridhar\n", "msg_date": "Wed, 17 Dec 2003 12:01:06 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is restored database faster?" }, { "msg_contents": "On Tue, 16 Dec 2003, David Shadovitz wrote:\n\n> I backed up my database using pg_dump, and then restored it onto a different \n> server using psql. I see that the query \"SELECT COUNT(*) FROM myTable\" \n> executes immediately on the new server but takes several seconds on the old \n> one. (The servers are identical.)\n> \n> What could account for this difference? Clustering? How can I get the \n> original server to perform as well as the new one?\n\nYou probably need to run VACUUM FULL. It locks the tables during its \nexecution so only do it when the database is not in full use.\n\nIf this helps you probably need to do normal vacuums more often and maybe\ntune the max_fsm_pages to be bigger. \n\n-- \n/Dennis\n\n", "msg_date": "Wed, 17 Dec 2003 07:42:46 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is restored database faster?" } ]
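A less noisy comparison than timing count(*) is to look at the physical size of the table on both boxes and at how much of it is reclaimable; a quick sketch, with 'mytable' standing in for the real table name:

VACUUM VERBOSE ANALYZE mytable;            -- reports pages, tuples and unused space

SELECT relname, relpages, reltuples        -- relpages are 8 kB blocks, refreshed
FROM pg_class                              -- by VACUUM / ANALYZE
WHERE relname = 'mytable';

If relpages on the old server is far above the freshly restored copy for the same reltuples, the table is bloated, and VACUUM FULL (or a dump and reload) is what brings the scan time back down.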
[ { "msg_contents": "\n> Ideally that path isn't taken very often. But I'm currently having a\n> discussion off-list with a CMU student who seems to be seeing a case\n> where it happens a lot. (She reports that both WALWriteLock and\n> WALInsertLock are causes of a lot of process blockages, which seems to\n> mean that a lot of the WAL I/O is being done with both held, which would\n> have to mean that AdvanceXLInsertBuffer is doing the I/O. \n> More when we figure out what's going on exactly...)\n\nI would figure that this happens in a situation where a large transaction\nfills one XLInsertBuffer while a lot of WAL buffers are not yet written.\n\nAndreas\n", "msg_date": "Wed, 17 Dec 2003 17:57:52 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] fsync method checking" } ]
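If Andreas' reading is right, the window in which a backend must write WAL while still holding the insert lock can be narrowed by giving the WAL more buffer headroom. The postgresql.conf values below are purely illustrative, not measured recommendations for any particular workload (changing wal_buffers requires a postmaster restart).

wal_buffers = 64            # in 8 kB pages, default 8; helps large transactions
checkpoint_segments = 16    # default 3; spreads checkpoints out under heavy writes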
[ { "msg_contents": "Hi-\n\nAfter having done my best to squeeze better performance out of our\napplication by tuning within our existing resources, I'm falling back on\nadding memory as a short-term solution while we get creative for a long-term\nfix. I'm curious about what experiences others have had with the process of\nadding big chunks of RAM. In particular, if I'm trying to encourage the OS\nto cache more of my index information in RAM, what sort of configuration\nshould I do at both the PostgreSQL and OS level?\n\nIn a slightly off-topic vein, I'd also like to hear about it if anyone knows\nabout any gotchas at the OS level that might become a problem.\n\nThe server is a dual processor Athlon 1.2GHz box with hardware SCSI RAID. It\ncurrently has 1 GB RAM, and we're planning to add one GB more for a total of\n2GB. The OS is Debian Linux Kernel 2.4.x, and we're on PostgreSQL v7.3.2\n\nMy current memory related settings are:\n\nSHMMAX and SHMALL set to 128MB (OS setting)\nshared buffers 8192 (64MB)\nsort_mem 16384 (16MB)\neffective_cache_size 65536 (512MB)\n\n\nWe support up to 70 active users, sharing a connection pool of 16\nconnections. Most of the queries center around 3 tables that are about 1.5\nGB each.\n\n\nThanks.\n -Nick\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n", "msg_date": "Wed, 17 Dec 2003 14:57:02 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Adding RAM: seeking advice & warnings of hidden \"gotchas\" " }, { "msg_contents": "\nOn Dec 17, 2003, at 11:57 AM, Nick Fankhauser wrote:\n\n> Hi-\n>\n> After having done my best to squeeze better performance out of our\n> application by tuning within our existing resources, I'm falling back \n> on\n> adding memory as a short-term solution while we get creative for a \n> long-term\n> fix. I'm curious about what experiences others have had with the \n> process of\n> adding big chunks of RAM. In particular, if I'm trying to encourage \n> the OS\n> to cache more of my index information in RAM, what sort of \n> configuration\n> should I do at both the PostgreSQL and OS level?\n\nYou need bigmem compiled in the kernel, which you should already have \nat the 1 gig level iirc.\nYou should bump up your effective cache size, probably to around 1.75 \ngig.\n\nI wouldn't bump up the shared buffers beyond where you have them now. \nIf you're swapping out sorts to disk, you may gain boosting sortmem \nsome since you have the additional memory to use.\n\n> The server is a dual processor Athlon 1.2GHz box with hardware SCSI \n> RAID. It\n> currently has 1 GB RAM, and we're planning to add one GB more for a \n> total of\n> 2GB. The OS is Debian Linux Kernel 2.4.x, and we're on PostgreSQL \n> v7.3.2\n\nI've got a machine running Debian Stable w/2.4.x, 1.3 ghz p3, 1.5 gig \nram, pg 7.2.4 and it's rock solid.\n\n\neric\n\n", "msg_date": "Wed, 17 Dec 2003 13:38:44 -0800", "msg_from": "Eric Soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding RAM: seeking advice & warnings of hidden \"gotchas\" " }, { "msg_contents": "If you have 3 1.5GB tables then you might as well go for 4GB while you're at\nit. Make sure you've got a bigmem kernel either running or available, and\nboost effective_cache_size by whatever amount you increase the RAM by. 
We\nrun a Quad Xeon/4GB server on Redhat 7.3 and it's solid as a rock.\n\nThere is no way I know of to get indexes preferentially cached over data\nthough.\n\nMatt\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Nick\n> Fankhauser\n> Sent: 17 December 2003 19:57\n> To: Pgsql-Performance@Postgresql. Org\n> Subject: [PERFORM] Adding RAM: seeking advice & warnings of hidden\n> \"gotchas\"\n>\n>\n> Hi-\n>\n> After having done my best to squeeze better performance out of our\n> application by tuning within our existing resources, I'm falling back on\n> adding memory as a short-term solution while we get creative for\n> a long-term\n> fix. I'm curious about what experiences others have had with the\n> process of\n> adding big chunks of RAM. In particular, if I'm trying to encourage the OS\n> to cache more of my index information in RAM, what sort of configuration\n> should I do at both the PostgreSQL and OS level?\n>\n> In a slightly off-topic vein, I'd also like to hear about it if\n> anyone knows\n> about any gotchas at the OS level that might become a problem.\n>\n> The server is a dual processor Athlon 1.2GHz box with hardware\n> SCSI RAID. It\n> currently has 1 GB RAM, and we're planning to add one GB more for\n> a total of\n> 2GB. The OS is Debian Linux Kernel 2.4.x, and we're on PostgreSQL v7.3.2\n>\n> My current memory related settings are:\n>\n> SHMMAX and SHMALL set to 128MB (OS setting)\n> shared buffers 8192 (64MB)\n> sort_mem 16384 (16MB)\n> effective_cache_size 65536 (512MB)\n>\n>\n> We support up to 70 active users, sharing a connection pool of 16\n> connections. Most of the queries center around 3 tables that are about 1.5\n> GB each.\n>\n>\n> Thanks.\n> -Nick\n>\n> ---------------------------------------------------------------------\n> Nick Fankhauser\n>\n> [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\n> doxpop - Court records at your fingertips - http://www.doxpop.com/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n", "msg_date": "Wed, 17 Dec 2003 22:18:45 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding RAM: seeking advice & warnings of hidden \"gotchas\" " } ]
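Turning the advice above into numbers: effective_cache_size is counted in 8 kB pages, so if roughly 1.75 GB of the 2 GB box ends up as kernel cache, the arithmetic and the resulting settings look like this (illustrative values, not a prescription):

#   1.75 GB  =  1.75 * 1024 * 1024 kB / 8 kB  =  229376 pages
effective_cache_size = 229376   # ~1.75 GB expected in the kernel cache
shared_buffers = 8192           # left at 64 MB, as suggested above
sort_mem = 16384                # raise only if sorts visibly spill to disk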
[ { "msg_contents": "Dennis, Shridhar, and Neil,\n\nThanks for your input. Here are my responses:\n\nI ran VACUUM FULL on the table in question. Although that did reduce \"Pages\" \nand \"UnUsed\", the \"SELECT *\" query is still much slower on this installation \nthan in the new, restored one.\n\n Old server:\n # VACUUM FULL abc;\n VACUUM\n # VACUUM VERBOSE abc;\n NOTICE: --Relation abc--\n NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed 32.\n Total CPU 0.07s/0.52u sec elapsed 0.60 sec.\n VACUUM\n\n New server:\n # VACUUM VERBOSE abc;\n NOTICE: --Relation abc--\n NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.02s/0.00u sec elapsed 0.02 sec.\n VACUUM\n\nmax_fsm_pages is at its default value, 10000.\n\nPeople don't have the practice of dumping and restoring just for the purpose of \nimproving performance, do they?\n\nNeil asked how much disk space the database directory takes on each machine. \n What directory is of interest? The whole thing takes up about 875 MB on each \nmachine.\n\n-David \n", "msg_date": "Wed, 17 Dec 2003 19:54:45 -0800", "msg_from": "David Shadovitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is restored database faster?" }, { "msg_contents": "On Thursday 18 December 2003 09:24, David Shadovitz wrote:\n> Old server:\n> # VACUUM FULL abc;\n> VACUUM\n> # VACUUM VERBOSE abc;\n> NOTICE: --Relation abc--\n> NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed\n> 32. Total CPU 0.07s/0.52u sec elapsed 0.60 sec.\n> VACUUM\n>\n> New server:\n> # VACUUM VERBOSE abc;\n> NOTICE: --Relation abc--\n> NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed\n> 0. Total CPU 0.02s/0.00u sec elapsed 0.02 sec.\n> VACUUM\n>\n> max_fsm_pages is at its default value, 10000.\n\nWell, then the only issue left is file sytem defragmentation. Which file \nsystem is this anyway\n\n> People don't have the practice of dumping and restoring just for the\n> purpose of improving performance, do they?\n\nWell, at times it is required. Especially if it is update intensive \nenvironment. An no database is immune to that\n\n> Neil asked how much disk space the database directory takes on each\n> machine. What directory is of interest? The whole thing takes up about 875\n> MB on each machine.\n\nThat is fairly small.. Should not take much time..in my guess, the time it \ntakes to vacuum is more than time to dump and reload.\n\nAnother quick way to defragment a file system is to copy entire data directory \nto another partition(Shutdown postmaster first), delete it from original \npartition and move back. Contegous wriing to a partition results in \ndefragmentation effectively.\n\nTry it and see if it helps. It could be much less trouble than dump/restore..\n\nHTH\n\n Shridhar\n\n", "msg_date": "Thu, 18 Dec 2003 12:17:12 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is restored database faster?" }, { "msg_contents": "On Thu, 18 Dec 2003, Shridhar Daithankar wrote:\n\n> Well, then the only issue left is file sytem defragmentation.\n\nAnd the internal fragmentation that can be \"fixed\" with the CLUSTER \ncommand.\n\n-- \n/Dennis\n\n", "msg_date": "Thu, 18 Dec 2003 16:12:16 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is restored database faster?" } ]
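Since max_fsm_pages is still at its default here, one follow-up worth noting: routine (non-FULL) VACUUMs can only reuse the space the free space map remembers, so on an update-heavy database the map probably needs to grow. Illustrative postgresql.conf values only; size them from the reusable-page counts that VACUUM VERBOSE reports across all tables.

max_fsm_relations = 1000
max_fsm_pages = 50000       # default 10000; should exceed the total pages
                            # with reclaimable space reported by VACUUM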
[ { "msg_contents": "Hi,\nThis a kind of newbie-question. I've been using Postgres for a long time in a low transction environment - and it is great.\n\nNow I've got an inquiry for using Postgresql in a heavy-load on-line system. This system must handle something like 20 questions per sec with a response time at 1/10 sec. Each question will result in approx 5-6 reads and a couple of updates.\nAnybody have a feeling if this is realistic on a Intelbased Linux server with Postgresql. Ofcourse I know that this is too little info for an exact answer but - as I said - maybe someone can give a hint if it's possible. Maybe someone with heavy-load can give an example of what is possible...\n\nRegards\nConny Thimr�n\n\n", "msg_date": "Thu, 18 Dec 2003 18:04:52 +0100", "msg_from": "Conny Thimren <[email protected]>", "msg_from_op": true, "msg_subject": "general peformance question" }, { "msg_contents": "On Thu, 2003-12-18 at 12:04, Conny Thimren wrote:\n> Hi,\n> This a kind of newbie-question. I've been using Postgres for a long time in a low transction environment - and it is great.\n> \n> Now I've got an inquiry for using Postgresql in a heavy-load on-line system. This system must handle something like 20 questions per sec with a response time at 1/10 sec. Each question will result in approx 5-6 reads and a couple of updates.\n> Anybody have a feeling if this is realistic on a Intelbased Linux server with Postgresql. Ofcourse I know that this is too little info for an exact answer but - as I said - maybe someone can give a hint if it's possible. Maybe someone with heavy-load can give an example of what is possible...\n\nOk, is that 20 questions per second (20 in parallel taking 1 second\neach) or serialized taking 50ms each.\n\nAre they simple selects / updates (less than 10 rows in result set, very\nsimple joins) or are they monster 30 table join queries?\n\n", "msg_date": "Mon, 22 Dec 2003 15:45:24 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: general peformance question" }, { "msg_contents": "On Thu, 18 Dec 2003, Conny Thimren wrote:\n\n> Hi,\n> This a kind of newbie-question. I've been using Postgres for a long time in a low transction environment - and it is great.\n> \n> Now I've got an inquiry for using Postgresql in a heavy-load on-line system. This system must handle something like 20 questions per sec with a response time at 1/10 sec. Each question will result in approx 5-6 reads and a couple of updates.\n> Anybody have a feeling if this is realistic on a Intelbased Linux server with Postgresql. Ofcourse I know that this is too little info for an exact answer but - as I said - maybe someone can give a hint if it's possible. 
Maybe someone with heavy-load can give an example of what is possible...\n\nThat really depends on how heavy each query is, so it's hard to say from \nwhat little you've given us.\n\nIf you are doing simple banking style transactions, then you can easily \nhandle this load, if you are talking a simple shopping cart, ditto, if, \nhowever, you are talking about queries that run 4 or 5 tables with \nmillions of rows againts each other, you're gonna have to test it \nyourself.\n\nWith the autovacuum daemon running, I ran a test overnight of pgbench \n(more for general purpose burn in than anything else) \n\npgbench -i -s 100\npgbench -c 50 -t 250000\n\nthat's 10 million transactions, and it took just over twelve hours to \ncomplete at 220+ transactions per second.\n\nso, for financials, you're likely to find it easy to meet your target. \nBut as the tables get bigger / more complex / more interconnected you'll \nsee a drop in performance.\n\n", "msg_date": "Mon, 22 Dec 2003 15:30:56 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: general peformance question" } ]
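A back-of-the-envelope check of Conny's target against Scott's pgbench figure, assuming the 5-6 reads and the couple of updates are all simple, indexed statements:

    20 questions/sec  x  ~8 statements each  =  ~160 simple statements/sec
    pgbench above sustained ~220 transactions/sec, each bundling several statements

So the raw rate looks attainable on comparable hardware, with the caveat already made in the thread: a single heavy multi-table join per question changes the picture completely.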
[ { "msg_contents": "Hi,\n\nit seems to me that the optimizer parameters (like random_page_cost \netc.) could easily be calculated and adjusted dynamically be the DB \nbackend based on the planner's cost estimates and actual run times for \ndifferent queries. Perhaps the developers could comment on that?\n\nI'm not sure how the parameters are used internally (apart from whatever \n\"EXPLAIN\" shows), but if cpu_operator_cost is the same for all \noperators, this should probably also be adjusted for individual \noperators (I suppose that \">\" is not as costly as \"~*\").\n\nAs far as the static configuration is concerned, I'd be interested in \nother users' parameters and hardware configurations. Here's ours (for a \nwrite-intensive db that also performs many queries with regular \nexpression matching):\n\neffective_cache_size = 1000000 # typically 8KB each\n#random_page_cost = 0.2 # units are one sequential page fetch cost\nrandom_page_cost = 3 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\ncpu_index_tuple_cost = 0.01 # (same) 0.1\n#cpu_operator_cost = 0.0025 # (same)\ncpu_operator_cost = 0.025 # (same)\n\nother options:\n\nshared_buffers = 240000 # 2*max_connections, min 16, typically 8KB each\nmax_fsm_relations = 10000 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 10000000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 20 # min 10\nwal_buffers = 128 # min 4, typically 8KB each\nsort_mem = 800000 # min 64, size in KB\nvacuum_mem = 100000 # min 1024, size in KB\ncheckpoint_segments = 80 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\ncommit_delay = 100000 # range 0-100000, in microseconds\ncommit_siblings = 5 # range 1-1000\n\n12GB RAM, dual 2,80GHz Xeon, 6x 10K rpm disks in a RAID-5, Linux 2.4.23 \nwith HT enabled.\n\nRegards,\n Marinos\n\n", "msg_date": "Thu, 18 Dec 2003 19:44:49 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "why do optimizer parameters have to be set manually?" }, { "msg_contents": "[email protected] (\"Marinos J. Yannikos\") writes:\n> it seems to me that the optimizer parameters (like random_page_cost\n> etc.) could easily be calculated and adjusted dynamically be the DB\n> backend based on the planner's cost estimates and actual run times for\n> different queries. Perhaps the developers could comment on that?\n\nYes, it seems like a Small Matter Of Programming.\n\nhttp://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?SMOP\n\nIn seriousness, yes, it would seem a reasonable idea to calculate some\nof these values a bit more dynamically. \n\nI would be inclined to start with something that ran a workload, and\nprovided static values based on how that workload went. That would\nrequire NO intervention inside the DB server; it could be accomplished\nsimply by writing a database script. Feel free to contribute either a\nscript or a backend \"hack\"...\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Thu, 18 Dec 2003 14:17:43 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why do optimizer parameters have to be set manually?" }, { "msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> it seems to me that the optimizer parameters (like random_page_cost \n> etc.) 
could easily be calculated and adjusted dynamically be the DB \n> backend based on the planner's cost estimates and actual run times for \n> different queries. Perhaps the developers could comment on that?\n\nNo, they are not that easy to determine. In particular I think the idea\nof automatically feeding back error measurements is hopeless, because\nyou cannot tell which parameters are wrong.\n\n> I'm not sure how the parameters are used internally (apart from whatever \n> \"EXPLAIN\" shows), but if cpu_operator_cost is the same for all \n> operators, this should probably also be adjusted for individual \n> operators (I suppose that \">\" is not as costly as \"~*\").\n\nIn theory perhaps, but in practice this is far down in the noise in most\nsituations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Dec 2003 14:56:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why do optimizer parameters have to be set manually? " }, { "msg_contents": "It appears that the optimizer only uses indexes for = clause? \n\nDave\n\n", "msg_date": "18 Dec 2003 18:18:48 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "is it possible to get the optimizer to use indexes with a like clause" }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n\n>> It appears that the optimizer only uses indexes for = clause?\n>\n> The optimizer will used indexes for LIKE clauses, so long as the\n> clause is a prefix search, eg:\n>\n> SELECT * FROM test WHERE a LIKE 'prf%';\n\nDoesn't this still depend on your locale?\n\n-Doug\n\n", "msg_date": "Thu, 18 Dec 2003 20:38:13 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "> It appears that the optimizer only uses indexes for = clause? \n\nThe optimizer will used indexes for LIKE clauses, so long as the clause \nis a prefix search, eg:\n\nSELECT * FROM test WHERE a LIKE 'prf%';\n\nChris\n\n", "msg_date": "Fri, 19 Dec 2003 09:38:50 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "after vacuum verbose analyze, I still get\n\nexplain select * from isppm where item_upc_cd like '06038301234';\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on isppm (cost=100000000.00..100009684.89 rows=2 width=791)\n Filter: (item_upc_cd ~~ '06038301234'::text)\n(2 rows)\n \nisp=# explain select * from isppm where item_upc_cd = '06038301234';\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using isppm_x0 on isppm (cost=0.00..5.86 rows=2 width=791)\n Index Cond: (item_upc_cd = '06038301234'::bpchar)\n(2 rows)\n\n\nDave\nOn Thu, 2003-12-18 at 20:38, Christopher Kings-Lynne wrote:\n> > It appears that the optimizer only uses indexes for = clause? \n> \n> The optimizer will used indexes for LIKE clauses, so long as the clause \n> is a prefix search, eg:\n> \n> SELECT * FROM test WHERE a LIKE 'prf%';\n> \n> Chris\n> \n> \n\n", "msg_date": "18 Dec 2003 22:08:37 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "[email protected] (Dave Cramer) wrote:\n> It appears that the optimizer only uses indexes for = clause? 
\n\nIt can use indices only if there is a given prefix.\n\nThus:\n where text_field like 'A%'\n\ncan use the index, essentially transforming this into the clauses\n\n where text_field >= 'A' and\n text_field < 'B'.\n\nYou can't get much out of an index for\n where text_field like '%SOMETHING'\n-- \n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/wp.html\n\"When the grammar checker identifies an error, it suggests a\ncorrection and can even makes some changes for you.\" \n-- Microsoft Word for Windows 2.0 User's Guide, p.35:\n", "msg_date": "Thu, 18 Dec 2003 22:22:38 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes with a like\n\tclause" }, { "msg_contents": "\nOn Thu, 18 Dec 2003, Dave Cramer wrote:\n\n> after vacuum verbose analyze, I still get\n>\n> explain select * from isppm where item_upc_cd like '06038301234';\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Seq Scan on isppm (cost=100000000.00..100009684.89 rows=2 width=791)\n> Filter: (item_upc_cd ~~ '06038301234'::text)\n> (2 rows)\n\nIIRC, the other limitation is that it only does so in \"C\" locale due to\nwierdnesses in other locales.\n", "msg_date": "Thu, 18 Dec 2003 19:36:02 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> after vacuum verbose analyze, I still get [a seqscan]\n\nThe other gating factor is that you have to have initdb'd in C locale.\nNon-C locales tend to use wild and wooly sort orders that are not\ncompatible with what LIKE needs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Dec 2003 22:44:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes " }, { "msg_contents": "So even in a north-american locale, such as en_CA this will be a\nproblem?\n\nDave\nOn Thu, 2003-12-18 at 22:44, Tom Lane wrote:\n> Dave Cramer <[email protected]> writes:\n> > after vacuum verbose analyze, I still get [a seqscan]\n> \n> The other gating factor is that you have to have initdb'd in C locale.\n> Non-C locales tend to use wild and wooly sort orders that are not\n> compatible with what LIKE needs.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n", "msg_date": "19 Dec 2003 04:50:12 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "Tom Lane wrote:\n> No, they are not that easy to determine. In particular I think the idea\n> of automatically feeding back error measurements is hopeless, because\n> you cannot tell which parameters are wrong.\n\nIsn't it just a matter of solving an equation system with n variables (n \nbeing the number of parameters), where each equation stands for the \ncalculation of the run time of a particular query? I.e. something like\nthis for a sequential scan over 1000 rows with e.g. 
2 operators used per \niteration that took 2 seconds (simplified so that the costs are actual \ntimings and not relative costs to a base value):\n\n1000 * sequential_scan_cost + 1000 * 2 * cpu_operator_cost = 2.0 seconds\n\nWith a sufficient number of equations (not just n, since not all query \nplans use all the parameters) this system can be solved for the \nparticular query mix that was used. E.g. with a second sequential scan \nover 2000 rows with 1 operator per iteration that took 3 seconds you can \nderive:\n\nsequential_scan_cost = 1ms\ncpu_operator_cost = 0.5ms\n\nThis could probably be implemented with very little overhead compared to \nthe actual run times of the queries.\n\nRegard,\n Marinos\n\n\n\n", "msg_date": "Fri, 19 Dec 2003 14:53:22 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why do optimizer parameters have to be set manually?" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> So even in a north-american locale, such as en_CA this will be a\n> problem?\n\nIf it's not \"C\" we won't try to optimize LIKE. I know en_US does not\nwork (case-insensitive, funny rules about spaces, etc) and I would\nexpect en_CA has the same issues.\n\nIf you're using 7.4 you have the option to create a special index\ninstead of re-initdb'ing your whole database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Dec 2003 09:38:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes " }, { "msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> Tom Lane wrote:\n>> No, they are not that easy to determine. In particular I think the idea\n>> of automatically feeding back error measurements is hopeless, because\n>> you cannot tell which parameters are wrong.\n\n> Isn't it just a matter of solving an equation system with n variables (n \n> being the number of parameters), where each equation stands for the \n> calculation of the run time of a particular query?\n\nIf we knew all the variables involved, it might be (though since the\nequations would be nonlinear, the solution would be more difficult than\nyou suppose). The real problems are:\n\n1. There is lots of noise in any real-world measurement, mostly due to\ncompetition from other processes.\n\n2. There are effects we don't even try to model, such as the current\ncontents of kernel cache. Everybody who's done any work with Postgres\nknows that for small-to-middling tables, running the same query twice in\na row will yield considerably different runtimes, because the second\ntime through all the data will be in kernel cache. But we don't have\nany useful way to model that in the optimizer, since we can't see what\nthe kernel has in its buffers.\n\n3. Even for the effects we do try to model, some of the equations are\npretty ad-hoc and might not fit real data very well. 
(I have little\nconfidence in the current correction for index order correlation, for\nexample.)\n\nIn short, if you just try to fit the present cost equations to real\ndata, what you'll get will inevitably be \"garbage in, garbage out\".\nYou could easily end up with parameter values that are much less\nrealistic than the defaults.\n\nOver time we'll doubtless improve the optimizer's cost models, and\nsomeday we might get to a point where this wouldn't be a fool's errand,\nbut I don't see it happening in the foreseeable future.\n\nI think a more profitable approach is to set up special test code to try\nto approximate the value of individual parameters measured in isolation.\nFor instance, the current default of 4.0 for random_page_cost was\ndeveloped through rather extensive testing a few years ago, and I think\nit's still a decent average value (for the case where you are actually\ndoing I/O, mind you). But if your disks have particularly fast or slow\nseek times, maybe it's not good for you. It might be useful to package\nup a test program that repeats those measurements on particular systems\n--- though the problem of noisy measurements still applies. It is not\neasy or cheap to get a measurement that isn't skewed by kernel caching\nbehavior. (You need a test file significantly larger than RAM, and\neven then you'd better repeat the measurement quite a few times to see\nhow much noise there is in it.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Dec 2003 10:07:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why do optimizer parameters have to be set manually? " }, { "msg_contents": "Hello,\n\ni got indexes to work with \"text_pattern_ops\" for locale et_EE.\n\nSo instead of:\ncreate index some_index_name on some_table(some_text_field);\n\nnor\n\ncreate index some_index_name on some_table(some_text_field text_ops);\n\ntry to create index as follows:\ncreate index some_index_name on some_table(some_text_field \ntext_pattern_ops);\n\nNote that text_pattern_ops is available pg >= 7.4.\n\nRegards,\n\nErki Kaldj�rv\nWebware O�\nwww.webware.ee\n\nTom Lane wrote:\n\n>Dave Cramer <[email protected]> writes:\n> \n>\n>>So even in a north-american locale, such as en_CA this will be a\n>>problem?\n>> \n>>\n>\n>If it's not \"C\" we won't try to optimize LIKE. I know en_US does not\n>work (case-insensitive, funny rules about spaces, etc) and I would\n>expect en_CA has the same issues.\n>\n>If you're using 7.4 you have the option to create a special index\n>instead of re-initdb'ing your whole database.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n\n\n\n\n\n\nHello,\n\ni got indexes to work with \"text_pattern_ops\" for locale et_EE.\n\nSo instead of:\ncreate index some_index_name on some_table(some_text_field);\n\nnor\n\ncreate index some_index_name on some_table(some_text_field text_ops);\n\ntry to create index as follows:\ncreate index some_index_name on some_table(some_text_field text_pattern_ops);\n\nNote that text_pattern_ops is available pg >= 7.4.\n\nRegards,\n\nErki Kaldjärv\nWebware OÜ\nwww.webware.ee\n\nTom Lane wrote:\n\nDave Cramer <[email protected]> writes:\n \n\nSo even in a north-american locale, such as en_CA this will be a\nproblem?\n \n\n\nIf it's not \"C\" we won't try to optimize LIKE. 
I know en_US does not\nwork (case-insensitive, funny rules about spaces, etc) and I would\nexpect en_CA has the same issues.\n\nIf you're using 7.4 you have the option to create a special index\ninstead of re-initdb'ing your whole database.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster", "msg_date": "Fri, 19 Dec 2003 18:00:25 +0200", "msg_from": "=?ISO-8859-1?Q?Erki_Kaldj=E4rv?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" }, { "msg_contents": "Tom Lane wrote:\n> easy or cheap to get a measurement that isn't skewed by kernel caching\n> behavior. (You need a test file significantly larger than RAM, and\n> even then you'd better repeat the measurement quite a few times to see\n> how much noise there is in it.)\n\nI found a really fast way in Linux to flush the kernel cache and that is \nto unmount the drive and then remount. Beats having to read though a \nfile > RAM everytime.\n\n", "msg_date": "Fri, 19 Dec 2003 09:44:17 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why do optimizer parameters have to be set manually?" }, { "msg_contents": "Doug,\n\nYes, it does depend on the locale, you can get around this in 7.4 by\nbuilding the index with smart operators\n\nDave\nOn Thu, 2003-12-18 at 20:38, Doug McNaught wrote:\n> Christopher Kings-Lynne <[email protected]> writes:\n> \n> >> It appears that the optimizer only uses indexes for = clause?\n> >\n> > The optimizer will used indexes for LIKE clauses, so long as the\n> > clause is a prefix search, eg:\n> >\n> > SELECT * FROM test WHERE a LIKE 'prf%';\n> \n> Doesn't this still depend on your locale?\n> \n> -Doug\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n", "msg_date": "24 Dec 2003 08:58:34 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it possible to get the optimizer to use indexes" } ]
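Putting Tom's and Erki's suggestions together for Dave's isppm example: check the locale first, then build the special operator-class index (7.4 or later). The index name below is made up; since the earlier EXPLAIN showed item_upc_cd cast to bpchar, the matching class would be bpchar_pattern_ops, with text_pattern_ops or varchar_pattern_ops for text and varchar columns.

SHOW lc_collate;    -- anything other than C means LIKE needs a *_pattern_ops index

CREATE INDEX isppm_upc_like ON isppm (item_upc_cd bpchar_pattern_ops);

EXPLAIN SELECT * FROM isppm WHERE item_upc_cd LIKE '06038301234%';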
[ { "msg_contents": "Hi.\n\nI have a table with 24k records and btree index on column 'id'. Is this\nnormal, that 'select max(id)' or 'select count(id)' causes a sequential\nscan? It takes over 24 seconds (on a pretty fast machine):\n\n=> explain ANALYZE select max(id) from ogloszenia;\n QUERY PLAN\n----------------------------------------------------------------------\n Aggregate (cost=3511.05..3511.05 rows=1 width=4) (actual\ntime=24834.629..24834.629 rows=1 loops=1)\n -> Seq Scan on ogloszenia (cost=0.00..3473.04 rows=15204 width=4)\n(actual time=0.013..24808.377 rows=16873 loops=1)\n Total runtime: 24897.897 ms\n\nMaybe it's caused by a number of varchar fields in this table? However,\n'id' column is 'integer' and is primary key.\n\nClustering table on index created on 'id' makes such a queries\nmany faster, but they still use a sequential scan.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Mon, 22 Dec 2003 11:39:18 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "\"select max/count(id)\" not using index" }, { "msg_contents": "\n> I have a table with 24k records and btree index on column 'id'. Is this\n> normal, that 'select max(id)' or 'select count(id)' causes a sequential\n> scan? It takes over 24 seconds (on a pretty fast machine):\n> \n> => explain ANALYZE select max(id) from ogloszenia;\n\nYes, it is. It is a known issue with Postgres's extensible operator \narchitecture.\n\nThe work around is to have an index on the id column and do this instead:\n\nSELECT id FROM ogloszenia ORDER BY id DESC LIMIT 1;\n\nWhich will be really fast.\n\nChris\n\n", "msg_date": "Mon, 22 Dec 2003 18:56:50 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"select max/count(id)\" not using index" }, { "msg_contents": "Guten Tag Ryszard Lach,\n\nAm Montag, 22. Dezember 2003 um 11:39 schrieben Sie:\n\nRL> Hi.\n\nRL> I have a table with 24k records and btree index on column 'id'. Is this\nRL> normal, that 'select max(id)' or 'select count(id)' causes a sequential\nRL> scan? It takes over 24 seconds (on a pretty fast machine):\n\nYes, that was occasionally discussed on the mailinglists. For the\nmax(id) you can use instead \"SELECT id FROM table ORDER BY id DESC\nLIMIT 1\"\n\n\nChristoph Nelles\n\n\n=>> explain ANALYZE select max(id) from ogloszenia;\nRL> QUERY PLAN\nRL> ----------------------------------------------------------------------\nRL> Aggregate (cost=3511.05..3511.05 rows=1 width=4) (actual\nRL> time=24834.629..24834.629 rows=1 loops=1)\nRL> -> Seq Scan on ogloszenia (cost=0.00..3473.04 rows=15204 width=4)\nRL> (actual time=0.013..24808.377 rows=16873 loops=1)\nRL> Total runtime: 24897.897 ms\n\nRL> Maybe it's caused by a number of varchar fields in this table? However,\nRL> 'id' column is 'integer' and is primary key.\n\nRL> Clustering table on index created on 'id' makes such a queries\nRL> many faster, but they still use a sequential scan.\n\nRL> Richard.\n\n\n\n\n-- \nMit freundlichen Grďż˝ssen\nEvil Azrael mailto:[email protected]\n\n", "msg_date": "Mon, 22 Dec 2003 11:59:58 +0100", "msg_from": "Evil Azrael <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"select max/count(id)\" not using index" }, { "msg_contents": "Hello\n\nIt is normal behavior PostgreSQL. 
Use\n\nSELECT id FROM tabulka ORDER BY id DESC LIMIT 1;\n\nregards\nPavel\n\nOn Mon, 22 Dec 2003, Ryszard Lach wrote:\n\n> Hi.\n> \n> I have a table with 24k records and btree index on column 'id'. Is this\n> normal, that 'select max(id)' or 'select count(id)' causes a sequential\n> scan? It takes over 24 seconds (on a pretty fast machine):\n> \n> => explain ANALYZE select max(id) from ogloszenia;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Aggregate (cost=3511.05..3511.05 rows=1 width=4) (actual\n> time=24834.629..24834.629 rows=1 loops=1)\n> -> Seq Scan on ogloszenia (cost=0.00..3473.04 rows=15204 width=4)\n> (actual time=0.013..24808.377 rows=16873 loops=1)\n> Total runtime: 24897.897 ms\n> \n> Maybe it's caused by a number of varchar fields in this table? However,\n> 'id' column is 'integer' and is primary key.\n> \n> Clustering table on index created on 'id' makes such a queries\n> many faster, but they still use a sequential scan.\n> \n> Richard.\n> \n> \n\n", "msg_date": "Mon, 22 Dec 2003 12:03:05 +0100 (CET)", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"select max/count(id)\" not using index" }, { "msg_contents": "Dnia 2003-12-22 11:39, Uďż˝ytkownik Ryszard Lach napisaďż˝:\n\n> Hi.\n> \n> I have a table with 24k records and btree index on column 'id'. Is this\n> normal, that 'select max(id)' or 'select count(id)' causes a sequential\n> scan? It takes over 24 seconds (on a pretty fast machine):\n'select count(id)'\nYes, this is normal. Because of MVCC all rows must be checked and \nPostgres doesn't cache count(*) like Mysql.\n\n'select max(id)'\nThis is also normal, but try to change this query into:\nselect id from some_table order by id desc limit 1;\n\nWhat is your Postgresql version?\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Mon, 22 Dec 2003 12:03:45 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"select max/count(id)\" not using index" } ]
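For reference, the workaround suggested in the replies above, written against the table from this thread (its primary key on id already provides the needed btree index), plus the usual trick for an approximate row count when an exact count(*) is not required. Note that reltuples is only as fresh as the last VACUUM or ANALYZE:

    SELECT id FROM ogloszenia ORDER BY id DESC LIMIT 1;   -- stands in for max(id)
    SELECT id FROM ogloszenia ORDER BY id ASC LIMIT 1;    -- stands in for min(id)
    SELECT reltuples FROM pg_class WHERE relname = 'ogloszenia';  -- approximate count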
[ { "msg_contents": " I just restored a database running on a solaris box to a linux box \nand queries take forever to execute. The linux box is faster and has \ntwice the memory allocated to postgresql, is there anything obvious that \nI should look at? It is using a journal file system.\n\n\n\n", "msg_date": "Mon, 22 Dec 2003 14:11:54 -0500", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql performance on linux port" }, { "msg_contents": "Michael Guerin <[email protected]> writes:\n> I just restored a database running on a solaris box to a linux box \n> and queries take forever to execute.\n\nDid you remember to run ANALYZE? Have you applied the same\nconfiguration settings that you were using before?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Dec 2003 17:32:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance on linux port " }, { "msg_contents": "Hi Tom,\n\n I don't believe I did run Analyze, I was under the assumption that the\nstatistics would have been up to date when the indexes were created.\nThanks for the quick response.\n\n-mike\n\n\nTom Lane wrote:\n\n> Michael Guerin <[email protected]> writes:\n> > I just restored a database running on a solaris box to a linux box\n> > and queries take forever to execute.\n>\n> Did you remember to run ANALYZE? Have you applied the same\n> configuration settings that you were using before?\n>\n> regards, tom lane\n\n", "msg_date": "Mon, 22 Dec 2003 19:33:37 -0500", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql performance on linux port" } ]
[ { "msg_contents": "\nG'day all ...\n\nDave asked me today about 'slow downs' on the search engines, so am\nlooking at the various queries generated by enabling\nlog_statement/log_duration, to get a feel for is something is \"off\" ...\nand the following seems a bit weird ...\n\nQueryA and QueryB are the same query, but against two different tables in\nthe databases ... QueryA takes ~4x longer to run then QueryB, but both\nEXPLAINs look similar ... in fact, looking at the EXPLAIN ANALYZE output,\nI would expect that QueryB would be the slower of the two ... but, the\nactual vs estimated times for ndict5/ndict4 seem off (ndict4 is estimated\nhigh, ndict5 is estimated low) ...\n\nQueryA:\n\n186_archives=# explain analyze SELECT ndict5.url_id,ndict5.intag\n FROM ndict5, url\n WHERE ndict5.word_id=1343124681\n AND url.rec_id=ndict5.url_id\n AND ((url.url || '') LIKE 'http://archives.postgresql.org/%%');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..69799.69 rows=44 width=8) (actual time=113.067..26477.672 rows=14112 loops=1)\n -> Index Scan using n5_word on ndict5 (cost=0.00..34321.89 rows=8708 width=8) (actual time=27.349..25031.666 rows=15501 loops=1)\n Index Cond: (word_id = 1343124681)\n -> Index Scan using url_rec_id on url (cost=0.00..4.06 rows=1 width=4) (actual time=0.061..0.068 rows=1 loops=15501)\n Index Cond: (url.rec_id = \"outer\".url_id)\n Filter: ((url || ''::text) ~~ 'http://archives.postgresql.org/%%'::text)\n Total runtime: 26550.566 ms\n(7 rows)\n\nQueryB:\n\n186_archives=# explain analyze SELECT ndict4.url_id,ndict4.intag\n FROM ndict4, url\n WHERE ndict4.word_id=-2038735111\n AND url.rec_id=ndict4.url_id\n AND ((url.url || '') LIKE 'http://archives.postgresql.org/%%');\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..99120.97 rows=62 width=8) (actual time=26.330..6630.581 rows=2694 loops=1)\n -> Index Scan using n4_word on ndict4 (cost=0.00..48829.52 rows=12344 width=8) (actual time=7.954..6373.098 rows=2900 loops=1)\n Index Cond: (word_id = -2038735111)\n -> Index Scan using url_rec_id on url (cost=0.00..4.06 rows=1 width=4) (actual time=0.059..0.066 rows=1 loops=2900)\n Index Cond: (url.rec_id = \"outer\".url_id)\n Filter: ((url || ''::text) ~~ 'http://archives.postgresql.org/%%'::text)\n Total runtime: 6643.462 ms\n(7 rows)\n\n\n\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n", "msg_date": "Mon, 22 Dec 2003 17:11:25 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "mnogosearch under 7.4 ..." } ]
[ { "msg_contents": "I consider using PostgreSQL for a project we have in our company and, to\nget a better picture of the product, I started scanning its source code\nand internal documentation.\nBased on what I saw (and maybe I didn't see enough) it seems that the\noptimizer will always decide to repeatedly scan the whole row set\nreturned by sub selects in the context of an IN clause sequentially, as\nopposed to what I would expect it to do (which is to create some index\nor hash structure to improve performance).\nFor example, if I have the following query:\nSelect * from a where x in (select y from b where z=7) \nThen I would expect an index or hash structure to be created for b.y\nwhen it is first scanned and brought into the cache but I couldn't see\nit happening in the source.\nAs I said, I only inferred it from reading the source - not from actual\nexperiments - so I may be wrong.\n1. Am I wrong?\n2. If I'm right, is there any plan to change it (after all, in the\ncontext of an IN clause, an index on the returned row set is all that is\nneeded - the row set itself does not seem to matter).\n \nThank you,\n \nMichael Rothschild\n\nMessage\n\n\n\n\nI consider using PostgreSQL for a project we have in \nour company and, to get a better picture of the product, I started \nscanning its source code and internal documentation.\nBased on what I saw (and maybe I didn't see enough) \nit seems that the optimizer will always decide to repeatedly scan the \nwhole row set returned by sub selects in the context of an IN \nclause sequentially, as opposed to what I would expect it to do (which is \nto create some index or hash structure to improve \nperformance).\nFor example, if I have the following \nquery:Select * from a where x in (select y from \nb where z=7) \nThen I would expect an index or hash structure to be created \nfor b.y when it is first scanned and brought into the cache but I couldn't see \nit happening in the source.\nAs I said, I only inferred it from reading the \nsource - not from actual experiments - so I may be \nwrong.\n1. Am I \nwrong?\n2. If I'm right, is \nthere any plan to change it (after all, in the context of an IN clause, an index \non the returned row set is all that is needed - the row set itself does not seem \nto matter).\n \nThank \nyou,\n \nMichael \nRothschild", "msg_date": "Wed, 24 Dec 2003 18:25:43 +0200", "msg_from": "\"Michael Rothschild\" <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "> For example, if I have the following query:\n> Select * from a where x in (select y from b where z=7)\n> Then I would expect an index or hash structure to be created for b.y\n> when it is first scanned and brought into the cache but I couldn't see\n> it happening in the source.\n> As I said, I only inferred it from reading the source - not from actual\n> experiments - so I may be wrong.\n> 1. 
Am I wrong?\n\nYou are wrong - this is old behaviour and one of the major speed\nimprovements of PostgreSQL 7.4 is that IN subqueries now use a hash index\nand hence they are much faster.\n\nChris\n\n\n", "msg_date": "Thu, 25 Dec 2003 10:47:28 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\"Michael Rothschild\" <[email protected]> writes:\n> Based on what I saw (and maybe I didn't see enough) it seems that the\n> optimizer will always decide to repeatedly scan the whole row set\n> returned by sub selects in the context of an IN clause sequentially,\n\nWhat version were you looking at?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Dec 2003 11:28:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
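To make the version difference concrete: the query from the original question runs as-is and benefits from the hashed IN on 7.4, while on 7.3 and earlier the usual advice was to rewrite the IN as a correlated EXISTS. Both forms below use the poster's hypothetical tables a and b:

    -- 7.4: the IN subquery is handled with a hash automatically
    SELECT * FROM a WHERE x IN (SELECT y FROM b WHERE z = 7);

    -- 7.3 and earlier: the common workaround was an EXISTS rewrite
    SELECT * FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.z = 7 AND b.y = a.x);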
[ { "msg_contents": "I have a database where the vast majority of information that is related to\na customer never changes. However, there is a single field (i.e. balance)\nthat changes potentially tens to hundreds of times per day per customer\n(customers ranging in the 1000s to 10000s). This information is not indexed.\nBecause Postgres requires VACUUM ANALYZE more frequently on updated tables,\nshould I break this single field out into its own table, and if so what kind\nof a speed up can I expect to achieve. I would be appreciative of any\nguidance offered.\n \nBTW, currently using Postgres 7.3.4\n \n \nKeith\n \n \n\n\n\nMessage\n\n\nI have a database \nwhere the vast majority of information that is related to a customer never \nchanges. However, there is a single field (i.e. balance) that changes \npotentially tens to hundreds of times per day per customer (customers ranging in \nthe 1000s to 10000s). This information is not indexed. Because Postgres requires \nVACUUM ANALYZE more frequently on updated tables, should I break this single \nfield out into its own table, and if so what kind of a speed up can I expect to \nachieve. I would be appreciative of any guidance offered.\n \nBTW, currently using \nPostgres 7.3.4\n \n \n\nKeith", "msg_date": "Fri, 26 Dec 2003 18:11:19 -0600", "msg_from": "\"Keith Bottner\" <[email protected]>", "msg_from_op": true, "msg_subject": "What's faster?" }, { "msg_contents": "\"Keith Bottner\" <[email protected]> writes:\n> I have a database where the vast majority of information that is related to\n> a customer never changes. However, there is a single field (i.e. balance)\n> that changes potentially tens to hundreds of times per day per customer\n> (customers ranging in the 1000s to 10000s). This information is not indexed.\n> Because Postgres requires VACUUM ANALYZE more frequently on updated tables,\n> should I break this single field out into its own table,\n\nVery likely a good idea, if the primary key that you'd need to add to\nidentify the balance is narrow. Hard to say exactly how large the\nbenefit would be, but I'd think the update costs would be reduced\nconsiderably.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Dec 2003 19:49:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's faster? " }, { "msg_contents": "> Because Postgres requires VACUUM ANALYZE more frequently on updated tables,\n> should I break this single field out into its own table, and if so what kind\n> of a speed up can I expect to achieve. I would be appreciative of any\n> guidance offered.\n\nUnless that field is part of the key, I wouldn't think that a vacuum \nanalyze would be needed, as the key distribution isn't changing. \n\nI don't know if that is still true if that field is indexed. Tom?\n\nEven then, as I understand things vacuum analyze doesn't rebuild indexes, \nso I could see a need to drop and rebuild indexes on a regular basis,\neven if you move that field into a separate table. \n--\nMike Nolan\n", "msg_date": "Fri, 26 Dec 2003 19:06:21 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's faster?" }, { "msg_contents": "Mike Nolan <[email protected]> writes:\n>> Because Postgres requires VACUUM ANALYZE more frequently on updated tables,\n>> should I break this single field out into its own table, and if so what kind\n>> of a speed up can I expect to achieve. 
I would be appreciative of any\n>> guidance offered.\n\n> Unless that field is part of the key, I wouldn't think that a vacuum \n> analyze would be needed, as the key distribution isn't changing. \n\nThe \"analyze\" wouldn't matter ... but the \"vacuum\" would. He needs to\nget rid of the dead rows in a timely fashion. The wider the rows, the\nmore disk space is at stake.\n\nAlso, if he has more than just a primary index on the main table,\nthe cost of updating the secondary indexes must be considered.\nA balance-only table would presumably have just one index to update.\n\nAgainst all this you have to weigh the cost of doing a join to get the\nbalance, so it's certainly not a no-brainer choice. But I think it's\nsurely worth considering such a design.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Dec 2003 23:00:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's faster? " }, { "msg_contents": "On December 26, 2003 07:11 pm, Keith Bottner wrote:\n> I have a database where the vast majority of information that is related to\n> a customer never changes. However, there is a single field (i.e. balance)\n> that changes potentially tens to hundreds of times per day per customer\n> (customers ranging in the 1000s to 10000s). This information is not\n> indexed. Because Postgres requires VACUUM ANALYZE more frequently on\n> updated tables, should I break this single field out into its own table,\n> and if so what kind of a speed up can I expect to achieve. I would be\n> appreciative of any guidance offered.\n\nWe went through this recently. One thing we found that may apply to you is \nhow many fields in the client record have a foreign key constraint. We find \nthat tables with lots of FKeys are a lot more intensive on updates. In our \ncase it was another table, think of it as an order or header table with a \nbalance, that has over 10 million records. Sometimes we have 200,000 \ntransactions a day where we have to check the balance. We eventually moved \nevery field that could possibly be updated on a regular basis out to separate \ntables. The improvement was dramatic.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 27 Dec 2003 05:52:07 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's faster?" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <[email protected]> writes:\n> We went through this recently. One thing we found that may apply to you is \n> how many fields in the client record have a foreign key constraint. We find \n> that tables with lots of FKeys are a lot more intensive on updates.\n\nBTW, this should have gotten better in 7.3.4 and later --- there is\nlogic to skip checking an FKey reference if the referencing columns\ndidn't change during the update.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Dec 2003 13:30:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's faster? " } ]
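A minimal sketch of the split being discussed, with hypothetical table and column names. The point is that the hot UPDATE now touches a narrow row with a single index, while the wide, rarely-changing row is left alone; the trade-off is the join Tom mentions when both halves are needed:

    CREATE TABLE customer (
        customer_id integer PRIMARY KEY,
        name        text
        -- ...the many columns that rarely change
    );

    CREATE TABLE customer_balance (
        customer_id integer PRIMARY KEY REFERENCES customer,
        balance     numeric(11,2) NOT NULL DEFAULT 0
    );

    UPDATE customer_balance
       SET balance = balance + 10.00
     WHERE customer_id = 42;

    SELECT c.name, b.balance
      FROM customer c JOIN customer_balance b USING (customer_id)
     WHERE c.customer_id = 42;   -- the extra join cost to weigh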
[ { "msg_contents": "To all,\n\nThe facts:\n\nPostgreSQL 7.4.0 running on BSD 5.1 on Dell 2650 with 4GB RAM, 5 SCSI \ndrives in hardware RAID 0 configuration. Database size with indexes is \ncurrently 122GB. Schema for the table in question is at the end of this \nemail. The DB has been vacuumed full and analyzed. Between 2 and 3 \nmillion records are added to the table in question each night. An \nanalyze is run on the entire DB after the data has been loaded each \nnight. There are no updates or deletes of records during the nightly \nload, only insertions.\n\nI am trying to understand why the performance between the two queries \nbelow is so different. I am trying to find the count of all pages that \nhave a 'valid' content_key. -1 is our 'we don't have any content' key. \nThe first plan below has horrendous performance. we only get about 2% \nCPU usage and iostat shows 3-5 MB/sec IO. The second plan runs at 30% \ncpu and 15-30MB.sec IO. \n\nCould someone shed some light on why the huge difference in \nperformance? Both are doing index scans plus a filter. We have no \ncontent_keys below -1 at this time so the queries return the same results.\n\nThanks.\n\n\n--sean\n\n\nexplain select count (distinct (persistent_cookie_key) ) from \nf_pageviews where date_key between 305 and 334 and content_key > -1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Aggregate (cost=688770.29..688770.29 rows=1 width=4)\n -> Index Scan using idx_pageviews_content on f_pageviews \n(cost=0.00..645971.34 rows=17119580 width=4)\n Index Cond: (content_key > -1)\n Filter: ((date_key >= 305) AND (date_key <= 334))\n(4 rows)\n\n\nexplain select count (distinct (persistent_cookie_key) ) from \nf_pageviews where date_key between 305 and 334 and content_key <> -1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1365419.12..1365419.12 rows=1 width=4)\n -> Index Scan using idx_pageviews_date_nov_2003 on f_pageviews \n(cost=0.00..1322615.91 rows=17121284 width=4)\n Index Cond: ((date_key >= 305) AND (date_key <= 334))\n Filter: (content_key <> -1)\n(4 rows)\n\n\n \\d f_pageviews\n Table \"public.f_pageviews\"\n Column | Type | Modifiers\n------------------------+---------+-------------------------------------------------------------\n id | integer | not null default \nnextval('public.f_pageviews_id_seq'::text)\n date_key | integer | not null\n time_key | integer | not null\n content_key | integer | not null\n location_key | integer | not null\n session_key | integer | not null\n subscriber_key | text | not null\n persistent_cookie_key | integer | not null\n ip_key | integer | not null\n referral_key | integer | not null\n servlet_key | integer | not null\n tracking_key | integer | not null\n provider_key | text | not null\n marketing_campaign_key | integer | not null\n orig_airport | text | not null\n dest_airport | text | not null\n commerce_page | boolean | not null default false\n job_control_number | integer | not null\n sequenceid | integer | not null default 0\n url_key | integer | not null\n useragent_key | integer | not null\n web_server_name | text | not null default 'Not Available'::text\n cpc | integer | not null default 0\n referring_servlet_key | integer | default 1\n first_page_key | integer | default 1\n newsletterid_key | text | not null default 'Not Available'::text\nIndexes:\n \"f_pageviews_pkey\" primary key, btree (id)\n 
\"idx_pageviews_content\" btree (content_key)\n \"idx_pageviews_date_dec_2003\" btree (date_key) WHERE ((date_key >= \n335) AND (date_key <= 365))\n \"idx_pageviews_date_nov_2003\" btree (date_key) WHERE ((date_key >= \n304) AND (date_key <= 334))\n \"idx_pageviews_referring_servlet\" btree (referring_servlet_key)\n \"idx_pageviews_servlet\" btree (servlet_key)\n \"idx_pageviews_session\" btree (session_key)\n\n\n\n", "msg_date": "Mon, 29 Dec 2003 11:35:43 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Question about difference in performance of 2 queries on large table" }, { "msg_contents": "Please show EXPLAIN ANALYZE output for your queries, not just EXPLAIN.\nAlso it would be useful to see the pg_stats rows for the date_key and\ncontent_key columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Dec 2003 11:48:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about difference in performance of 2 queries on large\n\ttable" }, { "msg_contents": "On Mon, 29 Dec 2003, Sean Shanny wrote:\n\n> The first plan below has horrendous performance. we only get about 2% \n> CPU usage and iostat shows 3-5 MB/sec IO. The second plan runs at 30% \n> cpu and 15-30MB.sec IO. \n> \n> Could someone shed some light on why the huge difference in \n> performance? Both are doing index scans plus a filter. We have no \n> content_keys below -1 at this time so the queries return the same results.\n\nEXPLAIN ANALYZE gives more information then EXPLAIN, and is prefered.\n\nIt uses different indexes in the two queries, and one seems to be \nfaster then the other. Why, I can't tell yet.\n\nI would assume that you would get the fastet result if you had an index \n\n (content_key, date_key)\n\nI don't know if pg will even use an index to speed up a <> operation. When \nyou had > then it could use the idx_pageviews_content index. Why it choose \nthat when the other would be faster I don't know. Maybe explain analyze \nwill give some hint.\n\n-- \n/Dennis\n\n", "msg_date": "Mon, 29 Dec 2003 17:48:49 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about difference in performance of 2 queries" }, { "msg_contents": "I am running explain analyze now and will post results as they finish.\n\nThanks.\n\n--sean\n\nTom Lane wrote:\n\n>Please show EXPLAIN ANALYZE output for your queries, not just EXPLAIN.\n>Also it would be useful to see the pg_stats rows for the date_key and\n>content_key columns.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n", "msg_date": "Mon, 29 Dec 2003 12:44:11 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about difference in performance of 2 queries" }, { "msg_contents": "Here is the pg_stats data. 
The explain analyze queries are still running.\n\nselect * from pg_stats where tablename = 'f_pageviews' and attname = \n'date_key';\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| \nmost_common_freqs \n| histogram_bounds | correlation\n------------+-------------+----------+-----------+-----------+------------+-------------------------------------------+---------------------------------------------------------------------------------------------------+-----------------------------------------------+-------------\n public | f_pageviews | date_key | 0 | 4 | \n60 | {335,307,309,336,308,321,314,342,322,316} | \n{0.0283333,0.0243333,0.0243333,0.0243333,0.024,0.0233333,0.0226667,0.0226667,0.0223333,0.0216667} \n| {304,311,318,325,329,334,341,346,351,356,363} | 0.345026\n(1 row)\n\nselect * from pg_stats where tablename = 'f_pageviews' and attname = \n'content_key';\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals | most_common_freqs \n| \nhistogram_bounds | correlation\n------------+-------------+-------------+-----------+-----------+------------+------------------+-----------------------+-------------------------------------------------------------------------------------+-------------\n public | f_pageviews | content_key | 0 | 4 | \n983 | {-1,1528483} | {0.749333,0.00166667} | \n{38966,323835,590676,717061,919148,1091875,1208244,1299702,1375366,1434079,1528910} \n| 0.103399\n(1 row)\n\nThanks.\n\n--sean\n\n\nTom Lane wrote:\n\n>Please show EXPLAIN ANALYZE output for your queries, not just EXPLAIN.\n>Also it would be useful to see the pg_stats rows for the date_key and\n>content_key columns.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n", "msg_date": "Mon, 29 Dec 2003 13:46:09 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about difference in performance of 2 queries" }, { "msg_contents": "Here is one of the explain analyzes. This is the from the faster \nquery. Ignore the total runtime as we are currently doing other queries \non this machine so it is slightly loaded.\n\nThanks.\n\n--sean\n\n\nexplain analyze select count (distinct (persistent_cookie_key) ) from \nf_pageviews where date_key between 305 and 334 and content_key <> -1;\n \nQUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1384925.95..1384925.95 rows=1 width=4) (actual \ntime=4541462.030..4541462.034 rows=1 loops=1)\n -> Index Scan using idx_pageviews_date_nov_2003 on f_pageviews \n(cost=0.00..1343566.52 rows=16543772 width=4) (actual \ntime=83.267..4286664.678 rows=15710722 loops=1)\n Index Cond: ((date_key >= 305) AND (date_key <= 334))\n Filter: (content_key <> -1)\n Total runtime: 4541550.832 ms\n(5 rows)\n\nTom Lane wrote:\n\n>Please show EXPLAIN ANALYZE output for your queries, not just EXPLAIN.\n>Also it would be useful to see the pg_stats rows for the date_key and\n>content_key columns.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n", "msg_date": "Mon, 29 Dec 2003 14:25:38 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about difference in performance of 2 queries" }, { "msg_contents": "Sean Shanny <[email protected]> writes:\n> Here is the pg_stats data. 
The explain analyze queries are still running.\n\n> select * from pg_stats where tablename = 'f_pageviews' and attname = \n> 'content_key';\n> schemaname | tablename | attname | null_frac | avg_width | \n> n_distinct | most_common_vals | most_common_freqs \n> | \n> histogram_bounds | correlation\n> ------------+-------------+-------------+-----------+-----------+------------+------------------+-----------------------+-------------------------------------------------------------------------------------+-------------\n> public | f_pageviews | content_key | 0 | 4 | \n> 983 | {-1,1528483} | {0.749333,0.00166667} | \n\nOh-ho, I see the problem: about 75% of your table has content_key = -1.\n\nWhy is that a problem, you ask? Well, the planner realizes that\n\"content_key > -1\" is a pretty good restriction condition (better than\nthe date condition, apparently) and so it tries to use that as the index\nscan condition. The problem is that in 7.4 and before, the btree index\ncode implements a \"> -1\" scan starting boundary by finding the first -1\nand then advancing to the first key that's not -1. So you end up\nscanning through 75% of the index before anything useful happens :-(\n\nI just fixed this poor behavior in CVS tip a couple weeks ago:\nhttp://archives.postgresql.org/pgsql-committers/2003-12/msg00220.php\nbut the patch seems too large and unproven to risk back-patching into\n7.4.*.\n\nIf you expect that a pretty large fraction of your data will always have\ndummy content_key, it'd probably be worth changing the index to not\nindex -1's at all --- that is, make it a partial index with the\ncondition \"WHERE content_key > -1\". Another workaround is to leave the\nindex as-is but phrase the query WHERE condition as \"content_key >= 0\"\ninstead of \"> -1\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Dec 2003 14:39:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about difference in performance of 2 queries " }, { "msg_contents": "Tom,\n\nThanks. I will make the changes you suggest concerning the indexes. I \nam finding partial indexes to be very handy. :-)\n\nI canceled the explain analyze on the other query as we have found the \nproblem and who knows how long it would take to complete.\n\nThanks again.\n\n--sean\n\nTom Lane wrote:\n\n>Sean Shanny <[email protected]> writes:\n> \n>\n>>Here is the pg_stats data. The explain analyze queries are still running.\n>> \n>>\n>\n> \n>\n>>select * from pg_stats where tablename = 'f_pageviews' and attname = \n>>'content_key';\n>> schemaname | tablename | attname | null_frac | avg_width | \n>>n_distinct | most_common_vals | most_common_freqs \n>>| \n>>histogram_bounds | correlation\n>>------------+-------------+-------------+-----------+-----------+------------+------------------+-----------------------+-------------------------------------------------------------------------------------+-------------\n>> public | f_pageviews | content_key | 0 | 4 | \n>>983 | {-1,1528483} | {0.749333,0.00166667} | \n>> \n>>\n>\n>Oh-ho, I see the problem: about 75% of your table has content_key = -1.\n>\n>Why is that a problem, you ask? Well, the planner realizes that\n>\"content_key > -1\" is a pretty good restriction condition (better than\n>the date condition, apparently) and so it tries to use that as the index\n>scan condition. 
The problem is that in 7.4 and before, the btree index\n>code implements a \"> -1\" scan starting boundary by finding the first -1\n>and then advancing to the first key that's not -1. So you end up\n>scanning through 75% of the index before anything useful happens :-(\n>\n>I just fixed this poor behavior in CVS tip a couple weeks ago:\n>http://archives.postgresql.org/pgsql-committers/2003-12/msg00220.php\n>but the patch seems too large and unproven to risk back-patching into\n>7.4.*.\n>\n>If you expect that a pretty large fraction of your data will always have\n>dummy content_key, it'd probably be worth changing the index to not\n>index -1's at all --- that is, make it a partial index with the\n>condition \"WHERE content_key > -1\". Another workaround is to leave the\n>index as-is but phrase the query WHERE condition as \"content_key >= 0\"\n>instead of \"> -1\".\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n> \n>\n\n", "msg_date": "Mon, 29 Dec 2003 14:52:38 -0500", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about difference in performance of 2 queries" }, { "msg_contents": "Tom,\n\nI understand the problem and your solution makes sense although I am \nstill puzzled by the machine under-utilization. If you run the original \nquery and monitor the IO/CPU usage you find that it is minimal.\n\nhere is the output from iostat 1 for a brief portion of the query. I am \nvery curious to understand why when scanning the index the IO/CPU \nutilization is seemingly low.\n\nCheers\nNick Shanny\nTripAdvisor, Inc.\n\n 0 77 32.00 106 3.31 0.00 0 0.00 0.00 0 0.00 0 0 2 0 \n98\n 0 76 32.00 125 3.90 0.00 0 0.00 0.00 0 0.00 0 0 2 \n0 97\n 0 76 32.00 125 3.90 0.00 0 0.00 0.00 0 0.00 0 0 1 \n1 98\n 0 76 32.75 127 4.05 0.00 0 0.00 0.00 0 0.00 0 0 1 \n0 99\n tty aacd0 acd0 fd0 \ncpu\n tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy \nin id\n 0 76 32.00 127 3.96 0.00 0 0.00 0.00 0 0.00 0 0 3 \n0 97\n 0 229 32.24 135 4.24 0.00 0 0.00 0.00 0 0.00 0 0 4 \n0 95\n 0 76 32.00 129 4.02 0.00 0 0.00 0.00 0 0.00 0 0 2 \n0 97\n 0 76 32.00 123 3.84 0.00 0 0.00 0.00 0 0.00 0 0 2 \n0 98\n 0 76 31.72 115 3.56 0.00 0 0.00 0.00 0 0.00 0 0 2 \n0 98\n 0 76 32.50 126 3.99 0.00 0 0.00 0.00 0 0.00 0 0 3 \n1 96\n 0 76 32.00 123 3.84 0.00 0 0.00 0.00 0 0.00 0 0 3 \n0 97\n 0 76 32.00 122 3.81 0.00 0 0.00 0.00 0 0.00 1 0 2 \n0 97\n 0 76 32.00 135 4.21 0.00 0 0.00 0.00 0 0.00 0 0 2 \n1 97\n 0 76 32.00 97 3.03 0.00 0 0.00 0.00 0 0.00 0 0 3 \n0 97\n\nOn Dec 29, 2003, at 2:39 PM, Tom Lane wrote:\n\n> Sean Shanny <[email protected]> writes:\n>> Here is the pg_stats data. The explain analyze queries are still \n>> running.\n>\n>> select * from pg_stats where tablename = 'f_pageviews' and attname =\n>> 'content_key';\n>> schemaname | tablename | attname | null_frac | avg_width |\n>> n_distinct | most_common_vals | most_common_freqs\n>> |\n>> histogram_bounds | correlation\n>> ------------+-------------+-------------+-----------+----------- \n>> +------------+------------------+----------------------- \n>> +--------------------------------------------------------------------- \n>> ----------------+-------------\n>> public | f_pageviews | content_key | 0 | 4 |\n>> 983 | {-1,1528483} | {0.749333,0.00166667} |\n>\n> Oh-ho, I see the problem: about 75% of your table has content_key = -1.\n>\n> Why is that a problem, you ask? 
Well, the planner realizes that\n> \"content_key > -1\" is a pretty good restriction condition (better than\n> the date condition, apparently) and so it tries to use that as the \n> index\n> scan condition. The problem is that in 7.4 and before, the btree index\n> code implements a \"> -1\" scan starting boundary by finding the first -1\n> and then advancing to the first key that's not -1. So you end up\n> scanning through 75% of the index before anything useful happens :-(\n>\n> I just fixed this poor behavior in CVS tip a couple weeks ago:\n> http://archives.postgresql.org/pgsql-committers/2003-12/msg00220.php\n> but the patch seems too large and unproven to risk back-patching into\n> 7.4.*.\n>\n> If you expect that a pretty large fraction of your data will always \n> have\n> dummy content_key, it'd probably be worth changing the index to not\n> index -1's at all --- that is, make it a partial index with the\n> condition \"WHERE content_key > -1\". Another workaround is to leave the\n> index as-is but phrase the query WHERE condition as \"content_key >= 0\"\n> instead of \"> -1\".\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n", "msg_date": "Tue, 30 Dec 2003 10:00:30 -0500", "msg_from": "Nicholas Shanny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about difference in performance of 2 queries " } ]
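For reference, the two workarounds Tom describes, written out against the table in this thread (the partial index name is made up):

    -- 1. A partial index that simply never contains the dummy -1 keys
    CREATE INDEX idx_pageviews_content_valid
        ON f_pageviews (content_key)
        WHERE content_key > -1;

    -- 2. Or keep the existing index and phrase the condition so the
    --    btree scan starts past the large block of -1 entries
    SELECT count(DISTINCT persistent_cookie_key)
      FROM f_pageviews
     WHERE date_key BETWEEN 305 AND 334
       AND content_key >= 0;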
[ { "msg_contents": "I'm observing that when I have many processes doing some work on my\nsystem that the transactions run along almost in lockstep. It appears\nfrom messages posted here that the foreign keys are acquiring and\nholding locks during the transactions, which seems like it would cause\nthis behavior.\n\nI'd like to experiment with deferred foreign key checks so that the\nlock is only held during the commit when the checks are done.\n\nMy questions are:\n\n1) can I, and if so, how do I convert my existing FK's to deferrable\n without drop/create of the keys. Some of the keys take a long time\n to create and I'd like to avoid the hit.\n\n2) do I increase the liklihood of deadlocks when the FK locks are\n being acquired or is it just as likeley as with the current\n non-deferred checking?\n\nI'm running 7.4 (soon to be 7.4.1)\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 29 Dec 2003 15:33:57 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "deferred foreign keys" }, { "msg_contents": "One more question: does the FK checker know to skip checking a\nconstraint if the column in question did not change during an update?\n\nThat is, if I have a user table that references an owner_id in an\nowners table as a foreign key, but I update fields other than owner_id\nin the user table, will it still try to verify that owner_id is a\nvalid value even though it did not change?\n\nI'm using PG 7.4.\n\nThanks.\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 31 Dec 2003 12:17:50 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "> One more question: does the FK checker know to skip checking a\n> constraint if the column in question did not change during an update?\n> \n> That is, if I have a user table that references an owner_id in an\n> owners table as a foreign key, but I update fields other than owner_id\n> in the user table, will it still try to verify that owner_id is a\n> valid value even though it did not change?\n> \n> I'm using PG 7.4.\n\nAs of 7.4, yes the check is skipped.\n\nChris\n", "msg_date": "Fri, 02 Jan 2004 12:57:09 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": ">>>>> \"CK\" == Christopher Kings-Lynne <[email protected]> writes:\n\n>> One more question: does the FK checker know to skip checking a\n>> constraint if the column in question did not change during an update?\n\nCK> As of 7.4, yes the check is skipped.\n\n\nThanks. Then it sorta makes it moot for me to try deferred checks,\nsince the Pimary and Foreign keys never change once set. I wonder\nwhat is making the transactions appear to run lockstep, then...\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. 
Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 05 Jan 2004 11:33:40 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "On Mon, Jan 05, 2004 at 11:33:40 -0500,\n Vivek Khera <[email protected]> wrote:\n> \n> Thanks. Then it sorta makes it moot for me to try deferred checks,\n> since the Pimary and Foreign keys never change once set. I wonder\n> what is making the transactions appear to run lockstep, then...\n\nI think this is probably the issue with foreign key checks needing an\nexclusive lock, since there is no shared lock that will prevent deletes.\nThis problem has been discussed a number of times on the lists and you\nshould be able to find out more information from the archives if you\nwant to confirm that this is the root cause of your problems.\n", "msg_date": "Mon, 5 Jan 2004 12:38:59 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "On Mon, 5 Jan 2004, Bruno Wolff III wrote:\n\n> On Mon, Jan 05, 2004 at 11:33:40 -0500,\n> Vivek Khera <[email protected]> wrote:\n> >\n> > Thanks. Then it sorta makes it moot for me to try deferred checks,\n> > since the Pimary and Foreign keys never change once set. I wonder\n> > what is making the transactions appear to run lockstep, then...\n>\n> I think this is probably the issue with foreign key checks needing an\n> exclusive lock, since there is no shared lock that will prevent deletes.\n\nBut, if he's updating the fk table but not the keyed column, it should no\nlonger be doing the check and grabbing the locks. If he's seeing it grab\nthe row locks still a full test case would be handy because it'd probably\nmean we missed something.\n", "msg_date": "Mon, 5 Jan 2004 10:57:02 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "\nOn Jan 5, 2004, at 1:38 PM, Bruno Wolff III wrote:\n\n> I think this is probably the issue with foreign key checks needing an\n> exclusive lock, since there is no shared lock that will prevent \n> deletes.\n>\n\nThat was my original thought upon reading all the discussion of late \nregarding the FK checking locks. I figured if I deferred the checks to \ncommit, I could save some contention time. However, if FK checks are \nskipped if the field in question is not updated, what locks would there \nbe? Are they taken even if the checks are not performed on some sort \nof \"be prepared\" principle?\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n", "msg_date": "Mon, 5 Jan 2004 13:57:07 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "\nOn Jan 5, 2004, at 1:57 PM, Stephan Szabo wrote:\n\n> But, if he's updating the fk table but not the keyed column, it should \n> no\n> longer be doing the check and grabbing the locks. If he's seeing it \n> grab\n> the row locks still a full test case would be handy because it'd \n> probably\n> mean we missed something.\n>\n\nI'm not *sure* it is taking any locks. The transactions appear to be \nrunning lock step (operating on different parts of the same pair of \ntables) and I was going to see if deferring the locks made the \ndifference. It is my feeling now that it will not. 
However, if there \nis a way to detect if locks are being taken, I'll do that. I'd like to \navoid dropping and recreating the foreign keys if I can since it takes \nup some bit of time on the table with 20+ million rows.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n", "msg_date": "Mon, 5 Jan 2004 14:02:00 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "On Mon, 5 Jan 2004, Vivek Khera wrote:\n\n>\n> On Jan 5, 2004, at 1:57 PM, Stephan Szabo wrote:\n>\n> > But, if he's updating the fk table but not the keyed column, it should\n> > no\n> > longer be doing the check and grabbing the locks. If he's seeing it\n> > grab\n> > the row locks still a full test case would be handy because it'd\n> > probably\n> > mean we missed something.\n> >\n>\n> I'm not *sure* it is taking any locks. The transactions appear to be\n> running lock step (operating on different parts of the same pair of\n> tables) and I was going to see if deferring the locks made the\n> difference. It is my feeling now that it will not. However, if there\n> is a way to detect if locks are being taken, I'll do that. I'd like to\n> avoid dropping and recreating the foreign keys if I can since it takes\n> up some bit of time on the table with 20+ million rows.\n\nThe only way I can think of to see the locks is to do just one of the\noperations and then manually attempting to select for update the\nassociated pk row.\n\n", "msg_date": "Mon, 5 Jan 2004 11:48:26 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "On Mon, 2004-01-05 at 14:48, Stephan Szabo wrote:\n> On Mon, 5 Jan 2004, Vivek Khera wrote:\n> \n> >\n> > On Jan 5, 2004, at 1:57 PM, Stephan Szabo wrote:\n> >\n> > > But, if he's updating the fk table but not the keyed column, it should\n> > > no\n> > > longer be doing the check and grabbing the locks. If he's seeing it\n> > > grab\n> > > the row locks still a full test case would be handy because it'd\n> > > probably\n> > > mean we missed something.\n> > >\n> >\n> > I'm not *sure* it is taking any locks. The transactions appear to be\n> > running lock step (operating on different parts of the same pair of\n> > tables) and I was going to see if deferring the locks made the\n> > difference. It is my feeling now that it will not. However, if there\n> > is a way to detect if locks are being taken, I'll do that. I'd like to\n> > avoid dropping and recreating the foreign keys if I can since it takes\n> > up some bit of time on the table with 20+ million rows.\n> \n> The only way I can think of to see the locks is to do just one of the\n> operations and then manually attempting to select for update the\n> associated pk row.\n\nWhen a locker runs into a row lock held by another transaction, the\nlocker will show a pending lock on the transaction id in pg_locks.\n\n\n", "msg_date": "Mon, 05 Jan 2004 15:14:24 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" }, { "msg_contents": "On Mon, 5 Jan 2004, Rod Taylor wrote:\n\n> On Mon, 2004-01-05 at 14:48, Stephan Szabo wrote:\n> > On Mon, 5 Jan 2004, Vivek Khera wrote:\n> >\n> > >\n> > > On Jan 5, 2004, at 1:57 PM, Stephan Szabo wrote:\n> > >\n> > > > But, if he's updating the fk table but not the keyed column, it should\n> > > > no\n> > > > longer be doing the check and grabbing the locks. 
If he's seeing it\n> > > > grab\n> > > > the row locks still a full test case would be handy because it'd\n> > > > probably\n> > > > mean we missed something.\n> > > >\n> > >\n> > > I'm not *sure* it is taking any locks. The transactions appear to be\n> > > running lock step (operating on different parts of the same pair of\n> > > tables) and I was going to see if deferring the locks made the\n> > > difference. It is my feeling now that it will not. However, if there\n> > > is a way to detect if locks are being taken, I'll do that. I'd like to\n> > > avoid dropping and recreating the foreign keys if I can since it takes\n> > > up some bit of time on the table with 20+ million rows.\n> >\n> > The only way I can think of to see the locks is to do just one of the\n> > operations and then manually attempting to select for update the\n> > associated pk row.\n>\n> When a locker runs into a row lock held by another transaction, the\n> locker will show a pending lock on the transaction id in pg_locks.\n\nYeah, but AFAIR that won't let you know if it's blocking on the particular\nrow lock you're expecting.\n", "msg_date": "Mon, 5 Jan 2004 14:33:17 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deferred foreign keys" } ]
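For what it's worth, a sketch of what a deferrable constraint and a deferred check look like, using the owners/owner_id naming from earlier in the thread. The table and extra column names are made up, and whether an existing constraint can be made deferrable on 7.4 without dropping and re-adding it is exactly the open question above:

    ALTER TABLE users
        ADD CONSTRAINT users_owner_fk
        FOREIGN KEY (owner_id) REFERENCES owners (owner_id)
        DEFERRABLE INITIALLY IMMEDIATE;

    BEGIN;
    SET CONSTRAINTS users_owner_fk DEFERRED;   -- FK checks postponed to COMMIT
    UPDATE users SET last_seen = now() WHERE user_id = 1;
    COMMIT;

    -- Rod's suggestion, roughly: while two sessions appear to run in
    -- lockstep, look for ungranted lock requests
    SELECT * FROM pg_locks WHERE NOT granted;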
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI guess this may have come up before, but now that 7.4 has the IN with \nimproved performance, it may be time to revisit this topic.\n\nCompare these two algorithms (in plpgsql):\n\n(a)\nDELETE FROM foo WHERE ctid IN (\n\tSELECT foo.ctid\n\tFROM ... WHERE ...\n);\n\n(b)\nFOR result IN SELECT foo.ctid FROM ... WHERE ... LOOP\n\tDELETE FROM foo WHERE ctid = result;\nEND LOOP;\n\nMy poor understanding of how the IN operator works leaves me to believe \nthat for a large set of data in the IN group, a hash is used and a \ntablescan done on foo. However, for a small set of data in the IN group, \nno tablescan is performed.\n\nI assume that (a) works at O(ln(N)) for large N, and O(N) for small N, \nwhile (b) works at O(N) universally. Therefore, (a) is the superior \nalgorithm. I welcome criticism and correction.\n\n- -- \nJonathan Gardner\[email protected]\nLive Free, Use Linux!\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/8aipWgwF3QvpWNwRAk8GAJoDWISjxG7LMB1FdCFmwlOafsmZTwCePx18\nlyHLNBJ8nP0RHzv6WfRzQ+M=\n=FPdW\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 30 Dec 2003 08:32:38 -0800", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE ... WHERE ctid IN (...) vs. Iteration" } ]
[ { "msg_contents": "Happy new year!\n\nWhen performing\n\"pg_restore -L list --disable-triggers -d db1 -v my_archive\"\n, my hard disk for Linux box (with 96MB RAM) becomes \nextremely busy.\n\nOne example is that it takes more than 5 miniutes to restore \n for a table from 7800 rows. Each row has less than 117 \nbytes in length with total of 6 columns. Hence I think the \namount of the to-restore data is not the cause of \nperformance problem.\n\nThe swap size is only 68K. Therefore, I don't think small \namount of RAM is a problem, either.\n\nkjournald uses 2% CPU and postmaster uses 10%. CPU is about \n95% idle.\n\nWhat makes the restore so slow? How do I speed it up?\n\nRegards,\nCN\n", "msg_date": "Fri, 02 Jan 2004 01:14:02 +0800 (CST)", "msg_from": "\"cnliou\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore makes disk busy" } ]
[ { "msg_contents": "I have these two tables:\n\n Table \"de.summary\"\n Column | Type | Modifiers \n--------------+-----------------------------+---------------\n isbn | character varying(10) | not null\n source | character varying(20) | not null\n condition | smallint | \n availability | smallint | \n price_list | numeric(11,2) | \n price_min | numeric(11,2) | \n last_update | timestamp without time zone | default now()\nIndexes:\n \"summary_pkey\" primary key, btree (isbn, source)\n\n Table \"de.inventory\"\n Column | Type | Modifiers \n--------------+-----------------------+-----------\n isbn | character varying(10) | \n condition | integer | \n availability | integer | \n price | numeric(9,2) | \nIndexes:\n \"inventory_isbn_idx\" btree (isbn)\n\n\nBoth tables are clustered on their respective indexes. The entire\ndatabase has been freshly VACUUM FULL'd and ANALYZE'd (after\nclustering).\n\nI want to run the following query, but it takes a *very* long time. \nLike this:\n\nbookshelf=> explain analyze update summary set price_min=0,\navailability=2, condition=9 where isbn = inventory.isbn and price_min =\ninventory.price; \n QUERY PLAN \n-----------------------------------------------------------------\n----------------------------------------------------------- \nMerge Join (cost=496170.66..517271.50 rows=5051 width=51) (actual\ntime=226940.723..292247.643 rows=419277 loops=1) \n Merge Cond: ((\"outer\".price_min = \"inner\".price) AND\n(\"outer\".\"?column7?\" = \"inner\".\"?column3?\"))\n -> Sort (cost=366877.05..371990.05 rows=2045201 width=61) (actual\ntime=162681.929..177216.158 rows=2045200 loops=1)\n Sort Key: summary.price_min, (summary.isbn)::text \n -> Seq Scan on summary (cost=0.00..44651.01 rows=2045201 width=61)\n(actual time=8.139..22179.379 rows=2045201 loops=1)\n -> Sort(cost=129293.61..131499.09 rows=882192 width=25) (actual\ntime=64213.663..67563.175 rows=882192 loops=1)\n Sort Key: inventory.price, (inventory.isbn)::text \n -> Seq Scan on inventory(cost=0.00..16173.92 rows=882192\nwidth=25)(actual time=5.773..21548.942 rows=882192 loops=1) \nTotal runtime: 3162319.477 ms(9 rows)\n\nRunning what I believe to be the comparable select query is more\nreasonable:\n\nbookshelf=> explain analyze select s.* from summary s, inventory i where\ns.isbn = i.isbn and s.price_min = i.price; \n QUERY PLAN \n-----------------------------------------------------------------------\nMerge Join (cost=495960.66..517061.50 rows=5051 width=59) (actual\ntime=194048.974..215835.982 rows=419277 loops=1)\n Merge Cond: ((\"outer\".price_min = \"inner\".price) AND\n(\"outer\".\"?column8?\" =\"inner\".\"?column3?\"))\n -> Sort (cost=366667.05..371780.05 rows=2045201 width=59) (actual\ntime=147678.109..149945.170 rows=2045200 loops=1)\n Sort Key: s.price_min, (s.isbn)::text\n -> Seq Scan on summary s (cost=0.00..49431.01 rows=2045201\nwidth=59) (actual time=0.056..9304.803 rows=2045201 loops=1)\n -> Sort (cost=129293.61..131499.09 rows=882192 width=25) (actual\ntime=46338.696..47183.739 rows=882192 loops=1)\n Sort Key: i.price, (i.isbn)::text\n -> Seq Scan on inventory i (cost=0.00..16173.92 rows=882192\nwidth=25) (actual time=0.089..2419.187 rows=882192 loops=1)\nTotal runtime: 216324.171 ms\n\n\nI had figured that the tables would get sorted on isbn, because of the\nclustering. 
I understand why price might get chosen (fewer matches),\nbut the planner seems to be making the wrong choice:\n\nbookshelf=> explain analyze select s.* from summary s, inventory i where\ns.isbn = i.isbn; \n QUERY PLAN \n-----------------------------------------------------------------------\nMerge Join (cost=489500.66..512953.69 rows=882192 width=59) (actual\ntime=152247.741..174408.812 rows=882192 loops=1)\n Merge Cond: (\"outer\".\"?column8?\" = \"inner\".\"?column2?\")\n -> Sort (cost=366667.05..371780.05 rows=2045201 width=59) (actual\ntime=118562.097..120817.894 rows=2045146 loops=1)\n Sort Key:(s.isbn)::text\n -> Seq Scan on summary s (cost=0.00..49431.01 rows=2045201\nwidth=59) (actual time=0.062..8766.683 rows=2045201 loops=1)\n -> Sort (cost=122833.61..125039.09 rows=882192 width=14)(actual\ntime=33685.455..34480.190 rows=882192 loops=1)\n Sort Key:(i.isbn)::text\n -> Seq Scan on inventory i (cost=0.00..16173.92 rows=882192\nwidth=14) (actual time=0.088..1942.173 rows=882192 loops=1)\n Total runtime: 174926.115 ms\n\nSo, my first question is: why is the planner still sorting on price when\nisbn seems (considerably) quicker, and how can I force it to sort by\nisbn(if I even should)?\n\nThe second question is: why, oh why does the update take such and\nobscenely long time to complete? The 175s (and even 216s) for the\nselect seems reasonable given the size of the tables, but not 3000s to\nupdate the same rows. The processor (AMD 1.3GHz) is 90%+ utilization for\nmost of the execution time.\n\nI can post more information if it would be helpful, but this post is\nlong enough already.\n\nTIA, and happy new year.\n\n-mike\n\n\n-- \nMike Glover\nKey ID BFD19F2C <[email protected]>", "msg_date": "Thu, 1 Jan 2004 19:34:01 -0800", "msg_from": "Mike Glover <[email protected]>", "msg_from_op": true, "msg_subject": "Very slow update + not using clustered index" }, { "msg_contents": "Mike Glover <[email protected]> writes:\n> I want to run the following query, but it takes a *very* long time. \n> Like this:\n> bookshelf=> explain analyze update summary set price_min=0,\n> availability=2, condition=9 where isbn = inventory.isbn and price_min =\n> inventory.price; \n> ...\n> Total runtime: 3162319.477 ms(9 rows)\n\n> Running what I believe to be the comparable select query is more\n> reasonable:\n\n> bookshelf=> explain analyze select s.* from summary s, inventory i where\n> s.isbn = i.isbn and s.price_min = i.price; \n> ...\n> Total runtime: 216324.171 ms\n\nAFAICS these plans are identical, and therefore the difference in\nruntime must be ascribed to the time spent actually doing the updates.\nIt seems unlikely that the raw row inserts and updating the single\nindex could be quite that slow --- perhaps you have a foreign key\nor trigger performance problem?\n\n> So, my first question is: why is the planner still sorting on price when\n> isbn seems (considerably) quicker, and how can I force it to sort by\n> isbn(if I even should)?\n\nIs this PG 7.4? It looks to me like the planner *should* consider both\npossible orderings of the mergejoin sort keys. 
I'm not sure that it\nknows enough to realize that the key with more distinct values should be\nput first, however.\n\nA quick experiment shows that if the planner does not have any reason to\nprefer one ordering over another, the current coding will put the last\nWHERE clause first:\n\nregression=# create table t1(f1 int, f2 int);\nCREATE TABLE\nregression=# create table t2(f1 int, f2 int);\nCREATE TABLE\nregression=# explain select * from t1,t2 where t1.f1=t2.f1 and t1.f2=t2.f2;\n QUERY PLAN\n-------------------------------------------------------------------------\n Merge Join (cost=139.66..154.91 rows=25 width=16)\n Merge Cond: ((\"outer\".f2 = \"inner\".f2) AND (\"outer\".f1 = \"inner\".f1))\n -> Sort (cost=69.83..72.33 rows=1000 width=8)\n Sort Key: t1.f2, t1.f1\n -> Seq Scan on t1 (cost=0.00..20.00 rows=1000 width=8)\n -> Sort (cost=69.83..72.33 rows=1000 width=8)\n Sort Key: t2.f2, t2.f1\n -> Seq Scan on t2 (cost=0.00..20.00 rows=1000 width=8)\n(8 rows)\n\nregression=# explain select * from t1,t2 where t1.f2=t2.f2 and t1.f1=t2.f1;\n QUERY PLAN\n-------------------------------------------------------------------------\n Merge Join (cost=139.66..154.91 rows=25 width=16)\n Merge Cond: ((\"outer\".f1 = \"inner\".f1) AND (\"outer\".f2 = \"inner\".f2))\n -> Sort (cost=69.83..72.33 rows=1000 width=8)\n Sort Key: t1.f1, t1.f2\n -> Seq Scan on t1 (cost=0.00..20.00 rows=1000 width=8)\n -> Sort (cost=69.83..72.33 rows=1000 width=8)\n Sort Key: t2.f1, t2.f2\n -> Seq Scan on t2 (cost=0.00..20.00 rows=1000 width=8)\n(8 rows)\n\nand so you could probably improve matters just by switching the order of\nyour WHERE clauses. Of course this answer will break as soon as anyone\ntouches any part of the related code, so I'd like to try to fix it so\nthat there is actually a principled choice made. Could you send along\nthe pg_stats rows for these columns?\n\n> The second question is: why, oh why does the update take such and\n> obscenely long time to complete?\n\nSee above --- the problem is not within the plan, but must be sought\nelsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jan 2004 23:06:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow update + not using clustered index " }, { "msg_contents": "Tom-\n\n Thanks for the quick response. More details are inline.\n\n-mike\n\nOn Thu, 01 Jan 2004 23:06:11 -0500\nTom Lane <[email protected]> wrote:\n\n> Mike Glover <[email protected]> writes:\n\n> AFAICS these plans are identical, and therefore the difference in\n> runtime must be ascribed to the time spent actually doing the updates.\n> It seems unlikely that the raw row inserts and updating the single\n> index could be quite that slow --- perhaps you have a foreign key\n> or trigger performance problem?\n\n There are no foreign keys or triggers for either of the tables.\n\n> Is this PG 7.4? \n\nYes, PG 7.4\n\n> \n> A quick experiment shows that if the planner does not have any reason\n> to prefer one ordering over another, the current coding will put the\n> last WHERE clause first:\n[snip]> \n> and so you could probably improve matters just by switching the order\n> of your WHERE clauses. Of course this answer will break as soon as\n> anyone touches any part of the related code, so I'd like to try to fix\n> it so that there is actually a principled choice made. 
Could you send\n> along the pg_stats rows for these columns?\n> \n\nIt looks like the planner is already making a principled choice:\n\nbookshelf=> explain select s.* from summary s, inventory i where s.isbn\n= i.isbn and s.price_min = i.price; \n QUERY PLAN \n-----------------------------------------------------------------------\nMerge Join (cost=491180.66..512965.72 rows=9237 width=58)\n Merge Cond: ((\"outer\".price_min = \"inner\".price)\nAND (\"outer\".\"?column8?\" = \"inner\".\"?column3?\"))\n -> Sort (cost=361887.05..367000.05 rows=2045201 width=58)\n Sort Key: s.price_min, (s.isbn)::text\n -> Seq Scan on summary s (cost=0.00..44651.01 rows=2045201\nwidth=58)\n -> Sort (cost=129293.61..131499.09 rows=882192 width=25)\n Sort Key: i.price, (i.isbn)::text\n -> Seq Scan on inventory i (cost=0.00..16173.92 rows=882192\nwidth=25)\n(8 rows)\n\nbookshelf=> explain select s.* from summary s, inventory i where\ns.price_min = i.price and s.isbn = i.isbn; \n QUERY PLAN \n-----------------------------------------------------------------------\nMerge Join (cost=491180.66..512965.72 rows=9237 width=58)\n Merge Cond: ((\"outer\".price_min = \"inner\".price) AND\n(\"outer\".\"?column8?\" =\"inner\".\"?column3?\"))\n -> Sort (cost=361887.05..367000.05 rows=2045201 width=58)\n Sort Key: s.price_min, (s.isbn)::text\n -> Seq Scan on summary s (cost=0.00..44651.01 rows=2045201\nwidth=58)\n -> Sort(cost=129293.61..131499.09 rows=882192 width=25)\n Sort Key: i.price, (i.isbn)::text\n -> Seq Scan on inventory i (cost=0.00..16173.92 rows=882192\nwidth=25)\n(8 rows)\n\nHere are the pg_stats rows:\nbookshelf=> select * from pg_stats where schemaname='de' and\ntablename='inventory' and attname='isbn'; schemaname | tablename |\nattname | null_frac | avg_width | n_distinct | most_common_vals |\nmost_common_freqs | \nhistogram_bounds |\ncorrelation\n------------+-----------+---------+-----------+-----------+------------\n+------------------+-------------------+-------------------------------\n-----------------------------------------------------------------------\n----------------------+------------- de | inventory | isbn | \n 0 | 14 | -1 | | \n|\n{0002551543,0198268211,0375507299,0486231305,0673395197,0767901576,0810\n304430,0865738890,0931595029,1574160052,9971504014} | 1(1 row)\n\nbookshelf=> select * from pg_stats where schemaname='de' and\ntablename='inventory' and attname='price'; schemaname | tablename |\nattname | null_frac | avg_width | n_distinct | \nmost_common_vals | \n most_common_freqs | \n histogram_bounds |\ncorrelation\n------------+-----------+---------+-----------+-----------+------------\n+--------------------------------------------------------------+-------\n-----------------------------------------------------------------------\n-----------------------+-----------------------------------------------\n--------------------------+------------- de | inventory | price \n | 0 | 11 | 1628 |\n{59.95,0.00,54.88,53.30,60.50,64.25,73.63,49.39,50.02,53.37} |\n{0.259667,0.00633333,0.00533333,0.00466667,0.00466667,0.00466667,0.0046\n6667,0.00433333,0.004,0.004} |\n{49.16,52.06,55.53,59.56,63.78,68.90,76.90,88.53,106.16,143.75,1538.88}\n| 0.149342(1 row)\n\nbookshelf=> select * from pg_stats where schemaname='de' and\ntablename='summary' and attname='isbn'; schemaname | tablename | attname\n| null_frac | avg_width | n_distinct | most_common_vals |\nmost_common_freqs | \nhistogram_bounds 
|\ncorrelation\n------------+-----------+---------+-----------+-----------+------------\n+------------------+-------------------+-------------------------------\n-----------------------------------------------------------------------\n----------------------+------------- de | summary | isbn | \n 0 | 14 | -1 | | \n|\n{0001984209,020801912X,0395287693,055214911X,0722525915,0787630896,0822\n218100,0883856263,1413900275,1843910381,9999955045} | 1(1 row)\n\nbookshelf=> select * from pg_stats where schemaname='de' and\ntablename='summary' and attname='price_min'; schemaname | tablename | \nattname | null_frac | avg_width | n_distinct | \nmost_common_vals | \n most_common_freqs | \n histogram_bounds |\ncorrelation\n------------+-----------+-----------+-----------+-----------+----------\n--+---------------------------------------------------------+----------\n-----------------------------------------------------------------------\n-------------------+---------------------------------------------------\n------------------+------------- de | summary | price_min | \n 0 | 10 | 1532 |\n{0.00,59.95,6.95,6.00,4.07,10.17,11.53,10.85,4.75,8.81} |\n{0.425333,0.029,0.0193333,0.00533333,0.00333333,0.00333333,0.00333333,0\n.003,0.00266667,0.00266667} |\n{0.05,7.11,10.30,14.28,19.54,27.86,50.47,61.25,76.44,104.79,744.73} | \n0.0546667(1 row)\n\n(mangled a bit by the auto-linewrap, I'm afraid)\n\n> > The second question is: why, oh why does the update take such and\n> > obscenely long time to complete?\n> \n> See above --- the problem is not within the plan, but must be sought\n> elsewhere.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n-- \nMike Glover\nKey ID BFD19F2C <[email protected]>", "msg_date": "Thu, 1 Jan 2004 22:16:30 -0800", "msg_from": "Mike Glover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow update + not using clustered index" }, { "msg_contents": "Mike Glover <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> It seems unlikely that the raw row inserts and updating the single\n>> index could be quite that slow --- perhaps you have a foreign key\n>> or trigger performance problem?\n\n> There are no foreign keys or triggers for either of the tables.\n\nHmph. It's clear that it is the update overhead that's taking the time\n(since you show 292 seconds actual time in the update's top plan node\n--- that's the time to find the rows and compute their new values, and\nall the rest of the elapsed 3162 sec has to be update overhead). Maybe\nyou just have a slow disk.\n\nJust out of curiosity, how much time does the update take if you don't\nhave any index on the summary table? Try\n\n\tcreate temp table tsummary as select * from summary;\n\tvacuum analyze tsummary;\n\texplain analyze update tsummary set ... ;\n\n\n>> A quick experiment shows that if the planner does not have any reason\n>> to prefer one ordering over another, the current coding will put the\n>> last WHERE clause first:\n> [snip]> \n\n> It looks like the planner is already making a principled choice:\n\nAfter a little bit of experimentation I was reminded that the planner\ndoes account for the possibility that a merge join can stop short of\nfull execution when the first mergejoin columns have different data\nranges. 
In this case it's preferring to put price first because there\nis a greater discrepancy in the ranges of s.price_min and i.price than\nthere is in the ranges of the isbn columns. I'm not sure that it's\nwrong. You could try increasing the statistics target on the price\ncolumns (and re-ANALYZing) to see if finer-grain data changes that\nestimate at all.\n\nIn any case, the fact that the chosen plan doesn't make use of your\nindex on isbn doesn't mean that such a plan wasn't considered. It was,\nbut this plan was estimated to be less expensive. You could check out\nalternative plans and see if the estimate is accurate by fooling with\nenable_seqscan and enable_sort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jan 2004 10:45:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow update + not using clustered index " } ]
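To make the advice in this thread concrete, here is a rough sketch of the diagnostic steps Tom Lane suggests, written against the table and column names quoted above (summary.price_min, inventory.price, isbn); the statistics target of 100 is only an illustrative value, and enable_seqscan is toggled purely for experimentation:

    -- give the price columns a finer histogram, then refresh statistics
    ALTER TABLE summary   ALTER COLUMN price_min SET STATISTICS 100;
    ALTER TABLE inventory ALTER COLUMN price     SET STATISTICS 100;
    ANALYZE summary;
    ANALYZE inventory;

    -- check whether the cost estimate for the alternative plan is accurate
    SET enable_seqscan = off;    -- session-local, for testing only
    EXPLAIN ANALYZE SELECT s.* FROM summary s, inventory i
     WHERE s.isbn = i.isbn AND s.price_min = i.price;
    RESET enable_seqscan;

    -- separate update overhead from plan cost, per the temp-table suggestion
    CREATE TEMP TABLE tsummary AS SELECT * FROM summary;
    VACUUM ANALYZE tsummary;
    EXPLAIN ANALYZE UPDATE tsummary SET price_min = 0, availability = 2, condition = 9
     WHERE isbn = inventory.isbn AND price_min = inventory.price;

If the update against the index-free copy is still far slower than the equivalent select, the time is going into row rewriting and disk I/O rather than into the plan itself.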
[ { "msg_contents": "Hi,\n\nI have tried to tune a database that I'm using only for statistical access ... \nI mean that I'm importing a dump of my production database each night, but \npreserving some aggregat tables, and statistics ones ... (that I'm \ncalculating after the importation of the dump). This database is only used by \nfew people but make some big requests, tables have mixed sizes between 200 \n000 rows up to 10 000 000 records.\n\nThis server's got 2Gb memory, and 100 Gb RAID 5 Hard disk, is a woody Debian, \nand I'm using a self compiled version of PotsgreSQL v7.3.4.\n\nMy postgresql.conf file looks like this :\n\n#\n# Shared Memory Size\n#\nshared_buffers = 31000 # min max_connections*2 or 16, 8KB each\nmax_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\nwal_buffers = 32 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 32768 # min 64, size in KB\nvacuum_mem = 32768 # min 1024, size in KB\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 160 # range 30-3600, in seconds\neffective_cache_size = 400000 # typically 8KB each\nrandom_page_cost = 1.5 # units are one sequential page fetch cost\n\nBefore my effective_cache_size was 1000 ... and reading some tuning pages and \ncomments telling : \"effective_cache_size: You should adjust this according to \nthe amount of free memory you have.\" ... I grow it to 400000 ...\n\nThen ... first point I'm only using 5% of my memory (all linux system,and \nsoftware) ... and no swap (good point for this) ... Why I don't use more \nmemory ... ??\n\nSecond point ... after importing my dump ... I make a vacuum full analyze of \nmy base (in same time because of my caculation of the day before for my \naggregats and stats tables about 200 000 row deleted and/or inserted for more \nthan 20 tables (each)) ... but It takes about 5 hours ...\n\nExample of a (for me) really slow vacuum ... more than 85 min for a table with \nonly 9105740 records ...\n\nINFO: ᅵ--Relation public.hebcnt--\nINFO: ᅵPages 175115: Changed 0, reaped 3309, Empty 0, New 0; Tup 9105740: Vac \n175330, Keep/VTL 0/0, UnUsed 0, MinLen 148, MaxLen 148; Re-using: Free/Avail. \nSpace 46265980/26336600; EndEmpty/Avail. 
Pages 0/3310.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 6.75s/1.67u sec elapsed 91.41 sec.\nINFO: ᅵIndex ix_hebcnt_idc: Pages 40446; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 2.94s/6.17u sec elapsed 222.34 sec.\nINFO: ᅵIndex ix_hebcnt_cweek: Pages 229977; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 9.64s/3.14u sec elapsed 1136.01 sec.\nINFO: ᅵIndex ix_hebcnt_cpte: Pages 72939; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 4.86s/9.13u sec elapsed 398.73 sec.\nINFO: ᅵIndex ix_hebcnt_idctweek: Pages 66014; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 3.87s/8.61u sec elapsed 163.26 sec.\nINFO: ᅵRel hebcnt: Pages: 175115 --> 171807; Tuple(s) moved: 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 16.49s/52.04u sec elapsed 1406.34 sec.\nINFO: ᅵIndex ix_hebcnt_idc: Pages 40446; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 1.76s/5.65u sec elapsed 124.98 sec.\nINFO: ᅵIndex ix_hebcnt_cweek: Pages 230690; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 10.07s/2.60u sec elapsed 1095.17 sec.\nINFO: ᅵIndex ix_hebcnt_cpte: Pages 72940; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 4.51s/8.90u sec elapsed 353.45 sec.\nINFO: ᅵIndex ix_hebcnt_idcweek: Pages 66015; Tuples 9105740: Deleted 175330.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 3.96s/8.58u sec elapsed 147.64 sec.\nINFO: ᅵ--Relation pg_toast.pg_toast_76059978--\nINFO: ᅵPages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL \n0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/\nAvail. Pages 0/0.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: ᅵIndex pg_toast_76059978_index: Pages 1; Tuples 0.\nᅵᅵᅵᅵᅵᅵᅵᅵCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: ᅵAnalyzing public.hebcnt\n\nStructure of this table :\nfrstats=# \\d hebcnt\n Table \"public.hebcnt\"\n Column | Type | Modifiers\n------------------+-----------------------------+------------------------\n id_c | integer | not null\n contrat | text | not null\n arrete_week | text | not null\n cpte | text | not null\n is_active | boolean | not null\n year | text | not null\n use | integer | not null\n use_priv | integer | not null\n use_ind | integer | not null\n passback | integer | not null\n resa | integer | not null\n noshow | integer | not null\n nbc | integer | not null\n dureecnt | integer | not null\n dureecpt | integer | not null\n anciennete2 | integer | not null\n c_week | text | not null\n blacklist | integer | not null\n dcrea | timestamp without time zone | not null default now()\n dmaj | timestamp without time zone |\nIndexes: ix_hebcnt_cweek btree (c_week),\n ix_hebcnt_cpte btree (cpte),\n ix_hebcnt_idc btree (id_c),\n ix_hebcnt_idcweek btree (id_c, c_week)\n\nAny idea ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n\n", "msg_date": "Fri, 2 Jan 2004 10:42:57 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Why memory is not used ? Why vacuum so slow ?" }, { "msg_contents": "Here's a scheme for query optimization that probably needs to be\navoided in that it would run afoul of a patent held by Oracle...\n\n<http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/srchnum.htm&r=1&f=G&l=50&s1=5761654.WKU.&OS=PN/5761654&RS=PN/5761654>\n\nIt looks like what they have patented is pretty much a \"greedy search\"\nheuristic, starting by finding the table in a join that has the\ngreatest selectivity (e.g. 
- where the number of entries selected is\ncut the most by the selection criteria), and then describes how to\nsearch for the \"best\" approach to joining in the other tables.\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\n\"If I could find a way to get [Saddam Hussein] out of there, even\nputting a contract out on him, if the CIA still did that sort of a\nthing, assuming it ever did, I would be for it.\" -- Richard M. Nixon\n", "msg_date": "Fri, 02 Jan 2004 07:33:46 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Tuning Techniques To Avoid?" }, { "msg_contents": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]> writes:\n> Second point ... after importing my dump ... I make a vacuum full analyze of \n> my base (in same time because of my caculation of the day before for my \n> aggregats and stats tables about 200 000 row deleted and/or inserted for more\n> than 20 tables (each)) ... but It takes about 5 hours ...\n\nDon't do vacuum full. You should not need it in ordinary circumstances,\nif you are doing plain vacuums on a reasonable schedule and you have the\nFSM parameters set high enough. (You do not BTW ... with 175000 pages in\nthis table alone, 10000 FSM pages for the whole database is surely too\nlow.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jan 2004 09:42:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why memory is not used ? Why vacuum so slow ? " }, { "msg_contents": "Hi Tom,\n\nLe Vendredi 2 Janvier 2004 15:42, Tom Lane a ᅵcrit :\n> =?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]> writes:\n> > Second point ... after importing my dump ... I make a vacuum full analyze\n> > of my base (in same time because of my caculation of the day before for\n> > my aggregats and stats tables about 200 000 row deleted and/or inserted\n> > for more than 20 tables (each)) ... but It takes about 5 hours ...\n>\n> Don't do vacuum full. You should not need it in ordinary circumstances,\n> if you are doing plain vacuums on a reasonable schedule and you have the\n> FSM parameters set high enough. (You do not BTW ... with 175000 pages in\n> this table alone, 10000 FSM pages for the whole database is surely too\n> low.)\n\nOk for this ... I have now configured the FSM pages to 300 000 ... then when I \nhave started the database I get a message about my SHMMAX too low ... it was \nset to :\nmore /proc/sys/kernel/shmmax\n262111232\n\nThen I put 300000000 ... PostgreSQL accepted to start ... What can be maximum \nvalue for this ? To be usufull to the entire configuration ... ?\n\nLike this during during the vacuum full this is my used memory ...\n total used free shared buffers cached\nMem: 2069608 2059052 10556 0 8648 1950672\n-/+ buffers/cache: 99732 1969876\nSwap: 2097136 16080 2081056\n\nSeems that's I'm really using 5% of my memory ??? no ? or I missed something ?\n\nNow difficult to test again ... I will have to wait tomorrow morning to see \nthe result ... because I have already vacuumed the base to day ...\n\nBut I have done again a full vacuum to see if I have quick visible \ndifference ... and I have also saw that the full vacuum for pg_atribute seems \nto be so slow ... more than 1 min for 7256 tupples ? Is this is normal ?\n\nINFO: --Relation pg_catalog.pg_attribute--\nINFO: Pages 119: Changed 0, reaped 1, Empty 0, New 0; Tup 7256: Vac 0, Keep/\nVTL 0/0, UnUsed 3, MinLen 128, MaxLen 128; Re-using: Free/Avail. 
Space \n14664/504; EndEmpty/Avail. Pages 0/1.\n CPU 0.00s/0.00u sec elapsed 0.08 sec.\nINFO: Index pg_attribute_relid_attnam_index: Pages 21082; Tuples 7256: \nDeleted 0.\n CPU 0.83s/0.13u sec elapsed 59.32 sec.\nINFO: Index pg_attribute_relid_attnum_index: Pages 5147; Tuples 7256: Deleted \n0.\n CPU 0.26s/0.03u sec elapsed 8.79 sec.\nINFO: Rel pg_attribute: Pages: 119 --> 119; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Analyzing pg_catalog.pg_attribute\n\nThanks for your help ...\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n\n", "msg_date": "Fri, 2 Jan 2004 16:18:28 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why memory is not used ? Why vacuum so slow ?" }, { "msg_contents": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]> writes:\n> and I have also saw that the full vacuum for pg_atribute seems \n> to be so slow ... more than 1 min for 7256 tupples ? Is this is normal ?\n\n> INFO: --Relation pg_catalog.pg_attribute--\n> INFO: Pages 119: Changed 0, reaped 1, Empty 0, New 0; Tup 7256: Vac 0, Keep/\n> VTL 0/0, UnUsed 3, MinLen 128, MaxLen 128; Re-using: Free/Avail. Space \n> 14664/504; EndEmpty/Avail. Pages 0/1.\n> CPU 0.00s/0.00u sec elapsed 0.08 sec.\n> INFO: Index pg_attribute_relid_attnam_index: Pages 21082; Tuples 7256: \n> Deleted 0.\n> CPU 0.83s/0.13u sec elapsed 59.32 sec.\n> INFO: Index pg_attribute_relid_attnum_index: Pages 5147; Tuples 7256: Deleted \n> 0.\n> CPU 0.26s/0.03u sec elapsed 8.79 sec.\n\nYou're suffering from index bloat (21000 pages in an index for a\n119-page table!?). Updating to 7.4 would probably fix this, but\nif that's not practical consider reindexing pg_attribute.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jan 2004 10:51:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why memory is not used ? Why vacuum so slow ? " }, { "msg_contents": "\nChristopher Browne <[email protected]> writes:\n\n> Here's a scheme for query optimization that probably needs to be\n> avoided in that it would run afoul of a patent held by Oracle...\n\nWhat does this have to do with Herv� Piedvache's post \"Why memory is not\nused?\" ?\n\n-- \ngreg\n\n", "msg_date": "02 Jan 2004 21:18:45 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Techniques To Avoid?" } ]
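As a rough sketch of the remedies discussed in this thread: raise the free space map settings, replace the nightly VACUUM FULL with plain vacuums, and rebuild the bloated catalog index. The FSM numbers below are only illustrative (they should be sized from VACUUM VERBOSE output for the whole cluster), and on 7.3 the REINDEX reference page should be checked for any extra requirements when reindexing system catalogs:

    # postgresql.conf fragment -- illustrative values only
    max_fsm_pages = 300000        # at least the number of pages with reclaimable space
    max_fsm_relations = 1000

    -- scheduled maintenance, instead of VACUUM FULL ANALYZE
    VACUUM ANALYZE;

    -- rebuild the bloated pg_attribute indexes (run as superuser)
    REINDEX TABLE pg_attribute;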
[ { "msg_contents": "I tried searching the archives to find something like this. The search\nfunction doesn't like me much, and believe me the feeling is mutual. So\nI'm forced to pollute your inboxes with yet another \"why the hell isn't\nthis thing using my index\" email. I apologize in advance.\n\nI have a many-to-many relation table with a multipart primary key:\n\nsiren=# \\d listcontact\n Table \"public.listcontact\"\n Column | Type | Modifiers \n----------------+---------+-----------\n contactlistid | integer | not null\n contactid | bigint | not null\nIndexes: listcontact_pkey primary key btree (contactlistid, contactid)\n\n(There were some FKs in there too, but I stripped off everything I could\nduring my investigation and they didn't survive.) I'm doing some\nperformance testing so I loaded it with a few elephant piles:\n\nsiren=# select count(*) from listcontact;\n count \n---------\n 1409196\n(1 row)\n\nAnd packed it down good:\n\nsiren=# vacuum full analyze;\nVACUUM\n\nI didn't get the performance I expected. I took one of our queries and\nmutilated it and found some curious behavior on this table. I started\nrunning queries on just this table and couldn't explain what I was\nseeing. I tried this:\n\nsiren=# EXPLAIN ANALYZE SELECT * FROM ListContact WHERE contactListID=-1\nAND contactID=91347;\n\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------\n Seq Scan on listcontact (cost=0.00..29427.94 rows=1 width=12) (actual\ntime=893.15..5079.52 rows=1 loops=1)\n Filter: ((contactlistid = -1) AND (contactid = 91347))\n Total runtime: 5079.74 msec\n(3 rows)\n\nA seqscan... Fair enough, there's lots of memory on this box. I didn't\nwant to see a seqscan though, I wanted to see an index. So, I disabled\nseqscan and tried it again:\n\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using listcontact_pkey on listcontact (cost=0.00..58522.64\nrows=1 width=12) (actual time=402.73..9419.77 rows=1 loops=1)\n Index Cond: (contactlistid = -1)\n Filter: (contactid = 91347)\n Total runtime: 9419.97 msec\n(4 rows)\n\nAm I reading this right? Is it only using half of the fully-qualified\npk index? How do I diagnose this? Has anyone seen this before?\n\npostgresql 7.3.1\nlinux 2.6.0\nquad xeon 450\n\nchris\n\n", "msg_date": "Sat, 03 Jan 2004 00:37:43 -0500", "msg_from": "Chris Trawick <[email protected]>", "msg_from_op": true, "msg_subject": "\"fun with multipart primary keys\" hobby kit" }, { "msg_contents": "Chris Trawick <[email protected]> writes:\n> contactid | bigint | not null\n ^^^^^^\n\n> Am I reading this right? Is it only using half of the fully-qualified\n> pk index? How do I diagnose this? Has anyone seen this before?\n\nSurely you've been around here long enough to remember the\nmust-cast-bigint-constants problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jan 2004 00:57:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"fun with multipart primary keys\" hobby kit " }, { "msg_contents": "Actually, it would appear that I was born yesterday. I had no idea. \nAdded the cast and it fell right in. Thanks!\n\nchris <-- feeling pretty dumb right now\n\n\nOn Sat, 2004-01-03 at 00:57, Tom Lane wrote:\n> Chris Trawick <[email protected]> writes:\n> > contactid | bigint | not null\n> ^^^^^^\n> \n> > Am I reading this right? 
Is it only using half of the fully-qualified\n> > pk index? How do I diagnose this? Has anyone seen this before?\n> \n> Surely you've been around here long enough to remember the\n> must-cast-bigint-constants problem.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Sat, 03 Jan 2004 01:07:14 -0500", "msg_from": "Chris Trawick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"fun with multipart primary keys\" hobby kit" } ]
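For reference, a short sketch of the fix this thread arrives at, using the listcontact table from the original post; on 7.3/7.4 the bigint index is only matched when the constant carries a compatible type, either via an explicit cast or (as suggested elsewhere in these threads) a quoted literal:

    -- plain integer literal: listcontact_pkey is only used for the first column
    SELECT * FROM listcontact WHERE contactlistid = -1 AND contactid = 91347;

    -- explicit cast: both key columns can be matched against the index
    SELECT * FROM listcontact WHERE contactlistid = -1 AND contactid = 91347::bigint;

    -- quoted literal: resolved against the column's type, with the same effect
    SELECT * FROM listcontact WHERE contactlistid = -1 AND contactid = '91347';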
[ { "msg_contents": "\nI've been debating with a collegue who argues that indexing a\nboolean column is a BAD idea and that is will actually slow\ndown queries.\n\nMy plan is to have a table with many rows sharing 'versions'\n(version/archive/history) of data where the most current row\nis the one where 'is_active' contains a true value.\n\nIf the table begins to look like this:\n\ndata_id(pk) | data_lookup_key | data_is_active | ...\n------------+-----------------+----------------+--------\n1 | banana | false | ...\n2 | banana | false | ...\n3 | banana | false | ...\n4 | banana | false | ...\n5 | banana | false | ...\n6 | banana | false | ...\n7 | banana | false | ...\n8 | banana | false | ...\n9 | banana | true | ...\n10 | apple | true | ...\n11 | pear | false | ...\n12 | pear | false | ...\n13 | pear | false | ...\n14 | pear | false | ...\n15 | pear | false | ...\n...\n1000000 | pear | true | ...\n\nWill an index on the 'data_is_active' column be used or work\nas I expect? I'm assuming that I may have a million entries\nsharing the same 'data_lookup_key' and I'll be using that to\nsearch for the active version of the row.\n\n SELECT *\n FROM table\n WHERE data_lookup_key = 'pear'\n AND data_is_active IS TRUE;\n\nDoes it make sense to have an index on data_is_active?\n\nNow, I've read that in some databases the index on a column that\nhas relatively even distribution of values over a small set of values\nwill not be efficient. \n\nI bet this is in a FAQ somewhere. Can you point me in the right\ndirection?\n\nDante\n\n\n\n\n", "msg_date": "Sat, 03 Jan 2004 19:18:34 -0600", "msg_from": "\"D. Dante Lorenso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing a Boolean or Null column?" }, { "msg_contents": "\"D. Dante Lorenso\" <[email protected]> writes:\n> Does it make sense to have an index on data_is_active?\n\nHard to say. You weren't very clear about what fraction of the table\nrows you expect to have data_is_active = true. If that's a very small\nfraction, then an index might be worthwhile.\n\nHowever, I'd suggest using a partial index that merges the is_active\ntest with some other useful behavior. For example, if this is a\ncommon pattern:\n\n> SELECT *\n> FROM table\n> WHERE data_lookup_key = 'pear'\n> AND data_is_active IS TRUE;\n\nthen what you really want is\n\nCREATE INDEX myindex ON table (data_lookup_key) WHERE data_is_active IS TRUE;\n\n> I bet this is in a FAQ somewhere. Can you point me in the right\n> direction?\n\nSee the docs on partial indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jan 2004 23:26:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column? " }, { "msg_contents": "> Will an index on the 'data_is_active' column be used or work\n> as I expect? I'm assuming that I may have a million entries\n> sharing the same 'data_lookup_key' and I'll be using that to\n> search for the active version of the row.\n\nAn index just on a boolean column won't be 'selective enough'. eg. 
The \nindex will only be able to choose 50% of the table - since it's faster \nto do a full table scan in that case, the index won't get used.\n\nA multi keyed index, however will work a bit better, eg an index over \n(data_lookup_key, data_is_active).\n\nThat way, the index will first be able to find the correct key\t(which is \nnicely selective) and then will be able to halve the resulting search \nspace to get the active ones.\n\nBTW, you shouldn't use 'banana', 'pear', etc as the data_lookup_key, you \nshould make another table like this:\n\nid\tname\n1\tbanana\n2\tapple\n3\tpear\n\nAnd then replace the data_lookup_key col with a column of integers that \nis a foreign key to the names table - waaaaay faster to process.\n\nChris\n\n", "msg_date": "Sun, 04 Jan 2004 12:32:36 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column?" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n > > Will an index on the 'data_is_active' column be used or work\n > > as I expect? I'm assuming that I may have a million entries\n > > sharing the same 'data_lookup_key' and I'll be using that to\n > > search for the active version of the row.\n\n > An index just on a boolean column won't be 'selective enough'.\n > eg. The index will only be able to choose 50% of the table -\n > since it's faster to do a full table scan in that case, the\n > index won't get used.\n\nOk, so ...evenly distributed data on small set of values forces\nsequential scan since that's faster. I expected that based on\nwhat I've read so far.\n\n > A multi keyed index, however will work a bit better, eg an index\n > over (data_lookup_key, data_is_active).\n >\n > That way, the index will first be able to find the correct\n > key (which is nicely selective) and then will be able to\n > halve the resulting ? search space to get the active ones.\n\nI'm not using the 50% TRUE / 50% FALSE model. My model will be\nmore like only ONE value IS TRUE for 'is_active' for each\n'data_lookup_key' in my table. All the rest are FALSE. In\nthis case for 100 rows all having the same 'data_lookup_key'\nwe are looking at a 99% FALSE / 1% TRUE model ... and what I'll\nbe searching for is the ONE TRUE.\n\nIn this case, it WILL pay off to have the index on a boolean\ncolumn, yes? Will I win my debate with my collegue? ;-)\n\nI think Tom Lanes suggestion of partial indexes is what I need to\nlook into.\n\n > BTW, you shouldn't use 'banana', 'pear', etc as the data_lookup_key,\n > you should make another table like this: ... And then replace the\n > data_lookup_key col with a column of integers that is a foreign\n > key to the names table - waaaaay faster to process.\n\nGotcha, yeah, I'm targeting as close to 3NF as I get get. Was just\ntrying to be generic for my example ... bad example, oops.\n\nDante\n\n", "msg_date": "Sat, 03 Jan 2004 23:24:37 -0600", "msg_from": "\"D. Dante Lorenso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing a Boolean or Null column?" }, { "msg_contents": "> In this case, it WILL pay off to have the index on a boolean\n> column, yes? Will I win my debate with my collegue? 
;-)\n> \n> I think Tom Lanes suggestion of partial indexes is what I need to\n> look into.\n\nYes, given that it will be highly skewed towards false entries, Tom's \nsuggestion is perfect.\n\nChris\n\n", "msg_date": "Sun, 04 Jan 2004 15:21:01 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column?" }, { "msg_contents": "> Ok, so ...evenly distributed data on small set of values forces\n> sequential scan since that's faster. I expected that based on\n> what I've read so far.\n\nActually, it's more a case of that fetching an item via and index is \nconsidered, say, four times slower than fetching something off a \nsequential scan (sort of). Hence, if you are selecting more than 25% of \nthe table, then a sequential scan will be faster, even though it has to \nprocess more rows.\n\nChris\n\n", "msg_date": "Sun, 04 Jan 2004 15:22:17 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column?" }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> Ok, so ...evenly distributed data on small set of values forces\n>> sequential scan since that's faster. I expected that based on\n>> what I've read so far.\n\n> Actually, it's more a case of that fetching an item via and index is \n> considered, say, four times slower than fetching something off a \n> sequential scan (sort of). Hence, if you are selecting more than 25% of \n> the table, then a sequential scan will be faster, even though it has to \n> process more rows.\n\nActually it's worse than that: if an indexscan is going to fetch more\nthan a few percent of the table, the planner will think it slower than\na sequential scan --- and usually it'll be right. The four-to-one ratio\nrefers to the cost of fetching a whole page (8K) randomly versus\nsequentially. In a seqscan, you can examine all the rows on a page\n(dozens to hundreds usually) for the price of one page fetch. In an\nindexscan, one page fetch might bring in just one row that you care\nabout. So the breakeven point is a lot worse than 4:1.\n\nThere is constant debate about the values of these parameters; in\nparticular the 4:1 page fetch cost ratio breaks down if you are able\nto cache a significant fraction of the table in RAM. See the list\narchives for details. But it's certainly true that an indexscan has to\nbe a lot more selective than 25% before it's going to be a win over\na seqscan. I'd say 1% to 5% is the right ballpark.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jan 2004 02:48:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column? " }, { "msg_contents": "After a long battle with technology, [email protected] (\"D. Dante Lorenso\"), an earthling, wrote:\n> I've been debating with a collegue who argues that indexing a\n> boolean column is a BAD idea and that is will actually slow\n> down queries.\n\nNo, it would be expected to slow down inserts, but not likely queries.\n\n> Will an index on the 'data_is_active' column be used or work\n> as I expect? 
I'm assuming that I may have a million entries\n> sharing the same 'data_lookup_key' and I'll be using that to\n> search for the active version of the row.\n\n> SELECT *\n> FROM table\n> WHERE data_lookup_key = 'pear'\n> AND data_is_active IS TRUE;\n>\n> Does it make sense to have an index on data_is_active?\n\nNot really.\n\n> Now, I've read that in some databases the index on a column that has\n> relatively even distribution of values over a small set of values\n> will not be efficient.\n\nThe problem is (and this is likely to be true for just about any\ndatabase system that is 'page-based,' which is just about any of them,\nthese days) that what happens, with the elements being so pervasive,\nthroughout the table, queries will be quite likely to hit nearly every\npage of the table.\n\nIf you're hitting practically every page, then it is more efficient to\njust walk thru the pages (Seq Scan) rather than to bother reading the\nindex.\n\nThe only improvement that could (in theory) be made is to cluster all\nthe \"true\" values onto one set of pages, and all the \"false\" ones onto\nanother set of pages, and have a special sort of index that knows\nwhich pages are \"true\" and \"false\". I _think_ that Oracle's notion of\n\"cluster tables\" function rather like this; it is rather debatable\nwhether it would be worthwhile to do similar with PostgreSQL.\n\nA way of 'clustering' with PostgreSQL might be to have two tables\n table_active\n and\n table_inactive\nwhere a view, representing the 'join' of them, would throw in the\n'data_is_active' value. By clever use of some rules/triggers, you\ncould insert into the view, and have values get shuffled into the\nappropriate table. \n\nWhen doing a select on the view, if you asked for \"data_is_active is\nTRUE\", the select would only draw data from table_inactive, or\nvice-versa.\n\nUnfortunately, sometimes the query optimizer may not be clever enough\nwhen working with the resulting joins, though that may just be a\nSimple Matter Of Programming to make it more clever in future versions.\n:-)\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/spreadsheets.html\nRules of the Evil Overlord #136. \"If I build a bomb, I will simply\nremember which wire to cut if it has to be deactivated and make every\nwire red.\" <http://www.eviloverlord.com/>\n", "msg_date": "Sun, 04 Jan 2004 16:39:37 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing a Boolean or Null column?" } ]
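A minimal sketch of the partial-index approach Tom Lane recommends above; fruit_data is a stand-in name, since the thread only uses the generic word "table":

    -- only the rare active rows are stored in the index, keyed by the lookup column
    CREATE INDEX fruit_data_active_idx
        ON fruit_data (data_lookup_key)
     WHERE data_is_active IS TRUE;

    -- the common query matches the index predicate and stays cheap
    SELECT *
      FROM fruit_data
     WHERE data_lookup_key = 'pear'
       AND data_is_active IS TRUE;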
[ { "msg_contents": "I have a very large table (about a million rows) which I most frequently\nwant to select a subset of rows from base on a date field. That date field\nis indexed, and when Postgres uses that index, queries are fast. But\nsometimes it decides not to use the index, resorting to a sequential scan\ninstead. This is really, really slow.\n\nTo try to convince it to use my date index, I turned off the sequential scan\nstrategy in the planner. That worked on one copy of the db, but not on\nanother, where it decided to use an index from another column entirely,\nwhich didn't help performance. I dropped the other index, leaving only the\ndate index, and performance was good again.\n\nObviously the planner is making some bad choices here. I know that it is\ntrying to avoid random seeks or other scary things implied by a\n\"correlation\" statistic that is not close to 1 or -1, but it is massively\noverestimating the hit caused by those seeks and seemingly not taking into\naccount the size of the table! This is Postgres 7.4 on Linux and Mac OS X,\nBTW.\n\nAnyway, to \"fix\" the situation, I clustered the table on the date column.\nBut I fear that the data will slowly \"drift\" back to a state where the\nplanner decides again that a sequential scan is a good idea. Blah.\n\nSo, my question: how can I prevent this? Ideally, the planner should be\nsmarter. Failing that, I'd like to be able to force it to use the index\nthat I know will result in the fastest queries (3 seconds vs. 30 seconds in\nmany cases). Suggestions?\n\n-John\n\n", "msg_date": "Sun, 04 Jan 2004 18:36:16 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Use my (date) index, darn it!" }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> Obviously the planner is making some bad choices here.\n\nA fair conclusion ...\n\n> I know that it is trying to avoid random seeks or other scary things\n> implied by a \"correlation\" statistic that is not close to 1 or -1, but\n> it is massively overestimating the hit caused by those seeks and\n> seemingly not taking into account the size of the table!\n\nYou haven't given any evidence to support these conclusions, though.\nCould we see some table schemas, EXPLAIN ANALYZE output, and relevant\npg_stats entries for the various cases?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 01:55:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use my (date) index, darn it! " }, { "msg_contents": "On 1/5/04 1:55 AM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> Obviously the planner is making some bad choices here.\n> \n> A fair conclusion ...\n> \n>> I know that it is trying to avoid random seeks or other scary things\n>> implied by a \"correlation\" statistic that is not close to 1 or -1, but\n>> it is massively overestimating the hit caused by those seeks and\n>> seemingly not taking into account the size of the table!\n> \n> You haven't given any evidence to support these conclusions, though.\n\nWell here's what I was basing that theory on: before clustering, the\ncorrelation for the date column was around 0.3. After clustering, it was 1,\nand the index was always used. Does clustering change any other statistics\nother that correlation? 
I ran analyze immediately before and after the\ncluster operation.\n\n> Could we see some table schemas, EXPLAIN ANALYZE output, and relevant\n> pg_stats entries for the various cases?\n\nWell, the table is clustered now, so I can't reproduce the situation. Is\nthere any way to \"uncluster\" a table? Should I just cluster it on a\ndifferent column?\n\n-John\n\n", "msg_date": "Mon, 05 Jan 2004 11:19:04 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use my (date) index, darn it! " }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> Is there any way to \"uncluster\" a table? Should I just cluster it on a\n> different column?\n\nThat should work, if you choose one that's uncorrelated with the\nprevious clustering attribute.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 11:29:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use my (date) index, darn it! " }, { "msg_contents": "After a long battle with technology, [email protected] (John Siracusa), an earthling, wrote:\n> On 1/5/04 1:55 AM, Tom Lane wrote:\n>> John Siracusa <[email protected]> writes:\n>>> Obviously the planner is making some bad choices here.\n>> \n>> A fair conclusion ...\n>> \n>>> I know that it is trying to avoid random seeks or other scary things\n>>> implied by a \"correlation\" statistic that is not close to 1 or -1, but\n>>> it is massively overestimating the hit caused by those seeks and\n>>> seemingly not taking into account the size of the table!\n>> \n>> You haven't given any evidence to support these conclusions, though.\n>\n> Well here's what I was basing that theory on: before clustering, the\n> correlation for the date column was around 0.3. After clustering, it was 1,\n> and the index was always used. Does clustering change any other statistics\n> other that correlation? I ran analyze immediately before and after the\n> cluster operation.\n>\n>> Could we see some table schemas, EXPLAIN ANALYZE output, and relevant\n>> pg_stats entries for the various cases?\n>\n> Well, the table is clustered now, so I can't reproduce the situation. Is\n> there any way to \"uncluster\" a table? Should I just cluster it on a\n> different column?\n\nThat would presumably work...\n\nIt sounds to me as though the statistics that are being collected\naren't \"good enough.\" That tends to be a sign that the quantity of\nstatistics (e.g. - bins in the histogram) are insufficient.\n\nThis would be resolved by changing the number of bins (default of 10)\nvia \"ALTER TABLE FOO ALTER COLUMN BAR SET STATISTICS 100\" (or some\nother value higher than 10).\n\nClustering would rearrange the contents of the table, and perhaps make\nthe histogram 'more representative.' Increasing the \"SET STATISTICS\"\nvalue will quite likely be even more helpful, and is a lot less\nexpensive than clustering the table...\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\nRules of the Evil Overlord #158. \"I will exchange the labels on my\nfolder of top-secret plans and my folder of family recipes. Imagine\nthe hero's surprise when he decodes the stolen plans and finds\ninstructions for Grandma's Potato Salad.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Mon, 05 Jan 2004 11:45:54 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use my (date) index, darn it!" 
}, { "msg_contents": "On 1/5/04 11:45 AM, Christopher Browne wrote:\n> It sounds to me as though the statistics that are being collected\n> aren't \"good enough.\" That tends to be a sign that the quantity of\n> statistics (e.g. - bins in the histogram) are insufficient.\n> \n> This would be resolved by changing the number of bins (default of 10)\n> via \"ALTER TABLE FOO ALTER COLUMN BAR SET STATISTICS 100\" (or some\n> other value higher than 10).\n\nI did that, but I wasn't sure what value to use and what column to increase.\nI believe I increased the date column itself to 50 or something, but then I\nwasn't sure what to do next. I re-analyzed the table with the date column\nset to 50 but it didn't seem to help, so I resorted to clustering.\n\n> Clustering would rearrange the contents of the table, and perhaps make\n> the histogram 'more representative.' Increasing the \"SET STATISTICS\"\n> value will quite likely be even more helpful, and is a lot less\n> expensive than clustering the table...\n\nWhat column(s) should I increase? Do I have to do anything after increasing\nthe statistics, or do I just wait for the stats collector to do its thing?\n\n-John\n\n", "msg_date": "Mon, 05 Jan 2004 13:15:56 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use my (date) index, darn it!" }, { "msg_contents": "In the last exciting episode, [email protected] (John Siracusa) wrote:\n> What column(s) should I increase? Do I have to do anything after increasing\n> the statistics, or do I just wait for the stats collector to do its thing?\n\nYou have to ANALYZE the table again, to force in new statistics.\n\nAnd if the index in question is on _just_ the date column, then it is\nprobably only that date column where the \"SET STATISTICS\" needs to be\nincreased.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/sap.html\nFaith is the quality that enables you to eat blackberry jam on a\npicnic without looking to see whether the seeds move. -- DeMara Cabrera\n", "msg_date": "Mon, 05 Jan 2004 15:27:43 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use my (date) index, darn it!" } ]
[ { "msg_contents": "Hi,\n\nwe are new to Postgres and we are evaluating Postgres 7.4 on MacOS X as \nan alternative to FrontBase 3.6.27.\n\n From the available features Postgres is the choice #1.\n\nWe have some tests to check the performance and FrontBase is about 10 \ntimes faster than Postgres. We already played around with explain \nanalyse select. It seems that for large tables Postgres does not use an \nindex. We often see the scan message in the query plan. Were can we \nfind more hints about tuning the performance? The database is about 350 \nMB large, without BLOB's. We tried to define every important index for \nthe selects but it seems that something still goes wrong: FrontBase \nneeds about 23 seconds for about 4300 selects and Postgres needs 4 \nminutes, 34 seconds.\n\nAny clues?\n\nregards David\n\n", "msg_date": "Mon, 5 Jan 2004 12:28:32 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing Postgres queries" }, { "msg_contents": "On Monday 05 January 2004 16:58, David Teran wrote:\n> We have some tests to check the performance and FrontBase is about 10\n> times faster than Postgres. We already played around with explain\n> analyse select. It seems that for large tables Postgres does not use an\n> index. We often see the scan message in the query plan. Were can we\n> find more hints about tuning the performance? The database is about 350\n> MB large, without BLOB's. We tried to define every important index for\n> the selects but it seems that something still goes wrong: FrontBase\n> needs about 23 seconds for about 4300 selects and Postgres needs 4\n> minutes, 34 seconds.\n\nCheck \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nAre you sure you are using correct data types on indexes?\n\ne.g. if field1 is an int2 field, then following query would not use an index.\n\nselect * from table where field1=2;\n\nHowever following will\n\nselect * from table where field1=2::int2;\n\nIt is called as typecasting and postgresql is rather strict about it when it \ncomes to making a decision of index usage.\n\nI am sure above two tips could take care of some of the problems. \n\nSuch kind of query needs more specific information. Can you post explain \nanalyze output for queries and database schema.\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 5 Jan 2004 17:05:55 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "Hi Shridhar,\n\n> Are you sure you are using correct data types on indexes?\n>\nDid not know about this...\n\n> e.g. if field1 is an int2 field, then following query would not use an \n> index.\n>\nour fk have the type bigint, when i try one simple select like this:\n\nexplain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE \nt0.ID_FOREIGN_TABLE = 21110;\n\ni see that no index is being used whereas when i use\n\nexplain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE \nt0.ID_FOREIGN_TABLE = 21110::bigint;\n\nan index is used. Very fine, the performance is about 10 to 100 times \nfaster for the single select.\n\nI am using WebObjects with JDBC. 
I will now create a DB with integer \ninstead of bigint and see how this performs.\n\nregards David\n\n", "msg_date": "Mon, 5 Jan 2004 13:05:03 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "On Monday 05 January 2004 17:35, David Teran wrote:\n> explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE\n> t0.ID_FOREIGN_TABLE = 21110;\n>\n> i see that no index is being used whereas when i use\n>\n> explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE\n> t0.ID_FOREIGN_TABLE = 21110::bigint;\n>\n> an index is used. Very fine, the performance is about 10 to 100 times\n> faster for the single select.\n>\n> I am using WebObjects with JDBC. I will now create a DB with integer\n> instead of bigint and see how this performs.\n\nThe performance will likely to be the same. Its just that integer happens to \nbe default integer type and hence it does not need an explicit typecast. ( I \ndon't remember exactly which integer is default but it is either of int2,int4 \nand int8...:-))\n\nThe performance diffference is likely due to use of index, which is in turn \ndue to typecasting. If you need bigint, you should use them. Just remember to \ntypecast whenever required.\n\n Shridhar\n\n", "msg_date": "Mon, 5 Jan 2004 17:40:05 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "Hi,\n\n> The performance will likely to be the same. Its just that integer \n> happens to\n> be default integer type and hence it does not need an explicit \n> typecast. ( I\n> don't remember exactly which integer is default but it is either of \n> int2,int4\n> and int8...:-))\n>\nThe docs say int4 is much faster than int8, but i will check this.\n\n> The performance diffference is likely due to use of index, which is in \n> turn\n> due to typecasting. If you need bigint, you should use them. Just \n> remember to\n> typecast whenever required.\n\nThis is my bigger problem: i am using EOF (OR mapping tool) which frees \nme more or less form writing a lot of SQL. If i need to typecast to use \nan index then i have to see how to do this with this framework.\n\nRegards David\n\n", "msg_date": "Mon, 5 Jan 2004 13:18:06 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "On Monday 05 January 2004 17:48, David Teran wrote:\n> Hi,\n>\n> > The performance will likely to be the same. Its just that integer\n> > happens to\n> > be default integer type and hence it does not need an explicit\n> > typecast. ( I\n> > don't remember exactly which integer is default but it is either of\n> > int2,int4\n> > and int8...:-))\n>\n> The docs say int4 is much faster than int8, but i will check this.\n\nWell yes. That is correct as well. \n\nWhat I (really) meant to say that an index scan to pick few in4 tuples \nwouldn't be hell much faster than an index scan to pick same number of tuples \nwith int8 definition. \n\nThe initial boost you got from converting to index scan, would be probably \nbest you can beat out of it..\n\nOf course if you are scanning a few million of them sequentially, then it is \ndifferent story.\n\n> This is my bigger problem: i am using EOF (OR mapping tool) which frees\n> me more or less form writing a lot of SQL. 
If i need to typecast to use\n> an index then i have to see how to do this with this framework.\n\nWell, you can direct your queries to a function rather than table, that would \ncast the argument appropriately and select. Postgresql support function \noverloading as well, in case you need different types of arguments with same \nname.\n\nOr you can write an instead rule on server side which will perform casting \nbefore touching the table.\n\nI am not sure of exact details it would take to make it work, but it should \nwork, at least in theory. That way you can preserve the efforts invested in \nthe mapping tool. \n\nOf course, converting everything to integer might be a simpler option after \nall..:-)\n\n\n Shridhar\n\n", "msg_date": "Mon, 5 Jan 2004 18:07:53 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "\n> explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE \n> t0.ID_FOREIGN_TABLE = 21110::bigint;\n> \n> an index is used. Very fine, the performance is about 10 to 100 times \n> faster for the single select.\n\nAn alternative technique is to do this:\n\n... t0.ID_FOREIGN_TABLE = '21110';\n\nChris\n", "msg_date": "Mon, 05 Jan 2004 23:15:27 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "David Teran <[email protected]> writes:\n> This is my bigger problem: i am using EOF (OR mapping tool) which frees \n> me more or less form writing a lot of SQL. If i need to typecast to use \n> an index then i have to see how to do this with this framework.\n\nIt's worth pointing out that this problem is fixed (at long last) in\nCVS tip. Ypu probably shouldn't expend large amounts of effort on\nworking around a problem that will go away in 7.5.\n\nIf you don't anticipate going to production for six months or so, you\ncould adopt CVS tip as your development platform, with the expectation\nthat 7.5 will be released by the time you need a production system.\nI wouldn't recommend running CVS tip as a production database but it\nshould be plenty stable enough for devel purposes.\n\nAnother plan would be to use int4 columns for the time being with the\nintention of widening them to int8 when you move to 7.5. This would\ndepend on how soon you anticipate needing values > 32 bits, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 10:22:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "Hi Tom,\n\n> It's worth pointing out that this problem is fixed (at long last) in\n> CVS tip. Ypu probably shouldn't expend large amounts of effort on\n> working around a problem that will go away in 7.5.\n>\nWe have now changed the definition to integer, this will work for some \ntime. We are currently evaluating and have several production database \nwe might switch in some time.\n\nWhat we found out now is that a query with a single 'where' works fine, \nthe query planer uses the index but when we have 'two' where clauses it \ndoes not use the index anymore:\n\nEXPLAIN ANALYZE SELECT columns... FROM \"KEY_VALUE_META_DATA\" t0 WHERE \n(t0.\"ID_VALUE\" = 14542); performs fine, less than one millisecond.\n\nEXPLAIN ANALYZE SELECT columns... 
FROM \"KEY_VALUE_META_DATA\" t0 WHERE \n(t0.\"ID_VALUE\" = 14542 OR t0.\"ID_VALUE\" = 14550); performs bad: about \n235 milliseconds.\n\nI tried to change the second one to use IN but this did not help at \nall. Am i doing something wrong? I have an index defined like this:\n\nCREATE INDEX key_value_meta_data__id_value__fk_index ON \n\"KEY_VALUE_META_DATA\" USING btree (\"ID_VALUE\");\n\nRegards David\n\n", "msg_date": "Mon, 5 Jan 2004 19:47:28 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "David Teran <[email protected]> writes:\n> What we found out now is that a query with a single 'where' works fine, \n> the query planer uses the index but when we have 'two' where clauses it \n> does not use the index anymore:\n\n> EXPLAIN ANALYZE SELECT columns... FROM \"KEY_VALUE_META_DATA\" t0 WHERE \n> (t0.\"ID_VALUE\" = 14542); performs fine, less than one millisecond.\n\n> EXPLAIN ANALYZE SELECT columns... FROM \"KEY_VALUE_META_DATA\" t0 WHERE \n> (t0.\"ID_VALUE\" = 14542 OR t0.\"ID_VALUE\" = 14550); performs bad: about \n> 235 milliseconds.\n\nPlease, when you ask this sort of question, show the EXPLAIN ANALYZE\noutput. It is not a virtue to provide minimal information and see if\nanyone can guess what's happening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 13:52:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "Hi Tom,\n\n\n> David Teran <[email protected]> writes:\n>> What we found out now is that a query with a single 'where' works \n>> fine,\n>> the query planer uses the index but when we have 'two' where clauses \n>> it\n>> does not use the index anymore:\n>\n>> EXPLAIN ANALYZE SELECT columns... FROM \"KEY_VALUE_META_DATA\" t0 WHERE\n>> (t0.\"ID_VALUE\" = 14542); performs fine, less than one millisecond.\n>\n>> EXPLAIN ANALYZE SELECT columns... FROM \"KEY_VALUE_META_DATA\" t0 WHERE\n>> (t0.\"ID_VALUE\" = 14542 OR t0.\"ID_VALUE\" = 14550); performs bad: about\n>> 235 milliseconds.\n>\n> Please, when you ask this sort of question, show the EXPLAIN ANALYZE\n> output. It is not a virtue to provide minimal information and see if\n> anyone can guess what's happening.\n>\nSorry for that, i thought this is such a trivial question that the \nanswer is easy.\n\nexplain result from first query:\n\nIndex�Scan�using�key_value_meta_data__id_value__fk_index�on�\"KEY_VALUE_M \nETA_DATA\"�t0��(cost=0.00..1585.52�rows=467�width=1068)�(actual�time=0.42 \n4..0.493�rows=13�loops=1)\n\n��Index�Cond:�(\"ID_VALUE\"�=�21094)\n\nTotal runtime: 0.608 ms\n\n\n\nexplain result from second query:\n\nSeq�Scan�on�\"KEY_VALUE_META_DATA\"�t0��(cost=0.00..2671.16�rows=931�width \n=1068)�(actual�time=122.669..172.179�rows=25�loops=1)\n\n��Filter:�((\"ID_VALUE\"�=�21094)�OR�(\"ID_VALUE\"�=�21103))\n\nTotal runtime: 172.354 ms\n\n\n\nI found out that its possible to disable seq scans with set \nenable_seqscan to off; then the second query result looks like this:\n\nIndex�Scan�using�key_value_meta_data__id_value__fk_index,�key_value_meta \n_data__id_value__fk_index�on�\"KEY_VALUE_META_DATA\"�t0��(cost=0.00..3173. \n35�rows=931�width=1068)�(actual�time=0.116..0.578�rows=25�loops=1)\n\n��Index�Cond:�((\"ID_VALUE\"�=�21094)�OR�(\"ID_VALUE\"�=�21103))\n\nTotal runtime: 0.716 ms\n\n\nBut i read in the docs that its not OK to turn this off by default. 
I \nreally wonder if this is my fault or not, from my point of view this is \nsuch a simple select that the query plan should not result in a table \nscan.\n\nRegards David\n\n", "msg_date": "Mon, 5 Jan 2004 20:02:01 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "David Teran <[email protected]> writes:\n> explain result from second query:\n\n> Seq Scan on \"KEY_VALUE_META_DATA\" t0 (cost=0.00..2671.16 rows=931 width \n> =1068) (actual time=122.669..172.179 rows=25 loops=1)\n> Filter: ((\"ID_VALUE\" = 21094) OR (\"ID_VALUE\" = 21103))\n\nThe problem is evidently that the row estimate is so far off (931\nestimate vs 25 actual). Have you done ANALYZE or VACUUM ANALYZE\non this table recently? If you have, I'd be interested to see the\npg_stats row for ID_VALUE. It might be that you need to increase\nthe statistics target for this table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 14:05:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "Hi Tom,\n\nfirst of all thanks for your help! I really appreciate your fast \nresponse and if you ever have a question about WebObjects, just drop me \nline ;-)\n\n>> Seq Scan on \"KEY_VALUE_META_DATA\" t0 (cost=0.00..2671.16 rows=931 \n>> width\n>> =1068) (actual time=122.669..172.179 rows=25 loops=1)\n>> Filter: ((\"ID_VALUE\" = 21094) OR (\"ID_VALUE\" = 21103))\n>\n> The problem is evidently that the row estimate is so far off (931\n> estimate vs 25 actual). Have you done ANALYZE or VACUUM ANALYZE\n> on this table recently? If you have, I'd be interested to see the\n> pg_stats row for ID_VALUE. It might be that you need to increase\n> the statistics target for this table.\n>\nI am absolutely new to PostgreSQL. OK, after VACUUM ANALYZE i get:\n\nIndex Scan using key_value_meta_data__id_value__fk_index, \nkey_value_meta_data__id_value__fk_index on \"KEY_VALUE_META_DATA\" t0 \n(cost=0.00..19.94 rows=14 width=75) (actual time=0.615..1.017 rows=25 \nloops=1)\n  Index Cond: ((\"ID_VALUE\" = 21094) OR (\"ID_VALUE\" = 21103))\nTotal runtime: 2.565 ms\n\nand the second time i invoke this i get\n\n\nIndex Scan using key_value_meta_data__id_value__fk_index, \nkey_value_meta_data__id_value__fk_index on \"KEY_VALUE_META_DATA\" t0 \n(cost=0.00..19.94 rows=14 width=75) (actual time=0.112..0.296 rows=25 \nloops=1)\n  Index Cond: ((\"ID_VALUE\" = 21094) OR (\"ID_VALUE\" = 21103))\nTotal runtime: 0.429 ms\n\nMuch better. So i think i will first read more about this optimization \nstuff and regular maintenance things. This is something i like very \nmuch from FrontBase: no need for such things, simply start and run. But \nother things were not so fine ;-).\n\nIs there any hint where to start to understand more about this \noptimization problem?\n\nregards David\n\n\t\t\n", "msg_date": "Mon, 5 Jan 2004 20:20:49 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "David Teran <[email protected]> writes:\n> Much better. 
So i think i will first read more about this optimization \n> stuff and regular maintenance things.\n\nSee http://www.postgresql.org/docs/7.4/static/maintenance.html\n\n> Is there any hint where to start to understand more about this \n> optimization problem?\n\nhttp://www.postgresql.org/docs/7.4/static/performance-tips.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jan 2004 14:23:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "... wow:\n\nexecuting a batch file with about 4250 selects, including lots of joins \nother things PostgreSQL 7.4 is about 2 times faster than FrontBase \n3.6.27. OK, we will start to make larger tests but this is quite \ninteresting already: we did not optimize a lot, just invoked VACUUM \nANALYZE and then the selects ;-)\n\nThanks to all who answered to this thread.\n\ncheers David\n\n", "msg_date": "Mon, 5 Jan 2004 20:57:47 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing Postgres queries " }, { "msg_contents": "David Teran wrote:\n> Index?Scan?using?key_value_meta_data__id_value__fk_index,?key_value_meta \n> _data__id_value__fk_index?on?\"KEY_VALUE_META_DATA\"?t0??(cost=0.00..19.94 \n> ?rows=14?width=75)?(actual?time=0.112..0.296?rows=25?loops=1)\n> ??Index?Cond:?((\"ID_VALUE\"?=?21094)?OR?(\"ID_VALUE\"?=?21103))\n> Total runtime: 0.429 ms\n> \n> Much better. So i think i will first read more about this optimization \n> stuff and regular maintenance things. This is something i like very \n> much from FrontBase: no need for such things, simply start and run. But \n> other things were not so fine ;-).\n> \n> Is there any hint where to start to understand more about this \n> optimization problem?\n\nRead the FAQ. There is an item about slow queries and indexes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 6 Jan 2004 00:25:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" }, { "msg_contents": "On Mon, 2004-01-05 at 14:57, David Teran wrote:\n> ... wow:\n> \n> executing a batch file with about 4250 selects, including lots of joins \n> other things PostgreSQL 7.4 is about 2 times faster than FrontBase \n> 3.6.27. OK, we will start to make larger tests but this is quite \n> interesting already: we did not optimize a lot, just invoked VACUUM \n> ANALYZE and then the selects ;-)\n> \n> Thanks to all who answered to this thread.\n\nI presume that batch file was executed linearly -- no parallelism?\nYou're actually testing one of PostgreSQL's shortcomings.\n\nPostgreSQL (in my experience) does much better in such comparisons with\na parallel load -- multiple connections executing varied work (short\nselects, complex selects, inserts, updates, deletes).\n\nAnyway, just a tip that you will want to test your actual load. If you\ndo batch work with a single thread, what you have is fine. But if you\nhave a website with tens or hundreds of simultaneous connections then\nyour non-parallel testing will not reflect that work load.\n\n", "msg_date": "Thu, 08 Jan 2004 22:16:49 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing Postgres queries" } ]
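A minimal sketch of the maintenance steps this thread converged on, using the
poster's table and column names but otherwise illustrative values (the
statistics-target step follows Tom Lane's suggestion and is an assumption,
not something the poster reported running):

  -- Refresh the planner's row estimates (a plain ANALYZE also works):
  VACUUM ANALYZE "KEY_VALUE_META_DATA";

  -- Inspect what the planner now knows about the filtered column:
  SELECT * FROM pg_stats
  WHERE tablename = 'KEY_VALUE_META_DATA' AND attname = 'ID_VALUE';

  -- If the estimates are still far off, collect finer-grained statistics:
  ALTER TABLE "KEY_VALUE_META_DATA"
    ALTER COLUMN "ID_VALUE" SET STATISTICS 100;
  ANALYZE "KEY_VALUE_META_DATA";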
[ { "msg_contents": "Speaking of special cases (well, I was on the admin list) there are two\nkinds that would really benefit from some attention.\n\n1. The query \"select max(foo) from bar\" where the column foo has an index.\nAren't indexes ordered? If not, an \"ordered index\" would be useful in this\nsituation so that this query, rather than doing a sequential scan of the\nwhole table, would just \"ask the index\" for the max value and return nearly\ninstantly.\n\n2. The query \"select count(*) from bar\" Surely the total number of rows in\na table is kept somewhere convenient. If not, it would be nice if it could\nbe :) Again, rather than doing a sequential scan of the entire table, this\ntype of query could return instantly.\n\nI believe MySQL does both of these optimizations (which are probably a lot\neasier in that product, given its data storage system). These were the\nfirst areas where I noticed a big performance difference between MySQL and\nPostgres.\n\nEspecially with very large tables, hearing the disks grind as Postgres scans\nevery single row in order to determine the number of rows in a table or the\nmax value of a column (even a primary key created from a sequence) is pretty\npainful. If the implementation is not too horrendous, this is an area where\nan orders-of-magnitude performance increase can be had.\n\n-John\n\n", "msg_date": "Mon, 05 Jan 2004 14:03:12 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Select max(foo) and select count(*) optimization" }, { "msg_contents": "> Especially with very large tables, hearing the disks grind as Postgres scans\n> every single row in order to determine the number of rows in a table or the\n> max value of a column (even a primary key created from a sequence) is pretty\n> painful. If the implementation is not too horrendous, this is an area where\n> an orders-of-magnitude performance increase can be had.\n\nActually, it's very painful. For MySQL, they've accepted the concurrancy\nhit in order to accomplish it -- PostgreSQL would require a more subtle\napproach.\n\nAnyway, with Rules you can force this:\n\nON INSERT UPDATE counter SET tablecount = tablecount + 1;\n\nON DELETE UPDATE counter SET tablecount = tablecount - 1;\n\n\nYou need to create a table \"counter\" with a single row that will keep\ntrack of the number of rows in the table. Just remember, you've now\nserialized all writes to the table, but in your situation it may be\nworth while.\n\nmax(foo) optimizations requires an extension to the aggregates system.\nIt will likely happen within a few releases. A work around can be\naccomplished today through the use of LIMIT and ORDER BY.\n\n", "msg_date": "Mon, 05 Jan 2004 14:52:06 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On 1/5/04 2:52 PM, Rod Taylor wrote:\n> max(foo) optimizations requires an extension to the aggregates system.\n> It will likely happen within a few releases.\n\nLooking forward to it.\n\n> A work around can be accomplished today through the use of LIMIT and ORDER BY.\n\nWowzers, I never imagined that that'd be so much faster. Thanks! :)\n\n-John\n\n", "msg_date": "Mon, 05 Jan 2004 15:01:18 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> 1. 
The query \"select max(foo) from bar\" where the column foo has an index.\n> Aren't indexes ordered? If not, an \"ordered index\" would be useful in this\n> situation so that this query, rather than doing a sequential scan of the\n> whole table, would just \"ask the index\" for the max value and return nearly\n> instantly.\n\nhttp://www.postgresql.org/docs/current/static/functions-aggregate.html\n\n-Neil\n\n", "msg_date": "Mon, 05 Jan 2004 15:23:16 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Oops! [email protected] (John Siracusa) was seen spray-painting on a wall:\n> Speaking of special cases (well, I was on the admin list) there are two\n> kinds that would really benefit from some attention.\n>\n> 1. The query \"select max(foo) from bar\" where the column foo has an\n> index. Aren't indexes ordered? If not, an \"ordered index\" would be\n> useful in this situation so that this query, rather than doing a\n> sequential scan of the whole table, would just \"ask the index\" for\n> the max value and return nearly instantly.\n>\n> 2. The query \"select count(*) from bar\" Surely the total number of\n> rows in a table is kept somewhere convenient. If not, it would be\n> nice if it could be :) Again, rather than doing a sequential scan of\n> the entire table, this type of query could return instantly.\n>\n> I believe MySQL does both of these optimizations (which are probably\n> a lot easier in that product, given its data storage system). These\n> were the first areas where I noticed a big performance difference\n> between MySQL and Postgres.\n>\n> Especially with very large tables, hearing the disks grind as\n> Postgres scans every single row in order to determine the number of\n> rows in a table or the max value of a column (even a primary key\n> created from a sequence) is pretty painful. If the implementation\n> is not too horrendous, this is an area where an orders-of-magnitude\n> performance increase can be had.\n\nThese are both VERY frequently asked questions.\n\nIn the case of question #1, the optimization you suggest could be\naccomplished via some Small Matter Of Programming. None of the people\nthat have wanted the optimization have, however, offered to actually\nDO the programming.\n\nIn the case of #2, the answer is \"surely NOT.\" In MVCC databases,\nthat information CANNOT be stored anywhere convenient because queries\nrequested by transactions started at different points in time must get\ndifferent answers.\n\nI think we need to add these questions and their answers to the FAQ so\nthat the answer can be \"See FAQ Item #17\" rather than people having to\ngratuitously explain it over and over and over again.\n-- \n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/finances.html\nRules of the Evil Overlord #127. \"Prison guards will have their own\ncantina featuring a wide variety of tasty treats that will deliver\nsnacks to the guards while on duty. The guards will also be informed\nthat accepting food or drink from any other source will result in\nexecution.\" <http://www.eviloverlord.com/>\n", "msg_date": "Mon, 05 Jan 2004 15:26:15 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Not that I'm offering to do the porgramming mind you, :) but . . 
\n\n\nIn the case of select count(*), one optimization is to do a scan of the\nprimary key, not the table itself, if the table has a primary key. In a\ncertain commercial, lesser database, this is called an \"index fast full\nscan\". It would be important to scan the index in physical order\n(sequential physical IO) and not in key order (random physical IO)\n\nI'm guessing the payoff as well as real-world-utility of a max(xxx)\noptimization are much higher than a count(*) optimization tho\n\n\nOn Mon, 2004-01-05 at 12:26, Christopher Browne wrote:\n> Oops! [email protected] (John Siracusa) was seen spray-painting on a wall:\n> > Speaking of special cases (well, I was on the admin list) there are two\n> > kinds that would really benefit from some attention.\n> >\n> > 1. The query \"select max(foo) from bar\" where the column foo has an\n> > index. Aren't indexes ordered? If not, an \"ordered index\" would be\n> > useful in this situation so that this query, rather than doing a\n> > sequential scan of the whole table, would just \"ask the index\" for\n> > the max value and return nearly instantly.\n> >\n> > 2. The query \"select count(*) from bar\" Surely the total number of\n> > rows in a table is kept somewhere convenient. If not, it would be\n> > nice if it could be :) Again, rather than doing a sequential scan of\n> > the entire table, this type of query could return instantly.\n> >\n> > I believe MySQL does both of these optimizations (which are probably\n> > a lot easier in that product, given its data storage system). These\n> > were the first areas where I noticed a big performance difference\n> > between MySQL and Postgres.\n> >\n> > Especially with very large tables, hearing the disks grind as\n> > Postgres scans every single row in order to determine the number of\n> > rows in a table or the max value of a column (even a primary key\n> > created from a sequence) is pretty painful. If the implementation\n> > is not too horrendous, this is an area where an orders-of-magnitude\n> > performance increase can be had.\n> \n> These are both VERY frequently asked questions.\n> \n> In the case of question #1, the optimization you suggest could be\n> accomplished via some Small Matter Of Programming. None of the people\n> that have wanted the optimization have, however, offered to actually\n> DO the programming.\n> \n> In the case of #2, the answer is \"surely NOT.\" In MVCC databases,\n> that information CANNOT be stored anywhere convenient because queries\n> requested by transactions started at different points in time must get\n> different answers.\n> \n> I think we need to add these questions and their answers to the FAQ so\n> that the answer can be \"See FAQ Item #17\" rather than people having to\n> gratuitously explain it over and over and over again.\n\n", "msg_date": "05 Jan 2004 16:24:26 -0800", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Paul Tuckfield <[email protected]> writes:\n\n> In the case of select count(*), one optimization is to do a scan of the\n> primary key, not the table itself, if the table has a primary key. In a\n> certain commercial, lesser database, this is called an \"index fast full\n> scan\". 
It would be important to scan the index in physical order\n> (sequential physical IO) and not in key order (random physical IO)\n\nThat won't work because you still have to hit the actual tuple to\ndetermine visibility.\n\n-Doug\n\n", "msg_date": "Mon, 05 Jan 2004 19:29:56 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Paul Tuckfield) wrote:\n> Not that I'm offering to do the porgramming mind you, :) but . . \n>\n> In the case of select count(*), one optimization is to do a scan of the\n> primary key, not the table itself, if the table has a primary key. In a\n> certain commercial, lesser database, this is called an \"index fast full\n> scan\". It would be important to scan the index in physical order\n> (sequential physical IO) and not in key order (random physical IO)\n\nThe problem is that this \"optimization\" does not actually work. The\nindex does not contain transaction visibility information, so you have\nto go to the pages of tuples in order to determine if any given tuple\nis visible.\n\n> I'm guessing the payoff as well as real-world-utility of a max(xxx)\n> optimization are much higher than a count(*) optimization tho\n\nThat's probably so.\n\nIn many cases, approximations, such as page counts, may be good\nenough, and pray consider, that (\"an approximation\") is probably all\nyou were getting from the database systems that had an \"optimization\"\nto store the count in a counter.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://www3.sympatico.ca/cbbrowne/linuxxian.html\n\"No, you misunderstand. Microsoft asked some hackers how they could\nmake their system secure - the hackers replied \"Turn it off.\". So they\ndid.\" -- Anthony Ord\n", "msg_date": "Mon, 05 Jan 2004 20:46:00 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "[email protected] (Rod Taylor) wrote:\n>> Especially with very large tables, hearing the disks grind as Postgres scans\n>> every single row in order to determine the number of rows in a table or the\n>> max value of a column (even a primary key created from a sequence) is pretty\n>> painful. If the implementation is not too horrendous, this is an area where\n>> an orders-of-magnitude performance increase can be had.\n>\n> Actually, it's very painful. For MySQL, they've accepted the concurrancy\n> hit in order to accomplish it -- PostgreSQL would require a more subtle\n> approach.\n>\n> Anyway, with Rules you can force this:\n>\n> ON INSERT UPDATE counter SET tablecount = tablecount + 1;\n>\n> ON DELETE UPDATE counter SET tablecount = tablecount - 1;\n>\n> You need to create a table \"counter\" with a single row that will keep\n> track of the number of rows in the table. 
Just remember, you've now\n> serialized all writes to the table, but in your situation it may be\n> worth while.\n\nThere's a still more subtle approach that relieves the serialization\nconstraint, at some cost...\n\n- You add rules that _insert_ a row each time there is an\n insert/delete\n ON INSERT insert into counts(table, value) values ('our_table', 1);\n ON DELETE insert into counts(table, value) values ('our_table', -1);\n\n- The \"select count(*) from our_table\" is replaced by \"select\n sum(value) from counts where table = 'our_table'\"\n\n- Periodically, a \"compression\" process goes through and either:\n\n a) Deletes the rows for 'our_table' and replaces them with one\n row with a conventionally-scanned 'count(*)' value, or\n\n b) Computes \"select table, sum(value) as value from counts group\n by table\", deletes all the existing rows in counts, and replaces\n them by the preceding selection, or\n\n c) Perhaps does something incremental that's like b), but which\n only processes parts of the \"count\" table at once. Process\n 500 rows, then COMMIT, or something of the sort...\n\nNote that this \"counts\" table can potentially grow _extremely_ large.\nThe \"win\" comes when it gets compressed, so that instead of scanning\nthrough 500K items, it index-scans through 27, the 1 that has the\n\"497000\" that was the state of the table at the last compression, and\nthen 26 singletons.\n\nA win comes in if an INSERT that adds in 50 rows can lead to\ninserting ('our_table', 50) into COUNTS, or a delete that eliminates\n5000 rows puts in ('our_table', -5000).\n\nIt's vital to run the \"compression\" reasonably often (much like VACUUM\n:-)) in order that the COUNTS summary table stays relatively small.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/wp.html\nDebugging is twice as hard as writing the code in the first place.\nTherefore, if you write the code as cleverly as possible, you are, by\ndefinition, not smart enough to debug it. -- Brian W. Kernighan\n", "msg_date": "Mon, 05 Jan 2004 21:31:29 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On Tuesday 06 January 2004 07:16, Christopher Browne wrote:\n> Martha Stewart called it a Good Thing when [email protected] (Paul \nTuckfield) wrote:\n> > Not that I'm offering to do the porgramming mind you, :) but . .\n> >\n> > In the case of select count(*), one optimization is to do a scan of the\n> > primary key, not the table itself, if the table has a primary key. In a\n> > certain commercial, lesser database, this is called an \"index fast full\n> > scan\". It would be important to scan the index in physical order\n> > (sequential physical IO) and not in key order (random physical IO)\n>\n> The problem is that this \"optimization\" does not actually work. The\n> index does not contain transaction visibility information, so you have\n> to go to the pages of tuples in order to determine if any given tuple\n> is visible.\n\nIt was rejected as an idea to add transaction visibility information to \nindexes. The time I proposed, my idea was to vacuum tuples on page level \nwhile postgresql pushes buffers out of shared cache. 
If indexes had \nvisibility information, they could be cleaned out of order than heap tuples.\n\nThis wouldn't have eliminated vacuum entirely but at least frequently hit data \nwould be clean.\n\nBut it was rejected because of associated overhead. \n\nJust thought worh a mention..\n\n Shridhar\n\n\n", "msg_date": "Tue, 6 Jan 2004 12:01:53 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On Tuesday 06 January 2004 01:22, Rod Taylor wrote:\n> Anyway, with Rules you can force this:\n>\n> ON INSERT UPDATE counter SET tablecount = tablecount + 1;\n>\n> ON DELETE UPDATE counter SET tablecount = tablecount - 1;\n\nThat would generate lot of dead tuples in counter table. How about\n\nselect relpages,reltuples from pg_class where relname=<tablename>;\n\nAssuming the stats are recent enough, it would be much faster and accurate..\n\n Shridhar\n\n", "msg_date": "Tue, 6 Jan 2004 12:12:21 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Hi,\n\nShridhar Daithankar wrote:\n\n> \n> select relpages,reltuples from pg_class where relname=<tablename>;\n> \n> Assuming the stats are recent enough, it would be much faster and accurate..\n\nthis needs an analyze <tablename>; before select from pg_class, cause \nonly after analyze will update pg the pg_class\n\nC.\n", "msg_date": "Tue, 06 Jan 2004 12:51:13 +0100", "msg_from": "CoL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On January 6, 2004 01:42 am, Shridhar Daithankar wrote:\n> On Tuesday 06 January 2004 01:22, Rod Taylor wrote:\n> > Anyway, with Rules you can force this:\n> >\n> > ON INSERT UPDATE counter SET tablecount = tablecount + 1;\n> >\n> > ON DELETE UPDATE counter SET tablecount = tablecount - 1;\n>\n> That would generate lot of dead tuples in counter table. How about\n>\n> select relpages,reltuples from pg_class where relname=<tablename>;\n>\n> Assuming the stats are recent enough, it would be much faster and\n> accurate..\n\nWell, I did this:\n\ncert=# select relpages,reltuples from pg_class where relname= 'certificate';\n relpages | reltuples\n----------+-------------\n 399070 | 2.48587e+07\n(1 row)\n\nCasting seemed to help:\n\ncert=# select relpages,reltuples::bigint from pg_class where relname= \n'certificate';\n relpages | reltuples\n----------+-----------\n 399070 | 24858736\n(1 row)\n\nBut:\n\ncert=# select count(*) from certificate;\n[*Crunch* *Crunch* *Crunch*]\n count\n----------\n 19684668\n(1 row)\n\nAm I missing something? Max certificate_id is 20569544 btw.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 6 Jan 2004 07:18:08 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On Tuesday 06 January 2004 17:48, D'Arcy J.M. 
Cain wrote:\n> On January 6, 2004 01:42 am, Shridhar Daithankar wrote:\n> cert=# select relpages,reltuples::bigint from pg_class where relname=\n> 'certificate';\n> relpages | reltuples\n> ----------+-----------\n> 399070 | 24858736\n> (1 row)\n>\n> But:\n>\n> cert=# select count(*) from certificate;\n> [*Crunch* *Crunch* *Crunch*]\n> count\n> ----------\n> 19684668\n> (1 row)\n>\n> Am I missing something? Max certificate_id is 20569544 btw.\n\nDo 'vacuum analyze certificate' and try..:-)\n\nThe numbers from pg_class are estimates updated by vacuum /analyze. Of course \nyou need to run vacuum frequent enough for that statistics to be updated all \nthe time or run autovacuum daemon..\n\nRan into same problem on my machine till I remembered about vacuum..:-)\n\n Shridhar\n\n", "msg_date": "Tue, 6 Jan 2004 17:50:09 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On Tue, 2004-01-06 at 07:20, Shridhar Daithankar wrote:\n> On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:\n> > On January 6, 2004 01:42 am, Shridhar Daithankar wrote:\n> > cert=# select relpages,reltuples::bigint from pg_class where relname=\n> > 'certificate';\n> > relpages | reltuples\n> > ----------+-----------\n> > 399070 | 24858736\n> > (1 row)\n> >\n> > But:\n> >\n> > cert=# select count(*) from certificate;\n> > [*Crunch* *Crunch* *Crunch*]\n> > count\n> > ----------\n> > 19684668\n> > (1 row)\n> >\n> > Am I missing something? Max certificate_id is 20569544 btw.\n> \n> Do 'vacuum analyze certificate' and try..:-)\n> \n> The numbers from pg_class are estimates updated by vacuum /analyze. Of course \n> you need to run vacuum frequent enough for that statistics to be updated all \n> the time or run autovacuum daemon..\n> \n> Ran into same problem on my machine till I remembered about vacuum..:-)\n> \n\nActually you only need to run analyze to update the statistics.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "06 Jan 2004 11:03:23 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "Robert Treat wrote:\n> On Tue, 2004-01-06 at 07:20, Shridhar Daithankar wrote:\n\n>>The numbers from pg_class are estimates updated by vacuum /analyze. Of course \n>>you need to run vacuum frequent enough for that statistics to be updated all \n>>the time or run autovacuum daemon..\n>>Ran into same problem on my machine till I remembered about vacuum..:-)\n> Actually you only need to run analyze to update the statistics.\n\nOld habits die hard..:-)\n\n shridhar\n", "msg_date": "Tue, 06 Jan 2004 22:01:20 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "if this situation persists after 'analyze certificate', then you need to:\n\nincrease the statistics target 'alter table certificate alter column \ncertificate_id set statistics 100' \n\nor\n\n'vacuum full certificate'\n\ni.e : there are lots of (dead) updated or deleted tuples in the \nrelation, distributed in such a way as to throw off analyze's estimate.\n\nregards\n\nMark\n\nD'Arcy J.M. 
Cain wrote:\n\n>\n>Well, I did this:\n>\n>cert=# select relpages,reltuples from pg_class where relname= 'certificate';\n> relpages | reltuples\n>----------+-------------\n> 399070 | 2.48587e+07\n>(1 row)\n>\n>Casting seemed to help:\n>\n>cert=# select relpages,reltuples::bigint from pg_class where relname= \n>'certificate';\n> relpages | reltuples\n>----------+-----------\n> 399070 | 24858736\n>(1 row)\n>\n>But:\n>\n>cert=# select count(*) from certificate;\n>[*Crunch* *Crunch* *Crunch*]\n> count\n>----------\n> 19684668\n>(1 row)\n>\n>Am I missing something? Max certificate_id is 20569544 btw.\n>\n> \n>\n\n", "msg_date": "Wed, 07 Jan 2004 08:28:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "On January 6, 2004 07:20 am, Shridhar Daithankar wrote:\n> On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:\n> > On January 6, 2004 01:42 am, Shridhar Daithankar wrote:\n> > cert=# select relpages,reltuples::bigint from pg_class where relname=\n> > 'certificate';\n> > relpages | reltuples\n> > ----------+-----------\n> > 399070 | 24858736\n> > (1 row)\n> >\n> > But:\n> >\n> > cert=# select count(*) from certificate;\n> > [*Crunch* *Crunch* *Crunch*]\n> > count\n> > ----------\n> > 19684668\n> > (1 row)\n> >\n> > Am I missing something? Max certificate_id is 20569544 btw.\n>\n> Do 'vacuum analyze certificate' and try..:-)\n\nKind of invalidates the part about being accurate then, don't it? Besides, I \nvacuum that table every day (*) and we have reorganized the schema so that we \nnever update it except in exceptional cases. I would be less surprised if \nthe result was less than the real count since we only insert into that table.\n\nIn any case, if I have to vacuum a 20,000,000 row table to get an accurate \ncount then I may as well run count(*) on it.\n\n(*): Actually I only analyze but I understand that that should be sufficient.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 6 Jan 2004 17:57:05 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <[email protected]> writes:\n> In any case, if I have to vacuum a 20,000,000 row table to get an accurate \n> count then I may as well run count(*) on it.\n> (*): Actually I only analyze but I understand that that should be sufficient.\n\nANALYZE without VACUUM will deliver a not-very-accurate estimate, since it\nonly looks at a sample of the table's pages and doesn't grovel through\nevery one. Any of the VACUUM variants, on the other hand, will set\npg_class.reltuples reasonably accurately (as the number of rows actually\nseen and left undeleted by the VACUUM pass).\n\nThere are pathological cases where ANALYZE's estimate of the overall row\ncount can be horribly bad --- mainly, when the early pages of the table\nare empty or nearly so, but there are well-filled pages out near the\nend. I have a TODO item to try to make ANALYZE less prone to getting\nfooled that way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jan 2004 18:19:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select max(foo) and select count(*) optimization " } ]
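The two workarounds actually offered in this thread, spelled out against an
illustrative table bar with an indexed column foo:

  -- Index-friendly substitute for SELECT max(foo) FROM bar;
  -- add WHERE foo IS NOT NULL if the column is nullable, since NULLs
  -- sort after non-null values and would surface first here:
  SELECT foo FROM bar ORDER BY foo DESC LIMIT 1;

  -- Approximate row count from the statistics a VACUUM leaves behind,
  -- instead of an exact (and expensive) SELECT count(*) FROM bar:
  SELECT reltuples::bigint FROM pg_class WHERE relname = 'bar';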
[ { "msg_contents": "I have just discovered that if one does a SELECT with a LIMIT and OFFSET\n values, say\n\nSELECT myfunc(mycol) FROM table LIMIT 50 OFFSET 10000 ;\n\nThen the whole of the selection expressions, including the function calls,\nare actuall executed for every record, not just those being selected but\nalso those being skipped, i.e. 10050 in this case.\nActually it's even odder, as the number is that plus one, as the next\nrecord in sequence is also passed to the function.\n\nI discovered this by accident, since I was using a user-defined function\nin pl/pgsql and included by mistake some debug code using RAISE INFO, so\nthis diagnostic output gave the game away (and all of it came out before\nany of the results of the selection, which was another surprise).\n\nIt looks as if OFFSET is implemented just be throwing away the results,\nuntil the OFFSET has been reached.\n\nIt would be nice if OFFSET could be implemented in some more efficient\nway.\n\n\n\n-- \nClive Page,\nDept of Physics & Astronomy,\nUniversity of Leicester, U.K. \n\n", "msg_date": "Tue, 6 Jan 2004 15:40:48 +0000", "msg_from": "Clive Page <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient SELECT with OFFSET and LIMIT" }, { "msg_contents": "\nClive Page <[email protected]> writes:\n\n> SELECT myfunc(mycol) FROM table LIMIT 50 OFFSET 10000 ;\n\n> It looks as if OFFSET is implemented just be throwing away the results,\n> until the OFFSET has been reached.\n> \n> It would be nice if OFFSET could be implemented in some more efficient\n> way.\n\nYou could do something like:\n\nselect myfunc(mycol) from (select mycol from table limit 50 offset 10000) as x;\n\nI think it's not easy for the optimizer to do it because there are lots of\ncases where it can't. Consider if you had an ORDER BY clause on the myfunc\noutput column for example. Or if myfunc was a set-returning function.\n\n-- \ngreg\n\n", "msg_date": "06 Jan 2004 13:12:17 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient SELECT with OFFSET and LIMIT" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Clive Page <[email protected]> writes:\n>> It would be nice if OFFSET could be implemented in some more efficient\n>> way.\n\n> You could do something like:\n\n> select myfunc(mycol) from (select mycol from table limit 50 offset 10000) as x;\n\nNote that this won't eliminate the major inefficiency, which is having\nto read 10000+50 rows from the table. But if myfunc() has side-effects\nor is very expensive to run, it'd probably be worth doing.\n\n> I think it's not easy for the optimizer to do it because there are lots of\n> cases where it can't.\n\nI don't actually know of any cases where it could do much of anything to\navoid fetching the OFFSET rows. The problems are basically the same as\nwith COUNT(*) optimization: without examining each row, you don't know\nif it would have been returned or not. We could possibly postpone\nevaluation of the SELECT output list until after the OFFSET step (thus\nautomating the above hack), but even that only works if there are no\nset-returning functions in the output list ...\n\n\t\t\tregards, tom lane\n\nPS: BTW, the one-extra-row effect that Clive noted is gone in 7.4.\n", "msg_date": "Tue, 06 Jan 2004 13:36:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient SELECT with OFFSET and LIMIT " } ]
[ { "msg_contents": "I have reported this on the pgadmin-support mailing list, but Andreas Pflug \nhas asked me to post it here.\n\nWith a particular database, PgAdmin3 takes a very long time to connect to a \ndatabase. this is not a general problem with PgAdmin, but only with one \ndatabase out of many. Other databases do not have the problem. And only \nwith one particular server. The exact same database on a different server \ndoes not have the problem.\n\nThe server in question is running PostgreSQL 7.3.2 on \nsparc-sun-solaris2.8, compiled by GCC 2.95.2\n\nThe other server which has the same database is running Postgres 7.3.4 on \ni386-redhat-linux-gnu, complied by GCC i386-redhat-linux-gcc 3.2.2.\n\nI have attached the query that Andreas says is the one that is run when \nPgAdmin first connects to a database as well as the output from running the \nquery with explain turned on.\n\nBoth Andreas and I would be every interested if this group might have any \nideas why the query is so slow.\n\nNOTE: I have vacuumed the database, but that did not affect the timing at all.\nNOTE: The startup on the sparc server is 44 seconds, The startup on the \nlinux server is 5 seconds.\n\nAndreas writes:\nI can't see too much from this query plan, it just seems you have 321 \ntriggers an 4750 dependencies which isn't too extraordinary much. But 48 \nseconds execution time *is* much.\n\nPlease repost this to pgsql-performance, including the query, backend \nversion, and modified server settings. I'm not deep enough in planner items \nto analyze this sufficiently.\nPlease let me CCd on this topic so I can see what I should change in \npgAdmin3 (if any).\n\n\n\n---\nMichael\n\n\n---\nMichael", "msg_date": "Tue, 06 Jan 2004 14:49:16 -0600", "msg_from": "Michael Shapiro <[email protected]>", "msg_from_op": true, "msg_subject": "PgAdmin startup query VERY slow" }, { "msg_contents": "Michael,\n\n> With a particular database, PgAdmin3 takes a very long time to connect to a \n> database. this is not a general problem with PgAdmin, but only with one \n> database out of many. Other databases do not have the problem. And only \n> with one particular server. The exact same database on a different server \n> does not have the problem.\n\nHave you run VACUUM ANALYZE *as the superuser* on the faulty server recently? \n>From the look of the explain, PG is grossly underestimating the number of \nitems in the pg_trigger and pg_depend tables, and thus choosing an \ninappropriate nested loop execution.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 6 Jan 2004 13:01:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgAdmin startup query VERY slow" }, { "msg_contents": "Mark,\n\n> That seemed to fix it. What does VACUUM ANALYZE do that VACUUM FULL does \n> not? What causes a database to need vacuuming?\n\nSee the Online Docs:\nhttp://www.postgresql.org/docs/current/static/maintenance.html\n\nIncidentally, just ANALYZE would probably have fixed your problem. Please do \nsuggest to the PGAdmin team that they add a FAQ item about this.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 6 Jan 2004 13:17:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgAdmin startup query VERY slow" }, { "msg_contents": "That seemed to fix it. What does VACUUM ANALYZE do that VACUUM FULL does \nnot? 
What causes a database to need vacuuming?\n\n\n\nAt 01:01 PM 1/6/2004 -0800, Josh Berkus wrote:\n>Michael,\n>\n> > With a particular database, PgAdmin3 takes a very long time to connect \n> to a\n> > database. this is not a general problem with PgAdmin, but only with one\n> > database out of many. Other databases do not have the problem. And only\n> > with one particular server. The exact same database on a different server\n> > does not have the problem.\n>\n>Have you run VACUUM ANALYZE *as the superuser* on the faulty server \n>recently?\n> >From the look of the explain, PG is grossly underestimating the number of\n>items in the pg_trigger and pg_depend tables, and thus choosing an\n>inappropriate nested loop execution.\n>\n>--\n>-Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n---\nMichael \n\n", "msg_date": "Tue, 06 Jan 2004 15:31:29 -0600", "msg_from": "Michael Shapiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PgAdmin startup query VERY slow" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Incidentally, just ANALYZE would probably have fixed your problem.\n\n... or just VACUUM; that would have updated the row count which is all\nthat was really needed here. The main point is that you do have to do\nthat as superuser, since the same commands issued as a non-superuser\nwon't touch the system tables (or any table you do not own).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jan 2004 18:12:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgAdmin startup query VERY slow " } ]
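The fix that worked here, as a short sketch run while connected to the
affected database (the catalog check at the end is an illustrative follow-up,
not taken from the thread):

  -- Must be run as a superuser so the system tables are included:
  VACUUM ANALYZE;

  -- Sanity-check the planner's estimates for the catalogs involved:
  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname IN ('pg_trigger', 'pg_depend');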
[ { "msg_contents": "\nI need to know that original number of rows that WOULD have been returned\nby a SELECT statement if the LIMIT / OFFSET where not present in the \nstatement.\nIs there a way to get this data from PG ?\n\n SELECT\n ... ;\n\n ----> returns 100,000 rows\n\nbut,\n\n SELECT\n ...\n LIMIT x\n OFFSET y;\n\n ----> returns at most x rows\n\nIn order to build a list pager on a web site, I want to select 'pages' of a\nresult set at a time. However, I need to know the original select \nresult set\nsize because I still have to draw the 'page numbers' to display what \npages are\navailable.\n\nI've done this TWO ways in the past:\n\n 1) TWO queries. The first query will perform a SELECT COUNT(*) ...; and\n the second query performs the actualy SELECT ... LIMIT x OFFSET y;\n\n 2) Using PHP row seek and only selecting the number of rows I need.\n\nHere is an example of method number 2 in PHP:\n\n //----------------------------------------------------------------------\n function query_assoc_paged ($sql, $limit=0, $offset=0) {\n $this->num_rows = false;\n\n // open a result set for this query...\n $result = $this->query($sql);\n if (! $result) return (false);\n\n // save the number of rows we are working with\n $this->num_rows = @pg_num_rows($result);\n\n // moves the internal row pointer of the result to point to our\n // desired offset. The next call to pg_fetch_assoc() would return\n // that row.\n if (! empty($offset)) {\n if (! @pg_result_seek($result, $offset)) {\n return (array());\n };\n }\n\n // gather the results together in an array of arrays...\n $data = array();\n while (($row = pg_fetch_assoc($result)) !== false) {\n $data[] = $row;\n \n // After reading N rows from this result set, free our memory\n // and return the rows we fetched...\n if (! empty($limit) && count($data) >= $limit) {\n pg_free_result($result);\n return ($data);\n }\n }\n\n pg_free_result($result);\n return($data);\n }\n\n //----------------------------------------------------------------------\n\nIn this approach, I am 'emulating' the LIMIT / OFFSET features in PostgreSQL\nby just seeking forward in the result set (offset) and only fetching the\nnumber of rows that match my needs (LIMIT).\n\nQUESTION: Is this the best way to do this, or is there a more efficient way\nto get at the data I want? Is there a variable set in PG that tells me the\noriginal number of rows in the query? Something like:\n\n SELECT ORIG_RESULT_SIZE, ...\n ...\n LIMIT x\n OFFSET y;\n\nOr can I run another select right afterwards...like:\n\n SELECT ...\n ...\n LIMIT x\n OFFSET y;\n\n SELECT unfiltered_size_of_last_query();\n\nAny thoughts? Sure, the PHP function I'm using above 'works', but is it\nthe most efficient? I hope I'm not actually pulling all 100,000 records\nacross the wire when I only intend to show 10 at a time. See what I'm\ngetting at?\n\nTIA,\n\nDante\n\n---------\nD. Dante Lorenso\[email protected]\n\n", "msg_date": "Wed, 07 Jan 2004 10:57:38 -0600", "msg_from": "\"D. Dante Lorenso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Find original number of rows before applied LIMIT/OFFSET?" }, { "msg_contents": "\n\"D. Dante Lorenso\" <[email protected]> writes:\n\n> Any thoughts? Sure, the PHP function I'm using above 'works', but is it\n> the most efficient? I hope I'm not actually pulling all 100,000 records\n> across the wire when I only intend to show 10 at a time. See what I'm\n> getting at?\n\nI tend to do it using a separate select count(*). 
My thinking is that the\ncount(*) query can be simplified and exclude things like the ORDER BY clause\nand any select list entries that require extra work. It can often even exclude\nwhole joins.\n\nBy doing a separate query I can do that extra work only for the rows that i\nactually need for display. Hopefully using an index to pull up those rows. And\ndo the count(*) in the most efficient way possible, probably a sequential scan\nwith no joins for foreign keys etc.\n\nBut I suspect the two methods both work out to suck about equally.\n\n-- \ngreg\n\n", "msg_date": "07 Jan 2004 14:41:04 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Find original number of rows before applied\n\tLIMIT/OFFSET?" } ]
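A bare-bones SQL version of the two-query approach discussed above, with
illustrative table and column names:

  -- 1) How many rows the unpaged query would return; per the advice
  --    above, this count can drop the ORDER BY and any expensive
  --    select-list expressions:
  SELECT count(*) FROM orders WHERE customer_id = 42;

  -- 2) Only the page being displayed:
  SELECT *
  FROM orders
  WHERE customer_id = 42
  ORDER BY order_date DESC
  LIMIT 10 OFFSET 30;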
[ { "msg_contents": "Any tips for speeding up index creation?\n\nI need to bulk load a large table with 100M rows and several indexes,\nsome of which span two columns.\n\nBy dropping all indexes prior to issuing the 'copy from' command, the\noperation completes 10x as fast (1.5h vs 15h).\n\nUnfortunately, recreating a single index takes nearly as long as loading\nall of the data into the table; this more or less eliminates the time\ngained by dropping the index in the first place.\n\nAlso, there doesn't seem to be a simple way to disable/recreate all\nindexes for a specific table short of explicitely dropping and later\nrecreating each index?\n\n--\nEric Jain\n\n", "msg_date": "Wed, 7 Jan 2004 18:08:06 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index creation" }, { "msg_contents": "On Wed, 7 Jan 2004 18:08:06 +0100\n\"Eric Jain\" <[email protected]> wrote:\n\n> Any tips for speeding up index creation?\n> \n> I need to bulk load a large table with 100M rows and several indexes,\n> some of which span two columns.\n> \n> By dropping all indexes prior to issuing the 'copy from' command, the\n> operation completes 10x as fast (1.5h vs 15h).\n> \n> Unfortunately, recreating a single index takes nearly as long as\n> loading all of the data into the table; this more or less eliminates\n> the time gained by dropping the index in the first place.\n> \n> Also, there doesn't seem to be a simple way to disable/recreate all\n> indexes for a specific table short of explicitely dropping and later\n> recreating each index?\n\nBefore creating your index bump up your sort_mem high.\n\nset sort_mem = 64000 \ncreate index foo on baz(a, b);\n\nBIG increases.\n[This also helps on FK creation]\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Wed, 7 Jan 2004 12:20:15 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation" }, { "msg_contents": "On Wed, 7 Jan 2004, Eric Jain wrote:\n\n> Any tips for speeding up index creation?\n> \n> I need to bulk load a large table with 100M rows and several indexes,\n> some of which span two columns.\n> \n> By dropping all indexes prior to issuing the 'copy from' command, the\n> operation completes 10x as fast (1.5h vs 15h).\n> \n> Unfortunately, recreating a single index takes nearly as long as loading\n> all of the data into the table; this more or less eliminates the time\n> gained by dropping the index in the first place.\n> \n> Also, there doesn't seem to be a simple way to disable/recreate all\n> indexes for a specific table short of explicitely dropping and later\n> recreating each index?\n\nNote that you can issue the following command to see all the index \ndefinitions for a table:\n\nselect * from pg_indexes where tablename='sometable';\n\nAnd store those elsewhere to be reused when you need to recreate the \nindex.\n\n", "msg_date": "Wed, 7 Jan 2004 12:51:30 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation" } ]
[ { "msg_contents": "\nDoes anyone have any data to support arguing for a particular stripe size in\nRAID-0? Do large stripe sizes allow drives to stream data more efficiently or\ndefeat read-ahead?\n\n-- \ngreg\n\n", "msg_date": "07 Jan 2004 14:45:24 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": true, "msg_subject": "RAID array stripe sizes" } ]
[ { "msg_contents": "Hi all,\n\nChris Browne (one of my colleagues here) has posted some tests in the\npast indicating that jfs may be the fastest filesystem for Postgres\nuse on Linux.\n\nWe have lately had a couple of cases where machines either locked up,\nslowed down to the point of complete unusability, or died completely\nwhile using jfs. We are _not_ sure that jfs is in fact the culprit. \nIn one case, a kernel panic appeared to be referring to the jfs\nkernel module, but I can't be sure as I lost the output immediately\nthereafter. Yesterday, we had a problem of data corruption on a\nfailed jfs volume.\n\nNone of this is to say that jfs is in fact to blame, nor even that,\nif it is, it does not have something to do with the age of our\ninstallations, &c. (these are all RH 8). In fact, I suspect hardware\nin both cases. But I thought I'd mention it just in case other\npeople are seeing strange behaviour, on the principle of \"better\nsafe than sorry.\"\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 7 Jan 2004 18:06:08 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "failures on machines using jfs" }, { "msg_contents": "Andrew,\n\n> None of this is to say that jfs is in fact to blame, nor even that,\n> if it is, it does not have something to do with the age of our\n> installations, &c. (these are all RH 8). In fact, I suspect hardware\n> in both cases. But I thought I'd mention it just in case other\n> people are seeing strange behaviour, on the principle of \"better\n> safe than sorry.\"\n\nAlways useful. Actually, I just fielded on IRC a report of poor I/O \nutilization with XFS during checkpointing. Not sure if the problem is XFS \nor PostgreSQL, but the fact that XFS (alone among filesystems) does its own \ncache management instead of using the kernel cache makes me suspicious.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 8 Jan 2004 10:52:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "When grilled further on (Wed, 7 Jan 2004 18:06:08 -0500),\nAndrew Sullivan <[email protected]> confessed:\n\n> \n> We have lately had a couple of cases where machines either locked up,\n> slowed down to the point of complete unusability, or died completely\n> while using jfs. We are _not_ sure that jfs is in fact the culprit. \n> In one case, a kernel panic appeared to be referring to the jfs\n> kernel module, but I can't be sure as I lost the output immediately\n> thereafter. Yesterday, we had a problem of data corruption on a\n> failed jfs volume.\n> \n> None of this is to say that jfs is in fact to blame, nor even that,\n> if it is, it does not have something to do with the age of our\n> installations, &c. (these are all RH 8). In fact, I suspect hardware\n> in both cases. But I thought I'd mention it just in case other\n> people are seeing strange behaviour, on the principle of \"better\n> safe than sorry.\"\n> \n\nInterestingly enough, I'm using JFS on a new scsi disk with Mandrake 9.1 and\nwas having similar problems. I was generating heavy disk usage through database\nand astronomical data reductions. My machine (dual AMD) would suddenly hang. 
\nNo new jobs would run, just increase the load, until I reboot the machine.\n\nI solved my problems by creating a 128Mb ram disk (using EXT2) for the temp\ndata produced my reduction runs.\n\nI believe JFS was to blame, not hardware, but you never know...\n\nCheers,\nRob\n\n-- \n 20:22:27 up 12 days, 10:13, 4 users, load average: 2.00, 2.01, 2.03", "msg_date": "Fri, 9 Jan 2004 20:28:16 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "[email protected] (Robert Creager) writes:\n> When grilled further on (Wed, 7 Jan 2004 18:06:08 -0500),\n> Andrew Sullivan <[email protected]> confessed:\n>\n>> We have lately had a couple of cases where machines either locked\n>> up, slowed down to the point of complete unusability, or died\n>> completely while using jfs. We are _not_ sure that jfs is in fact\n>> the culprit. In one case, a kernel panic appeared to be referring\n>> to the jfs kernel module, but I can't be sure as I lost the output\n>> immediately thereafter. Yesterday, we had a problem of data\n>> corruption on a failed jfs volume.\n>> \n>> None of this is to say that jfs is in fact to blame, nor even that,\n>> if it is, it does not have something to do with the age of our\n>> installations, &c. (these are all RH 8). In fact, I suspect\n>> hardware in both cases. But I thought I'd mention it just in case\n>> other people are seeing strange behaviour, on the principle of\n>> \"better safe than sorry.\"\n>\n> Interestingly enough, I'm using JFS on a new scsi disk with Mandrake\n> 9.1 and was having similar problems. I was generating heavy disk\n> usage through database and astronomical data reductions. My machine\n> (dual AMD) would suddenly hang. No new jobs would run, just\n> increase the load, until I reboot the machine.\n>\n> I solved my problems by creating a 128Mb ram disk (using EXT2) for\n> the temp data produced my reduction runs.\n>\n> I believe JFS was to blame, not hardware, but you never know...\n\nInteresting.\n\nThe set of concurrent factors that came together to appear when this\nhappened \"consistently\" were thus:\n\n 1. Heavy DB updates taking place on JFS filesystems;\n\n 2. SMP (we suspected Xeon hyperthreading as a possible factor, but\n shut it off and still saw the same problem...)\n\n 3. The third factor that appeared a catalyst was copying, via scp, a\n file > 2GB in size onto the system.\n\nThe third piece was a particularly interesting aspect; the file would\nget copied over successfully, and the scp process would hang (to the\npoint of \"kill -9\" being unable to touch it) immediately thereafter.\n\nAt that point, processes on the system that were accessing files on\nthe hung-up filesystem were locked, also unkillable by \"kill 9.\"\nThat's certainly consistent with JFS being at the root of the problem,\nwhether it was the cause or not...\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Sat, 10 Jan 2004 21:08:50 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" } ]
[ { "msg_contents": "Hi,\n\nWe've set up a little test box (1GHz Athlon, 40G IDE drive, 256M RAM, \nRedhat 9) to do some basic comparisons between postgresql and firebird \n1.0.3 and 1.5rc8. Mostly the results are comparable, with one \nsignificant exception.\n\nQUERY\nselect invheadref, invprodref, sum(units)\nfrom invtran\ngroup by invheadref, invprodref\n\nRESULTS\npg 7.3.4 - 5.5 min\npg 7.4.0 - 10 min\nfb 1.0.3 - 64 sec\nfb 1.5 - 44 sec\n\n* The invtran table has about 2.5 million records, invheadref and \ninvprodref are both char(10) and indexed.\n* shared_buffers = 12000 and sort_mem = 8192 are the only changes I've \nmade to postgresql.conf, with relevant changes to shmall and shmmax.\n\nThis is an explain analyse plan from postgresql 7.4:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------ \n\nGroupAggregate (cost=572484.23..601701.15 rows=1614140 width=39) \n(actual time=500091.171..554203.189 rows=147621 loops=1)\n -> Sort (cost=572484.23..578779.62 rows=2518157 width=39) (actual \ntime=500090.939..527500.940 rows=2521530 loops=1)\n Sort Key: invheadref, invprodref\n -> Seq Scan on invtran (cost=0.00..112014.57 rows=2518157 \nwidth=39) (actual time=16.002..25516.917 rows=2521530 loops=1)\n Total runtime: 554826.827 ms\n(5 rows)\n\nAm I correct in interpreting that most time was spent doing the sorting? \nExplain confuses the heck out of me and any help on how I could make \nthis run faster would be gratefully received.\n\nCheers,\n\nBradley.\n\n", "msg_date": "Thu, 08 Jan 2004 16:52:05 +1100", "msg_from": "Bradley Tate <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query problem" }, { "msg_contents": "On Thu, 08 Jan 2004 16:52:05 +1100\nBradley Tate <[email protected]> wrote:\n> Am I correct in interpreting that most time was spent doing the\n> sorting? \n\nlooks so. your table is about 70MB total size, and its getting loaded\ncompletely into memory (you have 12000 * 8k = 96M available). 26s to\nload 70MB from disk seems reasonable. The rest of the time is used for\nsorting.\n\n> Explain confuses the heck out of me and any help on how I could make \n> this run faster would be gratefully received.\n> \n\nYou should bump sort_mem as high as you can stand. with only 8MB sort\nmemory available, you're swapping intermediate sort pages to disk --\na lot. Try the query with sort_mem set to 75MB (to do the entire sort in\nmemory). \n\n-mike\n\n> Cheers,\n> \n> Bradley.\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 8: explain analyze is your\n> friend\n\n\n-- \nMike Glover\nKey ID BFD19F2C <[email protected]>", "msg_date": "Thu, 8 Jan 2004 19:27:16 -0800", "msg_from": "Mike Glover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "On Thu, Jan 08, 2004 at 19:27:16 -0800,\n Mike Glover <[email protected]> wrote:\n> \n> You should bump sort_mem as high as you can stand. with only 8MB sort\n> memory available, you're swapping intermediate sort pages to disk --\n> a lot. Try the query with sort_mem set to 75MB (to do the entire sort in\n> memory). 
\n\nPostgres also might be able to switch to a hash aggregate instead of\nusing a sort if sortmem is made large enough to hold the results for\nall of the (estimated) groups.\n", "msg_date": "Thu, 8 Jan 2004 22:23:38 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "Mike Glover <[email protected]> writes:\n> You should bump sort_mem as high as you can stand. with only 8MB sort\n> memory available, you're swapping intermediate sort pages to disk --\n> a lot. Try the query with sort_mem set to 75MB (to do the entire sort in\n> memory). \n\n7.4 will probably flip over to a hash-based aggregation method, and not\nsort at all, once you make sort_mem large enough that it thinks the hash\ntable will fit in sort_mem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jan 2004 00:12:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem " }, { "msg_contents": "On Thu, 8 Jan 2004, Bradley Tate wrote:\n\n> We've set up a little test box (1GHz Athlon, 40G IDE drive, 256M RAM, \n> Redhat 9) to do some basic comparisons between postgresql and firebird \n> 1.0.3 and 1.5rc8. Mostly the results are comparable, with one \n> significant exception.\n> \n> QUERY\n> select invheadref, invprodref, sum(units)\n> from invtran\n> group by invheadref, invprodref\n> \n> RESULTS\n> pg 7.3.4 - 5.5 min\n> pg 7.4.0 - 10 min\n> fb 1.0.3 - 64 sec\n> fb 1.5 - 44 sec\n> \n> * The invtran table has about 2.5 million records, invheadref and \n> invprodref are both char(10) and indexed.\n\nFor the above query, shouldn't you have one index for both columns\n(invheadref, invprodref). Then it should not need to sort at all to do the\ngrouping and it should all be fast.\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Fri, 9 Jan 2004 08:29:57 +0100 (CET)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "On Friday 09 January 2004 07:29, Dennis Björklund wrote:\n> On Thu, 8 Jan 2004, Bradley Tate wrote:\n> >\n> > select invheadref, invprodref, sum(units)\n> > from invtran\n> > group by invheadref, invprodref\n\n> For the above query, shouldn't you have one index for both columns\n> (invheadref, invprodref). Then it should not need to sort at all to do the\n> grouping and it should all be fast.\n\nNot sure if that would make a difference here, since the whole table is being \nread. \n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 9 Jan 2004 08:54:46 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "On Fri, 9 Jan 2004, Richard Huxton wrote:\n\n> > > select invheadref, invprodref, sum(units)\n> > > from invtran\n> > > group by invheadref, invprodref\n> \n> > For the above query, shouldn't you have one index for both columns\n> > (invheadref, invprodref). Then it should not need to sort at all to do the\n> > grouping and it should all be fast.\n> \n> Not sure if that would make a difference here, since the whole table is being \n> read. \n\nThe goal was to avoid the sorting which should not be needed with that \nindex (I hope). 
So I still think that it would help in this case.\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Fri, 9 Jan 2004 09:57:09 +0100 (CET)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "On Friday 09 January 2004 08:57, Dennis Björklund wrote:\n> On Fri, 9 Jan 2004, Richard Huxton wrote:\n> > > > select invheadref, invprodref, sum(units)\n> > > > from invtran\n> > > > group by invheadref, invprodref\n> > >\n> > > For the above query, shouldn't you have one index for both columns\n> > > (invheadref, invprodref). Then it should not need to sort at all to do\n> > > the grouping and it should all be fast.\n> >\n> > Not sure if that would make a difference here, since the whole table is\n> > being read.\n>\n> The goal was to avoid the sorting which should not be needed with that\n> index (I hope). So I still think that it would help in this case.\n\nSorry - not being clear. I can see how it _might_ help, but will the planner \ntake into account the fact that even though:\n index-cost > seqscan-cost\nthat\n (index-cost + no-sorting) < (seqscan-cost + sort-cost)\nassuming of course, that the costs turn out that way.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 9 Jan 2004 09:19:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "Dennis Björklund wrote:\n\n>On Fri, 9 Jan 2004, Richard Huxton wrote:\n>\n> \n>\n>>>>select invheadref, invprodref, sum(units)\n>>>>from invtran\n>>>>group by invheadref, invprodref\n>>>> \n>>>>\n>>>For the above query, shouldn't you have one index for both columns\n>>>(invheadref, invprodref). Then it should not need to sort at all to do the\n>>>grouping and it should all be fast.\n>>> \n>>>\n>>Not sure if that would make a difference here, since the whole table is being \n>>read. \n>> \n>>\n>\n>The goal was to avoid the sorting which should not be needed with that \n>index (I hope). So I still think that it would help in this case.\n>\n> \n>\nThanks for the advice. I tried creating a compound index along with \nclustering the invtran table on it, adding another 512MB RAM, increasing \nshared_buffers to 60000 and increasing sort_mem to 100MB, playing with \neffective cache size in postgresql.conf. This cut the execution time \ndown to 4 minutes, which was helpful but still way behind firebird. \nThere was still an awful lot of disk activity while it was happening \nwhich seems to imply lots of sorting going on (?)\n\nInvtran is a big table but it is clustered and static i.e. no updates, \nselect statements only.\n\nMostly my performance problems are with sorts - group by, order by. I \nwas hoping for better results than I've been getting so far.\n\nThanks.\n\np.s.\nCan someone confirm whether this should work from pgadmin3? i.e. will \nthe size of the sort_mem be changed for the duration of the query or \nsession?\n\nset sort_mem to 100000;\nselect ....etc....;\n\n\n\n", "msg_date": "Sat, 10 Jan 2004 01:53:41 +1100", "msg_from": "Bradley Tate <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query problem" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n>> The goal was to avoid the sorting which should not be needed with that\n>> index (I hope). So I still think that it would help in this case.\n\n> Sorry - not being clear. 
I can see how it _might_ help, but will the planner \n> take into account the fact that even though:\n> index-cost > seqscan-cost\n> that\n> (index-cost + no-sorting) < (seqscan-cost + sort-cost)\n\nYes, it would.\n\n> assuming of course, that the costs turn out that way.\n\nThat I'm less sure about. A sort frequently looks cheaper than a full\nindexscan, unless the table is pretty well clustered on that index,\nor you knock random_page_cost way down.\n\nWith no stats at all, CVS tip has these preferences:\n\nregression=# create table fooey (f1 int, f2 int, unique(f1,f2));\nNOTICE: CREATE TABLE / UNIQUE will create implicit index \"fooey_f1_key\" for table \"fooey\"\nCREATE TABLE\nregression=# explain select * from fooey group by f1,f2;\n QUERY PLAN\n---------------------------------------------------------------\n HashAggregate (cost=25.00..25.00 rows=1000 width=8)\n -> Seq Scan on fooey (cost=0.00..20.00 rows=1000 width=8)\n(2 rows)\n\nregression=# set enable_hashagg TO 0;\nSET\nregression=# explain select * from fooey group by f1,f2;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Group (cost=0.00..57.00 rows=1000 width=8)\n -> Index Scan using fooey_f1_key on fooey (cost=0.00..52.00 rows=1000 width=8)\n(2 rows)\n\nregression=# set enable_indexscan TO 0;\nSET\nregression=# explain select * from fooey group by f1,f2;\n QUERY PLAN\n---------------------------------------------------------------------\n Group (cost=69.83..77.33 rows=1000 width=8)\n -> Sort (cost=69.83..72.33 rows=1000 width=8)\n Sort Key: f1, f2\n -> Seq Scan on fooey (cost=0.00..20.00 rows=1000 width=8)\n(4 rows)\n\nbut remember this is for a relatively small (estimated size of) table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jan 2004 10:07:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem " }, { "msg_contents": "On Fri, 9 Jan 2004, Richard Huxton wrote:\n\n> On Friday 09 January 2004 08:57, Dennis Björklund wrote:\n> > On Fri, 9 Jan 2004, Richard Huxton wrote:\n> > > > > select invheadref, invprodref, sum(units)\n> > > > > from invtran\n> > > > > group by invheadref, invprodref\n> > > >\n> > > > For the above query, shouldn't you have one index for both columns\n> > > > (invheadref, invprodref). Then it should not need to sort at all to do\n> > > > the grouping and it should all be fast.\n> > >\n> > > Not sure if that would make a difference here, since the whole table is\n> > > being read.\n> >\n> > The goal was to avoid the sorting which should not be needed with that\n> > index (I hope). So I still think that it would help in this case.\n>\n> Sorry - not being clear. I can see how it _might_ help, but will the planner\n> take into account the fact that even though:\n> index-cost > seqscan-cost\n> that\n> (index-cost + no-sorting) < (seqscan-cost + sort-cost)\n> assuming of course, that the costs turn out that way.\n\nAFAICS, yes it does take that effect into account (as best\nit can with the estimates).\n\n", "msg_date": "Fri, 9 Jan 2004 07:38:01 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query problem" } ]
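
Pulling the advice from the thread above into one place, here is a minimal SQL sketch. It reuses the invtran table and column names from the original post, but the index name and the exact sort_mem figure are made up; sort_mem is in kilobytes and only needs to be large enough for 7.4 to pick a hash aggregate instead of a sort.

CREATE INDEX invtran_head_prod_idx ON invtran (invheadref, invprodref);
CLUSTER invtran_head_prod_idx ON invtran;   -- 7.3/7.4 syntax; physically orders the table by the new index
ANALYZE invtran;

SET sort_mem = 100000;   -- roughly 100MB, for this session only

EXPLAIN ANALYZE
SELECT invheadref, invprodref, sum(units)
FROM invtran
GROUP BY invheadref, invprodref;

Since SET only lasts for the current connection, this also answers the pgadmin3 question at the end of the thread: the new sort_mem value applies to any later statements run on that same connection, and reverts when the session ends.
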
[ { "msg_contents": "Hi,\n\nI searched through the archive and could not find any conclusive\ndiscussion of results on this.\n\nHas anyone compared the disk space usage between PostgreSQL\nand Oracle ?\n\nI am interested in knowing for the same tuple (i.e same\ndictionary), the disk usage between the two.\n\nThanks.\n\nGan\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Thu, 8 Jan 2004 22:19:53 -0600", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL vs. Oracle disk space usage" }, { "msg_contents": "Hi,\n\nFrom the same database based on pgbench (TPC-B), I have noted that the\ndatabase size is twice as large in Postgres 7.3.3 & 7.4 as with Oracle 9i,\nand I don't know why.\nAt first, I thought it was because the filler column (type char(88), char(84) or\nchar(22) depending on the table) was never initialized, so Oracle could\nmark it as NULL and save disk space.\nBut I have modified the data generation to fill the columns with random strings and force\nOracle to store something in them. As a result, both databases got\ntwice as big, but the Postgres database size is still ~= 2 x the Oracle database size.\n\nI have not found any explanation.\n\nRegards,\n\nThierry Missimilly\n\nSeum-Lim Gan wrote:\n\n> Hi,\n>\n> I searched through the archive and could not find any conclusive\n> discussion of results on this.\n>\n> Has anyone compared the disk space usage between PostgreSQL\n> and Oracle ?\n>\n> I am interested in knowing for the same tuple (i.e same\n> dictionary), the disk usage between the two.\n>\n> Thanks.\n>\n> Gan\n> --\n> +--------------------------------------------------------+\n> | Seum-Lim GAN email : [email protected] |\n> | Lucent Technologies |\n> | 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n> | Naperville, IL 60566, USA. fax : (630)-713-7272 |\n> | web : http://inuweb.ih.lucent.com/~slgan |\n> +--------------------------------------------------------+\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly", "msg_date": "Wed, 21 Jan 2004 10:59:46 +0100", "msg_from": "Thierry Missimilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle disk space usage" } ]
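
Neither message shows how the sizes were measured. For the PostgreSQL side of a comparison like this, one rough way to do it on 7.3/7.4 is to read relpages out of pg_class; it counts 8 kB blocks and is only refreshed by commands such as VACUUM and ANALYZE, so update it first. This is only a sketch of a measurement method, not how the original posters measured:

VACUUM ANALYZE;

SELECT relname, relkind, relpages, relpages * 8 AS approx_kb
FROM pg_class
WHERE relkind IN ('r', 'i', 't')   -- tables, indexes, TOAST tables
ORDER BY relpages DESC
LIMIT 20;

Per-row header overhead, index size and any dead tuples left behind by updates all show up in these numbers, which is worth keeping in mind when comparing totals against another engine.
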
[ { "msg_contents": "Hi there,\n\nI am quite new to postgresql, and love the explain feature. It enables \nus to predict which SQL queries need to be optimized before we see any \nproblems. However, I've run into an issue where explain tells us the \ncosts of a query are tremendous (105849017586), but the query actually \nruns quite fast. Even \"explain analyze\" shows these costs.\n\nThis makes me wonder: can the estimates explain shows be dead wrong?\n\nI can explain in more detail (including the query and output of explain) \nif needed. I'm using 7.4 on Solaris 8.\n\nSincerely,\n\n-- \nRichard van den Berg, CISSP\n\nTrust Factory B.V. | http://www.trust-factory.com/\nBazarstraat 44a | Phone: +31 70 3620684\nNL-2518AK The Hague | Fax : +31 70 3603009\nThe Netherlands |\n\nVisit us at Lotusphere 2004 http://www.trust-factory.com/lotusphere\n\n\n", "msg_date": "Fri, 09 Jan 2004 15:12:42 +0100", "msg_from": "Richard van den Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Explain not accurate" }, { "msg_contents": "On Fri, 9 Jan 2004, Richard van den Berg wrote:\n\n> problems. However, I've run into an issue where explain tells us the \n> costs of a query are tremendous (105849017586), but the query actually \n> runs quite fast. Even \"explain analyze\" shows these costs.\n\nIt would be helpful if you can show the query and the EXPLAIN ANALYZE of\nthe query (and not just EXPLAIN).\n\n> This makes me wonder: can the estimates explain shows be dead wrong?\n\nOf course they can. An estimation is just an estimation. If you have not\nanalyzed the database then it's most likely wrong. Dead wrong is not\ncommon, but not impossible.\n\nRun VACUUM ANALYZE and see if the estimate is better after that.\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Sun, 11 Jan 2004 20:55:00 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain not accurate" }, { "msg_contents": "You need to regularly run 'analyze'.\n\nChris\n\nRichard van den Berg wrote:\n> Hi there,\n> \n> I am quite new to postgresql, and love the explain feature. It enables \n> us to predict which SQL queries need to be optimized before we see any \n> problems. However, I've run into an issue where explain tells us the \n> costs of a query are tremendous (105849017586), but the query actually \n> runs quite fast. Even \"explain analyze\" shows these costs.\n> \n> This makes me wonder: can the estimates explain shows be dead wrong?\n> \n> I can explain in more detail (including the query and output of explain) \n> if needed. I'm using 7.4 on Solaris 8.\n> \n> Sincerely,\n> \n", "msg_date": "Mon, 12 Jan 2004 08:24:00 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain not accurate" }, { "msg_contents": "\nRichard van den Berg <[email protected]> writes:\n\n> Hi there,\n> \n> I am quite new to postgresql, and love the explain feature. It enables us to\n> predict which SQL queries need to be optimized before we see any problems.\n> However, I've run into an issue where explain tells us the costs of a query\n> are tremendous (105849017586), but the query actually runs quite fast. Even\n> \"explain analyze\" shows these costs.\n\nDo you have any of the optimization parameters off, enable_seqscan perhaps?\n\nenable_seqscan works by penalizing plans that use sequential scans, but there\nare still lots of queries that cannot be done any other way. 
I'm not sure\nwhether the same holds for all the other parameters.\n\nIf your tables are all going to grow drastically then this may still indicate\na problem, probably a missing index. But if one of them is a reference table\nthat will never grow then perhaps the index will never be necessary.\n\n\nOr perhaps you just need to run analyze. Send the \"EXPLAIN ANALYZE\" output for\nthe query for starters. You might also send the output of \"SHOW ALL\".\n\n-- \ngreg\n\n", "msg_date": "11 Jan 2004 21:55:26 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain not accurate" } ]
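
The replies above boil down to two checks, sketched below with a made-up table and column name. The first EXPLAIN ANALYZE shows how far the estimated row counts are from the actual ones, ANALYZE refreshes the planner statistics, and the SHOW commands reveal whether a planner setting (such as enable_seqscan = off) is inflating the cost numbers.

EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_col = 42;

ANALYZE big_table;   -- or VACUUM ANALYZE for the whole database

EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_col = 42;

SHOW enable_seqscan;
SHOW random_page_cost;

If the estimated and actual row counts still diverge badly after ANALYZE, the full EXPLAIN ANALYZE output plus SHOW ALL is what the list asked for.
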
[ { "msg_contents": "I'm running PostgreSQL 7.4 on a quad Xeon attached to a\nbeefy disk array. However, I am beginning to wonder if this is\na waste of CPU power.\n\nI think I read somewhere that PostgreSQL is NOT multi-threaded.\nBut, will it be able to take advantage of multiple CPUs? Will\nI have to run separate postmaster instances to get the advantage?\n\nI'm not running a high load on that machine yet, so I can't tell\nif the load is being balanced across the CPUs. I expect that as\nsome of the newly launched sites grow it will require more resources\nbut maybe some of you could share your results of this type of\ndeployment setup.\n\nDante\n\n", "msg_date": "Sun, 11 Jan 2004 09:53:38 -0600", "msg_from": "\"D. Dante Lorenso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql on Quad CPU machine" }, { "msg_contents": "\"D. Dante Lorenso\" <[email protected]> writes:\n\n> I'm running PostgreSQL 7.4 on a quad Xeon attached to a\n> beefy disk array. However, I am beginning to wonder if this is\n> a waste of CPU power.\n>\n> I think I read somewhere that PostgreSQL is NOT multi-threaded.\n> But, will it be able to take advantage of multiple CPUs? Will\n> I have to run separate postmaster instances to get the advantage?\n\nPG uses a separate backend process for each connection, so if you have\nmultiple simultaneous connections they will use different CPUs.\nSingle queries will not be split across CPUs.\n\n-Doug\n\n", "msg_date": "Sun, 11 Jan 2004 11:07:25 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql on Quad CPU machine" } ]
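
One way to see the one-process-per-connection model in action is to open several client connections and query pg_stat_activity, which has one row per backend; in the 7.4-era catalogs the columns below exist, though current_query is only populated when stats_command_string is enabled in postgresql.conf.

SELECT procpid, usename, current_query
FROM pg_stat_activity;

Each procpid is an ordinary operating-system process, so top or ps on the server shows the same PIDs as separate postmaster children that the kernel can schedule onto any of the four CPUs.
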
[ { "msg_contents": "It would seem we're experiencing somthing similiar with our scratch\nvolume (JFS mounted with noatime). It is still much faster than our\nexperiments with ext2, ext3, and reiserfs but occasionally during\nlarge loads it will hiccup for a couple seconds but no crashes yet.\n\nI'm reluctant to switch back to any other file system because the\ndata import took a little over 1.5 hours but now takes just under\n20 minutes and we haven't crashed yet.\n\nFor future reference:\n\n RedHat 7.3 w/2.4.18-18.7smp\n PostgreSQL 7.3.3 from source\n jfsutils 1.0.17-1\n Dual PIII Intel 1.4GHz & 2GB ECC\n Internal disk: 2xU160 SCSI, mirrored, location of our JFS file system\n External disk Qlogic 2310 attached to FC-SW @2Gbps with ext3 on those LUNs\n\nGreg\n\n\n-----Original Message-----\nFrom: Christopher Browne\nTo: [email protected]\nSent: 1/10/04 9:08 PM\nSubject: Re: [PERFORM] failures on machines using jfs\n\[email protected] (Robert Creager) writes:\n> When grilled further on (Wed, 7 Jan 2004 18:06:08 -0500),\n> Andrew Sullivan <[email protected]> confessed:\n>\n>> We have lately had a couple of cases where machines either locked\n>> up, slowed down to the point of complete unusability, or died\n>> completely while using jfs. We are _not_ sure that jfs is in fact\n>> the culprit. In one case, a kernel panic appeared to be referring\n>> to the jfs kernel module, but I can't be sure as I lost the output\n>> immediately thereafter. Yesterday, we had a problem of data\n>> corruption on a failed jfs volume.\n>> \n>> None of this is to say that jfs is in fact to blame, nor even that,\n>> if it is, it does not have something to do with the age of our\n>> installations, &c. (these are all RH 8). In fact, I suspect\n>> hardware in both cases. But I thought I'd mention it just in case\n>> other people are seeing strange behaviour, on the principle of\n>> \"better safe than sorry.\"\n>\n> Interestingly enough, I'm using JFS on a new scsi disk with Mandrake\n> 9.1 and was having similar problems. I was generating heavy disk\n> usage through database and astronomical data reductions. My machine\n> (dual AMD) would suddenly hang. No new jobs would run, just\n> increase the load, until I reboot the machine.\n>\n> I solved my problems by creating a 128Mb ram disk (using EXT2) for\n> the temp data produced my reduction runs.\n>\n> I believe JFS was to blame, not hardware, but you never know...\n\nInteresting.\n\nThe set of concurrent factors that came together to appear when this\nhappened \"consistently\" were thus:\n\n 1. Heavy DB updates taking place on JFS filesystems;\n\n 2. SMP (we suspected Xeon hyperthreading as a possible factor, but\n shut it off and still saw the same problem...)\n\n 3. 
The third factor that appeared a catalyst was copying, via scp, a\n file > 2GB in size onto the system.\n\nThe third piece was a particularly interesting aspect; the file would\nget copied over successfully, and the scp process would hang (to the\npoint of \"kill -9\" being unable to touch it) immediately thereafter.\n\nAt that point, processes on the system that were accessing files on\nthe hung-up filesystem were locked, also unkillable by \"kill 9.\"\nThat's certainly consistent with JFS being at the root of the problem,\nwhether it was the cause or not...\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\"\n[name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n", "msg_date": "Sun, 11 Jan 2004 11:21:14 -0500", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "\"Spiegelberg, Greg\" <[email protected]> writes:\n> PostgreSQL 7.3.3 from source\n\n*Please* update to 7.3.4 or 7.3.5 before you get bitten by the\nWAL-page-boundary bug ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jan 2004 12:04:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs " }, { "msg_contents": "Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:\n> It would seem we're experiencing somthing similiar with our scratch\n> volume (JFS mounted with noatime).\n\nWhich files/directories do you keep on \"scratch\" volume ?\n\nAll postgres files or just some (WAL, tmp) ?\n\n-------------\nHannu\n\n", "msg_date": "Mon, 12 Jan 2004 18:12:09 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "Hannu Krosing wrote:\n> Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:\n> \n>>It would seem we're experiencing somthing similiar with our scratch\n>>volume (JFS mounted with noatime).\n> \n> \n> Which files/directories do you keep on \"scratch\" volume ?\n> \n> All postgres files or just some (WAL, tmp) ?\n\nNo Postgres files are kept in scratch only the files being loaded\ninto the database via COPY or lo_import.\n\nMy WAL logs are kept on a separate ext3 file system.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. 
If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n", "msg_date": "Mon, 12 Jan 2004 12:03:49 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:\n> Hannu Krosing wrote:\n> > Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:\n> > \n> >>It would seem we're experiencing somthing similiar with our scratch\n> >>volume (JFS mounted with noatime).\n> > \n> > \n> > Which files/directories do you keep on \"scratch\" volume ?\n> > \n> > All postgres files or just some (WAL, tmp) ?\n> \n> No Postgres files are kept in scratch only the files being loaded\n> into the database via COPY or lo_import.\n\nthen the speedup does not make any sense !\n\nIs reading from jfs filesystem also 5 times faster than reading from\next3 ?\n\nThe only explanation I can give to filling database from jfs volume to\nbe so much faster could be some strange filesystem cache interactions.\n\n----------------\nHannu\n\n\n", "msg_date": "Tue, 13 Jan 2004 14:46:08 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "Hannu Krosing wrote:\n> Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:\n> \n>>Hannu Krosing wrote:\n>>\n>>>Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:\n>>>\n>>>\n>>>>It would seem we're experiencing somthing similiar with our scratch\n>>>>volume (JFS mounted with noatime).\n>>>\n>>>\n>>>Which files/directories do you keep on \"scratch\" volume ?\n>>>\n>>>All postgres files or just some (WAL, tmp) ?\n>>\n>>No Postgres files are kept in scratch only the files being loaded\n>>into the database via COPY or lo_import.\n> \n> \n> then the speedup does not make any sense !\n> \n> Is reading from jfs filesystem also 5 times faster than reading from\n> ext3 ?\n> \n> The only explanation I can give to filling database from jfs volume to\n> be so much faster could be some strange filesystem cache interactions.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 13 Jan 2004 07:59:08 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" }, { "msg_contents": "Hannu Krosing wrote:\n> Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:\n> \n>>Hannu Krosing wrote:\n>>\n>>>Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:\n>>>\n>>>\n>>>>It would seem we're experiencing somthing similiar with our scratch\n>>>>volume (JFS mounted with noatime).\n>>>\n>>>\n>>>Which files/directories do you keep on \"scratch\" volume ?\n>>>\n>>>All postgres files or just some (WAL, tmp) ?\n>>\n>>No Postgres files are kept in scratch only the files being loaded\n>>into the database via COPY or lo_import.\n> \n> \n> then the speedup does not make any sense !\n\nWe do a lot of preprocessing before the data gets loaded. It's that\nprocess that experiences the hiccups I mentioned.\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. 
Focus.\n\n\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n", "msg_date": "Tue, 13 Jan 2004 10:04:20 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failures on machines using jfs" } ]
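
For reference, the kind of bulk load being run from the scratch volume looks roughly like the sketch below. The path, table name and delimiter are invented, and the file has to be readable by the server process, since COPY FROM a filename is executed on the server side; loading into a table with few indexes and constraints and creating them afterwards generally keeps such a load fast.

COPY staging_data FROM '/scratch/loads/staging_data.dat' WITH DELIMITER '|';

-- large objects go through lo_import instead:
SELECT lo_import('/scratch/loads/blob_0001.bin');
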